Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage
ERIC Educational Resources Information Center
Galyardt, April
2012-01-01
This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
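The failure mode described above, treating clustered neurons as independent, can be illustrated with a small stdlib-only simulation (all parameters below are hypothetical, not from the study):

```python
import random
import statistics

random.seed(42)

# Hypothetical Sholl-style data: 8 animals, 10 neurons per animal.
# A shared per-animal effect induces intra-class correlation among
# neurons sampled from the same animal.
n_animals, n_neurons = 8, 10
data = []  # (animal_id, intersection_count)
for a in range(n_animals):
    animal_effect = random.gauss(0, 2.0)   # between-animal SD = 2.0
    for _ in range(n_neurons):
        data.append((a, 20 + animal_effect + random.gauss(0, 1.0)))

values = [y for _, y in data]

# A simple linear model treats all 80 neurons as independent,
# so its standard error shrinks with the number of neurons.
naive_se = statistics.stdev(values) / len(values) ** 0.5

# Treating the animal as the independent unit (the idea a mixed
# model formalizes) gives a larger, more honest standard error.
animal_means = [statistics.mean([y for aid, y in data if aid == i])
                for i in range(n_animals)]
cluster_se = statistics.stdev(animal_means) / n_animals ** 0.5

print(f"naive SE: {naive_se:.3f}, cluster-aware SE: {cluster_se:.3f}")
```

A real analysis would fit a mixed effects model with a random intercept per animal (e.g., `lme4::lmer` in R or `MixedLM` in Python's statsmodels); the contrast above only shows why the naive standard error is biased downwards.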
USDA-ARS's Scientific Manuscript database
Transformations to multiple trait mixed model equations (MME) which are intended to improve computational efficiency in best linear unbiased prediction (BLUP) and restricted maximum likelihood (REML) are described. It is shown that traits that are expected or estimated to have zero residual variance...
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
Effects of mixing states on the multiple-scattering properties of soot aerosols.
Cheng, Tianhai; Wu, Yu; Gu, Xingfa; Chen, Hao
2015-04-20
The radiative properties of soot aerosols are highly sensitive to the mixing states of black carbon particles and other aerosol components. Light absorption properties are enhanced by the mixing state of soot aerosols. Quantification of the effects of mixing states on the scattering properties of soot aerosols is still not completely resolved, especially for multiple-scattering properties. This study focuses on the effects of the mixing state on the multiple scattering of soot aerosols using the vector radiative transfer model. Two types of soot aerosols with different mixing states, external mixtures and internal mixtures, are studied. Upward radiance/polarization and hemispheric flux are studied with variable soot aerosol loadings for clear and haze scenarios. Our study showed dramatic changes in upward radiance/polarization due to the effects of the mixing state on the multiple scattering of soot aerosols. The relative difference in upward radiance due to the different mixing states can reach 16%, whereas the relative difference of upward polarization can reach 200%. The effects of the mixing state on the multiple-scattering properties of soot aerosols increase with increasing soot aerosol loading. The effects of the soot aerosol mixing state on upwelling hemispheric flux are much smaller than those on upward radiance/polarization, and increase with increasing solar zenith angle. The relative difference in upwelling hemispheric flux due to the different soot aerosol mixing states can reach 18% when the solar zenith angle is 75°. The findings should improve our understanding of the effects of mixing states on the optical properties of soot aerosols and their effects on climate. The mixing mechanism of soot aerosols is of critical importance in evaluating the climate effects of soot aerosols, which should be explicitly included in radiative forcing models and aerosol remote sensing.
NASA Astrophysics Data System (ADS)
Lu, Guoping; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.
2008-12-01
The standard dual-component and two-member linear mixing model is often used to quantify water mixing of different sources. However, it is no longer applicable whenever actual mixture concentrations are not exactly known because of dilution. For example, low-water-content (low-porosity) rock samples are leached for pore-water chemical compositions, which therefore are diluted in the leachates. A multicomponent, two-member mixing model of dilution has been developed to quantify mixing of water sources and multiple chemical components experiencing dilution in leaching. This extended mixing model was used to quantify fracture-matrix interaction in construction-water migration tests along the Exploratory Studies Facility (ESF) tunnel at Yucca Mountain, Nevada, USA. The model effectively recovers the spatial distribution of water and chemical compositions released from the construction water, and provides invaluable data on the matrix fracture interaction. The methodology and formulations described here are applicable to many sorts of mixing-dilution problems, including dilution in petroleum reservoirs, hydrospheres, chemical constituents in rocks and minerals, monitoring of drilling fluids, and leaching, as well as to environmental science studies.
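The extended model's core idea can be sketched in a few lines (this is a stdlib-only illustration, not the authors' implementation; all concentrations are made up). Write each observed leachate concentration as a diluted two-member mixture, obs_j = d·(f·A_j + (1−f)·B_j); substituting p = d·f and q = d·(1−f) makes the problem linear, so p and q follow from least squares over the multiple components, and then d = p + q and f = p/(p + q):

```python
def fit_dilution_mixing(A, B, obs):
    """Least-squares fit of obs_j ~ p*A_j + q*B_j over components j,
    where p = d*f and q = d*(1 - f); returns (f, d).
    A, B: end-member concentrations; obs: diluted mixture concentrations."""
    S_aa = sum(a * a for a in A)
    S_bb = sum(b * b for b in B)
    S_ab = sum(a * b for a, b in zip(A, B))
    S_ao = sum(a * o for a, o in zip(A, obs))
    S_bo = sum(b * o for b, o in zip(B, obs))
    det = S_aa * S_bb - S_ab * S_ab      # normal-equation determinant
    p = (S_bb * S_ao - S_ab * S_bo) / det
    q = (S_aa * S_bo - S_ab * S_ao) / det
    d = p + q                            # dilution factor
    f = p / d                            # mixing fraction of end-member A
    return f, d

# Synthetic check: 30% end-member A, 70% end-member B, diluted 2.5x (d = 0.4).
A = [10.0, 2.0, 50.0, 0.5]   # illustrative multicomponent concentrations
B = [1.0, 8.0, 5.0, 3.0]
f_true, d_true = 0.3, 0.4
obs = [d_true * (f_true * a + (1 - f_true) * b) for a, b in zip(A, B)]
f_hat, d_hat = fit_dilution_mixing(A, B, obs)
print(f_hat, d_hat)
```

With noise-free synthetic data the fit recovers the true mixing fraction and dilution factor exactly; with real leachate data the least-squares residuals indicate how well the two-member assumption holds.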
The Performance of IRT Model Selection Methods with Mixed-Format Tests
ERIC Educational Resources Information Center
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.
2012-01-01
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.; Tanner, J. A.
1984-01-01
An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.
NASA Technical Reports Server (NTRS)
Bauer, Susanne E.; Ault, Andrew; Prather, Kimberly A.
2013-01-01
Aerosol particles in the atmosphere are composed of multiple chemical species. The aerosol mixing state, which describes how chemical species are mixed at the single-particle level, provides critical information on microphysical characteristics that determine the interaction of aerosols with the climate system. The evaluation of mixing state has become the next challenge. This study uses aerosol time-of-flight mass spectrometry (ATOFMS) data and compares the results to those of the Goddard Institute for Space Studies modelE-MATRIX (Multiconfiguration Aerosol TRacker of mIXing state) model, a global climate model that includes a detailed aerosol microphysical scheme. We use data from field campaigns that examine a variety of air mass regimes (urban, rural, and maritime). At all locations, polluted areas in California (Riverside, La Jolla, and Long Beach), a remote location in the Sierra Nevada Mountains (Sugar Pine), and observations from Jeju (South Korea), the majority of aerosol species are internally mixed. Coarse aerosol particles, those above 1 micron, are typically aged, such as coated dust or reacted sea-salt particles. Particles below 1 micron contain large fractions of organic material, internally mixed with sulfate and black carbon, and few external mixtures. We conclude that observations taken over multiple weeks characterize typical air mass types at a given location well; however, due to the instrumentation, we could not evaluate mass budgets. These results represent the first detailed comparison of single-particle mixing states in a global climate model with real-time single-particle mass spectrometry data, an important step in improving the representation of mixing state in global climate models.
Selection of latent variables for multiple mixed-outcome models
ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI
2014-01-01
Latent variable models have been widely used for modeling the dependence structure of multiple-outcome data. However, the formulation of a latent variable model is often unknown a priori, and misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes with varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them. PMID:27642219
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
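The standard dual-isotope, three-source linear mixing model mentioned above amounts to solving a 3×3 linear system: one mass-balance equation per isotope plus the constraint that the source fractions sum to one. A stdlib-only sketch (the isotopic signatures and mixture values are made up for illustration):

```python
def solve3(M, b):
    """Solve a 3x3 linear system M f = b by Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det3(M)
    fractions = []
    for j in range(3):
        Mj = [row[:] for row in M]      # replace column j with b
        for i in range(3):
            Mj[i][j] = b[i]
        fractions.append(det3(Mj) / D)
    return fractions

# Illustrative d13C / d15N signatures for three food sources A, B, C.
d13C = [-12.0, -26.0, -20.0]
d15N = [6.0, 10.0, 2.0]
mix = [-17.8, 6.4]                      # consumer (mixture) signatures

M = [d13C, d15N, [1.0, 1.0, 1.0]]       # last row: fractions sum to 1
b = [mix[0], mix[1], 1.0]
fA, fB, fC = solve3(M, b)
print(fA, fB, fC)                       # each source's dietary fraction
```

With three sources and two tracers the system is exactly determined; with more sources than equations it becomes underdetermined, which is one motivation for the Bayesian mixing models (e.g., MixSIR, SIAR) covered elsewhere in these records.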
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structures, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.
2015-01-01
Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…
Unifying error structures in commonly used biotracer mixing models.
Stock, Brian C; Semmens, Brice X
2016-10-01
Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...
NASA Astrophysics Data System (ADS)
Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.
2014-06-01
A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), the action of internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, K. N.; Takano, Y.; He, Cenlin
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2–5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo reduces more in the case of multiple inclusion of BC/dust compared to that of an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountains/snow topography.
Wang, Yuanjia; Chen, Huaihou
2012-01-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801
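The computational gain can be sketched as follows: once the eigenvalues of the relevant quadratic form are in hand, the null distribution of the statistic is a weighted mixture of independent chi-squares, which can be sampled directly instead of refitting the model in every bootstrap replicate. The eigenvalues below are illustrative, not from the paper:

```python
import random

random.seed(7)

def sample_weighted_chisq(eigenvalues, n_draws):
    """Draw from sum_i lambda_i * chi2_1, the null law implied by a
    spectral decomposition of a residual quadratic form."""
    return [sum(lam * random.gauss(0, 1) ** 2 for lam in eigenvalues)
            for _ in range(n_draws)]

def p_value(observed, null_draws):
    """Monte Carlo p-value: fraction of null draws >= observed."""
    return sum(d >= observed for d in null_draws) / len(null_draws)

# Illustrative eigenvalues; each fitted model would supply its own.
lams = [2.5, 1.0, 0.4, 0.1]
null = sample_weighted_chisq(lams, 50_000)
print(p_value(10.0, null))
```

Sampling from the mixture is trivially cheap per draw, which is why it scales to genome-wide multiplicity where a refit-per-replicate bootstrap does not.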
Coding response to a case-mix measurement system based on multiple diagnoses.
Preyra, Colin
2004-08-01
To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.
Stable Isotope Mixing Models as a Tool for Tracking Sources of Water and Water Pollutants
One goal of monitoring pollutants is to be able to trace the pollutant to its source. Here we review how mixing models using stable isotope information on water and water pollutants can help accomplish this goal. A number of elements exist in multiple stable (non-radioactive) i...
A mixed integer program to model spatial wildfire behavior and suppression placement decisions
Erin J. Belval; Yu Wei; Michael Bevers
2015-01-01
Wildfire suppression combines multiple objectives and dynamic fire behavior to form a complex problem for decision makers. This paper presents a mixed integer program designed to explore integrating spatial fire behavior and suppression placement decisions into a mathematical programming framework. Fire behavior and suppression placement decisions are modeled using...
“SNP Snappy”: A Strategy for Fast Genome-Wide Association Studies Fitting a Full Mixed Model
Meyer, Karin; Tier, Bruce
2012-01-01
A strategy to reduce computational demands of genome-wide association studies fitting a mixed model is presented. Improvements are achieved by utilizing a large proportion of calculations that remain constant across the multiple analyses for individual markers involved, with estimates obtained without inverting large matrices. PMID:22021386
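The general idea of reusing work that stays constant across per-marker analyses can be sketched as follows (a generic GLS illustration under made-up numbers, not the authors' algorithm): factor the phenotypic covariance matrix once, whiten the phenotype once, and then each marker test costs only a triangular solve and a few dot products, with no large matrix inverted per marker.

```python
def cholesky(V):
    """Lower-triangular L with L L^T = V (V symmetric positive definite)."""
    n = len(V)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (V[i][i] - s) ** 0.5
            else:
                L[i][j] = (V[i][j] - s) / L[j][j]
    return L

def forward_solve(L, b):
    """Solve L z = b for lower-triangular L (i.e., z = L^-1 b)."""
    z = []
    for i in range(len(b)):
        z.append((b[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i])
    return z

# Covariance structure (e.g., from kinship/relatedness): factor it ONCE.
V = [[2.0, 0.5, 0.2],
     [0.5, 2.0, 0.5],
     [0.2, 0.5, 2.0]]
y = [1.0, 2.0, 1.5]           # phenotype
L = cholesky(V)
y_w = forward_solve(L, y)     # whitened phenotype, reused for every marker

# Per-marker work is now cheap: whiten the genotype vector and regress.
for x in ([0.0, 1.0, 2.0], [1.0, 1.0, 0.0]):   # illustrative markers
    x_w = forward_solve(L, x)
    beta = (sum(a * b for a, b in zip(x_w, y_w))
            / sum(a * a for a in x_w))          # GLS slope for this marker
    print(beta)
```

Since x'V⁻¹y = (L⁻¹x)·(L⁻¹y), the whitened dot products reproduce the GLS estimate while the expensive factorization is amortized over all markers.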
Genomic-based multiple-trait evaluation in Eucalyptus grandis using dominant DArT markers.
Cappa, Eduardo P; El-Kassaby, Yousry A; Muñoz, Facundo; Garcia, Martín N; Villalba, Pamela V; Klápště, Jaroslav; Marcucci Poltri, Susana N
2018-06-01
We investigated the impact of combining the pedigree- and genomic-based relationship matrices in a multiple-trait individual-tree mixed model (a.k.a., multiple-trait combined approach) on the estimates of heritability and on the genomic correlations between growth and stem straightness in an open-pollinated Eucalyptus grandis population. Additionally, the added advantage of incorporating genomic information on the theoretical accuracies of parents and offspring breeding values was evaluated. Our results suggested that the use of the combined approach for estimating heritabilities and additive genetic correlations in multiple-trait evaluations is advantageous and including genomic information increases the expected accuracy of breeding values. Furthermore, the multiple-trait combined approach was proven to be superior to the single-trait combined approach in predicting breeding values, in particular for low-heritability traits. Finally, our results advocate the use of the combined approach in forest tree progeny testing trials, specifically when a multiple-trait individual-tree mixed model is considered. Copyright © 2018 Elsevier B.V. All rights reserved.
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
ERIC Educational Resources Information Center
Han, Kyung T.; Rudner, Lawrence M.
2014-01-01
This study uses mixed integer quadratic programming (MIQP) to construct multiple highly equivalent item pools simultaneously, and compares the results from mixed integer programming (MIP). Three different MIP/MIQP models were implemented and evaluated using real CAT item pool data with 23 different content areas and a goal of equal information…
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
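For intuition, the Gauss-Hermite method mentioned above approximates the intractable integral over a random effect with a weighted sum at fixed nodes. A minimal random-intercept logistic example using the classical 5-point Gauss-Hermite rule (the nodes and weights are the standard tabulated constants; the cluster data and parameter values are made up):

```python
import math

# Classical 5-point Gauss-Hermite nodes and weights (weight exp(-x^2)).
GH_NODES = [-2.0201828704560856, -0.9585724646138185, 0.0,
            0.9585724646138185, 2.0201828704560856]
GH_WEIGHTS = [0.019953242059045913, 0.3936193231522412, 0.9453087204829419,
              0.3936193231522412, 0.019953242059045913]

def logistic(eta):
    return 1.0 / (1.0 + math.exp(-eta))

def cluster_likelihood(y, beta0, sigma):
    """Marginal likelihood of one cluster's binary responses y under a
    random-intercept logistic model, integrating the N(0, sigma^2)
    intercept out by 5-point Gauss-Hermite quadrature."""
    total = 0.0
    for x, w in zip(GH_NODES, GH_WEIGHTS):
        b = math.sqrt(2.0) * sigma * x   # change of variables for N(0, s^2)
        lik = 1.0
        for yi in y:
            p = logistic(beta0 + b)
            lik *= p if yi == 1 else 1.0 - p
        total += w * lik
    return total / math.sqrt(math.pi)

# Made-up cluster: 4 binary outcomes, fixed intercept 0.2, random SD 1.0.
print(cluster_likelihood([1, 0, 1, 1], beta0=0.2, sigma=1.0))
```

An estimation routine would maximize the sum of log cluster likelihoods over beta0 and sigma; with multiple correlated random effects the quadrature grid grows exponentially in the number of effects, which is why convergence and cost become the issues discussed in the abstract.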
Transport theory and the WKB approximation for interplanetary MHD fluctuations
NASA Technical Reports Server (NTRS)
Matthaeus, William H.; Zhou, YE; Zank, G. P.; Oughton, S.
1994-01-01
An alternative approach, based on a multiple scale analysis, is presented in order to reconcile the traditional Wentzel-Kramers-Brillouin (WKB) approach to the modeling of interplanetary fluctuations in a mildly inhomogeneous large-scale flow with a more recently developed transport theory. This enables us to compare directly, at a formal level, the inherent structure of the two models. In the case of noninteracting, incompressible (Alfvén) waves, the principal difference between the two models is the presence of leading-order couplings (called 'mixing effects') in the non-WKB turbulence model which are absent in a WKB development. Within the context of linearized MHD, two cases have been identified for which the leading order non-WKB 'mixing term' does not vanish at zero wavelength. For these cases the WKB expansion is divergent, whereas the multiple-scale theory is well behaved. We have thus established that the WKB results are contained within the multiple-scale theory, but leading order mixing effects, which are likely to have important observational consequences, can never be recovered in the WKB style expansion. Properties of the higher-order terms in each expansion are also discussed, leading to the conclusion that the non-WKB hierarchy may be applicable even when the scale separation parameter is not small.
2008-03-01
multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space
Koerner, Tess K.; Zhang, Yang
2017-01-01
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422
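The within-subject dependence that a pooled Pearson test ignores can be quantified by the intraclass correlation. The following numpy-only sketch (not from the study; it assumes a balanced design with the same number of measurements per subject) estimates it from one-way ANOVA mean squares:

```python
import numpy as np

def intraclass_correlation(groups):
    """ICC(1) from one-way ANOVA: the proportion of total variance
    attributable to between-subject differences, i.e. the clustering
    that simple correlation tests on pooled repeated measures ignore.

    groups: list of per-subject measurement lists, all the same length."""
    k = len(groups[0])                              # measurements per subject
    means = np.array([np.mean(g) for g in groups])
    grand = np.mean(means)
    msb = k * np.sum((means - grand) ** 2) / (len(groups) - 1)  # between-subject MS
    msw = np.mean([np.var(g, ddof=1) for g in groups])          # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)
```

A high ICC signals that a mixed-effects model with a subject-level random effect is needed; treating the repeated measures as independent would understate the standard errors.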
A mixed-effects regression model for longitudinal multivariate ordinal data.
Liu, Li C; Hedeker, Donald
2006-03-01
A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.
Fast Mix Table Construction for Material Discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Seth R
2013-01-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a "mix table," which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in $O(\text{number of voxels} \times \log \text{number of mixtures})$ time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
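The core deduplication idea can be sketched in a few lines (this is my own illustration, not the ADVANTG algorithm: it canonicalizes each voxel's composition and uses a hash map, whereas the paper's $O(V \log M)$ bound suggests an ordered structure):

```python
def build_mix_table(voxel_fractions, tol=1e-6):
    """Map each voxel's material volume fractions to a shared mixture ID.

    voxel_fractions: list of dicts {material_id: volume_fraction}.
    Returns (mix_table, voxel_to_mixture), where mix_table[i] is the
    canonical composition of mixture i."""
    mix_table = []
    index = {}             # canonical composition key -> mixture id
    voxel_to_mixture = []
    for fracs in voxel_fractions:
        total = sum(fracs.values())
        # Normalize and quantize so near-identical compositions share a key.
        key = tuple(sorted((m, round(v / total / tol) * tol)
                           for m, v in fracs.items() if v > 0))
        if key not in index:
            index[key] = len(mix_table)
            mix_table.append(dict(key))
        voxel_to_mixture.append(index[key])
    return mix_table, voxel_to_mixture
```

Voxels whose compositions agree to within the quantization tolerance reference the same mixture entry, which is what keeps the table sparse.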
Magnetic properties of checkerboard lattice: a Monte Carlo study
NASA Astrophysics Data System (ADS)
Jabar, A.; Masrour, R.; Hamedoun, M.; Benyoussef, A.
2017-12-01
The magnetic properties of a ferrimagnetic mixed-spin Ising model on the checkerboard lattice are studied using Monte Carlo simulations. The variation of the total magnetization and magnetic susceptibility with the crystal field has been established. We obtain a transition from an ordered to a disordered phase at a critical value of the physical variables. The reduced transition temperature is obtained for different exchange interactions. The magnetic hysteresis cycles have been established, and multiple hysteresis cycles in the checkerboard lattice are obtained. The ferrimagnetic mixed-spin Ising model on the checkerboard lattice is very interesting from the experimental point of view. Mixed-spin systems have many technological applications, such as in opto-electronics, memory devices, nanomedicine and nano-biological systems. The obtained results show that the crystal field induces long-range spin-spin correlations even below the reduced transition temperature.
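A single Metropolis sweep for such a model can be sketched as follows. This is an illustrative assumption, not the authors' code: I take a common mixed-spin choice (sigma = ±1/2 on one sublattice, S in {-1, 0, +1} on the other) with Hamiltonian H = -J Σ σS - D Σ S², where D is the crystal field:

```python
import math
import random

def metropolis_sweep(lattice, J, D, T):
    """One Metropolis sweep of a mixed-spin Ising model on an L x L
    square lattice: sublattice (i+j even) holds sigma = +/-1/2 spins,
    sublattice (i+j odd) holds S in {-1, 0, +1} spins."""
    L = len(lattice)
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        s = lattice[i][j]
        nn = (lattice[(i + 1) % L][j] + lattice[(i - 1) % L][j] +
              lattice[i][(j + 1) % L] + lattice[i][(j - 1) % L])
        if (i + j) % 2 == 0:                 # sigma = +/-1/2 sublattice
            new = -s
            dE = -J * (new - s) * nn
        else:                                # S in {-1, 0, +1} sublattice
            new = random.choice([v for v in (-1, 0, 1) if v != s])
            dE = -J * (new - s) * nn - D * (new ** 2 - s ** 2)
        # Metropolis acceptance criterion.
        if dE <= 0 or random.random() < math.exp(-dE / T):
            lattice[i][j] = new
    return lattice
```

Magnetization and susceptibility curves like those in the abstract are then obtained by averaging over many sweeps at each temperature and crystal-field value.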
Multiple scales in metapopulations of public goods producers
NASA Astrophysics Data System (ADS)
Bauer, Marianne; Frey, Erwin
2018-04-01
Multiple scales in metapopulations can give rise to paradoxical behavior: in a conceptual model for a public goods game, the species associated with a fitness cost due to the public good production can be stabilized in the well-mixed limit due to the mere existence of these scales. The scales in this model involve a length scale corresponding to separate patches, coupled by mobility, and separate time scales for reproduction and interaction with a local environment. Contrary to the well-mixed high mobility limit, we find that for low mobilities, the interaction rate progressively stabilizes this species due to stochastic effects, and that the formation of spatial patterns is not crucial for this stabilization.
A multiple-scale turbulence model for incompressible flow
NASA Technical Reports Server (NTRS)
Duncan, B. S.; Liou, W. W.; Shih, T. H.
1993-01-01
A multiple-scale eddy viscosity model is described. This model splits the energy spectrum into a high wave number regime and a low wave number regime. Dividing the energy spectrum into multiple regimes simplistically emulates the cascade of energy through the turbulence spectrum. The constraints on the model coefficients are determined by examining decaying turbulence and homogeneous turbulence. A direct link between the partitioned energies and the energy transfer process is established through the coefficients. This new model was calibrated and tested for boundary-free turbulent shear flows. Calculations of mean and turbulent properties show good agreement with experimental data for two mixing layers, a plane jet and a round jet.
A Parameter Subset Selection Algorithm for Mixed-Effects Models
Schmidt, Kathleen L.; Smith, Ralph C.
2016-01-01
Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
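One common way to order parameters by significance, sketched below, is greedy column selection on a sensitivity matrix (essentially Gram-Schmidt / pivoted-QR ordering). This is a generic illustration of the idea, not necessarily the PSS algorithm of the paper:

```python
import numpy as np

def order_parameters_by_significance(S):
    """Rank parameters by the independent information the sensitivity
    matrix S (rows: observations, cols: parameters) carries about each.
    At each step, pick the column with the largest norm, then remove
    its component from the remaining columns."""
    S = S.astype(float).copy()
    remaining = list(range(S.shape[1]))
    order = []
    while remaining:
        norms = {j: np.linalg.norm(S[:, j]) for j in remaining}
        best = max(norms, key=norms.get)
        order.append(best)
        remaining.remove(best)
        q = S[:, best] / (norms[best] + 1e-300)  # guard against zero columns
        for j in remaining:
            S[:, j] -= q * (q @ S[:, j])         # deflate shared information
    return order
```

Parameters that end up late in the ordering contribute little independent information and are natural candidates to fix or remove before running information-criterion model selection.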
NASA Astrophysics Data System (ADS)
Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.
2017-06-01
An extended quadrature method of moments using the β kernel density function (β-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β-PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
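The assumed β-PDF baseline cited above is simple enough to sketch: the scalar mean and variance are matched to the two shape parameters of a beta distribution (this is the standard moment-matching form of the Girimaji closure, written here as a minimal illustration):

```python
def assumed_beta_pdf_params(mean, var):
    """Shape parameters (a, b) of the beta distribution whose first two
    moments match the scalar mean (in (0, 1)) and variance.  Requires
    var < mean * (1 - mean), the maximum variance of a bounded scalar."""
    f = mean * (1.0 - mean) / var - 1.0
    return mean * f, (1.0 - mean) * f
```

As the variance decays during mixing, the matched beta PDF relaxes from a double-delta-like shape toward a peak at the mean; the β-EQMOM approach of the paper generalizes this single-beta assumption to a quadrature of beta kernels.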
Ben-Ami, Frida; Mouton, Laurence; Ebert, Dieter
2008-07-01
Multiple infections of a host by different strains of the same microparasite are common in nature. Although numerous models have been developed in an attempt to predict the evolutionary effects of intrahost competition, tests of the assumptions of these models are rare and the outcome is diverse. In the present study we examined the outcome of mixed-isolate infections in individual hosts, using a single clone of the waterflea Daphnia magna and three isolates of its semelparous endoparasite Pasteuria ramosa. We exposed individual Daphnia to single- and mixed-isolate infection treatments, both simultaneously and sequentially. Virulence was assessed by monitoring host mortality and fecundity, and parasite spore production was used as a measure of parasite fitness. Consistent with most assumptions, in multiply infected hosts we found that the virulence of mixed infections resembled that of the more virulent competitor, both in simultaneous multiple infections and in sequential multiple infections in which the virulent isolate was first to infect. The more virulent competitor also produced the vast majority of transmission stages. Only when the less virulent isolate was first to infect did the intrahost contest resemble scramble competition, whereby both isolates suffered by producing fewer transmission stages. Surprisingly, mixed-isolate infections resulted in lower fecundity costs for the hosts, suggesting that parasite competition comes with an advantage for the host relative to single infections. Finally, spore production correlated positively with time-to-host-death. Thus, early killing by more competitive isolates produces fewer transmission stages than the later killing by less virulent, inferior isolates. Our results are consistent with the idea that less virulent parasite lines may be replaced by more virulent strains under conditions with high rates of multiple infections.
Functional mixed effects spectral analysis
KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG
2011-01-01
SUMMARY In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
NASA Astrophysics Data System (ADS)
Jonrinaldi; Rahman, T.; Henmaidi; Wirdianto, E.; Zhang, D. Z.
2018-03-01
This paper proposes a mathematical model for multiple-item Economic Production and Order Quantity (EPQ/EOQ) that considers continuous and discrete demand simultaneously in a system consisting of a vendor and multiple buyers. The model is used to investigate the vendor's optimal production lot size and the number-of-shipments policy for orders to multiple buyers. It accounts for the multiple buyers' holding costs as well as transportation costs, and minimizes the total production and inventory costs of the system. The continuous demand from other customers can be fulfilled at any time by the vendor, while the discrete demand from multiple buyers is fulfilled using a multiple-delivery policy with a number of shipments of items within the production cycle time. A mathematical model is developed to illustrate the system based on the EPQ and EOQ models. Solution procedures are proposed to solve the model using mixed integer nonlinear programming (MINLP) and algorithmic methods. A numerical example is then provided to illustrate the system, and the results are discussed.
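The classic single-item building blocks of such models are the EOQ and EPQ lot-size formulas; the sketch below shows them only as background (the paper's full vendor-multiple-buyer MINLP is far richer than this):

```python
import math

def epq_lot_size(D, S, h, P=None):
    """Classic lot-size formulas: EOQ when the production rate P is
    unbounded (P=None), EPQ otherwise.

    D: demand rate, S: setup/order cost, h: holding cost per unit-time,
    P: finite production rate (must exceed D)."""
    rho = 0.0 if P is None else D / P            # fraction of capacity used
    return math.sqrt(2.0 * D * S / (h * (1.0 - rho)))
```

A finite production rate inflates the lot size relative to the EOQ because inventory builds up only at rate P - D during production; integer shipment counts and multiple buyers are what push the joint problem into MINLP territory.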
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
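The fixed-knot splines described above are often built from a truncated power basis, in which continuity and smoothness at the knots hold by construction. The sketch below is a generic illustration of that basis, not the authors' reparameterization:

```python
import numpy as np

def truncated_power_basis(x, knots, degree=2):
    """Design matrix for a piecewise polynomial of the given degree that
    is continuous, with continuous derivatives up to degree-1, at each
    knot (truncated power basis: 1, x, ..., x^d, (x-k)_+^d)."""
    cols = [x ** d for d in range(degree + 1)]      # global polynomial part
    for k in knots:
        # (x - k)_+^degree: zero left of the knot, smooth join at the knot.
        cols.append(np.where(x > k, (x - k) ** degree, 0.0))
    return np.column_stack(cols)
```

These columns can be placed in the fixed-effects and/or random-effects design matrices of a linear mixed model, which is what makes the approach easy to program in standard mixed-model software.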
NASA Astrophysics Data System (ADS)
Kim, Ji-Hyun; Kim, Kyoung-Ho; Thao, Nguyen Thi; Batsaikhan, Bayartungalag; Yun, Seong-Taek
2017-06-01
In this study, we evaluated the water quality status (especially salinity problems) and hydrogeochemical processes of an alluvial aquifer in a floodplain of the Red River delta, Vietnam, based on the hydrochemical and isotopic data of groundwater samples (n = 23) from the Kien Xuong district of the Thai Binh province. Following historical inundation by paleo-seawater during coastal progradation, the aquifer has undergone progressive freshening and land reclamation to enable settlements and farming. The hydrochemical data of water samples showed a broad hydrochemical change, from Na-Cl through Na-HCO3 to Ca-HCO3 types, suggesting that the groundwater overall evolved through a freshening process accompanying cation exchange. The principal component analysis (PCA) of the hydrochemical data indicates the occurrence of three major hydrogeochemical processes in the aquifer, namely: 1) progressive freshening of remaining paleo-seawater, 2) water-rock interaction (i.e., dissolution of silicates), and 3) redox processes including sulfate reduction, as indicated by heavy sulfur and oxygen isotope compositions of sulfate. To quantitatively assess the hydrogeochemical processes, end-member mixing analysis (EMMA) and forward mixing modeling using the PHREEQC code were conducted. The EMMA results show that the hydrochemical model with the two-dimensional mixing space composed of PC 1 and PC 2 best explains the mixing in the study area; therefore, we consider that the groundwater chemistry mainly evolved by mixing among three end-members (i.e., paleo-seawater, infiltrating rain, and the K-rich groundwater). The distinct depletion of sulfate in groundwater, likely due to bacterial sulfate reduction, can also be explained by EMMA. The evaluation of mass balances using geochemical modeling supports the explanation that the freshening process accompanying direct cation exchange occurs through mixing among the three end-members involving the K-rich groundwater.
This study shows that the multiple end-members mixing model is useful to more successfully assess complex hydrogeochemical processes occurring in a salinized aquifer under freshening, as compared to the conventional interpretation using the theoretical mixing line based on only two end-members (i.e., seawater and rainwater).
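The arithmetic at the heart of multiple end-member mixing can be sketched as a constrained least-squares problem (a generic illustration, not the EMMA/PHREEQC workflow of the study): given tracer concentrations for each end-member, solve for mixing fractions that sum to one.

```python
import numpy as np

def mixing_fractions(sample, end_members):
    """Least-squares mixing fractions of a sample among end-member
    compositions.

    sample: tracer concentrations of the mixed water (length m).
    end_members: m x p array, column j = composition of end-member j.
    The sum-to-one constraint is appended as an extra equation."""
    E = np.asarray(end_members, float)
    A = np.vstack([E, np.ones(E.shape[1])])      # tracer rows + constraint row
    b = np.append(np.asarray(sample, float), 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f
```

With three end-members and two well-chosen tracers (e.g. the first two principal components), the system is exactly determined, which is why a two-dimensional mixing space suffices in the study.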
Fully-coupled analysis of jet mixing problems. Part 1. Shock-capturing model, SCIPVIS
NASA Technical Reports Server (NTRS)
Dash, S. M.; Wolf, D. E.
1984-01-01
A computational model, SCIPVIS, is described which predicts the multiple cell shock structure in imperfectly expanded, turbulent, axisymmetric jets. The model spatially integrates the parabolized Navier-Stokes jet mixing equations using a shock-capturing approach in supersonic flow regions and a pressure-split approximation in subsonic flow regions. The regions are coupled using a viscous-characteristic procedure. Turbulence processes are represented via the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive analysis of the wake-like mixing process occurring behind Mach discs is handled in a rigorous manner. Calculations are presented exhibiting the fundamental interactive processes occurring in supersonic jets and the model is assessed via comparisons with detailed laboratory data for a variety of under- and overexpanded jets.
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software
Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the ’omics’ context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
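A key computation that mixed-model GWAS implementations avoid repeating is the factorization of the trait covariance. The sketch below (a simplified illustration, not OmicABEL's actual kernels) whitens the data once with a single Cholesky factorization and then scores every SNP with cheap vector operations:

```python
import numpy as np

def gwas_gls_betas(y, X_snps, V):
    """Per-SNP generalized least squares effect estimates
    beta_j = (x_j' V^{-1} y) / (x_j' V^{-1} x_j) for one trait,
    reusing one Cholesky factorization of the covariance V.

    y: trait vector (n,), X_snps: genotype matrix (n, p), V: (n, n)."""
    L = np.linalg.cholesky(V)
    y_w = np.linalg.solve(L, y)        # whiten the trait once
    X_w = np.linalg.solve(L, X_snps)   # whiten all SNP columns at once
    num = X_w.T @ y_w
    den = np.sum(X_w ** 2, axis=0)
    return num / den
```

Because the expensive O(n^3) factorization is amortized over all SNPs (and, in the multi-trait case, over all traits), the per-association cost drops to matrix-vector work, which is the scaling insight behind such software.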
Combustor cap having non-round outlets for mixing tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Michael John; Boardman, Gregory Allen; McConnaughhay, Johnie Franklin
2016-12-27
A system includes a combustor cap configured to be coupled to a plurality of mixing tubes of a multi-tube fuel nozzle, wherein each mixing tube of the plurality of mixing tubes is configured to mix air and fuel to form an air-fuel mixture. The combustor cap includes multiple nozzles integrated within the combustor cap. Each nozzle of the multiple nozzles is coupled to a respective mixing tube of the multiple mixing tubes. In addition, each nozzle of the multiple nozzles includes a first end and a second end. The first end is coupled to the respective mixing tube of the multiple mixing tubes. The second end defines a non-round outlet for the air-fuel mixture. Each nozzle of the multiple nozzles includes an inner surface having first and second portions; the first portion radially diverges along an axial direction from the first end to the second end, and the second portion radially converges along the axial direction from the first end to the second end.
A demonstration of mixed-methods research in the health sciences.
Katz, Janet; Vandermause, Roxanne; McPherson, Sterling; Barbosa-Leiker, Celestina
2016-11-18
Background: The growth of patient-, community- and population-centred nursing research is a rationale for the use of research methods that can examine complex healthcare issues not only from a biophysical perspective, but also from cultural, psychosocial and political viewpoints. This need for multiple perspectives requires mixed-methods research. Philosophy and practicality are needed to plan, conduct, and make mixed-methods research more broadly accessible to the health sciences research community. The traditions of, and dichotomy between, qualitative and quantitative research make the application of mixed methods a challenge. Aim: To propose an integrated model for a research project containing steps from start to finish, and to use the unique strengths brought by each approach to meet the health needs of patients and communities. Discussion: Mixed-methods research is a practical approach to inquiry that focuses on asking questions and how best to answer them to improve the health of individuals, communities and populations. An integrated model of research begins with the research question(s) and moves in a continuum. The lines dividing methods do not dissolve, but become permeable boundaries where two or more methods can be used to answer research questions more completely. Rigorous and expert methodologists work together to solve common problems. Conclusion: Mixed-methods research enables discussion among researchers from varied traditions. There is a plethora of methodological approaches available. Combining expertise by communicating across disciplines and professions is one way to tackle large and complex healthcare issues. Implications for practice: The model presented in this paper exemplifies the integration of multiple approaches in a unified focus on identified phenomena. The dynamic nature of the model signals a need to be open to the data generated and the methodological directions implied by findings.
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach for mixed longitudinal outcomes from the exponential family that takes into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
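The hidden-Markov backbone with a misclassification matrix can be illustrated by the standard forward algorithm (a textbook sketch, not the paper's Bayesian MHMM, which adds random effects and covariate-driven transitions):

```python
import numpy as np

def hmm_likelihood(obs, pi, A, B):
    """Forward algorithm: marginal probability of an observed sequence
    under a hidden Markov model with misclassification.

    pi: initial state probabilities (n_states,).
    A:  transition matrix, A[s, t] = P(state t | state s).
    B:  misclassification matrix, B[s, o] = P(observe o | true state s)."""
    alpha = pi * B[:, obs[0]]            # joint prob of state and first obs
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then absorb next obs
    return alpha.sum()
```

In the MHMM of the paper, the entries of A and B are themselves modeled with predictors and cluster-level random effects, so this forward recursion sits inside the likelihood of a larger hierarchical model.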
Kumar, M Praveen; Patil, Suneel G; Dheeraj, Bhandari; Reddy, Keshav; Goel, Dinker; Krishna, Gopi
2015-06-01
The difficulty in obtaining an acceptable impression increases exponentially as the number of abutments increases. Accuracy of the impression material and the use of a suitable impression technique are of utmost importance in the fabrication of a fixed partial denture. This study compared the accuracy of the matrix impression system with conventional putty reline and multiple mix techniques for individual dies by comparing the inter-abutment distance in the casts obtained from the impressions. Three groups of 10 impressions each were made of a master die using three impression techniques (matrix impression system, putty reline technique and multiple mix technique). Typodont teeth were embedded in a maxillary Frasaco model base. The left first premolar was removed to create a three-unit fixed partial denture situation, the left canine and second premolar were prepared conservatively, and hatch marks were made on the abutment teeth. The final casts obtained from the impressions were examined under a profile projector, and the inter-abutment distance was calculated for all the casts and compared. The results showed that in the mesiodistal dimensions the percentage deviation from the master model in Group I was 0.1 and 0.2, in Group II was 0.9 and 0.3, and in Group III was 1.6 and 1.5, respectively. In the labio-palatal dimensions the percentage deviation from the master model in Group I was 0.01 and 0.4, in Group II was 1.9 and 1.3, and in Group III was 2.2 and 2.0, respectively. In the cervico-incisal dimensions the percentage deviation from the master model in Group I was 1.1 and 0.2, in Group II was 3.9 and 1.7, and in Group III was 1.9 and 3.0, respectively. In the inter-abutment dimension of dies, the percentage deviation from the master model in Group I was 0.1, in Group II was 0.6, and in Group III was 1.0.
The matrix impression system showed more accuracy of reproduction for individual dies when compared with putty reline technique and multiple mix technique in all the three directions, as well as the inter-abutment distance.
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions; we therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
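The logit-normal mixed model described above can be illustrated with a short simulation. This is a hedged sketch with invented station-level rainfall-occurrence data (all parameter values are illustrative, not the paper's estimates); it shows how a normal random effect on the logit scale pulls the population-averaged rain probability below the subject-specific one.

```python
import numpy as np

rng = np.random.default_rng(0)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: daily rain occurrence at many stations.
beta0 = 1.0        # fixed intercept on the logit scale
sigma_b = 1.0      # std. dev. of the station-level random effect
n_stations, n_days = 2000, 50

b = rng.normal(0.0, sigma_b, size=n_stations)          # random effects
p = expit(beta0 + b)                                   # station-level rain probability
rain = rng.random((n_stations, n_days)) < p[:, None]   # simulated occurrences

p_marginal = rain.mean()        # population-averaged probability
p_conditional = expit(beta0)    # subject-specific probability at b = 0
# By Jensen's inequality (expit is concave for positive arguments),
# the marginal probability sits below expit(beta0) when beta0 > 0.
```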
Understanding the Identities of Mixed-Race College Students through a Developmental Ecology Lens.
ERIC Educational Resources Information Center
Renn, Kristen A.
2003-01-01
Using an ecology model of human development, frames the exploration of racial identities of 38 college students with multiple racial heritages. Maps the influence of interactions within and between specific environments on students' decisions to identify in one or more of five patterns of mixed race identity found in a previous study. (Contains 43…
Modeling Multiple Human-Automation Distributed Systems using Network-form Games
NASA Technical Reports Server (NTRS)
Brat, Guillaume
2012-01-01
The paper describes at a high-level the network-form game framework (based on Bayes net and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random effects combinations for the LME models were determined by the Akaike information criterion, the Bayesian information criterion and the −2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R²). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
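The plot-level random effect at the heart of the LME crown-width model can be sketched with a minimal variance-components example. This is not the authors' model (no covariates, no AR(1) structure or variance functions); it is a balanced one-way random-effects layout with illustrative values, estimated by the classical ANOVA method of moments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical crown-width data: m plots, n trees per plot (balanced design).
m, n = 60, 25
mu, sigma_b, sigma_e = 4.0, 0.6, 0.9        # assumed "true" values (illustrative)

b = rng.normal(0.0, sigma_b, size=m)                 # plot random effects
y = mu + b[:, None] + rng.normal(0.0, sigma_e, (m, n))

# One-way random-effects ANOVA (method-of-moments) estimators.
plot_means = y.mean(axis=1)
msb = n * plot_means.var(ddof=1)                     # between-plot mean square
msw = y.var(axis=1, ddof=1).mean()                   # within-plot mean square
sigma_e2_hat = msw
sigma_b2_hat = max((msb - msw) / n, 0.0)

# Intra-class correlation: share of variance due to plot membership,
# the quantity that makes pooled OLS standard errors too small.
icc = sigma_b2_hat / (sigma_b2_hat + sigma_e2_hat)
```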
Mixing in the shear superposition micromixer: three-dimensional analysis.
Bottausci, Frederic; Mezić, Igor; Meinhart, Carl D; Cardonne, Caroline
2004-05-15
In this paper, we analyse mixing in an active chaotic advection micromixer. The micromixer consists of a main rectangular channel and three cross-stream secondary channels that provide the ability for time-dependent actuation of the flow stream in the direction orthogonal to the main stream. Three-dimensional motion in the mixer is studied. Numerical simulations and modelling of the flow are pursued in order to understand the experiments. It is shown that for some values of parameters a simple model can be derived that clearly represents the flow nature. Particle image velocimetry measurements of the flow are compared with numerical simulations and the analytical model. A measure for mixing, the mixing variance coefficient (MVC), is analysed. It is shown that mixing is substantially improved with multiple side channels with oscillatory flows, whose frequencies are increasing downstream. The optimization of MVC results for single side-channel mixing is presented. It is shown that dependence of MVC on frequency is not monotone, and a local minimum is found. Residence time distributions derived from the analytical model are analysed. It is shown that, while the average Lagrangian velocity profile is flattened over the steady flow, Taylor-dispersion effects are still present for the current micromixer configuration.
NASA Astrophysics Data System (ADS)
Osman, M. K.; Hocking, W. K.; Tarasick, D. W.
2016-06-01
Vertical diffusion and mixing of tracers in the upper troposphere and lower stratosphere (UTLS) are not uniform, but primarily occur due to patches of turbulence that are intermittent in time and space. The effective diffusivity of regions of patchy turbulence is related to statistical parameters describing the morphology of turbulent events, such as lifetime, number, width, depth and local diffusivity (i.e., diffusivity within the turbulent patch) of the patches. While this has been recognized in the literature, the primary focus has been on well-mixed layers, with few exceptions. In such cases the local diffusivity is irrelevant, but this is not true for weakly and partially mixed layers. Here, we use both theory and numerical simulations to consider the impact of intermediate and weakly mixed layers, in addition to well-mixed layers. Previous approaches have considered only one dimension (vertical), and only a small number of layers (often one at each time step), and have examined mixing of constituents. We consider a two-dimensional case, with multiple layers (10 and more, up to hundreds and even thousands), having well-defined, non-infinite, lengths and depths. We then provide new formulas to describe cases involving well-mixed layers which supersede earlier expressions. In addition, we look in detail at layers that are not well mixed, and, as an interesting variation on previous models, our procedure is based on tracking the dispersion of individual particles, which is quite different to the earlier approaches which looked at mixing of constituents. We develop an expression which allows determination of the degree of mixing, and show that layers used in some previous models were in fact not well mixed and so produced erroneous results. 
We then develop a generalized model based on two-dimensional random-walk theory employing Rayleigh distributions, which allows us to develop a universal formula for diffusion rates for multiple two-dimensional layers with general degrees of mixing. We show that it is the largest, most vigorous and less common turbulent layers that make the major contribution to global diffusion. Finally, we make estimates of global-scale diffusion coefficients in the lower stratosphere and upper troposphere. For the lower stratosphere, κ_eff ≈ 2 × 10⁻² m² s⁻¹, assuming no other processes contribute to large-scale diffusion.
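The particle-tracking idea can be caricatured in a few lines. This hedged sketch is not the paper's two-dimensional Rayleigh model: particles take a diffusive step only while inside a turbulent patch (patch encounters modeled as independent events with probability f per step, all values illustrative), and the recovered effective diffusivity approaches the patch fraction times the local diffusivity.

```python
import numpy as np

rng = np.random.default_rng(2)

n_particles, n_steps = 5000, 400
dt = 1.0
k_local = 0.5      # diffusivity inside a patch (illustrative, m^2/s)
f = 0.1            # fraction of time a particle spends inside patches

step_sd = np.sqrt(2.0 * k_local * dt)
in_patch = rng.random((n_particles, n_steps)) < f
steps = rng.normal(0.0, step_sd, (n_particles, n_steps)) * in_patch
z = steps.sum(axis=1)                       # vertical displacement of each particle

# Effective (large-scale) diffusivity from the displacement variance;
# for intermittent, well-mixed patches it should approach f * k_local.
k_eff = z.var() / (2.0 * n_steps * dt)
```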
Modeling stream network-scale variation in coho salmon overwinter survival and smolt size
We used multiple regression and hierarchical mixed-effects models to examine spatial patterns of overwinter survival and size at smolting in juvenile coho salmon Oncorhynchus kisutch in relation to habitat attributes across an extensive stream network in southwestern Oregon over ...
Implementation of a diffusion convection surface evolution model in WallDYN
NASA Astrophysics Data System (ADS)
Schmid, K.
2013-07-01
In thermonuclear fusion experiments with multiple plasma-facing materials the formation of mixed materials is inevitable. The formation of these mixed-material layers is a dynamic process driven by the tight interaction between transport in the plasma scrape-off layer and erosion/(re-)deposition at the surface. To track this global material erosion/deposition balance and the resulting formation of mixed-material layers, the WallDYN code has been developed, which couples surface processes and plasma transport. The current surface model in WallDYN cannot fully handle the growth of layers, nor does it include diffusion. However, at elevated temperatures diffusion is a key process in the formation of mixed materials. To remedy this shortcoming, a new surface model has been developed which, for the first time, describes both layer growth/recession and diffusion in a single continuous diffusion/convection equation. The paper details the derivation of the new surface model and compares it to TRIDYN calculations.
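A generic one-dimensional diffusion-convection solver illustrates the kind of equation such a surface model combines. This is a textbook explicit finite-difference sketch with illustrative parameters, not the WallDYN surface model itself; the convection term stands in loosely for motion of material relative to a growing surface.

```python
import numpy as np

nx = 200
dx, dt = 1.0e-2, 2.0e-3    # grid spacing and a stable explicit time step
D, v = 1.0e-2, 0.05        # diffusivity and convection speed (illustrative)

c = np.zeros(nx)
c[0] = 1.0                 # fixed concentration at the plasma-facing surface
for _ in range(5000):
    lap = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2    # d2c/dx2
    grad = (c[2:] - c[:-2]) / (2.0 * dx)                # dc/dx
    c[1:-1] += dt * (D * lap - v * grad)
    c[0] = 1.0             # Dirichlet boundary at the surface
    c[-1] = c[-2]          # zero-gradient boundary deep in the bulk
# The concentration profile decays monotonically into the material.
```

The explicit scheme is stable here because D·dt/dx² = 0.2 ≤ 0.5 and the cell Péclet number v·dx/D = 0.05 is small.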
Magnetic properties of magnetic bilayer Kekulene structure: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Jabar, A.; Masrour, R.
2018-06-01
In the present work, we have studied the magnetic properties of a magnetic bilayer Kekulene structure with a mixed spin-5/2 and spin-2 Ising model using Monte Carlo simulations. The magnetic phase diagrams of the mixed-spin Ising model are given. The thermal behavior of the total and partial magnetizations and magnetic susceptibilities of the mixed spin-5/2 and spin-2 Ising model on a magnetic bilayer Kekulene structure is obtained, and the transition temperature is deduced. The effect of the crystal field and exchange interactions on this bilayer has been studied. The partial and total magnetic hysteresis cycles of the mixed spin-5/2 and spin-2 Ising model on a magnetic bilayer Kekulene structure are given, and superparamagnetic behavior is observed in the magnetic bilayer Kekulene structure. The magnetic coercive field decreases with increasing σ-σ exchange interaction and temperature, and increases with increasing absolute value of the σ-S exchange interaction. Multiple hysteresis behavior appears.
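The mixed spin-5/2 / spin-2 Metropolis procedure can be sketched on a small square lattice. The actual study uses a bilayer Kekulene geometry with crystal fields; the lattice, coupling, and temperature below are illustrative. Starting from the saturated ferromagnetic state, the magnetization remains close to its saturation value (5/2 + 2)/2 = 2.25 per site at low temperature.

```python
import numpy as np

rng = np.random.default_rng(3)

L = 8                    # small square lattice (illustrative geometry)
J = 1.0                  # ferromagnetic inter-sublattice coupling
T = 0.5                  # low temperature, k_B = 1
sigma_vals = np.arange(-5, 6, 2) / 2.0   # sigma in {-5/2, ..., 5/2} on even sites
s_vals = np.arange(-2.0, 3.0, 1.0)       # S in {-2, ..., 2} on odd sites

parity = (np.add.outer(np.arange(L), np.arange(L)) % 2) == 0
spins = np.where(parity, 2.5, 2.0)       # saturated starting configuration

def local_field(i, j):
    # Sum of nearest neighbors with periodic boundaries.
    return J * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])

for _ in range(100):                     # Metropolis sweeps
    for i in range(L):
        for j in range(L):
            vals = sigma_vals if parity[i, j] else s_vals
            new = rng.choice(vals)
            dE = -(new - spins[i, j]) * local_field(i, j)
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                spins[i, j] = new

m_total = abs(spins.mean())              # total magnetization per site
```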
Automated macromolecular crystallization screening
Segelke, Brent W.; Rupp, Bernhard; Krupka, Heike I.
2005-03-01
An automated macromolecular crystallization screening system wherein a multiplicity of reagent mixes are produced. A multiplicity of analysis plates is produced utilizing the reagent mixes combined with a sample. The analysis plates are incubated to promote growth of crystals. Images of the crystals are made. The images are analyzed with regard to suitability of the crystals for analysis by x-ray crystallography. A design of reagent mixes is produced based upon the expected suitability of the crystals for analysis by x-ray crystallography. A second multiplicity of mixes of the reagent components is produced utilizing the design and a second multiplicity of reagent mixes is used for a second round of automated macromolecular crystallization screening. In one embodiment the multiplicity of reagent mixes are produced by a random selection of reagent components.
Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu
2016-01-01
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed- and random-effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the model over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. A dataset with half a million individuals and half a million markers can now be analyzed within three days. PMID:26828793
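The FEM step, testing one marker at a time with previously selected markers as covariates, can be caricatured without the kinship-based REM. This is a much-simplified sketch with simulated genotypes; sizes, the causal position, and the effect size are all illustrative, and p-values use a normal approximation to the t statistic.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)

n, p = 400, 60
X = rng.integers(0, 3, size=(n, p)).astype(float)    # genotypes coded 0/1/2
causal = 17                                          # hypothetical causal marker
y = 0.8 * X[:, causal] + rng.normal(0.0, 1.0, n)

def marker_pvalues(X, y, covariates):
    """OLS test of each marker, conditioning on selected covariate markers."""
    pvals = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        cols = [np.ones(len(y)), X[:, j]] + [X[:, k] for k in covariates if k != j]
        A = np.column_stack(cols)
        beta = np.linalg.lstsq(A, y, rcond=None)[0]
        resid = y - A @ beta
        s2 = resid @ resid / (len(y) - A.shape[1])
        cov = s2 * np.linalg.inv(A.T @ A)
        t = beta[1] / np.sqrt(cov[1, 1])
        pvals[j] = erfc(abs(t) / sqrt(2.0))          # two-sided, normal approx.
    return pvals

pvals = marker_pvalues(X, y, covariates=[])
selected = [int(np.argmin(pvals))]                   # pseudo-QTN for the next round
pvals2 = marker_pvalues(X, y, covariates=selected)
top_marker = int(np.argmin(pvals2))
```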
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
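The key computational trick, solving the linearly related parameters analytically inside a Monte Carlo search over the non-linear ones, can be shown with a toy one-parameter example. The model and all values below are illustrative, not the paper's geodetic problem: d(t) = a·exp(−t/τ), with amplitude a linear and decay time τ non-linear.

```python
import numpy as np

rng = np.random.default_rng(5)

t = np.linspace(0.0, 10.0, 100)
a_true, tau_true, noise_sd = 2.0, 3.0, 0.05          # illustrative "truth"
d = a_true * np.exp(-t / tau_true) + rng.normal(0.0, noise_sd, t.size)

# Sweep the non-linear parameter; for each candidate tau the linear
# coefficient has a closed-form least-squares solution.
taus = np.linspace(0.5, 8.0, 400)
misfit = np.empty(taus.size)
a_hat = np.empty(taus.size)
for i, tau in enumerate(taus):
    g = np.exp(-t / tau)
    a_hat[i] = (g @ d) / (g @ g)        # analytic least squares for 'a'
    r = d - a_hat[i] * g
    misfit[i] = r @ r

best = int(np.argmin(misfit))
tau_map, a_map = taus[best], a_hat[best]
```

In the full Bayesian version the sweep would be replaced by Monte Carlo sampling and the misfit by a posterior density, but the analytic elimination of the linear parameters is the same.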
New Models for Predicting Diameter at Breast Height from Stump Dimensions
James A. Westfall
2010-01-01
Models to predict dbh from stump dimensions are presented for 18 species groups. Data used to fit the models were collected across thirteen states in the northeastern United States. Primarily because of the presence of multiple measurements from each tree, a mixed-effects modeling approach was used to account for the lack of independence among observations. The...
Biological auctions with multiple rewards
Reiter, Johannes G.; Kanodia, Ayush; Gupta, Raghav; Nowak, Martin A.; Chatterjee, Krishnendu
2015-01-01
The competition for resources among cells, individuals or species is a fundamental characteristic of evolution. Biological all-pay auctions have been used to model situations where multiple individuals compete for a single resource. However, in many situations multiple resources with various values exist and single reward auctions are not applicable. We generalize the model to multiple rewards and study the evolution of strategies. In biological all-pay auctions the bid of an individual corresponds to its strategy and is equivalent to its payment in the auction. The decreasingly ordered rewards are distributed according to the decreasingly ordered bids of the participating individuals. The reproductive success of an individual is proportional to its fitness given by the sum of the rewards won minus its payments. Hence, successful bidding strategies spread in the population. We find that the results for the multiple reward case are very different from the single reward case. While the mixed strategy equilibrium in the single reward case with more than two players consists of mostly low-bidding individuals, we show that the equilibrium can convert to many high-bidding individuals and a few low-bidding individuals in the multiple reward case. Some reward values lead to a specialization among the individuals where one subpopulation competes for the rewards and the other subpopulation largely avoids costly competitions. Whether the mixed strategy equilibrium is an evolutionarily stable strategy (ESS) depends on the specific values of the rewards. PMID:26180069
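The reward-allocation rule in the abstract, decreasingly ordered rewards to decreasingly ordered bids with every player paying its own bid, is easy to state in code. Player counts and values below are illustrative.

```python
import numpy as np

def payoffs(bids, rewards):
    """Fitness of each bidder in a multi-reward all-pay auction:
    sorted rewards go to sorted bids; everyone pays their bid."""
    bids = np.asarray(bids, dtype=float)
    rewards_sorted = np.sort(np.asarray(rewards, dtype=float))[::-1]
    won = np.zeros(bids.size)
    order = np.argsort(-bids)                  # highest bidder first
    k = min(bids.size, rewards_sorted.size)
    won[order[:k]] = rewards_sorted[:k]
    return won - bids                          # rewards won minus payment

# Example: three players, two rewards.
f = payoffs([0.5, 2.0, 1.0], rewards=[4.0, 1.0])
# bid 2.0 wins the 4.0 reward, bid 1.0 wins the 1.0 reward,
# bid 0.5 wins nothing but still pays -> f = [-0.5, 2.0, 0.0]
```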
CFD simulation of mechanical draft tube mixing in anaerobic digester tanks.
Meroney, Robert N; Colorado, P E
2009-03-01
Computational Fluid Dynamics (CFD) was used to simulate the mixing characteristics of four different circular anaerobic digester tanks (diameters of 13.7, 21.3, 30.5, and 33.5m) equipped with single and multiple draft impeller tube mixers. Rates of mixing of step and slug injection of tracers were calculated from which digester volume turnover time (DVTT), mixture diffusion time (MDT), and hydraulic retention time (HRT) could be calculated. Washout characteristics were compared to analytic formulae to estimate any presence of partial mixing, dead volume, short-circuiting, or piston flow. CFD satisfactorily predicted performance of both model and full-scale circular tank configurations.
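The washout comparison against an ideal completely mixed reactor can be sketched analytically. Assuming a hypothetical dead-volume fraction (values are illustrative), the effective retention time recovered from the log-slope of the tracer tail reveals the inactive volume.

```python
import numpy as np

hrt = 20.0                  # nominal hydraulic retention time (days, illustrative)
active_frac = 0.8           # assumed active (non-dead) volume fraction
t = np.linspace(0.0, 60.0, 601)

# Slug-injection response of an ideal CSTR in which only the active
# volume participates in mixing: E(t) = exp(-t/tau_eff) / tau_eff.
tau_eff = active_frac * hrt
c = np.exp(-t / tau_eff) / tau_eff

# Recover the effective retention time from the log-slope of the tail,
# then infer the dead-volume fraction relative to the nominal HRT.
slope = np.polyfit(t, np.log(c), 1)[0]
tau_est = -1.0 / slope
dead_volume_frac = 1.0 - tau_est / hrt
```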
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran
Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
Koral, Kenneth F.; Avram, Anca M.; Kaminski, Mark S.; Dewaraja, Yuni K.
2012-01-01
Background: For individualized treatment planning in radioimmunotherapy (RIT), correlations must be established between tracer-predicted and therapy-delivered absorbed doses. The focus of this work was to investigate this correlation for tumors. Methods: The study analyzed 57 tumors in 19 follicular lymphoma patients treated with I-131 tositumomab and imaged with SPECT/CT multiple times after tracer and therapy administrations. Instead of the typical least-squares fit to a single tumor's measured time-activity data, estimation was accomplished via a biexponential mixed model in which the curves from multiple subjects were jointly estimated. The tumor-absorbed dose estimates were determined by patient-specific Monte Carlo calculation. Results: The mixed model gave realistic tumor time-activity fits that showed the expected uptake and clearance phases even with noisy data or missing time points. Correlation between tracer and therapy tumor-residence times (r=0.98; p<0.0001) and correlation between tracer-predicted and therapy-delivered mean tumor-absorbed doses (r=0.86; p<0.0001) were very high. The predicted and delivered absorbed doses were within ±25% (or within ±75 cGy) for 80% of tumors. Conclusions: The mixed-model approach is feasible for fitting tumor time-activity data in RIT treatment planning when individual least-squares fitting is not possible due to inadequate sampling points. The good correlation between predicted and delivered tumor doses demonstrates the potential of using a pretherapy tracer study for tumor dosimetry-based treatment planning in RIT. PMID:22947086
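The biexponential time-activity form with uptake and clearance phases, and the residence time as the area under the curve, can be sketched as follows. Parameters are illustrative, not patient data; the trapezoidal integral is checked against the closed-form area.

```python
import numpy as np

a0 = 1.5            # amplitude, fraction of administered activity (illustrative)
lam_clear = 0.05    # clearance rate (1/h)
lam_uptake = 0.5    # uptake rate (1/h)

t = np.linspace(0.0, 400.0, 40001)
# Biexponential curve: rises during uptake, then decays during clearance.
activity = a0 * (np.exp(-lam_clear * t) - np.exp(-lam_uptake * t))

# Residence time ~ area under the fractional time-activity curve.
residence_numeric = float(((activity[1:] + activity[:-1]) * 0.5 * np.diff(t)).sum())
residence_analytic = a0 * (1.0 / lam_clear - 1.0 / lam_uptake)   # = 27.0 here
```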
NASA Astrophysics Data System (ADS)
Yuan, Cadmus C. A.
2015-12-01
Optical ray-tracing models have applied the Beer-Lambert method to a single-luminescence-material system to model the white-light pattern from a blue LED light source. This paper extends this algorithm to a mixed multiple-luminescence-material system by introducing the equivalent excitation and emission spectra of the individual luminescence materials. The quantum efficiencies of the individual materials and the self-absorption of the multiple-luminescence-material system are considered as well. With this combination, researchers are able to model the luminescence characteristics of LED chip-scale packaging (CSP), which offers simple process steps and freedom in the geometrical dimensions of the luminescence material. The method is first validated against experimental results; a parametric investigation is then conducted.
Multiple jet study data correlations. [data correlation for jet mixing flow of air jets
NASA Technical Reports Server (NTRS)
Walker, R. E.; Eberhardt, R. G.
1975-01-01
Correlations are presented which allow determination of penetration and mixing of multiple cold air jets injected normal to a ducted subsonic heated primary air stream. Correlations were obtained over jet-to-primary stream momentum flux ratios of 6 to 60 for locations from 1 to 30 jet diameters downstream of the injection plane. The range of geometric and operating variables makes the correlations relevant to gas turbine combustors. Correlations were obtained for the mixing efficiency between jets and primary stream using an energy exchange parameter. Also jet centerplane velocity and temperature trajectories were correlated and centerplane dimensionless temperature distributions defined. An assumption of a Gaussian vertical temperature distribution at all stations is shown to result in a reasonable temperature field model. Data are presented which allow comparison of predicted and measured values over the range of conditions specified above.
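The assumed Gaussian vertical temperature distribution can be written down directly; the centerline values below are illustrative, not the report's correlations.

```python
import numpy as np

def theta_profile(y, y_center, theta_max, half_width):
    """Dimensionless temperature vs. vertical position y:
    a Gaussian centered on the jet centerplane trajectory."""
    # Convert half-width at half-maximum to a Gaussian sigma.
    sigma = half_width / np.sqrt(2.0 * np.log(2.0))
    return theta_max * np.exp(-0.5 * ((y - y_center) / sigma) ** 2)

# Example station: hypothetical centerline height, peak, and spread.
y = np.linspace(0.0, 1.0, 101)
theta = theta_profile(y, y_center=0.4, theta_max=0.6, half_width=0.15)
# theta equals theta_max at the centerline and half of it one
# half-width away, by construction.
```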
NASA Astrophysics Data System (ADS)
Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.
2015-05-01
The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and vigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
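A minimal two-end-member version of a BMC isotope mixing model shows how end-member uncertainty propagates into the source-fraction estimate. All isotope values below are invented for illustration; samples are kept when the simulated mixture reproduces the measurement within its error.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative delta-18O end-members with uncertainty (per mil).
snow_mu, snow_sd = -22.0, 0.8     # end-member A: snowmelt
ice_mu, ice_sd = -18.0, 0.8       # end-member B: ice melt
obs, obs_sd = -19.0, 0.3          # bulk meltwater measurement

n = 200000
snow = rng.normal(snow_mu, snow_sd, n)
ice = rng.normal(ice_mu, ice_sd, n)
f_snow = rng.random(n)                       # uniform prior on the snow fraction
mix = f_snow * snow + (1.0 - f_snow) * ice
keep = np.abs(mix - obs) < obs_sd            # simple acceptance criterion

# Posterior summary: the spread reflects end-member variability,
# which deterministic two-member unmixing would ignore.
f_mean = f_snow[keep].mean()
f_lo, f_hi = np.percentile(f_snow[keep], [2.5, 97.5])
```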
Water sources and mixing in riparian wetlands revealed by tracers and geospatial analysis.
Lessels, Jason S; Tetzlaff, Doerthe; Birkel, Christian; Dick, Jonathan; Soulsby, Chris
2016-01-01
Mixing of waters within riparian zones has been identified as an important influence on runoff generation and water quality. Improved understanding of the controls on the spatial and temporal variability of water sources and how they mix in riparian zones is therefore of both fundamental and applied interest. In this study, we have combined topographic indices derived from a high-resolution Digital Elevation Model (DEM) with repeated spatially high-resolution synoptic sampling of multiple tracers to investigate such dynamics of source water mixing. We use geostatistics to estimate concentrations of three different tracers (deuterium, alkalinity, and dissolved organic carbon) across an extended riparian zone in a headwater catchment in NE Scotland, to identify spatial and temporal influences on mixing of source waters. The various biogeochemical tracers and stable isotopes helped constrain the sources of runoff and their temporal dynamics. Results show that spatial variability in all three tracers was evident in all sampling campaigns, but more pronounced in warmer, drier periods. The extent of mixing areas within the riparian area reflected strong hydroclimatic controls and showed large degrees of expansion and contraction that were not strongly related to topographic indices. The integrated approach of using multiple tracers, geospatial statistics, and topographic analysis allowed us to classify three main riparian source areas and mixing zones. This study underlines the importance of the riparian zones for mixing soil water and groundwater and introduces a novel approach by which this mixing can be quantified and its effect on downstream chemistry assessed.
Willcox, Jon A L; Kim, Hyung J
2017-02-28
A molecular dynamics graphene oxide model is used to shed light on commonly overlooked features of graphene oxide membranes. The model features both perpendicular and parallel water flow across multiple sheets of pristine and/or oxidized graphene to simulate "brick-and-mortar" microstructures. Additionally, regions of pristine/oxidized graphene overlap that have thus far been overlooked in the literature are explored. Differences in orientational and hydrogen-bonding features between adjacent layers of water in this mixed region are found to be even more prominent than differences between pristine and oxidized channels. This region also shows lateral water flow in equilibrium simulations and orthogonal flow in non-equilibrium simulations significantly greater than those in the oxidized region, suggesting it may play a non-negligible role in the mechanism of water flow across graphene oxide membranes.
MixGF: spectral probabilities for mixture spectra from more than one peptide.
Wang, Jian; Bourne, Philip E; Bandeira, Nuno
2014-12-01
In large-scale proteomic experiments, multiple peptide precursors are often cofragmented simultaneously in the same mixture tandem mass (MS/MS) spectrum. These spectra tend to elude current computational tools because of the ubiquitous assumption that each spectrum is generated from only one peptide. Therefore, tools that consider multiple peptide matches to each MS/MS spectrum can potentially improve the relatively low spectrum identification rate often observed in proteomics experiments. More importantly, data independent acquisition protocols promoting the cofragmentation of multiple precursors are emerging as alternative methods that can greatly improve the throughput of peptide identifications, but their success also depends on the availability of algorithms to identify multiple peptides from each MS/MS spectrum. Here we address a fundamental question in the identification of mixture MS/MS spectra: determining the statistical significance of multiple peptides matched to a given MS/MS spectrum. We propose the MixGF generating function model to rigorously compute the statistical significance of peptide identifications for mixture spectra and show that this approach improves the sensitivity of current mixture spectra database search tools by ≈30-390%. Analysis of multiple data sets with MixGF reveals that in complex biological samples the number of identified mixture spectra can be as high as 20% of all the identified spectra and the number of unique peptides identified only in mixture spectra can be up to 35.4% of those identified in single-peptide spectra. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.
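The generating-function idea behind MixGF, computing the probability that a random peptide of a given precursor mass reaches at least a given match score, can be illustrated with a toy dynamic program. The residue alphabet, masses, and scores below are invented for illustration; the real MS-GF/MixGF models use actual amino acid masses and spectrum-derived scoring.

```python
from itertools import product

# Toy residue table: name -> (integer mass, score contribution).
# These values are invented; real models use actual residue masses.
RESIDUES = {"A": (2, 1), "B": (3, 0), "C": (5, 2)}

def spectral_probability(target_mass, min_score):
    """P(random peptide has exact mass target_mass and score >= min_score),
    with each residue drawn uniformly and independently."""
    p = 1.0 / len(RESIDUES)
    # dp[m] maps score -> total probability of sequences with exact mass m
    dp = [dict() for _ in range(target_mass + 1)]
    dp[0][0] = 1.0
    for m in range(1, target_mass + 1):
        for mass, sc in RESIDUES.values():
            if m >= mass:
                for s, prob in dp[m - mass].items():
                    dp[m][s + sc] = dp[m].get(s + sc, 0.0) + prob * p
    return sum(prob for s, prob in dp[target_mass].items() if s >= min_score)

def brute_force(target_mass, min_score):
    """Direct enumeration of all sequences, to check the dynamic program."""
    p = 1.0 / len(RESIDUES)
    min_mass = min(m for m, _ in RESIDUES.values())
    total = 0.0
    for length in range(1, target_mass // min_mass + 1):
        for seq in product(RESIDUES, repeat=length):
            mass = sum(RESIDUES[r][0] for r in seq)
            score = sum(RESIDUES[r][1] for r in seq)
            if mass == target_mass and score >= min_score:
                total += p ** length
    return total
```

The dynamic program visits each (mass, score) state once, which is what makes spectral probabilities tractable for realistic mass ranges where enumeration is impossible.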
Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei
2017-11-01
A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation has often been overlooked in previous studies over the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e., rabbit and porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of humans over a frequency range of 1-250 Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of humans. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.
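The motivation for the mixed-effects approach here, that repeated measurements on the same specimen are correlated and treating them as independent understates uncertainty, can be demonstrated with a short simulation (all sample sizes and variances below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_reps = 20, 10          # 20 tissue samples, 10 measurements each

# Each sample has its own random offset (the source of intra-class
# correlation), plus independent measurement noise.
sample_effect = rng.normal(0.0, 1.0, size=n_samples)
y = sample_effect[:, None] + rng.normal(0.0, 1.0, size=(n_samples, n_reps))

# Naive SE: pretend all 200 measurements are independent.
naive_se = y.std(ddof=1) / np.sqrt(n_samples * n_reps)

# Cluster-aware SE: average within each sample first, then use the 20
# independent sample means (the structure a random intercept captures).
means = y.mean(axis=1)
cluster_se = means.std(ddof=1) / np.sqrt(n_samples)

# The naive SE is markedly smaller, i.e., biased downwards, which is
# exactly the inflated-significance problem the abstract describes.
print(naive_se, cluster_se)
```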
Applying the Mixed Rasch Model to the Runco Ideational Behavior Scale
ERIC Educational Resources Information Center
Sen, Sedat
2016-01-01
Previous research using creativity assessments has used latent class models and identified multiple classes (a 3-class solution) associated with various domains. This study explored the latent class structure of the Runco Ideational Behavior Scale, which was designed to quantify ideational capacity. A robust state-of-the-art technique called the…
Modeling stream network-scale variation in Coho salmon overwinter survival and smolt size
Joseph L. Ebersole; Mike E. Colvin; Parker J. Wigington; Scott G. Leibowitz; Joan P. Baker; Jana E. Compton; Bruce A. Miller; Michael A. Carins; Bruce P. Hansen; Henry R. La Vigne
2009-01-01
We used multiple regression and hierarchical mixed-effects models to examine spatial patterns of overwinter survival and size at smolting in juvenile coho salmon Oncorhynchus kisutch in relation to habitat attributes across an extensive stream network in southwestern Oregon over 3 years. Contributing basin area explained the majority of spatial...
A bayesian hierarchical model for classification with selection of functional predictors.
Zhu, Hongxiao; Vannucci, Marina; Cox, Dennis D
2010-06-01
In functional data classification, functional observations are often contaminated by various systematic effects, such as random batch effects caused by device artifacts, or fixed effects caused by sample-related factors. These effects may lead to classification bias and thus should not be neglected. Another issue of concern is the selection of functions when predictors consist of multiple functions, some of which may be redundant. The above issues arise in a real data application where we use fluorescence spectroscopy to detect cervical precancer. In this article, we propose a Bayesian hierarchical model that takes into account random batch effects and selects effective functions among multiple functional predictors. Fixed effects or predictors in nonfunctional form are also included in the model. The dimension of the functional data is reduced through orthonormal basis expansion or functional principal components. For posterior sampling, we use a hybrid Metropolis-Hastings/Gibbs sampler, which suffers from slow mixing. An evolutionary Monte Carlo algorithm is applied to improve the mixing. Simulation and real data application show that the proposed model provides accurate selection of functional predictors as well as good classification.
Scheduling Real-Time Mixed-Criticality Jobs
NASA Astrophysics Data System (ADS)
Baruah, Sanjoy K.; Bonifaci, Vincenzo; D'Angelo, Gianlorenzo; Li, Haohan; Marchetti-Spaccamela, Alberto; Megow, Nicole; Stougie, Leen
Many safety-critical embedded systems are subject to certification requirements; some systems may be required to meet multiple sets of certification requirements, from different certification authorities. Certification requirements in such "mixed-criticality" systems give rise to interesting scheduling problems that cannot be satisfactorily addressed using techniques from conventional scheduling theory. In this paper, we study a formal model for representing such mixed-criticality workloads. We demonstrate first the intractability of determining whether a system specified in this model can be scheduled to meet all its certification requirements, even for systems subject to two sets of certification requirements. Then we quantify, via the metric of processor speedup factor, the effectiveness of two techniques, reservation-based scheduling and priority-based scheduling, that are widely used in scheduling such mixed-criticality systems, showing that the latter of the two is superior to the former. We also show that the speedup factors are tight for these two techniques.
Lidar observation of marine mixed layer
NASA Technical Reports Server (NTRS)
Yamagishi, Susumu; Yamanouchi, Hiroshi; Tsuchiya, Masayuki
1992-01-01
The marine mixed layer is known to play an important role in the transport of pollution exiting ship funnels. The application of a diffusion model is critically dependent upon a reliable estimate of the lid. However, the processes that form lids are not well understood, though considerable progress on the marine boundary layer has been achieved. This report describes shipboard lidar observations of the marine mixed layer along the course from Ise-wan to Nii-jima, made with the intention of gaining a better understanding of its structure. These observations were made in the summer of 1991. One interesting feature of the observations was that multiple layers of aerosols, which are rarely numerically modeled, were encountered. No attempt is yet made to present a systematic analysis of all the data collected; instead we focus on observations that seem to be directly relevant to the structure of the mixed layer.
Mixing of shallow and deep groundwater as indicated by the chemistry and age of karstic springs
NASA Astrophysics Data System (ADS)
Toth, David J.; Katz, Brian G.
2006-06-01
Large karstic springs in east-central Florida, USA were studied using multi-tracer and geochemical modeling techniques to better understand groundwater flow paths and mixing of shallow and deep groundwater. Spring water types included Ca-HCO3 (six), Na-Cl (four), and mixed (one). The evolution of water chemistry for Ca-HCO3 spring waters was modeled by reactions of rainwater with soil organic matter, calcite, and dolomite under oxic conditions. The Na-Cl and mixed-type springs were modeled by reactions of either rainwater or Upper Floridan aquifer water with soil organic matter, calcite, and dolomite under oxic conditions and mixed with varying proportions of saline Lower Floridan aquifer water, which represented 4-53% of the total spring discharge. Multiple-tracer data—chlorofluorocarbon CFC-113, tritium (3H), helium-3 (3Hetrit), sulfur hexafluoride (SF6)—for four Ca-HCO3 spring waters were consistent with binary mixing curves representing water recharged during 1980 or 1990 mixing with an older (recharged before 1940) tracer-free component. Young-water mixing fractions ranged from 0.3 to 0.7. Tracer concentration data for two Na-Cl spring waters appear to be consistent with binary mixtures of 1990 water with older water recharged in 1965 or 1975. Nitrate-N concentrations are inversely related to apparent ages of spring waters, which indicated that elevated nitrate-N concentrations were likely contributed from recent recharge.
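The binary mixing interpretation used above, a young tracer-bearing component mixed with an old tracer-free component, reduces to a simple two-end-member mass balance. A sketch (the tracer values are invented for illustration, not taken from the study):

```python
def young_fraction(c_sample, c_young, c_old=0.0):
    """Two-end-member mass balance:
    c_sample = f * c_young + (1 - f) * c_old,  solved for f.
    c_old defaults to 0 for a tracer-free (pre-bomb) end member."""
    return (c_sample - c_old) / (c_young - c_old)

# E.g., a spring tritium concentration of 3.5 TU against a hypothetical
# 1990-recharge end member of 7.0 TU and a tracer-free old end member:
f = young_fraction(3.5, 7.0)   # young-water mixing fraction of 0.5
```

The study's reported young-water fractions of 0.3-0.7 correspond to solving this balance simultaneously for several tracers (CFC-113, 3H, 3He, SF6) rather than one.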
Single-channel mixed signal blind source separation algorithm based on multiple ICA processing
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Li, Ji
2017-01-01
Motivated by the problem of separating the fetal heart sound from the single-channel mixed signal recorded by an electronic stethoscope, this paper proposes a single-channel blind source separation algorithm based on multiple rounds of ICA processing. First, empirical mode decomposition (EMD) splits the single-channel mixed signal into multiple orthogonal components, which are then processed by ICA; the resulting independent components are called independent subcomponents of the mixed signal. Next, combining these independent subcomponents with the original single-channel mixed signal expands it into a multichannel signal, which turns the underdetermined blind source separation problem into a well-posed one. A further ICA pass then yields an estimate of the source signal. Finally, if the separation is unsatisfactory, the previous separation result is combined with the single-channel mixed signal and ICA is applied again, repeating until the desired source estimate is obtained. Simulation results show that the algorithm separates single-channel mixed physiological signals effectively.
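The channel-expansion step can be sketched as follows. This is not the authors' implementation: the signals are synthetic, a crude moving-average split stands in for EMD, and scikit-learn's FastICA stands in for their ICA stage.

```python
import numpy as np
from sklearn.decomposition import FastICA

def decompose(x, window=25):
    """Stand-in for EMD: split the signal into a smooth component and its
    residual. A real implementation would extract IMFs via sifting."""
    kernel = np.ones(window) / window
    low = np.convolve(x, kernel, mode="same")
    return [low, x - low]

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
slow = np.sin(2 * np.pi * 3 * t)             # slow source (e.g. maternal)
fast = np.sign(np.sin(2 * np.pi * 40 * t))   # fast source (e.g. fetal)
x = slow + 0.5 * fast                        # single-channel mixture

# Expand the single channel into a multichannel observation matrix:
# the original signal plus its decomposition components, one per column.
X = np.column_stack([x] + decompose(x))

# Now a well-posed problem: estimate two sources from three "channels".
ica = FastICA(n_components=2, random_state=0, max_iter=1000)
S = ica.fit_transform(X)        # columns are the estimated sources
```

The paper's iterative refinement corresponds to feeding `S` back in with `x` and repeating the ICA pass when the estimates are still mixed.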
Overview of Global/Regional Models Used to Evaluate Tropospheric Ozone in North America
NASA Technical Reports Server (NTRS)
Johnson, Matthew S.
2015-01-01
Ozone (O3) is an important greenhouse gas and toxic pollutant, and plays a major role in atmospheric chemistry. Tropospheric O3 that resides in the planetary boundary layer (PBL) is highly reactive and has a lifetime on the order of days; however, O3 in the free troposphere and stratosphere has a lifetime on the order of weeks or months. Modeling O3 mixing ratios at and above the surface is difficult due to the multiple formation/destruction processes and transport pathways that cause large spatio-temporal variability in O3 mixing ratios. This talk will summarize in detail the global/regional models that are commonly used to simulate/predict O3 mixing ratios in the United States. The major models which will be focused on are the: 1) Community Multi-scale Air Quality Model (CMAQ), 2) Comprehensive Air Quality Model with Extensions (CAMx), 3) Goddard Earth Observing System with Chemistry (GEOS-Chem), 4) Real Time Air Quality Modeling System (RAQMS), 5) Weather Research and Forecasting/Chemistry (WRF-Chem) model, 6) National Center for Atmospheric Research (NCAR)'s Model for OZone And Related chemical Tracers (MOZART), and 7) Geophysical Fluid Dynamics Laboratory (GFDL) AM3 model. I will discuss the major modeling components which impact O3 mixing ratio calculations in each model and the similarities/differences between these models. This presentation is vital to the 2nd Annual Tropospheric Ozone Lidar Network (TOLNet) Conference as it will provide an overview of tools, which can be used in conjunction with TOLNet data, to evaluate the complex chemistry and transport pathways controlling tropospheric O3 mixing ratios.
A theoretical study of mixing downstream of transverse injection into a supersonic boundary layer
NASA Technical Reports Server (NTRS)
Baker, A. J.; Zelazny, S. W.
1972-01-01
A theoretical and analytical study was made of mixing downstream of transverse hydrogen injection, from single and multiple orifices, into a Mach 4 air boundary layer over a flat plate. Numerical solutions to the governing three-dimensional, elliptic boundary layer equations were obtained using a general-purpose computer program founded upon a finite element solution algorithm. A prototype three-dimensional turbulent transport model was developed using mixing length theory in the wall region and the mass defect concept in the outer region. Excellent agreement between the computed flow field and experimental data for a jet/freestream dynamic pressure ratio of unity was obtained in the centerplane region of the single-jet configuration. Poorer agreement off centerplane suggests an inadequacy of the extrapolated two-dimensional turbulence model. Considerable improvement in off-centerplane computational agreement occurred for a multi-jet configuration, using the same turbulent transport model.
Kondo, Yumi; Zhao, Yinshan; Petkau, John
2017-05-30
Identification of treatment responders is a challenge in comparative studies where treatment efficacy is measured by multiple longitudinally collected continuous and count outcomes. Existing procedures often identify responders on the basis of only a single outcome. We propose a novel multiple longitudinal outcome mixture model that assumes that, conditionally on a cluster label, each longitudinal outcome is from a generalized linear mixed effect model. We utilize a Monte Carlo expectation-maximization algorithm to obtain the maximum likelihood estimates of our high-dimensional model and classify patients according to their estimated posterior probability of being a responder. We demonstrate the flexibility of our novel procedure on two multiple sclerosis clinical trial datasets with distinct data structures. Our simulation study shows that incorporating multiple outcomes improves the responder identification performance; this can occur even if some of the outcomes are ineffective. Our general procedure facilitates the identification of responders who are comprehensively defined by multiple outcomes from various distributions. Copyright © 2017 John Wiley & Sons, Ltd.
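The classify-by-posterior-probability step can be illustrated with a simplified stand-in: a two-component Gaussian mixture on per-patient outcome summaries in place of the paper's longitudinal generalized linear mixed-effect mixture (the data, cluster means, and group sizes below are invented):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated per-patient summaries of two outcomes (e.g. the slopes of two
# longitudinal measures): 30 "responders" and 30 "non-responders".
responders = rng.normal([-1.0, -1.0], 0.4, size=(30, 2))
nonresponders = rng.normal([1.0, 1.0], 0.4, size=(30, 2))
X = np.vstack([responders, nonresponders])

# Fit a two-cluster mixture and compute each patient's posterior
# probability of belonging to each cluster.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
posterior = gmm.predict_proba(X)        # shape (60, 2), rows sum to 1

# Classify each patient by the cluster with the larger posterior.
labels = posterior.argmax(axis=1)
```

Using both outcome dimensions jointly is what gives the multivariate mixture its edge over classifying on a single outcome, mirroring the simulation finding in the abstract.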
Cheek, Julianne; Lipschitz, David L; Abrams, Elizabeth M; Vago, David R; Nakamura, Yoshio
2015-06-01
Dynamic reflexivity is central to enabling flexible and emergent qualitatively driven inductive mixed-method and multiple methods research designs. Yet too often, such reflexivity, and how it is used at various points of a study, is absent when we write our research reports. Instead, reports of mixed-method and multiple methods research focus on what was done rather than how it came to be done. This article seeks to redress this absence of emphasis on the reflexive thinking underpinning the way that mixed- and multiple methods, qualitatively driven research approaches are thought about and subsequently used throughout a project. Using Morse's notion of an armchair walkthrough, we excavate and explore the layers of decisions we made about how, and why, to use qualitatively driven mixed-method and multiple methods research in a study of mindfulness training (MT) in schoolchildren. © The Author(s) 2015.
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
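The core of the layered imaging model can be sketched in a few lines: a binary mask routes each pixel to a foreground or background layer, each of which can then be handed to a text- or image-oriented coder. The thresholding below is a naive stand-in for a real MRC segmenter:

```python
import numpy as np

def mrc_decompose(img, threshold=128):
    """Split a grayscale page into MRC-style mask/foreground/background."""
    mask = img < threshold                 # True where "text-like" (dark)
    foreground = np.where(mask, img, 0)    # dark content only
    background = np.where(mask, 255, img)  # smooth content, text removed
    return mask, foreground, background

def mrc_compose(mask, foreground, background):
    """Reassemble the page: the mask selects between the two layers."""
    return np.where(mask, foreground, background)

# Round trip on a toy "page": reconstruction is exact because each layer
# retains the original pixel values in the regions the mask assigns it.
page = np.array([[255, 10, 250], [8, 240, 12]], dtype=np.uint8)
mask, fg, bg = mrc_decompose(page)
restored = mrc_compose(mask, fg, bg)
```

In a real codec the three layers would each be compressed with a different algorithm (e.g. a bilevel coder for the mask, JPEG-style coders for the tone layers), which is where the rate-distortion flexibility comes from.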
Modeling of the competition of stimulated Raman and Brillouin scatter in multiple beam experiments
NASA Astrophysics Data System (ADS)
Cohen, Bruce I.; Baldis, Hector A.; Berger, Richard L.; Estabrook, Kent G.; Williams, Edward A.; Labaune, Christine
2001-02-01
Multiple laser beam experiments with plastic target foils at the Laboratoire pour L'Utilisation des Lasers Intenses (LULI) facility [Baldis et al., Phys. Rev. Lett. 77, 2957 (1996)] demonstrated anticorrelation of stimulated Brillouin and Raman backscatter (SBS and SRS). Detailed Thomson scattering diagnostics showed that SBS always precedes SRS, that secondary electron plasma waves sometimes accompanied SRS appropriate to the Langmuir Decay Instability (LDI), and that, with multiple interaction laser beams, the SBS direct backscatter signal in the primary laser beam was reduced while the SRS backscatter signal was enhanced and occurred earlier in time. Analysis and numerical calculations are presented here that evaluate the influences on the competition of SBS and SRS, of local pump depletion in laser hot spots due to SBS, of mode coupling of SBS and LDI ion waves, and of optical mixing of secondary and primary laser beams. These influences can be significant. The calculations take into account simple models of the laser beam hot-spot intensity probability distributions and assess whether ponderomotive and thermal self-focusing are significant. Within the limits of the model, which omits several other potentially important nonlinearities, the calculations suggest the effectiveness of local pump depletion, ion wave mode coupling, and optical mixing in affecting the LULI observations.
STABLE ISOTOPES IN ECOLOGICAL STUDIES: NEW DEVELOPMENTS IN MIXING MODELS
Stable isotopes are increasingly being used as tracers in ecological studies. One application uses isotopic ratios to quantify the proportional contributions of multiple sources to a mixture. Examples include food sources for animals, water sources for plants, pollution sources...
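With n isotope systems one can solve exactly for the proportional contributions of n+1 sources via mass balance; a sketch for two isotope systems and three sources (all δ values below are invented for illustration):

```python
import numpy as np

# Candidate source signatures (columns) for two isotope systems.
source_d13C = [-28.0, -14.0, -20.0]
source_d15N = [3.0, 8.0, 12.0]
mixture = (-21.0, 7.5)            # measured δ13C, δ15N of the mixture

# Rows: δ13C balance, δ15N balance, and the constraint that the
# proportions sum to 1.
A = np.array([source_d13C, source_d15N, [1.0, 1.0, 1.0]])
b = np.array([mixture[0], mixture[1], 1.0])

# Proportional contribution of each source to the mixture.
fractions = np.linalg.solve(A, b)
```

With more sources than this exactly-determined case allows, the system becomes underdetermined and one instead characterizes the feasible solution space, which is the situation the newer mixing-model developments address.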
THE ROLE OF THERMOHALINE MIXING IN INTERMEDIATE- AND LOW-METALLICITY GLOBULAR CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelou, George C.; Stancliffe, Richard J.; Church, Ross P.
It is now widely accepted that globular cluster red giant branch (RGB) stars owe their strange abundance patterns to a combination of pollution from progenitor stars and in situ extra mixing. In this hybrid theory a first generation of stars imprints abundance patterns into the gas from which a second generation forms. The hybrid theory suggests that extra mixing is operating in both populations and we use the variation of [C/Fe] with luminosity to examine how efficient this mixing is. We investigate the observed RGBs of M3, M13, M92, M15, and NGC 5466 as a means to test a theory of thermohaline mixing. The second parameter pair M3 and M13 are of intermediate metallicity and our models are able to account for the evolution of carbon along the RGB in both clusters, although in order to fit the most carbon-depleted main-sequence stars in M13 we require a model whose initial [C/Fe] abundance leads to a carbon abundance lower than is observed. Furthermore, our results suggest that stars in M13 formed with some primary nitrogen (higher C+N+O than stars in M3). In the metal-poor regime only NGC 5466 can be tentatively explained by thermohaline mixing operating in multiple populations. We find thermohaline mixing unable to model the depletion of [C/Fe] with magnitude in M92 and M15. It appears as if extra mixing is occurring before the luminosity function bump in these clusters. To reconcile the data with the models would require first dredge-up to be deeper than found in extant models.
NASA Astrophysics Data System (ADS)
He, Cenlin; Liou, Kuo-Nan; Takano, Yoshi; Yang, Ping; Qi, Ling; Chen, Fei
2018-01-01
We quantify the effects of grain shape and multiple black carbon (BC)-snow internal mixing on snow albedo by explicitly resolving shape and mixing structures. Nonspherical snow grains tend to have higher albedos than spheres with the same effective sizes, while the albedo difference due to shape effects increases with grain size, with up to 0.013 and 0.055 for effective radii of 1,000 μm at visible and near-infrared bands, respectively. BC-snow internal mixing reduces snow albedo at wavelengths < 1.5 μm, with negligible effects at longer wavelengths. Nonspherical snow grains show less BC-induced albedo reduction than spheres with the same effective sizes, by up to 0.06 at ultraviolet and visible bands. Compared with external mixing, internal mixing enhances snow albedo reduction by a factor of 1.2-2.0 at visible wavelengths, depending on BC concentration and snow shape. The opposite effects on albedo reduction due to snow grain nonsphericity and BC-snow internal mixing point to the need to investigate these two factors simultaneously in climate modeling. We further develop parameterizations for snow albedo and its reduction by accounting for grain shape and BC-snow internal/external mixing. Combining the parameterizations with BC-in-snow measurements in China, North America, and the Arctic, we estimate that nonspherical snow grains reduce BC-induced albedo radiative effects by up to 50% compared with spherical grains. Moreover, BC-snow internal mixing enhances the albedo effects by up to 30% (130%) for spherical (nonspherical) grains relative to external mixing. The overall uncertainty induced by snow shape and BC-snow mixing state is about 21-32%.
Prewhitening of Colored Noise Fields for Detection of Threshold Sources
1993-11-07
determines the noise covariance matrix, prewhitening techniques allow detection of threshold sources. The multiple signal classification (MUSIC) …
Subject terms: AR model, colored noise field, mixed spectra model, MUSIC, noise field, prewhitening, SNR, standardized test.
Mixed infections reveal virulence differences between host-specific bee pathogens.
Klinger, Ellen G; Vojvodic, Svjetlana; DeGrandi-Hoffman, Gloria; Welker, Dennis L; James, Rosalind R
2015-07-01
Dynamics of host-pathogen interactions are complex, often influencing the ecology, evolution and behavior of both the host and pathogen. In the natural world, infections with multiple pathogens are common, yet due to their complexity, interactions can be difficult to predict and study. Mathematical models help facilitate our understanding of these evolutionary processes, but empirical data are needed to test model assumptions and predictions. We used two common theoretical models regarding mixed infections (superinfection and co-infection) to determine which model assumptions best described a group of fungal pathogens closely associated with bees. We tested three fungal species, Ascosphaera apis, Ascosphaera aggregata and Ascosphaera larvis, in two bee hosts (Apis mellifera and Megachile rotundata). Bee survival was not significantly different in mixed infections vs. solo infections with the most virulent pathogen for either host, but fungal growth within the host was significantly altered by mixed infections. In the host A. mellifera, only the most virulent pathogen was present in the host post-infection (indicating superinfective properties). In M. rotundata, the most virulent pathogen co-existed with the less virulent one (indicating co-infective properties). We demonstrated that the competitive outcomes of mixed infections were host-specific, indicating strong host specificity among these fungal bee pathogens. Published by Elsevier Inc.
A ternary age-mixing model to explain contaminant occurrence in a deep supply well
Jurgens, Bryant; Bexfield, Laura M.; Eberts, Sandra
2014-01-01
The age distribution of water from a public-supply well in a deep alluvial aquifer was estimated and used to help explain arsenic variability in the water. The age distribution was computed using a ternary mixing model that combines three lumped parameter models of advection-dispersion transport of environmental tracers, which represent relatively recent recharge (post-1950s) containing volatile organic compounds (VOCs), old intermediate depth groundwater (about 6500 years) that was free of drinking-water contaminants, and very old, deep groundwater (more than 21,000 years) containing arsenic above the USEPA maximum contaminant level of 10 µg/L. The ternary mixing model was calibrated to tritium, chlorofluorocarbon-113, and carbon-14 (14C) concentrations that were measured in water samples collected on multiple occasions. Variability in atmospheric 14C over the past 50,000 years was accounted for in the interpretation of 14C as a tracer. Calibrated ternary models indicate the fraction of deep, very old groundwater entering the well varies substantially throughout the year and was highest following long periods of nonoperation or infrequent operation, which occurred during the winter season when water demand was low. The fraction of young water entering the well was about 11% during the summer when pumping peaked to meet water demand and about 3% to 6% during the winter months. This paper demonstrates how collection of multiple tracers can be used in combination with simplified models of fluid flow to estimate the age distribution and thus fraction of contaminated groundwater reaching a supply well under different pumping conditions.
Tsuruta, S; Misztal, I; Strandén, I
2001-05-01
Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. 
The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
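A minimal sketch of the algorithm discussed above: conjugate gradients with a diagonal (Jacobi) preconditioner, with convergence judged on the relative difference between the left- and right-hand sides, as in the abstract. The test matrix is a generic symmetric positive definite stand-in, not an actual set of mixed-model equations.

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients with a diagonal preconditioner."""
    M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    for it in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        # Convergence on relative difference between LHS and RHS:
        if np.linalg.norm(A @ x - b) / np.linalg.norm(b) < tol:
            return x, it + 1
        r_new = r - alpha * Ap
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, max_iter

# Small SPD stand-in for a mixed-model coefficient matrix.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 50))
A = G @ G.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x, iters = pcg_jacobi(A, b)
```

In practice the "iteration on data" variants discussed in the abstract never form A explicitly; only the matrix-vector product `A @ p` is needed, which is what makes the method attractive for national-scale data sets.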
Non-linear mixing effects on mass-47 CO2 clumped isotope thermometry: Patterns and implications.
Defliese, William F; Lohmann, Kyger C
2015-05-15
Mass-47 CO2 clumped isotope thermometry requires relatively large (~20 mg) samples of carbonate minerals due to detection limits and shot noise in gas source isotope ratio mass spectrometry (IRMS). However, it is unreasonable to assume that natural geologic materials are homogeneous on the scale required for sampling. We show that sample heterogeneities can cause offsets from equilibrium Δ47 values that are controlled solely by end member mixing and are independent of equilibrium temperatures. A numerical model was built to simulate and quantify the effects of end member mixing on Δ47. The model was run in multiple possible configurations to produce a dataset of mixing effects. We verified that the model accurately simulated real phenomena by comparing two artificial laboratory mixtures measured using IRMS to model output. Mixing effects were found to be dependent on end member isotopic composition in δ13C and δ18O values, and independent of end member Δ47 values. Both positive and negative offsets from equilibrium Δ47 can occur, and the sign depends on the interaction between end member isotopic compositions. The overall magnitude of mixing offsets is controlled by the amount of variability within a sample; the larger the disparity between end member compositions, the larger the mixing offset. Samples varying by less than 2‰ in both δ13C and δ18O values have mixing offsets below current IRMS detection limits. We recommend the use of isotopic subsampling for δ13C and δ18O values to determine sample heterogeneity, and to evaluate any potential mixing effects in samples suspected of being heterogeneous. Copyright © 2015 John Wiley & Sons, Ltd.
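The nonlinearity described above can be demonstrated with a stripped-down sketch: isotopologue ratios mix (approximately) linearly, but the stochastic reference value of the mixture is a product of its bulk ratios, so a mixture of two stochastic (Δ47 = 0) end members generally has Δ47 ≠ 0. This toy version ignores 17O and all calibration details, and uses an arbitrary reference, so only the qualitative behavior is meaningful.

```python
import numpy as np

def d47_of_mixture(d13C, d18O, fractions):
    """Simplified mixing demo: both end members are assumed stochastic
    (Delta47 = 0) and 17O is ignored; deltas are per-mil offsets from an
    arbitrary reference that cancels in the final ratio."""
    R13 = 1.0 + np.asarray(d13C) / 1000.0
    R18 = 1.0 + np.asarray(d18O) / 1000.0
    f = np.asarray(fractions)
    R47 = 2.0 * R13 * R18                     # stochastic mass-47 abundance
    R47_mix = f @ R47                         # ratios mix ~linearly
    R47_stoch = 2.0 * (f @ R13) * (f @ R18)   # stochastic value of mixture
    return (R47_mix / R47_stoch - 1.0) * 1000.0   # per mil

# End members differing by 20 per mil in both d13C and d18O...
offset_large = d47_of_mixture([-10, 10], [-10, 10], [0.5, 0.5])
# ...versus only 1 per mil: offset drops quadratically, below detection.
offset_small = d47_of_mixture([-0.5, 0.5], [-0.5, 0.5], [0.5, 0.5])
```

The offset scales with the square of the compositional disparity, which is consistent with the abstract's observation that samples varying by less than 2‰ show offsets below IRMS detection limits.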
Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard
2011-01-01
Image data are increasingly encountered and are of growing importance in many areas of science. Many of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain, and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches.
In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.
Determining the impact of cell mixing on signaling during development.
Uriu, Koichiro; Morelli, Luis G
2017-06-01
Cell movement and intercellular signaling occur simultaneously to organize morphogenesis during embryonic development. Cell movement can cause relative positional changes between neighboring cells. When intercellular signals are local, such cell mixing may affect signaling, changing the flow of information in developing tissues. Little is known about the effect of cell mixing on intercellular signaling in collective cellular behaviors, and methods to quantify its impact are lacking. Here we discuss how to determine the impact of cell mixing on cell signaling, drawing an example from vertebrate embryogenesis: the segmentation clock, a collective rhythm of interacting genetic oscillators. We argue that comparing cell mixing and signaling timescales is key to determining the influence of mixing. A signaling timescale can be estimated by combining theoretical models with cell signaling perturbation experiments. A mixing timescale can be obtained by analysis of cell trajectories from live imaging. After comparing cell movement analyses in different experimental settings, we highlight challenges in quantifying cell mixing from embryonic timelapse experiments, especially a reference frame problem due to embryonic motions and shape changes. We propose statistical observables characterizing cell mixing that do not depend on the choice of reference frames. Finally, we consider situations in which both cell mixing and signaling involve multiple timescales, precluding a direct comparison between single characteristic timescales. In such situations, physical models based on observables of cell mixing and signaling can simulate the flow of information in tissues and reveal the impact of observed cell mixing on signaling. © 2017 Japanese Society of Developmental Biologists.
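One simple observable of the kind discussed above is a mixing timescale estimated from cell trajectories: the time for the mean squared displacement to reach one cell diameter squared, i.e. the time for a cell to move past its neighbours. The sketch below applies it to synthetic diffusive tracks; all numbers are illustrative, and real embryonic data would first need the reference-frame corrections the abstract describes.

```python
import numpy as np

def mixing_timescale(tracks, dt, cell_diameter):
    """Time for the cell-averaged MSD to reach one cell diameter squared.

    tracks: array of shape (n_cells, n_steps, 2) of positions over time.
    """
    disp = tracks - tracks[:, :1, :]             # displacement from start
    msd = (disp ** 2).sum(axis=2).mean(axis=0)   # average over cells
    crossing = np.argmax(msd >= cell_diameter ** 2)
    return crossing * dt if msd[crossing] >= cell_diameter ** 2 else np.inf

# Synthetic diffusive tracks: 2D random walks with diffusion coefficient D,
# so MSD(t) = 4*D*t and the crossing time is d^2 / (4*D) = 50 here.
rng = np.random.default_rng(1)
D, dt, n_steps, n_cells = 0.5, 0.1, 2000, 200
steps = rng.normal(0, np.sqrt(2 * D * dt), size=(n_cells, n_steps, 2))
tracks = np.cumsum(steps, axis=1)
t_mix = mixing_timescale(tracks, dt, cell_diameter=10.0)
```

Comparing `t_mix` with a signaling timescale (e.g. a segmentation-clock period) is then a direct implementation of the timescale comparison the authors advocate.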
Mixing of shallow and deep groundwater as indicated by the chemistry and age of karstic springs
Toth, D.J.; Katz, B.G.
2006-01-01
Large karstic springs in east-central Florida, USA were studied using multi-tracer and geochemical modeling techniques to better understand groundwater flow paths and mixing of shallow and deep groundwater. Spring water types included Ca-HCO3 (six), Na-Cl (four), and mixed (one). The evolution of water chemistry for Ca-HCO3 spring waters was modeled by reactions of rainwater with soil organic matter, calcite, and dolomite under oxic conditions. The Na-Cl and mixed-type springs were modeled by reactions of either rainwater or Upper Floridan aquifer water with soil organic matter, calcite, and dolomite under oxic conditions and mixed with varying proportions of saline Lower Floridan aquifer water, which represented 4-53% of the total spring discharge. Multiple-tracer data (chlorofluorocarbon CFC-113, tritium (3H), helium-3 (3Hetrit), and sulfur hexafluoride (SF6)) for four Ca-HCO3 spring waters were consistent with binary mixing curves representing water recharged during 1980 or 1990 mixing with an older (recharged before 1940) tracer-free component. Young-water mixing fractions ranged from 0.3 to 0.7. Tracer concentration data for two Na-Cl spring waters appear to be consistent with binary mixtures of 1990 water with older water recharged in 1965 or 1975. Nitrate-N concentrations are inversely related to apparent ages of spring waters, which indicated that elevated nitrate-N concentrations were likely contributed from recent recharge. © Springer-Verlag 2006.
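For a binary mixture with a tracer-free old component, each tracer concentration is proportional to the young fraction, so several tracers can be combined in a one-parameter least-squares fit. The end-member concentrations below are hypothetical illustrative values, not the study's calibrated 1990 signatures.

```python
import numpy as np

def young_fraction(observed, young_endmember):
    """Binary mixing with a tracer-free old component: each tracer obeys
    c_obs = f * c_young, so f is the least-squares slope through the origin
    across all tracers."""
    c = np.asarray(young_endmember, dtype=float)
    y = np.asarray(observed, dtype=float)
    return (c @ y) / (c @ c)

# Hypothetical tracer concentrations for recently recharged water
# (CFC-113, 3H, 3Hetrit, SF6, in arbitrary but consistent units):
young = np.array([0.6, 15.0, 10.0, 2.5])
sample = 0.45 * young            # a spring carrying 45% young water
f = young_fraction(sample, young)
```

With real data the tracers rarely agree perfectly, and the scatter of per-tracer fractions around the fitted `f` is itself a useful consistency check on the binary-mixing assumption.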
STABLE ISOTOPES IN ECOLOGICAL STUDIES: NEW DEVELOPMENTS IN MIXING MODELS (URUGUAY)
Stable isotopes are increasingly being used as tracers in ecological studies. One application uses isotopic ratios to quantify the proportional contributions of multiple sources to a mixture. Examples include pollution sources for air or water bodies, food sources for animals, ...
STABLE ISOTOPES IN ECOLOGICAL STUDIES: NEW DEVELOPMENTS IN MIXING MODELS (BRAZIL)
Stable isotopes are increasingly being used as tracers in ecological studies. One application uses isotopic ratios to quantify the proportional contributions of multiple sources to a mixture. Examples include pollution sources for air or water bodies, food sources for animals, ...
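With k isotopes and k+1 sources, the standard linear mixing model referred to in these abstracts determines the source proportions exactly from k mass-balance equations plus a sum-to-one constraint. The source signatures below are invented for illustration.

```python
import numpy as np

def source_proportions(mix, sources):
    """Linear isotope mixing model: k isotope mass balances plus sum-to-one
    give an exactly determined system for k+1 source proportions.

    sources: (k+1, k) array of source isotopic signatures.
    """
    S = np.asarray(sources, dtype=float)
    A = np.vstack([S.T, np.ones(S.shape[0])])   # balances + sum(f) = 1
    b = np.append(mix, 1.0)
    return np.linalg.solve(A, b)

# Hypothetical d13C / d15N signatures of three food sources:
sources = np.array([[-28.0,  4.0],    # terrestrial plants
                    [-20.0, 10.0],    # freshwater algae
                    [-12.0,  6.0]])   # marine input
true_p = np.array([0.5, 0.3, 0.2])
mix = true_p @ sources                # consumer tissue signature
p = source_proportions(mix, sources)
```

When there are more sources than isotopes the system is underdetermined, which is exactly the situation the "new developments" in these abstracts (distributions of feasible solutions rather than a unique answer) are designed to handle.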
2007-09-30
if the traditional models adequately parameterize and characterize the actual mixing. As an example of the application of this method, we have... (2) Deterministic Modelling Results. As noted above, we are working on a stochastic method of modelling transient and short-lived tracers... heterogeneity. RELATED PROJECTS: We have worked in collaboration with Peter Jumars (Univ. Maine), and his PhD student Kelley Dorgan, who are measuring
Lu, Tao
2017-01-01
The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for heteroscedasticity commonly observed in between- and within-subject variations. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limit of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data from many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.
Experimental comparison of conventional and nonlinear model-based control of a mixing tank
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haeggblom, K.E.
1993-11-01
In this case study concerning control of a laboratory-scale mixing tank, conventional multiloop single-input single-output (SISO) control is compared with "model-based" control, where the nonlinearity and multivariable characteristics of the process are explicitly taken into account. It is shown, especially if the operating range of the process is large, that the two outputs (level and temperature) cannot be adequately controlled by multiloop SISO control even if gain scheduling is used. By nonlinear multiple-input multiple-output (MIMO) control, on the other hand, very good control performance is obtained. The basic approach to nonlinear control used in this study is first to transform the process into a globally linear and decoupled system, and then to design controllers for this system. Because of the properties of the resulting MIMO system, the controller design is very easy. Two nonlinear control system designs based on a steady-state and a dynamic model, respectively, are considered. In the dynamic case, both setpoint tracking and disturbance rejection can be addressed separately.
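The "transform into a globally linear and decoupled system, then design linear controllers" approach described above can be sketched for an assumed mixing-tank model (level plus temperature, hot and cold inflows). The model, gains, and setpoints below are illustrative, not the study's rig; flow non-negativity is not enforced.

```python
import numpy as np

# Assumed tank dynamics:  A*dh/dt = F1 + F2 - c*sqrt(h)
#                         A*h*dT/dt = F1*(T_hot - T) + F2*(T_cold - T)
A_tank, c_out = 1.0, 1.0           # tank area, outflow coefficient
T_hot, T_cold = 60.0, 20.0         # inflow temperatures
k_h, k_T = 1.0, 1.0                # gains of the linearized loops

def control(h, T, h_sp, T_sp):
    """Input transformation: solve for the two inflows that make dh/dt and
    dT/dt equal the desired linear, decoupled error dynamics."""
    v_h = k_h * (h_sp - h)         # desired dh/dt
    v_T = k_T * (T_sp - T)         # desired dT/dt
    M = np.array([[1.0, 1.0],
                  [T_hot - T, T_cold - T]])
    rhs = np.array([A_tank * v_h + c_out * np.sqrt(h),
                    A_tank * h * v_T])
    return np.linalg.solve(M, rhs)  # (F_hot, F_cold)

# Closed-loop simulation with forward Euler:
h, T, dt = 1.0, 30.0, 0.01
for _ in range(2000):
    F1, F2 = control(h, T, h_sp=1.5, T_sp=40.0)
    dh = dt * (F1 + F2 - c_out * np.sqrt(h)) / A_tank
    dT = dt * (F1 * (T_hot - T) + F2 * (T_cold - T)) / (A_tank * h)
    h += dh
    T += dT
```

Because the input transformation cancels the nonlinearity exactly (under the assumed model), both loops behave as independent first-order systems, which is why the subsequent controller design is "very easy" in the abstract's words.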
Sharifi, N; Ozgoli, S; Ramezani, A
2017-06-01
Mixed immunotherapy and chemotherapy of tumours is one of the most efficient ways to improve cancer treatment strategies. However, it is important to 'design' an effective treatment programme which can optimize the ways of combining immunotherapy and chemotherapy to diminish their imminent side effects, and control engineering techniques can be used for this. The method of multiple model predictive control (MMPC) is applied to the modified Stepanova model to induce the best combination of drug scheduling under a better health criteria profile. The proposed MMPC is a feedback scheme that can perform global optimization for both tumour volume and immune competent cell density by enforcing multiple constraints. Although current studies usually assume that immunotherapy has no side effect, this paper presents a new method of mixed drug administration by employing MMPC, which implements several constraints for chemotherapy and immunotherapy by considering both drug toxicity and autoimmunity. With the designed controller, at most 57% and 28% of the full drug dosage is needed for chemotherapy and immunotherapy, respectively, in some instances. Therefore, with the proposed controller lower drug dosages are needed, which contributes to suitable results with a perceptible reduction in medicine side effects. It is observed that in the presence of MMPC, the amount of required drugs is minimized, while the tumour volume is reduced. The efficiency of the presented method has been illustrated through simulations, as the system transfers from an initial condition in the malignant region of the state space (macroscopic tumour volume) into the benign region (microscopic tumour volume), in which the immune system can control tumour growth. Copyright © 2017 Elsevier B.V. All rights reserved.
Stochastic Erosion of Fractal Structure in Nonlinear Dynamical Systems
NASA Astrophysics Data System (ADS)
Agarwal, S.; Wettlaufer, J. S.
2014-12-01
We analyze the effects of stochastic noise on the Lorenz-63 model in the chaotic regime to demonstrate a set of general issues arising in the interpretation of data from nonlinear dynamical systems typical in geophysics. The model is forced using both additive and multiplicative, white and colored noise, and it is shown that, through a suitable choice of the noise intensity, both additive and multiplicative noise can produce similar dynamics. We use a recently developed measure, histogram distance, to show the similarity between the dynamics produced by additive and multiplicative forcing. This phenomenon, in a nonlinear fractal structure with chaotic dynamics, can be explained by understanding how noise affects the Unstable Periodic Orbits (UPOs) of the system. For delta-correlated noise, the UPOs erode the fractal structure. In the presence of memory in the noise forcing, the time scale of the noise starts to interact with the period of some UPO and, depending on the noise intensity, stochastic resonance may be observed. This also explains the mixing in dissipative dynamical systems in the presence of white noise; as the fractal structure is smoothed, the decay of correlations is enhanced, and hence the rate of mixing increases with noise intensity.
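A sketch of the kind of experiment described: Euler-Maruyama integration of Lorenz-63 under additive or multiplicative white noise, with a half-L1 histogram distance between the resulting x-distributions. Parameter and noise-intensity values are illustrative, not those of the study, and only the white-noise case is shown.

```python
import numpy as np

def lorenz_em(noise, eps, n_steps=50_000, dt=0.002, seed=0):
    """Euler-Maruyama integration of Lorenz-63 with additive or
    multiplicative white noise of intensity eps; returns the x-component."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty(n_steps)
    for i in range(n_steps):
        drift = np.array([sigma * (x[1] - x[0]),
                          x[0] * (rho - x[2]) - x[1],
                          x[0] * x[1] - beta * x[2]])
        dW = rng.normal(0.0, np.sqrt(dt), 3)
        g = eps if noise == "additive" else eps * x   # diffusion term
        x = x + drift * dt + g * dW
        out[i] = x[0]
    return out

def histogram_distance(a, b, bins=50):
    """Half the L1 distance between normalized histograms on a shared
    range: 0 for identical distributions, 1 for disjoint ones."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    w = (hi - lo) / bins
    return 0.5 * np.sum(np.abs(pa - pb)) * w

xa = lorenz_em("additive", eps=2.0)
xm = lorenz_em("multiplicative", eps=0.1)
d = histogram_distance(xa, xm)
```

Sweeping `eps` for one forcing type while holding the other fixed, and looking for the minimum of `d`, mimics the paper's demonstration that suitably tuned additive and multiplicative noise produce statistically similar dynamics.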
NASA Technical Reports Server (NTRS)
Holdeman, James D.
1991-01-01
Experimental and computational results on the mixing of single, double, and opposed rows of jets with an isothermal or variable temperature mainstream in a confined subsonic crossflow are summarized. The studies were performed to investigate flow and geometric variations typical of the complex 3D flowfield in the dilution zone of combustion chambers in gas turbine engines. The principal observations from the experiments were that the momentum-flux ratio was the most significant flow variable, and that temperature distributions were similar (independent of orifice diameter) when the orifice spacing and the square-root of the momentum-flux ratio were inversely proportional. The experiments and empirical model for the mixing of a single row of jets from round holes were extended to include several variations typical of gas turbine combustors.
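The inverse-proportionality result above is often expressed as a coupling constant C = (S/H)·√J between orifice spacing-to-duct-height ratio and momentum-flux ratio J. A sketch, treating C ≈ 2.5 for a single-side row as an assumed illustrative value rather than a figure taken from this summary:

```python
import math

def optimal_spacing_ratio(J, C=2.5):
    """Spacing-to-duct-height ratio S/H giving similar temperature
    distributions for a given jet-to-mainstream momentum-flux ratio J,
    from the coupling (S/H) * sqrt(J) = C.  C ~ 2.5 for a single-side
    row is an assumed illustrative constant."""
    return C / math.sqrt(J)

# Doubling the momentum-flux ratio calls for ~29% closer orifice spacing
# to keep the jet penetration and mixing pattern similar:
s_low = optimal_spacing_ratio(J=26.0)
s_high = optimal_spacing_ratio(J=52.0)
```

The practical use is the abstract's similarity statement in reverse: a designer who changes J (e.g. by changing the pressure drop) can restore the same dilution-zone temperature pattern by rescaling the orifice spacing.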
A mixed methods study of multiple health behaviors among individuals with stroke.
Plow, Matthew; Moore, Shirley M; Sajatovic, Martha; Katzan, Irene
2017-01-01
Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the qualitative data prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews.
We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = -0.55), BMI and limitations in activities of daily living (r = -0.49), physical activity and limitations in activities of daily living (r = 0.41), mobility impairments and BMI (r = -0.41), sleep disturbances and physical activity (r = -0.48), sleep disturbances and BMI (r = 0.48), and physical activity and BMI (r = -0.45). We identified five qualitative themes: (1) Impairments: reduced autonomy, (2) Environmental forces: caregivers and information, (3) Re-evaluation: priorities and attributions, (4) Resiliency: finding motivation and solutions, and (5) Negative affectivity: stress and self-consciousness. Three meta-inferences and a conceptual model described circumstances in which factors could influence single or multiple health behaviors. This is the first mixed methods study of individuals with stroke to elaborate on relationships between multiple health behaviors, BMI, and physical function. A conceptual model illustrates addressing sleep disturbances, activity limitations, self-image, and emotions to promote multiple health behaviors. We discuss the relevance of the meta-inferences in designing multiple behavior change interventions for individuals with stroke.
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models used to analyze such complex longitudinal data are based on mean-regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
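The robustness argument above rests on the check (pinball) loss, whose minimizer is a conditional quantile rather than a mean. A sketch with invented data and a simple grid search standing in for the paper's Bayesian machinery:

```python
import numpy as np

def pinball_loss(residuals, tau):
    """Check (pinball) loss of quantile regression: asymmetric absolute
    loss whose population minimizer is the tau-th conditional quantile."""
    r = np.asarray(residuals)
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

# Median regression (tau = 0.5) by grid search on a single slope, on data
# with heavy-tailed (Cauchy-like) errors where mean-regression struggles:
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 500)
y = 2.0 * x + rng.standard_t(df=1, size=500)   # true slope 2, t(1) noise
slopes = np.linspace(0, 4, 401)
q_slope = slopes[np.argmin([pinball_loss(y - s * x, 0.5) for s in slopes])]
ls_slope = (x @ y) / (x @ x)                   # least-squares comparison
```

Under t(1) noise the least-squares slope has no finite mean and can land far from 2, while the median-regression slope remains stable; repeating the fit at several values of tau gives the quantile-specific CD4 effects the abstract describes.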
Studies of the effects of curvature on dilution jet mixing
NASA Technical Reports Server (NTRS)
Holdeman, James D.; Srinivasan, Ram; Reynolds, Robert S.; White, Craig D.
1992-01-01
An analytical program was conducted using both three-dimensional numerical and empirical models to investigate the effects of transition liner curvature on the mixing of jets injected into a confined crossflow. The numerical code is of the TEACH type with hybrid numerics; it uses the power-law and SIMPLER algorithms, an orthogonal curvilinear coordinate system, and an algebraic Reynolds stress turbulence model. From the results of the numerical calculations, an existing empirical model for the temperature field downstream of single and multiple rows of jets injected into a straight rectangular duct was extended to model the effects of curvature. Temperature distributions, calculated with both the numerical and empirical models, are presented to show the effects of radius of curvature and inner and outer wall injection for single and opposed rows of cool dilution jets injected into a hot mainstream flow.
Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI
Farooq, Hamza; Xu, Junqian; Nam, Jung Who; Keefe, Daniel F.; Yacoub, Essa; Georgiou, Tryphon; Lenglet, Christophe
2016-01-01
Diffusion MRI (dMRI) reveals microstructural features of the brain white matter by quantifying the anisotropic diffusion of water molecules within axonal bundles. Yet, identifying features such as axonal orientation dispersion, density, diameter, etc., in complex white matter fiber configurations (e.g. crossings) has proved challenging. Besides optimized data acquisition and advanced biophysical models, computational procedures to fit such models to the data are critical. However, these procedures have been largely overlooked by the dMRI microstructure community and new, more versatile, approaches are needed to solve complex biophysical model fitting problems. Existing methods are limited to models assuming single fiber orientation, relevant to limited brain areas like the corpus callosum, or multiple orientations but without the ability to extract detailed microstructural features. Here, we introduce a new and versatile optimization technique (MIX), which enables microstructure imaging of crossing white matter fibers. We provide a MATLAB implementation of MIX, and demonstrate its applicability to general microstructure models in fiber crossings using synthetic as well as ex-vivo and in-vivo brain data. PMID:27982056
Experiments in dilution jet mixing effects of multiple rows and non-circular orifices
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Srinivasan, R.; Coleman, E. B.; Meyers, G. D.; White, C. D.
1985-01-01
Experimental and empirical model results are presented that extend previous studies of the mixing of single-sided and opposed rows of jets in a confined duct flow to include effects of non-circular orifices and double rows of jets. Analysis of the mean temperature data obtained in this investigation showed that the effects of orifice shape and double rows are significant only in the region close to the injection plane, provided that the orifices are symmetric with respect to the main flow direction. The penetration and mixing of jets from 45-degree slanted slots is slightly less than that from equivalent-area symmetric orifices. The penetration from 2-dimensional slots is similar to that from equivalent-area closely-spaced rows of holes, but the mixing is slower for the 2-D slots. Calculated mean temperature profiles downstream of jets from non-circular and double rows of orifices, made using an extension developed for a previous empirical model, are shown to be in good agreement with the measured distributions.
Robust Lee local statistic filter for removal of mixed multiplicative and impulse noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Astola, Jaakko T.
2004-05-01
A robust version of the Lee local statistic filter able to effectively suppress mixed multiplicative and impulse noise in images is proposed. The performance of the proposed modification is studied for a set of test images, several values of multiplicative noise variance, Gaussian and Rayleigh probability density functions of speckle, and different characteristics of impulse noise. The advantages of the designed filter in comparison to the conventional Lee local statistic filter and some other filters able to cope with mixed multiplicative + impulse noise are demonstrated.
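A sketch of a robust Lee-type filter in the spirit of the abstract (an illustration of the idea, not the authors' algorithm): the local mean is replaced by the local median, and pixels flagged as impulses are forced to the fully smoothed (median) value.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def robust_lee(img, sigma_n=0.1, win=5, k_imp=3.0):
    """Lee local-statistics filter for multiplicative noise, made robust
    to impulses: the local median is the reference value, and pixels whose
    deviation from it exceeds k_imp noise standard deviations get W = 0."""
    med = median_filter(img, size=win)
    mean = uniform_filter(img, size=win)
    sq_mean = uniform_filter(img * img, size=win)
    var = np.maximum(sq_mean - mean ** 2, 0.0)       # local variance
    noise_var = (sigma_n * med) ** 2                 # multiplicative model
    signal_var = np.maximum(var - noise_var, 0.0)
    W = signal_var / np.maximum(signal_var + noise_var, 1e-12)
    impulse = np.abs(img - med) > k_imp * sigma_n * med
    W = np.where(impulse, 0.0, W)                    # impulses -> median
    return med + W * (img - med)

# Constant patch corrupted by speckle plus 5% salt impulses:
rng = np.random.default_rng(3)
clean = np.full((64, 64), 100.0)
noisy = clean * rng.normal(1.0, 0.1, clean.shape)    # multiplicative noise
salt = rng.random(clean.shape) < 0.05
noisy[salt] = 255.0                                  # impulse noise
out = robust_lee(noisy)
```

On edges and textures the adaptive gain W rises toward 1 and detail is preserved, which is the behavior that distinguishes Lee-type filters from plain median filtering.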
Parker, Stephen; Dark, Frances; Newman, Ellie; Korman, Nicole; Meurk, Carla; Siskind, Dan; Harris, Meredith
2016-06-02
A novel staffing model integrating peer support workers and clinical staff within a unified team is being trialled at community-based residential rehabilitation units in Australia. This mixed-methods protocol describes the longitudinal evaluation of the outcomes, expectations and experiences of care by consumers and staff at two units operating this staffing model, compared with one unit operating a traditional clinical staffing model. The study is unique with regard to the context, the longitudinal approach and the consideration of multiple stakeholder perspectives. The longitudinal mixed methods design integrates a quantitative evaluation of the outcomes of care for consumers at three residential rehabilitation units with an applied qualitative research methodology. The quantitative component utilizes a prospective cohort design to explore whether equivalent outcomes are achieved through engagement at residential rehabilitation units operating integrated and clinical staffing models. Comparative data will be available from the time of admission, discharge and the 12-month period post-discharge from the units. Additionally, retrospective data for the 12-month period prior to admission will be utilized to consider changes in functioning pre and post engagement with residential rehabilitation care. The primary outcome will be change in psychosocial functioning, assessed using the total score on the Health of the Nation Outcome Scales (HoNOS). Planned secondary outcomes will include changes in symptomatology, disability, recovery orientation, carer quality of life, emergency department presentations, psychiatric inpatient bed days, and psychological distress and wellbeing. Planned analyses will include: cohort description; hierarchical linear regression modelling of the predictors of change in HoNOS following CCU care; and descriptive comparisons of the costs associated with the two staffing models.
The qualitative component utilizes a pragmatic approach to grounded theory, with collection of data from consumers and staff at multiple time points exploring their expectations, experiences and reflections on the care provided by these services. It is expected that the new knowledge gained through this study will guide the adaptation of these and similar services. For example, if differential outcomes are achieved for consumers under the integrated and clinical staffing models this may inform staffing guidelines.
Impact of wave mixing on the sea ice cover
NASA Astrophysics Data System (ADS)
Rynders, Stefanie; Aksenov, Yevgeny; Madec, Gurvan; Nurser, George; Feltham, Daniel
2017-04-01
As information on surface waves in ice-covered regions becomes available in ice-ocean models, there is an opportunity to model wave-related processes more accurately. Breaking waves cause mixing of the upper water column, and existing mixing schemes in ocean models take this into account through surface roughness. A commonly used approach is to calculate surface roughness from significant wave height, parameterised from wind speed. We present results from simulations using modelled significant wave height instead, which accounts for the presence of sea ice and the effect of swell. The simulations use the NEMO ocean model coupled to the CICE sea ice model, with wave information from the ECWAM model of the European Centre for Medium-Range Weather Forecasts (ECMWF). The new waves-in-ice module allows waves to propagate in sea ice and attenuates them according to multiple scattering and non-elastic losses. It is found that in the simulations with wave mixing, the mixed layer depth (MLD) under ice cover is reduced, since the parameterisation from wind speed overestimates wave height in ice-covered regions. The MLD change, in turn, affects sea ice concentration and ice thickness. In the Arctic, reduced MLD in winter translates into increased ice thickness overall, with larger increases in the Western Arctic and decreases along the Siberian coast. In summer, shallowing of the mixed layer results in more heat accumulating in the surface ocean, increasing ice melting. In the Southern Ocean, the meridional gradient in ice thickness and concentration is increased. We argue that coupling waves with sea ice-ocean models can reduce negative biases in sea ice cover, affecting the distribution of nutrients and, thus, biological productivity and ecosystems. This coupling will become more important in the future, when wave heights in a large part of the Arctic are expected to increase due to sea ice retreat and a larger wave fetch.
Therefore, wave mixing constitutes a possible positive feedback mechanism.
NASA Astrophysics Data System (ADS)
Tang, Jiafu; Liu, Yang; Fung, Richard; Luo, Xinggang
2008-12-01
Manufacturers have a legal responsibility to deal with industrial waste generated from their production processes in order to avoid pollution. Along with advances in waste recovery techniques, manufacturers may adopt various recycling strategies in dealing with industrial waste. With reuse strategies and technologies, byproducts or wastes will be returned to production processes in the iron and steel industry, and some waste can be recycled back to base material for reuse in other industries. This article focuses on a recovery-strategy optimization problem for a typical class of industrial waste recycling process in order to maximize profit. There are multiple strategies for waste recycling available to generate multiple byproducts; these byproducts are then further transformed into several types of chemical products via different production patterns. A mixed integer programming model is developed to determine which recycling strategy and which production pattern should be selected, and in what quantities the corresponding chemical products should be produced, in order to yield maximum marginal profits. The sales profits of chemical products, the set-up costs of these strategies and patterns, and the operation costs of production are considered. A simulated annealing (SA) based heuristic algorithm is developed to solve the problem. Finally, an experiment is designed to verify the effectiveness and feasibility of the proposed method. By comparing a single strategy to multiple strategies in an example, it is shown that the total sales profit of chemical products can be increased by around 25% through the simultaneous use of multiple strategies. This illustrates the superiority of combinatorial multiple strategies. Furthermore, the effects of the model parameters on profit are discussed to help manufacturers organize their waste recycling network.
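The SA-based heuristic over strategy selections can be illustrated with a minimal 0/1 search. This is a sketch under stated assumptions, not the paper's model: the set-up costs, product profits, and capacity constraint below are hypothetical stand-ins, and a simple penalty replaces the full mixed-integer feasibility constraints.

```python
import math
import random

random.seed(0)

# Hypothetical data: 4 recycling strategies, each with a set-up cost and
# the sales profit of the chemical products it enables.
setup_cost = [120.0, 80.0, 150.0, 60.0]
product_profit = [300.0, 180.0, 260.0, 90.0]
capacity = 3  # at most 3 strategies can run simultaneously (assumed constraint)

def total_profit(x):
    """Net profit of a 0/1 strategy-selection vector x, with a large
    penalty standing in for the MIP feasibility constraints."""
    over = max(0, sum(x) - capacity)
    net = sum(xi * (p - c) for xi, p, c in zip(x, product_profit, setup_cost))
    return net - 1e6 * over

def anneal(n_iter=5000, t0=100.0, alpha=0.999):
    """Simulated annealing over strategy subsets with geometric cooling."""
    x = [random.randint(0, 1) for _ in setup_cost]
    best, best_val = x[:], total_profit(x)
    t = t0
    for _ in range(n_iter):
        y = x[:]
        y[random.randrange(len(y))] ^= 1          # flip one strategy in/out
        delta = total_profit(y) - total_profit(x)
        if delta >= 0 or random.random() < math.exp(delta / t):
            x = y                                  # accept the move
        if total_profit(x) > best_val:
            best, best_val = x[:], total_profit(x)
        t *= alpha                                 # cool down
    return best, best_val
```

With these numbers the per-strategy net profits are 180, 100, 110, and 30, so the best feasible subset keeps the first three strategies, echoing the abstract's finding that combining strategies beats any single one.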
Mixing-controlled reactive transport on travel times in heterogeneous media
NASA Astrophysics Data System (ADS)
Luo, J.; Cirpka, O.
2008-05-01
Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters, including mixing-related quantities such as dispersivities and kinetic mass-transfer coefficients. In most applications, breakthrough curves of conservative and reactive compounds are measured at only a few locations, and models are calibrated by matching these breakthrough curves, which is an ill-posed inverse problem. By contrast, travel-time based transport models avoid costly aquifer characterization. By considering breakthrough curves measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the travel-time based framework, the breakthrough curve of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct travel-time value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of travel times, which also determines the weights associated with each streamtube. Key issues in using the travel-time based framework include the description of mixing mechanisms and the estimation of the travel-time distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach for determining the travel-time distribution, given a breakthrough curve integrated over an observation plane and estimated mixing parameters.
The latter approach is superior to fitting parametric models in cases where the true travel-time distribution exhibits multiple peaks or long tails. It is demonstrated that there is freedom for the combinations of mixing parameters and travel-time distributions to fit conservative breakthrough curves and describe the tailing. Reactive transport cases with a bimolecular instantaneous irreversible reaction and a dual Michaelis-Menten problem demonstrate that the mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated by local breakthrough curves.
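The streamtube picture can be sketched numerically: the flux-averaged breakthrough curve is the travel-time-weighted average of advective-dispersive single-streamtube responses (inverse-Gaussian pulses). The lognormal travel-time distribution and the Peclet number below are hypothetical illustration values, not fitted to any data.

```python
import numpy as np

def trapz(y, x):
    """Trapezoid-rule integral along the last axis."""
    dx = np.diff(x)
    return np.sum(dx * (y[..., 1:] + y[..., :-1]) / 2, axis=-1)

def streamtube_btc(t, tau, Pe):
    """Unit-pulse response of one advective-dispersive streamtube with mean
    travel time tau and Peclet number Pe (an inverse-Gaussian density)."""
    return np.sqrt(tau * Pe / (4 * np.pi * t**3)) * \
           np.exp(-Pe * (t - tau)**2 / (4 * t * tau))

# Hypothetical lognormal travel-time distribution (spreading between tubes).
taus = np.linspace(0.05, 20.0, 400)
mu, sigma = np.log(2.0), 0.5
p_tau = np.exp(-(np.log(taus) - mu)**2 / (2 * sigma**2)) \
        / (taus * sigma * np.sqrt(2 * np.pi))

# Flux-averaged BTC: weighted average over non-interacting streamtubes.
t = np.linspace(0.01, 60.0, 3000)
f = np.array([streamtube_btc(t, tau, 3.0) for tau in taus])  # Pe = 3 (mixing)
btc = trapz((f * p_tau[:, None]).T, taus)
```

Small Pe broadens each streamtube response (mixing), while a wide p_tau broadens the ensemble average (spreading); the two effects are visually similar in a single integrated breakthrough curve, which is exactly the identifiability issue the abstract raises.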
Calibrating binary lumped parameter models
NASA Astrophysics Data System (ADS)
Morgenstern, Uwe; Stewart, Mike
2017-04-01
Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely, because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages at such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent more complex mixing, with mixing of water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and all of their parameters therefore difficult to constrain. Two or more age tracers with different input functions, with multiple measurements over time, can provide the required information to constrain the parameters of the binary mixing model.
We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
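The lumped-parameter machinery described above reduces to convolving the tracer input history with an age distribution. A minimal sketch follows, using the standard exponential-piston-flow form and a binary mixture of a young and an old component; all parameter values are hypothetical, not the New Zealand calibrations.

```python
import numpy as np

def epm_pdf(tau, T, eta):
    """Exponential piston flow age distribution: mean residence time T,
    eta = total / exponential volume ratio (eta = 1: purely exponential)."""
    return np.where(tau >= T * (1.0 - 1.0 / eta),
                    (eta / T) * np.exp(-eta * tau / T + eta - 1.0), 0.0)

def binary_pdf(tau, b, T1, eta1, T2, eta2):
    """Binary mixing model: fraction b of a young component, 1-b of an old one."""
    return b * epm_pdf(tau, T1, eta1) + (1 - b) * epm_pdf(tau, T2, eta2)

def tracer_output(c_in, dt, age_pdf, lam=0.0):
    """Convolve the tracer input history with the age distribution;
    lam is a decay constant (tritium: ln 2 / 12.32 per year)."""
    tau = np.arange(len(c_in)) * dt
    kernel = age_pdf(tau) * np.exp(-lam * tau) * dt
    return np.convolve(c_in, kernel)[: len(c_in)]

# Hypothetical well: 30% young water (5 yr) mixed with 70% old water (80 yr).
dt = 0.5
tau = np.arange(0.0, 400.0, dt)
h = binary_pdf(tau, b=0.3, T1=5.0, eta1=1.2, T2=80.0, eta2=1.1)
```

Fitting such a five-parameter age distribution to a single tracer is underdetermined, which is why the abstract relies on tritium time series plus SF6 with its contrasting input function.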
Comparing Bayesian stable isotope mixing models: Which tools are best for sediments?
NASA Astrophysics Data System (ADS)
Morris, David; Macko, Stephen
2016-04-01
Bayesian stable isotope mixing models have received much attention as a means of coping with multiple sources and uncertainty in isotope ecology (e.g. Phillips et al., 2014), enabling the probabilistic determination of the contributions made by each food source to the total diet of the organism in question. We have applied these techniques to marine sediments for the first time. The sediments of the Chukchi Sea and Beaufort Sea offer an opportunity to utilize these models for organic geochemistry, as there are three likely sources of organic carbon: pelagic phytoplankton, sea ice algae and terrestrial material from rivers and coastal erosion, as well as considerable variation in the marine δ13C values. Bayesian mixing models using bulk δ13C and δ15N data from Shelf Basin Interaction samples allow for the probabilistic determination of the contributions made by each of the sources to the organic carbon budget, and can be compared with existing source contribution estimates based upon biomarker models (e.g. Belicka & Harvey, 2009; Faux, Belicka, & Rodger Harvey, 2011). The δ13C of this preserved material varied from -22.1 to -16.7‰ (mean -19.4±1.3‰), while δ15N varied from 4.1 to 7.6‰ (mean 5.7±1.1‰). Using the SIAR model, we found that water column productivity was the source of between 50 and 70% of the organic carbon buried in this portion of the western Arctic, with the remainder mainly supplied by sea ice algal productivity (25-35%) and terrestrial inputs (15%). With many mixing models now available, this study will compare SIAR with MixSIAR and the new FRUITS model. Monte Carlo modeling of the mixing polygon will be used to validate the models, and hierarchical models will be utilised to glean more information from the data set.
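The deterministic core of these dual-isotope, three-source models is a small linear system: two isotope balances plus mass balance. The end-member signatures and the mixture values below are hypothetical round numbers for illustration, not the Shelf Basin Interaction data.

```python
import numpy as np

# Hypothetical end-member signatures (per mil); columns correspond to
# pelagic phytoplankton, sea ice algae, and terrestrial material.
d13C = np.array([-21.0, -18.0, -27.0])
d15N = np.array([6.0, 9.0, 3.0])

# Two isotope balances plus the mass-balance constraint sum(f) = 1.
A = np.vstack([d13C, d15N, np.ones(3)])
mix = np.array([-20.7, 6.6, 1.0])   # hypothetical sediment mixture (+ mass balance)

f = np.linalg.solve(A, mix)         # source fractions, one per end member
```

Bayesian tools such as SIAR, MixSIAR, and FRUITS wrap priors, fractionation terms, and residual error around this balance instead of solving it exactly, which is what yields posterior ranges like the 50-70% pelagic contribution quoted above rather than point estimates.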
Statistical strategies for averaging EC50 from multiple dose-response experiments.
Jiang, Xiaoqi; Kopp-Schneider, Annette
2015-11-01
In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring averaging of EC50 estimates from a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments: (a) complete and explicit dose-response relationships are observed in all experiments, and (b) they are observed only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We subsequently provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
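The meta-analysis strategy is, at its core, an inverse-variance weighted mean, typically applied on the log scale where EC50 estimates are closer to normal. The per-experiment estimates and standard errors below are hypothetical; this sketch shows the fixed-effect version only.

```python
import math

# Hypothetical log10(EC50) estimates and standard errors from 4 experiments
log_ec50 = [0.42, 0.55, 0.38, 0.60]
se = [0.08, 0.12, 0.05, 0.15]

# Fixed-effect meta-analysis: weight each experiment by 1 / SE^2
w = [1.0 / s**2 for s in se]
pooled = sum(wi * xi for wi, xi in zip(w, log_ec50)) / sum(w)
pooled_se = math.sqrt(1.0 / sum(w))
ec50_avg = 10.0 ** pooled   # back-transform to the concentration scale
```

A random-effects variant would add an estimated between-experiment variance (e.g. via DerSimonian-Laird) to each weight's denominator, which widens the pooled interval when experiments disagree.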
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are often used as natural labels to quantify the contributions of multiple sources to a mixture. For example, C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source li...
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.
2017-12-01
The efficiency of many hydrogeological applications such as reactive transport and contaminant remediation depends largely on the macroscopic mixing occurring in the aquifer. In remediation activities, it is fundamental to enhance and control this mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for the mixing process is not well studied. This is partially because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations for various subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors. As a result, they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need to develop computationally efficient models that accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning. These approaches can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables; however, the method by which ROMs are constructed is different. Here, we present a physics-informed machine-learning framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, support vector machines (SVMs) are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield.
Scaling law parameters dependence on model inputs are evaluated using cluster analysis. We demonstrate application of the developed method for model analyses of reactive-transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable for analyses of alternative site remediation scenarios.
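The screen-then-surrogate workflow can be sketched on synthetic data. Here univariate R² screening stands in for the random-forest / F-test / mutual-information step, and a quadratic least-squares response surface stands in for the SVM ROM; the input names and the response function are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "high-fidelity" runs: degree of mixing as a function of three
# inputs (pumping rate, heterogeneity variance, anisotropy ratio), plus noise.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1]**2 + 0.02 * rng.normal(size=200)

# 1) Screen input importance (univariate R^2 as a stand-in for F-test / RF).
def univariate_r2(x, y):
    r = np.corrcoef(x, y)[0, 1]
    return r * r

importance = [univariate_r2(X[:, j], y) for j in range(X.shape[1])]

# 2) Fit a cheap surrogate (quadratic response surface standing in for SVMs).
def features(X):
    return np.column_stack([np.ones(len(X)), X, X**2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
y_hat = features(X) @ beta
r2 = 1.0 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
```

Once fitted, such a surrogate is evaluated in microseconds, which is what makes sensitivity sweeps and scaling-law construction over many input combinations affordable compared to hours-long high-fidelity runs.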
Steidinger, Brian S.; Bever, James D.
2016-01-01
Plants in multiple symbioses are exploited by symbionts that consume their resources without providing services. Discriminating hosts are thought to stabilize mutualism by preferentially allocating resources into anatomical structures (modules) where services are generated, with examples of modules including the entire inflorescences of figs and the root nodules of legumes. Modules are often colonized by multiple symbiotic partners, such that exploiters that co-occur with mutualists within mixed modules can share rewards generated by their mutualist competitors. We developed a meta-population model to answer how the population dynamics of mutualists and exploiters change when they interact with hosts with different module occupancies (number of colonists per module) and functionally different patterns of allocation into mixed modules. We find that as module occupancy increases, hosts must increase the magnitude of preferentially allocated resources in order to sustain comparable populations of mutualists. Further, we find that mixed colonization can result in the coexistence of mutualist and exploiter partners, but only when preferential allocation follows a saturating function of the number of mutualists in a module. Finally, using published data from the fig–wasp mutualism as an illustrative example, we derive model predictions that approximate the proportion of exploiter, non-pollinating wasps observed in the field. PMID:26740613
NASA Astrophysics Data System (ADS)
Zeiger, S. J.; Hubbart, J. A.
2016-12-01
A nested-scale watershed study design was used to monitor water quantity and quality of an impaired 3rd order stream in a rapidly urbanizing mixed-land-use watershed of the central USA. Grab samples were collected at each gauging site (n=836 samples x 5 gauging sites) and analyzed for suspended sediment, total phosphorus, and inorganic nitrogen species during the four year study period (2010 - 2013). Observed data were used to quantify relationships between climate, land use and pollutant loading. Additionally, Soil and Water Assessment Tool (SWAT) estimates of monthly stream flow, suspended sediment, total phosphorus, nitrate, nitrite, and ammonium were validated. Total annual precipitation ranged from approximately 650 mm during 2012 (extreme drought year) to 1350 mm during 2010 (record setting wet year), which caused significant (p<0.05) differences in annual pollutant yields (i.e. loads per unit area) that ranged from 115 to 174%. Multiple linear regression analyses showed significant (p<0.05) relationships between pollutant loading, annual total precipitation (positive correlate), urban land use (positive correlate), forested land use (negative correlate), and wetland land use (negative correlate). Results from SWAT model performance assessment indicated calibration was necessary to achieve Nash-Sutcliffe Efficiency (NSE) values greater than 0.05 for monthly pollutant loads. Calibrating the SWAT model to multiple gauging sites within the watershed improved estimates of monthly stream flow (NSE=0.83) and pollutant loads (NSE>0.78). However, nitrite and ammonium loads were underestimated by more than four orders of magnitude (NSE<-0.16), indicating a critical need for improved nutrient cycling and routing routines. Results highlight the need for sampling regimens that capture the variability of climate and flow mediated pollutant transport, and the benefits of calibrating the SWAT model to multiple gauging sites in mixed-land-use watersheds.
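The Nash-Sutcliffe Efficiency used to judge the SWAT calibration above is one line of arithmetic. The monthly load values below are hypothetical, purely to exercise the formula.

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 = perfect fit, 0 = no better than
    predicting the observed mean, negative = worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s)**2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs)**2 for o in observed)
    return 1.0 - sse / sst

# Hypothetical monthly loads: observed vs. simulated
obs = [12.0, 30.0, 18.0, 45.0, 22.0]
sim = [10.0, 28.0, 20.0, 40.0, 25.0]
```

Because the denominator is the variance of the observations, strongly negative NSE values (as reported for nitrite and ammonium) mean the model tracks the data worse than a flat line at the observed mean.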
A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.
Li, Bing; Cui, Wei; Wang, Bin
2015-09-16
Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their advantages of convenient application and independence from hardware devices. Unfortunately, RSSI values are susceptible to fluctuations under the influence of non-line-of-sight (NLOS) conditions in indoor space. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in a mixed LOS/NLOS scenario and propose a novel localization algorithm: Gaussian mixture model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixture model (GMM). The dissimilarity matrix is built to generate relative coordinates of nodes by a multidimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. Our algorithm performs localization estimation well without being provided with prior knowledge. Experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
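The MDS step — turning a dissimilarity matrix into relative node coordinates — can be illustrated with classical (metric) MDS; note the abstract's GMDS uses a non-metric variant, and the square node layout below is a hypothetical noise-free sanity check rather than RSSI data.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (metric) MDS: relative coordinates from a matrix of
    pairwise dissimilarities D (standing in for RSSI-derived distances)."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D**2) @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]           # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Hypothetical node layout; D is built from exact distances as a sanity check.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rel = classical_mds(D)   # recovered up to rotation/reflection/translation
```

The recovered coordinates are only defined up to a rigid transform, which is why the final step in the abstract aligns them to the anchor nodes' known positions (e.g. by a Procrustes-style coordinate transformation).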
3D Visualization of Global Ocean Circulation
NASA Astrophysics Data System (ADS)
Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.
2015-12-01
Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.
Developing the DESCARTE Model: The Design of Case Study Research in Health Care.
Carolan, Clare M; Forbat, Liz; Smith, Annetta
2016-04-01
Case study is a long-established research tradition which predates the recent surge in mixed-methods research. Although a myriad of nuanced definitions of case study exist, seminal case study authors agree that the use of multiple data sources typifies this research approach. The expansive case study literature demonstrates a lack of clarity and guidance in designing and reporting this approach to research. Informed by two reviews of the current health care literature, we posit that methodological description in case studies principally focuses on description of case study typology, which impedes the construction of methodologically clear and rigorous case studies. We draw from the case study and mixed-methods literature to develop the DESCARTE model as an innovative approach to the design, conduct, and reporting of case studies in health care. We examine how case study fits within the overall enterprise of qualitatively driven mixed-methods research, and the potential strengths of the model are considered. © The Author(s) 2015.
Dendrimer-magnetic nanostructure: a Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jabar, A.; Masrour, R.
2017-11-01
In this paper, the magnetic properties of the ternary mixed-spin (σ, S, q) Ising model on a dendrimer nanostructure are studied using Monte Carlo simulations. The ground state phase diagrams of the dendrimer nanostructure with ternary mixed spins σ = 1/2, S = 1 and q = 3/2 are found. The variation of the total and partial thermal magnetizations with the different exchange interactions, external magnetic fields and crystal fields has also been studied. The reduced critical temperatures have been deduced. The magnetic hysteresis cycles have been discussed, and the corresponding magnetic coercive field values have been deduced. Multiple hysteresis cycles are found. The dendrimer nanostructure has several applications in medicine.
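A minimal Metropolis sketch of a mixed-spin Ising system conveys the core of such simulations. The one-generation, star-like "dendrimer" topology, the coupling values, and the temperatures below are hypothetical, and crystal-field and external-field terms are omitted; spin magnitudes follow the abstract's σ = 1/2, S = 1, q = 3/2.

```python
import math
import random

random.seed(2)

# Hypothetical one-generation "dendrimer": one central sigma spin, three
# S spins bonded to it, and two q spins bonded to each S spin (10 spins).
SIGMA_VALS = (-0.5, 0.5)
S_VALS = (-1.0, 0.0, 1.0)
Q_VALS = (-1.5, -0.5, 0.5, 1.5)
J1, J2 = 1.0, 0.5   # sigma-S and S-q exchange couplings (assumed values)
N_SPINS = 10

def energy(sigma, S, q):
    """Ferromagnetic exchange energy of the star topology."""
    e = -J1 * sigma * sum(S)
    e -= J2 * sum(S[i] * (q[i][0] + q[i][1]) for i in range(3))
    return e

def metropolis(T, steps=20000):
    """Single-spin-update Metropolis; returns mean magnetization per spin."""
    sigma = random.choice(SIGMA_VALS)
    S = [random.choice(S_VALS) for _ in range(3)]
    q = [[random.choice(Q_VALS) for _ in range(2)] for _ in range(3)]
    m_sum = 0.0
    for _ in range(steps):
        site = random.randrange(N_SPINS)
        e_old = energy(sigma, S, q)
        if site == 0:
            saved, sigma = sigma, random.choice(SIGMA_VALS)
        elif site <= 3:
            saved, S[site - 1] = S[site - 1], random.choice(S_VALS)
        else:
            i, j = divmod(site - 4, 2)
            saved, q[i][j] = q[i][j], random.choice(Q_VALS)
        dE = energy(sigma, S, q) - e_old
        if dE > 0 and random.random() >= math.exp(-dE / T):
            # reject the move: restore the previous spin value
            if site == 0:
                sigma = saved
            elif site <= 3:
                S[site - 1] = saved
            else:
                i, j = divmod(site - 4, 2)
                q[i][j] = saved
        m_sum += sigma + sum(S) + sum(q[i][0] + q[i][1] for i in range(3))
    return m_sum / steps / N_SPINS
```

At low temperature the ferromagnetic couplings align all three spin species (saturation magnetization 1.25 per spin here), while at high temperature the magnetization averages toward zero; sweeping an external field in such a loop is what produces the hysteresis cycles discussed above.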
The advent of new higher throughput analytical instrumentation has put a strain on interpreting and explaining the results from complex studies. Contemporary human, environmental, and biomonitoring data sets are comprised of tens or hundreds of analytes, multiple repeat measures...
An Agitation Experiment with Multiple Aspects
ERIC Educational Resources Information Center
Spencer, Jordan L.
2006-01-01
This paper describes a multifaceted agitation and mixing experiment. The relatively inexpensive apparatus includes a variable-speed stirrer motor, two polycarbonate tanks, and an instrumented torque table. Students measure torque as a function of stirrer speed, and use conductive tracer data to estimate two parameters of a flow model. The effect…
MIXING MODELS IN ANALYSES OF DIET USING MULTIPLE STABLE ISOTOPES: A CRITIQUE
Stable isotopes have become widely used in ecology to quantify the importance of different sources based on their isotopic signature. One example of this has been the determination of food webs, where the isotopic signatures of a predator and various prey items can be used to de...
Everybody Leads: A Model for Collaborative Leadership
ERIC Educational Resources Information Center
Maxfield, C. Robert; Klocko, Barbara A.
2010-01-01
This mixed-methods case study analyzes the perceptions of participants in a year-long collaborative leadership initiative conducted at a small school district situated between larger urban districts and multiple suburban districts in a midwestern state. The initiative was facilitated by the Galileo Institute for Teacher Leadership in cooperation…
UNCERTAINTY IN SOURCE PARTITIONING USING STABLE ISOTOPES
Stable isotope analyses are often used to quantify the contribution of multiple sources to a mixture, such as proportions of food sources in an animal's diet, C3 vs. C4 plant inputs to soil organic carbon, etc. Linear mixing models can be used to partition two sources with a sin...
Factors associated with parasite dominance in fishes from Brazil.
Amarante, Cristina Fernandes do; Tassinari, Wagner de Souza; Luque, Jose Luis; Pereira, Maria Julia Salim
2016-06-14
The present study used regression models to evaluate the existence of factors that may influence numerical parasite dominance with an epidemiological approach. A database including 3,746 fish specimens and their respective parasites was used to evaluate the relationship between parasite dominance and biotic characteristics inherent to the studied hosts and the parasite taxa. Multivariate, classical, and mixed-effects linear regression models were fitted. The calculations were performed using R software (95% CI). In the fitting of the classical multiple linear regression model, freshwater and planktivorous fish species and body length, as well as the species of the taxa Trematoda, Monogenea, and Hirudinea, were associated with parasite dominance. However, the fitting of the mixed-effects model showed that the body length of the host and the species of the taxa Nematoda, Trematoda, Monogenea, Hirudinea, and Crustacea were significantly associated with parasite dominance. Studies that consider specific biological aspects of the hosts and parasites should expand the knowledge regarding factors that influence numerical parasite dominance in fishes from Brazil. The use of a mixed model shows, once again, the importance of choosing a model appropriate to the characteristics of the data to obtain consistent results.
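Why the classical and mixed-effects fits disagree comes down to intra-class correlation: observations from the same host are not independent. A minimal sketch estimates the ICC by one-way ANOVA and the resulting design effect on a naive standard error; the cluster counts and variance components are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical clustered data: 20 hosts, 15 parasite measurements per host.
n_host, n_per = 20, 15
between = rng.normal(0.0, 1.0, n_host)                       # sd_between = 1.0
y = between[:, None] + rng.normal(0.0, 0.5, (n_host, n_per))  # sd_within = 0.5

# One-way ANOVA variance components -> intraclass correlation (ICC).
grand = y.mean()
msb = n_per * np.sum((y.mean(axis=1) - grand)**2) / (n_host - 1)
msw = np.sum((y - y.mean(axis=1, keepdims=True))**2) / (n_host * (n_per - 1))
var_between = (msb - msw) / n_per
icc = var_between / (var_between + msw)

# Design effect: how much a naive SE of the mean understates the truth.
deff = 1 + (n_per - 1) * icc
naive_se = y.std(ddof=1) / np.sqrt(y.size)
corrected_se = naive_se * np.sqrt(deff)
```

With a true ICC of 0.8 here, the design effect is roughly 12, so the naive standard error is understated by a factor of about 3.5; this is the mechanism by which a classical regression on clustered data yields overconfident p-values, and what a mixed-effects model corrects.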
NASA Astrophysics Data System (ADS)
Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham
2014-09-01
A fractional 2^5 two-level factorial design of experiment (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. A process model of rubber blend preparation was developed that correlates the relationships between the mixer process input parameters and the output response of blend compatibility. Model analysis of variance (ANOVA) and model fitting through curve evaluation finalized an R2 of 99.60% with a proposed parametric combination of A = 30/70 NR/EPDM blend ratio; B = 70°C mixing temperature; C = 70 rpm rotor speed; D = 5 minutes mixing period and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with an overall desirability of 0.966. Model validation with a small deviation of +2.09% confirmed the repeatability of the mixing strategy, with a valid maximum tensile strength output representing the blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides a brief insight into the utilization of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.
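With five two-level factors (blend ratio, temperature, rotor speed, time, compatibilizer), one common fractional construction is the 2^(5-1) half fraction. The generator E = ABCD below is an assumption for illustration, since the abstract does not state which fraction or generator was used.

```python
from itertools import product

# Half-fraction 2^(5-1) design for factors A-E in coded -1/+1 levels:
# A-D form a full 2^4 factorial and E is set by the generator E = ABCD,
# giving a resolution V design (main effects clear of 2-factor interactions).
runs = [(a, b, c, d, a * b * c * d)
        for a, b, c, d in product([-1, 1], repeat=4)]
```

This yields 16 runs instead of 32, at the cost of aliasing E with the four-way interaction ABCD, which is usually negligible; such run-count savings are the "experimental simplification" the abstract refers to.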
Semmens, Brice X; Ward, Eric J; Moore, Jonathan W; Darimont, Chris T
2009-07-09
Variability in resource use defines the width of a trophic niche occupied by a population. Intra-population variability in resource use may occur across hierarchical levels of population structure from individuals to subpopulations. Understanding how levels of population organization contribute to population niche width is critical to ecology and evolution. Here we describe a hierarchical stable isotope mixing model that can simultaneously estimate both the prey composition of a consumer diet and the diet variability among individuals and across levels of population organization. By explicitly estimating variance components for multiple scales, the model can deconstruct the niche width of a consumer population into relevant levels of population structure. We apply this new approach to stable isotope data from a population of gray wolves from coastal British Columbia, and show support for extensive intra-population niche variability among individuals, social groups, and geographically isolated subpopulations. The analytic method we describe improves mixing models by accounting for diet variability, and improves isotope niche width analysis by quantitatively assessing the contribution of levels of organization to the niche width of a population.
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
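The piecewise linear growth model that fit best can be sketched with a knotted spline basis. The synthetic scores below (knot at year 4, growth parameters hypothetical) are fit by fixed-effects-only least squares; the full mixed-effects model additionally estimates per-student random intercepts and slopes.

```python
import numpy as np

def piecewise_basis(t, knot):
    """Design matrix for a piecewise linear trajectory: intercept,
    slope before the knot, and change in slope after the knot."""
    return np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])

# Synthetic yearly math scores for 50 students (values hypothetical):
# growth of 4 points/year that slows by 2.5 points/year after year 4.
rng = np.random.default_rng(4)
t = np.tile(np.arange(9.0), 50)
y = 10.0 + 4.0 * t - 2.5 * np.maximum(t - 4.0, 0.0) \
    + rng.normal(0.0, 1.0, t.size)

beta, *_ = np.linalg.lstsq(piecewise_basis(t, knot=4.0), y, rcond=None)
```

The third coefficient is the change in slope at the knot; a group difference in that coefficient is exactly the kind of trajectory gap the abstract reports between students with and without learning disabilities.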
Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing
NASA Astrophysics Data System (ADS)
Watanabe, T.; Nagata, K.
2016-08-01
We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on multi-particle interaction in a finite volume (the mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large for predicting a value of the molecular diffusion term positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.
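The core idea — particles within one mixing volume relaxing toward their common mean — can be sketched in a few lines. This is a simplified stand-in under assumed parameters: the grouping into volumes, the mixing coefficient, and the time scale are hypothetical, and the real MVM's volume construction and particle weighting are omitted.

```python
import numpy as np

def mvm_step(phi, groups, c_phi, dt, t_m):
    """One step of a simplified mixing-volume-style model: each particle's
    scalar phi relaxes toward the mean of its own mixing volume."""
    phi = phi.copy()
    for g in groups:
        local_mean = phi[g].mean()
        phi[g] -= c_phi * (phi[g] - local_mean) * dt / t_m
    return phi

rng = np.random.default_rng(5)
phi = rng.normal(0.0, 1.0, 1000)                      # particle scalar values
groups = np.array_split(rng.permutation(1000), 250)   # 4 particles per volume
out = mvm_step(phi, groups, c_phi=2.0, dt=0.05, t_m=1.0)
```

Relaxation toward a local mean conserves the scalar mean exactly while decaying the scalar variance, which mirrors the two properties a molecular-diffusion closure must have; the number of particles per volume is the parameter the abstract reports as influencing accuracy.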
[Modeling of mixed infection by tick-borne encephalitis and Powassan viruses in mice].
Khozinskaia, G A; Pogodina, V V
1982-01-01
Depending on experimental conditions, simultaneous inoculation of mice with tick-borne encephalitis and Powassan viruses resulted either in stimulation of infection or in a course unchanged compared with monoinfection, whereas inoculation with the viruses at 2-3-week intervals resulted in cross protection of mice against the superinfecting virus. Simultaneous inoculation of mice with the two viruses was accompanied by their multiplication in the blood and brains of mice and the formation of antihemagglutinating antibodies to each of them. In the virus population in the brains of mice there was either formation of a mixture of the two viruses or their phenotypic mixing. In cross protection, multiplication of the superinfecting virus in the blood and brain of mice was slightly inhibited, and the antihemagglutinating antibody to the second virus either did not form or appeared in low titres.
Optimization study on multiple train formation scheme of urban rail transit
NASA Astrophysics Data System (ADS)
Xia, Xiaomei; Ding, Yong; Wen, Xin
2018-05-01
The new organization method, represented by the mixed operation of multi-marshalling trains, can adapt to the uneven distribution of passenger flow, but research on this aspect is still not sufficiently developed. This paper introduces the passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, an optimization model is established with minimum passenger cost and operation cost as objectives, and with operation frequency and passenger demand as constraints. The ideal point method is used to solve this model. Compared with the fixed marshalling operation model, the costs of this scheme are reduced by 9.24% and 4.43%, respectively. This result not only validates the model but also illustrates the advantages of the multiple train formation scheme.
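The ideal point method mentioned above can be sketched as follows; the candidate formation schemes and the two cost functions below are invented placeholders, not the paper's model. Each objective is normalised, and the scheme with the smallest distance to the ideal point (the vector of per-objective minima) is selected.

```python
import numpy as np

# Hypothetical candidate schemes: (frequency of short trains, frequency of long trains)
candidates = [(6, 2), (4, 4), (2, 6), (8, 1), (1, 8)]

def passenger_cost(f_short, f_long):
    # placeholder: passenger waiting cost falls with total service frequency
    return 100.0 / (f_short + f_long + 1)

def operation_cost(f_short, f_long):
    # placeholder: longer trains cost more to operate per departure
    return 1.0 * f_short + 2.5 * f_long

costs = np.array([[passenger_cost(a, b), operation_cost(a, b)] for a, b in candidates])
ideal = costs.min(axis=0)                  # ideal point: best value of each objective
span = costs.max(axis=0) - ideal           # normalisation range per objective
dist = np.linalg.norm((costs - ideal) / span, axis=1)
best = candidates[int(dist.argmin())]
print(best)                                # scheme closest to the ideal point
```

Normalising by each objective's range keeps the cheaper-scaled objective from dominating the distance, which is the usual practical concern with ideal point selection.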
ERIC Educational Resources Information Center
Cason, Jennifer
2016-01-01
This action research study is a mixed methods investigation of doctoral students' preparedness for multiple career paths. PhD students face two challenges preparing for multiple career paths: lack of preparation and limited engagement in conversations about the value of their research across multiple audiences. This study focuses on PhD students'…
NASA Astrophysics Data System (ADS)
Abani, Neerav; Reitz, Rolf D.
2010-09-01
An advanced mixing model was applied to study engine emissions and combustion with different injection strategies, ranging from multiple injections and early injection to grouped-hole nozzle injection, in light- and heavy-duty diesel engines. The model was implemented in the KIVA-CHEMKIN engine combustion code and simulations were conducted at different mesh resolutions. The model was compared with the standard KIVA spray model, which uses the Lagrangian-Drop and Eulerian-Fluid (LDEF) approach, and with a Gas Jet spray model that improves predictions of liquid sprays. A Vapor Particle Method (VPM) is introduced that accounts for sub-grid-scale mixing of fuel vapor and more accurately predicts the mixing of fuel vapor over a range of mesh resolutions. The fuel vapor is transported as particles until a certain distance from the nozzle is reached where the local jet half-width is adequately resolved by the local mesh scale. Within this distance the vapor particle is transported while releasing fuel vapor locally, as determined by a weighting factor. The VPM model more accurately predicts fuel-vapor penetrations for early-cycle injections and flame lift-off lengths for late-cycle injections. Engine combustion computations show that, compared to the standard KIVA and Gas Jet spray models, the VPM spray model improves predictions of in-cylinder pressure, heat release rate and engine emissions of NOx, CO and soot with coarse mesh resolutions. The VPM spray model is thus a good tool for efficiently investigating diesel engine combustion with practical mesh resolutions, thereby saving computer time.
Internal Mixing Studied for GE/ARL Ejector Nozzle
NASA Technical Reports Server (NTRS)
Zaman, Khairul
2005-01-01
To achieve jet noise reduction goals for the High Speed Civil Transport aircraft, researchers have been investigating the mixer-ejector nozzle concept. For this concept, a primary nozzle with multiple chutes is surrounded by an ejector. The ejector mixes low-momentum ambient air with the hot engine exhaust to reduce the jet velocity and, hence, the jet noise. It is desirable to mix the two streams as fast as possible in order to minimize the length and weight of the ejector. An earlier model of the mixer-ejector nozzle was tested extensively in the Aerodynamic Research Laboratory (ARL) of GE Aircraft Engines at Cincinnati, Ohio. While testing was continuing with later generations of the nozzle, the earlier model was brought to the NASA Lewis Research Center for relatively fundamental measurements. Goals of the Lewis study were to obtain details of the flow field to aid computational fluid dynamics (CFD) efforts and obtain a better understanding of the flow mechanisms, as well as to experiment with mixing enhancement devices, such as tabs. The measurements were made in an open jet facility for cold (unheated) flow without a surrounding coflowing stream.
An empirical study of rape in the context of multiple murder.
DeLisi, Matt
2014-03-01
In recent years, multiple homicide offending has received increased research attention from criminologists; however, there is mixed evidence about the role of rape toward the perpetration of multiple murder. Drawing on criminal career data from a nonprobability sample of 618 confined male homicide offenders selected from eight U.S. states, the current study examines the role of rape as a predictor of multiple homicide offending. Bivariate analyses indicated a significant association between rape and murder charges. Multivariate path regression models indicated that rape had a significant and robust association with multiple murder. This relationship withstood the confounding effects of kidnapping, prior prison confinement, and prior murder, rape, and kidnapping. These results provide evidence that rape potentially serves as a gateway to multiple murder for some serious offenders. Suggestions for future research are proffered.
ERIC Educational Resources Information Center
Maye, Kelly M.
2012-01-01
Cognitive and biophysical factors have been considered contributors linked to identifiable markers of obsessive compulsive and anxiety disorders. Research demonstrates multiple causes and mixed results for the short-term success of educational programs designed to ameliorate problems that children with obsessive compulsive and anxiety disorders…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oppel, Fred J.; Hart, Brian E.; Whitford, Gregg Douglas
2016-08-25
This package contains modules that model sensors in Umbra. There is a mix of modalities for both accumulating and tracking energy sensors: seismic, magnetic, and radiation. Some modules fuse information from multiple sensor types. Sensor devices (e.g., seismic sensors) detect objects such as people and vehicles that have sensor properties attached (e.g., seismic properties).
355 nm and 1064 nm-pulse mixing to identify the laser-induced damage mechanisms in KDP
NASA Astrophysics Data System (ADS)
Reyné, Stéphane; Duchateau, Guillaume; Natoli, Jean-Yves; Lamaignère, Laurent
2011-02-01
Nanosecond laser-induced damage (LID) in potassium dihydrogen phosphate (KH2PO4 or KDP) remains an issue for light-frequency converters in large-aperture lasers such as NIF (National Ignition Facility, USA) and LMJ (Laser MegaJoule, France). In the final optics assembly, converters are simultaneously illuminated by multiple wavelengths during frequency conversion. In this configuration, the damage resistance of the KDP crystals becomes a crucial problem and has to be improved. In this study, we propose a refined investigation of the LID mechanisms involved in the case of a multiple-wavelength combination. Experiments based on an original pump-pump set-up have been carried out in the nanosecond regime on a KDP crystal. In particular, the impact of simultaneous mixing of 355 nm and 1064 nm pulses has been studied experimentally and compared to a model based on heat transfer, Mie theory and a Drude model. This study sheds light on the physical processes involved in KDP laser damage. In particular, a three-photon ionization mechanism is shown to be responsible for laser damage in KDP.
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
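Underlying any mixed-effects hidden Markov model is the standard forward recursion for the hidden-state likelihood. A minimal scaled version for a discrete-emission HMM, without the paper's random effects or Bayesian machinery, might look like:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM.

    pi  : (K,)   initial state probabilities
    A   : (K, K) transition matrix, A[i, j] = P(next state j | state i)
    B   : (K, M) emission matrix,   B[k, m] = P(symbol m | state k)
    obs : sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]        # joint prob of state and first symbol
    c = alpha.sum()                  # scaling constant (avoids underflow)
    loglik = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        c = alpha.sum()
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

# Illustrative 2-state, 2-symbol HMM (all numbers are made up).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward_loglik(pi, A, B, [0, 0, 1]))
```

The mixed-effects extension in the paper essentially lets the transition/emission parameters vary per subject via correlated random effects; the forward pass itself is unchanged.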
Jayachandrababu, Krishna C; Verploegh, Ross J; Leisen, Johannes; Nieuwendaal, Ryan C; Sholl, David S; Nair, Sankar
2016-06-15
Mixed-linker zeolitic imidazolate frameworks (ZIFs) are nanoporous materials that exhibit continuous and controllable tunability of properties like effective pore size, hydrophobicity, and organophilicity. The structure of mixed-linker ZIFs has been studied on macroscopic scales using gravimetric and spectroscopic techniques. However, it has so far not been possible to obtain information on unit-cell-level linker distribution, an understanding of which is key to predicting and controlling their adsorption and diffusion properties. We demonstrate the use of ¹H combined rotation and multiple pulse spectroscopy (CRAMPS) NMR spin exchange measurements in combination with computational modeling to elucidate potential structures of mixed-linker ZIFs, particularly the ZIF-8-90 series. All of the compositions studied have structures that have linkers mixed at a unit-cell-level as opposed to separated or highly clustered phases within the same crystal. Direct experimental observations of linker mixing were accomplished by measuring the proton spin exchange behavior between functional groups on the linkers. The data were then fitted to a kinetic spin exchange model using proton positions from candidate mixed-linker ZIF structures that were generated computationally using the short-range order (SRO) parameter as a measure of the ordering, clustering, or randomization of the linkers. The present method offers the advantages of sensitivity without requiring isotope enrichment, a straightforward NMR pulse sequence, and an analysis framework that allows one to relate spin diffusion behavior to proposed atomic positions. We find that structures close to equimolar composition of the two linkers show a greater tendency for linker clustering than what would be predicted based on random models. Using computational modeling we have also shown how the window-type distribution in experimentally synthesized mixed-linker ZIF-8-90 materials varies as a function of their composition.
The structural information thus obtained can be further used for predicting, screening, or understanding the tunable adsorption and diffusion behavior of mixed-linker ZIFs, for which the knowledge of linker distributions in the framework is expected to be important.
Missing continuous outcomes under covariate dependent missingness in cluster randomised trials
Diaz-Ordaz, Karla; Bartlett, Jonathan W
2016-01-01
Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for small number of clusters in each intervention group. PMID:27177885
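The downward bias of individual-level standard errors under clustering, which motivates the cluster-level and mixed-model analyses compared above, can be demonstrated with a small simulation (all parameters are illustrative, not from the trial data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, m = 20, 30                      # 10 clusters per arm, 30 subjects each
arm = np.repeat([0, 1], n_clusters // 2)    # cluster-level treatment indicator
u = rng.normal(0.0, 1.0, n_clusters)        # cluster random effects
y = 0.5 * arm[:, None] + u[:, None] + rng.normal(0.0, 1.0, (n_clusters, m))

# Individual-level analysis ignoring clustering: treats all n_clusters*m
# observations as independent, so the SE of the arm difference is too small.
se_naive = np.sqrt(y.var(ddof=1) * 2 / (n_clusters // 2 * m))

# Cluster-level analysis: collapse each cluster to its mean, then compare arms.
cm = y.mean(axis=1)
se_cluster = np.sqrt(cm[arm == 1].var(ddof=1) / (n_clusters // 2)
                     + cm[arm == 0].var(ddof=1) / (n_clusters // 2))
print(se_naive < se_cluster)   # naive SE is biased downwards: True
```

With substantial intra-cluster correlation the naive standard error here is several times too small, which is exactly the mechanism behind the erroneous rejections the paper warns about.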
Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.
Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W
2017-06-01
Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for small number of clusters in each intervention group.
Dilution jet mixing program, phase 3
NASA Technical Reports Server (NTRS)
Srinivasan, R.; Coleman, E.; Myers, G.; White, C.
1985-01-01
The main objectives for the NASA Jet Mixing Phase 3 program were: extension of the data base on the mixing of single sided rows of jets in a confined cross flow to discrete slots, including streamlined, bluff, and angled injections; quantification of the effects of geometrical and flow parameters on penetration and mixing of multiple rows of jets into a confined flow; investigation of in-line, staggered, and dissimilar hole configurations; and development of empirical correlations for predicting temperature distributions for discrete slots and multiple rows of dilution holes.
Design of a new static micromixer having simple structure and excellent mixing performance.
Kamio, Eiji; Ono, Tsutomu; Yoshizawa, Hidekazu
2009-06-21
A novel micromixer with a simple construction and excellent mixing performance is developed. The micromixer is composed of two stainless steel tubes with different diameters: one is an outer tube and the other is an inner tube that fits inside the outer tube. In this micromixer, one reactant fluid flows into the mixing zone from the inner tube and the other flows from the outer tube. The excellent mixing performance is confirmed by comparing the results of a Villermaux/Dushman reaction with those for other micromixers. The developed micromixer has a mixing cascade with multiple mixing mechanisms and an asymmetric structure to achieve effective mixing. The excellent mixing performance of the developed micromixer suggests that serial addition of multiple mixing phenomena yields efficient micromixing.
Micklash. II, Kenneth James; Dutton, Justin James; Kaye, Steven
2014-06-03
An apparatus for testing of multiple material samples includes a gas delivery control system operatively connectable to the multiple material samples and configured to provide gas to the multiple material samples. Both a gas composition measurement device and pressure measurement devices are included in the apparatus. The apparatus includes multiple selectively openable and closable valves and a series of conduits configured to selectively connect the multiple material samples individually to the gas composition device and the pressure measurement devices by operation of the valves. A mixing system is selectively connectable to the series of conduits and is operable to cause forced mixing of the gas within the series of conduits to achieve a predetermined uniformity of gas composition within the series of conduits and passages.
NASA Technical Reports Server (NTRS)
Ly, Uy-Loi; Schoemig, Ewald
1993-01-01
In the past few years, the mixed H2/H-infinity control problem has been the object of much research interest since it allows the incorporation of robust stability into the LQG framework. The general mixed H2/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H2-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H2-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function to represent the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions, and hence multiple performance criteria and H-infinity constraints, and incorporates additional constraints such as fixed-order and/or fixed-structure controllers. The defined penalty function is applicable to any constraint that is expressible in the form of a real symmetric matrix inequality.
A multiple-scales model of the shock-cell structure of imperfectly expanded supersonic jets
NASA Technical Reports Server (NTRS)
Tam, C. K. W.; Jackson, J. A.; Seiner, J. M.
1985-01-01
The present investigation is concerned with the development of an analytical model of the quasi-periodic shock-cell structure of an imperfectly expanded supersonic jet. The investigation represents a part of a program to develop a mathematical theory of broadband shock-associated noise of supersonic jets. Tam and Tanna (1982) have suggested that this type of noise is generated by the weak interaction between the quasi-periodic shock cells and the downstream-propagating large turbulence structures in the mixing layer of the jet. In the model developed in this paper, the effect of turbulence in the mixing layer of the jet is simulated by the addition of turbulent eddy-viscosity terms to the momentum equation. Attention is given to the mean-flow profile and the numerical solution, and a comparison of the numerical results with experimental data.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Wolf, D. E.
1983-01-01
A new computational model, SCIPVIS, has been developed to predict the multiple-cell wave/shock structure in under or over-expanded turbulent jets. SCIPVIS solves the parabolized Navier-Stokes jet mixing equations utilizing a shock-capturing approach in supersonic regions of the jet and a pressure-split approach in subsonic regions. Turbulence processes are represented by the solution of compressibility corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive turbulent mixing process occurring behind the disc are handled in a detailed fashion. SCIPVIS presently analyzes jets exhausting into a quiescent or supersonic external stream for which a single-pass spatial marching solution can be obtained. The iterative coupling of SCIPVIS with a potential flow solver for the analysis of subsonic/transonic external streams is under development.
Gardner, W.P.; Susong, D.D.; Solomon, D.K.; Heasler, H.P.
2011-01-01
Multiple environmental tracers are used to investigate age distribution, evolution, and mixing in local- to regional-scale groundwater circulation around the Norris Geyser Basin area in Yellowstone National Park. Springs ranging in temperature from 3°C to 90°C in the Norris Geyser Basin area were sampled for stable isotopes of hydrogen and oxygen, major and minor element chemistry, dissolved chlorofluorocarbons, and tritium. Groundwater near Norris Geyser Basin comprises two distinct systems: a shallow, cool water system and a deep, high-temperature hydrothermal system. These two end-member systems mix to create springs with intermediate temperature and composition. Using multiple tracers from a large number of springs, it is possible to constrain the distribution of possible flow paths and refine conceptual models of groundwater circulation in and around a large, complex hydrothermal system. Copyright 2011 by the American Geophysical Union.
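For a conservative tracer, the two end-member mixing described here reduces to a linear mixing fraction. A minimal sketch; the concentrations below are illustrative, not the paper's data:

```python
def mixing_fraction(c_sample, c_cold, c_hot):
    """Fraction of the hot (deep hydrothermal) end member in a mixed spring,
    from a conservative tracer concentration (e.g. chloride)."""
    return (c_sample - c_cold) / (c_hot - c_cold)

# Hypothetical tracer concentrations in mg/L: shallow cold water 5, deep thermal 700.
f = mixing_fraction(145.0, 5.0, 700.0)
print(round(f, 3))   # ≈ 0.201: about 20% deep hydrothermal water
```

Repeating this calculation across several independent tracers, as the study does, is what allows inconsistencies between tracers to flag non-conservative behavior or additional flow paths.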
Exciton effects in the index of refraction of multiple quantum wells and superlattices
NASA Technical Reports Server (NTRS)
Kahen, K. B.; Leburton, J. P.
1986-01-01
Theoretical calculations of the index of refraction of multiple quantum wells and superlattices are presented. The model incorporates both the bound and continuum exciton contributions for the gamma region transitions. In addition, the electronic band structure model has both superlattice and bulk alloy properties. The results indicate that large light-hole masses, i.e., of about 0.23, produced by band mixing effects, are required to account for the experimental data. Furthermore, it is shown that superlattice effects rapidly decrease for energies greater than the confining potential barriers. Overall, the theoretical results are in very good agreement with the experimental data and show the importance of including exciton effects in the index of refraction.
Analyzing Association Mapping in Pedigree-Based GWAS Using a Penalized Multitrait Mixed Model
Liu, Jin; Yang, Can; Shi, Xingjie; Li, Cong; Huang, Jian; Zhao, Hongyu; Ma, Shuangge
2017-01-01
Genome-wide association studies (GWAS) have led to the identification of many genetic variants associated with complex diseases in the past 10 years. Penalization methods, with significant numerical and statistical advantages, have been extensively adopted in analyzing GWAS. This study has been partly motivated by the analysis of Genetic Analysis Workshop (GAW) 18 data, which have two notable characteristics. First, the subjects are from a small number of pedigrees and hence related. Second, for each subject, multiple correlated traits have been measured. Most of the existing penalization methods assume independence between subjects and traits and can be suboptimal. There are a few methods in the literature based on mixed modeling that can accommodate correlations. However, they cannot fully accommodate the two types of correlations while conducting effective marker selection. In this study, we develop a penalized multitrait mixed modeling approach. It accommodates the two different types of correlations and includes several existing methods as special cases. Effective penalization is adopted for marker selection. Simulation demonstrates its satisfactory performance. The GAW 18 data are analyzed using the proposed method. PMID:27247027
Evaluating the MMI diagnostic on OMEGA direct-drive shots
NASA Astrophysics Data System (ADS)
Baumgaertel, J. A.; Bradley, P. A.; Cobble, J. A.; Fincke, J.; Hakel, P.; Hsu, S. C.; Kanzleiter, R.; Krasheninnikova, N. S.; Murphy, T. J.; Schmitt, M. J.; Shah, R.; Tregillis, I.; Obrey, K.; Mancini, R. C.; Joshi, T.; Johns, H.; Mayes, D.
2013-10-01
The Defect-Induced Mix Experiment (DIME) project utilized Multiple Monochromatic Imagers (MMI) on symmetric and polar direct-drive shots conducted on the OMEGA laser. The MMI provides spatially and spectrally resolved data of capsule implosions and resultant dopant emissions. The capsules had radii of 430 μm, with CH shells that included an inner layer doped with 1-2 atom % Ti, and a gas fill of 5 atm deuterium. Simulations of the target implosion by codes HYDRA and RAGE are post-processed with self-emission and MMI synthetic diagnostic tools and quantitatively compared to the MMI data to determine the utility of using it for mix model validation. MMI data shows the location of dopants, which are used to diagnose mix. Sensitivities of synthetic MMI images and yield to laser drive and mix levels are explored. Finally, RAGE results, clean and with mix, are compared with time-dependent streak camera data. This work is supported by US DOE/NNSA, performed at LANL, operated by LANS LLC under contract DE-AC52-06NA25396.
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. 
The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. PMID:23376789
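The ICC mentioned above can also be estimated from classical one-way ANOVA variance components. A minimal sketch on simulated data (not the crossed-random-effects LME setup of the paper):

```python
import numpy as np

def icc_oneway(y):
    """ICC(1) from a balanced one-way layout y (groups x replicates),
    via the classical ANOVA mean-square estimator."""
    k, n = y.shape
    grand = y.mean()
    msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (k - 1)               # between
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))  # within
    return (msb - msw) / (msb + (n - 1) * msw)

rng = np.random.default_rng(2)
k, n = 200, 10
u = rng.normal(0.0, np.sqrt(0.5), k)                     # subject effects, var 0.5
y = u[:, None] + rng.normal(0.0, 1.0, (k, n))            # residual var 1.0
print(icc_oneway(y))   # close to the true ICC = 0.5 / (0.5 + 1.0) = 1/3
```

The LME route generalizes this: the same ratio of between- to total variance is read off the fitted random-effect and residual variance components, and remains valid when fixed effects are present.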
Linear mixed-effects modeling approach to FMRI group analysis.
Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W
2013-06-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. 
The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. Published by Elsevier Inc.
Edenharter, Günther M; Gartner, Daniel; Pförringer, Dominik
2017-06-01
Increasing costs of material resources challenge hospitals to stay profitable. Particularly in anesthesia departments and intensive care units, bronchoscopes are used for various indications. Inefficient management of single- and multiple-use systems can influence the hospitals' material costs substantially. Using mathematical modeling, we developed a strategic decision support tool to determine the optimum mix of disposable and reusable bronchoscopy devices in the setting of an intensive care unit. A mathematical model with the objective to minimize costs in relation to demand constraints for bronchoscopy devices was formulated. The stochastic model decides whether single-use, multi-use, or a strategically chosen mix of both device types should be used. A decision support tool was developed in which parameters for uncertain demand such as mean, standard deviation, and a reliability parameter can be inserted. Furthermore, reprocessing costs per procedure, procurement, and maintenance costs for devices can be parameterized. Our experiments show for which demand pattern and reliability measure, it is efficient to only use reusable or disposable devices and under which circumstances the combination of both device types is beneficial. To determine the optimum mix of single-use and reusable bronchoscopy devices effectively and efficiently, managers can enter their hospital-specific parameters such as demand and prices into the decision support tool.The software can be downloaded at: https://github.com/drdanielgartner/bronchomix/.
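A decision rule of this kind can be sketched as a small Monte Carlo cost comparison; every parameter below is an invented placeholder, not taken from the authors' tool or the linked software:

```python
import numpy as np

# Hypothetical annual figures (illustrative only).
FIXED_REUSABLE = 4000.0   # procurement + maintenance per reusable scope per year
REPROCESS = 60.0          # cleaning/reprocessing cost per reusable procedure
SINGLE_USE = 250.0        # cost per disposable scope
CAPACITY = 200            # procedures one reusable scope can cover per year

rng = np.random.default_rng(3)
demand = rng.normal(600.0, 120.0, 10_000).clip(min=0)  # uncertain yearly demand

def expected_cost(n_reusable):
    """Expected annual cost when n_reusable scopes cover demand first
    and disposables absorb the overflow."""
    covered = np.minimum(demand, n_reusable * CAPACITY)   # done with reusables
    overflow = demand - covered                           # done with disposables
    return (n_reusable * FIXED_REUSABLE
            + REPROCESS * covered.mean()
            + SINGLE_USE * overflow.mean())

best = min(range(0, 8), key=expected_cost)
print(best)   # optimum number of reusable scopes under these assumptions
```

The structure mirrors the trade-off the paper describes: high demand with low variability favors reusables, while low or highly uncertain demand shifts the optimum toward disposables or a mix.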
Multivariate-$t$ nonlinear mixed models with application to censored multi-outcome AIDS studies.
Lin, Tsung-I; Wang, Wan-Lun
2017-10-01
In multivariate longitudinal HIV/AIDS studies, multi-outcome repeated measures on each patient over time may contain outliers, and the viral loads are often subject to an upper or lower limit of detection depending on the quantification assays. In this article, we consider an extension of the multivariate nonlinear mixed-effects model by adopting a joint multivariate-$t$ distribution for random effects and within-subject errors and taking the censoring information of multiple responses into account. The proposed model is called the multivariate-$t$ nonlinear mixed-effects model with censored responses (MtNLMMC), allowing for analyzing multi-outcome longitudinal data exhibiting nonlinear growth patterns with censorship and fat-tailed behavior. Utilizing the Taylor-series linearization method, a pseudo-data version of the expectation conditional maximization either (ECME) algorithm is developed for iteratively carrying out maximum likelihood estimation. We illustrate our techniques with two data examples from HIV/AIDS studies. Experimental results signify that the MtNLMMC performs favorably compared to its Gaussian analogue and some existing approaches. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Curriculum-Based Measurement of Oral Reading: Quality of Progress Monitoring Outcomes
ERIC Educational Resources Information Center
Christ, Theodore J.; Zopluoglu, Cengiz; Long, Jeffery D.; Monaghen, Barbara D.
2012-01-01
Curriculum-based measurement of oral reading (CBM-R) is frequently used to set student goals and monitor student progress. This study examined the quality of growth estimates derived from CBM-R progress monitoring data. The authors used a linear mixed effects regression (LMER) model to simulate progress monitoring data for multiple levels of…
ERIC Educational Resources Information Center
Yakubova, Gulnoza; Hughes, Elizabeth M.; Hornberger, Erin
2015-01-01
The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with…
Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis
Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas
2016-01-01
The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes, including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates into the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select which covariates are most relevant to the model based on observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246
ERIC Educational Resources Information Center
Wang, Wei
2013-01-01
Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests often are considered to be superior to tests containing only MC items although the use of multiple item formats leads to measurement challenges in the context of equating conducted under…
Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M
2017-04-01
Parameter uncertainty in the value sets of multiattribute utility-based instruments (MAUIs) has previously received little attention. Ignoring it produces false precision and leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty in MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults (n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effects model (i.e., incorporating parameter uncertainty in the value set) with the standard error of the estimated mean utilities based on multiple imputation and the standard error using the conventional approach of using MAUIs (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effects model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring uncertainty in the predicted health utilities derived from MAUIs could lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
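Rubin's rules, which underlie the multiple-imputation correction, can be sketched directly; the per-imputation estimates below are illustrative numbers, not the CWF results.

```python
import statistics

# Hypothetical per-imputation results: (mean utility, squared standard error)
# from each of m imputed value sets. Values are illustrative only.
estimates = [(0.826, 0.003**2), (0.829, 0.003**2), (0.827, 0.003**2),
             (0.830, 0.003**2), (0.828, 0.003**2)]
m = len(estimates)

pooled_mean = statistics.mean(q for q, _ in estimates)
within = statistics.mean(u for _, u in estimates)         # average within-imputation variance
between = statistics.variance([q for q, _ in estimates])  # between-imputation variance
total_var = within + (1 + 1 / m) * between                # Rubin's rules
pooled_se = total_var ** 0.5
print(round(pooled_mean, 4), round(pooled_se, 5))
```

The between-imputation term is what the conventional single-value-set approach drops, which is why its SE comes out too small.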
Clavijo, Gabriel; Williams, Trevor; Muñoz, Delia; Caballero, Primitivo; López-Ferber, Miguel
2010-01-01
An insect nucleopolyhedrovirus naturally survives as a mixture of at least nine genotypes. Infection by multiple genotypes results in the production of virus occlusion bodies (OBs) with greater pathogenicity than those of any genotype alone. We tested the hypothesis that each OB contains a genotypically diverse population of virions. Few insects died following inoculation with an experimental two-genotype mixture at a dose of one OB per insect, but a high proportion of multiple infections were observed (50%), which differed significantly from the frequencies predicted by a non-associated transmission model in which genotypes are segregated into distinct OBs. By contrast, insects that consumed multiple OBs experienced higher mortality and infection frequencies did not differ significantly from those of the non-associated model. Inoculation with genotypically complex wild-type OBs indicated that genotypes tend to be transmitted in association, rather than as independent entities, irrespective of dose. To examine the hypothesis that virions may themselves be genotypically heterogeneous, cell culture plaques derived from individual virions were analysed to reveal that one-third of virions was of mixed genotype, irrespective of the genotypic composition of the OBs. We conclude that co-occlusion of genotypically distinct virions in each OB is an adaptive mechanism that favours the maintenance of virus diversity during insect-to-insect transmission. PMID:19939845
Spatial path models with multiple indicators and multiple causes: mental health in US counties.
Congdon, Peter
2011-06-01
This paper considers a structural model for the impact on area mental health outcomes (poor mental health, suicide) of spatially structured latent constructs: deprivation, social capital, social fragmentation and rurality. These constructs are measured by multiple observed effect indicators, with the constructs allowed to be correlated both between and within areas. However, in the scheme developed here, particular latent constructs may also be influenced by known variables, or, via path sequences, by other constructs, possibly nonlinearly. For example, area social capital may be measured by effect indicators (e.g. associational density, charitable activity), but influenced as causes by other constructs (e.g. area deprivation), and by observed features of the socio-ethnic structure of areas. A model incorporating these features is applied to suicide mortality and the prevalence of poor mental health in 3141 US counties, which are related to the latent spatial constructs and to observed variables (e.g. county ethnic mix). Copyright © 2011 Elsevier Ltd. All rights reserved.
Multivariate meta-analysis using individual participant data
Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.
2016-01-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484
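A minimal sketch of the bootstrap route to a within-study correlation, on synthetic IPD (the sample size, effect sizes, and noise model are all assumptions, not the hypertension-trial data):

```python
import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

def mean_diff(sample, idx):
    """Treatment-effect estimate for one outcome: treated mean minus control mean."""
    treated = [p[idx] for p in sample if p[0]]
    control = [p[idx] for p in sample if not p[0]]
    return mean(treated) - mean(control)

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den

# Synthetic IPD for one trial: (treated flag, outcome 1, outcome 2). A shared
# patient-level term correlates the two outcomes, so the two treatment-effect
# estimates are correlated within the study.
patients = []
for _ in range(300):
    t = random.random() < 0.5
    shared = random.gauss(0, 1)
    patients.append((t,
                     (0.3 if t else 0.0) + shared + random.gauss(0, 0.5),
                     (0.2 if t else 0.0) + shared + random.gauss(0, 0.5)))

# Nonparametric bootstrap: resample patients with replacement, re-estimate both
# effects, and correlate the estimates across resamples.
boots = []
for _ in range(400):
    resample = [random.choice(patients) for _ in patients]
    boots.append((mean_diff(resample, 1), mean_diff(resample, 2)))

within_study_corr = pearson([b[0] for b in boots], [b[1] for b in boots])
print(round(within_study_corr, 2))
```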
Pennings, Stephanie M; Finn, Joseph; Houtsma, Claire; Green, Bradley A; Anestis, Michael D
2017-10-01
Prior studies examining posttraumatic stress disorder (PTSD) symptom clusters and the components of the interpersonal theory of suicide (ITS) have yielded mixed results, likely stemming in part from the use of divergent samples and measurement techniques. This study aimed to expand on these findings by utilizing a large military sample, gold standard ITS measures, and multiple PTSD factor structures. Utilizing a sample of 935 military personnel, hierarchical multiple regression analyses were used to test the association between PTSD symptom clusters and the ITS variables. Additionally, we tested for indirect effects of PTSD symptom clusters on suicidal ideation through thwarted belongingness, conditional on levels of perceived burdensomeness. Results indicated that numbing symptoms are positively associated with both perceived burdensomeness and thwarted belongingness, and that hyperarousal symptoms (dysphoric arousal in the 5-factor model) are positively associated with thwarted belongingness. Results also indicated that hyperarousal symptoms (anxious arousal in the 5-factor model) were positively associated with fearlessness about death. The positive association between PTSD symptom clusters and suicidal ideation was inconsistent and modest, with mixed support for the ITS model. Overall, these results provide further clarity regarding the association between specific PTSD symptom clusters and suicide risk factors. © 2016 The American Association of Suicidology.
Water mass mixing: The dominant control on the zinc distribution in the North Atlantic Ocean
NASA Astrophysics Data System (ADS)
Roshan, Saeed; Wu, Jingfeng
2015-07-01
Dissolved zinc (dZn) concentration was determined in the North Atlantic during the U.S. GEOTRACES 2010 and 2011 cruises (GEOTRACES GA03). A relatively poor linear correlation (R2 = 0.756) was observed between dZn and silicic acid (Si), with a slope of 0.0577 nM/µmol/kg. We attribute the relatively poor dZn-Si correlation to the following processes: (a) differential regeneration of zinc relative to silicic acid, (b) mixing of multiple water masses that have different Zn/Si ratios, and (c) zinc sources such as sedimentary or hydrothermal inputs. To distinguish these possibilities quantitatively, we use the results of the Optimum Multi-Parameter Water Mass Analysis by Jenkins et al. (2015) to model the zinc distribution below 500 m. We hypothesized two scenarios: conservative mixing and regenerative mixing. The first (conservative) scenario produced a correlation with observations of R2 = 0.846. In the second scenario, we took Si-related regeneration into account, which modeled the observations with R2 = 0.867. Through this regenerative mixing scenario, we estimated Zn/Si = 0.0548 nM/µmol/kg, which may be more realistic than the linear regression slope because it accounts for process (b). However, regeneration did not improve the model substantially (R2 = 0.867 versus 0.846), which may indicate an insignificant effect of remineralization on the zinc distribution in this region. The relative weakness of the model-observation correlation (R2 ~ 0.85 for both scenarios) implies that processes (a) and (c) may also be at play. Furthermore, dZn in the upper 500 m exhibited a very poor correlation with apparent oxygen utilization, suggesting a minimal role for the organic matter-associated remineralization process.
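The slope and R² statistics quoted in the abstract are ordinary least-squares quantities; a minimal reconstruction on synthetic data (the values below are assumed for illustration, not the cruise measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dZn-Si data: a linear relation with slope near the reported
# 0.0577 nM/(umol/kg), plus scatter standing in for water-mass mixing and
# non-Si-linked zinc sources.
si = rng.uniform(0.0, 60.0, 200)                     # silicic acid, umol/kg
dzn = 0.3 + 0.0577 * si + rng.normal(0.0, 0.4, 200)  # dissolved zinc, nM

slope, intercept = np.polyfit(si, dzn, 1)
pred = intercept + slope * si
r2 = 1.0 - np.sum((dzn - pred) ** 2) / np.sum((dzn - dzn.mean()) ** 2)
print(round(slope, 4), round(r2, 2))
```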
Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin
2017-01-01
Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate potential factors associated with CRC screening usage across urban-rural groups. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed model (WGLIMM) was used to handle these hierarchically structured data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalence in the four residence groups - urban, second city, suburban, and town/rural - was 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect (p<0.0001) and that residence groups had significant interactions with gender, age group, education level, and employment status (p<0.05). Multiple logistic regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data.
Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening and the associations were affected by living areas such as urban and rural regions. PMID:28952708
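The reported ORs and 95% CIs come from exponentiating fitted log-odds coefficients; a minimal sketch with assumed coefficient values (not the CHIS estimates):

```python
import math

# Hypothetical fitted logistic-regression coefficient and standard error
# (illustrative values, not those from the CHIS analysis).
beta, se = 0.405, 0.08   # log-odds of screening per unit of a predictor

odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)    # Wald 95% CI on the OR scale
ci_high = math.exp(beta + 1.96 * se)
print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2))
```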
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
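The chapter's link between correlation and simple linear regression can be verified numerically on simulated data: the OLS slope b equals r multiplied by the ratio of standard deviations, so the two procedures estimate the same association on different scales.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated paired continuous data for illustration.
x = rng.normal(10.0, 2.0, 500)
y = 3.0 + 0.8 * x + rng.normal(0.0, 1.0, 500)

r = np.corrcoef(x, y)[0, 1]           # Pearson correlation
slope = np.polyfit(x, y, 1)[0]        # simple linear regression slope
slope_from_r = r * y.std(ddof=1) / x.std(ddof=1)   # identity: b = r * sy/sx
print(round(slope, 3), round(slope_from_r, 3))
```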
Fitzsimmons, Eric J; Kvam, Vanessa; Souleyrette, Reginald R; Nambisan, Shashi S; Bonett, Douglas G
2013-01-01
Despite recent improvements in highway safety in the United States, serious crashes on curves remain a significant problem. To assist in better understanding causal factors leading to this problem, this article presents and demonstrates a methodology for collection and analysis of vehicle trajectory and speed data for rural and urban curves using Z-configured road tubes. For a large number of vehicle observations at 2 horizontal curves located in Dexter and Ames, Iowa, the article develops vehicle speed and lateral position prediction models for multiple points along these curves. Linear mixed-effects models were used to predict vehicle lateral position and speed along the curves as explained by operational, vehicle, and environmental variables. Behavior was visually represented for an identified subset of "risky" drivers. Linear mixed-effect regression models provided the means to predict vehicle speed and lateral position while taking into account repeated observations of the same vehicle along horizontal curves. Speed and lateral position at point of entry were observed to influence trajectory and speed profiles. Rural horizontal curve site models are presented that indicate that the following variables were significant and influenced both vehicle speed and lateral position: time of day, direction of travel (inside or outside lane), and type of vehicle.
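Why repeated observations of the same vehicle call for a mixed model can be made concrete with an intra-class correlation (ICC) computation on simulated clustered data; the one-way ANOVA method-of-moments estimator below is a textbook sketch, not the authors' fitted model.

```python
import random
import statistics

random.seed(7)

# Simulated clustered data: repeated speed readings per vehicle, echoing
# repeated observations along a curve. The vehicle effect induces intra-class
# correlation that a simple linear model would wrongly ignore.
n_vehicles, n_obs = 40, 6
data = []
for _ in range(n_vehicles):
    vehicle_effect = random.gauss(0, 4)                     # between-vehicle sd = 4
    data.append([60 + vehicle_effect + random.gauss(0, 2)   # within-vehicle sd = 2
                 for _ in range(n_obs)])

# One-way ANOVA method-of-moments estimates of the variance components.
grand = statistics.mean(x for group in data for x in group)
ss_between = sum((statistics.mean(g) - grand) ** 2 for g in data)
msb = n_obs * ss_between / (n_vehicles - 1)                  # between-group mean square
msw = statistics.mean(statistics.variance(g) for g in data)  # pooled within-group MS
var_between = max(0.0, (msb - msw) / n_obs)
icc = var_between / (var_between + msw)   # true value in this simulation is 0.8
print(round(icc, 2))
```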
Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes
Nag, Ambarish; St. John, Peter C.; Crowley, Michael F.
2018-01-01
Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass—namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-Acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetyl kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production. PMID:29381705
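Flux balance analysis reduces to a linear program, which can be sketched on a toy network (three reactions and one internal metabolite; this illustrates the method, not the authors' genome-scale A. succinogenes model, and the obligatory acetate flux is an assumption standing in for the mixed-acid byproduct that the PTA/ACK knockouts remove).

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometry: one internal metabolite "A" fed by substrate uptake (v1)
# and drained by an acetate branch (v2) and a succinate branch (v3).
# FBA solves a linear program subject to steady state, S v = 0.
S = np.array([[1.0, -1.0, -1.0]])   # rows: metabolites, columns: reactions
b = np.zeros(1)

def max_succinate(acetate_open: bool) -> float:
    bounds = [(0, 10),                              # uptake capacity
              (2, 10) if acetate_open else (0, 0),  # acetate branch (knockout -> 0)
              (0, 10)]                              # succinate branch
    res = linprog(c=[0.0, 0.0, -1.0],               # minimize -v3, i.e. maximize v3
                  A_eq=S, b_eq=b, bounds=bounds, method="highs")
    return -res.fun

print(max_succinate(True), max_succinate(False))
```

Closing the acetate branch reroutes all uptake flux to succinate, the same qualitative effect the paper predicts for the PTA/ACK knockouts.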
A comparative study of kinetic and connectionist modeling for shelf-life prediction of Basundi mix.
Ruhil, A P; Singh, R R B; Jain, D K; Patel, A A; Patil, G R
2011-04-01
A ready-to-reconstitute formulation of Basundi, a popular Indian dairy dessert, was subjected to storage at various temperatures (10, 25 and 40 °C) and deteriorative changes in the Basundi mix were monitored using quality indices like pH, hydroxyl methyl furfural (HMF), bulk density (BD) and insolubility index (II). The multiple regression equations and the Arrhenius functions that describe the parameters' dependence on temperature for the four physico-chemical parameters were integrated to develop mathematical models for predicting the sensory quality of Basundi mix. A connectionist model using a multilayer feed-forward neural network with a back-propagation algorithm was also developed for predicting the storage life of the product, employing the artificial neural network (ANN) toolbox of MATLAB. The quality indices served as the input parameters whereas the output parameters were the sensorily evaluated flavour and total sensory score. A total of 140 observations were used and the prediction performance was judged on the basis of per cent root mean square error. The results obtained from the two approaches were compared. Relatively lower magnitudes of per cent root mean square error for both sensory parameters indicated that the connectionist models were better fitted than kinetic models for predicting storage life.
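The kinetic side of the comparison rests on Arrhenius temperature dependence and the per cent RMSE criterion; a sketch with assumed rate parameters (not the fitted Basundi-mix values):

```python
import math

# Arrhenius temperature dependence of a deterioration rate constant; the
# pre-exponential factor and activation energy are illustrative values.
A = 2.0e8          # pre-exponential factor, 1/day
EA = 55_000.0      # activation energy, J/mol
R = 8.314          # gas constant, J/(mol K)

def rate(temp_c: float) -> float:
    return A * math.exp(-EA / (R * (temp_c + 273.15)))

# Per cent root mean square error, the criterion used to compare the kinetic
# and connectionist predictions against observed sensory scores.
def pct_rmse(observed, predicted):
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

print(rate(40) > rate(25) > rate(10))  # deterioration accelerates with temperature
```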
Borgquist, Ola; Wise, Matt P; Nielsen, Niklas; Al-Subaie, Nawaf; Cranshaw, Julius; Cronberg, Tobias; Glover, Guy; Hassager, Christian; Kjaergaard, Jesper; Kuiper, Michael; Smid, Ondrej; Walden, Andrew; Friberg, Hans
2017-08-01
Dysglycemia and glycemic variability are associated with poor outcomes in critically ill patients. Targeted temperature management alters blood glucose homeostasis. We investigated the association between blood glucose concentrations and glycemic variability and the neurologic outcomes of patients randomized to targeted temperature management at 33°C or 36°C after cardiac arrest. Design: Post hoc analysis of the multicenter TTM-trial; the primary outcome of this analysis was neurologic outcome after 6 months, assessed with the Cerebral Performance Category scale. Setting: Thirty-six sites in Europe and Australia. Patients: All 939 patients with out-of-hospital cardiac arrest of presumed cardiac cause that had been included in the TTM-trial. Interventions: Targeted temperature management at 33°C or 36°C. Nonparametric tests as well as multiple logistic regression and mixed effects logistic regression models were used. Median glucose concentrations on hospital admission differed significantly between Cerebral Performance Category outcomes (p < 0.0001). Hyper- and hypoglycemia were associated with poor neurologic outcome (p = 0.001 and p = 0.054). In the multiple logistic regression models, the median glycemic level was an independent predictor of poor Cerebral Performance Category (Cerebral Performance Category, 3-5) with an odds ratio (OR) of 1.13 in the adjusted model (p = 0.008; 95% CI, 1.03-1.24). It was also a predictor in the mixed model, which served as a sensitivity analysis to adjust for the multiple time points. The proportion of hyperglycemia was higher in the 33°C group compared with the 36°C group. Higher blood glucose levels at admission and during the first 36 hours, and higher glycemic variability, were associated with poor neurologic outcome and death. More patients in the 33°C treatment arm had hyperglycemia.
NASA Astrophysics Data System (ADS)
Finsterbusch, Jürgen
2010-12-01
Double- or two-wave-vector diffusion-weighting experiments with short mixing times, in which two diffusion-weighting periods are applied in direct succession, are a promising tool to estimate cell sizes in living tissue. However, the underlying effect, a signal difference between parallel and antiparallel wave vector orientations, is considerably reduced for the long gradient pulses required on whole-body MR systems. Recently, it has been shown that multiple concatenations of the two wave vectors in a single acquisition can double the modulation amplitude if short gradient pulses are used. In this study, numerical simulations of such experiments were performed with parameters achievable with whole-body MR systems. It is shown that the theoretical model yields a good approximation of the signal behavior if an additional term describing free diffusion is included. More importantly, it is demonstrated that the shorter gradient pulses sufficient to achieve the desired diffusion weighting for multiple concatenations increase the signal modulation considerably, e.g. by a factor of about five for five concatenations. Even at identical echo times, achieved by a shortened diffusion time, a moderate number of concatenations significantly improves the signal modulation. Thus, experiments on whole-body MR systems may benefit from multiple concatenations.
Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC
2015-01-01
The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB), adapted from Dunnett's multiple comparisons with control (MCC), has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890
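The selection logic of multiple comparisons with the best can be sketched on illustrative group scores (hypothetical "success" outcomes for four modalities, not the UNOS data). A plain critical value of 2 stands in for Hsu's exact MCB constant, so the intervals are approximate; the rule of retaining every modality whose upper confidence bound against its best rival exceeds zero is the same.

```python
import statistics

# Hypothetical per-modality outcome scores (illustrative only).
groups = {
    "none":       [0.52, 0.48, 0.55, 0.50, 0.49],
    "T3 only":    [0.60, 0.58, 0.63, 0.57, 0.61],
    "T3+steroid": [0.70, 0.72, 0.69, 0.71, 0.73],
    "full combo": [0.78, 0.80, 0.77, 0.79, 0.81],
}
CRIT = 2.0   # stand-in critical value (assumption, not Hsu's exact constant)

best_candidates = []
for name, xs in groups.items():
    rival_means = [statistics.mean(v) for k, v in groups.items() if k != name]
    diff = statistics.mean(xs) - max(rival_means)       # distance to best rival
    half_width = CRIT * statistics.stdev(xs) / len(xs) ** 0.5
    if diff + half_width > 0:   # upper bound above zero: cannot be ruled out as best
        best_candidates.append(name)

print(best_candidates)
```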
Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin
2017-02-04
The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplondinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, findings highlight the importance for future research to utilize accurate localization systems, different species, validated sound transmission distances, and to consider different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Distribution path robust optimization of electric vehicle with multiple distribution centers
Hao, Wei; He, Ruichun; Jia, Xiaoyan; Pan, Fuquan; Fan, Jing; Xiong, Ruiqi
2018-01-01
To identify electric vehicle (EV) distribution paths with high robustness, insensitivity to uncertainty factors, and detailed road-by-road schemes, optimization of the distribution path problem of EVs with multiple distribution centers, considering charging facilities, is necessary. With minimum transport time as the goal, a robust optimization model of EV distribution paths with adjustable robustness is established based on Bertsimas' theory of robust discrete optimization. An enhanced three-segment genetic algorithm is also developed to solve the model, such that the optimal distribution scheme contains all road-by-road path data from the outset via the three-segment mixed coding and decoding method. During genetic manipulation, different crossover and mutation operations are carried out on different chromosomes, while, during population evolution, infeasible solutions are naturally avoided. A part of the road network of Xifeng District in Qingyang City is taken as an example to test the model and the algorithm, and concrete transportation paths are produced in the final distribution scheme. Therefore, more robust EV distribution paths with multiple distribution centers can be obtained using the robust optimization model. PMID:29518169
Hydrothermal contamination of public supply wells in Napa and Sonoma Valleys, California
Forrest, Matthew J.; Kulongoski, Justin T.; Edwards, Matthew S.; Farrar, Christopher D.; Belitz, Kenneth; Norris, Richard D.
2013-01-01
Groundwater chemistry and isotope data from 44 public supply wells in the Napa and Sonoma Valleys, California were determined to investigate mixing of relatively shallow groundwater with deeper hydrothermal fluids. Multivariate analyses including Cluster Analyses, Multidimensional Scaling (MDS), Principal Components Analyses (PCA), Analysis of Similarities (ANOSIM), and Similarity Percentage Analyses (SIMPER) were used to elucidate constituent distribution patterns, determine which constituents are significantly associated with these hydrothermal systems, and investigate hydrothermal contamination of local groundwater used for drinking water. Multivariate statistical analyses were essential to this study because traditional methods, such as mixing tests involving single species (e.g. Cl or SiO2) were incapable of quantifying component proportions due to mixing of multiple water types. Based on these analyses, water samples collected from the wells were broadly classified as fresh groundwater, saline waters, hydrothermal fluids, or mixed hydrothermal fluids/meteoric water wells. The Multivariate Mixing and Mass-balance (M3) model was applied in order to determine the proportion of hydrothermal fluids, saline water, and fresh groundwater in each sample. Major ions, isotopes, and physical parameters of the waters were used to characterize the hydrothermal fluids as Na–Cl type, with significant enrichment in the trace elements As, B, F and Li. Five of the wells from this study were classified as hydrothermal, 28 as fresh groundwater, two as saline water, and nine as mixed hydrothermal fluids/meteoric water wells. The M3 mixing-model results indicated that the nine mixed wells contained between 14% and 30% hydrothermal fluids. Further, the chemical analyses show that several of these mixed-water wells have concentrations of As, F and B that exceed drinking-water standards or notification levels due to contamination by hydrothermal fluids.
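The mass-balance idea behind mixing models such as M3 can be sketched as a small constrained least-squares problem. The end-member compositions and fractions below are hypothetical, chosen only to illustrate recovering mixing proportions from multiple constituents simultaneously; the actual M3 model works with principal components of the full chemical data set.

```python
import numpy as np

# Hypothetical end-member compositions (rows: Cl, SiO2, B; columns: fresh
# groundwater, hydrothermal fluid, saline water). Illustrative values only,
# not the Napa/Sonoma data.
E = np.array([
    [10.0, 5000.0, 800.0],   # Cl (mg/L)
    [30.0,  150.0,  20.0],   # SiO2 (mg/L)
    [0.05,    5.0,   1.0],   # B (mg/L)
])

true_f = np.array([0.75, 0.20, 0.05])  # true mixing fractions
sample = E @ true_f                    # observed sample chemistry

# Append a heavily weighted sum-to-one constraint and solve by least squares.
w = 1e4
A = np.vstack([E, w * np.ones((1, 3))])
b = np.append(sample, w * 1.0)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With three constituents and three end members the fractions are recovered exactly; this is why single-species mixing tests fail when more than two water types mix, while the multivariate balance remains solvable.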
Bayesian function-on-function regression for multilevel functional data.
Meyer, Mark J; Coull, Brent A; Versace, Francesco; Cinciripini, Paul; Morris, Jeffrey S
2015-09-01
Medical and public health research increasingly involves the collection of complex and high dimensional data. In particular, functional data, where the unit of observation is a curve or set of curves finely sampled over a grid, is frequently obtained. Moreover, researchers often sample multiple curves per person, resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data on a fine grid, presenting a simple model as well as a more extensive mixed model framework, and introducing various functional Bayesian inferential procedures that account for multiple testing. We examine these models via simulation and a data analysis with data from a study that used event-related potentials to examine how the brain processes various types of images. © 2015, The International Biometric Society.
Volumetric display containing multiple two-dimensional color motion pictures
NASA Astrophysics Data System (ADS)
Hirayama, R.; Shiraki, A.; Nakayama, H.; Kakue, T.; Shimobaba, T.; Ito, T.
2014-06-01
We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has its own projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically infinite, and no meaningful pattern can be seen outside of the projected directions. In this paper, we extended the algorithm to record multiple 2-D projection patterns in color. There are two common ways of mixing color: additive and subtractive. Additive color mixing, used to mix light, is based on RGB primaries; subtractive color mixing, used to mix inks, is based on CMY primaries. We devised two coloring methods, one based on additive mixing and one on subtractive mixing. We performed numerical simulations of the coloring methods and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from light-emitting diodes (LEDs) in an 8×8×8 array, with the lighting patterns of the LEDs controlled by a microcomputer board. The other is made of a 7×7 array of threads, each thread illuminated by a projector connected to a PC. As a result of the implementation, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art, and so forth.
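The two mixing rules can be sketched as follows. This is a minimal 8-bit model of the additive/subtractive distinction only, ignoring gamma correction and real ink spectra, and is not the paper's recording algorithm.

```python
def clamp(v):
    """Clamp a channel value to the 8-bit range."""
    return max(0, min(255, v))

def mix_additive(*colors):
    """Additive (light, RGB) mixing: channel-wise sum, clamped."""
    return tuple(clamp(sum(c[i] for c in colors)) for i in range(3))

def mix_subtractive(*colors):
    """Subtractive (ink, CMY) mixing: sum the absorbed components
    (CMY = 255 - RGB), clamp, and convert back to RGB."""
    cmy = [tuple(255 - v for v in c) for c in colors]
    mixed = tuple(clamp(sum(c[i] for c in cmy)) for i in range(3))
    return tuple(255 - v for v in mixed)

# Red light + green light -> yellow (additive);
# cyan ink + yellow ink -> green (subtractive).
```

The same pair of inputs thus yields different results under the two methods, which is why the display hardware (LEDs emitting light vs. threads reflecting projected light) motivates a choice between the two coloring schemes.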
NASA Technical Reports Server (NTRS)
Holdeman, James D.
1991-01-01
Experimental and computational results on the mixing of single, double, and opposed rows of jets with an isothermal or variable temperature mainstream in a confined subsonic crossflow are summarized. The studies were performed to investigate flow and geometric variations typical of the complex 3-D flowfield in the dilution zone of combustion chambers in gas turbine engines. The principal observations from the experiments were that the momentum-flux ratio was the most significant flow variable, and that temperature distributions were similar (independent of orifice diameter) when the orifice spacing and the square-root of the momentum-flux ratio were inversely proportional. The experiments and empirical model for the mixing of a single row of jets from round holes were extended to include several variations typical of gas turbine combustors. Combinations of flow and geometry that gave optimum mixing were identified from the experimental results. Based on results of calculations made with a 3-D numerical model, the empirical model was further extended to model the effects of curvature and convergence. The principal conclusions from this study were that the orifice spacing and momentum-flux relationships were the same as observed previously in a straight duct, but the jet structure was significantly different for jets injected from the inner wall of a turn than for those injected from the outer wall. Also, curvature in the axial direction caused a drift of the jet trajectories toward the inner wall, but the mixing in a turning and converging channel did not seem to be inhibited by the convergence, independent of whether the convergence was radial or circumferential. The calculated jet penetration and mixing in an annulus were similar to those in a rectangular duct when the orifice spacing was specified at the radius dividing the annulus into equal areas.
A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models.
Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S; Wu, Xiaowei; Müller, Rolf
2018-01-01
Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design.
ERIC Educational Resources Information Center
Klinger, Don A.; Rogers, W. Todd
2003-01-01
The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…
Substance Use and PTSD Symptoms Impact the Likelihood of Rape and Revictimization in College Women
ERIC Educational Resources Information Center
Messman-Moore, Terri L.; Ward, Rose Marie; Brown, Amy L.
2009-01-01
The present study utilized a mixed retrospective and prospective design with an 8-month follow-up period to test a model of revictimization that included multiple childhood (i.e., child sexual, physical, and emotional abuse) and situational variables (i.e., substance use, sexual behavior) for predicting rape among 276 college women. It was of…
USDA-ARS?s Scientific Manuscript database
PURPOSE: Bacterial cold water disease (BCWD) causes significant economic loss in salmonid aquaculture, and in 2005, a rainbow trout breeding program was initiated at the NCCCWA to select for increased disease survival. The main objectives of this study were to determine the mode of inheritance of di...
A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons
ERIC Educational Resources Information Center
Hong, Hwanhee; Chu, Haitao; Zhang, Jing; Carlin, Bradley P.
2016-01-01
Bayesian statistical approaches to mixed treatment comparisons (MTCs) are becoming more popular because of their flexibility and interpretability. Many randomized clinical trials report multiple outcomes with possible inherent correlations. Moreover, MTC data are typically sparse (although richer than standard meta-analysis, comparing only two…
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
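The core of such a solver can be sketched as a Jacobi-preconditioned conjugate gradient in which the coefficient matrix is touched only through matrix-vector products, mirroring "iteration on data". This is a generic sketch on a small synthetic SPD system, not the authors' three-step implementation.

```python
import numpy as np

def pcg(matvec, b, M_inv_diag, tol=1e-8, maxit=1000):
    """Jacobi-preconditioned conjugate gradient; the coefficient matrix
    is accessed only through the matvec callback (iteration on data)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = M_inv_diag * r            # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # Fletcher-Reeves style update
        rz = rz_new
    return x

# Demo on a small SPD system standing in for mixed model equations.
rng = np.random.default_rng(1)
B = rng.normal(size=(20, 20))
A = B @ B.T + 20.0 * np.eye(20)
b = rng.normal(size=20)
x = pcg(lambda v: A @ v, b, 1.0 / np.diag(A))
residual = np.linalg.norm(A @ x - b)
```

Because only matvec products are needed, the equations never have to be held in memory as an explicit matrix, which is what makes very large data sets tractable.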
A mobile-mobile transport model for simulating reactive transport in connected heterogeneous fields
NASA Astrophysics Data System (ADS)
Lu, Chunhui; Wang, Zhiyuan; Zhao, Yue; Rathore, Saubhagya Singh; Huo, Jinge; Tang, Yuening; Liu, Ming; Gong, Rulan; Cirpka, Olaf A.; Luo, Jian
2018-05-01
Mobile-immobile transport models can be effective in reproducing heavy-tailed breakthrough curves of concentration. However, such models may not adequately describe transport along multiple flow paths with intermediate velocity contrasts in connected fields. We propose using the mobile-mobile model for simulating subsurface flow and associated mixing-controlled reactive transport in connected fields. This model includes two local concentrations, one in the fast- and the other in the slow-flow domain, which predict both the concentration mean and variance. The normalized total concentration variance within the flux is found to be a non-monotonic function of the discharge ratio with a maximum concentration variance at intermediate values of the discharge ratio. We test the mobile-mobile model for mixing-controlled reactive transport with an instantaneous, irreversible bimolecular reaction in structured and connected random heterogeneous domains, and compare the performance of the mobile-mobile to the mobile-immobile model. The results indicate that the mobile-mobile model generally predicts the concentration breakthrough curves (BTCs) of the reactive compound better. Particularly, for cases of an elliptical inclusion with intermediate hydraulic-conductivity contrasts, where the travel-time distribution shows bimodal behavior, the prediction of both the BTCs and maximum product concentration is significantly improved. Our results exemplify that the conceptual model of two mobile domains with diffusive mass transfer in between is generally good for predicting mixing-controlled reactive transport, particularly in cases where transfer in the low-conductivity zones is by slow advection rather than diffusion.
Functional Additive Mixed Models
Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja
2014-01-01
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592
Scalability Analysis and Use of Compression at the Goddard DAAC and End-to-End MODIS Transfers
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.
1998-01-01
The goal of this task is to analyze the performance of single and multiple FTP transfers between SCFs and the Goddard DAAC. We developed an analytic model to compute the performance of FTP sessions as a function of various key parameters, implemented the model as a program called FTP Analyzer, and carried out validations with real data obtained by running single and multiple FTP transfers between GSFC and the Miami SCF. The input parameters to the model include the mix of FTP sessions (scenario) and, for each FTP session, the file size. The network parameters include the round trip time, packet loss rate, the limiting bandwidth of the network connecting the SCF to a DAAC, TCP's basic timeout, TCP's Maximum Segment Size, and TCP's Maximum Receiver's Window Size. The modeling approach used consisted of modeling TCP's overall throughput, computing TCP's delay per FTP transfer, and then solving a queuing network model that includes the FTP clients and servers.
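A common way to build such a model is to cap TCP throughput by the minimum of the loss-limited rate (the well-known Mathis et al. approximation), the receiver-window limit, and the link bandwidth. The sketch below uses that generic approximation with hypothetical parameter values; it is not necessarily the exact formulation inside FTP Analyzer.

```python
import math

def tcp_throughput(mss, rtt, loss_rate, rwnd, link_bw):
    """Steady-state TCP throughput estimate in bytes/s.

    mss: maximum segment size (bytes); rtt: round-trip time (s);
    loss_rate: packet loss probability; rwnd: receiver window (bytes);
    link_bw: limiting bandwidth of the path (bytes/s).
    """
    if loss_rate > 0:
        # Mathis approximation: rate ~ (MSS/RTT) * sqrt(3/2) / sqrt(p)
        loss_limited = (mss / rtt) * math.sqrt(1.5 / loss_rate)
    else:
        loss_limited = float("inf")
    window_limited = rwnd / rtt
    return min(loss_limited, window_limited, link_bw)

def ftp_time(file_size, n_sessions, **net):
    """Approximate per-file time when n parallel sessions share the link."""
    rate = min(tcp_throughput(**net), net["link_bw"] / n_sessions)
    return file_size / rate

# Hypothetical scenario: 10 MB file, 4 parallel sessions, 100 ms RTT,
# 64 KB receiver window, 10 Mbit/s (1.25e6 B/s) link, no loss.
t = ftp_time(10e6, 4, mss=1460, rtt=0.1, loss_rate=0.0,
             rwnd=65535, link_bw=1.25e6)
```

With no loss, the single-session rate is window-limited (rwnd/RTT), while with four sessions the fair share of the link becomes the binding constraint.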
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers the carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The aforementioned optimal model was solved using commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer.
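The retailer-level inventory decisions in such models typically follow (Q, r) logic: an economic order quantity plus a reorder point with safety stock. A minimal sketch with hypothetical numbers follows; in the study itself these quantities are coupled with the location and transport-mode variables inside the mixed-integer program rather than computed in isolation.

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: Q* = sqrt(2*D*K / h)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand_mean, daily_demand_sd, lead_time_days, z):
    """Reorder point = expected lead-time demand + safety stock,
    with safety stock = z * sigma * sqrt(lead time)."""
    mu_l = daily_demand_mean * lead_time_days
    safety = z * daily_demand_sd * math.sqrt(lead_time_days)
    return mu_l + safety, safety

# Hypothetical retailer: 1000 units/yr demand, $50/order, $2/unit-yr holding;
# 10 units/day mean, sd 3, 4-day replenishment lead time, z=1.65 (~95% service).
q = eoq(1000, 50, 2)
r, ss = reorder_point(10, 3, 4, z=1.65)
```

Longer lead times raise both the expected lead-time demand and the safety stock, which is how the replenishment lead time in the model feeds back into the delay-rate effect noted in the abstract.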
Collective Interaction of a Compressible Periodic Parallel Jet Flow
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
1997-01-01
A linear instability model for multiple spatially periodic supersonic rectangular jets is solved using Floquet-Bloch theory. The disturbance environment is investigated using a two dimensional perturbation of a mean flow. For all cases large temporal growth rates are found. This work is motivated by an increase in mixing found in experimental measurements of spatially periodic supersonic rectangular jets with phase-locked screech. The results obtained in this paper suggest that phase-locked screech or edge tones may produce correlated spatially periodic jet flow downstream of the nozzles, which creates a large spanwise multi-nozzle region where a disturbance can propagate. The large temporal growth rates for eddies obtained by model calculation herein are related to the increased mixing, since eddies are the primary mechanism that transfers energy from the mean flow to the large turbulent structures. Calculations of growth rates are presented for a range of Mach numbers and nozzle spacings corresponding to experimental test conditions where screech synchronized phase locking was observed. The model may be of significant scientific and engineering value in the quest to understand and construct supersonic mixer-ejector nozzles which provide increased mixing and reduced noise.
CFD study of mixing miscible liquid with high viscosity difference in a stirred tank
NASA Astrophysics Data System (ADS)
Madhania, S.; Cahyani, A. B.; Nurtono, T.; Muharam, Y.; Winardi, S.; Purwanto, W. W.
2018-03-01
The mixing of miscible liquids with a high viscosity difference plays a crucial role even though the liquids are mutually soluble. This paper describes the mixing behaviour of the water-molasses system in a conical-bottomed cylindrical stirred tank (D = 0.28 m and H = 0.395 m) equipped with a side-entry marine propeller (d = 0.036 m) under the turbulent regime, using three-dimensional, transient CFD simulations. The objective of this work is to compare solution strategies applied in the computational analysis to capture the detailed phenomena of mixing two miscible liquids with a high viscosity difference. The four solution strategies used are the RANS Standard k-ε (SKE) turbulence model coupled with the Multiple Reference Frame (MRF) method for impeller motion, the RANS Realizable k-ε (RKE) combined with the MRF, the Large Eddy Simulation (LES) coupled with the Sliding Mesh (SM) method, and the LES-MRF combination. The transient calculations were conducted with Ansys Fluent version 17.1. The mixing behaviour and the propeller characteristics are compared and discussed in this work. The simulation results show differences in flow pattern and molasses distribution profile for each solution strategy. The variation in flow pattern across the solution strategies reflects the instability of the mixing process in the stirred tank. The LES-SM strategy captures the flow direction more realistically than the other solution strategies.
Two methods for parameter estimation using multiple-trait models and beef cattle field data.
Bertrand, J K; Kriese, L A
1990-08-01
Two methods are presented for estimating variances and covariances from beef cattle field data using multiple-trait sire models. Both methods require that the first trait have no missing records and that the contemporary groups for the second trait be subsets of the contemporary groups for the first trait; however, the second trait may have missing records. One method uses pseudo expectations involving quadratics composed of the solutions and the right-hand sides of the mixed model equations. The other method is an extension of Henderson's Simple Method to the multiple trait case. Neither of these methods requires any inversions of large matrices in the computation of the parameters; therefore, both methods can handle very large sets of data. Four simulated data sets were generated to evaluate the methods. In general, both methods estimated genetic correlations and heritabilities that were close to the Restricted Maximum Likelihood estimates and the true data set values, even when selection within contemporary groups was practiced. The estimates of residual correlations by both methods, however, were biased by selection. These two methods can be useful in estimating variances and covariances from multiple-trait models in large populations that have undergone a minimal amount of selection within contemporary groups.
Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat
2017-05-01
Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq; state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We, further, observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
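The expectation-maximization training of a mixture model can be illustrated with a toy two-component 1-D Gaussian mixture. This shows only the generic EM mechanism; Mix2 itself fits mixtures over positional fragment distributions jointly with transcript abundances, not this toy model.

```python
import math
import random

def em_two_gaussians(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative only)."""
    mu = [min(data), max(data)]   # crude but stable initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point.
        resp = []
        for x in data:
            dens = [
                w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                / math.sqrt(2 * math.pi * var[j])
                for j in range(2)
            ]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and variances.
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2
                         for r, x in zip(resp, data)) / nj + 1e-6
    return w, mu, var

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + \
       [random.gauss(10, 1) for _ in range(300)]
w, mu, var = em_two_gaussians(data)
```

The same alternation between responsibilities and parameter updates is what yields the simultaneous abundance and bias estimates described above.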
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
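The encode/decode step can be sketched with the cyclic S-matrix of order 7, which matches the seven-source setting of the simulation. The source intensities and noise level below are synthetic, and this sketch covers only the linear coding step, not the paper's noise model or genetic-algorithm search.

```python
import numpy as np

# Order-7 cyclic S-matrix: rows are cyclic shifts of a length-7 m-sequence.
# Each code turns on 4 of the 7 sources; any two rows overlap in 2 positions.
row = np.array([1, 1, 1, 0, 1, 0, 0])
S = np.array([np.roll(row, k) for k in range(7)])

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, 7)        # true per-source intensities
y_clean = S @ x                      # multiplexed measurements
x_hat = np.linalg.solve(S, y_clean)  # exact decode in the noise-free case

# With additive detector noise, each decoded value effectively averages over
# several measurements, which is the multiplex (Fellgett) advantage against
# constant noise; proportional noise erodes this advantage.
y_noisy = y_clean + rng.normal(0.0, 0.1, 7)
x_noisy = np.linalg.solve(S, y_noisy)
```

Searching over binary matrices, as the paper does, amounts to trading off this averaging gain against the extra proportional noise incurred by turning on more sources per code.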
NASA Technical Reports Server (NTRS)
Gao, Chloe Y.; Tsigaridis, Kostas; Bauer, Susanne E.
2017-01-01
The gas-particle partitioning and chemical aging of semi-volatile organic aerosol are presented in a newly developed box model scheme, and their effect on the growth, composition, and mixing state of particles is examined. The volatility basis set (VBS) framework is implemented into the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations in multiple mixing-state classes. The new scheme, MATRIX-VBS, has the potential to significantly advance the representation of organic aerosols in Earth system models by improving upon the conventional representation as non-volatile particulate organic matter, often also with an assumed fixed size distribution. We present results from idealized cases representing Beijing, Mexico City, a Finnish forest, and a southeastern US forest, and investigate the evolution of mass concentrations and volatility distributions for organic species across the gas and particle phases, as well as their mixing state among aerosol populations. Emitted semi-volatile primary organic aerosols evaporate almost completely in the intermediate-volatility range, while they remain in the particle phase in the low-volatility range. Their volatility distribution at any point in time depends on the applied emission factors, oxidation by OH radicals, and temperature. We also compare against parallel simulations with the original scheme, which represented only the particulate and non-volatile component of the organic aerosol, examining how differently the condensed-phase organic matter is distributed across the mixing states in the model. The results demonstrate the importance of representing organic aerosol as semi-volatile, and of explicitly calculating the partitioning of organic species between the gas and particulate phases.
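The partitioning calculation at the heart of any VBS scheme can be sketched as a fixed-point iteration: each volatility bin partitions to the particle phase according to its saturation concentration and the total organic aerosol mass, which itself depends on the partitioning. The bin values below are illustrative, not MATRIX-VBS inputs.

```python
# Minimal VBS gas-particle partitioning sketch (values invented): bin i
# partitions with fraction F_i = 1 / (1 + C*_i / C_OA), and C_OA depends
# on the F_i, so iterate to a self-consistent equilibrium.
cstar = [0.1, 1.0, 10.0, 100.0]   # saturation concentrations (ug/m3)
ctot  = [2.0, 2.0, 2.0, 2.0]      # total (gas + particle) mass per bin (ug/m3)

coa = 1.0                          # initial guess for particle-phase mass
for _ in range(200):
    frac = [1.0 / (1.0 + c / coa) for c in cstar]
    coa = sum(f * t for f, t in zip(frac, ctot))

print(round(coa, 3))   # total organic aerosol mass at equilibrium
```

Low-volatility bins (small C*) end up almost entirely in the particle phase, high-volatility bins almost entirely in the gas phase, matching the qualitative behavior described in the abstract.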
Traveltime-based descriptions of transport and mixing in heterogeneous domains
NASA Astrophysics Data System (ADS)
Luo, Jian; Cirpka, Olaf A.
2008-09-01
Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters including mixing-related quantities such as dispersivities and kinetic mass transfer coefficients. In most applications, breakthrough curves (BTCs) of conservative and reactive compounds are measured at only a few locations and spatially explicit models are calibrated by matching these BTCs. A common difficulty in such applications is that the individual BTCs differ too strongly to justify the assumption of spatial homogeneity, whereas the number of observation points is too small to identify the spatial distribution of the decisive parameters. The key objective of the current study is to characterize physical transport by the analysis of conservative tracer BTCs and predict the macroscopic BTCs of compounds that react upon mixing from the interpretation of conservative tracer BTCs and reactive parameters determined in the laboratory. We do this in the framework of traveltime-based transport models which do not require spatially explicit, costly aquifer characterization. By considering BTCs of a conservative tracer measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the traveltime-based framework, the BTC of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct traveltime value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of traveltimes, which also determines the weights associated with each stream tube. 
Key issues in using the traveltime-based framework include the description of mixing mechanisms and the estimation of the traveltime distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach for determining the traveltime distribution, given a BTC integrated over an observation plane and estimated mixing parameters. This approach is superior to fitting parametric models in cases where the true traveltime distribution exhibits multiple peaks or long tails. We demonstrate that multiple combinations of mixing parameters and traveltime distributions can fit the conservative BTCs and describe their tailing. A reactive transport case of a dual Michaelis-Menten problem demonstrates that the reactive mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated from local BTCs.
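The streamtube picture above can be sketched numerically: the plane-integrated BTC is a traveltime-weighted average of one-dimensional advective-dispersive BTCs, one per non-interacting streamtube. The traveltime ensemble, weights, and Peclet number below are illustrative, not the paper's data.

```python
import math

# Each streamtube with traveltime tau contributes a 1-D advective-dispersive
# breakthrough curve; the observation-plane BTC is their weighted average.
def adv_disp_btc(t, tau, pe=50.0):
    # 1-D solution in traveltime form, parameterized by a Peclet number pe
    if t <= 0:
        return 0.0
    arg = (tau - t) / math.sqrt(4.0 * tau * t / pe)
    return 0.5 * math.erfc(arg)

taus    = [0.5, 1.0, 2.0, 4.0]       # discrete traveltime ensemble
weights = [0.1, 0.5, 0.3, 0.1]       # traveltime distribution (sums to 1)

t = 1.0
mixed = sum(w * adv_disp_btc(t, tau) for w, tau in zip(weights, taus))
print(round(mixed, 3))               # plane-integrated concentration at t = 1
```

Spreading enters through the spread of `taus` and `weights`; mixing enters through the within-streamtube dispersion controlled by `pe`, mirroring the distinction the abstract draws.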
NASA Astrophysics Data System (ADS)
Eckert, Jerry B.; Wang, Erda
1993-02-01
Farms in NE Conejos County, Colorado, are characterized by limited resources, uncertain surface flow irrigation systems, and mixed crop-livestock enterprise combinations which are dependent on public grazing resources. To model decision making on these farms, a linear program is developed stressing enterprise choices under conditions of multiple resource constraints. Differential access to grazing resources and irrigation water is emphasized in this research. Regarding the water resource, the model reflects farms situated alternatively on high-, medium-, and low-priority irrigation ditches within the Alamosa-La Jara river system, each with and without supplemental pumping. Differences are found in optimum enterprise mixes, net returns, choice of cropping technology, level of marketings, and other characteristics in response to variations in the availability of irrigation water. Implications are presented for alternative improvement strategies.
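A two-enterprise toy version of such a farm linear program can be solved by hand, since with two decision variables the optimum sits at a vertex of the feasible polygon. The crops, prices, and constraint coefficients below are invented for illustration and are not the paper's model.

```python
from itertools import combinations

# Hypothetical two-enterprise farm LP:
#   maximize 120x + 90y  subject to  x + y <= 100 (acres),
#   3x + y <= 180 (irrigation shares), x >= 0, y >= 0.
# Enumerate intersections of the constraint lines and keep the best
# feasible vertex.
lines = [(1, 1, 100), (3, 1, 180), (1, 0, 0), (0, 1, 0)]  # a*x + b*y = c

def feasible(x, y, eps=1e-9):
    return (x >= -eps and y >= -eps
            and x + y <= 100 + eps and 3 * x + y <= 180 + eps)

best = (0.0, 0.0, 0.0)  # (net return, x, y)
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                      # parallel constraint lines
    x = (c1 * b2 - c2 * b1) / det     # Cramer's rule for the 2x2 system
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        best = max(best, (120 * x + 90 * y, x, y))

print(best)  # (10200.0, 40.0, 60.0): optimal mix of the two enterprises
```

Tightening the water constraint (the 180) and re-solving reproduces, in miniature, the paper's sensitivity of the optimal enterprise mix to irrigation availability.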
Ice cream structural elements that affect melting rate and hardness.
Muse, M R; Hartel, R W
2004-01-01
Statistical models were developed to reveal which structural elements of ice cream affect melting rate and hardness. Ice creams were frozen in a batch freezer with three types of sweetener, three levels of the emulsifier polysorbate 80, and two different draw temperatures to produce ice creams with a range of microstructures. Ice cream mixes were analyzed for viscosity, and finished ice creams were analyzed for air cell and ice crystal size, overrun, and fat destabilization. The ice phase volume of each ice cream was calculated based on the freezing point of the mix. Melting rate and hardness of each hardened ice cream were measured and correlated with the structural attributes by using analysis of variance and multiple linear regression. Fat destabilization, ice crystal size, and the consistency coefficient of the mix were found to affect the melting rate of ice cream, whereas hardness was influenced by ice phase volume, ice crystal size, overrun, fat destabilization, and the rheological properties of the mix.
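The multiple linear regression underlying such an analysis fits coefficients by solving the normal equations. The sketch below regresses a response on two structural predictors; the data are synthetic stand-ins, not the study's measurements.

```python
# Multiple linear regression via the normal equations (X^T X) beta = X^T y,
# solved by Gauss-Jordan elimination. Rows are synthetic:
# (ice_phase_volume, overrun, hardness).
rows = [
    (0.40, 0.80, 5.1), (0.45, 0.90, 5.0), (0.50, 0.80, 6.2),
    (0.55, 1.00, 5.9), (0.60, 0.70, 7.4), (0.65, 0.90, 7.1),
]
X = [[1.0, r[0], r[1]] for r in rows]     # intercept + two predictors
y = [r[2] for r in rows]

n = 3
A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
     + [sum(X[k][i] * y[k] for k in range(len(X)))] for i in range(n)]
for i in range(n):
    p = max(range(i, n), key=lambda r: abs(A[r][i]))   # partial pivoting
    A[i], A[p] = A[p], A[i]
    for r in range(n):
        if r != i:
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
beta = [A[i][n] / A[i][i] for i in range(n)]
print([round(b, 2) for b in beta])  # intercept, ice-phase, overrun effects
```

With these synthetic data the ice-phase coefficient comes out positive, echoing the reported direction of the hardness effect.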
Yang, Yingbao; Li, Xiaolong; Pan, Xin; Zhang, Yong; Cao, Chen
2017-01-01
Many downscaling algorithms have been proposed to address the issue of coarse-resolution land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled by a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window. CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using the LST derived from four Landsat 8 thermal imageries of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with coefficient of determination and root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, our approach shows similar accuracy and availability in all seasons. The best (worst) availability occurred in regions of vegetation (water). Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and the application of our model at middle and low spatial resolutions.
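The distinctive step here is the adaptive factor selection: within each window, only scale factors whose correlation with LST clears a threshold enter the local regression. The sketch below shows that selection step with an invented window of data and an invented threshold.

```python
import math

# Within one moving window, keep only scale factors whose |correlation|
# with LST exceeds a threshold before fitting the local regression.
# All values and the 0.5 threshold are hypothetical.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

lst = [301.2, 302.8, 304.1, 303.0, 305.5, 306.2]     # window LST (K)
factors = {
    "NDVI":   [0.61, 0.55, 0.48, 0.52, 0.40, 0.37],  # vegetation index
    "NDBI":   [0.10, 0.14, 0.22, 0.18, 0.30, 0.33],  # built-up index
    "albedo": [0.21, 0.20, 0.22, 0.21, 0.20, 0.22],  # nearly uninformative
}

threshold = 0.5
selected = [k for k, v in factors.items() if abs(pearson(lst, v)) > threshold]
print(selected)  # NDVI and NDBI pass; albedo is dropped in this window
```

The surviving factors would then feed a multiple linear regression fitted locally, window by window.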
Nakamura, Shinichiro; Kondo, Yasushi; Nakajima, Kenichi; Ohno, Hajime; Pauliuk, Stefan
2017-09-05
Alloying metals are indispensable ingredients of high quality alloy steel such as austenitic stainless steel, the cyclical use of which is vital for sustainable resource management. Under the current practice of recycling, however, different metals are likely to be mixed in an uncontrolled manner, resulting in function losses and dissipation of metals with distinctive functions, and in the contamination of recycled steels. The latter could result in dilution loss, if metal scrap needed dilution with virgin iron to reduce the contamination below critical levels. Management of these losses resulting from mixing in repeated recycling of metals requires tracking of metals over multiple life cycles of products with compositional details. A new model (MaTrace-alloy) was developed that tracks the fate of metals embodied in each of products over multiple life cycles of products, involving accumulation, discard, and recycling, with compositional details at the level of both alloys and products. The model was implemented for the flow of Cr and Ni in the Japanese steel cycle involving 27 steel species and 115 final products. It was found that, under a high level of scrap sorting, greater than 70% of the initial functionality of Cr and Ni could be retained over a period of 100 years, whereas under a poor level of sorting, it could plunge to less than 30%, demonstrating the relevance of waste management technology in circular economy policies.
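The headline numbers admit a back-of-envelope reading: if a fixed fraction of Cr/Ni functionality survives each product cycle, retention compounds geometrically over repeated recycling. The per-cycle rates below are invented to echo the reported 100-year outcomes, and this is not the MaTrace-alloy model itself.

```python
# Geometric-retention cartoon (rates invented): `keep` is the fraction of
# alloying-metal functionality that survives one ~10-year product cycle
# (sorting quality folded in); retention compounds over repeated cycles.
def retained(years, keep, cycle=10):
    return keep ** (years // cycle)

print(round(retained(100, 0.97), 2))  # good sorting: ~0.74 retained
print(round(retained(100, 0.88), 2))  # poor sorting: ~0.28 retained
```

A few percentage points of per-cycle sorting loss separate the "greater than 70%" and "less than 30%" century-scale outcomes, which is why the abstract stresses waste-management technology.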
NASA Astrophysics Data System (ADS)
Nel, L.; Strydom, N. A.; Perissinotto, R.; Adams, J. B.; Lemley, D. A.
2017-10-01
Estuarine marine-dependent species, such as Rhabdosargus holubi, depend greatly on the structured sheltered environments and important feeding areas provided by estuaries. In this study, we investigate the ecological feeding niches of the estuarine marine-dependent sparid, R. holubi, using conventional stomach contents and stable isotope methods (δ13C and δ15N signatures). The study was carried out in five temperate estuaries to understand how fish feed across multiple intertidal vegetated habitats. These habitats included the submerged seagrass, Zostera capensis, and two previously unexplored habitats, the small intertidal cordgrass, Spartina maritima, and the common reed, Phragmites australis. The diet varied among habitats, estuaries and fish sizes, and the data consistently confirmed an omnivorous diet with ontogenetic niche shifts. Stomach contents revealed the importance of benthic prey within both the S. maritima and P. australis habitats in the absence of large intertidal vegetation, available during low tides. Similarly, isotopic mixing models showed that R. holubi from these habitats have a greater isotopic niche compared to the Z. capensis habitat, due to their limited availability during the falling tide, suggesting migration between available habitats. Stable isotopes confirmed that R. holubi actively feeds on the epiphytic algae (especially diatoms) covering the leaves and stalks of plant matter, as supported by Bayesian mixing models. These findings add to the current knowledge regarding habitat partitioning in multiple aquatic vegetation types critical to fish ecology and the effective management and conservation of estuaries.
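The simplest member of the mixing-model family used here is the two-source, one-isotope linear model: the consumer's signature is a weighted average of source signatures after trophic discrimination, which can be solved directly for the source proportion. The δ13C values and discrimination factor below are illustrative, not the study's data.

```python
# Two-source linear isotope mixing (values hypothetical):
#   p*(s1 + tdf) + (1 - p)*(s2 + tdf) = consumer  =>  solve for p.
d13c_consumer = -16.0
d13c_seagrass, d13c_epiphyte = -10.0, -19.0
tdf = 1.0   # trophic discrimination factor added to each source

s1, s2 = d13c_seagrass + tdf, d13c_epiphyte + tdf
p = (d13c_consumer - s2) / (s1 - s2)
print(round(p, 2))  # 0.22: fraction of diet from the seagrass end-member
```

Bayesian mixing models such as those cited in the abstract generalize this to many sources and isotopes, returning posterior distributions over the proportions rather than a point estimate.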
Sicras-Mainar, Antoni; Velasco-Velasco, Soledad; Navarro-Artieda, Ruth; Blanca Tamayo, Milagrosa; Aguado Jodar, Alba; Ruíz Torrejón, Amador; Prados-Torres, Alexandra; Violan-Fors, Concepción
2012-06-01
To compare three methods of measuring multiple morbidity according to the use of health resources (cost of care) in primary healthcare (PHC). Retrospective study using computerized medical records. Thirteen PHC teams in Catalonia (Spain). Assigned patients requiring care in 2008. The variables were socio-demographic characteristics, co-morbidity and costs. The methods compared were: a) the Combined Comorbidity Index (CCI), an index developed from the scores of acute and chronic episodes; b) the Charlson Index (ChI); and c) Adjusted Clinical Groups case-mix resource use bands (RUB). The cost model was constructed by differentiating between fixed (operational) and variable costs. Three multiple linear regression models were developed to assess the explanatory power of each measurement of co-morbidity, compared using the coefficient of determination (R(2)), p<.05. The study included 227,235 patients. The mean unit cost was €654.2. The CCI explained R(2)=50.4%, the ChI R(2)=29.2% and the RUB R(2)=39.7% of the variability of the cost. The behaviour of the CCI is acceptable, albeit with low scores (1 to 3 points) showing inconclusive results. The CCI may be a simple method of predicting PHC costs in routine clinical practice. If confirmed, these results will allow improvements in the comparison of the case-mix. Copyright © 2011 Elsevier España, S.L. All rights reserved.
Plasmonic Metallurgy Enabled by DNA
Ross, Michael B.; Ku, Jessie C.; Lee, Byeongdu; ...
2016-02-05
In this study, mixed silver and gold plasmonic nanoparticle architectures are synthesized using DNA-programmable assembly, unveiling exquisitely tunable optical properties that are predicted and explained both by effective thin-film models and explicit electrodynamic simulations. These data demonstrate that the manner and ratio with which multiple metallic components are arranged can greatly alter optical properties, including tunable color and asymmetric reflectivity behavior of relevance for thin-film applications.
Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.
Covarrubias-Pazaran, Giovanny
2016-01-01
Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects unfeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: average information (AI), Expectation-Maximization (EM) and Efficient Mixed Model Association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to those of other software, but the analysis was faster than Bayesian counterparts by hours to days. In addition, the ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by putting together some of the most efficient algorithms to fit models in a user-friendly environment such as R.
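The quantity such software estimates, variance components, can be illustrated with the simplest possible case: a balanced one-way random-effects model solved by the method of moments. This is a far simpler relative of the AI/EM/EMMA REML machinery in sommer, and the genotype groups below are synthetic.

```python
# Method-of-moments variance components for a balanced one-way
# random-effects model: phenotype = mean + genotype effect + error.
# Data are synthetic replicated phenotypes for four genotypes.
groups = [
    [10.1, 10.4, 9.8],    # genotype A
    [12.0, 11.6, 12.3],   # genotype B
    [9.0, 9.5, 9.2],      # genotype C
    [11.1, 11.4, 10.8],   # genotype D
]
k, n = len(groups), len(groups[0])
grand = sum(sum(g) for g in groups) / (k * n)
means = [sum(g) / n for g in groups]

msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)   # between groups
msw = sum((x - m) ** 2
          for g, m in zip(groups, means) for x in g) / (k * (n - 1))
sigma2_e = msw                       # residual variance
sigma2_g = max((msb - msw) / n, 0)   # genetic (group) variance component
print(round(sigma2_g, 2), round(sigma2_e, 2))
```

The ratio sigma2_g / (sigma2_g + sigma2_e) is the broad-sense heritability of this toy trait; REML generalizes the same decomposition to unbalanced data and structured covariances.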
Decadal change of the south Atlantic ocean Angola-Benguela frontal zone since 1980
NASA Astrophysics Data System (ADS)
Vizy, Edward K.; Cook, Kerry H.; Sun, Xiaoming
2018-01-01
High-resolution simulations with a regional atmospheric model coupled to an intermediate-level mixed layer ocean model, along with multiple atmospheric and oceanic reanalyses, are analyzed to understand how and why the Angola-Benguela Frontal Zone (ABFZ) has changed since 1980. A southward shift of 0.05°-0.55° latitude per decade in the annual mean ABFZ position accompanied by an intensification of +0.05 to +0.13 K/100-km per decade has occurred as ocean mixed layer temperatures have warmed (cooled) equatorward (poleward) of the front over the 1980-2014 period. These changes are captured in a 35-year model integration. The oceanic warming north of the ABFZ is associated with a weakening of vertical entrainment, reduced cooling associated with vertical diffusion, and a deepening of the mixed layer along the Angola coast. These changes coincide with a steady weakening of the onshore atmospheric flow as the zonal pressure gradient between the eastern equatorial Atlantic and the Congo Basin weakens. Oceanic cooling poleward of the ABFZ is primarily due to enhanced advection of cooler water from the south and east, increased cooling by vertical diffusion, and shoaling of the mixed layer depth. In the atmosphere, these changes are related to an intensification and poleward shift of the South Atlantic sub-tropical anticyclone as surface winds, and hence the westward mixed layer ocean currents, intensify in the Benguela upwelling region along the Namibian coast. With a few caveats, these findings demonstrate that air/sea interactions play a prominent role in influencing the observed decadal variability of the ABFZ over the southeastern Atlantic since 1980.
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient framework compares the multi-route formulation against the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine whether any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when the perimeter taxiway is used. When the departure aircraft are given an optimal and fixed takeoff sequence, cumulative arrival taxi time savings in the multi-route formulation can be as high as 3.6 hours relative to the single route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single route formulation, but the average arrival taxi time is significantly decreased.
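The routing decision at the core of the formulation can be shown with a brute-force toy: each arrival picks the runway-crossing route or the longer perimeter route, and crossings must keep a separation from scheduled departures. All times, separations, and route lengths below are invented; a real MILP solver replaces this enumeration at scale.

```python
from itertools import product

# Tiny brute-force analogue of the multi-route taxi problem: minimize total
# taxi time over the route choice of each arrival, holding at the runway
# crossing whenever a departure is too close in time.
departures = [2, 5]                  # fixed runway-use times for departures
arrivals = [0, 1, 3]                 # runway-exit (release) times of arrivals
CROSS, PERIM = 4, 6                  # unimpeded taxi times per route

def taxi_time(release, route):
    t = release + (CROSS if route == "cross" else PERIM)
    if route == "cross":
        # hold until the crossing instant is >= 1 time unit from departures
        while any(abs(t - d) < 1 for d in departures):
            t += 1                   # each unit of holding adds taxi time
    return t - release

best = min(
    (sum(taxi_time(r, route) for r, route in zip(arrivals, routes)), routes)
    for routes in product(["cross", "perim"], repeat=len(arrivals))
)
print(best)  # (13, ('cross', 'cross', 'cross')) for these toy numbers
```

With these numbers the crossing route wins for every arrival despite one holding delay; pushing a departure next to each crossing instant flips arrivals onto the perimeter route, mirroring the paper's trade-off.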
Calculation of Disease Dynamics in a Population of Households
Ross, Joshua V.; House, Thomas; Keeling, Matt J.
2010-01-01
Early mathematical representations of infectious disease dynamics assumed a single, large, homogeneously mixing population. Over the past decade there has been growing interest in models consisting of multiple smaller subpopulations (households, workplaces, schools, communities), with the natural assumption of strong homogeneous mixing within each subpopulation, and weaker transmission between subpopulations. Here we consider a model of SIRS (susceptible-infectious-recovered-susceptible) infection dynamics in a very large (assumed infinite) population of households, with the simplifying assumption that each household is of the same size (although all methods may be extended to a population with a heterogeneous distribution of household sizes). For this households model we present efficient methods for studying several quantities of epidemiological interest: (i) the threshold for invasion; (ii) the early growth rate; (iii) the household offspring distribution; (iv) the endemic prevalence of infection; and (v) the transient dynamics of the process. We utilize these methods to explore a wide region of parameter space appropriate for human infectious diseases. We then extend these results to consider the effects of more realistic gamma-distributed infectious periods. We discuss how all these results differ from standard homogeneous-mixing models and assess the implications for the invasion, transmission and persistence of infection. The computational efficiency of the methodology presented here will hopefully aid in the parameterisation of structured models and in the evaluation of appropriate responses for future disease outbreaks.
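The homogeneous-mixing baseline against which the household model is compared is the standard SIRS system, which settles to an endemic equilibrium with susceptible fraction 1/R0. The rates below are illustrative, and the integration is a plain Euler scheme, not the authors' method.

```python
# Homogeneous-mixing SIRS baseline (illustrative rates):
#   dS/dt = -beta*S*I + w*R,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I - w*R
beta, gamma, wane = 2.0, 1.0, 0.1   # transmission, recovery, waning rates
s, i, r = 0.99, 0.01, 0.0
dt = 0.001
for _ in range(100000):             # integrate to t = 100
    ds = -beta * s * i + wane * r
    di = beta * s * i - gamma * i
    dr = gamma * i - wane * r
    s, i, r = s + dt * ds, i + dt * di, r + dt * dr

# Endemic equilibrium of SIRS: S* = gamma/beta = 1/R0
print(round(s, 2), round(i, 2))     # s approaches 1/R0 = 0.5
```

Household structure modifies both the invasion threshold and this endemic level, which is exactly the comparison quantities (i) and (iv) in the abstract formalize.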
Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nag, Ambarish; St. John, Peter C.; Crowley, Michael F.
Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass, namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetate kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production.
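The knockout logic can be reduced to a branch-point cartoon: carbon entering at a branch splits between the succinate pathway and the acetate pathway (PTA/ACK), and zeroing the acetate branch forces all flux through succinate at steady state. The fluxes and the wild-type split below are invented; the actual work uses genome-scale flux balance analysis, which solves a linear program.

```python
# Branch-point cartoon of the knockout prediction (numbers hypothetical):
# steady state requires uptake = v_succinate + v_acetate, with the acetate
# branch capped by enzyme capacity. Knocking out PTA or ACK sets that cap
# to zero, redirecting all carbon to succinate.
uptake = 10.0                         # fixed substrate flux (mmol/gDW/h)

def succinate_flux(acetate_capacity):
    v_acetate = min(acetate_capacity, 0.6 * uptake)  # wild-type preference
    return uptake - v_acetate

print(succinate_flux(acetate_capacity=10.0))  # 4.0: wild type
print(succinate_flux(acetate_capacity=0.0))   # 10.0: PTA/ACK knockout
```

Real FBA adds a growth-viability constraint, which is why the paper reports knockouts that maximize succinate "while maintaining growth viability" rather than the unconstrained redirection shown here.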
Combined optimization model for sustainable energization strategy
NASA Astrophysics Data System (ADS)
Abtew, Mohammed Seid
Access to energy is a foundation to establish a positive impact on multiple aspects of human development. Both developed and developing countries have a common concern of achieving a sustainable energy supply to fuel economic growth and improve the quality of life with minimal environmental impacts. The Least Developing Countries (LDCs), however, have different economic, social, and energy systems. Prevalence of power outage, lack of access to electricity, structural dissimilarity between rural and urban regions, and traditional fuel dominance for cooking and the resultant health and environmental hazards are some of the distinguishing characteristics of these nations. Most energy planning models have been designed for developed countries' socio-economic demographics and have missed the opportunity to address special features of the poor countries. An improved mixed-integer programming energy-source optimization model is developed to address limitations associated with using current energy optimization models for LDCs, tackle the development of sustainable energization strategies, and ensure diversification and risk management provisions in the selected energy mix. The model predicted a shift from a traditional-fuel-reliant and weather-vulnerable energy source mix to a least-cost and reliable modern clean energy portfolio, a climb up the energy ladder, and scored multifaceted economic, social, and environmental benefits. At the same time, it represented a transition strategy that evolves to increasingly cleaner energy technologies with growth, as opposed to an expensive solution that leapfrogs immediately to the cleanest possible, overreaching technologies.
Multivariate meta-analysis using individual participant data.
Riley, R D; Price, M J; Jackson, D; Wardle, M; Gueyffier, F; Wang, J; Staessen, J A; White, I R
2015-06-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment-covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. © 2014 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
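The bootstrap route to within-study correlations mentioned above can be sketched directly: resample patients with replacement, recompute the two outcome effects (here simply their correlation) per resample, and read off percentile intervals. The patient data below are synthetic, not the hypertension trials' IPD.

```python
import random

# Bootstrap estimate of a within-study correlation between two outcomes
# (synthetic IPD): resample patients, recompute the correlation, and take
# percentile bounds across resamples.
random.seed(1)
sbp = [150, 142, 160, 155, 138, 148, 162, 145]   # outcome 1 per patient
dbp = [95, 88, 100, 97, 85, 92, 101, 90]         # outcome 2 per patient

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    if da == 0 or db == 0:
        return 0.0                   # degenerate resample: no variance
    return num / (da * db)

boot = []
for _ in range(2000):
    idx = [random.randrange(len(sbp)) for _ in sbp]
    boot.append(pearson([sbp[i] for i in idx], [dbp[i] for i in idx]))

boot.sort()
print(round(boot[50], 2), round(boot[1949], 2))  # ~95% percentile interval
```

The point estimate from the original sample, together with such intervals, is what feeds the within-study correlation slots of the multivariate meta-analysis model.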
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhien
2010-06-29
The project is mainly focused on the characterization of cloud macrophysical and microphysical properties, especially for mixed-phase clouds and middle level ice clouds, by combining radar, lidar, and radiometer measurements available from the ACRF sites. First, an advanced mixed-phase cloud retrieval algorithm will be developed to cover all mixed-phase clouds observed at the ACRF NSA site. The algorithm will be applied to the ACRF NSA observations to generate a long-term arctic mixed-phase cloud product for model validations and arctic mixed-phase cloud process studies. To improve the representation of arctic mixed-phase clouds in GCMs, an advanced understanding of mixed-phase cloud processes is needed. By combining retrieved mixed-phase cloud microphysical properties with in situ data and large-scale meteorological data, the project aims to better understand the generation of ice crystals in supercooled water clouds, the maintenance mechanisms of the arctic mixed-phase clouds, and their connections with large-scale dynamics. The project will try to develop a new retrieval algorithm to study more complex mixed-phase clouds observed at the ACRF SGP site. Compared with optically thin ice clouds, optically thick middle level ice clouds are less studied because of limited available tools. The project will develop a new two-wavelength radar technique for optically thick ice cloud study at the SGP site by combining the MMCR with the W-band radar measurements. With this new algorithm, the SGP site will have a better capability to study all ice clouds. Another area of the proposal is to generate a long-term cloud type classification product for the multiple ACRF sites.
The cloud type classification product will not only facilitate the generation of the integrated cloud product by applying different retrieval algorithms to different types of clouds operationally, but will also support other research to better understand cloud properties and to validate model simulations. The ultimate goal is to develop the cloud classification algorithm into a VAP (value-added product).
Campbell, Ellsworth M.; Chao, Lin
2014-01-01
The evolution of antibiotic resistance in microbes poses one of the greatest challenges to the management of human health. Because addressing the problem experimentally has been difficult, research on strategies to slow the evolution of resistance through the rational use of antibiotics has resorted to mathematical and computational models. However, despite many advances, several questions remain unsettled. Here we present a population model for rational antibiotic usage by adding three key features that have been overlooked: 1) the maximization of the frequency of uninfected patients in the human population rather than the minimization of antibiotic resistance in the bacterial population, 2) the use of cocktails containing antibiotic pairs, and 3) the imposition of tradeoff constraints on bacterial resistance to multiple drugs. Because of tradeoffs, bacterial resistance does not evolve directionally and the system reaches an equilibrium state. When considering the equilibrium frequency of uninfected patients, both cycling and mixing improve upon single-drug treatment strategies. Mixing outperforms optimal cycling regimens. Cocktails further improve upon aforementioned strategies. Moreover, conditions that increase the population frequency of uninfected patients also increase the recovery rate of infected individual patients. Thus, a rational strategy does not necessarily result in a tragedy of the commons because benefits to the individual patient and general public are not in conflict. Our identification of cocktails as the best strategy when tradeoffs between multiple-resistance are operating could also be extended to other host-pathogen systems. Cocktails or other multiple-drug treatments are additionally attractive because they allow re-using antibiotics whose utility has been negated by the evolution of single resistance.
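The cocktail argument rests on the tradeoff constraint: if a strain can resist drug A or drug B but not both, a two-drug cocktail clears every infection, while a single drug (or a randomly assigned drug) fails whenever it matches the strain's resistance. The cartoon below makes only that static point; the population-level advantages of mixing over cycling in the paper arise from dynamics this sketch does not model. All numbers are invented.

```python
import random

# Static cartoon of the tradeoff-constrained cocktail advantage: strains
# resist exactly one of two drugs, so the A+B cocktail always clears.
random.seed(0)
strains = [random.choice(["resists_A", "resists_B"]) for _ in range(10000)]

def cleared(strain, drugs):
    # infection clears if any administered drug is one the strain cannot resist
    return any(strain != "resists_" + d for d in drugs)

single = sum(cleared(s, ["A"]) for s in strains) / len(strains)
mixing = sum(cleared(s, [random.choice(["A", "B"])])
             for s in strains) / len(strains)
cocktail = sum(cleared(s, ["A", "B"]) for s in strains) / len(strains)
print(round(single, 2), round(mixing, 2), cocktail)  # cocktail clears all
```

Single-drug and randomly mixed assignment each clear about half of infections in this cartoon; the cocktail clears them all, which is the mechanism behind the paper's ranking of cocktails above cycling and mixing.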
Predictive Inference Using Latent Variables with Covariates*
Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.
2014-01-01
Plausible Values (PVs) are a standard multiple imputation tool for the analysis of large-scale education survey data that measure latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally generated PV methodology and find it applies with greater generality than previously shown. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts' model. We offer an alternative approach that avoids these biases, based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627
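The standard PV workflow this abstract builds on treats plausible values as multiple imputations and pools per-PV estimates with Rubin's rules. The sketch below is a minimal, hypothetical illustration of that pooling step (the estimates and variances are invented), not the MESE model itself:

```python
import math

def combine_plausible_values(estimates, variances):
    """Pool per-plausible-value estimates with Rubin's multiple-imputation rules."""
    m = len(estimates)
    qbar = sum(estimates) / m                                # pooled point estimate
    ubar = sum(variances) / m                                # mean within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)    # between-imputation variance
    total_var = ubar + (1 + 1 / m) * b                       # Rubin's total variance
    return qbar, math.sqrt(total_var)

# Five hypothetical regression coefficients, one per plausible-value draw.
est, se = combine_plausible_values(
    estimates=[0.52, 0.48, 0.55, 0.50, 0.45],
    variances=[0.010, 0.012, 0.009, 0.011, 0.010])
```

Because the between-imputation term inflates the pooled variance, the resulting standard error is larger than any single-PV analysis would report, which is exactly the measurement-error correction at stake in the abstract.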
The Mediating Effect of Context Variation in Mixed Practice for Transfer of Basic Science
ERIC Educational Resources Information Center
Kulasegaram, Kulamakan; Min, Cynthia; Howey, Elizabeth; Neville, Alan; Woods, Nicole; Dore, Kelly; Norman, Geoffrey
2015-01-01
Applying a previously learned concept to a novel problem is an important but difficult process called transfer. Practicing multiple concepts together (mixed practice mode) has been shown superior to practicing concepts separately (blocked practice mode) for transfer. This study examined the effect of single and multiple practice contexts for both…
ERIC Educational Resources Information Center
Sheffield, Caroline
2011-01-01
This mixed-methods, multiple-case study explored middle school social studies teachers' instructional use of digital technology at three suburban middle schools in a large Florida school…
Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel
2018-02-27
Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices; and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by many genome-wide association study (GWAS) software. To address the aforementioned limitations, we developed a new R package lme4qtl as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl .
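The core idea lme4qtl implements, random effects governed by a user-supplied covariance such as a kinship matrix, reduces for fixed variance components to generalized least squares. The numpy sketch below illustrates that reduction on toy data; it is not the lme4qtl API, and the kinship matrix and variance components are assumed for illustration:

```python
import numpy as np

def gls_fit(y, X, V):
    """Generalized least squares with known covariance V: the fixed-effects
    solve inside an LMM once the variance components are fixed."""
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# Toy data: four individuals forming two related pairs.
K = np.array([[1.0, 0.5, 0.0, 0.0],      # kinship-like relatedness matrix
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.5],
              [0.0, 0.0, 0.5, 1.0]])
V = 0.6 * K + 0.4 * np.eye(4)            # assumed genetic + residual variance mix
X = np.array([[1.0, 0.0],                # intercept + SNP dosage
              [1.0, 1.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([0.1, 0.9, 1.1, 2.0])
beta = gls_fit(y, X, V)
```

In practice lme4qtl also estimates the variance components themselves and exploits sparsity in V; the point here is only how a custom covariance enters the fixed-effects solve.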
Schetter, Timothy A; Walters, Timothy L; Root, Karen V
2013-09-01
Impacts of human land use pose an increasing threat to global biodiversity. Resource managers must respond rapidly to this threat by assessing existing natural areas and prioritizing conservation actions across multiple spatial scales. Plant species richness is a useful measure of biodiversity but typically can only be evaluated on small portions of a given landscape. Modeling relationships between spatial heterogeneity and species richness may allow conservation planners to make predictions of species richness patterns within unsampled areas. We utilized a combination of field data, remotely sensed data, and landscape pattern metrics to develop models of native and exotic plant species richness at two spatial extents (60- and 120-m windows) and at four ecological levels for northwestern Ohio's Oak Openings region. Multiple regression models explained 37-77 % of the variation in plant species richness. These models consistently explained more variation in exotic richness than in native richness. Exotic richness was better explained at the 120-m extent while native richness was better explained at the 60-m extent. Land cover composition of the surrounding landscape was an important component of all models. We found that percentage of human-modified land cover (negatively correlated with native richness and positively correlated with exotic richness) was a particularly useful predictor of plant species richness and that human-caused disturbances exert a strong influence on species richness patterns within a mixed-disturbance oak savanna landscape. Our results emphasize the importance of using a multi-scale approach to examine the complex relationships between spatial heterogeneity and plant species richness.
A Spreadsheet for the Mixing of a Row of Jets with a Confined Crossflow
NASA Technical Reports Server (NTRS)
Holderman, J. D.; Smith, T. D.; Clisset, J. R.; Lear, W. E.
2005-01-01
An interactive computer code, written with a readily available software program, Microsoft Excel (Microsoft Corporation, Redmond, WA), is presented which displays 3-D oblique plots of a conserved scalar distribution downstream of jets mixing with a confined crossflow, for a single row, double rows, or opposed rows of jets, with or without flow area convergence and/or a non-uniform crossflow scalar distribution. This project used a previously developed empirical model of jets mixing in a confined crossflow to create a Microsoft Excel spreadsheet that can output the profiles of a conserved scalar for jets injected into a confined crossflow, given several input variables. The program uses multiple spreadsheets in a single Microsoft Excel notebook to carry out the modeling. The first sheet contains the main program, controls for the type of problem to be solved, and convergence criteria; it also provides for input of the specific geometry and flow conditions. The second sheet presents the results calculated with this routine to show the effects of varying flow and geometric parameters on the mixing. Comparisons are also made between results from the version of the empirical correlations implemented in the spreadsheet and the versions originally written in Applesoft BASIC (Apple Computer, Cupertino, CA) in the 1980s.
A Spreadsheet for the Mixing of a Row of Jets with a Confined Crossflow. Supplement
NASA Technical Reports Server (NTRS)
Holderman, J. D.; Smith, T. D.; Clisset, J. R.; Lear, W. E.
2005-01-01
An interactive computer code, written with a readily available software program, Microsoft Excel (Microsoft Corporation, Redmond, WA), is presented which displays 3-D oblique plots of a conserved scalar distribution downstream of jets mixing with a confined crossflow, for a single row, double rows, or opposed rows of jets, with or without flow area convergence and/or a non-uniform crossflow scalar distribution. This project used a previously developed empirical model of jets mixing in a confined crossflow to create a Microsoft Excel spreadsheet that can output the profiles of a conserved scalar for jets injected into a confined crossflow, given several input variables. The program uses multiple spreadsheets in a single Microsoft Excel notebook to carry out the modeling. The first sheet contains the main program, controls for the type of problem to be solved, and convergence criteria; it also provides for input of the specific geometry and flow conditions. The second sheet presents the results calculated with this routine to show the effects of varying flow and geometric parameters on the mixing. Comparisons are also made between results from the version of the empirical correlations implemented in the spreadsheet and the versions originally written in Applesoft BASIC (Apple Computer, Cupertino, CA) in the 1980s.
NASA Astrophysics Data System (ADS)
Garambois, Pierre; Besset, Sebastien; Jézéquel, Louis
2015-07-01
This paper presents a methodology for the multi-objective (MO) shape optimization of plate structures under stress criteria, based on a mixed Finite Element Model (FEM) enhanced with a sub-structuring method. The optimization is performed with a classical Genetic Algorithm (GA) based on Pareto-optimal solutions and considers thickness distribution parameters and antagonistic objectives, among them stress criteria. We implement a displacement-stress Dynamic Mixed FEM (DM-FEM) for plate structure vibration analysis. Such a model gives privileged access to the stress within the plate structure compared to a primal classical FEM, and features a linear dependence on the thickness parameters. A sub-structuring reduction method is also applied in order to reduce the size of the mixed FEM and split the given structure into smaller ones with their own thickness parameters. Combined, these methods enable a fast and stress-wise efficient structural analysis, and improve the performance of the repetitive GA. A few cases of minimizing the mass and the maximum Von Mises stress within a plate structure under a dynamic load demonstrate the relevance of our method, with promising results: it is able to satisfy multiple damage criteria with different thickness distributions while using a smaller FEM.
Multistability with a Metastable Mixed State
NASA Astrophysics Data System (ADS)
Sneppen, Kim; Mitarai, Namiko
2012-09-01
Complex dynamical systems often show multiple metastable states. In macroevolution, such behavior is suggested by punctuated equilibrium and discrete geological epochs. In molecular biology, bistability is found in epigenetics and in the many mutually exclusive states that a human cell can take. Sociopolitical systems can be single-party regimes or a pluralism of balancing political factions. To introduce multistability, we suggest a model system of D mutually exclusive microstates that battle for dominance in a large system. Assuming one common intermediate state, we obtain D+1 metastable macrostates for the system, one of which is a self-reinforced mixture of all D microstates. Robustness of this metastable mixed state increases with diversity D.
Bhattacharyya, Onil; Schull, Michael; Shojania, Kaveh; Stergiopoulos, Vicky; Naglie, Gary; Webster, Fiona; Brandao, Ricardo; Mohammed, Tamara; Christian, Jennifer; Hawker, Gillian; Wilson, Lynn; Levinson, Wendy
2016-01-01
Integrating care for people with complex needs is challenging. Indeed, evidence of solutions is mixed, and therefore, well-designed, shared evaluation approaches are needed to create cumulative learning. The Toronto-based Building Bridges to Integrate Care (BRIDGES) collaborative provided resources to refine and test nine new models linking primary, hospital and community care. It used mixed methods, a cross-project meta-evaluation and shared outcome measures. Given the range of skills required to develop effective interventions, a novel incubator was used to test and spread opportunities for system integration that included operational expertise and support for evaluation and process improvement.
Spatial organization of a model 15-member human gut microbiota established in gnotobiotic mice
Mark Welch, Jessica L.; Hasegawa, Yuko; McNulty, Nathan P.; Gordon, Jeffrey I.; Borisy, Gary G.
2017-01-01
Knowledge of the spatial organization of the gut microbiota is important for understanding the physical and molecular interactions among its members. These interactions are thought to influence microbial succession, community stability, syntrophic relationships, and resiliency in the face of perturbations. The complexity and dynamism of the gut microbiota pose considerable challenges for quantitative analysis of its spatial organization. Here, we illustrate an approach for addressing this challenge, using (i) a model, defined 15-member consortium of phylogenetically diverse, sequenced human gut bacterial strains introduced into adult gnotobiotic mice fed a polysaccharide-rich diet, and (ii) in situ hybridization and spectral imaging analysis methods that allow simultaneous detection of multiple bacterial strains at multiple spatial scales. Differences in the binding affinities of strains for substrates such as mucus or food particles, combined with more rapid replication in a preferred microhabitat, could, in principle, lead to localized clonally expanded aggregates composed of one or a few taxa. However, our results reveal a colonic community that is mixed at micrometer scales, with distinct spatial distributions of some taxa relative to one another, notably at the border between the mucosa and the lumen. Our data suggest that lumen and mucosa in the proximal colon should be conceptualized not as stratified compartments but as components of an incompletely mixed bioreactor. Employing the experimental approaches described should allow direct tests of whether and how specified host and microbial factors influence the nature and functional contributions of “microscale” mixing to the dynamic operations of the microbiota in health and disease. PMID:29073107
PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization
NASA Astrophysics Data System (ADS)
Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh
2017-05-01
Multiple-point Geostatistics is a well-known general statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses the energy concept to model geological phenomena. While honoring hard data points, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. It therefore preserves pattern continuity in both continuous and categorical variables very well. It also shows a fuzzy result in every realization, similar to the expected result of multiple realizations of other statistical models. While the main core of most previous Multiple-point Geostatistics methods is sequential, the parallel main core of our algorithm enables it to use the GPU efficiently to reduce CPU time. A new validation method for MPS is also proposed in this paper.
Criteria for quantitative and qualitative data integration: mixed-methods research methodology.
Lee, Seonah; Smith, Carrol A M
2012-05-01
Many studies have emphasized the need and importance of a mixed-methods approach for evaluation of clinical information systems. However, those studies had no criteria to guide integration of multiple data sets. Integrating different data sets serves to actualize the paradigm that a mixed-methods approach argues; thus, we require criteria that provide the right direction to integrate quantitative and qualitative data. The first author used a set of criteria organized from a literature search for integration of multiple data sets from mixed-methods research. The purpose of this article was to reorganize the identified criteria. Through critical appraisal of the reasons for designing mixed-methods research, three criteria resulted: validation, complementarity, and discrepancy. In applying the criteria to empirical data of a previous mixed methods study, integration of quantitative and qualitative data was achieved in a systematic manner. It helped us obtain a better organized understanding of the results. The criteria of this article offer the potential to produce insightful analyses of mixed-methods evaluations of health information systems.
Repayment policy for multiple loans
2017-01-01
The repayment policy for multiple loans problem concerns a given set of loans and a monthly incoming cash flow: what is the best way to allocate the monthly income to repay those loans? In this article, we close the almost 20-year-old open question of how to model the repayment policy for multiple loans problem, together with its computational complexity. We propose a mixed integer linear programming model that establishes an optimal repayment schedule by minimizing the total amount of cash required to repay the loans. We prove that the most commonly employed repayment strategies, such as the highest-interest-debt and debt-snowball methods, are not optimal. Experimental results on simulated cases based on real data show that our methodology obtains on average more than 4% in savings; that is, the debtor pays approximately 4% less to the bank or loaner, which is a considerable amount in finance. In certain cases, the debtor can save up to 40%. PMID:28430786
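To make the comparison concrete, the toy simulation below implements the two heuristics the abstract mentions, highest-interest-first ("avalanche") and smallest-balance-first ("snowball"), on a pair of hypothetical loans. The numbers are invented for illustration, and neither heuristic is the MILP optimum the paper computes:

```python
def total_paid(balances, rates, payment, priority):
    """Simulate monthly repayment: accrue interest, then spend the whole
    monthly budget on loans in the order given by the priority key."""
    loans = [[b, r] for b, r in zip(balances, rates)]
    paid = 0.0
    for _ in range(1000):                      # safety cap on months
        if all(b <= 1e-9 for b, _ in loans):
            break
        for loan in loans:
            loan[0] *= 1 + loan[1]             # monthly interest accrual
        budget = payment
        for loan in sorted(loans, key=priority):
            pay = min(loan[0], budget)         # pay down this loan first
            loan[0] -= pay
            budget -= pay
            paid += pay
    return paid

balances, rates = [5000.0, 3000.0], [0.02, 0.01]   # two hypothetical loans
avalanche = total_paid(balances, rates, 500.0, priority=lambda l: -l[1])  # highest rate first
snowball = total_paid(balances, rates, 500.0, priority=lambda l: l[0])    # smallest balance first
```

On this instance avalanche costs less than snowball, consistent with the abstract's point that popular rules of thumb differ in total cost and that neither is guaranteed optimal.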
Logistics system design for biomass-to-bioenergy industry with multiple types of feedstocks.
Zhu, Xiaoyan; Yao, Qingzhu
2011-12-01
It is technologically possible for a biorefinery to use a variety of biomass as feedstock, including native perennial grasses (e.g., switchgrass) and agricultural residues (e.g., corn stalk and wheat straw). Incorporating the distinct characteristics of various types of biomass feedstocks and taking into account their interaction in supplying bioenergy production, this paper proposes a multi-commodity network flow model to design the logistics system for a multiple-feedstock biomass-to-bioenergy industry. The model was formulated as a mixed integer linear program, determining the locations of warehouses, the size of the harvesting team, the types and amounts of biomass harvested/purchased, stored, and processed in each month, the transportation of biomass in the system, and so on. This paper demonstrates the advantages of using multiple types of biomass feedstocks by comparison with the case of using a single feedstock (switchgrass) and analyzes the relationship of the supply capacity of biomass feedstocks to the output and cost of biofuel. Copyright © 2011 Elsevier Ltd. All rights reserved.
Optimization of Airport Surface Traffic: A Case-Study of Incheon International Airport
NASA Technical Reports Server (NTRS)
Eun, Yeonju; Jeon, Daekeun; Lee, Hanbong; Jung, Yoon C.; Zhu, Zhifan; Jeong, Myeongsook; Kim, Hyounkong; Oh, Eunmi; Hong, Sungkwon
2017-01-01
This study aims to develop a controllers' decision support tool for departure and surface management of Incheon International Airport (ICN) in South Korea. Airport surface traffic optimization for ICN was studied based on the operational characteristics of ICN and the airspace of Korea. For surface traffic optimization, a multiple runway scheduling problem and a taxi scheduling problem were formulated as two Mixed Integer Linear Programming (MILP) optimization models. The Miles-In-Trail (MIT) separation constraint at the departure fix shared by departure flights from multiple runways and the runway crossing constraints due to the taxi route configuration specific to ICN were incorporated into the runway scheduling and taxiway scheduling problems, respectively. Since the MILP-based optimization model for the multiple runway scheduling problem may be computationally intensive, computation times and delay costs of different solving methods were compared for a practical implementation. This research was a collaboration between the Korea Aerospace Research Institute (KARI) and the National Aeronautics and Space Administration (NASA).
Photon-Z mixing in the Weinberg-Salam model: Effective charges and the a = -3 gauge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baulieu, L.; Coquereaux, R.
1982-04-15
We study some properties of the Weinberg-Salam model connected with photon-Z mixing. We solve the linear Dyson-Schwinger equations relating full and 1PI boson propagators. The task is made easier by the two-point-function Ward identities that we derive to all orders and in any gauge. Some aspects of the renormalization of the model are also discussed. We display the exact mass-dependent one-loop two-point functions involving the photon and Z fields in any linear xi-gauge. The special gauge a = xi^(-1) = -3 is shown to play a peculiar role. In this gauge, the Z field is multiplicatively renormalizable (at the one-loop level), and one can construct both electric and weak effective charges of the theory from the photon and Z propagators, with a very simple expression similar to that of the QED Petermann-Stueckelberg-Gell-Mann-Low charge.
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
Reimplementation of the Biome-BGC model to simulate successional change.
Bond-Lamberty, Ben; Gower, Stith T; Ahl, Douglas E; Thornton, Peter E
2005-04-01
Biogeochemical process models are increasingly employed to simulate current and future forest dynamics, but most simulate only a single canopy type. This limitation means that mixed stands, canopy succession and understory dynamics cannot be modeled, severe handicaps in many forests. The goals of this study were to develop a version of Biome-BGC that supported multiple, interacting vegetation types, and to assess its performance and limitations by comparing modeled results to published data from a 150-year boreal black spruce (Picea mariana (Mill.) BSP) chronosequence in northern Manitoba, Canada. Model data structures and logic were modified to support an arbitrary number of interacting vegetation types; an explicit height calculation was necessary to prioritize radiation and precipitation interception. Two vegetation types, evergreen needle-leaf and deciduous broadleaf, were modeled based on site-specific meteorological and physiological data. The new version of Biome-BGC reliably simulated observed changes in leaf area, net primary production and carbon stocks, and should be useful for modeling the dynamics of mixed-species stands and ecological succession. We discuss the strengths and limitations of Biome-BGC for this application, and note areas in which further work is necessary for reliable simulation of boreal biogeochemical cycling at a landscape scale.
McCluney, Kevin E; Sabo, John L
2010-12-31
Fluxes of carbon, nitrogen, and water between ecosystem components and organisms have great impacts across levels of biological organization. Although much progress has been made in tracing carbon and nitrogen, difficulty remains in tracing water sources from the ecosystem to animals and among animals (the "water web"). Naturally occurring, non-radioactive isotopes of hydrogen and oxygen in water provide a potential method for tracing water sources. However, using this approach for terrestrial animals is complicated by a change in water isotopes within the body due to differences in activity of heavy and light isotopes during cuticular and transpiratory water losses. Here we present a technique to use stable water isotopes to estimate the mean mix of water sources in a population by sampling a group of sympatric animals over time. Strong correlations between H and O isotopes in the body water of animals collected over time provide linear patterns of enrichment that can be used to predict a mean mix of water sources useful in standard mixing models to determine relative source contribution. Multiple temperature and humidity treatment levels do not greatly alter these relationships, thus having little effect on our ability to estimate this population-level mix of water sources. We show evidence for the validity of using multiple samples of animal body water, collected across time, to estimate the isotopic mix of water sources in a population and more accurately trace water sources. The ability to use isotopes to document patterns of animal water use should be a great asset to biologists globally, especially those studying drylands, droughts, streamside areas, irrigated landscapes, and the effects of climate change.
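The final inference step the authors describe, plugging an estimated population-level body-water mix into a standard mixing model, can be illustrated with the simplest two-source, one-isotope case. The function and the δ²H values below are hypothetical stand-ins, not the authors' data:

```python
def two_source_mix(delta_mix, delta_a, delta_b):
    """Two-source, one-isotope linear mixing model:
    delta_mix = f * delta_a + (1 - f) * delta_b, solved for f."""
    f = (delta_mix - delta_b) / (delta_a - delta_b)
    return f, 1.0 - f

# Hypothetical deuterium values (per mil): stream water vs. prey body water.
f_stream, f_prey = two_source_mix(delta_mix=-60.0, delta_a=-90.0, delta_b=-40.0)
```

The correlated H and O enrichment patterns described in the abstract serve to estimate `delta_mix` itself; once that mean mix is in hand, source apportionment is this one-line solve (or its multi-isotope generalization).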
Mixing-dependent Reactions in the Hyporheic Zone: Laboratory and Numerical Experiments
NASA Astrophysics Data System (ADS)
Santizo, K. Y.; Eastes, L. A.; Hester, E. T.; Widdowson, M.
2017-12-01
The hyporheic zone is the surface water-groundwater interface surrounding a river's perimeter. Prior research demonstrates the ability of the hyporheic zone to attenuate pollutants when surface water cycles through reactive sediments (non-mixing-dependent reactions). However, the colocation of both surface water and groundwater within hyporheic sediments also allows mixing-dependent reactions that require mixing of reactants from these two water sources. Recent modeling studies show these mixing zones can be small under steady-state homogeneous conditions, but do not validate those results in the laboratory or explore the range of hydrological characteristics that control the extent of mixing. Our objective was to simulate the mixing zone, quantify its thickness, and probe its hydrological controls using a "mix" of laboratory and numerical experiments. For the lab experiments, a hyporheic zone was simulated in a sand mesocosm, and a mixing-dependent abiotic reaction of sodium sulfite and dissolved oxygen was induced. Oxygen concentration response and oxygen consumption were visualized via planar optodes. Sulfate production by the mixing-dependent reaction was measured from fluid samples with a spectrophotometer. Key hydrologic controls varied in the mesocosm were the head gradient driving hyporheic exchange and the hydraulic conductivity/heterogeneity. Results show a clear mixing area, sulfate production, and an oxygen gradient. Mixing zone length (hyporheic flow cell size) and thickness both increase with the driving head gradient. For the numerical experiments, transient surface water boundary conditions were implemented together with heterogeneity of hydraulic conductivity. Results indicate that both fluctuating boundary conditions and heterogeneity increase mixing-dependent reaction.
The hyporheic zone is deemed an attenuation hotspot by multiple studies, but here we demonstrate its potential for mixing-dependent reactions and the influence of important hydrological parameters.
Miklius, Asta; Flower, M.F.J.; Huijsmans, J.P.P.; Mukasa, S.B.; Castillo, P.
1991-01-01
The Taal lava series can be distinguished from one another by differences in major and trace element trends and trace element ratios, indicating multiple magmatic systems associated with discrete centers in time and space. On Volcano Island, contemporaneous lava series range from typically calc-alkaline to iron-enriched. Major and trace element variation in these series can be modelled by fractionation of similar assemblages, with early fractionation of titano-magnetite in less iron-enriched series. However, phase compositional and petrographic evidence of mineral-liquid disequilibrium suggests that magma mixing played an important role in the evolution of these series.
Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures
NASA Technical Reports Server (NTRS)
Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.
1990-01-01
A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.
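The single- versus multiple-transient coverage figures above can be illustrated with a toy Monte Carlo fault injector. This is a hedged sketch, not the paper's mixed-mode electrical/logic simulation; the channel-hit probability, trial counts, and function name are invented for illustration:

```python
import random

def inject_transients(n_trials, n_faults, p_corrupt=0.3, seed=1):
    """Toy Monte Carlo fault injection into a dual-channel controller.
    Each trial injects n_faults transients; each transient lands in
    channel A or B at random and corrupts it with probability p_corrupt.
    Returns the fraction of trials in which BOTH channels were corrupted
    (the case that defeats dual redundancy)."""
    rng = random.Random(seed)
    both = 0
    for _ in range(n_trials):
        hit_a = hit_b = False
        for _ in range(n_faults):
            channel = rng.choice("AB")
            if rng.random() < p_corrupt:
                if channel == "A":
                    hit_a = True
                else:
                    hit_b = True
        if hit_a and hit_b:
            both += 1
    return both / n_trials

single = inject_transients(10_000, n_faults=1)  # one transient can never hit both channels
multi = inject_transients(10_000, n_faults=5)   # a burst of transients sometimes hits both
```

With a single transient the "both channels" fraction is exactly zero, mirroring the 100 percent single-transient coverage reported; with bursts, a minority of trials corrupt both channels, analogous to the ~12 percent figure.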
pong: fast analysis and visualization of latent clusters in population genetic data.
Behr, Aaron A; Liu, Katherine Z; Liu-Fang, Gracie; Nakka, Priyanka; Ramachandran, Sohini
2016-09-15
A series of methods in population genetics use multilocus genotype data to assign individuals membership in latent clusters. These methods belong to a broad class of mixed-membership models, such as latent Dirichlet allocation used to analyze text corpora. Inference from mixed-membership models can produce different output matrices when repeatedly applied to the same inputs, and the number of latent clusters is a parameter that is often varied in the analysis pipeline. For these reasons, quantifying, visualizing, and annotating the output from mixed-membership models are bottlenecks for investigators across multiple disciplines, from ecology to text data mining. We introduce pong, a network-graphical approach for analyzing and visualizing membership in latent clusters with a native interactive D3.js visualization. pong leverages efficient algorithms for solving the Assignment Problem to dramatically reduce runtime while increasing accuracy compared with other methods that process output from mixed-membership models. We apply pong to 225,705 unlinked genome-wide single-nucleotide variants from 2,426 unrelated individuals in the 1000 Genomes Project, and identify previously overlooked aspects of global human population structure. We show that pong outpaces current solutions by more than an order of magnitude in runtime while providing a customizable and interactive visualization of population structure that is more accurate than those produced by current tools. pong is freely available and can be installed using the Python package management system pip; its source code is available at https://github.com/abehr/pong. Contact: aaron_behr@alumni.brown.edu or sramachandran@brown.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
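pong's core label-alignment step solves the Assignment Problem between cluster-membership matrices from different runs. A minimal stdlib sketch of that idea (brute force over permutations, workable only for small K; pong itself uses efficient assignment algorithms, and the matrices below are made up):

```python
from itertools import permutations

def align_labels(Q_ref, Q_new):
    """Find the permutation of the K columns of Q_new (cluster-membership
    vectors over individuals) that best matches Q_ref, by maximizing the
    total column-wise dot-product similarity. Brute force over K!
    permutations; real tools solve this as an Assignment Problem."""
    K = len(Q_ref[0])

    def col(Q, j):
        return [row[j] for row in Q]

    def sim(u, v):
        return sum(x * y for x, y in zip(u, v))

    best_score, best_perm = None, None
    for perm in permutations(range(K)):
        score = sum(sim(col(Q_ref, j), col(Q_new, perm[j])) for j in range(K))
        if best_score is None or score > best_score:
            best_score, best_perm = score, perm
    return best_perm

# Two runs of a K=3 model whose cluster labels come out permuted:
Q1 = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]]
Q2 = [[0.1, 0.0, 0.9], [0.8, 0.1, 0.1], [0.1, 0.9, 0.0]]
print(align_labels(Q1, Q2))  # -> (2, 0, 1): ref cluster j matches new cluster perm[j]
```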
Schermerhorn, Alice C; Cummings, E Mark; Davies, Patrick T
2008-02-01
The authors examine mutual family influence processes at the level of children's representations of multiple family relationships, as well as the structure of those representations. From a community sample with 3 waves, each spaced 1 year apart, kindergarten-age children (105 boys and 127 girls) completed a story-stem completion task, tapping representations of multiple family relationships. Structural equation modeling with autoregressive controls indicated that representational processes involving different family relationships were interrelated over time, including links between children's representations of marital conflict and reactions to conflict, between representations of security about marital conflict and parent-child relationships, and between representations of security in father-child and mother-child relationships. Mixed support was found for notions of increasing stability in representations during this developmental period. Results are discussed in terms of notions of transactional family dynamics, including family-wide perspectives on mutual influence processes attributable to multiple family relationships.
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Nakshatrala, K. B.
2016-12-01
Fundamental to the enhancement and control of the macroscopic spreading, mixing, and dilution of solute plumes in porous media are the topology of the flow field and the underlying heterogeneity and anisotropy contrast of the porous media. Traditionally, the literature has focused on the shearing effects of the flow field (i.e., flow with zero helical density, meaning that flow is always perpendicular to the vorticity vector) on scalar mixing [2]. However, the combined effect of the anisotropy of the porous media and the helical structure (or chaotic nature) of the flow field on species reactive transport and mixing has rarely been studied. Recently, it has been shown experimentally that there is irrefutable evidence that chaotic advection and helical flows are inherent in porous media flows [1,2]. In this poster presentation, we present a non-intrusive physics-based model-order reduction framework to quantify the effects of species mixing in terms of reduced-order models (ROMs) and scaling laws. The ROM framework is constructed based on recent advancements in non-negative formulations for reactive transport in heterogeneous anisotropic porous media [3] and non-intrusive ROM methods [4]. The objective is to generate computationally efficient and accurate ROMs for species mixing for different values of input data and reactive-transport model parameters. This is achieved by using multiple ROMs, which also provides a way to assess the robustness of the proposed framework. Sensitivity analysis is performed to identify the important parameters. Representative numerical examples from reactive transport are presented to illustrate the ability of the proposed ROMs to accurately describe mixing processes in porous media. [1] Lester, Metcalfe, and Trefry, "Is chaotic advection inherent to porous media flow?," PRL, 2013. [2] Ye, Chiogna, Cirpka, Grathwohl, and Rolle, "Experimental evidence of helical flow in porous media," PRL, 2015.
[3] Mudunuru, and Nakshatrala, "On enforcing maximum principles and achieving element-wise species balance for advection-diffusion-reaction equations under the finite element method," JCP, 2016. [4] Quarteroni, Manzoni, and Negri. "Reduced Basis Methods for Partial Differential Equations: An Introduction," Springer, 2016.
Open-target sparse sensing of biological agents using DNA microarray
2011-01-01
Background Current biosensors are designed to target and react to specific nucleic acid sequences or structural epitopes. These 'target-specific' platforms require creation of new physical capture reagents when new organisms are targeted. An 'open-target' approach to DNA microarray biosensing is proposed and substantiated using laboratory generated data. The microarray consisted of 12,900 25 bp oligonucleotide capture probes derived from a statistical model trained on randomly selected genomic segments of pathogenic prokaryotic organisms. Open-target detection of organisms was accomplished using a reference library of hybridization patterns for three test organisms whose DNA sequences were not included in the design of the microarray probes. Results A multivariate mathematical model based on partial least squares regression (PLSR) was developed to detect the presence of three test organisms in mixed samples. When all 12,900 probes were used, the model correctly detected the signature of three test organisms in all mixed samples (mean(R2) = 0.76, CI = 0.95), with a 6% false positive rate. A sampling algorithm was then developed to sparsely sample the probe space for a minimal number of probes required to capture the hybridization imprints of the test organisms. The PLSR detection model was capable of correctly identifying the presence of the three test organisms in all mixed samples using only 47 probes (mean(R2) = 0.77, CI = 0.95) with nearly 100% specificity. Conclusions We conceived an 'open-target' approach to biosensing, and hypothesized that a relatively small, non-specifically designed, DNA microarray is capable of identifying the presence of multiple organisms in mixed samples. Coupled with a mathematical model applied to laboratory generated data, and sparse sampling of capture probes, the prototype microarray platform was able to capture the signature of each organism in all mixed samples with high sensitivity and specificity.
It was demonstrated that this new approach to biosensing closely follows the principles of sparse sensing. PMID:21801424
Hospital financial performance: does IT governance make a difference?
Burke, Darrell; Randeree, Ebrahim; Menachemi, Nir; Brooks, Robert G
2008-01-01
This study examined whether information technology (IT) governance, a term describing the decision authority and reporting structures of the chief information officer (CIO), is related to the financial performance of hospitals. The study was conducted using a combination of primary survey data regarding health care IT adoption and reporting structures of Florida acute care hospitals, with secondary data on hospital financial performance. Multiple regression models were used to evaluate the relationship of the 3 most commonly identified reporting structures. Outcome variables included measures of operating revenue and operating expense. All models controlled for overall IT adoption, ownership, membership in a hospital system, case mix, and hospital bed size. The results suggest that IT governance matters when it comes to hospital financial performance. Reporting to the chief financial officer brings positive outcomes; reporting to the chief executive officer has a mixed financial result; and reporting to the chief operating officer was not associated with discernible financial impact.
Experimental Applications of Automatic Test Markup Language (ATML)
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris
2012-01-01
The authors describe challenging use-cases for Automatic Test Markup Language (ATML) and evaluate solutions. The first case uses ATML Test Results to deliver active features supporting test procedure development and test flow, and to bridge mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.
Moments, Mixed Methods, and Paradigm Dialogs
ERIC Educational Resources Information Center
Denzin, Norman K.
2010-01-01
I reread the 50-year history of qualitative inquiry that calls for triangulation and mixed methods. I briefly revisit the disputes within the mixed methods community, asking how we got to where we are today: the period of mixed-multiple-methods advocacy and Teddlie and Tashakkori's third methodological moment. (Contains 10 notes.)
Breast Radiotherapy with Mixed Energy Photons; a Model for Optimal Beam Weighting.
Birgani, Mohammadjavad Tahmasebi; Fatahiasl, Jafar; Hosseini, Seyed Mohammad; Bagheri, Ali; Behrooz, Mohammad Ali; Zabiehzadeh, Mansour; Meskani, Reza; Gomari, Maryam Talaei
2015-01-01
Utilization of high energy photons (>10 MV) with an optimal weight, using a mixed energy technique, is a practical way to generate a homogeneous dose distribution while maintaining adequate target coverage in intact breast radiotherapy. This study presents a model for estimating this optimal weight for day-to-day clinical use. For this purpose, treatment planning computed tomography scans of thirty-three consecutive early stage breast cancer patients following breast conservation surgery were analyzed. After delineation of the breast clinical target volume (CTV) and placement of opposed wedged-pair isocentric tangential portals, dosimetric calculations were conducted and dose volume histograms (DVHs) were generated, first with pure 6 MV photons; these calculations were then repeated ten times per patient, incorporating 18 MV photons in ten percent weight increments. For each calculation, two indexes were measured from the DVH data: the maximum dose in the breast CTV (Dmax) and the volume of the CTV covered by the 95% isodose line (VCTV, 95%IDL); the normalized values were then plotted in a graph. The optimal weight of 18 MV photons was defined as the intersection point of the Dmax and VCTV, 95%IDL graphs. To create a model predicting this optimal weight, multiple linear regression analysis was performed on breast and tangential-field parameters. The best fitting model for predicting the optimal 18 MV photon weight in breast radiotherapy with the mixed energy technique incorporated chest wall separation plus central lung distance (adjusted R2 = 0.776). In conclusion, this study presents a model for estimating the optimal beam weighting in breast radiotherapy using a mixed photon energy technique for routine day-to-day clinical use.
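The intersection-point definition of the optimal weight can be found by linear interpolation between the sampled weights. The sketch below uses made-up normalized curves, not patient data:

```python
def crossing_weight(weights, dmax_norm, vctv_norm):
    """Locate the 18 MV weight where the normalized Dmax curve
    (decreasing with weight) crosses the normalized V(CTV, 95%IDL)
    curve (increasing with weight), by linear interpolation between
    the sampled weight steps. Returns None if the curves never cross."""
    for i in range(len(weights) - 1):
        d0 = dmax_norm[i] - vctv_norm[i]
        d1 = dmax_norm[i + 1] - vctv_norm[i + 1]
        if d0 == 0:
            return weights[i]          # exact crossing at a sampled weight
        if d0 * d1 < 0:                # sign change: curves cross inside this interval
            t = d0 / (d0 - d1)
            return weights[i] + t * (weights[i + 1] - weights[i])
    return None

w = [0, 10, 20, 30, 40]                 # % weight of 18 MV photons (illustrative steps)
dmax = [1.00, 0.97, 0.94, 0.91, 0.88]   # normalized Dmax, falling with 18 MV weight
vctv = [0.90, 0.92, 0.94, 0.96, 0.98]   # normalized coverage, rising with 18 MV weight
print(crossing_weight(w, dmax, vctv))   # -> 20
```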
Tim Seipel; Christoph Kueffer; Lisa J. Rew; Curtis C. Daehler; Aníbal Pauchard; Bridgett J. Naylor; Jake M. Alexander; Peter J. Edwards; Catherine G. Parks; Jose Ramon Arevalo; Lohengrin A. Cavieres; Hansjorg Dietz; Gabi Jakobs; Keith McDougall; Rudiger Otto; Neville. Walsh
2012-01-01
We compared the distribution of non-native plant species along roads in eight mountainous regions. Within each region, abundance of plant species was recorded at 41-84 sites along elevational gradients using 100-m2 plots located 0, 25 and 75 m from roadsides. We used mixed-effects models to examine how local variation in species richness and...
NASA Astrophysics Data System (ADS)
Hamid, Arian Zad
2016-12-01
We analytically investigate multiple-quantum (MQ) NMR dynamics in a mixed-three-spin (1/2,1,1/2) system with the XXX Heisenberg model in an external homogeneous magnetic field B. A single-ion anisotropy ζ is considered for the spin-1. The dependence of the MQ NMR coherence intensities on their orders (zeroth and second) is obtained for the two spin pairs (1,1/2) and (1/2,1/2) of this tripartite system. We also investigate the dynamics of pairwise quantum entanglement for the bipartite (sub)systems (1,1/2) and (1/2,1/2), permanently coupled by the coupling constants J1 and J2 respectively, by means of concurrence and fidelity. Some straightforward comparisons are then made between these quantities and the intensities of the MQ NMR coherences, and some interesting results are reported. We also show that the time evolution of MQ coherences based on the reduced density matrix of the spin pair (1,1/2) is closely connected with the dynamics of the pairwise entanglement. Finally, we show that the zeroth-order MQ coherence of the spin pair (1,1/2) can serve as an entanglement witness at some special time intervals.
A numerical study of mixing in supersonic combustors with hypermixing injectors
NASA Technical Reports Server (NTRS)
Lee, J.
1993-01-01
A numerical study was conducted to evaluate the performance of wall mounted fuel-injectors designed for potential Supersonic Combustion Ramjet (SCRAM-jet) engine applications. The focus of this investigation was to numerically simulate existing combustor designs for the purpose of validating the numerical technique and the physical models developed. Three different injector designs of varying complexity were studied to fully understand the computational implications involved in accurate predictions. A dual transverse injection system and two streamwise injector designs were studied. The streamwise injectors were designed with swept ramps to enhance fuel-air mixing and combustion characteristics at supersonic speeds without the large flow blockage and drag contribution of the transverse injection system. For this study, the Mass-Averaged Navier-Stokes equations and the chemical species continuity equations were solved. The computations were performed using a finite-volume implicit numerical technique and multiple block structured grid system. The interfaces of the multiple block structured grid systems were numerically resolved using the flux-conservative technique. Detailed comparisons between the computations and existing experimental data are presented. These comparisons show that numerical predictions are in agreement with the experimental data. These comparisons also show that a number of turbulence model improvements are needed for accurate combustor flowfield predictions.
A numerical study of mixing in supersonic combustors with hypermixing injectors
NASA Technical Reports Server (NTRS)
Lee, J.
1992-01-01
A numerical study was conducted to evaluate the performance of wall mounted fuel-injectors designed for potential Supersonic Combustion Ramjet (SCRAM-jet) engine applications. The focus of this investigation was to numerically simulate existing combustor designs for the purpose of validating the numerical technique and the physical models developed. Three different injector designs of varying complexity were studied to fully understand the computational implications involved in accurate predictions. A dual transverse injection system and two streamwise injector designs were studied. The streamwise injectors were designed with swept ramps to enhance fuel-air mixing and combustion characteristics at supersonic speeds without the large flow blockage and drag contribution of the transverse injection system. For this study, the Mass-Averaged Navier-Stokes equations and the chemical species continuity equations were solved. The computations were performed using a finite-volume implicit numerical technique and multiple block structured grid system. The interfaces of the multiple block structured grid systems were numerically resolved using the flux-conservative technique. Detailed comparisons between the computations and existing experimental data are presented. These comparisons show that numerical predictions are in agreement with the experimental data. These comparisons also show that a number of turbulence model improvements are needed for accurate combustor flowfield predictions.
Health Consequences of Racist and Antigay Discrimination for Multiple Minority Adolescents
Thoma, Brian C.; Huebner, David M.
2014-01-01
Individuals who belong to a marginalized group and who perceive discrimination based on that group membership suffer from a variety of poor health outcomes. Many people belong to more than one marginalized group, and much less is known about the influence of multiple forms of discrimination on health outcomes. Drawing on literature describing the influence of multiple stressors, three models of combined forms of discrimination are discussed: additive, prominence, and exacerbation. The current study examined the influence of multiple forms of discrimination in a sample of African American lesbian, gay, or bisexual (LGB) adolescents ages 14–19. Each of the three models of combined stressors were tested to determine which best describes how racist and antigay discrimination combine to predict depressive symptoms, suicidal ideation, and substance use. Participants were included in this analysis if they identified their ethnicity as either African American (n = 156) or African American mixed (n = 120). Mean age was 17.45 years (SD = 1.36). Results revealed both forms of mistreatment were associated with depressive symptoms and suicidal ideation among African American LGB adolescents. Racism was more strongly associated with substance use. Future intervention efforts should be targeted toward reducing discrimination and improving the social context of multiple minority adolescents, and future research with multiple minority individuals should be attuned to the multiple forms of discrimination experienced by these individuals within their environments. PMID:23731232
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi
2017-04-01
The one-clean qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean qubit model is classically efficiently simulated, the polynomial hierarchy collapses to the second level. A disadvantage of the one-clean qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to have the perfectly pure qubit. In this paper, we consider a more realistic one-clean qubit model, where the clean qubit is not clean, but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in a classical polynomial time if we take an appropriately large multiplicative error. The result is in strong contrast with that of the ideal one-clean qubit model where the classical efficient multiplicative-error calculation (or even the sampling) with the same amount of error causes the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, a classical efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of the classical efficient simulation of the one nonclean qubit model.
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers the carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The aforementioned optimal model was solved using commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer. PMID:28103246
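One of the inventory-control sub-decisions above (the reorder point and safety stock at a retailer facing stochastic demand and a replenishment lead time) can be sketched with the classic normal-demand approximation. This is illustrative only, not the paper's full mixed-integer program (which LINGO solves jointly with location and allocation decisions); the demand figures are invented:

```python
import math

def reorder_point(mean_daily_demand, sd_daily_demand, lead_time_days, z=1.645):
    """Reorder point r = expected demand over the lead time + safety stock,
    assuming independent, normally distributed daily demand.
    z = 1.645 targets roughly a 95% cycle service level.
    Returns (reorder_point, safety_stock)."""
    expected_lt_demand = mean_daily_demand * lead_time_days
    safety_stock = z * sd_daily_demand * math.sqrt(lead_time_days)
    return expected_lt_demand + safety_stock, safety_stock

# hypothetical retailer: 100 units/day mean demand, sd 20, 9-day lead time
r, ss = reorder_point(100, 20, 9)
# safety stock = 1.645 * 20 * 3 = 98.7 units; reorder point = 998.7 units
```

A longer lead time (e.g., from a slower, lower-emission transport mode) raises both terms, which is one channel through which the paper's transport-mode choice interacts with inventory levels.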
Effects of physiotherapy treatment for urinary incontinence in patient with multiple sclerosis.
Pereira, Carla Maria de Abreu; Castiglione, Mariane; Kasawara, Karina Tamy
2017-07-01
[Purpose] The aim of the study was to evaluate the benefits of physical therapy for urinary incontinence in patients with multiple sclerosis and to verify the impact of urinary incontinence on the patient's quality of life. [Subject and Methods] A case study of a 55-year-old female patient diagnosed with multiple sclerosis and mixed urinary incontinence was conducted. Physical therapy sessions were conducted once a week, in total 15 sessions, making use of targeted functional electrical vaginal stimulation, along with active exercises for the pelvic floor muscles and electrical stimulation of the posterior tibial nerve, behavioral rehabilitation and exercise at home. [Results] After 15 physical therapy sessions, a patient diagnosed with multiple sclerosis and mixed urinary incontinence showed continued satisfactory results after five months. She showed better quality of life, higher strength of pelvic floor muscle and reduced urinary frequency without nocturia and enuresis. [Conclusion] The physical therapy protocol in this patient with multiple sclerosis and mixed urinary incontinence showed satisfactory results reducing urinary incontinence symptomatology and improving the patient's quality of life.
A general multiple-compartment model for the transport of trace elements through animals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Assimakopoulos, P.A.; Ioannides, K.G.; Pakou, A.A.
1991-08-01
Multiple-compartment models employed in the analysis of trace element transport in animals are often based on linear differential equations which relate the rate of change of contaminant (or contaminant concentration) in each compartment to the amount of contaminant (or contaminant concentration) in every other compartment in the system. This has the serious disadvantage of mixing intrinsic physiological properties with the geometry of the animal. The basic equations on which the model presented here is developed are derived from the actual physical process under way and are capable of separating intrinsic physiological properties from geometry. It is thus expected that rate coefficients determined through this model will be applicable to a wider category of physiologically similar animals. A specific application of the model for the study of contamination of sheep--or indeed for any ruminant--is presented, and the temporal evolution of contaminant concentration in the various compartments of the animal is calculated. The application of this model to a system of compartments with changing geometry is also presented.
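A linear multiple-compartment model of the kind described reduces to a system of first-order ODEs in the compartment contents. A minimal forward-Euler sketch, with a hypothetical rate matrix rather than the sheep application's fitted coefficients:

```python
def simulate_compartments(k, c0, dt, steps):
    """Forward-Euler integration of a linear compartment model:
    dc_i/dt = sum_j k[j][i]*c_j - (sum_j k[i][j])*c_i,
    where k[i][j] is the transfer-rate coefficient (1/day) from
    compartment i to compartment j. Total contaminant is conserved."""
    n = len(c0)
    c = list(c0)
    for _ in range(steps):
        flow_in = [sum(k[j][i] * c[j] for j in range(n)) for i in range(n)]
        flow_out = [sum(k[i][j] for j in range(n)) * c[i] for i in range(n)]
        c = [c[i] + dt * (flow_in[i] - flow_out[i]) for i in range(n)]
    return c

# hypothetical chain: rumen -> blood -> excreted (absorbing compartment),
# with all contaminant starting in the rumen
k = [[0.0, 0.5, 0.0],
     [0.0, 0.0, 0.2],
     [0.0, 0.0, 0.0]]
final = simulate_compartments(k, [1.0, 0.0, 0.0], dt=0.01, steps=2000)
# after 20 simulated days most of the contaminant has passed to "excreted"
```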
A numerical study of mixing in stationary, nonpremixed, turbulent reacting flows
NASA Astrophysics Data System (ADS)
Overholt, Matthew Ryan
1998-10-01
In this work a detailed numerical study is made of a statistically-stationary, non-premixed, turbulent reacting model flow known as Periodic Reaction Zones. The mixture fraction-progress variable approach is used, with a mean gradient in the mixture fraction and a model, single-step, reversible, finite-rate thermochemistry, yielding both stationary and local extinction behavior. The passive scalar is studied first, using a statistical forcing scheme to achieve stationarity of the velocity field. Multiple independent direct numerical simulations (DNS) are performed for a wide range of Reynolds numbers with a number of results including a bilinear model for scalar mixing jointly conditioned on the scalar and x2-component of velocity, Gaussian scalar probability density function tails which were anticipated to be exponential, and the quantification of the dissipation of scalar flux. A new deterministic forcing scheme for DNS is then developed which yields reduced fluctuations in many quantities and a more natural evolution of the velocity fields. This forcing method is used for the final portion of this work. DNS results for Periodic Reaction Zones are compared with the Conditional Moment Closure (CMC) model, the Quasi-Equilibrium Distributed Reaction (QEDR) model, and full probability density function (PDF) simulations using the Euclidean Minimum Spanning Tree (EMST) and the Interaction by Exchange with the Mean (IEM) mixing models. It is shown that CMC and QEDR results based on the local scalar dissipation match DNS wherever local extinction is not present. However, due to the large spatial variations of scalar dissipation, and hence local Damkohler number, local extinction is present even when the global Damkohler number is twenty-five times the critical value for extinction. Finally, in the PDF simulations the EMST mixing model closely reproduces CMC and DNS results when local extinction is not present, whereas the IEM model results in large error.
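The IEM mixing model compared against DNS above has a particularly simple particle form: each notional particle's scalar relaxes toward the ensemble mean. A bare-bones sketch, with the timescale and constants simplified relative to full PDF methods:

```python
def iem_mix(phi, n_steps, dt, tau):
    """IEM (Interaction by Exchange with the Mean) mixing model:
    each notional particle's scalar relaxes toward the ensemble mean,
    d(phi_i)/dt = -(phi_i - <phi>) / tau, integrated with explicit
    Euler steps. The ensemble mean is conserved while the scalar
    variance decays -- the signature behavior of IEM."""
    phi = list(phi)
    n = len(phi)
    for _ in range(n_steps):
        mean = sum(phi) / n
        phi = [p - dt * (p - mean) / tau for p in phi]
    return phi

# four particles starting fully segregated (two at 0, two at 1)
mixed = iem_mix([0.0, 0.0, 1.0, 1.0], n_steps=100, dt=0.05, tau=1.0)
# the spread collapses toward the conserved mean of 0.5
```

Because every particle moves toward the same mean, IEM cannot create new compositions off the mixing line, which is one reason models such as EMST can track DNS more closely near extinction.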
Rating the raters in a mixed model: An approach to deciphering the rater reliability
NASA Astrophysics Data System (ADS)
Shang, Junfeng; Wang, Yougui
2013-05-01
Rating the raters has attracted extensive attention in recent years. Ratings are quite complex in that subjective assessment and a number of criteria are involved in a rating system. Whenever human judgment is a part of ratings, the inconsistency of ratings is a source of variance in scores, and it is therefore natural to verify the trustworthiness of ratings. Accordingly, estimation of rater reliability is of great interest and an appealing issue. To facilitate the evaluation of rater reliability in a rating system, we propose a mixed model in which the scores of the ratees offered by a rater are described with fixed effects determined by the ability of the ratees and random effects produced by the disagreement of the raters. In such a mixed model, we derive the posterior distribution of the rater random effects for their prediction. To make a quantitative decision about which raters are unreliable, the predictive influence function (PIF) serves as a criterion, comparing the posterior distributions of random effects between the full-data and rater-deleted data sets. The benchmark for this criterion is also discussed. The proposed methodology for deciphering rater reliability is investigated in multiple simulated data sets and two real data sets.
Mishra, Varsha; Puthucheri, Smitha; Singh, Dharmendra
2018-05-07
As a preventive measure against electromagnetic (EM) wave exposure of the human body, EM radiation regulatory authorities such as the ICNIRP and FCC have defined limits on the specific absorption rate (SAR) for the human head during EM wave exposure from mobile phones. SAR quantifies the absorption of EM waves in the human body and depends mainly on the dielectric properties (ε', σ) of the corresponding tissues. The head is the part of the body most susceptible to EM wave exposure due to the usage of mobile phones. The human head is a complex structure made up of multiple tissues with intermixing of many layers; thus, the accurate measurement of the permittivity (ε') and conductivity (σ) of the tissues of the human head is still a challenge. For computing the SAR, researchers use multilayer models, which pose challenges in defining the boundaries between layers. Therefore, in this paper, a method is proposed to compute the effective complex permittivity of the human head in the range of 0.3 to 3.0 GHz by applying the De Loor mixing model. Similarly, to characterize the thermal effect in the tissue, the thermal properties of the human head have also been computed using the De Loor mixing method. The effective dielectric and thermal properties of the equivalent human head model are compared with IEEE Std. 1528.
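Effective-medium mixing rules of this family combine component permittivities weighted by volume fraction. Since the De Loor formula itself is not reproduced in the abstract, the sketch below uses the simpler, well-known Lichtenecker logarithmic rule as a stand-in, with invented tissue fractions and permittivity values:

```python
import math

def lichtenecker(volume_fractions, permittivities):
    """Lichtenecker logarithmic mixing rule for the effective
    permittivity of a mixture: ln(eps_eff) = sum_i f_i * ln(eps_i).
    NOTE: a stand-in for the De Loor model used in the paper, whose
    exact form differs; inputs below are illustrative only."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    log_eff = sum(f * math.log(e) for f, e in zip(volume_fractions, permittivities))
    return math.exp(log_eff)

# hypothetical head mixture: skin-like, bone-like, brain-like components
eps_eff = lichtenecker([0.2, 0.3, 0.5], [40.0, 12.0, 45.0])
# the effective value falls between the component extremes (12 and 45)
```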
Evaluating mixed samples as a source of error in non-invasive genetic studies using microsatellites
Roon, David A.; Thomas, M.E.; Kendall, K.C.; Waits, L.P.
2005-01-01
The use of noninvasive genetic sampling (NGS) for surveying wild populations is increasing rapidly. Currently, only a limited number of studies have evaluated potential biases associated with NGS. This paper evaluates the potential errors associated with analysing mixed samples drawn from multiple animals. Most NGS studies assume that mixed samples will be identified and removed during the genotyping process. We evaluated this assumption by creating 128 mixed samples of extracted DNA from brown bear (Ursus arctos) hair samples. These mixed samples were genotyped and screened for errors at six microsatellite loci according to protocols consistent with those used in other NGS studies. Five mixed samples produced acceptable genotypes after the first screening. However, all mixed samples produced multiple alleles at one or more loci, amplified as only one of the source samples, or yielded inconsistent electropherograms by the final stage of the error-checking process. These processes could potentially reduce the number of individuals observed in NGS studies, but errors should be conservative within demographic estimates. Researchers should be aware of the potential for mixed samples and carefully design gel analysis criteria and error checking protocols to detect mixed samples.
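The first-pass screening rule described (a diploid individual can show at most two alleles per locus, so three or more alleles flag a mixed sample) is straightforward to express in code. Sample IDs, locus names, and allele sizes below are illustrative, not the study's data:

```python
def flag_mixed(genotypes):
    """Screen multilocus microsatellite genotypes for evidence of mixed
    samples: any locus showing more than two distinct alleles flags the
    sample. This is only one of several error-checking steps (others in
    the study include single-source amplification and electropherogram
    inconsistencies). genotypes: {sample_id: {locus: [allele sizes]}}."""
    flagged = {}
    for sample, loci in genotypes.items():
        bad_loci = [locus for locus, alleles in loci.items()
                    if len(set(alleles)) > 2]
        if bad_loci:
            flagged[sample] = bad_loci
    return flagged

data = {
    "hair_001": {"G10B": [120, 124], "Mu59": [181, 181]},
    "hair_002": {"G10B": [120, 124, 128], "Mu59": [177, 181]},  # 3 alleles at G10B
}
print(flag_mixed(data))  # -> {'hair_002': ['G10B']}
```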
Role of diversity in ICA and IVA: theory and applications
NASA Astrophysics Data System (ADS)
Adalı, Tülay
2016-05-01
Independent component analysis (ICA) has been the most popular approach for solving the blind source separation problem. Starting from a simple linear mixing model and the assumption of statistical independence, ICA can recover a set of linearly-mixed sources to within a scaling and permutation ambiguity. It has been successfully applied to numerous data analysis problems in areas as diverse as biomedicine, communications, finance, geophysics, and remote sensing. ICA can be achieved using different types of diversity—that is, different statistical properties—and can be posed to simultaneously account for multiple types of diversity such as higher-order statistics, sample dependence, non-circularity, and nonstationarity. A recent generalization of ICA, independent vector analysis (IVA), generalizes ICA to multiple data sets and adds the use of one more type of diversity, statistical dependence across the data sets, for jointly achieving independent decomposition of multiple data sets. With the addition of each new diversity type, identification of a broader class of signals becomes possible, and in the case of IVA, this includes sources that are independent and identically distributed Gaussians. We review the fundamentals and properties of ICA and IVA when multiple types of diversity are taken into account, and then ask whether diversity plays an important role in practical applications as well. Examples from various domains are presented to demonstrate that in many scenarios it might be worthwhile to jointly account for multiple statistical properties. This paper is submitted in conjunction with the talk delivered for the "Unsupervised Learning and ICA Pioneer Award" at the 2016 SPIE Conference on Sensing and Analysis Technologies for Biomedical and Cognitive Applications.
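The basic linear mixing model and recovery to within scaling and permutation can be made concrete with a textbook symmetric FastICA using a tanh nonlinearity. This sketch covers only the classic single-dataset, higher-order-statistics case, not the multi-diversity or IVA algorithms discussed above:

```python
import numpy as np

def whiten(x):
    """Decorrelate and scale to unit variance (x: components x samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(x))
    return (e @ np.diag(d ** -0.5) @ e.T) @ x

def fastica(x, n_iter=200, seed=0):
    """Symmetric FastICA with a tanh nonlinearity."""
    z = whiten(x)
    n = z.shape[0]
    rng = np.random.default_rng(seed)
    w = np.linalg.qr(rng.standard_normal((n, n)))[0]  # random orthogonal start
    for _ in range(n_iter):
        g = np.tanh(w @ z)
        w_new = g @ z.T / z.shape[1] - np.diag((1.0 - g ** 2).mean(axis=1)) @ w
        u, _, vt = np.linalg.svd(w_new)               # symmetric decorrelation
        w = u @ vt
    return w @ z

rng = np.random.default_rng(1)
s = np.vstack([np.sign(rng.standard_normal(2000)),   # binary source
               rng.uniform(-1.0, 1.0, 2000)])        # uniform source
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s           # observed mixtures
y = fastica(x)                                       # recovered sources
```

The rows of `y` match the original sources only up to order and sign — exactly the permutation and scaling ambiguity the abstract mentions.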
Optimal mission planning of GEO on-orbit refueling in mixed strategy
NASA Astrophysics Data System (ADS)
Chen, Xiao-qian; Yu, Jing
2017-04-01
The mission planning of GEO on-orbit refueling (OOR) in a mixed strategy is studied in this paper. Specifically, one SSc will be launched to an orbital slot near the depot when multiple GEO satellites are reaching the end of their lives. The SSc replenishes fuel from the depot and then extends the lifespan of the target satellites via refueling. In the mixed scenario, only some of the target satellites can be served by the SSc, and the remaining ones will be fueled by Pseudo SScs (a Pseudo SSc is a target satellite that has already been refueled by the SSc and now carries sufficient fuel both for its own operation and for refueling other target satellites). The mission sequences and fuel masses of the SSc and Pseudo SScs, together with the dry mass of the SSc, are used as design variables, whereas the economic benefit of the whole mission is used as the design objective. The economic cost and benefit models are stated first, and then a mathematical optimization model is proposed. A comprehensive solution method involving enumeration, particle swarm optimization and modification is developed. Numerical examples are carried out to demonstrate the effectiveness of the model and solution method. Economic efficiencies of different OOR strategies are compared and discussed. The mixed strategy performs better than the other strategies only when the target satellites satisfy certain conditions. This paper presents a viable mixed-strategy scheme for users and analyzes its advantages and disadvantages by comparison with other OOR strategies, providing helpful references for decision makers. The best strategy in practical applications depends on the specific demands and user preferences.
Alizadeh, A; Zhang, L; Wang, M
2014-10-01
Mixing becomes challenging in microchannels because of the low Reynolds number. This study presents a mixing enhancement method for electro-osmotic flows in microchannels using vortices caused by temperature-patterned walls. Since the fluid is non-isothermal, the conventional form of the Nernst-Planck equation is modified by adding a new migration term that depends on both temperature and the internal electric potential gradient. This term gives rise to the so-called thermo-electrochemical migration phenomenon. The coupled Navier-Stokes, Poisson, modified Nernst-Planck, energy and advection-diffusion equations are iteratively solved by multiple lattice Boltzmann methods to obtain the velocity, internal electric potential, ion distribution, temperature and species concentration fields, respectively. To enhance the mixing, three schemes of temperature-patterned walls have been considered, with symmetrical or asymmetrical arrangements of blocks with surface charge and temperature. Modeling results show that the asymmetric arrangement is the most efficient scheme and enhances the mixing of species by 39% when the Reynolds number is on the order of 10^-3. These results may help improve the design of micro-mixers at low Reynolds number. Copyright © 2014 Elsevier Inc. All rights reserved.
Sensitivity enhancements in MQ-MAS NMR of spin-5/2 nuclei using modulated rf mixing pulses
NASA Astrophysics Data System (ADS)
Vosegaard, Thomas; Massiot, Dominique; Grandinetti, Philip J.
2000-08-01
An X-X̄ pulse train with stepped modulation frequency was employed to enhance the multiple-quantum to single-quantum coherence transfer in the mixing period of the multiple-quantum magic-angle spinning (MQ-MAS) experiment for spin I = 5/2 nuclei. Two MQ-MAS pulse sequences employing this mixing scheme for the triple-to-single and quintuple-to-single quantum coherence transfers have been designed and their performance is demonstrated for 27Al on samples of NaSi3AlO8 and 9Al2O3·2B2O3. Compared to the standard single-pulse mixing sequences, the sensitivity is approximately doubled in the present experiments.
High-resolution Observations of Flares in an Arch Filament System
NASA Astrophysics Data System (ADS)
Su, Yingna; Liu, Rui; Li, Shangwei; Cao, Wenda; Ahn, Kwangsu; Ji, Haisheng
2018-03-01
We study five sequential solar flares (SOL2015-08-07) occurring in Active Region 12396 observed with the Goode Solar Telescope (GST) at the Big Bear Solar Observatory, complemented by Interface Region Imaging Spectrograph and SDO observations. The main flaring region is an arch filament system (AFS) consisting of multiple bundles of dark filament threads enclosed by semicircular flare ribbons. We study the magnetic configuration and evolution of the active region by constructing coronal magnetic field models based on SDO/HMI magnetograms using two independent methods, i.e., the nonlinear force-free field (NLFFF) extrapolation and the flux rope insertion method. The models consist of multiple flux ropes with mixed signs of helicity, i.e., positive (negative) in the northern (southern) region, which is consistent with the GST observations of multiple filament bundles. The footprints of quasi-separatrix layers (QSLs) derived from the extrapolated NLFFF compare favorably with the observed flare ribbons. An interesting double-ribbon fine structure located at the east border of the AFS is consistent with the fine structure of the QSL’s footprint. Moreover, magnetic field lines traced along the semicircular footprint of a dome-like QSL surrounding the AFS are connected to the regions of significant helicity and Poynting flux injection. The maps of magnetic twist show that positive twist became dominant as time progressed, which is consistent with the injection of positive helicity before the flares. We hence conclude that these circular shaped flares are caused by 3D magnetic reconnection at the QSLs associated with the AFS possessing mixed signs of helicity.
The great descriptor melting pot: mixing descriptors for the common good of QSAR models.
Tseng, Yufeng J; Hopfinger, Anton J; Esposito, Emilio Xavier
2012-01-01
The usefulness and utility of QSAR modeling depend heavily on the ability to estimate the values of molecular descriptors relevant to the endpoints of interest, followed by an optimized selection of descriptors to form the best QSAR models from a representative set of those endpoints. The performance of a QSAR model is directly related to its molecular descriptors. QSAR modeling, specifically model construction and optimization, has benefited from its ability to borrow from other, unrelated fields, yet the molecular descriptors that form QSAR models have remained basically unchanged in both form and preferred usage. There are many types of endpoints that require multiple classes of descriptors (descriptors that encode 1D through multi-dimensional, 4D and above, content) to most fully capture the molecular features and interactions that contribute to the endpoint. The advantages of QSAR models constructed from multiple, and different, descriptor classes have been demonstrated in the exploration of markedly different, principally biological, systems and endpoints. Multiple examples of such QSAR applications using different descriptor sets are described and examined. The take-home message is that a major part of the future of QSAR analysis, and its application to modeling biological potency, ADME-Tox properties, general use in virtual screening applications, as well as its expanding use into new fields for building QSPR models, lies in developing strategies that combine and use 1D through nD molecular descriptors.
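The pooling of descriptor classes can be sketched numerically: standardize each class so that scale differences between, say, 1D and 2D descriptors do not dominate, concatenate them, and fit a simple regularized model. This is a hedged illustration on synthetic data, not any of the QSAR methodologies reviewed above:

```python
import numpy as np

def combine_descriptors(blocks):
    """Standardize each descriptor class, then concatenate column-wise so
    that no class dominates purely through its numeric scale."""
    scaled = []
    for b in blocks:
        b = np.asarray(b, dtype=float)
        scaled.append((b - b.mean(axis=0)) / (b.std(axis=0) + 1e-12))
    return np.hstack(scaled)

def ridge_fit(x, y, lam=1.0):
    """Ridge-regression QSAR model: w = (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(x.T @ x + lam * np.eye(x.shape[1]), x.T @ y)

rng = np.random.default_rng(0)
block_1d = rng.standard_normal((60, 3))   # e.g. whole-molecule properties
block_2d = rng.standard_normal((60, 4))   # e.g. topological indices
x = combine_descriptors([block_1d, block_2d])
w_true = np.array([1.0, 0.0, -0.5, 0.0, 2.0, 0.0, 0.0])
y = x @ w_true                            # synthetic noiseless endpoint
w_hat = ridge_fit(x, y, lam=1e-6)
```

In practice the dense ridge fit would be replaced by the optimized descriptor selection (genetic algorithms, stepwise schemes, and so on) that the review discusses.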
Ju, Jin Hyun; Crystal, Ronald G.
2017-01-01
Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. 
In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL. PMID:28505156
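The core construction in CONFETI — a sample-by-sample covariance for the mixed-model random effect built from a subset of expression components — can be sketched as follows. This simplification selects SVD components by index; the actual method uses ICA and data-driven criteria to decide which components are confounders rather than broad impact eQTL signal:

```python
import numpy as np

def confounder_covariance(y, keep):
    """Random-effect covariance from a confounder-only reconstruction of
    the expression matrix y (samples x genes). 'keep' lists the component
    indices treated as confounders (a hypothetical simplification)."""
    y = y - y.mean(axis=0)
    u, s, vt = np.linalg.svd(y, full_matrices=False)
    recon = (u[:, keep] * s[keep]) @ vt[keep]   # confounder-only signal
    k = recon @ recon.T
    return k * (k.shape[0] / np.trace(k))       # scale: mean diagonal = 1

rng = np.random.default_rng(0)
expr = rng.standard_normal((20, 100))           # 20 samples x 100 genes, synthetic
k = confounder_covariance(expr, keep=[0, 1, 2])
```

The resulting matrix `k` is what a linear mixed model would use as the covariance of the random effect when testing each variant.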
Dynamics of a nuclear invasion
NASA Astrophysics Data System (ADS)
Roper, Marcus; Simonin, Anna; Glass, N. Louise
2009-11-01
Filamentous fungi grow as a network of continuous interconnected tubes, containing nuclei that move freely through a shared cytoplasm. Wild fungi are frequently chimerical: two nuclei from the same physiological individual may be genetically different. Such internal diversity can arise either from spontaneous mutations during nuclear division, or by nuclear exchange when two individuals fuse, sharing their resources and organelles to become a single individual. This diversity is thought to be essential to adaptation in plant pathogens, allowing, for instance, an invading fungus to present many different genetic identities against its host's immune response. However, it is clear that the presence of multiple genetic lineages within the same physiological individual can also pose challenges - lineages that are present in growing hyphal tips will multiply preferentially. Nuclei must therefore be kept well mixed across a growing front. By applying models developed to describe mixing of fluids in microfluidic reactors to experimental observations of lineage mixing in a growing Neurospora crassa colony, we show how this mixing is achieved. In particular we analyze the individual contributions from interdigitation of hyphae and from nuclear transport.
NASA Astrophysics Data System (ADS)
Froggatt, C. D.
2003-01-01
The quark-lepton mass problem and the ideas of mass protection are reviewed. The hierarchy problem and suggestions for its resolution, including Little Higgs models, are discussed. The Multiple Point Principle (MPP) is introduced and used within the Standard Model (SM) to predict the top quark and Higgs particle masses. Mass matrix ansätze are considered; in particular we discuss the lightest family mass generation model, in which all the quark mixing angles are successfully expressed in terms of simple expressions involving quark mass ratios. It is argued that an underlying chiral flavour symmetry is responsible for the hierarchical texture of the fermion mass matrices. The phenomenology of neutrino mass matrices is briefly discussed.
General squark flavour mixing: constraints, phenomenology and benchmarks
De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...
2015-11-19
Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
Memory interface simulator: A computer design aid
NASA Technical Reports Server (NTRS)
Taylor, D. S.; Williams, T.; Weatherbee, J. E.
1972-01-01
Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPU's and the interface between the CPU's and RAM. Design tradeoffs are presented in the following areas: Bus widths, CPU microprogram read only memory cycle time, multiple instruction fetch, and instruction mix.
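The tradeoff the ARMMS simulator measured — instruction fetch latency as CPUs contend for memory — can be miniaturized into a toy event-driven model. All parameters here are hypothetical; the actual study also varied bus widths, ROM cycle time, multiple-fetch policy and instruction mix:

```python
import heapq

def simulate(n_cpus, mem_cycle, n_fetches):
    """Average instruction-fetch latency when n_cpus issue back-to-back
    fetches through a single memory port serving one request per mem_cycle."""
    port_free = 0.0
    latencies = []
    pending = [(0.0, c) for c in range(n_cpus)]   # (request time, cpu id)
    heapq.heapify(pending)
    done = [0] * n_cpus
    while pending:
        t, c = heapq.heappop(pending)
        start = max(t, port_free)                 # wait if the port is busy
        port_free = start + mem_cycle
        latencies.append(port_free - t)           # queueing wait + service
        done[c] += 1
        if done[c] < n_fetches:
            heapq.heappush(pending, (port_free, c))
    return sum(latencies) / len(latencies)
```

With one CPU the average latency equals the memory cycle time; adding CPUs drives it toward n_cpus × mem_cycle as the port saturates, which is the kind of interface tradeoff the model runs quantified.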
Mixing of multiple metal vapours into an arc plasma in gas tungsten arc welding of stainless steel
NASA Astrophysics Data System (ADS)
Park, Hunkwan; Trautmann, Marcus; Tanaka, Keigo; Tanaka, Manabu; Murphy, Anthony B.
2017-11-01
A computational model of the mixing of multiple metal vapours, formed by vaporization of the surface of an alloy workpiece, into the thermal arc plasma in gas tungsten arc welding (GTAW) is presented. The model incorporates the combined diffusion coefficient method extended to allow treatment of three gases, and is applied to treat the transport of both chromium and iron vapour in the helium arc plasma. In contrast to previous models of GTAW, which predict that metal vapours are swept away to the edge of the arc by the plasma flow, it is found that the metal vapours penetrate strongly into the arc plasma, reaching the cathode region. The predicted results are consistent with published measurements of the intensity of atomic line radiation from the metal vapours. The concentration of chromium vapour is predicted to be higher than that of iron vapour due to its larger vaporization rate. An accumulation of chromium vapour is predicted to occur on the cathode at about 1.5 mm from the cathode tip, in agreement with published measurements. The arc temperature is predicted to be strongly reduced due to the strong radiative emission from the metal vapours. The driving forces causing the diffusion of metal vapours into the helium arc are examined, and it is found that diffusion due to the applied electric field (cataphoresis) is dominant. This is explained in terms of large ionization energies and the small mass of helium compared to those of the metal vapours.
A hybrid structured-unstructured grid method for unsteady turbomachinery flow computations
NASA Technical Reports Server (NTRS)
Mathur, Sanjay R.; Madavan, Nateri K.; Rajagopalan, R. G.
1993-01-01
A hybrid grid technique for the solution of 2D, unsteady flows is developed. This technique is capable of handling complex, multiple component geometries in relative motion, such as those encountered in turbomachinery. The numerical approach utilizes a mixed structured-unstructured zonal grid topology along with modeling equations and solution methods that are most appropriate in the individual domains, therefore combining the advantages of both structured and unstructured grid techniques.
Plasmonic Metallurgy Enabled by DNA.
Ross, Michael B; Ku, Jessie C; Lee, Byeongdu; Mirkin, Chad A; Schatz, George C
2016-04-13
Mixed silver and gold plasmonic nanoparticle architectures are synthesized using DNA-programmable assembly, unveiling exquisitely tunable optical properties that are predicted and explained both by effective thin-film models and explicit electrodynamic simulations. These data demonstrate that the manner and ratio with which multiple metallic components are arranged can greatly alter optical properties, including tunable color and asymmetric reflectivity behavior of relevance for thin-film applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
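Effective-medium ideas like the thin-film models mentioned above can be illustrated with the classic Maxwell Garnett mixing rule for spherical inclusions. This is a generic sketch with hypothetical permittivities; the paper's models for DNA-assembled Ag/Au architectures are more elaborate:

```python
def maxwell_garnett(eps_incl, eps_host, f):
    """Maxwell Garnett effective permittivity for spherical inclusions of
    permittivity eps_incl at volume fraction f in a host of eps_host."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# Hypothetical values: a metal-like inclusion (negative real permittivity)
# in a silica-like host.
eps_mix = maxwell_garnett(-10.0 + 1.0j, 2.25, 0.3)
```

The formula returns the host permittivity at f = 0 and the inclusion permittivity at f = 1, with plasmonic resonances emerging at fractions where the denominator becomes small.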
Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall
2018-01-01
Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model for repeated measures to model the disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes required to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy of the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that, compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
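The optimal-weighting idea can be made concrete: for an outcome effect vector δ and covariance Σ, the linear combination maximizing the standardized effect (and hence minimizing the required sample size) is w ∝ Σ⁻¹δ. A sketch with hypothetical numbers, not the DIAN estimates or the paper's full MMRM derivation:

```python
import numpy as np

def optimal_weights(delta, sigma):
    """Weights maximizing delta'w / sqrt(w' Sigma w); any positive multiple
    of Sigma^{-1} delta is optimal, normalized here to unit norm."""
    w = np.linalg.solve(sigma, delta)
    return w / np.linalg.norm(w)

def effect_size(w, delta, sigma):
    """Standardized treatment effect of the combined endpoint w'y."""
    w = np.asarray(w, dtype=float)
    return (w @ delta) / np.sqrt(w @ sigma @ w)

delta = np.array([0.30, 0.20])              # hypothetical per-outcome effects
sigma = np.array([[1.0, 0.5], [0.5, 1.0]])  # hypothetical outcome covariance
w_opt = optimal_weights(delta, sigma)
```

Since required sample size scales as 1/effect², the combined endpoint (effect sqrt(δ'Σ⁻¹δ) ≈ 0.306 here, versus 0.30 and 0.20 for the single outcomes) never needs more subjects than the best single outcome.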
Guo, Lili; Qi, Junwei; Xue, Wei
2018-01-01
This article proposes a novel active localization method based on the mixed polarization multiple signal classification (MP-MUSIC) algorithm for positioning a metal target or an insulator target in the underwater environment by using a uniform circular antenna (UCA). The boundary element method (BEM) is introduced to analyze the boundary of the target by use of a matrix equation. In this method, an electric dipole source, as part of the locating system, is set perpendicular to the plane of the UCA. As a result, the UCA receives only the induction field of the target. The potential of each electrode of the UCA is used as spatial-temporal localization data; unlike the conventional fields-based localization method, there is no need to obtain the field component in each direction, so the approach can be easily implemented in practical engineering applications. A simulation model and a physical experiment are constructed. The simulation and experiment results demonstrate accurate positioning performance, verifying the effectiveness of the proposed localization method for underwater target localization. PMID:29439495
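The MUSIC principle underlying MP-MUSIC can be sketched in its classical narrowband form: eigendecompose the array covariance, keep the noise subspace, and scan candidate steering vectors for directions nearly orthogonal to it. The UCA geometry below (8 elements, radius 0.5 λ, azimuth-only) is generic; the paper's quasi-static, mixed-polarization formulation differs:

```python
import numpy as np

def uca_steering(theta, n_elem=8, r_over_lambda=0.5):
    """Azimuth steering vectors of a uniform circular array, one column
    per candidate angle (radians)."""
    phi = 2 * np.pi * np.arange(n_elem) / n_elem
    return np.exp(2j * np.pi * r_over_lambda *
                  np.cos(theta[None, :] - phi[:, None]))

def music_spectrum(r_cov, steering, n_sources):
    """Classical MUSIC pseudospectrum: 1 / ||E_n^H a(theta)||^2."""
    _, vecs = np.linalg.eigh(r_cov)             # eigenvalues ascending
    en = vecs[:, : r_cov.shape[0] - n_sources]  # noise subspace
    denom = np.linalg.norm(en.conj().T @ steering, axis=0) ** 2
    return 1.0 / (denom + 1e-12)

grid = np.deg2rad(np.arange(360.0))             # 1-degree azimuth grid
a_true = uca_steering(np.array([np.deg2rad(50.0)]))
r_cov = a_true @ a_true.conj().T + 0.01 * np.eye(8)  # one source + noise floor
spec = music_spectrum(r_cov, uca_steering(grid), n_sources=1)
```

The spectrum peaks at the 50° grid point, because that steering vector lies in the signal subspace and is orthogonal to the noise subspace.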
High-capacity mixed fiber-wireless backhaul networks using MMW radio-over-MCF and MIMO
NASA Astrophysics Data System (ADS)
Pham, Thu A.; Pham, Hien T. T.; Le, Hai-Chau; Dang, Ngoc T.
2017-10-01
In this paper, we have proposed a high-capacity backhaul network, which is based on mixed fiber-wireless systems using millimeter-wave radio-over-multi-core fiber (MMW RoMCF) and multiple-input multiple-output (MIMO) transmission, for next-generation mobile access networks. In addition, we also investigate the use of an avalanche photodiode (APD) to improve the capacity of the proposed backhaul downlink. We then theoretically analyze the system capacity comprehensively while considering various physical impairments including noise, MCF crosstalk, and fading modeled by a Rician MIMO channel. The feasibility of the proposed backhaul architecture is verified via numerical simulation experiments. The results demonstrate that our backhaul solution can significantly enhance the backhaul capacity; a system capacity of 24 bps/Hz can be achieved with a 20-km 8-core MCF and 8 × 8 MIMO transmitted over a 100-m Rician fading link. It is also shown that the system performance, in terms of channel capacity, strongly depends on the MCF inter-core crosstalk, which is governed by the mode coupling coefficient, the core pitch, and the bending radius.
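The reported spectral efficiency in bps/Hz connects to the standard MIMO capacity formula C = log₂ det(I + (SNR/Nt) H Hᴴ) with equal power per transmit antenna. A sketch with a simplistic Rician draw — the unit-power all-ones line-of-sight matrix is an assumption for illustration, not the paper's channel model:

```python
import numpy as np

def mimo_capacity(h, snr):
    """Capacity (bps/Hz) of one channel realization, equal power per antenna."""
    nr, nt = h.shape
    m = np.eye(nr) + (snr / nt) * h @ h.conj().T
    return float(np.real(np.log2(np.linalg.det(m))))

def rician_channel(nr, nt, k_factor, rng):
    """Hypothetical Rician draw: fixed LOS (all-ones) plus Rayleigh scatter."""
    los = np.ones((nr, nt), dtype=complex)
    nlos = (rng.standard_normal((nr, nt)) +
            1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    return (np.sqrt(k_factor / (k_factor + 1)) * los +
            np.sqrt(1.0 / (k_factor + 1)) * nlos)

rng = np.random.default_rng(0)
c_rician = mimo_capacity(rician_channel(8, 8, k_factor=5.0, rng=rng), snr=10.0)
```

As a sanity check, an identity 4 × 4 channel at SNR 4 gives exactly 4 · log₂(1 + 1) = 4 bps/Hz; a strong LOS (large K-factor) makes H nearly rank-one and reduces the multiplexing gain.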
Progression of regional grey matter atrophy in multiple sclerosis
Marinescu, Razvan V; Young, Alexandra L; Firth, Nicholas C; Jorge Cardoso, M; Tur, Carmen; De Angelis, Floriana; Cawley, Niamh; Brownlee, Wallace J; De Stefano, Nicola; Laura Stromillo, M; Battaglini, Marco; Ruggieri, Serena; Gasperini, Claudio; Filippi, Massimo; Rocca, Maria A; Rovira, Alex; Sastre-Garriga, Jaume; Geurts, Jeroen J G; Vrenken, Hugo; Wottschel, Viktor; Leurs, Cyra E; Uitdehaag, Bernard; Pirpamer, Lukas; Enzinger, Christian; Ourselin, Sebastien; Gandini Wheeler-Kingshott, Claudia A; Chard, Declan; Thompson, Alan J; Barkhof, Frederik; Alexander, Daniel C; Ciccarelli, Olga
2018-01-01
Abstract See Stankoff and Louapre (doi:10.1093/brain/awy114) for a scientific commentary on this article. Grey matter atrophy is present from the earliest stages of multiple sclerosis, but its temporal ordering is poorly understood. We aimed to determine the sequence in which grey matter regions become atrophic in multiple sclerosis and its association with disability accumulation. In this longitudinal study, we included 1417 subjects: 253 with clinically isolated syndrome, 708 with relapsing-remitting multiple sclerosis, 128 with secondary-progressive multiple sclerosis, 125 with primary-progressive multiple sclerosis, and 203 healthy control subjects from seven European centres. Subjects underwent repeated MRI (total number of scans 3604); the mean follow-up for patients was 2.41 years (standard deviation = 1.97). Disability was scored using the Expanded Disability Status Scale. We calculated the volume of brain grey matter regions and brainstem using an unbiased within-subject template and used an established data-driven event-based model to determine the sequence of occurrence of atrophy and its uncertainty. We assigned each subject to a specific event-based model stage, based on the number of their atrophic regions. Linear mixed-effects models were used to explore associations between the rate of increase in event-based model stages, and T2 lesion load, disease-modifying treatments, comorbidity, disease duration and disability accumulation. The first regions to become atrophic in patients with clinically isolated syndrome and relapse-onset multiple sclerosis were the posterior cingulate cortex and precuneus, followed by the middle cingulate cortex, brainstem and thalamus. A similar sequence of atrophy was detected in primary-progressive multiple sclerosis with the involvement of the thalamus, cuneus, precuneus, and pallidum, followed by the brainstem and posterior cingulate cortex. 
The cerebellum, caudate and putamen showed early atrophy in relapse-onset multiple sclerosis and late atrophy in primary-progressive multiple sclerosis. Patients with secondary-progressive multiple sclerosis showed the highest event-based model stage (the highest number of atrophic regions, P < 0.001) at the study entry. All multiple sclerosis phenotypes, but clinically isolated syndrome, showed a faster rate of increase in the event-based model stage than healthy controls. T2 lesion load and disease duration in all patients were associated with increased event-based model stage, but no effects of disease-modifying treatments and comorbidity on event-based model stage were observed. The annualized rate of event-based model stage was associated with the disability accumulation in relapsing-remitting multiple sclerosis, independent of disease duration (P < 0.0001). The data-driven staging of atrophy progression in a large multiple sclerosis sample demonstrates that grey matter atrophy spreads to involve more regions over time. The sequence in which regions become atrophic is reasonably consistent across multiple sclerosis phenotypes. The spread of atrophy was associated with disease duration and with disability accumulation over time in relapsing-remitting multiple sclerosis. PMID:29741648
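The staging step described above — assigning each subject a stage equal to the number of regions that have become atrophic — can be caricatured with z-scores against controls. The threshold and volumes below are hypothetical, and the actual event-based model estimates an ordering and per-region event probabilities rather than applying a fixed cutoff:

```python
import numpy as np

def ebm_stage(volumes, control_mean, control_sd, z_cut=-1.96):
    """Stage = number of regions whose volume z-score versus healthy
    controls falls below z_cut (fixed-threshold caricature only)."""
    z = (np.asarray(volumes, dtype=float) - control_mean) / control_sd
    return int(np.sum(z < z_cut))

# Hypothetical regional volumes (e.g. precuneus, thalamus, brainstem).
control_mean = np.array([10.0, 8.0, 6.0])
control_sd = np.array([1.0, 1.0, 1.0])
stage = ebm_stage([7.5, 7.9, 6.1], control_mean, control_sd)
```

The per-subject stages produced this way are the kind of quantity the study then related to lesion load, disease duration and disability via linear mixed-effects models.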
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ²) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600, when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
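The two numerical ingredients named in the abstract, an iterative Kepler-equation solver and compensated summation for accumulating the goodness-of-fit statistic, can be sketched on the CPU in Python. The CUDA kernels themselves are not reproduced here, and the residuals used for the χ² example are made-up values:

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Newton-Raphson solution of Kepler's equation M = E - e*sin(E) for E."""
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M
        E -= f / (1.0 - e * math.cos(E))
        if abs(f) < tol:
            break
    return E

def kahan_sum(values):
    """Compensated (Kahan) summation: recovers precision lost in naive sums."""
    total = comp = 0.0
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return total

E = solve_kepler(1.0, 0.3)           # eccentric anomaly for M = 1.0 rad, e = 0.3
residuals = [0.5, -0.3, 0.1]         # hypothetical (observed - model)/sigma values
chi2 = kahan_sum(r * r for r in residuals)
```

On the GPU each thread would run `solve_kepler` for one (system, observation) pair; compensated summation is what lets most of the χ² accumulation stay in single precision.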
Predictors of 2,4-dichlorophenoxyacetic acid exposure among herbicide applicators
BHATTI, PARVEEN; BLAIR, AARON; BELL, ERIN M.; ROTHMAN, NATHANIEL; LAN, QING; BARR, DANA B.; NEEDHAM, LARRY L.; PORTENGEN, LUTZEN; FIGGS, LARRY W.; VERMEULEN, ROEL
2009-01-01
To determine the major factors affecting the urinary levels of 2,4-dichlorophenoxyacetic acid (2,4-D) among county noxious weed applicators in Kansas, we used a regression technique that accounted for multiple days of exposure. We collected 136 12-h urine samples from 31 applicators during the course of two spraying seasons (April to August of 1994 and 1995). Using mixed-effects models, we constructed exposure models that related urinary 2,4-D measurements to weighted self-reported work activities from daily diaries collected over 5 to 7 days before the collection of the urine sample. Our primary weights were based on an earlier pharmacokinetic analysis of turf applicators; however, we examined a series of alternative weighting schemes to assess the impact of the specific weights and the number of days before urine sample collection that were considered. The derived models, which related multiple days of exposure to a single urine measurement, seemed robust with regard to the exact weights but less so to the number of days considered, although the determinants from the primary model could be fitted, with marginal losses of fit, to the data from the other weighting schemes that considered different numbers of days. In the primary model, the total time of all activities (spraying, mixing, other activities), spraying method, month of observation, application concentration, and wet gloves were significant determinants of urinary 2,4-D concentration and explained 16% of the between-worker variance and 23% of the within-worker variance of urinary 2,4-D levels. As a large proportion of the variance remained unexplained, further studies should be conducted to systematically assess other exposure determinants. PMID:19319162
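The weighting scheme can be illustrated with a toy exposure metric: diary-reported activity hours on the days before urine collection are combined with lag-dependent weights before entering the regression. The weight values below are assumptions chosen for illustration, not the pharmacokinetic weights used in the study:

```python
# Hypothetical pharmacokinetic weights: relative contribution of activity
# d days before urine collection (index 0 = sample day), decaying with lag.
weights = [0.40, 0.25, 0.15, 0.10, 0.06, 0.04]  # assumed values, days 0..5

def weighted_exposure(daily_hours, weights):
    """Weighted sum of daily activity hours over the days preceding sampling.

    daily_hours[d] = diary-reported hours of spraying/mixing d days before collection.
    """
    return sum(w * h for w, h in zip(weights, daily_hours))

# Diary example: 3 h of activity yesterday, 5 h two days ago, nothing else.
metric = weighted_exposure([0, 3, 5, 0, 0, 0], weights)
```

The mixed-effects model then regresses log urinary 2,4-D on this metric (and on determinants such as spraying method and glove wetness), with random effects separating between- and within-worker variance.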
Physiotherapy practice in the private sector: organizational characteristics and models.
Perreault, Kadija; Dionne, Clermont E; Rossignol, Michel; Poitras, Stéphane; Morin, Diane
2014-08-29
Although a large proportion of physiotherapists worldwide work in the private sector, very little is known about the organizations within which they practice. Such knowledge is important to help understand contexts of practice and how they influence the quality of services and patient outcomes. The purpose of this study was to: 1) describe characteristics of organizations where physiotherapists practice in the private sector, and 2) explore the existence of a taxonomy of organizational models. This was a cross-sectional quantitative survey of 236 randomly selected physiotherapists. Participants completed a purpose-designed questionnaire online or by telephone, covering organizational vision, resources, structures and practices. Organizational characteristics were analyzed descriptively, while organizational models were identified by multiple correspondence analyses. Most organizations were for-profit (93.2%), located in urban areas (91.5%), and within buildings containing multiple businesses/organizations (76.7%). The majority included multiple providers (89.8%) from diverse professions, mainly physiotherapy assistants (68.7%), massage therapists (67.3%) and osteopaths (50.2%). Four organizational models were identified: 1) solo practice, 2) middle-scale multiprovider, 3) large-scale multiprovider and 4) mixed. The results of this study provide a detailed description of the organizations where physiotherapists practice, and highlight the importance of human resources in differentiating organizational models. Further research examining the influence of these organizational characteristics and models on outcomes such as physiotherapists' professional practices and patient outcomes is needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valocchi, Albert; Werth, Charles; Liu, Wen-Tso
Bioreduction is being actively investigated as an effective strategy for subsurface remediation and long-term management of DOE sites contaminated by metals and radionuclides (i.e. U(VI)). These strategies require manipulation of the subsurface, usually through injection of chemicals (e.g., electron donor) which mix at varying scales with the contaminant to stimulate metal reducing bacteria. There is evidence from DOE field experiments suggesting that mixing limitations of substrates at all scales may affect biological growth and activity for U(VI) reduction. Although current conceptual models hold that biomass growth and reduction activity is limited by physical mixing processes, a growing body of literature suggests that reaction could be enhanced by cell-to-cell interaction occurring over length scales extending tens to thousands of microns. Our project investigated two potential mechanisms of enhanced electron transfer. The first is the formation of single- or multiple-species biofilms that transport electrons via direct electrical connection such as conductive pili (i.e. ‘nanowires’) through biofilms to where the electron acceptor is available. The second is through diffusion of electron carriers from syntrophic bacteria to dissimilatory metal reducing bacteria (DMRB). The specific objectives of this work are (i) to quantify the extent and rate that electrons are transported between microorganisms in physical mixing zones between an electron donor and electron acceptor (e.g. U(VI)), (ii) to quantify the extent that biomass growth and reaction are enhanced by interspecies electron transport, and (iii) to integrate mixing across scales (e.g., microscopic scale of electron transfer and macroscopic scale of diffusion) in an integrated numerical model to quantify the effects of these mechanisms on overall U(VI) reduction rates.
We tested these hypotheses with five tasks that integrate microbiological experiments, unique micro-fluidics experiments, flow cell experiments, and multi-scale numerical models. Continuous fed-batch reactors were used to derive kinetic parameters for DMRB, and to develop an enrichment culture for elucidation of syntrophic relationships in a complex microbial community. Pore and continuum scale experiments using microfluidic and bench top flow cells were used to evaluate the impact of cell-to-cell and microbial interactions on reaction enhancement in mixing-limited bioactive zones, and the mechanisms of this interaction. Some of the microfluidic experiments were used to develop and test models that consider direct cell-to-cell interactions during metal reduction. Pore scale models were incorporated into a multi-scale hybrid modeling framework that combines pore scale modeling at the reaction interface with continuum scale modeling. New computational frameworks for combining continuum and pore-scale models were also developed.
Martinez, Jorge L; Raiber, Matthias; Cendón, Dioni I
2017-01-01
The influence of mountain front recharge on the water balance of alluvial valley aquifers located in upland catchments of the Condamine River basin in Queensland, Australia, is investigated through the development of an integrated hydrogeological framework. A combination of three-dimensional (3D) geological modelling, hydraulic gradient maps, multivariate statistical analyses and hydrochemical mixing calculations is proposed for the identification of hydrochemical end-members and quantification of the relative contributions of each end-member to alluvial aquifer recharge. The recognised end-members correspond to diffuse recharge and lateral groundwater inflows from three hydrostratigraphic units directly connected to the alluvial aquifer. This approach allows mapping zones of potential inter-aquifer connectivity and areas of groundwater mixing between underlying units and the alluvium. Mixing calculations using samples collected under baseflow conditions reveal that lateral contribution from a regional volcanic aquifer system represents the majority (41%) of inflows to the alluvial aquifer. Diffuse recharge contribution (35%) and inflow from two sedimentary bedrock hydrostratigraphic units (collectively 24%) comprise the remainder of major recharge sources. A detailed geochemical assessment of alluvial groundwater evolution along a selected flowpath of a representative subcatchment of the Condamine River basin confirms mixing as a key process responsible for observed spatial variations in hydrochemistry. Dissolution of basalt-related minerals and dolomite, CO2 uptake, ion-exchange, precipitation of clay minerals, and evapotranspiration further contribute to the hydrochemical evolution of groundwater in the upland alluvial aquifer. This study highlights the benefits of undertaking an integrated approach that combines multiple independent lines of evidence.
The proposed methods can be applied to investigate processes associated with inter-aquifer mixing, including groundwater contamination resulting from depressurisation of underlying geological units hydraulically connected to the shallower water reservoirs. Copyright © 2016 Elsevier B.V. All rights reserved.
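The end-member mixing calculation reduces to a small linear system: one mass-balance row (the fractions sum to one) plus one row per conservative tracer. Below is a self-contained sketch with hypothetical tracer concentrations, chosen so the recovered fractions echo the 35%/41%/24% split reported above:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical conservative-tracer concentrations (mg/L) for three end-members:
# diffuse recharge, volcanic aquifer inflow, sedimentary bedrock inflow.
cl = [10.0, 40.0, 80.0]
br = [0.05, 0.20, 0.60]
sample = (39.1, 0.2435)  # observed alluvial groundwater (Cl, Br)

# Rows: mass balance (fractions sum to 1), Cl mixing, Br mixing.
A = [[1.0, 1.0, 1.0], cl, br]
b = [1.0, sample[0], sample[1]]
fractions = solve3(A, b)  # mixing fraction of each end-member
```

With more tracers than end-members the system is overdetermined and a least-squares fit replaces the exact solve; the structure of the equations is the same.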
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. With multiple alternative flow models advanced, the sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations.
The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
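A minimal version of the Monte Carlo sensitivity-ranking step, which samples uncertain inputs, evaluates a model, and ranks inputs by rank correlation with the output, can be sketched as follows. The surrogate "plume volume" model and the parameter ranges below are invented for illustration, not taken from the UGTA models:

```python
import random

def rankdata(xs):
    """1-based ranks of values (ties broken by order; adequate for a sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(xs), rankdata(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

random.seed(1)
# Hypothetical surrogate for plume volume: strongly sensitive to effective
# porosity, weakly sensitive to the sorption coefficient.
porosity = [random.uniform(0.01, 0.30) for _ in range(500)]
sorption = [random.uniform(0.0, 5.0) for _ in range(500)]
plume = [100.0 / p + 0.5 * k for p, k in zip(porosity, sorption)]

ranking = sorted(
    [("porosity", abs(spearman(porosity, plume))),
     ("sorption", abs(spearman(sorption, plume)))],
    key=lambda t: -t[1],
)
```

The project uses richer global methods (stepwise regression, contingency tables, classification trees), but all share this structure: propagate sampled parameter uncertainty through the model, then rank parameters by their influence on the output.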
Functional linear models for association analysis of quantitative traits.
Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao
2013-11-01
Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study.
© 2013 WILEY PERIODICALS, INC.
Manson, Steven M.; Evans, Tom
2007-01-01
We combine mixed-methods research with integrated agent-based modeling to understand land change and economic decision making in the United States and Mexico. This work demonstrates how sustainability science benefits from combining integrated agent-based modeling (which blends methods from the social, ecological, and information sciences) and mixed-methods research (which interleaves multiple approaches ranging from qualitative field research to quantitative laboratory experiments and interpretation of remotely sensed imagery). We test assumptions of utility-maximizing behavior in household-level landscape management in south-central Indiana, linking parcel data, land cover derived from aerial photography, and findings from laboratory experiments. We examine the role of uncertainty and limited information, preferences, differential demographic attributes, and past experience and future time horizons. We also use evolutionary programming to represent bounded rationality in agriculturalist households in the southern Yucatán of Mexico. This approach captures realistic rule of thumb strategies while identifying social and environmental factors in a manner similar to econometric models. These case studies highlight the role of computational models of decision making in land-change contexts and advance our understanding of decision making in general. PMID:18093928
Optimising the selection of food items for FFQs using Mixed Integer Linear Programming.
Gerdessen, Johanna C; Souverein, Olga W; van 't Veer, Pieter; de Vries, Jeanne Hm
2015-01-01
To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear Programming (MILP) model. The methodology was demonstrated for an FFQ with interest in energy, total protein, total fat, saturated fat, monounsaturated fat, polyunsaturated fat, total carbohydrates, mono- and disaccharides, dietary fibre and potassium. The food lists generated by the MILP model have good performance in terms of length, coverage and R2 (explained variance) of all nutrients. MILP-generated food lists were 32-40% shorter than a benchmark food list, whereas their quality in terms of R2 was similar to that of the benchmark. The results suggest that the MILP model makes the selection process faster, more standardised and transparent, and is especially helpful in coping with multiple nutrients. The complexity of the method does not increase with increasing number of nutrients. The generated food lists appear either shorter or provide more information than a food list generated without the MILP model.
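The MILP objective can be illustrated with a brute-force surrogate on a toy item pool: choose at most k food items so that the worst-covered nutrient is covered as well as possible. A real MILP solver replaces the exhaustive search over combinations; the item names and nutrient contributions below are hypothetical:

```python
from itertools import combinations

# Hypothetical nutrient contributions (fraction of population intake explained)
# per candidate food item, for three nutrients of interest.
items = {
    "bread":  {"energy": 0.20, "fibre": 0.25, "fat": 0.05},
    "cheese": {"energy": 0.10, "fibre": 0.00, "fat": 0.30},
    "apple":  {"energy": 0.05, "fibre": 0.20, "fat": 0.00},
    "butter": {"energy": 0.08, "fibre": 0.00, "fat": 0.35},
    "beans":  {"energy": 0.07, "fibre": 0.30, "fat": 0.02},
}

def coverage(selection):
    """Worst-case nutrient coverage: the objective maximises the weakest nutrient."""
    nutrients = ["energy", "fibre", "fat"]
    return min(sum(items[i][n] for i in selection) for n in nutrients)

def best_list(max_len):
    """Exhaustive surrogate for the MILP: best selection of at most max_len items."""
    best = max(
        (combo for k in range(1, max_len + 1)
         for combo in combinations(items, k)),
        key=coverage,
    )
    return set(best)

shortlist = best_list(3)
```

The exhaustive search is exponential in the pool size, which is exactly why the paper formulates the problem as an MILP: the solver handles realistic pools of hundreds of items and ten nutrients, and complexity does not grow with the number of nutrients.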
ERIC Educational Resources Information Center
Dissemination and Assessment Center for Bilingual Education, Austin, TX.
This is one of a series of student booklets designed for use in a bilingual mathematics program in grades 6-8. The general format is to present each page in both Spanish and English. The mathematical topics in this booklet include equivalent fractions, mixed numbers, and multiplication of fractions and mixed numbers. (MK)
ERIC Educational Resources Information Center
Walker, Michael E.; Kim, Sooyeon
2010-01-01
This study examined the use of an all multiple-choice (MC) anchor for linking mixed format tests containing both MC and constructed-response (CR) items, in a nonequivalent groups design. An MC-only anchor could effectively link two such test forms if either (a) the MC and CR portions of the test measured the same construct, so that the MC anchor…
Spatial scan statistics for detection of multiple clusters with arbitrary shapes.
Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray
2016-12-01
In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.
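The Benjamini-Hochberg (1995) step used above to threshold cluster p-values while controlling the false discovery rate is standard and easy to state in code; the p-values below are illustrative:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Boolean reject flag per p-value under the Benjamini-Hochberg FDR procedure."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m)*alpha; reject ranks 1..k.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

flags = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60], alpha=0.05)
```

In the scan-statistic setting, each candidate cluster contributes one p-value; clusters whose flag survives the procedure are then regrouped via the quasi-deviance criterion.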
NASA Technical Reports Server (NTRS)
Ryoo, Ju-Mee; Johnson, Matthew S.; Iraci, Laura T.; Yates, Emma L.; Gore, Warren
2017-01-01
High ozone (O3) concentrations at low altitudes (1.5-4 km) were detected from airborne Alpha Jet Atmospheric eXperiment (AJAX) measurements on 30 May 2012 off the coast of California (CA). We investigate the causes of those elevated O3 concentrations using airborne measurements and various models. GEOS-Chem simulation shows that the contribution from local sources is likely small. A back trajectory model was used to determine the air mass origins and how much they contributed to the O3 over CA. Low-level potential vorticity (PV) from Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data appears to be a result of diabatic heating and mixing of air at lower altitudes, rather than of direct transport from a stratospheric intrusion. The Q diagnostic, which is a measure of the mixing of the air masses, indicates that there is sufficient mixing along the trajectory to indicate that O3 from the different origins is mixed and transported to the western U.S. The back-trajectory model simulation demonstrates that the air masses of interest came mostly from the mid troposphere (MT, 76%), but the contribution of the lower troposphere (LT, 19%) is also significant compared to that from the upper troposphere/lower stratosphere (UTLS, 5%). Air coming from the LT appears to originate mostly over Asia. The possible surface impact of the high O3 transported aloft on the surface O3 concentration through vertical and horizontal transport within a few days is substantiated by the influence maps determined from the Weather Research and Forecasting Stochastic Time Inverted Lagrangian Transport (WRF-STILT) model and the observed increases in surface ozone mixing ratios. Contrasting this complex case with a stratospheric-dominant event emphasizes the contribution of each source to the high O3 concentration in the lower altitudes over CA.
Integrated analyses using models, reanalysis, and diagnostic tools allow high ozone values detected by in situ measurements to be attributed to multiple source processes.
NASA Astrophysics Data System (ADS)
Endreny, T. A.; Robinson, J.
2012-12-01
River restoration structures, also known as river steering deflectors, are designed to reduce bank shear stress by generating wake zones between the bank and the constricted conveyance region. There is interest in characterizing the surface transient storage (STS) and associated biogeochemical processing in the STS zones around these structures to quantify the ecosystem benefits of river restoration. This research explored how the hydraulics around river restoration structures prohibit application of transient storage models designed for homogeneous, completely mixed STS zones. We used slug and constant rate injections of a conservative tracer in a third-order river in Onondaga County, NY over the course of five experiments at varying flow regimes. Recovered breakthrough curves spanned a transect including the main channel and wake zone at a j-hook restoration structure. We noted divergent patterns of peak solute concentration and times within the wake zone regardless of transect location within the structure. Analysis reveals an inhomogeneous STS zone which is frequently still loading tracer after the main channel has peaked. The breakthrough curve loading patterns at the restoration structure violated the assumptions of simplified "random walk" two-zone transient storage models which seek to identify representative STS zones and zone locations. Use of structure-scale Wiener filter-based multi-rate mass transfer models to characterize STS zone residence times is similarly dependent on a representative zone location. Each two-zone model assumes one zone is a completely mixed STS zone and the other a completely mixed main channel. Our research reveals limits to simple application of the recently developed two-zone models, and raises important questions about the measurement scale necessary to identify critical STS properties at restoration sites.
An explanation for the incompletely mixed STS zone may be the distinct hydraulics at restoration sites, including a constrained high-velocity conveyance region closely abutting a wake zone that receives periodic disruption from vortices shed by the upstream structure. Figure 1. River restoration j-hook with blue dye revealing main channel and edge of wake zone with multiple surface transient storage zones.
Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.
McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark
2018-07-01
To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
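The sparsity-promoting MAP idea, a Laplace-type prior turning the per-voxel inverse problem into an L1-penalised least-squares fit over dictionary entries, can be illustrated with plain iterative soft-thresholding (ISTA). This is a generic sketch, not the paper's algorithm, and the four-sample "fingerprints" below are toy values (the third atom is deliberately orthogonal to the first two):

```python
def soft(x, t):
    """Soft-thresholding: the proximal operator of the L1 (Laplace-prior) term."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista(D, y, lam, step, n_iter=500):
    """Iterative soft-thresholding for min_w 0.5*||D w - y||^2 + lam*||w||_1.

    D: list of dictionary columns (signal evolutions); y: mixed voxel signal.
    """
    n, k = len(y), len(D)
    w = [0.0] * k
    for _ in range(n_iter):
        resid = [sum(D[j][i] * w[j] for j in range(k)) - y[i] for i in range(n)]
        grad = [sum(D[j][i] * resid[i] for i in range(n)) for j in range(k)]
        w = [soft(w[j] - step * grad[j], step * lam) for j in range(k)]
    return w

# Toy 4-sample "fingerprints" for three hypothetical dictionary entries.
D = [[1.0, 0.0, 0.0, 1.0],    # tissue A
     [0.0, 1.0, 1.0, 0.0],    # tissue B
     [1.0, 0.0, 0.0, -1.0]]   # unrelated entry
# Mixed voxel: 70% tissue A + 30% tissue B.
y = [0.7, 0.3, 0.3, 0.7]
w = ista(D, y, lam=0.01, step=0.25)
```

The L1 penalty slightly shrinks the recovered weights (0.695 and 0.295 here instead of 0.7 and 0.3) while driving the non-contributing entry exactly to zero, which is the behaviour the sparse prior is meant to buy.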
How old is old in allegations of age discrimination? The limitations of existing law.
Wiener, Richard L; Farnum, Katlyn S
2016-10-01
Under Title VII, courts may give a mixed motive instruction allowing jurors to determine that defendants are liable for discrimination if an illegal factor (here: race, color, religion, sex, or national origin) contributed to an adverse decision. Recently, the Supreme Court held that to conclude that an employer discriminated against a worker because of age, the Age Discrimination in Employment Act, unlike Title VII of the Civil Rights Act of 1964, requires "but for" causality, necessitating jurors to find that age was the determinative factor in an employer's adverse decision regarding that worker. Using a national online sample (N = 392) and two study phases, one to measure stereotypes and a second to present experimental manipulations, this study tested whether older worker stereotypes as measured through the lens of the Stereotype Content Model, instruction type (but for vs. mixed motive causality), and plaintiff age influenced mock juror verdicts in an age discrimination case. Decision modeling in Phase 2 with three levels of case orientation (i.e., proplaintiff, prodefendant, and neutral) showed that participants relied on multiple factors when making a decision, as opposed to just one, suggesting that mock jurors favor a mixed model approach to discrimination verdict decisions. In line with previous research, instruction effects showed that mock jurors found in favor of plaintiffs under mixed motive instructions but not under "but for" instructions, especially for older plaintiffs (64- and 74-year-old as opposed to 44- and 54-year-old plaintiffs). Most importantly, in accordance with the Stereotype Content Model theory, competence and warmth stereotypes moderated the instruction effects found for specific judgments. The results of this study show the importance of the type of legal causality required for age discrimination cases. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Weir, Christopher J.; Rubio, Noah; Rabinovich, Roberto; Pinnock, Hilary; Hanley, Janet; McCloughan, Lucy; Drost, Ellen M.; Mantoani, Leandro C.; MacNee, William; McKinstry, Brian
2016-01-01
The Bland-Altman limits of agreement method is widely used to assess how well the measurements produced by two raters, devices or systems agree with each other. However, mixed effects versions of the method which take into account multiple sources of variability are less well described in the literature. We address the practical challenges of applying mixed effects limits of agreement to the comparison of several devices to measure respiratory rate in patients with chronic obstructive pulmonary disease (COPD). Respiratory rate was measured in 21 people with a range of severity of COPD. Participants were asked to perform eleven different activities representative of daily life during a laboratory-based standardised protocol of 57 minutes. A mixed effects limits of agreement method was used to assess the agreement of five commercially available monitors (Camera, Photoplethysmography (PPG), Impedance, Accelerometer, and Chest-band) with the current gold standard device for measuring respiratory rate. Results produced using mixed effects limits of agreement were compared to results from a fixed effects method based on analysis of variance (ANOVA) and were found to be similar. The Accelerometer and Chest-band devices produced the narrowest limits of agreement (-8.63 to 4.27 and -9.99 to 6.80 respectively) with mean bias -2.18 and -1.60 breaths per minute. These devices also had the lowest within-participant and overall standard deviations (3.23 and 3.29 for Accelerometer and 4.17 and 4.28 for Chest-band respectively). The mixed effects limits of agreement analysis enabled us to answer the question of which devices showed the strongest agreement with the gold standard device with respect to measuring respiratory rates.
In particular, the estimated within-participant and overall standard deviations of the differences, which are easily obtainable from the mixed effects model results, gave a clear indication that the Accelerometer and Chest-band devices performed best. PMID:27973556
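The basic (fixed-effects) Bland-Altman computation behind the limits of agreement is shown below on made-up paired respiratory rates. The mixed-effects version in the paper additionally partitions within- and between-participant variance across repeated readings, which this simple sketch ignores:

```python
from statistics import mean, stdev

# Hypothetical paired respiratory rates (breaths/min): device vs. gold standard.
device = [14, 18, 22, 16, 20, 25, 12, 17]
gold =   [15, 20, 21, 18, 22, 26, 14, 18]

diffs = [d - g for d, g in zip(device, gold)]
bias = mean(diffs)                         # systematic offset of the device
sd = stdev(diffs)                          # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd) # 95% limits of agreement
```

With multiple readings per participant, treating all pairs as independent (as above) understates the variance structure; the mixed-effects model replaces `sd` with components estimated from random participant effects, which is what lets the paper report separate within-participant and overall standard deviations.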
2013-01-01
Background Despite the widespread use of multiple-choice assessments in medical education, current practice and published advice concerning the number of response options remain equivocal. This article describes an empirical study contrasting the quality of three 60-item multiple-choice test forms within the Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) Fetal Surveillance Education Program (FSEP). The three forms are described below. Methods The first form featured four response options per item. The second form featured three response options, having removed the least functioning option from each item in the four-option counterpart. The third test form was constructed by retaining the best performing version of each item from the first two test forms. It contained both three- and four-option items. Results Psychometric and educational factors were taken into account in formulating an approach to test construction for the FSEP. The four-option test performed better than the three-option test overall, but some items were improved by the removal of options. The mixed-option test demonstrated better measurement properties than the fixed-option tests, and has become the preferred test format in the FSEP program. The criteria used were reliability, errors of measurement, and fit to the item response model. Conclusions The position taken is that decisions about the number of response options should be made at the item level, with plausible options being added to complete each item on both psychometric and educational grounds rather than complying with a uniform policy. The point is to construct the better performing item in providing the best psychometric and educational information. PMID:23453056
Zoanetti, Nathan; Beaves, Mark; Griffin, Patrick; Wallace, Euan M
2013-03-04
Wu, Limin; Li, Nainong; Zhang, Mingfeng; Xue, Sheng-Li; Cassady, Kaniel; Lin, Qing; Riggs, Arthur D.; Zeng, Defu
2015-01-01
Multiple sclerosis (MS) is an autoimmune inflammatory disease of the central nervous system with demyelination, axon damage, and paralysis. Induction of mixed chimerism with allogeneic donors has been shown to not cause graft-versus-host disease (GVHD) in animal models and humans. We have reported that induction of MHC-mismatched mixed chimerism can cure autoimmunity in autoimmune NOD mice, but this approach has not yet been tested in animal models of MS, such as experimental autoimmune encephalomyelitis (EAE). Here, we report that MHC-mismatched mixed chimerism with C57BL/6 (H-2b) donor in SJL/J (H-2s) EAE recipients eliminates clinical symptoms and prevents relapse. This cure is demonstrated by not only disappearance of clinical signs but also reversal of autoimmunity; elimination of infiltrating T, B, and macrophage cells in the spinal cord; and regeneration of myelin sheath. The reversal of autoimmunity is associated with a marked reduction of autoreactivity of CD4+ T cells and significant increase in the percentage of Foxp3+ Treg among host-type CD4+ T cells in the spleen and lymph nodes. The latter is associated with a marked reduction of the percentage of host-type CD4+CD8+ thymocytes and an increase of Treg percentage among the CD4+CD8+ and CD4+CD8− thymocytes. Thymectomy leads to loss of prevention of EAE relapse by induction of mixed chimerism, although there is a dramatic expansion of host-type Treg cells in the lymph nodes. These results indicate that induction of MHC-mismatched mixed chimerism can restore thymic negative selection of autoreactive CD4+ T cells, augment production of Foxp3+ Treg, and cure EAE. PMID:26647186
Cox Regression Models with Functional Covariates for Survival Data.
Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M
2015-06-01
We extend the Cox proportional hazards model to cases in which the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time-varying coefficients, and missing or unequally spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge and daily measures of disease severity collected in the intensive care unit, among survivors of acute respiratory distress syndrome.
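To make the fitting criterion concrete, here is a toy version of the (unpenalized) Cox partial log-likelihood with a single scalar covariate; the paper's method replaces the scalar term with a penalized functional term. The survival data below are invented, and a crude grid search stands in for the Newton-type maximization normally used.

```python
# Hedged sketch of the Cox partial log-likelihood (one scalar covariate,
# no penalty term). The (time, event, covariate) records are invented.
import math

data = [(2.0, 1, 0.5), (3.0, 1, -0.2), (4.0, 0, 0.1), (5.0, 1, 1.0)]

def cox_log_pl(beta):
    """Partial log-likelihood: sum over events of x_i*b - log(risk-set sum)."""
    ll = 0.0
    for t_i, event, x_i in data:
        if not event:
            continue  # censored observations contribute only via risk sets
        risk = [x for t, _, x in data if t >= t_i]  # still under observation
        ll += beta * x_i - math.log(sum(math.exp(beta * x) for x in risk))
    return ll

# grid search over beta in [-3, 3] stands in for Newton's method
beta_hat = max((b / 100 for b in range(-300, 301)), key=cox_log_pl)
print(beta_hat, cox_log_pl(beta_hat))
```

The partial likelihood is concave in beta, which is why simple maximization suffices; the penalized version adds a smoothness penalty on the functional coefficient before maximizing.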
Solid precipitation measurement intercomparison in Bismarck, North Dakota, from 1988 through 1997
Ryberg, Karen R.; Emerson, Douglas G.; Macek-Rowland, Kathleen M.
2009-01-01
A solid precipitation measurement intercomparison was recommended by the World Meteorological Organization (WMO) and was initiated after approval by the ninth session of the Commission for Instruments and Methods of Observation. The goal of the intercomparison was to assess national methods of measuring solid precipitation against methods whose accuracy and reliability were known. A field study was started in Bismarck, N. Dak., during the 1988-89 winter as part of the intercomparison. The last official field season of the WMO intercomparison was 1992-93; however, the Bismarck site continued to operate through the winter of 1996-97. Precipitation events at Bismarck were categorized as snow, mixed, or rain on the basis of descriptive notes recorded as part of the solid precipitation intercomparison. The rain events were not further analyzed in this study. Catch ratios (CRs) - the ratio of the precipitation catch at each gage to the true precipitation measurement (the corrected double fence intercomparison reference) - were calculated. Then, regression analysis was used to develop equations that model the snow and mixed precipitation CRs at each gage as functions of wind speed and temperature. Wind speed at the gages, functions of temperature, and upper air conditions (wind speed and air temperature at 700 millibars pressure) were used as possible explanatory variables in the multiple regression analysis done for this study. The CRs were modeled by using multiple regression analysis for the Tretyakov gage, national shielded gage, national unshielded gage, AeroChem gage, national gage with double fence, and national gage with Wyoming windshield. As in earlier studies by the WMO, wind speed and air temperature were found to influence the CR of the Tretyakov gage. However, in this study, the temperature variable represented the average upper air temperature over the duration of the event. The WMO did not use upper air conditions in its analysis. 
The national shielded and unshielded gages were found to be influenced by functions of wind speed only, as in other studies, but the upper air wind speed was used as an explanatory variable in this study. The AeroChem gage was not used in the WMO intercomparison study for 1987-93. The AeroChem gage had a highly varied CR at Bismarck, and a number of variables related to wind speed and temperature were used in the model for the CR. Despite extensive efforts to find a model for the national gage with double fence, no statistically significant regression model was found at the 0.05 significance level. The national gage with Wyoming windshield had a CR modeled by temperature and wind speed variables, and the regression relation had the highest coefficient of determination (R2 = 0.572) and adjusted coefficient of multiple determination (R2a = 0.476) of all of the models identified for any gage. Three of the gage CRs evaluated could be compared with those in the WMO intercomparison study for 1987-93. The WMO intercomparison had the advantage of a much larger dataset than this study. However, the data in this study represented a longer time period. Snow precipitation catch is highly varied depending on the equipment used and the weather conditions. Much of the variation is not accounted for in the WMO equations or in the equations developed in this study, particularly for unshielded gages. Extensive attempts at regression analysis were made with the mixed precipitation data, but it was concluded that the sample sizes were not large enough to model the CRs. However, the data could be used to test the WMO intercomparison equations. The mixed precipitation equations for the Tretyakov and national shielded gages are similar to those for snow in that they are more likely to underestimate precipitation when observed amounts were small and overestimate precipitation when observed amounts were relatively large.
Mixed precipitation is underestimated by the WMO adjustment and t
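The basic catch-ratio regression described above can be sketched in a few lines: compute CR = gage catch / reference catch per event, then fit a least-squares line in wind speed. This single-predictor version with invented numbers only illustrates the idea; the study used multiple regression with wind, temperature, and upper-air variables.

```python
# Hedged sketch of a catch-ratio (CR) regression. Event records are
# invented: (gage_catch_mm, reference_catch_mm, wind_speed_m_s).
events = [
    (4.0, 5.0, 2.0), (3.0, 5.0, 5.0), (4.5, 5.0, 1.0), (2.5, 5.0, 7.0),
]
cr = [g / r for g, r, _ in events]     # catch ratio per event
w = [ws for _, _, ws in events]        # wind speed per event

# ordinary least squares for CR = a + b * wind_speed
n = len(cr)
wbar, cbar = sum(w) / n, sum(cr) / n
b = sum((wi - wbar) * (ci - cbar) for wi, ci in zip(w, cr)) / sum(
    (wi - wbar) ** 2 for wi in w
)
a = cbar - b * wbar
print(f"CR = {a:.3f} + ({b:.3f})*wind")  # negative slope: more wind, less catch
```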
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrison, H.; Zuidema, Paquita; Ackerman, Andrew
2011-06-16
An intercomparison of six cloud-resolving and large-eddy simulation models is presented. This case study is based on observations of a persistent mixed-phase boundary layer cloud gathered on 7 May 1998 from the Surface Heat Budget of the Arctic Ocean (SHEBA) and First ISCCP Regional Experiment - Arctic Cloud Experiment (FIRE-ACE). Ice nucleation is constrained in the simulations in a way that holds the ice crystal concentration approximately fixed, with two sets of sensitivity runs in addition to the baseline simulations utilizing different specified ice nucleus (IN) concentrations. All of the baseline and sensitivity simulations group into two distinct quasi-steady states associated with either persistent mixed-phase clouds or all-ice clouds after the first few hours of integration, implying the existence of multiple equilibria. These two states are associated with distinctly different microphysical, thermodynamic, and radiative characteristics. Most but not all of the models produce a persistent mixed-phase cloud qualitatively similar to observations using the baseline IN/crystal concentration, while small increases in the IN/crystal concentration generally lead to rapid glaciation and conversion to the all-ice state. Budget analysis indicates that larger ice deposition rates associated with increased IN/crystal concentrations have a limited direct impact on dissipation of liquid in these simulations. However, the impact of increased ice deposition is greatly enhanced by several interaction pathways that lead to an increased surface precipitation flux, weaker cloud top radiative cooling and cloud dynamics, and reduced vertical mixing, promoting rapid glaciation of the mixed-phase cloud for deposition rates in the cloud layer greater than about 1-2x10^-5 g kg^-1 s^-1.
These results indicate the critical importance of precipitation-radiative-dynamical interactions in simulating cloud phase, which have been neglected in previous fixed-dynamical parcel studies of the cloud phase parameter space. Large sensitivity to the IN/crystal concentration also suggests the need for improved understanding of ice nucleation and its parameterization in models.
Hemispheric Differences in Tropical Lower Stratospheric Transport and Tracers Annual Cycle
NASA Technical Reports Server (NTRS)
Tweedy, Olga; Waugh, D.; Stolarski, R.; Oman, L.
2016-01-01
Transport of long-lived tracers (such as O3, CO, and N2O) in the lower stratosphere largely determines the composition of the entire stratosphere. Stratospheric transport includes the mean residual circulation (with air rising in the tropics and sinking in the polar and middle latitudes), plus two-way isentropic (quasi-horizontal) mixing by eddies. However, the relative importance of the two transport components remains uncertain. Previous studies quantified the relative role of these processes based on tropics-wide average characteristics under the common assumption of well-mixed tropics. However, multiple instruments provide evidence of significant differences in the seasonal cycle of ozone between the Northern (0-20N) and Southern (0-20S) tropical (NT and ST, respectively) lower stratosphere. In this study we investigate these differences in tracer seasonality and quantify the transport processes affecting the tracers' annual cycle amplitude using simulations from the Goddard Earth Observing System Chemistry Climate Model (GEOSCCM) and the Whole Atmosphere Community Climate Model (WACCM), and compare them to observations from the Microwave Limb Sounder (MLS) on the Aura satellite. We detect the observed contrast between the ST and NT in GEOSCCM and WACCM: the annual cycle in ozone and other chemical tracers is larger in the NT than in the ST, but the opposite is true for the annual cycle in vertical advection. Ozone budgets in the models, analyzed in the Transformed Eulerian Mean (TEM) framework, demonstrate a major role for quasi-horizontal mixing and vertical advection in determining the NT-ST ozone distribution and behavior. Analysis of zonal variations in the NT and ST ozone annual cycles further suggests an important role for the North American and Asian summer monsoons (associated with strong isentropic mixing) in the lower stratospheric ozone in the NT. Furthermore, multi-model comparison shows that most CCMs reproduce the observed characteristics of the ozone annual cycle quite well.
Thus, latitudinal variations within the tropics have to be considered in order to understand the balance between upwelling and quasi-horizontal mixing in the tropical lower stratosphere, and the paradigm of well-mixed tropics has to be reconsidered.
Health consequences of racist and antigay discrimination for multiple minority adolescents.
Thoma, Brian C; Huebner, David M
2013-10-01
Individuals who belong to a marginalized group and who perceive discrimination based on that group membership suffer from a variety of poor health outcomes. Many people belong to more than one marginalized group, and much less is known about the influence of multiple forms of discrimination on health outcomes. Drawing on literature describing the influence of multiple stressors, three models of combined forms of discrimination are discussed: additive, prominence, and exacerbation. The current study examined the influence of multiple forms of discrimination in a sample of African American lesbian, gay, or bisexual (LGB) adolescents ages 14-19. Each of the three models of combined stressors were tested to determine which best describes how racist and antigay discrimination combine to predict depressive symptoms, suicidal ideation, and substance use. Participants were included in this analysis if they identified their ethnicity as either African American (n = 156) or African American mixed (n = 120). Mean age was 17.45 years (SD = 1.36). Results revealed both forms of mistreatment were associated with depressive symptoms and suicidal ideation among African American LGB adolescents. Racism was more strongly associated with substance use. Future intervention efforts should be targeted toward reducing discrimination and improving the social context of multiple minority adolescents, and future research with multiple minority individuals should be attuned to the multiple forms of discrimination experienced by these individuals within their environments. PsycINFO Database Record (c) 2013 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Walker, R. E.; Kors, D. L.
1973-01-01
Test data are presented which allow determination of jet penetration and mixing of multiple cold air jets into a ducted subsonic heated mainstream flow. Jet-to-mainstream momentum flux ratios ranged from 6 to 60. Temperature profile data are presented at various duct locations up to 24 orifice diameters downstream of the plane of jet injection. Except for two configurations, all geometries investigated had a single row of constant diameter orifices located transverse to the main flow direction. Orifice size and spacing between orifices were varied. Both of these were found to have a significant effect on jet penetration and mixing. The best mixing of the hot and cold streams was achieved with duct height.
Spectral Unmixing With Multiple Dictionaries
NASA Astrophysics Data System (ADS)
Cohen, Jeremy E.; Gillis, Nicolas
2018-02-01
Spectral unmixing aims at recovering the spectral signatures of materials, called endmembers, mixed in a hyperspectral or multispectral image, along with their abundances. A typical assumption is that the image contains one pure pixel per endmember, in which case spectral unmixing reduces to identifying these pixels. Many fully automated methods have been proposed in recent years, but little work has been done to allow users to select, manually or with a segmentation algorithm, areas where pure pixels are present. Additionally, in a non-blind approach, several spectral libraries may be available rather than a single one, with a fixed number (or an upper or lower bound) of endmembers to choose from each. In this paper, we propose a multiple-dictionary constrained low-rank matrix approximation model that addresses these two problems. We propose an algorithm to compute this model, dubbed M2PALS, and its performance is discussed on both synthetic and real hyperspectral images.
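Under the linear mixing model, abundance estimation for known endmembers is a constrained least-squares problem. The sketch below handles only the simplest case of two endmembers and one pixel; it is not the paper's M2PALS algorithm, and the spectra are invented.

```python
# Hedged sketch of linear-mixing abundance estimation: a pixel is modeled
# as x ~ a*e1 + (1-a)*e2 with 0 <= a <= 1, for two known endmember
# spectra. All spectra here are made up for illustration.
def abundance(x, e1, e2):
    """Least-squares abundance of e1 in pixel x, clamped to [0, 1]."""
    d = [u - v for u, v in zip(e1, e2)]  # direction e1 - e2
    r = [u - v for u, v in zip(x, e2)]   # pixel relative to e2
    a = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)
    return min(max(a, 0.0), 1.0)

e1 = [0.9, 0.7, 0.2, 0.1]       # hypothetical "vegetation" signature
e2 = [0.1, 0.2, 0.6, 0.8]       # hypothetical "soil" signature
pixel = [0.5, 0.45, 0.4, 0.45]  # a 50/50 mix of e1 and e2
print(abundance(pixel, e1, e2))
```

With more endmembers this becomes a nonnegative (often sum-to-one) least-squares problem per pixel, which is the building block that dictionary-constrained methods solve repeatedly.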
Tremblay, Dominique; Prady, Catherine; Bilodeau, Karine; Touati, Nassera; Chouinard, Maud-Christine; Fortin, Martin; Gaboury, Isabelle; Rodrigue, Jean; L'Italien, Marie-France
2017-12-16
Cancer is now viewed as a chronic disease, presenting challenges to follow-up and survivorship care. Models to shift from haphazard, suboptimal and fragmented episodes of care to an integrated cancer care continuum must be developed, tested and implemented. Numerous studies demonstrate improved care when follow-up is assured by both oncology and primary care providers rather than either group alone. However, there is little data on the roles assumed by specialized oncology teams and primary care providers and the extent to which they work together. This study aims to develop, pilot test and measure outcomes of an innovative risk-based coordinated cancer care model for patients transitioning from specialized oncology teams to primary care providers. This multiple case study using a sequential mixed-methods design rests on a theory-driven realist evaluation approach to understand how transitions might be improved. The cases are two health regions in Quebec, Canada, defined by their geographic territory. Each case includes a Cancer Centre and three Family Medicine Groups selected based on differences in their determining characteristics. Qualitative data will be collected from document review (scientific journal, grey literature, local documentation), semi-directed interviews with key informants, and observation of care coordination practices. Qualitative data will be supplemented with a survey to measure the outcome of the coordinated model among providers (scope of practice, collaboration, relational coordination, leadership) and patients diagnosed with breast, colorectal or prostate cancer (access to care, patient-centredness, communication, self-care, survivorship profile, quality of life). Results from descriptive and regression analyses will be triangulated with thematic analysis of qualitative data. 
Qualitative, quantitative, and mixed methods data will be interpreted within and across cases in order to identify context-mechanism associations that explain outcomes. The study will provide empirical data on a risk-based coordinated model of cancer care to guide actions at different levels in the health system. This in-depth multiple case study using a realist approach considers both the need for context-specific intervention research and the imperative to address research gaps regarding coordinated models of cancer care.
ERIC Educational Resources Information Center
Kim, Sooyeon; Walker, Michael E.
2011-01-01
This study examines the use of subpopulation invariance indices to evaluate the appropriateness of using a multiple-choice (MC) item anchor in mixed-format tests, which include both MC and constructed-response (CR) items. Linking functions were derived in the nonequivalent groups with anchor test (NEAT) design using an MC-only anchor set for 4…
The cutoff phenomenon in finite Markov chains.
Diaconis, P
1996-01-01
Natural mixing processes modeled by Markov chains often show a sharp cutoff in their convergence to long-time behavior. This paper presents problems where the cutoff can be proved (card shuffling, the Ehrenfests' urn). It shows that chains with polynomial growth (drunkard's walk) do not show cutoffs. The best general understanding of such cutoffs (high multiplicity of second eigenvalues due to symmetry) is explored. Examples are given where the symmetry is broken but the cutoff phenomenon persists. PMID:11607633
A mixed formulation for interlaminar stresses in dropped-ply laminates
NASA Technical Reports Server (NTRS)
Harrison, Peter N.; Johnson, Eric R.
1993-01-01
A structural model is developed for the linear elastic response of structures consisting of multiple layers of varying thickness such as laminated composites containing internal ply drop-offs. The assumption of generalized plane deformation is used to reduce the solution domain to two dimensions while still allowing some out-of-plane deformation. The Hellinger-Reissner variational principle is applied to a layerwise assumed stress distribution with the resulting governing equations solved using finite differences.
Sulcus reproduction with elastomeric impression materials: a new in vitro testing method.
Finger, Werner J; Kurokawa, Rie; Takahashi, Hidekazu; Komatsu, Masashi
2008-12-01
The aim of this study was to investigate the depth reproduction of differently wide sulci with elastomeric impression materials by single- and double-mix techniques using a tooth and sulcus model, simulating clinical conditions. Impressions with one vinyl polysiloxane (VPS; FLE), two polyethers (PE; IMP and P2), and one hybrid VPS/PE elastomer (FUS) were taken from a truncated steel cone with a circumferential 2 mm deep sulcus, 50, 100 or 200 microm wide. The "root surface" was in steel and the "periodontal tissue" in reversible hydrocolloid. Single-mix impressions were taken with light-body (L) or monophase (M) pastes, double-mix impressions with L as syringe and M or heavy-body (H) as tray materials (n=8). Sulcus reproduction was determined by 3D laser topography of impressions at eight locations, 45 degrees apart. Statistical data analysis was by ANOVA and multiple comparison tests (p<0.05). For 200 microm wide sulci, significant differences were found between impression materials only: FLE=IMP>FUS=P2. At 50 and 100 microm width, significant differences were found between materials (IMP>FUS=FLE>P2) and techniques (L+H=L+M>M>L). The sulcus model is considered useful for screening evaluation of elastomeric impression materials' ability to reproduce narrow sulci. All tested materials and techniques reproduced 200 microm wide sulci to almost nominal depth. Irrespective of the impression technique used, IMP showed the best penetration ability in 50 and 100 microm sulci. Double-mix techniques are more suitable for reproducing narrow sulci than single-mix techniques.
Computational parametric study of a Richtmyer-Meshkov instability for an inclined interface.
McFarland, Jacob A; Greenough, Jeffrey A; Ranjan, Devesh
2011-08-01
A computational study of the Richtmyer-Meshkov instability for an inclined interface is presented. The study covers experiments to be performed in the Texas A&M University inclined shock tube facility. Incident shock wave Mach numbers from 1.2 to 2.5, inclination angles from 30° to 60°, and gas pair Atwood numbers of ∼0.67 and ∼0.95 are used in this parametric study containing 15 unique combinations of these parameters. Qualitative results are examined through a time series of density plots for multiple combinations of these parameters, and the qualitative effects of each of the parameters are discussed. Pressure, density, and vorticity fields are presented in animations available online to supplement the discussion of the qualitative results. These density plots show the evolution of two main regions in the flow field: a mixing region containing driver and test gas that is dominated by large vortical structures, and a more homogeneous region of unmixed fluid which can separate away from the mixing region in some cases. The interface mixing width is determined for various combinations of the parameters listed at the beginning of the Abstract. A scaling method for the mixing width is proposed using the interface geometry and wave velocities calculated using one-dimensional gas dynamic equations. This model uses the transmitted wave velocity for the characteristic velocity and an initial offset time based on the travel time of strong reflected waves. It is compared to an adapted Richtmyer impulsive model scaling and shown to scale the initial mixing width growth rate more effectively for fixed Atwood number.
Pre-natal exposures to cocaine and alcohol and physical growth patterns to age 8 years
Lumeng, Julie C.; Cabral, Howard J.; Gannon, Katherine; Heeren, Timothy; Frank, Deborah A.
2007-01-01
Two hundred and two primarily African American/Caribbean children (classified by maternal report and infant meconium as 38 heavier, 74 lighter and 89 not cocaine-exposed) were measured repeatedly from birth to age 8 years to assess whether there is an independent effect of prenatal cocaine exposure on physical growth patterns. Children with fetal alcohol syndrome identifiable at birth were excluded. At birth, cocaine and alcohol exposures were significantly and independently associated with lower weight, length and head circumference in cross-sectional multiple regression analyses. The relationship over time of pre-natal exposures to weight, height, and head circumference was then examined by multiple linear regression using mixed linear models including covariates: child’s gestational age, gender, ethnicity, age at assessment, current caregiver, birth mother’s use of alcohol, marijuana and tobacco during the pregnancy and pre-pregnancy weight (for child’s weight) and height (for child’s height and head circumference). The cocaine effects did not persist beyond infancy in piecewise linear mixed models, but a significant and independent negative effect of pre-natal alcohol exposure persisted for weight, height, and head circumference. Catch-up growth in cocaine-exposed infants occurred primarily by 6 months of age for all growth parameters, with some small fluctuations in growth rates in the preschool age range but no detectable differences between heavier versus unexposed nor lighter versus unexposed thereafter. PMID:17412558
Constraints on atmospheric structure and helium abundance of Saturn from Cassini/UVIS and CIRS
NASA Astrophysics Data System (ADS)
Koskinen, Tommi; Guerlet, Sandrine
2017-10-01
We combine results from stellar occultations observed by Cassini/UVIS and infrared emissions observed by Cassini/CIRS to create empirical models of atmospheric structure on Saturn corresponding to the locations probed by the UVIS stellar occultations. These models span multiple occultation locations at different latitudes from 2005 to the end of 2015. In summary, we connect the temperature-pressure profiles retrieved from the CIRS data to the temperature-pressure profiles in the thermosphere retrieved from the occultations. A corresponding altitude scale is calculated and matched to the altitude scale of the density profiles that are retrieved directly from the occultations. In addition to the temperature structure, our ability to match the altitudes in the occultation light curves depends on the mean molecular weight of the atmosphere. We use the UVIS occultations to constrain the abundance of methane near the homopause, allowing us to constrain the eddy mixing rate of the atmosphere. In addition, our preliminary results are consistent with a mixing ratio of about 11% for helium in the lower atmosphere. Our results provide an important reference for future models of Saturn’s upper atmosphere.
Key factors controlling ozone production in wildfire plumes
NASA Astrophysics Data System (ADS)
Jaffe, D. A.
2017-12-01
Production of ozone in wildfire plumes is complex and highly variable. As a wildfire plume mixes into an urban area, ozone is often, but not always, produced. We have examined multiple factors that can help explain some of this variability, including CO/NOy enhancement ratios, photolysis rates, the PAN/NOy fraction, and the degree of NOx oxidation. While fast ozone production is well known, on average ozone production increases downwind in a plume for several days. Peroxyacetyl nitrate (PAN) is likely a key cause of delayed ozone formation. Recent observations at the Mt. Bachelor Observatory, a mountaintop observatory relatively remote from anthropogenic influence, and in Boise, Idaho, an urban setting, show the importance of PAN in wildfire plumes. From these observations we can devise a conceptual model that considers four factors in ozone production: the NOx/VOC emission ratio; the degree of NOx oxidation; transport time and pathway; and mixing with urban pollutants. Using this conceptual model, we can then devise a Lagrangian modeling strategy to improve our understanding of ozone production in wildfire plumes, both in remote and urban settings.
Kronholm, Scott C.; Capel, Paul D.
2016-01-01
Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that were reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
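For context, the classic two-end-member mixing model that TRaMM generalizes can be written in a few lines; TRaMM's contribution is making the fastflow end-member concentration time-variable via the ratio of two tracers, rather than fixing it subjectively. All concentrations and flows below are invented.

```python
# Hedged sketch of two-end-member hydrograph separation by tracer mass
# balance: Q*C = Qs*Cs + Qf*Cf with Q = Qs + Qf. End-member
# concentrations here are fixed, invented values.
def separate(q_total, c_stream, c_slow, c_fast):
    """Return (q_slow, q_fast) from the two mass-balance equations."""
    frac_fast = (c_stream - c_slow) / (c_fast - c_slow)
    frac_fast = min(max(frac_fast, 0.0), 1.0)  # clamp to physical range
    q_fast = frac_fast * q_total
    return q_total - q_fast, q_fast

# e.g. a nitrate-like tracer: slowflow end-member 4.0 mg/L, fastflow 1.0 mg/L
q_slow, q_fast = separate(q_total=10.0, c_stream=2.5, c_slow=4.0, c_fast=1.0)
print(q_slow, q_fast)  # 5.0 5.0
```

Applied at every time step of a high-frequency record, this splits the hydrograph; the subjectivity of the fixed end-member values is exactly what the two-tracer ratio approach is designed to remove.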
Haleem, Kirolos
2016-10-01
Private highway-railroad grade crossings (HRGCs) are intersections of highways and railroads on roadways that are not maintained by a public authority. Because no public authority maintains private HRGCs, fatal and injury crashes at these locations are of concern. However, no study has been conducted at private HRGCs to identify the safety issues that might exist and how to alleviate them. This study identifies the significant predictors of traffic casualties (including both injuries and fatalities) at private HRGCs in the U.S. using six years of nationwide crashes from 2009 to 2014. Two levels of injury severity were considered: injury (including fatalities and injuries) and no injury. The study investigates multiple predictors, including temporal crash characteristics and geometric, railroad, traffic, vehicle, and environmental factors. The study applies both the mixed logit and binary logit models. The mixed logit model was found to outperform the binary logit model. The mixed logit model revealed that drivers who did not stop, railroad equipment that struck highway users, higher train speeds, absence of advance warning signs, concrete road surface type, and cloudy weather were associated with an increase in injuries and fatalities. For example, a one-mile-per-hour higher train speed increases the probability of fatality by 22%. By contrast, male drivers, PM peak periods, and the presence of warning devices at both approaches were associated with a fatality reduction. Potential strategies are recommended to alleviate injuries and fatalities at private HRGCs. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Matichuk, Rebecca; Tonnesen, Gail; Luecken, Deborah; Gilliam, Rob; Napelenok, Sergey L.; Baker, Kirk R.; Schwede, Donna; Murphy, Ben; Helmig, Detlev; Lyman, Seth N.; Roselle, Shawn
2017-12-01
The Weather Research and Forecasting (WRF) and Community Multiscale Air Quality (CMAQ) models were used to simulate a 10 day high-ozone episode observed during the 2013 Uinta Basin Winter Ozone Study (UBWOS). The baseline model had a large negative bias when compared to ozone (O3) and volatile organic compound (VOC) measurements across the basin. Contrary to other wintertime Uinta Basin studies, predicted nitrogen oxides (NOx) were typically low compared to measurements. Increasing the oil and gas VOC emissions brought O3 predictions closer to observations, and nighttime O3 improved when the deposition velocity was reduced for all chemical species. Vertical structures of these pollutants were similar to observations on multiple days. However, the predicted surface-layer VOC mixing ratios were generally underestimated during the day and overestimated at night. While temperature profiles compared well to observations, WRF was found to have a warm temperature bias and nighttime mixing heights that were too low. Using a more realistic snow heat capacity in WRF to account for the warm bias and vertical mixing improved the temperature profiles, although the improved temperature profiles seldom improved the O3 profiles. While additional work is needed to investigate meteorological impacts, the results suggest that uncertainty in the oil and gas emissions contributes more to the underestimation of O3. Further, model adjustments based on a single site may not be suitable across all sites within the basin.
Distinguishability of generic quantum states
NASA Astrophysics Data System (ADS)
Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol
2016-06-01
Properties of random mixed states of dimension N distributed uniformly with respect to the Hilbert-Schmidt measure are investigated. We show that for large N, due to the concentration of measure, the trace distance between two random states tends to a fixed number D̃ = 1/4 + 1/π, which yields the Helstrom bound on their distinguishability. To arrive at this result, we apply free random calculus and derive the symmetrized Marchenko-Pastur distribution, which is shown to describe numerical data for the model of coupled quantum kicked tops. The asymptotic value for the root fidelity between two random states, √F = 3/4, can serve as a universal reference value for further theoretical and experimental studies. Analogous results for the quantum relative entropy and the Chernoff quantity provide other bounds on the distinguishability of both states in a multiple-measurement setup due to the quantum Sanov theorem. We also study the mean entropy of coherence of random pure and mixed states and the entanglement of a generic mixed state of a bipartite system.
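The asymptotic trace distance D̃ = 1/4 + 1/π ≈ 0.568 is easy to check numerically. The sketch below, an illustration not taken from the paper, samples pairs of Hilbert-Schmidt-random density matrices and compares the mean trace distance to that value.

```python
import numpy as np

# Numerical check of the asymptotic trace distance between two random
# mixed states drawn from the Hilbert-Schmidt measure (illustrative sketch).

def hs_random_state(n, rng):
    """Density matrix distributed according to the Hilbert-Schmidt measure."""
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = g @ g.conj().T                  # positive semidefinite
    return rho / np.trace(rho).real      # unit trace

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) * trace norm of (rho - sigma)."""
    return 0.5 * np.linalg.svd(rho - sigma, compute_uv=False).sum()

rng = np.random.default_rng(0)
n = 100
d = np.mean([trace_distance(hs_random_state(n, rng), hs_random_state(n, rng))
             for _ in range(20)])
print(d, 0.25 + 1 / np.pi)  # both close to 0.568 for large n
```

Because of the concentration of measure the abstract describes, even a handful of samples at n = 100 lands close to the limiting value.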
Effectiveness of purging on preventing gas emission buildup in wood pellet storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yazdanpanah, Fahimeh; Sokhansanj, Shahab; Lim, Choon Jim
Storage of wood pellets has resulted in deadly accidents in connection with off-gassing and self-heating. A forced ventilation system should be in place to sweep the off-gases and control the thermal conditions. In this study, multiple purging tests were conducted in a pilot-scale silo to evaluate the effectiveness of a purging system and to quantify the time and volume of gas needed to sweep the off-gases. To identify the degree of mixing, the residence time distribution of the tracer gas was also studied experimentally. Large deviations from plug flow suggested strong gas mixing for all superficial velocities. As the velocity increased, the system dispersion number became smaller, which indicated a lesser degree of mixing with increased volume of the purging gas. Finally, one-dimensional modelling and numerical simulation of the off-gas concentration profile gave the best agreement with the measured gas concentration at the bottom and middle of the silo.
Multi-dimensional computer simulation of MHD combustor hydrodynamics
NASA Astrophysics Data System (ADS)
Berry, G. F.; Chang, S. L.; Lottes, S. A.; Rimkus, W. A.
1991-04-01
Argonne National Laboratory is investigating the nonreacting jet gas mixing patterns in an MHD second-stage combustor by using a 2-D multiphase hydrodynamics computer program and a 3-D single-phase hydrodynamics computer program. The computer simulations are intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may lead to improvement of the downstream MHD channel performance. A 2-D steady-state computer model, based on mass and momentum conservation laws for multiple gas species, is used to simulate the hydrodynamics of the combustor, in which a jet of oxidizer is injected into an unconfined cross-stream gas flow. A 3-D code is used to examine the effects of the side walls and the distributed jet flows on the nonreacting jet gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy and a transport equation for a turbulence parameter, and it allows permeable surfaces to be specified for any computational cell.
Covariate Selection for Multilevel Models with Missing Data
Marino, Miguel; Buxton, Orfeu M.; Li, Yi
2017-01-01
Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise selection, which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that performs covariate selection across multiply-imputed data for multilevel random effects models when missing data are present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population. PMID:28239457
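The core idea of group-lasso selection across imputations can be sketched with proximal gradient descent. This is an illustrative sketch, not the paper's implementation: here each predictor's coefficients across the M imputed data sets form one group (in the spirit of MI-lasso), so a predictor is selected in all imputations or in none; the paper's stacked formulation differs in its details, and all data below are synthetic.

```python
import numpy as np

def group_lasso_mi(X_list, y_list, lam, n_iter=500, lr=None):
    """Proximal-gradient group lasso across M imputed data sets.

    B has shape (p, M): column m holds the coefficients for imputation m.
    Row j (one predictor across all imputations) is one penalty group.
    """
    M, p = len(X_list), X_list[0].shape[1]
    B = np.zeros((p, M))
    if lr is None:  # step size from the largest per-imputation Lipschitz constant
        lr = 1.0 / max(np.linalg.norm(X, 2) ** 2 / len(y)
                       for X, y in zip(X_list, y_list))
    for _ in range(n_iter):
        # gradient of the least-squares loss, one column per imputation
        G = np.column_stack([X.T @ (X @ B[:, m] - y) / len(y)
                             for m, (X, y) in enumerate(zip(X_list, y_list))])
        B -= lr * G
        # group soft-threshold: shrink each predictor's row toward zero
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        B *= np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))
    return B

# Toy data: two "imputations" of n=200 samples, p=5 predictors; only
# predictor 0 truly affects the outcome.
rng = np.random.default_rng(1)
X_list, y_list = [], []
for _ in range(2):
    X = rng.standard_normal((200, 5))
    y = 2.0 * X[:, 0] + 0.5 * rng.standard_normal(200)
    X_list.append(X); y_list.append(y)
B = group_lasso_mi(X_list, y_list, lam=0.3)
print(np.linalg.norm(B, axis=1).round(2))  # only group 0 is clearly nonzero
```

The group penalty is what gives a single yes/no selection decision per predictor rather than a different model in every imputed data set.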
Pedagogical Strategies Used by Selected Leading Mixed Methodologists in Mixed Research Courses
ERIC Educational Resources Information Center
Frels, Rebecca K.; Onwuegbuzie, Anthony J.; Leech, Nancy L.; Collins, Kathleen M. T.
2014-01-01
The teaching of research methods is common across multiple fields in the social and educational sciences for establishing evidence-based practices and furthering the knowledge base through scholarship. Yet, specific to mixed methods, scant information exists as to how to approach teaching complex concepts for meaningful learning experiences. Thus,…
Biofilm development and enhanced stress resistance of a model, mixed-species community biofilm.
Lee, Kai Wei Kelvin; Periasamy, Saravanan; Mukherjee, Manisha; Xie, Chao; Kjelleberg, Staffan; Rice, Scott A
2014-04-01
Most studies of biofilm biology have taken a reductionist approach, where single-species biofilms have been extensively investigated. However, biofilms in nature mostly comprise multiple species, where interspecies interactions can shape the development, structure and function of these communities differently from biofilm populations. Hence, a reproducible mixed-species biofilm comprising Pseudomonas aeruginosa, Pseudomonas protegens and Klebsiella pneumoniae was adapted to study how interspecies interactions affect biofilm development, structure and stress responses. Each species was fluorescently tagged to determine its abundance and spatial localization within the biofilm. The mixed-species biofilm exhibited distinct structures that were not observed in comparable single-species biofilms. In addition, development of the mixed-species biofilm was delayed 1-2 days compared with the single-species biofilms. Composition and spatial organization of the mixed-species biofilm also changed along the flow cell channel, where nutrient conditions and growth rate of each species could have a part in community assembly. Intriguingly, the mixed-species biofilm was more resistant to the antimicrobials sodium dodecyl sulfate and tobramycin than the single-species biofilms. Crucially, such community level resilience was found to be a protection offered by the resistant species to the whole community rather than selection for the resistant species. In contrast, community-level resilience was not observed for mixed-species planktonic cultures. These findings suggest that community-level interactions, such as sharing of public goods, are unique to the structured biofilm community, where the members are closely associated with each other.
Quiñones, Rebecca M.; Holyoak, Marcel; Johnson, Michael L.; Moyle, Peter B.
2014-01-01
Understanding factors influencing survival of Pacific salmonids (Oncorhynchus spp.) is essential to species conservation, because drivers of mortality can vary over multiple spatial and temporal scales. Although recent studies have evaluated the effects of climate, habitat quality, or resource management (e.g., hatchery operations) on salmonid recruitment and survival, a failure to look at multiple factors simultaneously leaves open questions about the relative importance of different factors. We analyzed the relationship between ten factors and survival (1980–2007) of four populations of salmonids with distinct life histories from two adjacent watersheds (Salmon and Scott rivers) in the Klamath River basin, California. The factors were ocean abundance, ocean harvest, hatchery releases, hatchery returns, Pacific Decadal Oscillation, North Pacific Gyre Oscillation, El Niño Southern Oscillation, snow depth, flow, and watershed disturbance. Permutation tests and linear mixed-effects models tested effects of factors on survival of each taxon. Potential factors affecting survival differed among taxa and between locations. Fall Chinook salmon O. tshawytscha survival trends appeared to be driven partially or entirely by hatchery practices. Trends in three taxa (Salmon River spring Chinook salmon, Scott River fall Chinook salmon; Salmon River summer steelhead trout O. mykiss) were also likely driven by factors subject to climatic forcing (ocean abundance, summer flow). Our findings underscore the importance of multiple factors in simultaneously driving population trends in widespread species such as anadromous salmonids. They also show that the suite of factors may differ among different taxa in the same location as well as among populations of the same taxa in different watersheds. In the Klamath basin, hatchery practices need to be reevaluated to protect wild salmonids. PMID:24866173
Vaughn, Justin N.; Nelson, Randall L.; Song, Qijian; Cregan, Perry B.; Li, Zenglu
2014-01-01
Soybean oil and meal are major contributors to worldwide food production. Consequently, the genetic basis for soybean seed composition has been intensely studied using family-based mapping. Population-based mapping approaches, in the form of genome-wide association (GWA) scans, have been able to resolve quantitative trait loci (QTL) controlling moderately complex traits in numerous crop species. Yet, it is still unclear how soybean's unique population history will affect GWA scans. Using one of the populations in this study, we simulated phenotypes resulting from a range of genetic architectures. We found that with a heritability of 0.5, ∼100% and ∼33% of the 4 and 20 simulated QTL can be recovered, respectively, with a false-positive rate of less than ∼6 × 10⁻⁵ per marker tested. Additionally, we demonstrated that combining information from multi-locus mixed models and compressed linear mixed models improves QTL identification and interpretation. We applied these insights to exploring seed composition in soybean, refining the linkage group I (chromosome 20) protein QTL and identifying additional oil QTL that may allow some decoupling of the highly correlated oil and protein phenotypes. Because the value of protein meal is closely related to its essential amino acid profile, we attempted to identify QTL underlying methionine, threonine, cysteine, and lysine content. Multiple QTL were found that have not been observed in family-based mapping studies, and each trait exhibited associations across multiple populations. Chromosomes 1 and 8 contain strong candidate alleles for essential amino acid increases. Overall, we present these and additional data that will be useful in determining breeding strategies for the continued improvement of soybean's nutrient portfolio. PMID:25246241
Divided spatial attention and feature-mixing errors.
Golomb, Julie D
2015-11-01
Spatial attention is thought to play a critical role in feature binding. However, often multiple objects or locations are of interest in our environment, and we need to shift or split attention between them. Recent evidence has demonstrated that shifting and splitting spatial attention results in different types of feature-binding errors. In particular, when two locations are simultaneously sharing attentional resources, subjects are susceptible to feature-mixing errors; that is, they tend to report a color that is a subtle blend of the target color and the color at the other attended location. The present study was designed to test whether these feature-mixing errors are influenced by target-distractor similarity. Subjects were cued to split attention across two different spatial locations, and were subsequently presented with an array of colored stimuli, followed by a postcue indicating which color to report. Target-distractor similarity was manipulated by varying the distance in color space between the two attended stimuli. Probabilistic modeling in all cases revealed shifts in the response distribution consistent with feature-mixing errors; however, the patterns differed considerably across target-distractor color distances. With large differences in color, the findings replicated the mixing result, but with small color differences, repulsion was instead observed, with the reported target color shifted away from the other attended color.
Bayesian mixture analysis for metagenomic community profiling.
Morfopoulou, Sofia; Plagnol, Vincent
2015-09-15
Deep sequencing of clinical samples is now an established tool for the detection of infectious pathogens, with direct medical applications. The large amount of data generated produces an opportunity to detect species even at very low levels, provided that computational tools can effectively profile the relevant metagenomic communities. Data interpretation is complicated by the fact that short sequencing reads can match multiple organisms and by the lack of completeness of existing databases, in particular for viral pathogens. Here we present metaMix, a Bayesian mixture model framework for resolving complex metagenomic mixtures. We show that the use of parallel Markov chain Monte Carlo chains for the exploration of the species space enables the identification of the set of species most likely to contribute to the mixture. We demonstrate the greater accuracy of metaMix compared with relevant methods, particularly for profiling complex communities consisting of several related species. We designed metaMix specifically for the analysis of deep transcriptome sequencing datasets, with a focus on viral pathogen detection; however, the principles are generally applicable to all types of metagenomic mixtures. metaMix is implemented as a user-friendly R package, freely available on CRAN: http://cran.r-project.org/web/packages/metaMix. Contact: sofia.morfopoulou.10@ucl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Winkelmann, Stefanie; Schütte, Christof
2017-09-21
Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics that do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small-volume approximations.
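The exact reference point for all the hybrid approximations discussed is direct simulation of the Markov jump process (the Gillespie stochastic simulation algorithm). The sketch below simulates a toy self-repressing gene; the feedback form and all rate constants are illustrative assumptions, not the paper's model.

```python
import random

# Gillespie stochastic simulation of a toy gene with negative self-regulation:
# protein P is produced at rate k / (1 + P/K) and degraded at rate gamma * P.
# (Illustrative sketch; rates and the feedback form are assumptions.)

def gillespie_selfrepression(k=50.0, K=10.0, gamma=1.0, t_end=50.0, seed=0):
    """Return the protein copy number at time t_end."""
    rng = random.Random(seed)
    t, p = 0.0, 0
    while True:
        a_prod = k / (1.0 + p / K)   # production, repressed by P itself
        a_deg = gamma * p            # first-order degradation
        a_tot = a_prod + a_deg
        t += rng.expovariate(a_tot)  # exponential waiting time to next event
        if t > t_end:
            return p
        if rng.random() * a_tot < a_prod:
            p += 1                   # production event
        else:
            p -= 1                   # degradation event

# The deterministic rate equation's fixed point solves k/(1 + P/K) = gamma*P,
# about P = 18 for these rates; stochastic copy numbers fluctuate around it.
print(gillespie_selfrepression())
```

Hybrid methods of the kind the paper surveys would keep a low-copy species like this discrete while treating abundant species with Langevin or deterministic dynamics.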
Group Prenatal Care: A Financial Perspective.
Rowley, Rebecca A; Phillips, Lindsay E; O'Dell, Lisa; Husseini, Racha El; Carpino, Sarah; Hartman, Scott
2016-01-01
Multiple studies have demonstrated improved perinatal outcomes for group prenatal care (GPC) when compared to traditional prenatal care. Benefits of GPC include lower rates of prematurity and low birth weight, fewer cesarean deliveries, improved breastfeeding outcomes and improved maternal satisfaction with care. However, the outpatient financial costs of running a GPC program are not well established. This study involved the creation of a financial model that forecasted costs and revenues for prenatal care groups with various numbers of participants based on numerous variables, including patient population, payor mix, patient show rates, staffing mix, supply usage and overhead costs. The model was developed for use in an urban underserved practice. Adjusted revenue per pregnancy in this model was found to be $989.93 for traditional care and $1080.69 for GPC. Cost neutrality for GPC was achieved when each group enrolled an average of 10.652 women with an enriched staffing model or 4.801 women when groups were staffed by a single nurse and single clinician. Mathematical cost-benefit modeling in an urban underserved practice demonstrated that GPC can be not only financially sustainable but possibly a net income generator for the outpatient clinic. Use of this model could offer maternity care practices an important tool for demonstrating the financial practicality of GPC.
Multiplicative mixing of object identity and image attributes in single inferior temporal neurons.
Ratan Murty, N Apurva; Arun, S P
2018-04-03
Object recognition is challenging because the same object can produce vastly different images, mixing signals related to its identity with signals due to its image attributes, such as size, position, and rotation. Previous studies have shown that both signals are present in high-level visual areas, but precisely how they are combined has remained unclear. One possibility is that neurons might encode identity and attribute signals multiplicatively so that each can be efficiently decoded without interference from the other. Here, we show that, in high-level visual cortex, responses of single neurons can be explained better as a product rather than a sum of tuning for object identity and tuning for image attributes. This subtle effect in single neurons produced substantially better population decoding of object identity and image attributes in the neural population as a whole. This property was absent both in low-level vision models and in deep neural networks. It was also unique to invariances: when tested with two-part objects, neural responses were explained better as a sum than as a product of part tuning. Taken together, our results indicate that signals requiring separate decoding, such as object identity and image attributes, are combined multiplicatively in IT neurons, whereas signals that require integration (such as parts in an object) are combined additively. Copyright © 2018 the Author(s). Published by PNAS.
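The sum-versus-product comparison can be sketched on a simulated response matrix. This is an illustrative sketch with synthetic data, not the paper's fitting procedure: a neuron's responses over identities × attributes are fit either as an outer product (multiplicative tuning, via the best rank-1 approximation) or as a sum of marginal tunings (additive), and the fits are compared by R².

```python
import numpy as np

# Compare additive vs multiplicative tuning models on synthetic responses.
rng = np.random.default_rng(0)
f = rng.uniform(0.5, 2.0, size=8)        # identity tuning (hypothetical)
g = rng.uniform(0.5, 2.0, size=6)        # attribute tuning, e.g., size
R = np.outer(f, g) + 0.05 * rng.standard_normal((8, 6))  # product + noise

def r2(pred, R):
    """Fraction of variance explained relative to the grand mean."""
    return 1 - np.sum((R - pred) ** 2) / np.sum((R - R.mean()) ** 2)

# Additive model r_ij ~ u_i + v_j; its least-squares fit is
# row mean + column mean - grand mean.
u = R.mean(axis=1, keepdims=True) - R.mean() / 2
v = R.mean(axis=0, keepdims=True) - R.mean() / 2
additive = u + v

# Multiplicative model: best rank-1 approximation via the SVD.
U, s, Vt = np.linalg.svd(R)
product = s[0] * np.outer(U[:, 0], Vt[0])

print(r2(product, R), r2(additive, R))  # product fit wins for these data
```

For data generated multiplicatively, the rank-1 fit leaves only the noise, while the additive fit also leaves the identity-by-attribute interaction unexplained; the paper reports the analogous comparison on recorded IT responses.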
Webster, R J; Williams, A; Marchetti, F; Yauk, C L
2018-07-01
Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been underpowered, and no human germ cell mutagen has been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and mixed effect models sampling between two and four siblings per family). Assumptions were made based on parameters from the existing literature, such as the mutation-by-paternal-age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal-age effect. Statistical power was improved by models accounting for the family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4-28 four-sibling families per treatment group when the increase in mutations ranges from 40% to 10%, respectively. Modeling family variability using mixed effect models provided a reduction in sample size compared to a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
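The simulation-based logic of such a power analysis can be sketched for the simplest case, one-child families with a constant exposure effect. All parameter values below (baseline count, paternal-age slope, effect sizes) are illustrative assumptions, not the paper's, and a plain regression z-test stands in for the ANOVA/mixed-model machinery.

```python
import numpy as np

# Simulation-based power sketch: de novo mutation counts are Poisson with a
# paternal-age slope plus a constant exposure effect; power is the fraction
# of simulations in which the exposure coefficient is significant.

def power(n_per_group, effect, n_sim=500, base=45.0, age_slope=1.5, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        age = rng.uniform(20, 45, size=2 * n_per_group)       # paternal ages
        exposed = np.repeat([0.0, 1.0], n_per_group)          # treatment group
        mu = base + age_slope * (age - 20) + effect * exposed
        y = rng.poisson(mu)                                   # mutation counts
        # OLS of counts on [1, age, exposed]; z-test on the exposure term
        X = np.column_stack([np.ones_like(age), age, exposed])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (len(y) - 3)
        cov = s2 * np.linalg.inv(X.T @ X)
        z = beta[2] / np.sqrt(cov[2, 2])
        hits += abs(z) > 1.96
    return hits / n_sim

print(power(n_per_group=40, effect=10.0))  # large effect -> high power
print(power(n_per_group=40, effect=0.0))   # null -> roughly alpha = 0.05
```

The paper's key refinement, inter-family variability of the paternal-age slope, would enter here as a random slope per family, which is what makes the mixed-model analysis more efficient for multi-sibling designs.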
Hedden, Sarra L; Woolson, Robert F; Carter, Rickey E; Palesch, Yuko; Upadhyaya, Himanshu P; Malcolm, Robert J
2009-07-01
"Loss to follow-up" can be substantial in substance abuse clinical trials. When extensive losses to follow-up occur, one must cautiously analyze and interpret the findings of a research study. The aims of this project were to introduce the types of missing data mechanisms and to describe several methods for analyzing data with loss to follow-up. Furthermore, a simulation study compared the Type I error and power of several methods as the amount and mechanism of missing data varied. The methods compared were last observation carried forward (LOCF), multiple imputation (MI), modified stratified summary statistics (SSS), and mixed effects models. Results demonstrated nominal Type I error for all methods; power was high for all methods except LOCF. Mixed effects models, modified SSS, and MI are generally recommended; however, many methods require that the data be missing at random or missing completely at random (i.e., "ignorable"). If the missing data are presumed to be nonignorable, a sensitivity analysis is recommended.
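LOCF, the method the simulations found to lose power, is simple enough to show in full. The sketch below is illustrative: it is shown to make the comparison concrete, not as a recommendation, since the abstract favors mixed models, modified SSS, and MI.

```python
# Last observation carried forward (LOCF): fill each subject's missing
# visits with the most recent observed value. Shown for illustration only;
# the abstract's simulations found LOCF underpowered relative to MI and
# mixed effects models.

def locf(values):
    """Replace None entries with the last observed value (None if none yet)."""
    out, last = [], None
    for v in values:
        if v is None:
            v = last      # carry the last observation forward
        else:
            last = v      # update the running last observation
        out.append(v)
    return out

# One subject's outcome over five visits, with dropout after visit 2:
print(locf([3.0, 2.5, None, None, 1.8]))  # [3.0, 2.5, 2.5, 2.5, 1.8]
```

The power loss comes from exactly this behavior: a subject who drops out is treated as if their outcome froze, flattening any real trend over time.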
Brown, Andrew; Shi, Qi; Moore, Terry W.; Yoon, Younghyoun; Prussia, Andrew; Maddox, Clinton; Liotta, Dennis C.; Shim, Hyunsuk; Snyder, James P.
2014-01-01
Curcumin is a biologically active component of curry powder. A structurally related class of mimetics possesses similar anti-inflammatory and anticancer properties. Mechanism has been examined by exploring kinase inhibition trends. In a screen of 50 kinases relevant to many forms of cancer, one member of the series (4, EF31) showed ≥85% inhibition for ten of the enzymes at 5 μM, while twenty-two of the proteins were blocked at ≥40%. IC50 values for an expanded set of curcumin analogs established a rank order of potencies, and analyses of IKKβ and AKT2 enzyme kinetics for 4 revealed a mixed inhibition model, with ATP competition dominating. Our curcumin mimetics are generally selective for Ser/Thr kinases. Both selectivity and potency trends are compatible with protein sequence comparisons, while modeled kinase binding site geometries deliver a reasonable correlation with mixed inhibition. Overall, these analogs are shown to be pleiotropic inhibitors that operate at multiple points along cell signaling pathways. PMID:23550937
Brown, Andrew; Shi, Qi; Moore, Terry W; Yoon, Younghyoun; Prussia, Andrew; Maddox, Clinton; Liotta, Dennis C; Shim, Hyunsuk; Snyder, James P
2013-05-09
Curcumin is a biologically active component of curry powder. A structurally related class of mimetics possesses similar anti-inflammatory and anticancer properties. Mechanism has been examined by exploring kinase inhibition trends. In a screen of 50 kinases relevant to many forms of cancer, one member of the series (4, EF31) showed ≥85% inhibition for 10 of the enzymes at 5 μM, while 22 of the proteins were blocked at ≥40%. IC50 values for an expanded set of curcumin analogues established a rank order of potencies, and analyses of IKKβ and AKT2 enzyme kinetics for 4 revealed a mixed inhibition model, ATP competition dominating. Our curcumin mimetics are generally selective for Ser/Thr kinases. Both selectivity and potency trends are compatible with protein sequence comparisons, while modeled kinase binding site geometries deliver a reasonable correlation with mixed inhibition. Overall, these analogues are shown to be pleiotropic inhibitors that operate at multiple points along cell signaling pathways.
Evaluating diagnosis-based case-mix measures: how well do they apply to the VA population?
Rosen, A K; Loveland, S; Anderson, J J; Rothendler, J A; Hankin, C S; Rakovski, C C; Moskowitz, M A; Berlowitz, D R
2001-07-01
Diagnosis-based case-mix measures are increasingly used for provider profiling, resource allocation, and capitation rate setting. Measures developed in one setting may not adequately capture the disease burden in other settings. We examined the feasibility of adapting two such measures, Adjusted Clinical Groups (ACGs) and Diagnostic Cost Groups (DCGs), to the Department of Veterans Affairs (VA) population. A 60% random sample of veterans who used health care services during FY 1997 was obtained from VA inpatient and outpatient administrative databases. A split-sample technique was used to obtain a 40% sample (n = 1,046,803) for development and a 20% sample (n = 524,461) for validation. Concurrent ACG and DCG risk adjustment models, using 1997 diagnoses and demographics to predict FY 1997 utilization (ambulatory provider encounters, and service days, i.e., the sum of a patient's inpatient and outpatient visit days), were fitted and cross-validated. Patients were classified into groupings that indicated a population with multiple psychiatric and medical diseases. Model R-squared values explained between 6% and 32% of the variation in service utilization. Although reparameterized models did better in predicting utilization than models with external weights, none of the models was adequate in characterizing the entire population. For predicting service days, DCGs were superior to ACGs in most categories, whereas ACGs did better at discriminating among veterans who had the lowest utilization. Although "off-the-shelf" case-mix measures perform moderately well when applied to another setting, modifications may be required to accurately characterize a population's disease burden with respect to the resource needs of all patients.
Probing coherence in microcavity frequency combs via optical pulse shaping
NASA Astrophysics Data System (ADS)
Ferdous, Fahmida; Miao, Houxun; Wang, Pei-Hsun; Leaird, Daniel E.; Srinivasan, Kartik; Chen, Lei; Aksyuk, Vladimir; Weiner, Andrew M.
2012-09-01
Recent investigations of microcavity frequency combs based on cascaded four-wave mixing have revealed a link between the evolution of the optical spectrum and the observed temporal coherence. Here we study a silicon nitride microresonator for which the initial four-wave mixing sidebands are spaced by multiple free spectral ranges (FSRs) from the pump, then fill in to yield a comb with single FSR spacing, resulting in partial coherence. By using a pulse shaper to select and manipulate the phase of various subsets of spectral lines, we are able to probe the structure of the coherence within the partially coherent comb. Our data demonstrate strong variation in the degree of mutual coherence between different groups of lines and provide support for a simple model of partially coherent comb formation.
Quantifying learning in biotracer studies.
Brown, Christopher J; Brett, Michael T; Adame, Maria Fernanda; Stewart-Koster, Ben; Bunn, Stuart E
2018-04-12
Mixing models have become requisite tools for analyzing biotracer data, most commonly stable isotope ratios, to infer dietary contributions of multiple sources to a consumer. However, Bayesian mixing models will always return a result that defaults to their priors if the data poorly resolve the source contributions, and thus, their interpretation requires caution. We describe an application of information theory to quantify how much has been learned about a consumer's diet from new biotracer data. We apply the approach to two example data sets. We find that variation in the isotope ratios of sources limits the precision of estimates for the consumer's diet, even with a large number of consumer samples. The approach we describe can thus serve as a type of power analysis that uses a priori simulations to find an optimal sample size. Biotracer data are fundamentally limited in their ability to discriminate consumer diets. We suggest that other types of data, such as gut content analysis, must be used as prior information in model fitting, to improve model learning about the consumer's diet. Information theory may also be used to identify optimal sampling protocols in situations where sampling of consumers is limited due to expense or ethical concerns.
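The "learning" quantified here can be expressed as the Kullback-Leibler divergence from the prior to the posterior over diet proportions. A minimal sketch of that calculation on a discretized proportion grid (this is an illustration of the idea, not the authors' implementation; the grid, prior, and posterior shapes are invented):

```python
import numpy as np

def kl_divergence(p, q):
    # KL(p || q) in bits, for discrete distributions on the same grid
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# grid over the proportion of one source in the consumer's diet
grid = np.linspace(0.0, 1.0, 201)
prior = np.ones_like(grid) / len(grid)          # uninformative prior

# toy "posterior": biotracer data concentrate belief around 0.7
post = np.exp(-0.5 * ((grid - 0.7) / 0.05) ** 2)
post /= post.sum()

info_gain = kl_divergence(post, prior)
print(f"information gained from the data: {info_gain:.2f} bits")
```

An information gain near zero would signal that the model has simply returned its prior, which is exactly the failure mode the abstract warns about.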
Research on mixed network architecture collaborative application model
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Zhao, Xi'an; Liang, Song
2009-10-01
When facing the complex requirements of city development, ever-growing spatial data, the rapid development of geographical business, and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (whether of the Client/Server or Browser/Server model) does not support this well. Collaborative applications are one good resolution. A collaborative application has four main problems to resolve: consistency and co-edit conflict, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, proactive, and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and they bring new methods for cooperation and for access to spatial data. The multi-level cache holds a part of the full data; it reduces network load and improves the access and handling of spatial data, especially when editing spatial data. With agent technology, we make full use of agents' intelligence for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.
A compound chimeric antigen receptor strategy for targeting multiple myeloma.
Chen, K H; Wada, M; Pinz, K G; Liu, H; Shuai, X; Chen, X; Yan, L E; Petrov, J C; Salman, H; Senzel, L; Leung, E L H; Jiang, X; Ma, Y
2018-02-01
Current clinical outcomes using chimeric-antigen receptors (CARs) against multiple myeloma show promise in the eradication of bulk disease. However, relapse is commonly observed after treatment with these anti-BCMA (CD269) CARs, due to the reemergence of either antigen-positive or -negative cells. Hence, improvements in CAR design that target antigen loss and increase effector cell persistency represent a critical need. Here, we report on the anti-tumor activity of a CAR T-cell possessing two complete and independent CAR receptors against the multiple myeloma antigens BCMA and CS1. We determined that the resulting compound CAR (cCAR) T-cell possesses consistent, potent and directed cytotoxicity against each target antigen population. Using multiple mouse models of myeloma and mixed cell populations, we further show superior in vivo survival through directed cytotoxicity against multiple populations compared to a single-expressing CAR T-cell. These findings indicate that compound targeting of BCMA and CS1 on myeloma cells can potentially be an effective strategy for augmenting the response against myeloma bulk disease and for initiation of broader-coverage CAR therapy.
The mediating effect of context variation in mixed practice for transfer of basic science.
Kulasegaram, Kulamakan; Min, Cynthia; Howey, Elizabeth; Neville, Alan; Woods, Nicole; Dore, Kelly; Norman, Geoffrey
2015-10-01
Applying a previously learned concept to a novel problem is an important but difficult process called transfer. Practicing multiple concepts together (mixed practice mode) has been shown superior to practicing concepts separately (blocked practice mode) for transfer. This study examined the effect of single and multiple practice contexts for both mixed and blocked practice modalities on transfer performance. We looked at performance on near transfer (familiar contexts) cases and far transfer (unfamiliar contexts) cases. First year psychology students (n = 42) learned three physiological concepts in a 2 × 2 factorial study (one or two practice contexts and blocked or mixed practice). Each concept was practiced with two clinical cases; practice context was defined as the number of organ systems used (one system per concept vs. two systems). In blocked practice, two practice cases followed each concept; in mixed practice, students learned all concepts before seeing six practice cases. Transfer testing consisted of correctly classifying and explaining 15 clinical cases involving near and far transfer. The outcome was ratings of quality of explanations on a 0-3 scale. The repeated measures analysis showed a significant near versus far by organ system interaction [F(1,38) = 3.4, p < 0.002] with practice with a single context showing lower far transfer scores than near transfer [0.58 (0.37)-0.83 (0.37)] compared to the two contexts which had similar far and near transfer scores [1.19 (0.50)-1.01 (0.38)]. Practicing with two organ contexts had a significant benefit for far transfer regardless of mixed or blocked practice; the single context mixed practice group had the lowest far transfer performance; this was a large effect size (Cohen's d = 0.81). Using only one practice context during practice significantly lowers performance even with the usually superior mixed practice mode. Novices should be exposed to multiple contexts and mixed practice to facilitate transfer.
De Champlain, Andre F; Boulais, Andre-Philippe; Dallas, Andrew
2016-01-01
The aim of this research was to compare different methods of calibrating multiple-choice question (MCQ) and clinical decision-making (CDM) components for the Medical Council of Canada's Qualifying Examination Part I (MCCQEI) based on item response theory. Our data consisted of test results from 8,213 first-time applicants to MCCQEI in spring and fall 2010 and 2011 test administrations. The data set contained several thousand multiple-choice items and several hundred CDM cases. Four dichotomous calibrations were run using BILOG-MG 3.0. All 3 mixed-item-format (dichotomous MCQ responses and polytomous CDM case scores) calibrations were conducted using PARSCALE 4. The 2-PL model had identical numbers of items with chi-square values at or below a Type I error rate of 0.01 (83/3,499 or 0.02). In all 3 polytomous models, whether the MCQs were anchored or concurrently run with the CDM cases, results suggest very poor fit. All IRT abilities estimated from dichotomous calibration designs correlated very highly with each other. IRT-based pass-fail rates were extremely similar, not only across calibration designs and methods, but also with regard to the actual reported decision to candidates. The largest difference noted in pass rates was 4.78%, which occurred between the mixed-format concurrent 2-PL graded response model (pass rate = 80.43%) and the dichotomous anchored 1-PL calibrations (pass rate = 85.21%). Simpler calibration designs with dichotomized items should be implemented. The dichotomous calibrations provided better fit of the item response matrix than more complex, polytomous calibrations.
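The 1-PL and 2-PL models compared above differ only in whether each item's discrimination parameter is free; a minimal sketch of the 2-PL item response function (illustrative only, not the BILOG-MG/PARSCALE calibration itself):

```python
import math

def p_correct_2pl(theta, a, b):
    """2-PL item response function:
    P(X = 1 | theta) = 1 / (1 + exp(-a * (theta - b))).
    Fixing a = 1 for every item recovers the 1-PL (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# an examinee at ability theta = 0 on an item of matching difficulty b = 0
print(p_correct_2pl(0.0, a=1.0, b=0.0))   # 0.5 regardless of discrimination
```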
Xia, Xinghui; Wu, Qiong; Zhu, Baotong; Zhao, Pujun; Zhang, Shangwei; Yang, Lingyan
2015-08-01
We applied a mixing model based on stable isotope δ(13)C, δ(15)N, and C:N ratios to estimate the contributions of multiple sources to sediment nitrogen. We also developed a conceptual model describing and analyzing the impacts of climate change on nitrogen enrichment. These two models were applied to Miyun Reservoir to analyze the contribution of climate change to the variations in sediment nitrogen sources based on two (210)Pb- and (137)Cs-dated sediment cores. The results showed that during the past 50 years, average contributions of soil and fertilizer, submerged macrophytes, N2-fixing phytoplankton, and non-N2-fixing phytoplankton were 40.7%, 40.3%, 11.8%, and 7.2%, respectively. In addition, total nitrogen (TN) contents in sediment showed significant increasing trends from 1960 to 2010, and sediment nitrogen of both submerged macrophytes and phytoplankton sources exhibited significant increasing trends during the past 50 years. In contrast, soil and fertilizer sources showed a significant decreasing trend from 1990 to 2010. According to the changing trend of N2-fixing phytoplankton, changes of temperature and sunshine duration accounted for at least 43% of the trend in the sediment nitrogen enrichment over the past 50 years. Regression analysis of the climatic factors on nitrogen sources showed that the contributions of precipitation, temperature, and sunshine duration to the variations in sediment nitrogen sources ranged from 18.5% to 60.3%. The study demonstrates that the mixing model provides a robust method for calculating the contribution of multiple nitrogen sources in sediment, and this study also suggests that N2-fixing phytoplankton could be regarded as an important response factor for assessing the impacts of climate change on nitrogen enrichment. Copyright © 2015 Elsevier B.V. All rights reserved.
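The mass-balance idea behind such a three-tracer, four-source mixing model can be sketched as a small linear system: each tracer contributes one equation, plus a constraint that the source fractions sum to one. All end-member signatures and the mixture values below are hypothetical placeholders, not data from the study:

```python
import numpy as np

# hypothetical end-member signatures (rows: sources; cols: d13C, d15N, C:N)
sources = np.array([
    [-26.0,  4.0, 12.0],   # soil and fertilizer
    [-22.0,  8.0, 10.0],   # submerged macrophytes
    [-28.0,  0.0,  7.0],   # N2-fixing phytoplankton
    [-30.0,  6.0,  7.0],   # non-N2-fixing phytoplankton
])
mixture = np.array([-25.0, 5.0, 10.0])  # hypothetical sediment sample

# mass balance: A @ f = b, with an extra row enforcing sum(f) = 1
A = np.vstack([sources.T, np.ones(len(sources))])
b = np.append(mixture, 1.0)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
print({name: round(x, 3) for name, x in
       zip(["soil+fert", "macrophytes", "N2-fixers", "non-N2-fixers"], f)})
```

With three tracers and four sources the system is exactly determined; with more sources than equations, a Bayesian mixing model (as in the abstract above on quantifying learning) would be needed to characterize the under-determined solution space.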
NASA Astrophysics Data System (ADS)
McTigue, N. D.; Dunton, K. H.
2017-10-01
Predicting how alterations in sea ice-mediated primary production will impact Arctic food webs remains a challenge in forecasting ecological responses to climate change. One top-down approach to this challenge is to elucidate trophic roles of consumers as either specialists (i.e., consumers of predominantly one food resource) or generalists (i.e., consumers of multiple food resources) to categorize the dependence of consumers on each primary producer. At Hanna Shoal in the Chukchi Sea, Alaska, we used stable carbon and nitrogen isotope data to quantify trophic redundancy with standard ellipse areas at both the species and trophic guild levels. We also investigated species-level trophic plasticity by analyzing the varying extents that three end-members were assimilated by the food web using the mixing model simmr (Stable Isotope Mixing Model in R). Our results showed that ice algae, a combined phytoplankton and sediment organic matter composite (PSOM), and a hypothesized microphytobenthos (MPB) component were incorporated by consumers in the benthic food web, but their importance varied by species. Some primary consumers relied heavily on PSOM (e.g., the amphipods Ampelisca sp. and Byblis sp.; the copepod Calanus sp.), while others exhibited generalist feeding and obtained nutrition from multiple sources (e.g., the holothuroidean Ocnus glacialis, the gastropod Tachyrhynchus sp., the sipunculid Golfingia margaritacea, and the bivalves Ennucula tenuis, Nuculana pernula, Macoma sp., and Yoldia hyperborea). Most higher trophic level benthic predators, including the gastropods Buccinum sp., Cryptonatica affinis, and Neptunea sp., the seastar Leptasterias groenlandica, and the amphipod Anonyx sp. also exhibited trophic plasticity by coupling energy pathways from multiple primary producers including PSOM, ice algae, and MPB.
Our stable isotope data indicate that consumers in the Hanna Shoal food web exhibit considerable trophic redundancy, while few species were specialists and assimilated only one end-member. Although most consumers were capable of obtaining nutrition from multiple food sources, the timing, quantity, and quality of ice-mediated primary production may still have pronounced effects on food web structure.
NASA Astrophysics Data System (ADS)
Wang, Xiu-lin; Wei, Zheng; Wang, Rui; Huang, Wen-cai
2018-05-01
A self-mixing interferometer (SMI) with a resolution twenty times higher than that of a conventional interferometer is developed by using multiple reflections. The multiple-pass optical configuration is constructed simply by employing an external reflecting mirror; the configuration is simple and makes it easy to re-inject the light back into the laser cavity. Theoretical analysis shows that the measurement resolution is scalable by adjusting the number of reflections. Experiments show that the proposed method achieves an optical resolution of approximately λ/40. The influence of the displacement sensitivity gain (G) is further analyzed and discussed in practical experiments.
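The resolution scaling follows from the fringe spacing of an N-pass interferometer, λ/(2N): twenty passes turn the conventional λ/2 fringe into λ/40. A toy calculation (the 650 nm wavelength is an assumed example, not from the paper):

```python
def smi_fringe_resolution(wavelength_nm, n_reflections):
    """Displacement per interference fringe for an N-pass self-mixing
    interferometer: lambda / (2 * N). N = 1 recovers the conventional
    lambda/2 fringe spacing."""
    return wavelength_nm / (2 * n_reflections)

# a conventional SMI vs. the ~lambda/40 multi-pass configuration
print(smi_fringe_resolution(650.0, 1))    # 325.0 nm per fringe
print(smi_fringe_resolution(650.0, 20))   # 16.25 nm per fringe (lambda/40)
```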
ERIC Educational Resources Information Center
Su, Shu-Chin; Liang, Eleen
2017-01-01
This study is based on the "2014 the Schweitzer Program" in Taiwan which spanned for four weeks from the 2nd to 29th of August. The lessons included four classes of multimedia picture books and eight game-based lessons. The aim of this research is to describe how to integrate the theory of "Multiple Intelligence (MI)" by Howard…
Physics prospects of future neutrino oscillation experiments in Asia
NASA Astrophysics Data System (ADS)
Hagiwara, Kaoru
2004-12-01
The three neutrino model has 9 physical parameters, 3 neutrino masses, 3 mixing angles and 3 CP violating phases. Among them, neutrino oscillation experiments can probe 6 neutrino parameters: 2 mass squared differences, 3 mixing angles, and 1 CP phase. The experiments performed so far determined the magnitudes of the two mass squared differences, the sign of the smaller mass squared difference, the magnitudes of two of the three mixing angles, and the upper bound on the third mixing angle. The sign of the larger mass squared difference (the neutrino mass hierarchy pattern), the magnitude of the third mixing angle and the CP violating phase, and a two-fold ambiguity in the mixing angle that dictates the atmospheric neutrino oscillation should be determined by future oscillation experiments. In this talk, I introduce a few ideas of future long baseline neutrino oscillation experiments which make use of the super neutrino beams from J-PARC (Japan Proton Accelerator Research Complex) in Tokai village. We examine the potential of HyperKamiokande (HK), the proposed 1 Mega-ton water Čerenkov detector, and then study the fate and possible detection of the off-axis beam from J-PARC in Korea, which is available free throughout the period of the T2K (Tokai-to-SuperKamiokande) and the possible Tokai-to-HK projects. Although the CP violating phase can be measured accurately by studying νμ→νe and ν̄μ→ν̄e oscillations at HK, there appear multiple solution ambiguities which can be solved only by determining the neutrino mass hierarchy and the twofold ambiguity in the mixing angle. We show that very long baseline experiments with higher energy beams from J-PARC and a possible huge Water Čerenkov Calorimeter detector proposed in Beijing can resolve the neutrino mass hierarchy. If such a detector can be built in China, future experiments with a muon storage ring neutrino factory at J-PARC will be able to lift all the degeneracies in the three neutrino model parameters.
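For orientation, the two-flavor vacuum oscillation probability that underlies such baseline-and-energy optimizations can be evaluated directly. The baseline, energy, and mass-squared difference below are illustrative T2K-like numbers chosen near the first oscillation maximum, not values from the talk:

```python
import math

def osc_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.267 * dm2_ev2 * L_km / E_GeV) ** 2

# Tokai -> Kamioka-like baseline: L = 295 km, E = 0.6 GeV, dm2 ~ 2.5e-3 eV^2
p = osc_probability(1.0, 2.5e-3, 295.0, 0.6)
print(f"{p:.3f}")
```

The longer baselines discussed in the talk (Korea, Beijing) shift this phase and pick up matter effects, which is what makes them sensitive to the mass hierarchy.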
Deep Whole-Genome Sequencing to Detect Mixed Infection of Mycobacterium tuberculosis
Gan, Mingyu; Liu, Qingyun; Yang, Chongguang; Gao, Qian; Luo, Tao
2016-01-01
Mixed infection by multiple Mycobacterium tuberculosis (MTB) strains is associated with poor treatment outcome of tuberculosis (TB). Traditional genotyping methods have been used to detect mixed infections of MTB; however, their sensitivity and resolution are limited. Deep whole-genome sequencing (WGS) has proved highly sensitive and discriminative for studying population heterogeneity of MTB. Here, we developed a phylogeny-based method to detect MTB mixed infections using WGS data. We collected published WGS data of 782 global MTB strains from a public database. We called homogeneous and heterogeneous single nucleotide variations (SNVs) of individual strains by mapping short reads to the ancestral MTB reference genome. We constructed a phylogenomic database based on 68,639 homogeneous SNVs of 652 MTB strains. Mixed infections were determined if multiple evolutionary paths were identified by mapping the SNVs of individual samples to the phylogenomic database. By simulation, our method could specifically detect mixed infections when the sequencing depth of minor strains was as low as 1× coverage, and when the genomic distance of two mixed strains was as small as 16 SNVs. By applying our methods to all 782 samples, we detected 47 mixed infections and 45 of them were caused by locally endemic strains. The results indicate that our method is highly sensitive and discriminative for identifying mixed infections from deep WGS data of MTB isolates. PMID:27391214
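The core test, whether a single root-to-leaf path in the phylogeny can explain all branch-defining SNVs observed in a sample, can be illustrated with a toy tree. The tree and labels below are invented for illustration; the authors' method operates on a 652-strain phylogenomic database:

```python
# toy phylogeny: node -> parent; SNVs are labeled by the branch they arose on
PARENT = {"L1": "A", "L2": "A", "L3": "B", "A": "root", "B": "root"}

def path_to_root(node):
    """Collect the branches on the path from a node up to the root."""
    path = []
    while node != "root":
        path.append(node)
        node = PARENT[node]
    return set(path)

def is_mixed(sample_branches):
    """Flag mixed infection if no single root-to-leaf path explains all
    branch-defining SNVs observed in the sample."""
    leaves = [n for n in PARENT if n not in PARENT.values()]
    return not any(sample_branches <= path_to_root(leaf) for leaf in leaves)

print(is_mixed({"A", "L1"}))    # False: one strain on the A -> L1 path
print(is_mixed({"L1", "L3"}))   # True: SNVs from two incompatible lineages
```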
A mixing timescale model for TPDF simulations of turbulent premixed flames
Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...
2017-02-06
Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.
Investigation of the N2O emission strength in the U. S. Corn Belt
NASA Astrophysics Data System (ADS)
Fu, Congsheng; Lee, Xuhui; Griffis, Timothy J.; Dlugokencky, Edward J.; Andrews, Arlyn E.
2017-09-01
Nitrous oxide (N2O) has a high global warming potential and depletes stratospheric ozone. The U. S. Corn Belt plays an important role in the global anthropogenic N2O budget. To date, studies on local surface N2O emissions and the atmospheric N2O budget have commonly used Lagrangian models. In the present study, we used an Eulerian model, the Weather Research and Forecasting with Chemistry (WRF-Chem) model, to investigate the relationships between N2O emissions in the Corn Belt and observed atmospheric N2O mixing ratios. We derived a simple equation to relate the emission strengths to atmospheric N2O mixing ratios, and used the derived equation and hourly atmospheric N2O measurements at the KCMP tall tower in Minnesota to constrain agricultural N2O emissions. The modeled spatial patterns of atmospheric N2O were evaluated against discrete observations at multiple tall towers in the NOAA flask network. After optimization of the surface flux, the model reproduced reasonably well the hourly N2O mixing ratios monitored at the KCMP tower. Agricultural N2O emissions in the EDGAR42 database needed to be scaled up 19.0- to 28.1-fold to represent the true emissions in the Corn Belt for June 1-20, 2010, a peak emission period. Optimized mean N2O emissions were 3.00-4.38, 1.52-2.08, 0.61-0.81 and 0.56-0.75 nmol m⁻² s⁻¹ for June 1-20, August 1-20, October 1-20 and December 1-20, 2010, respectively. The simulated spatial patterns of atmospheric N2O mixing ratios after optimization were in good agreement with the NOAA discrete observations during the strong emission peak in June. Such spatial patterns suggest that the underestimate of emissions using IPCC (Intergovernmental Panel on Climate Change) inventory methodology is not dependent on tower measurement location.
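Because mixing-ratio enhancements respond linearly to a uniform scaling of surface emissions, the emission scale factor can be constrained by least squares against tower observations. A minimal sketch of that step, with invented enhancement values (not the paper's data or its derived equation):

```python
import numpy as np

def emission_scale_factor(obs_enhancement, model_enhancement):
    """Least-squares scale factor k minimizing ||obs - k * model||^2,
    exploiting the linear response of mixing-ratio enhancements to a
    uniform scaling of surface emissions."""
    m = np.asarray(model_enhancement, dtype=float)
    o = np.asarray(obs_enhancement, dtype=float)
    return float(np.dot(o, m) / np.dot(m, m))

# hypothetical hourly N2O enhancements above background (ppb)
modeled = np.array([0.1, 0.2, 0.15, 0.3])     # from prior-inventory emissions
observed = modeled * 20.0 + np.array([0.2, -0.1, 0.05, -0.15])  # noisy "truth"
print(round(emission_scale_factor(observed, modeled), 2))
```

A recovered factor near 20 would correspond to the kind of upscaling the study reports for the EDGAR42 inventory.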
Flavopiridol in Treating Patients With Relapsed or Refractory Lymphoma or Multiple Myeloma
2016-06-27
Adult Lymphocyte Depletion Hodgkin Lymphoma; Adult Lymphocyte Predominant Hodgkin Lymphoma; Adult Mixed Cellularity Hodgkin Lymphoma; Adult Nodular Sclerosis Hodgkin Lymphoma; Anaplastic Large Cell Lymphoma; Angioimmunoblastic T-cell Lymphoma; Extranodal Marginal Zone B-cell Lymphoma of Mucosa-associated Lymphoid Tissue; Nodal Marginal Zone B-cell Lymphoma; Recurrent Adult Diffuse Large Cell Lymphoma; Recurrent Adult Diffuse Mixed Cell Lymphoma; Recurrent Adult Diffuse Small Cleaved Cell Lymphoma; Recurrent Adult Grade III Lymphomatoid Granulomatosis; Recurrent Adult Hodgkin Lymphoma; Recurrent Adult T-cell Leukemia/Lymphoma; Recurrent Cutaneous T-cell Non-Hodgkin Lymphoma; Recurrent Grade 1 Follicular Lymphoma; Recurrent Grade 2 Follicular Lymphoma; Recurrent Grade 3 Follicular Lymphoma; Recurrent Mantle Cell Lymphoma; Recurrent Marginal Zone Lymphoma; Recurrent Mycosis Fungoides/Sezary Syndrome; Recurrent Small Lymphocytic Lymphoma; Refractory Multiple Myeloma; Splenic Marginal Zone Lymphoma; Stage I Multiple Myeloma; Stage II Multiple Myeloma; Stage III Multiple Myeloma; Waldenström Macroglobulinemia
Mixed-Mode Surveys: A Strategy to Reduce Costs and Enhance Response Rates
ERIC Educational Resources Information Center
Tobin, Daniel; Thomson, Joan; Radhakrishna, Rama; LaBorde, Luke
2012-01-01
Mixed-mode surveys present one opportunity for Extension to determine program outcomes at lower costs. In order to conduct a follow-up evaluation, we implemented a mixed-mode survey that relied on communication using the Web, postal mailings, and telephone calls. Using multiple modes conserved costs by reducing the number of postal mailings yet…
Mixing Qualitative and Quantitative Methods: Insights into Design and Analysis Issues
ERIC Educational Resources Information Center
Lieber, Eli
2009-01-01
This article describes and discusses issues related to research design and data analysis in the mixing of qualitative and quantitative methods. It is increasingly desirable to use multiple methods in research, but questions arise as to how best to design and analyze the data generated by mixed methods projects. I offer a conceptualization for such…
NASA Astrophysics Data System (ADS)
Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg
2010-06-01
Recently initiated observation networks in the Cordillera Blanca (Peru) provide temporally high-resolution, yet short-term, atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis data to air temperature and specific humidity, measured at the tropical glacier Artesonraju (northern Cordillera Blanca). The ESD modeling procedure includes combined empirical orthogonal function and multiple regression analyses and a double cross-validation scheme for model evaluation. Apart from the selection of predictor fields, the modeling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice using both single-field and mixed-field predictors. Statistical transfer functions are derived individually for different months and times of day. The forecast skill largely depends on month and time of day, ranging from 0 to 0.8. The mixed-field predictors perform better than the single-field predictors. The ESD model shows added value, at all time scales, against simpler reference models (e.g., the direct use of reanalysis grid point values). The ESD model forecast 1960-2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation but is sensitive to the chosen predictor type.
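The combined EOF-plus-multiple-regression step of such an ESD model can be sketched with synthetic data (random fields standing in for the reanalysis predictors and the station series; this is an illustration of the technique, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for a reanalysis predictor field: (time, gridpoints)
n_time, n_grid = 200, 50
field = rng.standard_normal((n_time, n_grid))

# EOF analysis via SVD of the time-anomaly matrix
anom = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
n_eofs = 3
pcs = U[:, :n_eofs] * s[:n_eofs]       # principal-component time series

# synthetic local target (e.g., station air temperature) tied to PC 1
target = 0.8 * pcs[:, 0] + 0.1 * rng.standard_normal(n_time)

# multiple regression of the target on the leading PCs
X = np.column_stack([np.ones(n_time), pcs])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ beta
skill = np.corrcoef(target, pred)[0, 1] ** 2   # explained variance
print(round(skill, 2))
```

In the real model this fit would be wrapped in the paper's double cross-validation scheme, so that the reported skill (0 to 0.8 depending on month and time of day) is out of sample rather than in sample as here.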
Fractional noise destroys or induces a stochastic bifurcation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qigui, E-mail: qgyang@scut.edu.cn; Zeng, Caibin, E-mail: zeng.cb@mail.scut.edu.cn; School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640
2013-12-15
Little seems to be known about the stochastic bifurcation phenomena of non-Markovian systems. Our intention in this paper is to understand such complex dynamics by a simple system, namely, the Black-Scholes model driven by a mixed fractional Brownian motion. The most interesting finding is that the multiplicative fractional noise not only destroys but also induces a stochastic bifurcation under some suitable conditions. So it opens a possible way to explore the theory of stochastic bifurcation in the non-Markovian framework.
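A mixed fractional Brownian motion, the driving noise in this model, is the sum of an ordinary Brownian motion and an independent fBm. A sketch of sampling one via Cholesky factorization of the fBm covariance (parameter values are illustrative; this is a standard simulation recipe, not the paper's analysis):

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, rng=None):
    """Sample a fractional Brownian motion path on n grid points via
    Cholesky factorization of its covariance:
    cov(s, t) = 0.5 * (s^2H + t^2H - |t - s|^2H)."""
    rng = rng or np.random.default_rng()
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)
    return t, L @ rng.standard_normal(n)

def mixed_fbm(n, H, eps, rng=None):
    """Mixed fractional Brownian motion M = B + eps * B_H with independent
    components, the noise driving the mixed Black-Scholes model."""
    rng = rng or np.random.default_rng()
    t, bh = fbm_cholesky(n, H, rng=rng)
    b = np.cumsum(rng.standard_normal(n)) * np.sqrt(t[0])  # dt = T/n
    return t, b + eps * bh

t, m = mixed_fbm(500, H=0.7, eps=0.5, rng=np.random.default_rng(1))
print(m.shape)
```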
Yakubova, Gulnoza; Hughes, Elizabeth M; Hornberger, Erin
2015-09-01
The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe-across-students single-case design, three high school students with ASD completed the study. All three students demonstrated greater accuracy in solving fraction word problems and maintained accuracy levels at a 1-week follow-up.
NASA Astrophysics Data System (ADS)
Xu, Bin; Ye, Ming; Dong, Shuning; Dai, Zhenxue; Pei, Yongzhen
2018-07-01
Quantitative analysis of recession curves of karst spring hydrographs is a vital tool for understanding karst hydrology and inferring hydraulic properties of karst aquifers. This paper presents a new model for simulating karst spring recession curves. The new model has the following characteristics: (1) the model considers two separate but hydraulically connected reservoirs: matrix reservoir and conduit reservoir; (2) the model separates karst spring hydrograph recession into three stages: conduit-drainage stage, mixed-drainage stage (with both conduit drainage and matrix drainage), and matrix-drainage stage; and (3) in the mixed-drainage stage, the model uses multiple conduit layers to present different levels of conduit development. The new model outperforms the classical Mangin model and the recently developed Fiorillo model for simulating observed discharge at the Madison Blue Spring located in northern Florida. This is attributed to the latter two characteristics of the new model. Based on the new model, a method is developed for estimating effective porosity of the matrix and conduit reservoirs for the three drainage stages. The estimated porosity values are consistent with measured matrix porosity at the study site and with estimated conduit porosity reported in literature. The new model for simulating karst spring hydrograph recession is mathematically general, and can be applied to a wide range of karst spring hydrographs to understand groundwater flow in karst aquifers. The limitations of the model are discussed at the end of this paper.
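The staged recession behavior can be illustrated with a minimal two-reservoir sketch, a fast conduit exponential plus a slow matrix exponential. The rate constants and stage thresholds below are invented for illustration and are simpler than the paper's multi-layer conduit formulation:

```python
import numpy as np

def recession(t, Qc, alpha_c, Qm, alpha_m):
    """Two-reservoir recession: fast conduit exponential plus slow matrix
    exponential. The hydrograph passes through conduit-, mixed-, and
    matrix-drainage stages as the fast term decays."""
    conduit = Qc * np.exp(-alpha_c * t)
    matrix = Qm * np.exp(-alpha_m * t)
    return conduit + matrix, conduit, matrix

t = np.linspace(0.0, 60.0, 601)                    # days
Q, conduit, matrix = recession(t, Qc=30.0, alpha_c=0.3, Qm=2.0, alpha_m=0.01)

# label stages by the conduit share of total discharge
# (the 0.9 / 0.1 thresholds are illustrative, not from the paper)
share = conduit / Q
stage = np.where(share > 0.9, "conduit",
                 np.where(share > 0.1, "mixed", "matrix"))
print(stage[0], stage[-1])
```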
LVC interaction within a mixed-reality training system
NASA Astrophysics Data System (ADS)
Pollock, Brice; Winer, Eliot; Gilbert, Stephen; de la Cruz, Julio
2012-03-01
The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainee interaction as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to empower LVC interaction in a reconfigurable, mixed reality environment. This system was developed and tested in an immersive, reconfigurable, and mixed reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, to overcome LVC interaction challenges and as a test bed for cutting-edge technology to meet future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and developed game engines. Evaluation involving military trained personnel found this system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real-time. This system achieves rare LVC interaction within multiple physical and virtual immersive environments for training in real-time across many distributed systems.
Competitive advantage for multiple-memory strategies in an artificial market
NASA Astrophysics Data System (ADS)
Mitman, Kurt E.; Choe, Sehyo C.; Johnson, Neil F.
2005-05-01
We consider a simple binary market model containing N competitive agents. The novel feature of our model is that it incorporates the tendency shown by traders to look for patterns in past price movements over multiple time scales, i.e. multiple memory-lengths. In the regime where these memory-lengths are all small, the average winnings per agent exceed those obtained for either (1) a pure population where all agents have equal memory-length, or (2) a mixed population comprising sub-populations of equal-memory agents with each sub-population having a different memory-length. Agents who consistently play strategies of a given memory-length are found to win more on average -- switching between strategies with different memory lengths incurs an effective penalty, while switching between strategies of equal memory does not. Agents employing short-memory strategies can outperform agents using long-memory strategies, even in the regime where an equal-memory system would have favored the use of long-memory strategies. Using the many-body 'Crowd-Anticrowd' theory, we obtain analytic expressions which are in good agreement with the observed numerical results. In the context of financial markets, our results suggest that multiple-memory agents have a better chance of identifying price patterns of unknown length and hence will typically have higher winnings.
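The competitive dynamics described here belong to the minority-game family of models. A minimal sketch of a binary game with fixed-memory agents follows; it is not the authors' exact model (no multiple-memory populations), and all parameters are hypothetical.

```python
import random

random.seed(0)

def play_minority_game(n_agents=51, memory=3, n_strategies=2, rounds=500):
    """Minimal minority game: each agent holds a few fixed lookup-table
    strategies over the last `memory` outcomes and plays its currently
    best-scoring one; agents on the minority side win a point each round."""
    n_hist = 2 ** memory
    # Each strategy maps every possible history to an action in {0, 1}.
    agents = [[[random.randrange(2) for _ in range(n_hist)]
               for _ in range(n_strategies)] for _ in range(n_agents)]
    virtual = [[0] * n_strategies for _ in range(n_agents)]  # strategy scores
    wins = [0] * n_agents
    history = 0  # last `memory` outcomes packed into an integer
    for _ in range(rounds):
        actions = []
        for i, strats in enumerate(agents):
            best = max(range(n_strategies), key=lambda s: virtual[i][s])
            actions.append(strats[best][history])
        minority = 0 if sum(actions) > n_agents / 2 else 1
        for i, strats in enumerate(agents):
            if actions[i] == minority:
                wins[i] += 1
            for s in range(n_strategies):  # score all strategies virtually
                if strats[s][history] == minority:
                    virtual[i][s] += 1
        history = ((history << 1) | minority) % n_hist  # keep last `memory` bits
    return wins

wins = play_minority_game()
mean_wins = sum(wins) / len(wins)
```

Because the minority side always contains fewer than half the agents, mean winnings per agent per round are necessarily below 0.5; the paper's comparison is between populations whose memory-length composition differs.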
Single and multiple phenotype QTL analyses of downy mildew resistance in interspecific grapevines.
Divilov, Konstantin; Barba, Paola; Cadle-Davidson, Lance; Reisch, Bruce I
2018-05-01
Downy mildew resistance across days post-inoculation, experiments, and years in two interspecific grapevine F1 families was investigated using linear mixed models and Bayesian networks, and five new QTL were identified. Breeding grapevines for downy mildew disease resistance has traditionally relied on qualitative gene resistance, which can be overcome by pathogen evolution. Analyzing two interspecific F1 families, both having ancestry derived from Vitis vinifera and wild North American Vitis species, across 2 years and multiple experiments, we found multiple loci associated with downy mildew sporulation and hypersensitive response in both families using a single phenotype model. The loci explained between 7 and 17% of the variance for either phenotype, suggesting a complex genetic architecture for these traits in the two families studied. For two loci, we used RNA-Seq to detect differentially transcribed genes and found that the candidate genes at these loci were likely not NBS-LRR genes. Additionally, using a multiple phenotype Bayesian network analysis, we found effects between the leaf trichome density, hypersensitive response, and sporulation phenotypes. Moderate-high heritabilities were found for all three phenotypes, suggesting that selection for downy mildew resistance is an achievable goal by breeding for either physical- or non-physical-based resistance mechanisms, with the combination of the two possibly providing durable resistance.
Aguado Loi, Claudia X; Alfonso, Moya L; Chan, Isabella; Anderson, Kelsey; Tyson, Dinorah Dina Martinez; Gonzales, Junius; Corvin, Jaime
2017-08-01
The purpose of this paper is to share lessons learned from a collaborative, community-informed mixed-methods approach to adapting an evidence-based intervention to meet the needs of Latinos with chronic disease and minor depression and their family members. Mixed-methods informed by community-based participatory research (CBPR) were employed to triangulate multiple stakeholders' perceptions of facilitators and barriers of implementing the adapted intervention in community settings. Community partners provided an insider perspective to overcome methodological challenges. The study's community-informed mixed-methods research approach offered advantages over a single research methodology by expanding or confirming research findings and engaging multiple stakeholders in data collection. This approach also allowed community partners to collaborate with academic partners in key research decisions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Guo, P; Huang, G H
2009-01-01
In this study, an inexact fuzzy chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is proposed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing inexact two-stage programming and mixed-integer linear programming techniques by incorporating uncertainties expressed as multiple uncertainties of intervals and dual probability distributions within a general optimization framework. The developed method can provide an effective linkage between the predefined environmental policies and the associated economic implications. Four special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it provides a linkage to predefined policies that have to be respected when a modeling effort is undertaken; secondly, it is useful for tackling uncertainties presented as intervals, probabilities, fuzzy sets and their incorporation; thirdly, it facilitates dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period, multi-level, and multi-option context; fourthly, the penalties are exercised with recourse against any infeasibility, which permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised solid waste-generation rates are violated. In a companion paper, the developed method is applied to a real case for the long-term planning of waste management in the City of Regina, Canada.
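The two-stage structure described above, committing to a first-stage decision before uncertain waste-generation rates are realized and then paying recourse penalties for any shortfall, can be sketched in miniature. This is not the IFCTIP method itself (which handles intervals, fuzzy sets, and dual probability distributions); it only illustrates the expected-recourse idea, with hypothetical costs and scenarios.

```python
def plan_capacity(expand_cost, penalty, scenarios, max_units=20):
    """Toy two-stage planning: choose an integer first-stage capacity
    before the waste-generation rate is known; any shortfall in a
    scenario incurs a recourse penalty after the fact."""
    def expected_cost(x):
        first_stage = expand_cost * x
        recourse = sum(p * penalty * max(0.0, demand - x)
                       for p, demand in scenarios)
        return first_stage + recourse
    best = min(range(max_units + 1), key=expected_cost)
    return best, expected_cost(best)

# Two equally likely waste-generation scenarios (hypothetical numbers):
# because the penalty rate exceeds the expansion cost, it pays to build
# enough capacity for the worst case here.
capacity, cost = plan_capacity(expand_cost=5.0, penalty=20.0,
                               scenarios=[(0.5, 6), (0.5, 10)])
```

In the full method the first-stage decisions span multiple facilities, periods, and options, and the same recourse logic is what links predefined policies to their economic implications.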
Communication Architecture in Mixed-Reality Simulations of Unmanned Systems
2018-01-01
Verification of the correct functionality of multi-vehicle systems in high-fidelity scenarios is required before any deployment of such a complex system, e.g., in missions of remote sensing or in mobile sensor networks. Mixed-reality simulations where both virtual and physical entities can coexist and interact have been shown to be beneficial for development, testing, and verification of such systems. This paper deals with the problems of designing a certain communication subsystem for such highly desirable realistic simulations. Requirements of this communication subsystem, including proper addressing, transparent routing, visibility modeling, or message management, are specified prior to designing an appropriate solution. Then, a suitable architecture of this communication subsystem is proposed together with solutions to the challenges that arise when simultaneous virtual and physical message transmissions occur. The proposed architecture can be utilized as a high-fidelity network simulator for vehicular systems with implicit mobility models that are given by real trajectories of the vehicles. The architecture has been utilized within multiple projects dealing with the development and practical deployment of multi-UAV systems, which support the architecture’s viability and advantages. The provided experimental results show the achieved similarity of the communication characteristics of the fully deployed hardware setup to the setup utilizing the proposed mixed-reality architecture. PMID:29538290
K →π matrix elements of the chromomagnetic operator on the lattice
NASA Astrophysics Data System (ADS)
Constantinou, M.; Costa, M.; Frezzotti, R.; Lubicz, V.; Martinelli, G.; Meloni, D.; Panagopoulos, H.; Simula, S.; ETM Collaboration
2018-04-01
We present the results of the first lattice QCD calculation of the K → π matrix elements of the chromomagnetic operator O_CM = g s̄ σ_μν G^μν d, which appears in the effective Hamiltonian describing ΔS = 1 transitions in and beyond the standard model. Having dimension five, the chromomagnetic operator is characterized by a rich pattern of mixing with operators of equal and lower dimensionality. The multiplicative renormalization factor as well as the mixing coefficients with the operators of equal dimension have been computed at one loop in perturbation theory. The power divergent coefficients controlling the mixing with operators of lower dimension have been determined nonperturbatively, by imposing suitable subtraction conditions. The numerical simulations have been carried out using the gauge field configurations produced by the European Twisted Mass Collaboration with N_f = 2+1+1 dynamical quarks at three values of the lattice spacing. Our result for the B parameter of the chromomagnetic operator at the physical pion and kaon point is B_CMO^(Kπ) = 0.273(69), while in the SU(3) chiral limit we obtain B_CMO = 0.076(23). Our findings are significantly smaller than the model-dependent estimate B_CMO ~ 1-4, currently used in phenomenological analyses, and improve the uncertainty on this important phenomenological quantity.
Communication Architecture in Mixed-Reality Simulations of Unmanned Systems.
Selecký, Martin; Faigl, Jan; Rollo, Milan
2018-03-14
Competition for resources can explain patterns of social and individual learning in nature.
Smolla, Marco; Gilman, R Tucker; Galla, Tobias; Shultz, Susanne
2015-09-22
In nature, animals often ignore socially available information despite the multiple theoretical benefits of social learning over individual trial-and-error learning. Using information filtered by others is quicker, more efficient and less risky than randomly sampling the environment. To explain the mix of social and individual learning used by animals in nature, most models penalize the quality of socially derived information as either out of date, of poor fidelity or costly to acquire. Competition for limited resources, a fundamental evolutionary force, provides a compelling, yet hitherto overlooked, explanation for the evolution of mixed-learning strategies. We present a novel model of social learning that incorporates competition and demonstrates that (i) social learning is favoured when competition is weak, but (ii) if competition is strong social learning is favoured only when resource quality is highly variable and there is low environmental turnover. The frequency of social learning in our model always evolves until it reduces the mean foraging success of the population. The results of our model are consistent with empirical studies showing that individuals rely less on social information where resources vary little in quality and where there is high within-patch competition. Our model provides a framework for understanding the evolution of social learning, a prerequisite for human cumulative culture. © 2015 The Author(s).
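The core trade-off in this model, that social information points every copier to the same best patch and so exposes them to crowding, can be sketched with a toy foraging simulation. This is not the authors' model; patch values and group sizes below are hypothetical.

```python
import random

random.seed(1)

def forage(n_social, n_individual, patch_quality, rounds=200):
    """Toy foraging model: individual learners each sample a random patch
    alone, while social learners all head for the best-known patch and
    must split its yield among themselves (competition)."""
    payoff_social = payoff_individual = 0.0
    best_known = max(patch_quality)  # the socially transmitted information
    for _ in range(rounds):
        if n_social:
            payoff_social += best_known  # total patch yield, shared below
        for _ in range(n_individual):
            payoff_individual += random.choice(patch_quality)
    # Mean payoff per learner per round for each learning strategy.
    return (payoff_social / max(n_social, 1) / rounds,
            payoff_individual / max(n_individual, 1) / rounds)

patches = [1.0, 2.0, 10.0]
# Weak competition: a lone social learner monopolizes the best patch.
weak_soc, weak_ind = forage(n_social=1, n_individual=5, patch_quality=patches)
# Strong competition: twenty social learners crowd onto one patch.
strong_soc, strong_ind = forage(n_social=20, n_individual=5, patch_quality=patches)
```

With one social learner, copying pays (10.0 per round versus roughly the mean patch value for samplers); with twenty, each copier's share drops to 0.5 and random individual sampling does better, mirroring result (ii) of the abstract.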
Krentzman, Amy R; Cranford, James A; Robinson, Elizabeth A R
2013-01-01
Alcoholics Anonymous (AA) states that recovery is possible through spiritual experiences and spiritual awakenings. Research examining spirituality as a mediator of AA's effect on drinking has been mixed. It is unknown whether such findings are due to variations in the operationalization of key constructs, such as AA and spirituality. To answer these questions, the authors used a longitudinal model to test 2 dimensions of AA as focal predictors and 6 dimensions of spirituality as possible mediators of AA's association with drinking. Data from the first 18 months of a 3-year longitudinal study of 364 alcohol-dependent individuals were analyzed. Structural equation modeling was used to replicate the analyses of Kelly et al. (Alcohol Clin Exp Res. 2011;35:454-463) and to compare AA attendance and AA involvement as focal predictors. Multiple regression analyses were used to determine which spirituality dimensions changed as the result of AA participation. A trimmed, data-driven model was employed to test multiple mediation paths simultaneously. The findings of the Kelly et al. study were replicated. AA involvement was a stronger predictor of drinking outcomes than AA attendance. AA involvement predicted increases in private religious practices, daily spiritual experiences, and forgiveness of others. However, only private religious practices mediated the relationship between AA and drinking.
Brace, Christopher L; Laeseke, Paul F; Sampson, Lisa A; Frey, Tina M; van der Weide, Daniel W; Lee, Fred T
2007-07-01
To prospectively investigate the ability of a single generator to power multiple small-diameter antennas and create large zones of ablation in an in vivo swine liver model. Thirteen female domestic swine (mean weight, 70 kg) were used for the study as approved by the animal care and use committee. A single generator was used to simultaneously power three triaxial antennas at 55 W per antenna for 10 minutes in three groups: a control group where antennas were spaced to eliminate ablation zone overlap (n=6; 18 individual zones of ablation) and experimental groups where antennas were spaced 2.5 cm (n=7) or 3.0 cm (n=5) apart. Animals were euthanized after ablation, and ablation zones were sectioned and measured. A mixed linear model was used to test for differences in size and circularity among groups. Mean (± standard deviation) cross-sectional areas of multiple-antenna zones of ablation at 2.5- and 3.0-cm spacing (26.6 cm² ± 9.7 and 32.2 cm² ± 8.1, respectively) were significantly larger than individual ablation zones created with single antennas (6.76 cm² ± 2.8, P<.001) and were 31% (2.5-cm spacing group: multiple antenna mean area, 26.6 cm²; 3 × single antenna mean area, 20.28 cm²) to 59% (3.0-cm spacing group: multiple antenna mean area, 32.2 cm²; 3 × single antenna mean area, 20.28 cm²) larger than 3 times the mean area of the single-antenna zones. Zones of ablation were found to be very circular, and vessels as large as 1.1 cm were completely coagulated with multiple antennas. A single generator may effectively deliver microwave power to multiple antennas. Large volumes of tissue may be ablated and large vessels coagulated with multiple-antenna ablation in the same time as single-antenna ablation. © RSNA, 2007.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cosgrove, Benjamin D.; Cell Decision Processes Center, Massachusetts Institute of Technology, Cambridge, MA; Biotechnology Process Engineering Center, Massachusetts Institute of Technology, Cambridge, MA
Idiosyncratic drug hepatotoxicity represents a major problem in drug development due to inadequacy of current preclinical screening assays, but recently established rodent models utilizing bacterial LPS co-administration to induce an inflammatory background have successfully reproduced idiosyncratic hepatotoxicity signatures for certain drugs. However, the low-throughput nature of these models renders them problematic for employment as preclinical screening assays. Here, we present an analogous, but high-throughput, in vitro approach in which drugs are administered to a variety of cell types (primary human and rat hepatocytes and the human HepG2 cell line) across a landscape of inflammatory contexts containing LPS and the cytokines TNF, IFNγ, IL-1α, and IL-6. Using this assay, we observed drug-cytokine hepatotoxicity synergies for multiple idiosyncratic hepatotoxicants (ranitidine, trovafloxacin, nefazodone, nimesulide, clarithromycin, and telithromycin) but not for their corresponding non-toxic control compounds (famotidine, levofloxacin, buspirone, and aspirin). A larger compendium of drug-cytokine mix hepatotoxicity data demonstrated that hepatotoxicity synergies were largely potentiated by TNF, IL-1α, and LPS within the context of multi-cytokine mixes. Then, we screened 90 drugs for cytokine synergy in human hepatocytes and found that a significantly larger fraction of the idiosyncratic hepatotoxicants (19%) synergized with a single cytokine mix than did the non-hepatotoxic drugs (3%). Finally, we used an information theoretic approach to ascertain especially informative subsets of cytokine treatments for the most effective construction of regression models for drug- and cytokine-mix-induced hepatotoxicities across these cell systems. Our results suggest that this drug-cytokine co-treatment approach could provide a useful preclinical tool for investigating inflammation-associated idiosyncratic drug hepatotoxicity.
Jumping into the healthcare retail market: our experience.
Pollert, Pat; Dobberstein, Darla; Wiisanen, Ronald
2008-01-01
Who among us has not heard of the retail-based clinic concept? Retail-based clinics have been springing up across the country in Target, Walmart, grocery stores, drugstores, and shopping malls. Due to multiple marketplace issues, others who have not traditionally been providers of healthcare saw an opportunity to meet the consumer's demand. Do retail and healthcare mix, and can this model be successful? MeritCare Health System in Fargo, ND made the decision to embrace and experiment with this new emerging consumerism model. This article reviews our experience in developing the first retail-based clinic in our service area and the state of North Dakota.
NASA Astrophysics Data System (ADS)
Meneveau, C. V.; Bai, K.; Katz, J.
2011-12-01
The vegetation canopy has a significant impact on various physical and biological processes such as forest microclimate, rainfall evaporation distribution and climate change. Most scaled laboratory experimental studies have used canopy element models that consist of rigid vertical strips or cylindrical rods that can be typically represented through only one or a few characteristic length scales, for example the diameter and height for cylindrical rods. However, most natural canopies and vegetation are highly multi-scale with branches and sub-branches, covering a wide range of length scales. Fractals provide a convenient idealization of multi-scale objects, since their multi-scale properties can be described in simple ways (Mandelbrot 1982). While fractal aspects of turbulence have been studied in several works in the past decades, research on turbulence generated by fractal objects started more recently. We present an experimental study of boundary layer flow over fractal tree-like objects. Detailed Particle-Image-Velocimetry (PIV) measurements are carried out in the near-wake of a fractal-like tree. The tree is a pre-fractal with five generations, with three branches and a scale reduction factor 1/2 at each generation. Its similarity fractal dimension (Mandelbrot 1982) is D ~ 1.58. Detailed mean velocity and turbulence stress profiles are documented, as well as their downstream development. We then turn attention to the turbulence mixing properties of the flow, specifically to the question whether a mixing length-scale can be identified in this flow, and if so, how it relates to the geometric length-scales in the pre-fractal object. Scatter plots of mean velocity gradient (shear) and Reynolds shear stress exhibit good linear relation at all locations in the flow. Therefore, in the transverse direction of the wake evolution, the Boussinesq eddy viscosity concept is appropriate to describe the mixing. 
We find that the measured mixing length increases with increasing streamwise locations. Conversely, the measured eddy viscosity and mixing length decrease with increasing elevation, which differs from eddy viscosity and mixing length behaviors of traditional boundary layers or canopies studied before. In order to find an appropriate length for the flow, several models based on the notion of superposition of scales are proposed and examined. One approach is based on spectral distributions. Another more practical approach is based on length-scale distributions evaluated using fractal geometry tools. These proposed models agree well with the measured mixing length. The results indicate that information about multi-scale clustering of branches as it occurs in fractals has to be incorporated into models of the mixing length for flows through canopies with multiple scales. The research is supported by National Science Foundation grant ATM-0621396 and AGS-1047550.
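The mixing-length and eddy-viscosity estimates discussed above follow from the Boussinesq relation −⟨u'v'⟩ = l_m² |dU/dy| dU/dy. A minimal sketch of the point-wise estimate, with hypothetical values standing in for PIV measurements:

```python
import math

def mixing_length(reynolds_stress, mean_shear):
    """Prandtl mixing-length estimate from the Reynolds shear stress
    <u'v'> (m^2/s^2) and mean shear dU/dy (1/s):
        -<u'v'> = l_m^2 * |dU/dy| * dU/dy  =>  l_m = sqrt(-<u'v'>) / |dU/dy|
    """
    return math.sqrt(-reynolds_stress) / abs(mean_shear)

def eddy_viscosity(reynolds_stress, mean_shear):
    """Boussinesq eddy viscosity: nu_t = -<u'v'> / (dU/dy)."""
    return -reynolds_stress / mean_shear

# Hypothetical point values from a wake profile.
l_m = mixing_length(reynolds_stress=-0.04, mean_shear=2.0)    # -> 0.1 m
nu_t = eddy_viscosity(reynolds_stress=-0.04, mean_shear=2.0)  # -> 0.02 m^2/s
```

The linear scatter of shear stress against shear reported in the abstract is exactly the condition under which a single l_m (and hence ν_t = l_m² |dU/dy|) describes the transverse mixing at a given location.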
Application of optimization technique for flood damage modeling in river system
NASA Astrophysics Data System (ADS)
Barman, Sangita Deb; Choudhury, Parthasarathi
2018-04-01
A river system is defined as a network of channels that drains different parts of a basin, uniting downstream to form a common outflow. Applying the various models found in the literature to a river system with multiple upstream inflows is not always straightforward and can involve lengthy procedures; when data sets are unavailable, model calibration and application may become difficult. For a river system, flow modeling can be simplified to a large extent if the channel network is replaced by an equivalent single channel. In the present work, optimization model formulations based on equivalent flow are developed, and a mixed-integer-programming-based pre-emptive goal programming model is applied to evaluate flood-control alternatives for a real-life river system in India.
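Pre-emptive goal programming ranks goals strictly by priority: lower-priority goals are consulted only to break ties among alternatives that do equally well on higher-priority goals. A minimal sketch of that lexicographic selection over a discrete set of alternatives follows; it is not the paper's mixed-integer formulation, and the alternatives, attributes, and targets are hypothetical.

```python
def preemptive_select(alternatives, goals):
    """Pre-emptive (lexicographic) selection: minimize the deviation from
    the highest-priority goal first; use each lower-priority goal only to
    break ties among the remaining candidates."""
    candidates = list(alternatives.items())
    for attr, target in goals:  # goals are listed in priority order
        best_dev = min(abs(vals[attr] - target) for _, vals in candidates)
        candidates = [(name, vals) for name, vals in candidates
                      if abs(vals[attr] - target) == best_dev]
    return candidates[0][0]

# Hypothetical flood-control alternatives and prioritized targets.
alternatives = {
    "levee":     {"peak_flow": 120, "cost": 9},
    "reservoir": {"peak_flow": 100, "cost": 14},
    "dredging":  {"peak_flow": 120, "cost": 6},
}
goals = [("peak_flow", 100), ("cost", 0)]  # flow goal dominates cost goal
choice = preemptive_select(alternatives, goals)
```

In a full MIP formulation the same priority structure is expressed with deviation variables minimized in sequence rather than by filtering a finite list.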
Modeling the Interactions Between Multiple Crack Closure Mechanisms at Threshold
NASA Technical Reports Server (NTRS)
Newman, John A.; Riddell, William T.; Piascik, Robert S.
2003-01-01
A fatigue crack closure model is developed that includes interactions between the three closure mechanisms most likely to occur at threshold: plasticity, roughness, and oxide. This model, herein referred to as the CROP model (for Closure, Roughness, Oxide, and Plasticity), also includes the effects of out-of-plane cracking and multi-axial loading. These features make the CROP closure model uniquely suited for, but not limited to, threshold applications. Rough cracks are idealized here as two-dimensional sawtooths, whose geometry induces mixed-mode crack-tip stresses. Continuum mechanics and crack-tip dislocation concepts are combined to relate crack face displacements to crack-tip loads. Geometric criteria are used to determine closure loads from crack-face displacements. Finite element results, used to verify model predictions, provide critical information about the locations where crack closure occurs.
Bjork, K E; Kopral, C A; Wagner, B A; Dargatz, D A
2015-12-01
Antimicrobial use in agriculture is considered a pathway for the selection and dissemination of resistance determinants among animal and human populations. From 1997 through 2003 the U.S. National Antimicrobial Resistance Monitoring System (NARMS) tested clinical Salmonella isolates from multiple animal and environmental sources throughout the United States for resistance to panels of 16-19 antimicrobials. In this study we applied two mixed effects models, the generalized linear mixed model (GLMM) and the accelerated failure time frailty (AFT-frailty) model, to susceptible/resistant and interval-censored minimum inhibitory concentration (MIC) metrics, respectively, from Salmonella enterica subspecies enterica serovar Typhimurium isolates from livestock and poultry. Objectives were to compare characteristics of the two models and to examine the effects of time, species, and multidrug resistance (MDR) on the resistance of isolates to individual antimicrobials, as revealed by the models. Fixed effects were year of sample collection, isolate source species and MDR indicators; laboratory study site was included as a random effect. MDR indicators were significant for every antimicrobial and were dominant effects in multivariable models. Temporal trends and source species influences varied by antimicrobial. In GLMMs, the intra-class correlation coefficient ranged up to 0.8, indicating that the proportion of variance accounted for by laboratory study site could be high. AFT models tended to be more sensitive, detecting more curvilinear temporal trends and species differences; however, high levels of left- or right-censoring made some models unstable and results uninterpretable. Results from GLMMs may be biased by cutoff criteria used to collapse MIC data into binary categories, and may fail to signal important trends or shifts if the series of antibiotic dilutions tested does not span a resistance threshold.
Our findings demonstrate the challenges of measuring the AMR ecosystem and the complexity of interacting factors, and have implications for future monitoring. We include suggestions for future data collection and analyses, including alternative modeling approaches. Published by Elsevier B.V.
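The intra-class correlation coefficient reported above measures how much of the total variance is attributable to the clustering unit (here, laboratory study site). A minimal sketch using the classic one-way ANOVA estimator on hypothetical balanced data:

```python
def icc_oneway(groups):
    """Intra-class correlation from a balanced one-way layout (e.g.
    repeated isolates per laboratory site), via the classic ANOVA
    estimator: ICC = (MSB - MSW) / (MSB + (n - 1) * MSW)."""
    k = len(groups)      # number of clusters
    n = len(groups[0])   # observations per cluster (balanced design)
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    # Between-cluster and within-cluster mean squares.
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) \
          / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Hypothetical readings from three sites; values cluster tightly by site,
# so most of the variance is between sites and the ICC is high.
clustered = [[4, 5, 4], [8, 9, 8], [1, 2, 1]]
icc = icc_oneway(clustered)
```

A high ICC like the 0.8 reported in the abstract is exactly the situation in which ignoring the clustering (fitting a simple linear model) understates standard errors.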
Lu, Tao; Wang, Min; Liu, Guangying; Dong, Guang-Hui; Qian, Feng
2016-01-01
It is well known that there is a strong relationship between HIV viral load and CD4 cell counts in AIDS studies. However, the relationship between them changes during the course of treatment and may vary among individuals. During treatment, some individuals may experience terminal events such as death. Because the terminal event may be related to the individual's viral load measurements, the terminal mechanism is non-ignorable. Furthermore, there exist competing risks from multiple types of events, such as AIDS-related death and other death. Most joint models for the analysis of longitudinal-survival data developed in the literature have focused on constant coefficients and assume a symmetric distribution for the endpoints, which does not meet the need to investigate the varying nature of the relationship between HIV viral load and CD4 cell counts in practice. We develop a mixed-effects varying-coefficient model with skewed distribution coupled with a cause-specific varying-coefficient hazard model with random effects to deal with the varying relationship between the two endpoints for longitudinal competing-risks survival data. A fully Bayesian inference procedure is established to estimate parameters in the joint model. The proposed method is applied to a multicenter AIDS cohort study. Various scenario-based candidate models that account for partial data features are compared. Some interesting findings are presented.
Neural Population Coding of Multiple Stimuli
Ma, Wei Ji
2015-01-01
In natural scenes, objects generally appear together with other objects. Yet, theoretical studies of neural population coding typically focus on the encoding of single objects in isolation. Experimental studies suggest that neural responses to multiple objects are well described by linear or nonlinear combinations of the responses to constituent objects, a phenomenon we call stimulus mixing. Here, we present a theoretical analysis of the consequences of common forms of stimulus mixing observed in cortical responses. We show that some of these mixing rules can severely compromise the brain's ability to decode the individual objects. This cost is usually greater than the cost incurred by even large reductions in the gain or large increases in neural variability, explaining why the benefits of attention can be understood primarily in terms of a stimulus selection, or demixing, mechanism rather than purely as a gain increase or noise reduction mechanism. The cost of stimulus mixing becomes even higher when the number of encoded objects increases, suggesting a novel mechanism that might contribute to set size effects observed in myriad psychophysical tasks. We further show that a specific form of neural correlation and heterogeneity in stimulus mixing among the neurons can partially alleviate the harmful effects of stimulus mixing. Finally, we derive simple conditions that must be satisfied for unharmful mixing of stimuli. PMID:25740513
Wankel, Scott D.; Kendall, Carol; Paytan, Adina
2009-01-01
Nitrate (NO3-) concentrations and dual isotopic composition (δ15N and δ18O) were measured during various seasons and tidal conditions in Elkhorn Slough to evaluate mixing of sources of NO3- within this California estuary. We found the isotopic composition of NO3- was influenced most heavily by mixing of two primary sources with unique isotopic signatures, a marine source (Monterey Bay) and a terrestrial agricultural runoff source (Old Salinas River). However, our attempt to use a simple two end-member mixing model to calculate the relative contribution of these two NO3- sources to the Slough was complicated by periods of nonconservative behavior and/or the presence of additional sources, particularly during the dry season when NO3- concentrations were low. Although multiple linear regression generally yielded good fits to the observed data, deviations from conservative mixing were still evident. After consideration of potential alternative sources, we concluded that deviations from two end-member mixing were most likely derived from interactions with marsh sediments in regions of the Slough where high rates of NO3- uptake and nitrification result in NO3- with low δ15N and high δ18O values. A simple steady-state dual isotope model is used to illustrate the impact of cycling processes in an estuarine setting, which may play a primary role in controlling NO3- isotopic composition when and where cycling rates and water residence times are high. This work expands our understanding of nitrogen and oxygen isotopes as biogeochemical tools for investigating NO3- sources and cycling in estuaries, emphasizing the role that cycling processes may play in altering isotopic composition. Copyright 2009 by the American Geophysical Union.
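The conservative two end-member calculation referenced above can be sketched in a few lines. This is a minimal illustration, assuming the tracer mixes linearly (e.g. comparable NO3- concentrations in both end-members); the δ15N values are hypothetical, not taken from the study:

```python
def two_endmember_fraction(delta_mix, delta_a, delta_b):
    """Fraction of end-member A in a conservative two end-member mix,
    from a linearly mixing tracer (e.g. d15N under the simplifying
    assumption of equal NO3- concentrations in both end-members)."""
    if delta_a == delta_b:
        raise ValueError("end-members must be isotopically distinct")
    return (delta_mix - delta_b) / (delta_a - delta_b)

# Hypothetical d15N values: marine source ~ 8 permil,
# agricultural runoff ~ 20 permil, sample measures 11 permil.
f_marine = two_endmember_fraction(delta_mix=11.0, delta_a=8.0, delta_b=20.0)
# f_marine = (11 - 20) / (8 - 20) = 0.75
```

Nonconservative behavior of the kind reported in the abstract shows up as fractions outside [0, 1] or as residuals when more than two sources contribute.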
NASA Astrophysics Data System (ADS)
Kolesnikov, E. K.; Manuilov, A. S.; Petrov, V. S.; Klyushnikov, G. N.; Chernov, S. V.
2017-06-01
The influence of the current neutralization process, the phase mixing of electron trajectories, and multiple Coulomb scattering of beam electrons by atoms of the background medium on the spatial growth increment of the sausage instability of a relativistic electron beam propagating in an ohmic plasma channel has been considered. It is shown that stronger current neutralization leads to a significant enhancement of this instability, whereas phase mixing and multiple scattering of beam electrons by atoms of the background medium act as stabilizing factors.
Laboratory testing and economic analysis of high RAP warm mixed asphalt.
DOT National Transportation Integrated Search
2009-03-24
This report contains laboratory testing, economic analysis, literature review, and information obtained from multiple producers throughout the state of Mississippi regarding the use of high RAP (50% to 100%) mixtures containing warm mix additives. T...
LatMix 2011 and 2012 Dispersion Analysis
2017-05-15
was to complete the analysis and write-up of additional manuscripts relating to LatMix, and to further strengthen the results for multiple manuscripts...versus a propagation of energy upwards from small mixing events (e.g., via generation of vortices). A key technical goal of our work was to develop...raw waveforms collected during the LatMix 2011 airborne lidar surveys, and completion of the analysis and write-up of major results stemming from
NASA Astrophysics Data System (ADS)
Rounds, S. A.; Buccola, N. L.
2014-12-01
The two-dimensional (longitudinal, vertical) water-quality model CE-QUAL-W2, version 3.7, was enhanced with new features to help dam operators and managers efficiently explore and optimize potential solutions for temperature management downstream of thermally stratified reservoirs. Such temperature management often is accomplished by blending releases from multiple dam outlets that access water of different temperatures at different depths in the reservoir. The original blending algorithm in this version of the model was limited to mixing releases from two outlets at a time, and few constraints could be imposed. The new enhanced blending algorithm allows the user to (1) specify a time-series of target release temperatures, (2) designate from 2 to 10 floating or fixed-elevation outlets for blending, (3) impose maximum head constraints as well as minimum and maximum flow constraints for any blended outlet, and (4) set a priority designation for each outlet that allows the model to choose which outlets to use and how to balance releases among them. The modified model was tested against a previously calibrated model of Detroit Lake on the North Santiam River in northwestern Oregon, and the results compared well. The enhanced model code is being used to evaluate operational and structural scenarios at multiple dam/reservoir systems in the Willamette River basin in Oregon, where downstream temperature management for endangered fish is a high priority for resource managers and dam operators. These updates to the CE-QUAL-W2 blending algorithm allow scenarios involving complicated dam operations and/or hypothetical outlet structures to be evaluated more efficiently with the model, with decreased need for multiple/iterative model runs or preprocessing of model inputs to fully characterize the operational constraints.
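The core of such a blending step, hitting a target release temperature by splitting flow between a warmer and a colder outlet, can be sketched as follows. This is a minimal illustration of the idea only, not the CE-QUAL-W2 blending algorithm itself, and the temperatures are hypothetical:

```python
def blend_fraction(t_target, t_upper, t_lower):
    """Flow fraction to release from the upper (warmer) outlet so the
    flow-weighted blend hits t_target, clipped to the feasible range
    [0, 1]. A two-outlet sketch of the blending idea; the actual model
    handles 2-10 outlets, head/flow constraints, and priorities."""
    if t_upper == t_lower:
        return 1.0  # any split yields the same release temperature
    f = (t_target - t_lower) / (t_upper - t_lower)
    return min(max(f, 0.0), 1.0)

# Hypothetical stratified reservoir: 18 C surface water, 6 C deep water.
f = blend_fraction(t_target=12.0, t_upper=18.0, t_lower=6.0)
# f = 0.5 -> equal flows from the two outlets give a 12 C release
```

When the target lies outside the range spanned by the two outlet temperatures, the clipped fraction simply releases everything from the nearer outlet, which is why the enhanced algorithm's additional constraints and priorities matter in practice.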
Characterization and Modeling of Atmospheric Flow Within and Above Plant Canopies
NASA Astrophysics Data System (ADS)
Souza Freire Grion, Livia
The turbulent flow within and above plant canopies is responsible for the exchange of momentum, heat, gases and particles between vegetation and the atmosphere. Turbulence is also responsible for the mixing of air inside the canopy, playing an important role in chemical and biophysical processes occurring in the plants' environment. In the last fifty years, research has significantly advanced the understanding of and ability to model the flow field within and above the canopy, but important issues remain unsolved. In this work, we focus on (i) the estimation of turbulent mixing timescales within the canopy from field data; and (ii) the development of new computationally efficient modeling approaches for the coupled canopy-atmosphere flow field. The turbulent mixing timescale represents how quickly turbulence creates a well-mixed environment within the canopy. When the mixing timescale is much smaller than the timescale of other relevant processes (e.g. chemical reactions, deposition), the system can be assumed to be well-mixed and detailed modeling of turbulence is not critical to predict the system evolution. Conversely, if the mixing timescale is comparable or larger than the other timescales, turbulence becomes a controlling factor for the concentration of the variables involved; hence, turbulence needs to be taken into account when studying and modeling such processes. In this work, we used a combination of ozone concentration and high-frequency velocity data measured within and above the canopy in the Amazon rainforest to characterize turbulent mixing. The eddy diffusivity parameter (used as a proxy for mixing efficiency) was applied in a simple theoretical model of one-dimensional diffusion, providing an estimate of turbulent mixing timescales as a function of height within the canopy and time-of-day. 
Results showed that, during the day, the Amazon rainforest is characterized by well-mixed conditions with mixing timescales smaller than thirty minutes in the upper-half of the canopy, and partially mixed conditions in the lower half of the canopy. During the night, most of the canopy (except for the upper 20%) is either partially or poorly mixed, resulting in mixing timescales of up to several hours. For the specific case of ozone, the mixing timescales observed during the day are much lower than the chemical and deposition timescales, whereas chemical processes and turbulence have comparable timescales during the night. In addition, the high day-to-day variability in mixing conditions and the fast increase in mixing during the morning transition period indicate that turbulence within the canopy needs to be properly investigated and modeled in many studies involving plant-atmosphere interactions. Motivated by the findings described above, this work proposes and tests a new approach for modeling canopy flows. Typically, vertical profiles of flow statistics are needed to represent canopy-atmosphere exchanges in chemical and biophysical processes happening within the canopy. Current single-column models provide only steady-state (equilibrium) profiles, and rely on closure assumptions that do not represent the dominant non-local turbulent fluxes present in canopy flows. We overcome these issues by adapting the one-dimensional turbulent (ODT) model to represent atmospheric flows from the ground up to the top of the atmospheric boundary layer (ABL). The ODT model numerically resolves the one-dimensional diffusion equation along a vertical line (representing a horizontally homogeneous ABL column), and the presence of three-dimensional turbulence is added through the effect of stochastic eddies. 
Simulations of ABL without canopy were performed for different atmospheric stabilities and a diurnal cycle, to test the capabilities of this modeling approach in representing unsteady flows with strong non-local transport. In addition, four different types of canopies were simulated, one of them including the transport of scalar with a point source located inside the canopy. The comparison of all simulations with theory and field data provided satisfactory results. The main advantages of using ODT compared to typical 1D canopy-flow models are the ability to represent the coupled canopy-ABL flow with one single modeling approach, the presence of non-local turbulent fluxes, the ability to simulate transient conditions, the straightforward representation of multiple scalar fields, and the presence of only one adjustable parameter (as opposed to the several adjustable constants and boundary conditions needed for other modeling approaches). The results obtained with ODT as a stand-alone model motivated its use as a surface parameterization for Large-Eddy Simulation (LES). In this two-way coupling between LES and ODT, the former is used to simulate the ABL in a case where a canopy is present but cannot be resolved by the LES (i.e., the LES first vertical grid point is above the canopy). ODT is used to represent the flow field between the ground and the first LES grid point, including the region within and just above the canopy. In this work, we tested the ODT-LES model for three different types of canopies and obtained promising results. Although more work is needed in order to improve first and second-order statistics within the canopy (i.e. in the ODT domain), the results obtained for the flow statistics in the LES domain and for the third order statistics in the ODT domain demonstrate that the ODT-LES model is capable of capturing some important features of the canopy-atmosphere interaction. 
This new surface superparameterization approach using ODT provides a new alternative for simulations that require complex interactions between the flow field and near-surface processes (e.g. sand and snow drift, waves over water surfaces) and can potentially be extended to other large-scale models, such as mesoscale and global circulation models.
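The mixing-timescale argument above can be illustrated with the standard diffusive scaling τ ≈ h²/K, where h is the canopy-layer depth and K the eddy diffusivity. The numbers below are hypothetical, and the study itself estimated timescales with a one-dimensional diffusion model rather than this closed form:

```python
def mixing_timescale(depth_m, eddy_diffusivity):
    """Diffusive mixing timescale tau ~ h^2 / K (seconds) for a layer
    of depth h (m) with eddy diffusivity K (m^2/s). Scaling argument
    only, used here to illustrate well-mixed vs. poorly mixed regimes."""
    return depth_m ** 2 / eddy_diffusivity

# Hypothetical 30 m canopy layer:
tau_day = mixing_timescale(30.0, 1.0)     # 900 s (15 min): well mixed
tau_night = mixing_timescale(30.0, 0.01)  # 90000 s (25 h): poorly mixed
```

Comparing such a timescale against chemical or deposition timescales is the basis for deciding whether turbulence must be modeled explicitly, as the abstract argues for nighttime ozone chemistry.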
Assessing total fungal concentrations on commercial passenger aircraft using mixed-effects modeling.
McKernan, Lauralynn Taylor; Hein, Misty J; Wallingford, Kenneth M; Burge, Harriet; Herrick, Robert
2008-01-01
The primary objective of this study was to compare airborne fungal concentrations onboard commercial passenger aircraft at various in-flight times with concentrations measured inside and outside airport terminals. A secondary objective was to investigate the use of mixed-effects modeling of repeat measures from multiple sampling intervals and locations. Sequential triplicate culturable and total spore samples were collected on wide-body commercial passenger aircraft (n = 12) in the front and rear of coach class during six sampling intervals: boarding, midclimb, early cruise, midcruise, late cruise, and deplaning. Comparison samples were collected inside and outside airport terminals at the origin and destination cities. The MIXED procedure in SAS was used to model the mean and the covariance matrix of the natural log transformed fungal concentrations. Five covariance structures were tested to determine the appropriate models for analysis. Fixed effects considered included the sampling interval and, for samples obtained onboard the aircraft, location (front/rear of coach section), occupancy rate, and carbon dioxide concentrations. Overall, both total culturable and total spore fungal concentrations were low while the aircraft were in flight. No statistical difference was observed between measurements made in the front and rear sections of the coach cabin for either culturable or total spore concentrations. Both culturable and total spore concentrations were significantly higher outside the airport terminal compared with inside the airport terminal (p-value < 0.0001) and inside the aircraft (p-value < 0.0001). On the aircraft, the majority of total fungal exposure occurred during the boarding and deplaning processes, when the aircraft utilized ancillary ventilation and passenger activity was at its peak.
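The clustering that motivates the mixed-effects analysis can be illustrated by estimating the intraclass correlation, the quantity a mixed model accounts for and a simple linear model ignores. This is a generic one-way ANOVA method-of-moments sketch on made-up numbers, not the SAS MIXED analysis used in the study:

```python
def icc_oneway(groups):
    """Intraclass correlation for balanced clusters via the one-way
    ANOVA method of moments: ICC = (MSB - MSW) / (MSB + (n-1)*MSW),
    with k clusters of n measurements each."""
    k = len(groups)        # number of clusters (e.g. aircraft)
    n = len(groups[0])     # repeat measurements per cluster
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Hypothetical triplicate samples from three aircraft: values cluster
# tightly within each aircraft, so the ICC is close to 1.
clustered = [[1.0, 1.1, 0.9], [5.0, 5.1, 4.9], [9.0, 9.1, 8.9]]
icc = icc_oneway(clustered)
```

A high ICC means repeat samples from the same unit are far from independent, which is exactly when treating them as independent observations understates the true variability.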
K-Rich Basaltic Sources beneath Ultraslow Spreading Central Lena Trough in the Arctic Ocean
NASA Astrophysics Data System (ADS)
Ling, X.; Snow, J. E.; Li, Y.
2016-12-01
Magma sources fundamentally influence accretion processes at ultraslow spreading ridges. Potassium-enriched mid-ocean ridge basalt (K-MORB) was dredged from the central Lena Trough (CLT) in the Arctic Ocean (Nauret et al., 2011). Its geochemical signatures indicate a heterogeneous mantle source, with garnet probably present at low pressure. To explore the basaltic mantle sources beneath the study area, multiple models are applied in this study to predict melting sources and melting P-T conditions. P-T conditions are estimated with the experimentally derived thermobarometer of Hoang and Flower (1998). A batch melting model and a major-element model (alphaMELTS) are used to calculate the heterogeneous mantle sources. The modeling suggests phlogopite is the dominant H2O- and K-bearing mineral in the magma source. Five percent partial melting of phlogopite and amphibole, mixed with depleted mantle (DM) melt, is consistent with the incompatible element pattern of CLT basalt. P-T estimation gives 1198-1212 °C at 4-7 kbar as the possible melting condition for CLT basalt, whereas the chemical composition of north Lena Trough (NLT) basalt is similar to N-MORB and its P-T estimation corresponds to a 1300 °C normal mantle adiabat. The CLT basalt bulk composition is a mixture of 40% of the K-MORB endmember and an N-MORB-like endmember similar to NLT basalt; therefore, binary mixing of the two endmembers exists in the CLT region. This mixing bears on the tectonic evolution of the region, which was simultaneous with the opening of the Arctic Ocean.
Job-mix modeling and system analysis of an aerospace multiprocessor.
NASA Technical Reports Server (NTRS)
Mallach, E. G.
1972-01-01
An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
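The Markov-process approach mentioned in item 2 amounts to computing the stationary distribution of a transition matrix. The sketch below uses a hypothetical two-state (idle/busy) processor model, not the Apollo job mix analyzed in the paper:

```python
def steady_state(P, iters=200):
    """Stationary distribution of a discrete-time Markov chain by power
    iteration: repeatedly apply pi <- pi P from a uniform start. Toy
    stand-in for the Markov-process performance analysis above."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical processor model: state 0 = idle, state 1 = busy.
# Rows are the transition probabilities out of each state.
P = [[0.5, 0.5],
     [0.2, 0.8]]
pi = steady_state(P)  # long-run fractions of time idle vs. busy
```

For this matrix the long-run probabilities are 2/7 idle and 5/7 busy; the same machinery, on a larger state space encoding queue lengths and bus contention, yields the throughput predictions that the paper compares against simulation.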
Campaign datasets for Biomass Burning Observation Project (BBOP)
Kleinman,Larry; Mei,Fan; Arnott,William; Buseck,Peter; Chand,Duli; Comstock,Jennifer; Dubey,Manvendra; Lawson,Paul; Long,Chuck; Onasch,Timothy; Sedlacek,Arthur; Senum,Gunnar; Shilling,John; Springston,Stephen; Tomlinson,Jason; Wang,Jian
2014-04-24
This field campaign will address multiple uncertainties in aerosol intensive properties, which are poorly represented in climate models, by means of aircraft measurements in biomass burning plumes. Key topics to be investigated are: 1. Aerosol mixing state and morphology 2. Mass absorption coefficients (MACs) 3. Chemical composition of non-refractory material associated with light-absorbing carbon (LAC) 4. Production rate of secondary organic aerosol (SOA) 5. Microphysical processes relevant to determining aerosol size distributions and single scattering albedo (SSA) 6. CCN activity. These topics will be investigated through measurements near active fires (0-5 hours downwind), where limited observations indicate rapid changes in aerosol properties, and in biomass burning plumes aged >5 hours. Aerosol properties and their time evolution will be determined as a function of fire type, defined according to fuel and the mix of flaming and smoldering combustion at the source.
Globular cluster chemistry in fast-rotating dwarf stars belonging to intermediate-age open clusters
NASA Astrophysics Data System (ADS)
Pancino, Elena
2018-06-01
The peculiar chemistry observed in multiple populations of Galactic globular clusters is not generally found in other systems such as dwarf galaxies and open clusters, and no model can currently fully explain it. Exploring the boundaries of the multiple-population phenomenon and the variation of its extent in the space of cluster mass, age, metallicity, and compactness has proven to be a fruitful line of investigation. In the framework of a larger project to search for multiple populations in open clusters that is based on literature and survey data, I found peculiar chemical abundance patterns in a sample of intermediate-age open clusters with publicly available data. More specifically, fast-rotating dwarf stars (v sin i ≥ 50 km s-1) that belong to four clusters (Pleiades, Ursa Major, Coma Berenices, and Hyades) display a bimodality in either [Na/Fe] or [O/Fe], or both, with the low-Na and high-O peak more populated than the high-Na and low-O peak. Additionally, two clusters show a Na-O anti-correlation in the fast-rotating stars, and one cluster shows a large [Mg/Fe] variation in stars with high [Na/Fe], reaching the extreme Mg depletion observed in NGC 2808. Even considering that the sample sizes are small, these patterns call for attention in the light of a possible connection with the multiple population phenomenon of globular clusters. The specific chemistry observed in these fast-rotating dwarf stars is thought to be produced by a complex interplay of different diffusion and mixing mechanisms, such as rotational mixing and mass loss, which in turn are influenced by metallicity, binarity, mass, age, variability, and so on. However, with the sample in hand, it was not possible to identify which stellar parameters cause the observed Na and O bimodality and Na-O anti-correlation. This suggests that other stellar properties might be important in addition to stellar rotation. 
Stellar binarity might influence the rotational properties and enhance rotational mixing and mass loss of stars in a dense environment like that of clusters (especially globulars). In conclusion, rotation and binarity appear as a promising research avenue for better understanding multiple stellar populations in globular clusters; this is certainly worth exploring further.
O'Malley, A James; Christakis, Nicholas A
2011-01-01
We develop novel mixed effects models to examine the role of health traits on the status of peoples' close friendship nominations in the Framingham Heart Study. The health traits considered are both mutable (body mass index (BMI), smoking, blood pressure, body proportion, muscularity, and depression) and, for comparison, basically immutable (height, birth order, personality type, only child, and handedness); and the traits have varying degrees of observability. We test the hypotheses that existing ties (i.e. close friendship nominations) are more likely to dissolve between people with dissimilar (mutable and observable) health traits whereas new ties are more likely to form between those with similar (mutable and observable) traits while controlling for persons' age, gender, geographic separation, and education. The mixed effects models contain random effects for both the nominator (ego) and nominated (alter) persons in a tie to account for the fact that people were involved in multiple relationships and contributed observations at multiple exams. Results for BMI support the hypotheses that people of similar BMI are less likely to dissolve existing ties and more likely to form ties, while smoker to non-smoker ties were the least likely to dissolve and smoker to smoker ties were the most likely to form. We also validated previously known findings regarding homophily on age and gender, and found evidence that homophily also depends upon geographic separation. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21287589
Enhanced project management tool
NASA Technical Reports Server (NTRS)
Hsu, Chen-Jung (Inventor); Patel, Hemil N. (Inventor); Maluf, David A. (Inventor); Moh Hashim, Jairon C. (Inventor); Tran, Khai Peter B. (Inventor)
2012-01-01
A system for managing a project that includes multiple tasks and a plurality of workers. Input information includes characterizations based upon a human model, a team model and a product model. Periodic reports, such as one or more of a monthly report, a task plan report, a schedule report, a budget report and a risk management report, are generated and made available for display or further analysis or collection into a customized report template. An extensible database allows searching for information based upon context and upon content. Seven different types of project risks are addressed, including non-availability of required skill mix of workers. The system can be configured to exchange data and results with corresponding portions of similar project analyses, and to provide user-specific access to specified information.
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
NASA Astrophysics Data System (ADS)
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst; minor isotopic constituents may thus be overlooked in the presence of more dominant ones. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models fit several linear models (regression lines) to subgroups of the data, taking each slope as an estimate of an isotope ratio. The finite mixture models are parameterised by: • the number of distinct ratios, • the number of data points belonging to each ratio group, and • the ratio (i.e. slope) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups smaller than a control parameter are dropped; thereby the number of distinct ratios is determined. 
The analyst influences only a few control parameters of the algorithm: the maximum number of ratios and the minimum relative group size of data points belonging to each ratio. Computation of the models can be done with statistical software. In this study, Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models compute isotope ratios well relative to traditional estimation procedures and can be recommended as a more objective and straightforward way of calculating isotope ratios in geochemistry than current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3) [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007. (doi: 10.1016/j.csda.2006.08.014)
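The EM fitting of mixture-of-regression slopes described above can be sketched in plain Python (the study itself used R's flexmix). This is a bare-bones version with a fixed residual standard deviation and synthetic data, not the published workflow:

```python
import math
import random

def em_mixture_slopes(xs, ys, k=2, sigma=1.0, iters=100, seed=0):
    """EM for a mixture of k regression lines through the origin,
    whose slopes play the role of isotope ratios. Minimal sketch:
    residual sigma is fixed rather than estimated, and no small-group
    dropping is implemented."""
    rng = random.Random(seed)
    slopes = [rng.uniform(0.5, 2.0) for _ in range(k)]
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x, y in zip(xs, ys):
            lik = [weights[g] *
                   math.exp(-(y - slopes[g] * x) ** 2 / (2 * sigma ** 2))
                   for g in range(k)]
            total = sum(lik) or 1e-300
            resp.append([l / total for l in lik])
        # M-step: weighted least-squares slope through the origin
        for g in range(k):
            num = sum(r[g] * x * y for r, x, y in zip(resp, xs, ys))
            den = sum(r[g] * x * x for r, x in zip(resp, xs))
            if den > 0:
                slopes[g] = num / den
            weights[g] = sum(r[g] for r in resp) / len(xs)
    return sorted(slopes)

# Synthetic transient-signal data: two groups with true ratios 1.0 and 3.0
xs = [1.0, 2.0, 3.0, 4.0, 5.0] * 2
ys = [x * 1.0 for x in xs[:5]] + [x * 3.0 for x in xs[5:]]
ratios = em_mixture_slopes(xs, ys, k=2)  # close to [1.0, 3.0]
```

The real implementation additionally drops components whose relative group size falls below the analyst-set threshold, which is how the number of distinct ratios is determined automatically.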
Energy-exchange collisions of dark-bright-bright vector solitons.
Radhakrishnan, R; Manikandan, N; Aravinthan, K
2015-12-01
By constructing a more general form of the three-component dark-bright-bright mixed vector one-soliton solution of the generalized Manakov model with nine free real parameters, we find that a dark component can guide the practically interesting bright-bright vector one-soliton into two different parametric domains, giving rise to different physical situations. Moreover, our main investigation of the collision dynamics of such mixed vector solitons, based on the multisoliton solution of the generalized Manakov model constructed with the Hirota technique, reveals that the dark-bright-bright vector two-soliton supports energy-exchange collision dynamics. In particular, the dark component preserves its initial form, and the energy-exchange collision property of the bright-bright vector two-soliton solution of the Manakov model is retained during collision. In addition, the interactions between bound-state dark-bright-bright vector solitons reveal oscillations in their amplitudes. A similar kind of breathing effect was also observed experimentally in Bose-Einstein condensates. Some possible ways are suggested theoretically not only to control this breathing effect but also to manage the beating, bouncing, jumping, and attraction effects in the collision dynamics of dark-bright-bright vector solitons. The role of the multiple free parameters in our solution is examined to define the polarization vector, envelope speed, envelope width, envelope amplitude, grayness, and complex modulation of our solution. It is interesting to note that the polarization vector of our mixed vector one-soliton evolves on a sphere or a hyperboloid depending upon the initial parametric choices.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
1999-01-01
A linear spatial instability model for multiple spatially periodic supersonic rectangular jets is solved using Floquet-Bloch theory. It is assumed that in the region of interest a coherent wave can propagate. For the case studied, large spatial growth rates are found. This work is motivated by an increase in mixing found in experimental measurements of spatially periodic supersonic rectangular jets with phase-locked screech, and of edge-tone-feedback-locked subsonic jets. The results obtained in this paper suggest that phase-locked screech or edge tones may produce correlated spatially periodic jet flow downstream of the nozzles, which creates a large spanwise multi-nozzle region where a coherent wave can propagate. The large spatial growth rates for eddies obtained by the model calculations herein are related to the increased mixing, since eddies are the primary mechanism that transfers energy from the mean flow to the large turbulent structures. Spatial growth rates are presented for a set of relative Mach numbers and spacings for which experimental measurements have been made: relative Mach numbers from 1.25 to 1.75, with ratios of nozzle spacing to nozzle width from s/w(sub N) = 4 to s/w(sub N) = 13.7. The model may be of significant scientific and engineering value in the quest to understand and construct supersonic mixer-ejector nozzles which provide increased mixing and reduced noise.
da Cruz, Marcos de O R; Weksler, Marcelo
2018-02-01
The use of genetic data and tree-based algorithms to delimit evolutionary lineages is becoming an important practice in taxonomic identification, especially in morphologically cryptic groups. The effects of different phylogenetic and/or coalescent models on species delimitation analyses, however, are not clear. In this paper, we assess the impact of different evolutionary priors on phylogenetic estimation, species delimitation, and molecular dating of the genus Oligoryzomys (Mammalia: Rodentia), a group with complex taxonomy and morphologically cryptic species. Phylogenetic and coalescent analyses included 20 of the 24 recognized species of the genus, comprising 416 Cytochrome b sequences, 26 Cytochrome c oxidase I sequences, and 27 Beta-Fibrinogen Intron 7 sequences. For species delimitation, we employed the General Mixed Yule Coalescent (GMYC) and Bayesian Poisson tree processes (bPTP) analyses, and contrasted four genealogical and phylogenetic models: Pure-birth (Yule), Constant Population Size Coalescent, Multiple Species Coalescent, and a mixed Yule-Coalescent model. GMYC analyses of trees from different genealogical models resulted in similar species delimitation and phylogenetic relationships, with incongruence restricted to areas of poor nodal support. bPTP results, however, significantly differed from GMYC for 5 taxa. Oligoryzomys early diversification was estimated to have occurred in the Early Pleistocene, between 0.7 and 2.6 MYA. The mixed Yule-Coalescent model, however, recovered younger dating estimates for Oligoryzomys diversification and for the threshold of the speciation-coalescent horizon in GMYC. Eight of the 20 included Oligoryzomys species were identified as having two or more independent evolutionary units, indicating that the current taxonomy of Oligoryzomys is still unsettled. Copyright © 2017 Elsevier Inc. All rights reserved.
van Helden, Paul D.; Wilson, Douglas; Colijn, Caroline; McLaughlin, Megan M.; Abubakar, Ibrahim; Warren, Robin M.
2012-01-01
Numerous studies have reported that individuals can simultaneously harbor multiple distinct strains of Mycobacterium tuberculosis. To date, there has been limited discussion of the consequences for the individual or the epidemiological importance of mixed infections. Here, we review studies that documented mixed infections, highlight challenges associated with the detection of mixed infections, and discuss possible implications of mixed infections for the diagnosis and treatment of patients and for the community impact of tuberculosis control strategies. We conclude by highlighting questions that should be resolved in order to improve our understanding of the importance of mixed-strain M. tuberculosis infections. PMID:23034327
MixSIAR: advanced stable isotope mixing models in R
Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water with distinct biogeochemical and thermal properties mix and interact. Biogeochemical dynamics within the hyporheic zone are driven by both river-water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and the microbial and chemical processes also play important roles. A comprehensive understanding of biogeochemical processes in the hyporheic zone therefore requires a coupled thermo-hydro-biogeochemical model. Because multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules and parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use it to quantify module and parametric sensitivity in the assessment of climate change. To achieve this, we (1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, (2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, (3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and (4) apply a variance-based global sensitivity analysis to quantify scenario, module, and parameter uncertainty at each level of the hierarchy. The objectives of the research are to (1) identify the key controlling factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and (2) quantify carbon consumption in the hyporheic zone under different climate change scenarios.
Fulton, Kara A.; Liu, Danping; Haynie, Denise L.; Albert, Paul S.
2016-01-01
The NEXT Generation Health Study investigates dating violence among adolescents using a survey questionnaire. Each student is asked to affirm or deny multiple instances of violence in his/her dating relationship. There is, however, evidence suggesting that students not in a relationship responded to the survey, resulting in excessive zeros in the responses. This paper proposes likelihood-based and estimating-equation approaches to analyze zero-inflated clustered binary response data. We adopt a mixed model to account for the cluster effect, and the model parameters are estimated using a maximum-likelihood (ML) approach whose implementation requires a Gauss–Hermite quadrature (GHQ) approximation. Since an incorrect assumption on the random-effects distribution may bias the results, we also construct generalized estimating equations (GEE) that do not require correct specification of the within-cluster correlation. In a series of simulation studies, we examine the performance of the ML and GEE methods in terms of bias, efficiency, and robustness. We illustrate the importance of properly accounting for this zero inflation by reanalyzing the NEXT data, where this issue had previously been ignored. PMID:26937263
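The ML step described above integrates a random intercept out of each cluster's likelihood via Gauss–Hermite quadrature. A minimal sketch of that approximation for a random-intercept logistic model (illustrative only; the function and variable names are assumptions, not the authors' code):

```python
import numpy as np

def cluster_marginal_loglik(y, x, beta, sigma_b, n_nodes=20):
    """Marginal log-likelihood of one cluster's binary responses under a
    logistic model with a Normal(0, sigma_b^2) random intercept,
    approximated by Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    # Change of variables so the nodes sample b ~ N(0, sigma_b^2)
    b = np.sqrt(2.0) * sigma_b * nodes              # (n_nodes,)
    eta = x @ beta                                  # (n_obs,)
    # P(y_i | b) evaluated at every quadrature node
    p = 1.0 / (1.0 + np.exp(-(eta[:, None] + b[None, :])))
    lik_given_b = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return np.log(lik_given_b @ weights / np.sqrt(np.pi))
```

As sigma_b shrinks to zero, the quadrature collapses to the ordinary logistic log-likelihood, which gives a convenient sanity check.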
Li, Peng; Ji, Haoran; Wang, Chengshan; ...
2017-03-22
The increasing penetration of distributed generators (DGs) exacerbates the risk of voltage violations in active distribution networks (ADNs). Conventional voltage regulation devices, limited by their physical constraints, have difficulty meeting the requirement of real-time voltage and VAR control (VVC) with high precision when DGs fluctuate frequently. However, the soft open point (SOP), a flexible power electronic device, can be used as a continuous reactive power source to realize fast voltage regulation. Considering the cooperation of the SOP and multiple regulation devices, this paper proposes a coordinated VVC method based on SOP for ADNs. First, a time-series model of coordinated VVC is developed to minimize operation costs and eliminate voltage violations in ADNs. Then, by applying linearization and conic relaxation, the original nonconvex mixed-integer nonlinear optimization model is converted into a mixed-integer second-order cone programming (MISOCP) model that can be solved efficiently enough to meet the rapidity requirement of voltage regulation. Case studies on the IEEE 33-node and IEEE 123-node systems illustrate the effectiveness of the proposed method.
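The conic relaxation step typically replaces a nonconvex branch-flow equality such as P^2 + Q^2 = l*v with the inequality P^2 + Q^2 <= l*v, which is a rotated second-order cone. A small generic sketch of that equivalence check (not the paper's actual model):

```python
import numpy as np

def soc_holds(P, Q, ell, v):
    """Rotated second-order-cone form of the relaxed branch-flow
    constraint  P^2 + Q^2 <= ell * v  (with ell, v >= 0):
        || (2P, 2Q, ell - v) ||_2  <=  ell + v
    Squaring both sides recovers 4P^2 + 4Q^2 <= 4*ell*v."""
    return np.hypot(np.hypot(2 * P, 2 * Q), ell - v) <= ell + v

def relaxation_gap(P, Q, ell, v):
    """Slack between the two sides of the original equality; zero means
    the conic relaxation is tight at this point."""
    return ell * v - (P**2 + Q**2)
```

The MISOCP is exact precisely when the relaxation is tight (zero gap) at the optimum, which solvers can verify a posteriori.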
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption for the random effects, we propose a lack-of-fit test based on the profile likelihood function of the shape parameter. We apply this method to data from a prospective observational study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
Lee, Jaeyoung; Yasmin, Shamsunnahar; Eluru, Naveen; Abdel-Aty, Mohamed; Cai, Qing
2018-02-01
In the traffic safety literature, crash frequency variables are analyzed using univariate or multivariate count models. In this study, we propose an alternative approach to modeling multiple crash frequency dependent variables: instead of modeling the frequency of crashes, we analyze the proportion of crashes by vehicle type. A flexible mixed multinomial logit fractional split model is employed for analyzing the proportions of crashes by vehicle type at the macro level. In this model, the proportion allocated to an alternative is probabilistically determined by that alternative's propensity as well as the propensities of all other alternatives; exogenous variables thus directly affect all alternatives. The approach is well suited to accommodating a large number of alternatives without a sizable increase in computational burden. The model was estimated using crash data at the Traffic Analysis Zone (TAZ) level from Florida. The modeling results clearly illustrate the applicability of the proposed framework for crash proportion analysis. Further, the Excess Predicted Proportion (EPP), a screening performance measure analogous to the Highway Safety Manual (HSM) Excess Predicted Average Crash Frequency, is proposed for hot-zone identification. Using EPP, a statewide screening exercise was undertaken for each of the vehicle types considered in our analysis. The screening results revealed that the spatial pattern of hot zones differs substantially across vehicle types. Copyright © 2017 Elsevier Ltd. All rights reserved.
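The fractional-split idea above maps propensities to shares through a multinomial-logit (softmax) link, so every covariate shifts all shares at once. A hedged sketch of the deterministic part of such a model (names and the quasi-likelihood form are illustrative assumptions, not the authors' estimation code):

```python
import numpy as np

def crash_proportions(X, betas):
    """Expected crash proportions per zone under a multinomial-logit
    fractional split: the share of alternative k is
    exp(x'b_k) / sum_j exp(x'b_j)."""
    util = X @ betas.T                        # (zones, alternatives)
    util = util - util.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(util)
    return e / e.sum(axis=1, keepdims=True)

def fractional_loglik(props_obs, X, betas, eps=1e-12):
    """Quasi-log-likelihood for observed proportions (rows sum to 1)."""
    p = crash_proportions(X, betas)
    return float(np.sum(props_obs * np.log(p + eps)))
```

By construction every row of predicted shares sums to one, which is what lets the model scale to many alternatives without extra constraints.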
Robust and Sensitive Analysis of Mouse Knockout Phenotypes
Karp, Natasha A.; Melvin, David; Mott, Richard F.
2012-01-01
A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high-throughput phenotyping programs, where operational issues additionally lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student's t-test or a two-way ANOVA, gives flawed results in these situations and should be avoided. We explore the use of mixed models with worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-ray Absorptiometry data from mouse knockouts, and compare the approach to a reference-range method. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is particularly important in knockout studies, where body weight is itself a commonly affected phenotype; accounting for it enhances the precision of phenotype assignment and the subsequent selection of lines for secondary phenotyping. The use of mixed models in in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches, which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained. PMID:23300663
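The inflation of significance caused by ignoring batch structure can be seen from the classic design-effect formula: with m measurements per batch and intra-class correlation rho, the variance of the grand mean is (1 + (m - 1) * rho) times what a simple linear model assumes. A numerical sketch under arbitrary, assumed parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_batches, m = 20, 10           # batches and measurements per batch
sigma_b, sigma_e = 1.0, 1.0     # batch and residual standard deviations

# Simulate: every observation shares its batch's random offset
b = rng.normal(0.0, sigma_b, n_batches)
y = b[:, None] + rng.normal(0.0, sigma_e, (n_batches, m))

icc = sigma_b**2 / (sigma_b**2 + sigma_e**2)       # intra-class correlation
deff = 1 + (m - 1) * icc                           # design effect
# What a simple linear model assumes vs the true variance of the grand mean
naive_var = (sigma_b**2 + sigma_e**2) / (n_batches * m)
true_var = naive_var * deff
```

Here the simple-model standard error is understated by a factor of sqrt(5.5), which is exactly the downward bias in p-values that mixed models avoid.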
How Does a Hydrophobic Macromolecule Respond to Mixed Osmolyte Environment?
Tah, Indrajit; Mondal, Jagannath
2016-10-04
The role of the protecting osmolyte trimethylamine N-oxide (TMAO) in counteracting the denaturing effect of urea on a protein is quite well established. However, the mechanistic role of osmolytes in the hydrophobic interaction underlying protein folding is a topic of contention and is emerging as a key area of biophysical interest. Although recent experiments and computer simulations have established that aqueous solutions of TMAO and urea, taken individually, respectively stabilize and destabilize the collapsed conformation of a hydrophobic polymer, it remains to be explored how a mixed aqueous solution of protecting and denaturing osmolytes influences the conformations of the polymer. To bridge this gap, we have simulated the conformational behavior of both a model hydrophobic polymer and a synthetic polymer, polystyrene, in an aqueous mixture of TMAO and urea. Intriguingly, our free-energy-based simulations on both systems show that even though a pure aqueous solution of TMAO stabilizes the collapsed or globular conformation of the hydrophobic polymer, addition of TMAO to an aqueous solution of urea further destabilizes the collapsed conformation. We also observe that the extent of destabilization in the mixed osmolyte solution is higher than that in pure aqueous urea solution. This reinforcement of the denaturation of the hydrophobic macromolecule in a mixed osmolyte solution is in stark contrast to the well-known counteracting role of TMAO for proteins under the denaturing condition of urea. In both the model and the realistic system, our results show that in a mixed aqueous solution a greater number of cosolutes preferentially bind to the extended conformation of the polymer than to the collapsed conformation, complying with Tanford-Wyman preferential solvation theory, which disfavors the collapsed conformation. The results are robust across a range of osmolyte concentrations and multiple cosolute forcefields. Our findings unequivocally imply that the action of a mixed osmolyte solution on a hydrophobic polymer is significantly distinct from its action on proteins.
NASA Astrophysics Data System (ADS)
Larson, T. E.; Perkins, G.; Longmire, P.; Heikoop, J. M.; Fessenden, J. E.; Rearick, M.; Fabyrka-Martin, J.; Chrystal, A. E.; Dale, M.; Simmons, A. M.
2009-12-01
The groundwater system beneath Los Alamos National Laboratory has been affected by multiple sources of anthropogenic nitrate contamination. Average NO3-N concentrations of up to 18.2±1.7 mg/L have been found in wells in the perched intermediate aquifer beneath one of the more affected sites within Mortandad Canyon. Sources of nitrate potentially reaching the alluvial and intermediate aquifers include: (1) sewage effluent, (2) neutralized nitric acid, (3) neutralized 15N-depleted nitric acid (treated waste from an experiment enriching nitric acid in 15N), and (4) natural background nitrate. Each of these sources is unique in δ18O and δ15N space. Using nitrate stable isotope ratios, a mixing model for the three anthropogenic sources of nitrate was established, after applying a linear subtraction of the background component. The spatial and temporal variability in nitrate contaminant sources through Mortandad Canyon is clearly shown in ternary plots. While microbial denitrification has been shown to change groundwater nitrate stable isotope ratios in other settings, the redox potential, relatively high dissolved oxygen content, increasing nitrate concentrations over time, and lack of observed NO2 in these wells suggest minimal changes to the stable isotope ratios have occurred. Temporal trends indicate that the earliest form of anthropogenic nitrate in this watershed was neutralized nitric acid. Alluvial wells preserve a trend of decreasing nitrate concentrations and mixing models show decreasing contributions of 15N-depleted nitric acid. Nearby intermediate wells show increasing nitrate concentrations and mixing models indicate a larger component derived from 15N-depleted nitric acid. These data indicate that the pulse of neutralized 15N-depleted nitric acid that was released into Mortandad Canyon between 1986 and 1989 has infiltrated through the alluvial aquifer and is currently affecting two intermediate wells. 
This hypothesis is consistent with previous research suggesting that the perched intermediate aquifers in the Mortandad Canyon watershed are recharged locally from the overlying alluvial aquifers.
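A three-end-member mixing model of the kind described (after the linear background subtraction) reduces to a small linear system: two isotope balances plus the constraint that the source fractions sum to one. A sketch with placeholder end-member signatures (the values below are hypothetical, not the site's measured ones):

```python
import numpy as np

# Hypothetical end-member signatures (delta-15N, delta-18O, per mil)
sources = {
    "sewage":            (10.0,  -2.0),
    "nitric_acid":       ( 0.0,  22.0),
    "15N_depleted_acid": (-25.0, 22.0),
}

def mixing_fractions(d15N, d18O):
    """Solve the three-end-member mixing model: two isotope-balance
    equations plus the mass-balance row (fractions sum to 1)."""
    S = np.array(list(sources.values())).T   # 2 x 3 signature matrix
    A = np.vstack([S, np.ones(3)])           # append mass-balance row
    b = np.array([d15N, d18O, 1.0])
    f = np.linalg.solve(A, b)
    return dict(zip(sources, f))
```

The system is solvable whenever the three end-members are not collinear in delta-15N/delta-18O space, which is what "unique in δ18O and δ15N space" buys the analysis.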
Formation and emission mechanisms of Ag nanoclusters in the Ar matrix assembly cluster source
NASA Astrophysics Data System (ADS)
Zhao, Junlei; Cao, Lu; Palmer, Richard E.; Nordlund, Kai; Djurabekova, Flyura
2017-11-01
In this paper, we study the mechanisms of growth of Ag nanoclusters in a solid Ar matrix and the emission of these nanoclusters from the matrix by a combination of experimental and theoretical methods. The molecular dynamics simulations show that the cluster growth mechanism can be described as "thermal spike-enhanced clustering" in multiple sequential ion impact events. We further show that experimentally observed large sputtered metal clusters cannot be formed by direct sputtering of Ag mixed in the Ar. Instead, we describe the mechanism of emission of the metal nanocluster that, at first, is formed in the cryogenic matrix due to multiple ion impacts, and then is emitted as a result of the simultaneous effects of interface boiling and spring force. We also develop an analytical model describing this size-dependent cluster emission. The model bridges the atomistic simulations and experimental time and length scales, and allows increasing the controllability of fast generation of nanoclusters in experiments with a high production rate.
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo; Lunati, Ivan
2016-10-01
We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled, nonlinear system of partial differential equations written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, and on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate the scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can be readily applied to a large class of simplified multiphase models; on the other hand, it can be seen as a generalization of the models commonly encountered in the literature, to be employed when the latter are not sufficiently accurate.
An extensive set of numerical test cases involving two- and three-dimensional porous media are presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation, deformation of a cantilever bracket, and Boycott effects). The applicability of the method is not limited to flow in porous media, but can also be employed to describe many other physical systems governed by a similar set of equations, including e.g. multi-component materials.
Multivariate analysis of longitudinal rates of change.
Bryan, Matthew; Heagerty, Patrick J
2016-12-10
Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed in the literature. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, 'accelerated time' methods have been developed which assume that covariates rescale time in longitudinal models for disease progression. In this manuscript, we detail an alternative multivariate model formulation that directly structures longitudinal rates of change and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. Copyright © 2016 John Wiley & Sons, Ltd.
Swaminathan, Vikhram V; Shannon, Mark A; Bashir, Rashid
2015-04-01
Dielectrophoretic separation of particles finds a variety of applications in the capture of species such as cells, viruses, proteins, DNA from biological systems, as well as other organic and inorganic contaminants from water. The ability to capture particles is constrained by poor volumetric scaling of separation force with respect to particle diameter, as well as the weak penetration of electric fields in the media. In order to improve the separation of sub-micron colloids, we present a scheme based on multiple interdigitated electrode arrays under mixed AC/DC bias. The use of high frequency longitudinal AC bias breaks the shielding effects through electroosmotic micromixing to enhance electric fields through the electrolyte, while a transverse DC bias between the electrode arrays enables penetration of the separation force to capture particles from the bulk of the microchannel. We determine the favorable biasing conditions for field enhancement with the help of analytical models, and experimentally demonstrate the improved capture from sub-micron colloidal suspensions with the mixed AC/DC electrostatic excitation scheme over conventional AC-DEP methods.
Impossibility of Classically Simulating One-Clean-Qubit Model with Multiplicative Error
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Kobayashi, Hirotada; Morimae, Tomoyuki; Nishimura, Harumichi; Tamate, Shuhei; Tani, Seiichiro
2018-05-01
The one-clean-qubit model (or the deterministic quantum computation with one quantum bit model) is a restricted model of quantum computing where all but a single input qubits are maximally mixed. It is known that the probability distribution of measurement results on three output qubits of the one-clean-qubit model cannot be classically efficiently sampled within a constant multiplicative error unless the polynomial-time hierarchy collapses to the third level [T. Morimae, K. Fujii, and J. F. Fitzsimons, Phys. Rev. Lett. 112, 130502 (2014), 10.1103/PhysRevLett.112.130502]. It was open whether we can keep the no-go result while reducing the number of output qubits from three to one. Here, we solve the open problem affirmatively. We also show that the third-level collapse of the polynomial-time hierarchy can be strengthened to the second-level one. The strengthening of the collapse level from the third to the second also holds for other subuniversal models such as the instantaneous quantum polynomial model [M. Bremner, R. Jozsa, and D. J. Shepherd, Proc. R. Soc. A 467, 459 (2011), 10.1098/rspa.2010.0301] and the boson sampling model [S. Aaronson and A. Arkhipov, STOC 2011, p. 333]. We additionally study the classical simulatability of the one-clean-qubit model with further restrictions on the circuit depth or the gate types.
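The one-clean-qubit (DQC1) circuit referenced above estimates the normalized trace of a unitary: with n maximally mixed qubits, measuring the clean qubit after H, controlled-U, H yields outcome 0 with probability (1 + Re tr(U)/2^n)/2. A small density-matrix sketch of that identity (dense d x d unitaries only, so toy sizes; the function name is an assumption):

```python
import numpy as np

def dqc1_prob0(U):
    """Probability of outcome 0 on the clean qubit in the circuit
    H - controlled-U - H, with the d-dimensional register maximally mixed.
    Analytically this equals (1 + Re tr(U) / d) / 2."""
    d = U.shape[0]
    # Initial state: |0><0| on the clean qubit, I/d on the mixed register
    rho = np.kron(np.array([[1.0, 0.0], [0.0, 0.0]]), np.eye(d) / d)
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    H_full = np.kron(H, np.eye(d))
    CU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]])
    W = H_full @ CU @ H_full
    rho = W @ rho @ W.conj().T
    # Pr(0) is the trace of the top-left d x d block
    return float(np.real(np.trace(rho[:d, :d])))
```

This is the standard reason DQC1 is believed hard to sample classically: the outcome statistics encode normalized traces of arbitrary unitaries.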
NASA Astrophysics Data System (ADS)
Su, Qi; Li, Aming; Wang, Long
2017-02-01
Spatial reciprocity is generally regarded as a positive rule facilitating the evolution of cooperation. However, a few recent studies show that, in the snowdrift game, spatial structure can still be detrimental to cooperation. Here we propose a model of multiple interactive dynamics, in which each individual can simultaneously cooperate and defect against different neighbors. We realize individuals' multiple interactions simply by endowing them with probabilistic strategies: each individual cooperates or defects against a given neighbor with some probability. With multiple interactive dynamics, the cooperation level in square lattices is higher than in the well-mixed case for a wide range of the cost-to-benefit ratio r, implying that spatial structure favors cooperative behavior in the snowdrift game. Moreover, in square lattices the most favorable strategy follows a simple relation in r, which theoretically yields the average evolutionary frequency of cooperative behavior. We further extend our study to various homogeneous and heterogeneous networks, demonstrating the robustness of our results. Multiple interactive dynamics thus stabilizes the positive role of spatial structure in the evolution of cooperation, and individuals' distinct reactions to different neighbors offer a new line of inquiry into the emergence of cooperation.
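The well-mixed baseline against which such lattice results are compared has a well-known interior equilibrium: in the standard snowdrift normalization, the replicator dynamics settle at cooperator frequency x* = 1 - r. A small sketch under that assumed normalization (not code from the paper):

```python
import numpy as np

def snowdrift_payoffs(r):
    """Snowdrift payoff matrix parameterized by the cost-to-benefit
    ratio r, in the common normalization
        C vs C: 1        C vs D: 1 - r
        D vs C: 1 + r    D vs D: 0
    """
    return np.array([[1.0, 1.0 - r], [1.0 + r, 0.0]])

def well_mixed_equilibrium(r):
    """Interior fixed point of the replicator dynamics: the cooperator
    frequency x where C and D earn equal average payoff, giving x = 1 - r."""
    A = snowdrift_payoffs(r)
    # Solve x*A[0,0] + (1-x)*A[0,1] = x*A[1,0] + (1-x)*A[1,1] for x
    num = A[1, 1] - A[0, 1]
    den = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
    return num / den
```

Lattice cooperation levels above or below 1 - r are then read as spatial structure helping or hurting cooperators.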
Multiple Phenotype Association Tests Using Summary Statistics in Genome-Wide Association Studies
Liu, Zhonghua; Lin, Xihong
2017-01-01
We study in this paper jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. PMID:28653391
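Two core ingredients of this approach, estimating the between-phenotype correlation from null-variant z-scores and forming a joint test without individual-level data, can be sketched as follows (illustrative only; the paper's actual procedures jointly test a common mean and a variance component, which this simple Wald-type chi-square statistic does not):

```python
import numpy as np

def phenotype_correlation(null_z):
    """Estimate the between-phenotype correlation matrix from the
    z-scores of (approximately) null variants, shape (n_snps, n_phenos)."""
    return np.corrcoef(null_z, rowvar=False)

def joint_chi2_stat(z, R):
    """Omnibus statistic for one variant across K phenotypes:
    T = z' R^{-1} z, approximately chi-square with K d.f. under the null."""
    return float(z @ np.linalg.solve(R, z))
```

Because both steps use only per-phenotype summary statistics, the test scales to consortium-sized GWAS without data-sharing constraints.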
Multiple phenotype association tests using summary statistics in genome-wide association studies.
Liu, Zhonghua; Lin, Xihong
2018-03-01
We study in this article jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. © 2017, The International Biometric Society.
Zuo, Kuichang; Yuan, Lulu; Wei, Jincheng; Liang, Peng; Huang, Xia
2013-10-01
A mixed ion-exchange-resin-packed microbial desalination cell (R-MDC) can stabilize the internal resistance; however, the impacts of multiple ions on R-MDC performance were unclear. This study investigated the desalination performance, the migration behaviors of multiple ions, and their impacts on R-MDCs fed with salt solution containing multiple anions and cations. Results showed that the R-MDC removed multiple anions better than multiple cations, with a desalination efficiency of 99% (effluent conductivity <0.05 mS/cm) at a hydraulic retention time of 50 h. The competitive migration order was SO4(2-) > NO3(-) > Cl(-) for anions and Ca(2+) ≈ Mg(2+) > NH4(+) > Na(+) for cations, jointly determined by molar conductivity and exchange selectivity on the resins. After long-term operation, the presence of higher concentrations of Ca(2+) and Mg(2+) caused the electric conductivity of the mixed resins to decrease and caused scaling on the surface of the cation-exchange membrane adjoining the cathode chamber, suggesting that the R-MDC would be more suitable for desalination of water with lower hardness. Copyright © 2013 Elsevier Ltd. All rights reserved.
Picoliter Drop-On-Demand Dispensing for Multiplex Liquid Cell Transmission Electron Microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patterson, Joseph P.; Parent, Lucas R.; Cantlon, Joshua
2016-05-03
Liquid cell transmission electron microscopy (LCTEM) provides a unique insight into the dynamics of nanomaterials in solution. Controlling the addition of multiple solutions to the liquid cell remains a key hurdle in our ability to increase throughput and to study processes dependent on solution mixing, including chemical reactions. Here, we report that a piezo dispensing technique allows for mixing of multiple solutions directly within the viewing area. This technique permits deposition of 50 pL droplets of various aqueous solutions onto the liquid cell window, before assembly of the cell, in a fully controlled manner. This proof-of-concept study highlights the great potential of picoliter dispensing in combination with LCTEM for observing nanoparticle mixing in the solution phase and the creation of chemical gradients.
Chevalier, Julie; Chamoux, Catherine; Hammès, Florence; Chicoye, Annie
2016-01-01
The paper aimed to estimate the incremental cost-effectiveness ratio (ICER) at the public published price for delayed-release dimethyl fumarate versus relevant multiple sclerosis disease-modifying therapies available in France in June 2015. The economic model was adapted to the French setting in accordance with the Haute Autorité de Santé guidelines, using a model previously developed for NICE. A cohort of Relapsing-Remitting Multiple Sclerosis patients was simulated over a 30-year time horizon. Twenty-one health states were taken into account: Kurtzke Expanded Disability Status Scale (EDSS) 0-9 for Relapsing-Remitting Multiple Sclerosis patients, EDSS 0-9 for Secondary Progressive Multiple Sclerosis patients, and death. Estimates of relative treatment efficacy were determined using a mixed-treatment comparison. Probabilities of events were derived from the dimethyl fumarate pivotal clinical trials and the London Ontario Dataset. Costs and utilities were extracted from the published literature, from both the payer and societal perspectives. Univariate and probabilistic sensitivity analyses were performed to assess the robustness of the model results. From both perspectives, dimethyl fumarate and interferon beta-1a (IFN beta-1a) 44 mcg were the two optimal treatments, as the other treatments (IFN beta-1a 30 mcg, IFN beta-1b 250 mcg, teriflunomide, glatiramer acetate, fingolimod) were dominated on the efficiency frontier. From the societal perspective, dimethyl fumarate versus IFN beta-1a 44 mcg incurred an incremental cost of €3,684 and an incremental quality-adjusted life year (QALY) gain of 0.281, corresponding to an ICER of €13,110/QALY. Despite the absence of a reference threshold for France, dimethyl fumarate can be considered a cost-effective option as it lies on the efficiency frontier.
NASA Astrophysics Data System (ADS)
Fenocchi, Andrea; Rogora, Michela; Sibilla, Stefano; Ciampittiello, Marzia; Dresti, Claudia
2018-01-01
The impact of air temperature rise is eminent for the large deep lakes in the Italian subalpine district, climate change being caused there by both natural phenomena and anthropogenic greenhouse-gases (GHG) emissions. These oligomictic lakes are experiencing a decrease in the frequency of winter full turnover and an intensification of stability. As a result, hypolimnetic oxygen concentrations are decreasing and nutrients are accumulating in bottom water, with effects on the whole ecosystem functioning. Forecasting the future evolution of the mixing pattern is relevant to assess if a reduction in GHG releases would be able to revert such processes. The study focuses on Lake Maggiore, for which the thermal structure evolution under climate change in the 2016-2085 period was assessed through numerical simulations, performed with the General Lake Model (GLM). Different prospects of regional air temperature rise were considered, given by the Swiss Climate Change Scenarios CH2011. Multiple realisations were performed for each scenario to obtain robust statistical predictions, adopting random series of meteorological data produced with the Vector-Autoregressive Weather Generator (VG). Results show that a reversion in the increasing thermal stability would be possible only if global GHG emissions started to be reduced by 2020, allowing an equilibrium mixing regime to be restored by the end of the twenty-first century. Otherwise, persistent lack of complete-mixing, severe water warming and extensive effects on water quality are to be expected for the centuries to come. These projections can be extended to the other lakes in the subalpine district.
Multiple levels of bilingual language control: evidence from language intrusions in reading aloud.
Gollan, Tamar H; Schotter, Elizabeth R; Gomez, Joanne; Murillo, Mayra; Rayner, Keith
2014-02-01
Bilinguals rarely produce words in an unintended language. However, we induced such intrusion errors (e.g., saying el instead of he) in 32 Spanish-English bilinguals who read aloud single-language (English or Spanish) and mixed-language (haphazard mix of English and Spanish) paragraphs with English or Spanish word order. These bilinguals produced language intrusions almost exclusively in mixed-language paragraphs, and most often when attempting to produce dominant-language targets (accent-only errors also exhibited reversed language-dominance effects). Most intrusion errors occurred for function words, especially when they were not from the language that determined the word order in the paragraph. Eye movements showed that fixating a word in the nontarget language increased intrusion errors only for function words. Together, these results imply multiple mechanisms of language control, including (a) inhibition of the dominant language at both lexical and sublexical processing levels, (b) special retrieval mechanisms for function words in mixed-language utterances, and (c) attentional monitoring of the target word for its match with the intended language.
Coordination of size-control, reproduction and generational memory in freshwater planarians
NASA Astrophysics Data System (ADS)
Yang, Xingbo; Kaj, Kelson; Schwab, David; Collins, Eva-Maria
Uncovering the mechanisms that control size, growth, and division rates of systems reproducing through binary division means understanding basic principles of their life cycle. Recent work has focused on how division rates are regulated in bacteria and yeast, but this question has not yet been addressed in more complex, multicellular organisms. We have acquired a unique large-scale data set on the growth and asexual reproduction of two freshwater planarian species, Dugesia japonica and Dugesia tigrina, which reproduce by transverse fission and subsequent regeneration of head and tail pieces into new worms. We developed a new additive theoretical model that mixes multiple size control strategies based on worm size, growth, and waiting time. Our model quantifies the proportions of each strategy in the mixed dynamics, revealing the ability of the two planarian species to utilize different strategies in a coordinated manner for size control. Additionally, we found that head and tail offspring of both species employ different mechanisms to monitor and trigger their reproduction cycles. Finally, we show that generation-dependent memory effects in planarians need to be taken into account to accurately capture the experimental data.
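A toy version of an additive strategy mix can illustrate the idea of combining "sizer", "adder", and "timer" rules. All parameter names and weights below are illustrative assumptions, not the paper's fitted model:

```python
def division_size(birth_size, w_sizer, w_adder, w_timer,
                  target_size, added_size, growth_rate, wait_time):
    """Predicted size at division as a weighted additive mix of three
    classic size-control strategies (weights assumed to sum to 1)."""
    sizer = target_size                            # divide at a fixed size
    adder = birth_size + added_size                # add a fixed increment
    timer = birth_size + growth_rate * wait_time   # grow for a fixed time
    return w_sizer * sizer + w_adder * adder + w_timer * timer

# A pure adder (weights 0/1/0) just adds the increment to the birth size:
print(division_size(2.0, 0.0, 1.0, 0.0, 5.0, 1.5, 0.4, 3.0))  # 3.5
```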
Do ecohydrology and community dynamics feed back to banded-ecosystem structure and productivity?
NASA Astrophysics Data System (ADS)
Callegaro, Chiara; Ursino, Nadia
2016-04-01
Mixed communities including grass, shrubs and trees are often reported to populate self-organized vegetation patterns. Patterns of survey data suggest that species diversity and complementarity strengthen the dynamics of banded environments. Resource scarcity and local facilitation trigger self organization, whereas coexistence of multiple species in vegetated self-organizing patches, implying competition for water and nutrients and favorable reproduction sites, is made possible by differing adaptation strategies. Mixed community spatial self-organization has so far received relatively little attention, compared with local net facilitation of isolated species. We assumed that soil moisture availability is a proxy for the environmental niche of plant species according to Ursino and Callegaro (2016). Our modelling effort was focused on niche differentiation of coexisting species within a tiger bush type ecosystem. By minimal numerical modelling and stability analysis we try to answer a few open scientific questions: Is there an adaptation strategy that increases biodiversity and ecosystem functioning? Does specific adaptation to environmental niches influence the structure of self-organizing vegetation pattern? What specific niche distribution along the environmental gradient gives the highest global productivity?
Functional pleiotropy and mating system evolution in plants: frequency-independent mating.
Jordan, Crispin Y; Otto, Sarah P
2012-04-01
Mutations that alter the morphology of floral displays (e.g., flower size) or plant development can change multiple functions simultaneously, such as pollen export and selfing rate. Given the effect of these various traits on fitness, pleiotropy may alter the evolution of both mating systems and floral displays, two characters with high diversity among angiosperms. The influence of viability selection on mating system evolution has not been studied theoretically. We model plant mating system evolution when a single locus simultaneously affects the selfing rate, pollen export, and viability. We assume frequency-independent mating, so our model characterizes prior selfing. Pleiotropy between increased viability and selfing rate reduces opportunities for the evolution of pure outcrossing, can favor complete selfing despite high inbreeding depression, and notably, can cause the evolution of mixed mating despite very high inbreeding depression. These results highlight the importance of pleiotropy for mating system evolution and suggest that selection by nonpollinating agents may help explain mixed mating, particularly in species with very high inbreeding depression. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
Mechanism Design for Multi-slot Ads Auction in Sponsored Search Markets
NASA Astrophysics Data System (ADS)
Deng, Xiaotie; Sun, Yang; Yin, Ming; Zhou, Yunhong
In this paper, we study pricing models for multi-slot advertisements, where advertisers can bid to place links to their sales webpages at one or multiple slots on a webpage, called the multi-slot ad auction problem. We develop and analyze several important mechanisms, including the VCG mechanism for multi-slot ads auction, the optimal social welfare solution, as well as two weighted GSP-like protocols (mixed and hybrid). Furthermore, we consider the forward-looking Nash equilibrium and prove its existence in the weighted GSP-like pricing protocols.
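For readers unfamiliar with the pricing rules involved, here is a minimal sketch of per-click GSP and VCG payments in a textbook position auction with ranked click-through rates. This is the unweighted standard version, not the weighted GSP-like variants the paper analyzes:

```python
def gsp_prices(bids, ctrs):
    """Per-click GSP prices: the winner of slot i pays the (i+1)-th highest bid."""
    order = sorted(bids, reverse=True)
    return [order[i + 1] if i + 1 < len(order) else 0.0
            for i in range(len(ctrs))]

def vcg_prices(bids, ctrs):
    """Per-click VCG prices for slots with click-through rates
    ctrs[0] >= ctrs[1] >= ...: each winner pays the externality it
    imposes on lower bidders, spread over its own clicks."""
    order = sorted(bids, reverse=True)
    n = len(ctrs)
    prices = [0.0] * n
    for i in range(n - 1, -1, -1):
        next_ctr = ctrs[i + 1] if i + 1 < n else 0.0
        next_price = prices[i + 1] if i + 1 < n else 0.0
        loser_bid = order[i + 1] if i + 1 < len(order) else 0.0
        prices[i] = ((ctrs[i] - next_ctr) * loser_bid
                     + next_ctr * next_price) / ctrs[i]
    return prices

# Four bidders, two slots: GSP charges the next bid down; VCG charges less.
print(gsp_prices([10, 8, 5, 2], [0.2, 0.1]))  # [8, 5]
print(vcg_prices([10, 8, 5, 2], [0.2, 0.1]))  # [6.5, 5.0]
```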
Multiple Loci are associated with dilated cardiomyopathy in Irish wolfhounds.
Philipp, Ute; Vollmar, Andrea; Häggström, Jens; Thomas, Anne; Distl, Ottmar
2012-01-01
Dilated cardiomyopathy (DCM) is a highly prevalent and often lethal disease in Irish wolfhounds. Complex segregation analysis indicated different loci involved in pathogenesis. Linear fixed and mixed models were used for the genome-wide association study. Using 106 DCM cases and 84 controls we identified one SNP significantly associated with DCM on CFA37 and five SNPs suggestively associated with DCM on CFA1, 10, 15, 21 and 17. On CFA37 MOGAT1 and ACSL3 two enzymes of the lipid metabolism were located near the identified SNP.
Mixed H2/H Infinity Optimization with Multiple H Infinity Constraints
1994-06-01
[Abstract garbled in extraction; recoverable fragments follow.] The 2-norm is the energy, and the ∞-norm is the maximum magnitude of the signal. [...] the system 2-norm is not good for uncertainty management [...] is conservative, especially when the uncertainty model is highly structured. [...] Although the objective was to design a pure regulator, from Table 5-1 we see that the H2 controller provides good [...]
An Efficient Alternative Mixed Randomized Response Procedure
ERIC Educational Resources Information Center
Singh, Housila P.; Tarray, Tanveer A.
2015-01-01
In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than the Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…
NASA Astrophysics Data System (ADS)
Peng, Bo; Zheng, Sifa; Liao, Xiangning; Lian, Xiaomin
2018-03-01
In order to achieve sound field reproduction in a wide frequency band, multiple-type speakers are used. The reproduction accuracy is not only affected by the signals sent to the speakers, but also depends on the position and the number of each type of speaker. The method of optimizing a mixed speaker array is investigated in this paper. A virtual-speaker weighting method is proposed to optimize both the position and the number of each type of speaker. In this method, a virtual-speaker model is proposed to quantify the increment of controllability of the speaker array when the speaker number increases. While optimizing a mixed speaker array, the gain of the virtual-speaker transfer function is used to determine the priority orders of the candidate speaker positions, which optimizes the position of each type of speaker. Then the relative gain of the virtual-speaker transfer function is used to determine whether the speakers are redundant, which optimizes the number of each type of speaker. Finally the virtual-speaker weighting method is verified by reproduction experiments of the interior sound field in a passenger car. The results validate that the optimum mixed speaker array can be obtained using the proposed method.
Investigating students’ mental models about the nature of light in different contexts
NASA Astrophysics Data System (ADS)
Özcan, Özgür
2015-11-01
In this study, we investigated pre-service physics teachers’ mental models of light in different contexts, such as blackbody radiation, the photoelectric effect and the Compton effect. The data collected through the paper-and-pencil questionnaire (PPQ) were analyzed both quantitatively and qualitatively. The sample consisted of a total of 110 physics education students who were taking a modern physics course at two different state universities in Turkey. As a result, three mental models, called the beam ray model (BrM), hybrid model (HM) and particle model (PM), were used by the students when explaining these phenomena. Model fluctuation was observed most frequently in the HM and BrM. In addition, some students were in a mixed-model state, in which they used multiple mental models to explain a phenomenon and applied these models inconsistently. On the other hand, most of the students who used the particle model can be said to be in a pure model state.
Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models
NASA Astrophysics Data System (ADS)
Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Shibata, Kiyotaka; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn
2018-05-01
The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
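The decomposition used above can be written down directly. A sketch, assuming the mixing efficiency is aging-by-mixing normalised by RCTT (our reading of the diagnostic; the numbers are illustrative, not from any of the models):

```python
def aging_by_mixing(aoa, rctt):
    """Additional aging by mixing: the part of age of air (AoA) not
    explained by transport along the residual circulation (RCTT)."""
    return aoa - rctt

def mixing_efficiency(aoa, rctt):
    """Relative increase in AoA due to mixing."""
    return (aoa - rctt) / rctt

# Illustrative transit times in years:
print(mixing_efficiency(4.5, 3.0))  # 0.5
```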
Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun
2013-09-01
By using branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation at Mengjiagang Forest Farm in Heilongjiang Province, Northeast China, and based on linear mixed-effects model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. To account for the tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structures. Correlation structures, including the compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)], were then added to the optimal branch size mixed-effects model. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effects models, but none of the three structures improved the precision of the branch angle mixed-effects model. To describe heteroscedasticity while building the mixed-effects models, the CF1 and CF2 functions were added to the branch mixed-effects models. The CF1 function significantly improved the fit of the branch angle mixed model, whereas the CF2 function significantly improved the fit of the branch diameter and length mixed models. Model validation confirmed that the mixed-effects model improved prediction precision compared with the traditional regression model for branch size prediction in Pinus koraiensis plantations.
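The tree effect described above can be illustrated with synthetic data: branches sampled from the same tree share a random intercept, and absorbing it by within-group centering recovers the fixed slope. This is a numpy sketch of the idea, not the SAS MIXED fits used in the study; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trees, n_branches = 30, 20
tree = np.repeat(np.arange(n_trees), n_branches)      # tree id per branch
depth = rng.uniform(0.1, 1.0, tree.size)              # branch position covariate
tree_effect = rng.normal(0.0, 0.5, n_trees)[tree]     # shared random intercept
diameter = 1.0 + 2.0 * depth + tree_effect + rng.normal(0.0, 0.3, tree.size)

def centered_slope(y, x, groups):
    """Fixed slope after within-group centering, which absorbs the
    group-level random intercept."""
    gy = np.bincount(groups, y) / np.bincount(groups)
    gx = np.bincount(groups, x) / np.bincount(groups)
    yc, xc = y - gy[groups], x - gx[groups]
    return (xc * yc).sum() / (xc * xc).sum()

print(centered_slope(diameter, depth, tree))  # close to the true slope of 2.0
```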
ePRISM: A case study in multiple proxy and mixed temporal resolution integration
Robinson, Marci M.; Dowsett, Harry J.
2010-01-01
As part of the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project, we present the ePRISM experiment, designed 1) to provide climate modelers with a reconstruction of an early Pliocene warm period that was warmer than the PRISM interval (ca. 3.3 to 3.0 Ma), yet still similar in many ways to modern conditions, and 2) to provide an example of how best to integrate multiple-proxy sea surface temperature (SST) data from time series with varying degrees of temporal resolution and age control as we begin to build the next generation of PRISM, the PRISM4 reconstruction, spanning a constricted time interval. While it is possible to tie individual SST estimates to a single light (warm) oxygen isotope event, we find that the warm peak average of SST estimates over a narrowed time interval is preferable for paleoclimate reconstruction, as it allows for the inclusion of more records of multiple paleotemperature proxies.
A new paper-based platform technology for point-of-care diagnostics.
Gerbers, Roman; Foellscher, Wilke; Chen, Hong; Anagnostopoulos, Constantine; Faghri, Mohammad
2014-10-21
Currently, lateral flow immunoassays (LFIAs) are not able to perform complex multi-step immunodetection tests because of their inability to introduce multiple reagents in a controlled manner to the detection area autonomously. In this research, a point-of-care (POC) paper-based lateral flow immunosensor was developed incorporating a novel microfluidic valve technology. Layers of paper and tape were used to create a three-dimensional structure to form the fluidic network. Unlike the existing LFIAs, multiple directional valves are embedded in the test strip layers to control the order and the timing of mixing for the sample and multiple reagents. In this paper, we report a four-valve device which autonomously directs three different fluids to flow sequentially over the detection area. As proof of concept, a three-step alkaline phosphatase based Enzyme-Linked ImmunoSorbent Assay (ELISA) protocol with Rabbit IgG as the model analyte was conducted to prove the suitability of the device for immunoassays. A detection limit of about 4.8 fM was obtained.
Rapidly rotating second-generation progenitors for the 'blue hook' stars of ω Centauri.
Tailo, Marco; D'Antona, Francesca; Vesperini, Enrico; Di Criscienzo, Marcella; Ventura, Paolo; Milone, Antonino P; Bellini, Andrea; Dotter, Aaron; Decressin, Thibaut; D'Ercole, Annibale; Caloi, Vittoria; Capuzzo-Dolcetta, Roberto
2015-07-16
Horizontal branch stars belong to an advanced stage in the evolution of the oldest stellar galactic population, occurring either as field halo stars or grouped in globular clusters. The discovery of multiple populations in clusters that were previously believed to have single populations gave rise to the currently accepted theory that the hottest horizontal branch members (the 'blue hook' stars, which had late helium-core flash ignition, followed by deep mixing) are the progeny of a helium-rich 'second generation' of stars. It is not known why such a supposedly rare event (a late flash followed by mixing) is so common that the blue hook of ω Centauri contains approximately 30 per cent of the horizontal branch stars in the cluster, or why the blue hook luminosity range in this massive cluster cannot be reproduced by models. Here we report that the presence of helium core masses up to about 0.04 solar masses larger than the core mass resulting from evolution is required to solve the luminosity range problem. We model this by taking into account the dispersion in rotation rates achieved by the progenitors, whose pre-main-sequence accretion disk suffered an early disruption in the dense environment of the cluster's central regions, where second-generation stars form. Rotation may also account for frequent late-flash-mixing events in massive globular clusters.
Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel
2015-09-10
Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
Wildhaber, M.L.; Holan, S.H.; Bryan, J.L.; Gladish, D.W.; Ellersieck, M.
2011-01-01
In 2003, the US Army Corps of Engineers initiated the Pallid Sturgeon Population Assessment Program (PSPAP) to monitor pallid sturgeon and the fish community of the Missouri River. The power analysis of PSPAP presented here was conducted to guide sampling design and effort decisions. The PSPAP sampling design has a nested structure with multiple gear subsamples within a river bend. Power analyses were based on a normal linear mixed model, using a mixed cell means approach, with variance estimates from the original data. It was found that, at current effort levels, at least 20 years for pallid and 10 years for shovelnose sturgeon is needed to detect a 5% annual decline. Modified bootstrap simulations suggest power estimates from the original data are conservative due to excessive zero fish counts. In general, the approach presented is applicable to a wide array of animal monitoring programs.
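The qualitative finding above, that detecting a 5% annual decline requires long time series, can be illustrated with a far simpler Monte-Carlo sketch than the study's nested mixed model. The noise level and one-sided critical value below are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_slope_t(t, y):
    """Slope and its t-statistic from a simple linear regression."""
    tc, yc = t - t.mean(), y - y.mean()
    slope = (tc * yc).sum() / (tc * tc).sum()
    resid = yc - slope * tc
    se = np.sqrt((resid ** 2).sum() / (len(t) - 2) / (tc * tc).sum())
    return slope, slope / se

def decline_power(years, decline=0.05, cv=0.4, n_sims=400, t_crit=-1.75):
    """Monte-Carlo power to detect an annual decline in log catch indices
    with lognormal noise (a simplified stand-in for the mixed-model power
    analysis; cv is an assumed observation noise level)."""
    t = np.arange(years, dtype=float)
    hits = 0
    for _ in range(n_sims):
        y = t * np.log(1.0 - decline) + rng.normal(0.0, cv, years)
        _, tstat = ols_slope_t(t, y)
        hits += tstat < t_crit
    return hits / n_sims

# Power to detect a 5% annual decline grows sharply with monitoring duration:
print(decline_power(10), decline_power(20))
```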
Ertapenem: a new opportunity for outpatient parenteral antimicrobial therapy.
Tice, Alan D
2004-06-01
Ertapenem is a parenteral carbapenem antimicrobial with pharmacological properties that allow it to be given once daily. This makes it a consideration for outpatient parenteral antimicrobial therapy (OPAT). In comparison with information from the OPAT Outcomes Registry, ertapenem seems well suited for the types of infections and bacteria that are commonly treated with OPAT, plus it has additional activity against anaerobic bacteria. This added spectrum makes it possible to treat complicated skin/skin-structure, complicated intra-abdominal and pelvic infections with a single antibiotic instead of the multiple agents that have usually been required. Ertapenem is also comparable to other OPAT antimicrobials in terms of adverse effects and clinical outcomes. This antimicrobial can be given with any delivery model, although its stability when mixed is such that daily preparation or self-mixing systems need to be considered. Ertapenem should be added to the growing list of once-daily parenteral antibiotics that can be given to outpatients.
Sample flow switching techniques on microfluidic chips.
Pan, Yu-Jen; Lin, Jin-Jie; Luo, Win-Jet; Yang, Ruey-Jen
2006-02-15
This paper presents an experimental investigation into electrokinetically focused flow injection for bio-analytical applications. A novel microfluidic device for microfluidic sample handling is presented. The microfluidic chip is fabricated on glass substrates using conventional photolithographic and chemical etching processes and is bonded using a high-temperature fusion method. The proposed valve-less device is capable not only of directing a single sample flow to a specified output port, but also of driving multiple samples to separate outlet channels or even to a single outlet to facilitate sample mixing. The experimental results confirm that the sample flow can be electrokinetically pre-focused into a narrow stream and guided to the desired outlet port by means of a simple control voltage model. The microchip presented within this paper has considerable potential for use in a variety of applications, including high-throughput chemical analysis, cell fusion, fraction collection, sample mixing, and many other applications within the micro-total-analysis systems field.
Development of Tripropellant CFD Design Code
NASA Technical Reports Server (NTRS)
Farmer, Richard C.; Cheng, Gary C.; Anderson, Peter G.
1998-01-01
A tripropellant, such as GO2/H2/RP-1, CFD design code has been developed to predict the local mixing of multiple propellant streams as they are injected into a rocket motor. The code utilizes real fluid properties to account for the mixing and finite-rate combustion processes which occur near an injector faceplate, thus the analysis serves as a multi-phase homogeneous spray combustion model. Proper accounting of the combustion allows accurate gas-side temperature predictions which are essential for accurate wall heating analyses. The complex secondary flows which are predicted to occur near a faceplate cannot be quantitatively predicted by less accurate methodology. Test cases have been simulated to describe an axisymmetric tripropellant coaxial injector and a 3-dimensional RP-1/LO2 impinger injector system. The analysis has been shown to realistically describe such injector combustion flowfields. The code is also valuable to design meaningful future experiments by determining the critical location and type of measurements needed.
Gutreuter, S.; Boogaard, M.A.
2007-01-01
Predictors of the percentile lethal/effective concentration/dose are commonly used measures of efficacy and toxicity. Typically such quantal-response predictors (e.g., the exposure required to kill 50% of some population) are estimated from simple bioassays wherein organisms are exposed to a gradient of several concentrations of a single agent. The toxicity of an agent may be influenced by auxiliary covariates, however, and more complicated experimental designs may introduce multiple variance components. Prediction methods for such cases lag behind. A conventional two-stage approach consists of multiple bivariate predictions of, say, median lethal concentration followed by regression of those predictions on the auxiliary covariates. We propose a more effective and parsimonious class of generalized nonlinear mixed-effects models for prediction of lethal/effective dose/concentration from auxiliary covariates. We demonstrate examples using data from a study regarding the effects of pH and additions of variable quantities of 2′,5′-dichloro-4′-nitrosalicylanilide (niclosamide) on the toxicity of 3-trifluoromethyl-4-nitrophenol to larval sea lamprey (Petromyzon marinus). The new models yielded unbiased predictions, and root-mean-squared errors (RMSEs) of prediction for the exposures required to kill 50 and 99.9% of some population were 29 to 82% smaller, respectively, than those from the conventional two-stage procedure. The model class is flexible and easily implemented using commonly available software. © 2007 SETAC.
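The quantity being predicted can be illustrated with a plain logistic dose-response fit on a single synthetic bioassay. This scipy sketch omits the covariates and random effects that are the point of the paper's model class; concentrations and counts are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def mortality(logc, logc50, slope):
    """Two-parameter logistic dose-response on log10 concentration."""
    return 1.0 / (1.0 + np.exp(-slope * (logc - logc50)))

# Synthetic bioassay: 50 larvae per concentration, true LC50 = 1.0 mg/L.
rng = np.random.default_rng(2)
logc = np.log10(np.array([0.25, 0.5, 1.0, 2.0, 4.0]))
p_obs = rng.binomial(50, mortality(logc, 0.0, 3.0)) / 50

params, _ = curve_fit(mortality, logc, p_obs, p0=[0.0, 1.0])
lc50 = 10 ** params[0]
print(lc50)  # near the true 1.0 mg/L
```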
NASA Astrophysics Data System (ADS)
Pedretti, D.; Beckie, R. D.; Mayer, K. U.
2015-12-01
The chemistry of drainage from waste-rock piles at mine sites is difficult to predict because of a number of uncertainties including heterogeneous reactive mineral content, distribution of minerals, weathering rates and physical flow properties. In this presentation, we examine the effects of mixing on drainage chemistry over timescales of 100s of years. We use a 1-D streamtube conceptualization of flow in waste rocks and multicomponent reactive transport modeling. We simplify the reactive system to consist of acid-producing sulfide minerals and acid-neutralizing carbonate minerals and secondary sulfate and iron oxide minerals. We create multiple realizations of waste-rock piles with distinct distributions of reactive minerals along each flow path and examine the uncertainty of drainage geochemistry through time. The limited mixing of streamtubes that is characteristic of the vertical unsaturated flow in many waste-rock piles, allows individual flowpaths to sustain acid or neutral conditions to the base of the pile, where the streamtubes mix. Consequently, mixing and the acidity/alkalinity balance of the streamtube waters, and not the overall acid- and base-producing mineral contents, control the instantaneous discharge chemistry. Our results show that the limited mixing implied by preferential flow and the heterogeneous distribution of mineral contents lead to large uncertainty in drainage chemistry over short and medium time scales. However, over longer timescales when one of either the acid-producing or neutralizing primary phases is depleted, the drainage chemistry becomes less controlled by mixing and in turn less uncertain. A correct understanding of the temporal variability of uncertainty is key to make informed long-term decisions in mining settings regarding the management of waste material.
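The mixing control described above reduces, at the pile base, to a flow-weighted balance over streamtubes. A minimal sketch, ignoring reactions at the mixing point; flows and acidity values are illustrative:

```python
def mixed_acidity(flows, acidities):
    """Flow-weighted net acidity of drainage from N streamtubes.
    Positive = net acidic, negative = net alkaline (arbitrary meq/L)."""
    q_total = sum(flows)
    return sum(q * a for q, a in zip(flows, acidities)) / q_total

# Two acid-producing and two neutralizing streamtubes with equal flow:
# the discharge chemistry is set by the balance at the mixing point,
# not by any single flow path.
print(mixed_acidity([1, 1, 1, 1], [2.0, 1.0, -1.5, -1.0]))  # 0.125
```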
Practical system for generating digital mixed reality video holograms.
Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il
2016-07-10
We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z-buffer and can quickly generate digital mixed reality video holograms by using multiple graphics processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally at free viewing angles, and that the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is verified by users' subjective evaluations.
Enhanced Eddy-Current Detection Of Weld Flaws
NASA Technical Reports Server (NTRS)
Van Wyk, Lisa M.; Willenberg, James D.
1992-01-01
Mixing of impedances measured at different frequencies reduces noise and helps reveal flaws. In the new method, one excites an eddy-current probe simultaneously at two different frequencies; usually, one is an integral multiple of the other. The resistive and reactive components of the probe impedance are measured at the two frequencies, mixed in a computer, and displayed in real time on the computer's video terminal. Mixing of measurements obtained at two different frequencies often "cleans up" the displayed signal in situations in which band-pass filtering alone cannot: mixing removes most noise, and the displayed signal resolves flaws well.
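A minimal sketch of the mixing idea, assuming a hypothetical probe in which a lift-off disturbance appears in both frequency channels while the flaw signature is dominated by one channel (all signals and coefficients below are invented for illustration): a scale factor learned on a flaw-free stretch lets the common disturbance be subtracted out.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
pos = np.arange(n)

# Hypothetical probe responses: a slowly varying lift-off disturbance affects
# both channels; the flaw signature appears mainly in the low-frequency channel.
liftoff = np.cumsum(rng.normal(0, 0.05, n))
flaw = 2.0 * np.exp(-0.5 * ((pos - 600) / 10) ** 2)
ch_f1 = 1.0 * liftoff + flaw + rng.normal(0, 0.02, n)
ch_f2 = 0.7 * liftoff + rng.normal(0, 0.02, n)

# "Mixing": learn the inter-channel scale on a flaw-free stretch,
# then subtract to cancel the common disturbance.
mask = pos < 400
scale = np.dot(ch_f1[mask], ch_f2[mask]) / np.dot(ch_f2[mask], ch_f2[mask])
mixed = ch_f1 - scale * ch_f2

print("raw noise level:  ", ch_f1[mask].std())
print("mixed noise level:", mixed[mask].std())
```

After mixing, the background noise drops sharply while the flaw peak near sample 600 survives essentially intact, which is the "cleans up the displayed signal" effect described above.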
Choi, Ted; Eskin, Eleazar
2013-01-01
Gene expression data, in conjunction with information on genetic variants, have enabled studies to identify expression quantitative trait loci (eQTLs) or polymorphic locations in the genome that are associated with expression levels. Moreover, recent technological developments and cost decreases have further enabled studies to collect expression data in multiple tissues. One advantage of multiple tissue datasets is that studies can combine results from different tissues to identify eQTLs more accurately than examining each tissue separately. The idea of aggregating results of multiple tissues is closely related to the idea of meta-analysis which aggregates results of multiple genome-wide association studies to improve the power to detect associations. In principle, meta-analysis methods can be used to combine results from multiple tissues. However, eQTLs may have effects in only a single tissue, in all tissues, or in a subset of tissues with possibly different effect sizes. This heterogeneity in terms of effects across multiple tissues presents a key challenge to detect eQTLs. In this paper, we develop a framework that leverages two popular meta-analysis methods that address effect size heterogeneity to detect eQTLs across multiple tissues. We show by using simulations and multiple tissue data from mouse that our approach detects many eQTLs undetected by traditional eQTL methods. Additionally, our method provides an interpretation framework that accurately predicts whether an eQTL has an effect in a particular tissue. PMID:23785294
A continuous mixing model for pdf simulations and its applications to combusting shear flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Chen, J.-Y.
1991-01-01
The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.
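The abstract does not give the continuous model itself, but the discrete C/D (Curl-type) mechanism it improves upon is easy to sketch: random particle pairs are replaced by their mean, which produces the discontinuous "jumps" in composition space mentioned above. The particle counts and mixing fraction below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, steps = 10000, 40
phi = rng.choice([0.0, 1.0], size=n)   # scalar starts as a double-delta PDF

var_hist = [phi.var()]
for _ in range(steps):
    # Curl coalescence/dispersion: pick random pairs and replace both
    # particles by their mean -- a discontinuous jump in composition space.
    idx = rng.permutation(n)
    a, b = idx[: n // 4], idx[n // 4 : n // 2]   # mix a fraction of pairs per step
    mean = 0.5 * (phi[a] + phi[b])
    phi[a] = mean
    phi[b] = mean
    var_hist.append(phi.var())

var_hist = np.array(var_hist)
print("variance decay:", var_hist[[0, 10, 40]])
```

The scalar mean is conserved exactly by each pairwise event while the variance decays geometrically; a continuous-in-time model replaces the pairwise jumps with a smooth relaxation, avoiding the discontinuity discussed in the abstract.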
Coupled charge migration and fluid mixing in reactive fronts
NASA Astrophysics Data System (ADS)
Ghosh, Uddipta; Bandopadhyay, Aditya; Jougnot, Damien; Le Borgne, Tanguy; Meheust, Yves
2017-04-01
Quantifying fluid mixing in subsurface environments and its consequences for biogeochemical reactions is of paramount importance owing to its role in processes such as contaminant migration, aquifer remediation, CO2 sequestration, and clogging, to name a few (Dentz et al. 2011). The presence of strong velocity gradients in porous media is expected to lead to enhanced diffusive mixing and augmented reaction rates (Le Borgne et al. 2014). Accurate in situ imaging of subsurface reactive solute transport and mixing remains to date a challenging proposition: the opacity of the medium prevents optical imaging, and field methods based on tracer tests do not provide spatial information. Recently developed geophysical methods based on the temporal monitoring of electrical conductivity and polarization have shown promise for mapping and monitoring biogeochemical reactions in the subsurface, although it remains challenging to decipher the multiple sources of electrical signals (e.g. Knight et al. 2010). In this work, we explore the coupling between fluid mixing, reaction and charge migration in porous media to evaluate the potential of mapping reaction rates from electrical measurements. To this end, we develop a new theoretical framework based on a lamellar mixing model (Le Borgne et al. 2013) to quantify changes in electrical mobility induced by chemical reactions across mixing fronts. Electrical conductivity and induced polarization are strongly dependent on the concentrations of ionic species, which in turn depend on the local reaction rates. Hence, our results suggest that variations in real and complex electrical conductivity may be quantitatively related to the mixing and reaction dynamics. Thus, the presented theory provides a novel upscaling framework for quantifying the coupling between mixing, reaction and charge migration in heterogeneous porous media flows.
References: Dentz et al., Mixing, spreading and reaction in heterogeneous media: A brief review, J. Contam. Hydrol. 120-121, 1 (2011). Le Borgne et al., Impact of fluid deformation on mixing-induced chemical reactions in heterogeneous flows, Geophys. Res. Lett. 41, 7898 (2014). Knight et al., Geophysics at the interface: Response of geophysical properties to solid-fluid, fluid-fluid, and solid-solid interfaces, Rev. Geophys. 48 (2010). Le Borgne et al., Stretching, coalescence and mixing in porous media, Phys. Rev. Lett. 110, 204501 (2013).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsigabu Gebrehiwet; James R. Henriksen; Luanjing Guo
Multi-component mineral precipitation in porous, subsurface environments is challenging to simulate or engineer when in situ reactant mixing is controlled by diffusion. In contrast to well-mixed systems, the conditions that favor mineral precipitation in porous media are distributed along chemical gradients, which evolve spatially due to concurrent mineral precipitation and modification of solute transport in the media. The resulting physical and chemical characteristics of a mixing/precipitation zone are a consequence of coupling between transport and chemical processes, and the distinctive properties of individual chemical systems. We examined the spatial distribution of precipitates formed in “double diffusion” columns for two chemical systems, calcium carbonate and calcium phosphate. Polyacrylamide hydrogel was used as a low permeability, high porosity medium to maximize diffusive mixing and minimize pressure- and density-driven flow between reactant solutions. In the calcium phosphate system, multiple, visually dense and narrow bands of precipitates were observed that were reminiscent of previously reported Liesegang patterns. In the calcium carbonate system, wider precipitation zones characterized by more sparse distributions of precipitates and a more open channel structure were observed. In both cases, formation of precipitates inhibited, but did not necessarily eliminate, continued transport and mixing of the reactants. A reactive transport model with fully implicit coupling between diffusion, chemical speciation and precipitation kinetics, but where explicit details of nucleation processes were neglected, was able to qualitatively simulate properties of the precipitation zones. The results help to illustrate how changes in the physical properties of a precipitation zone depend on coupling between diffusion-controlled reactant mixing and chemistry-specific details of precipitation kinetics.
Quantifying nutrient sources in an upland catchment using multiple chemical and isotopic tracers
NASA Astrophysics Data System (ADS)
Sebestyen, S. D.; Boyer, E. W.; Shanley, J. B.; Doctor, D. H.; Kendall, C.; Aiken, G. R.
2006-12-01
To explore processes that control the temporal variation of nutrients in surface waters, we measured multiple environmental tracers at the Sleepers River Research Watershed, an upland catchment in northeastern Vermont, USA. Using a set of high-frequency stream water samples, we quantified the variation of nutrients over a range of stream flow conditions with chemical and isotopic tracers of water, nitrate, and dissolved organic carbon (DOC). Stream water concentrations of nitrogen (predominantly in the forms of nitrate and dissolved organic nitrogen) and DOC reflected mixing of water contributed from distinct sources in the forested landscape. Water isotopic signatures and end-member mixing analysis revealed when solutes entered the stream from these sources and that the sources were linked to the stream by preferential shallow subsurface and overland flow paths. Results from the tracers indicated that freshly leached, terrestrial organic matter was the overwhelming source of high DOC concentrations in stream water. In contrast, in this region where atmospheric nitrogen deposition is chronically elevated, the highest concentrations of stream nitrate were attributable to atmospheric sources that were transported via melting snow and rainfall. These findings are consistent with a conceptual model of the landscape in which coupled hydrological and biogeochemical processes interact to control stream solute variability over time.
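End-member mixing analysis of the kind used above reduces to a small linear system: each tracer gives one mass-balance equation, and the source fractions must sum to one. A minimal sketch with invented tracer signatures (the end-member values below are purely illustrative, not Sleepers River data):

```python
import numpy as np

# Hypothetical end-member tracer signatures (illustrative values only).
# Rows: delta-18O (permil), DOC (mg/L); columns: snowmelt, soil water, groundwater.
E = np.array([[-18.0, -10.0, -12.0],
              [1.0, 8.0, 0.5]])
stream = np.array([-13.7, 3.325])    # observed stream-water signature

# Mass balance E @ f = stream, plus the constraint that fractions sum to 1.
A = np.vstack([E, np.ones(3)])
b = np.append(stream, 1.0)
f = np.linalg.solve(A, b)
print("end-member fractions:", f.round(3))
```

With two tracers, at most three end-members can be resolved exactly; more tracers than needed lead to an overdetermined system usually solved by least squares, with negative fractions signaling an inconsistent end-member choice.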
NASA Astrophysics Data System (ADS)
Das, Debottam; Ghosh, Kirtiman; Mitra, Manimala; Mondal, Subhadeep
2018-01-01
We consider an extension of the standard model (SM) augmented by two neutral singlet fermions per generation and a leptoquark. In order to generate the light neutrino masses and mixing, we incorporate an inverse seesaw mechanism. The right-handed (RH) neutrino production in this model is significantly larger than in the conventional inverse seesaw scenario. We analyze the different collider signatures of this model and find that final states with three or more leptons, multiple jets, and at least one b-tagged and (or) τ-tagged jet can probe a larger RH neutrino mass scale. We have also proposed a same-sign dilepton signal region associated with multiple jets and missing energy that can be used to distinguish the present scenario from the usual inverse seesaw extended SM.
Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the approaches that help protect people from infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of the two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors, indicating that it provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
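The zero-inflated Poisson (ZIP) likelihood at the heart of the model above mixes a point mass at zero with a Poisson count. A minimal maximum-likelihood sketch on simulated data (no random effects, and all parameter values invented; the paper's mixed model is considerably richer):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)
n, pi_true, lam_true = 5000, 0.3, 2.5
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

def neg_loglik(params):
    # Unconstrained parameterization: logit(pi), log(lambda)
    pi = 1.0 / (1.0 + np.exp(-params[0]))
    lam = np.exp(params[1])
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))                   # P(Y = 0)
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)  # P(Y = k), k > 0
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
print(f"pi_hat = {pi_hat:.3f}, lambda_hat = {lam_hat:.3f}")
```

Note that P(Y = 0) has two sources, structural zeros (probability π) and Poisson zeros (probability (1 − π)e^(−λ)), which is exactly the excess-zeros feature the abstract describes.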
NASA Astrophysics Data System (ADS)
Jameel, M. Y.; Brewer, S.; Fiorella, R.; Tipple, B. J.; Bowen, G. J.; Terry, S.
2017-12-01
Public water supply systems (PWSS) are complex distribution systems and critical infrastructure, making them vulnerable to physical disruption and contamination. Exploring the susceptibility of PWSS to such perturbations requires detailed knowledge of the supply system structure and operation. Although the physical structure of supply systems (i.e., pipeline connections) is usually well documented for developed cities, the actual flow patterns of water in these systems are typically unknown or estimated based on hydrodynamic models with limited observational validation. Here, we present a novel method for mapping the flow structure of water in a large, complex PWSS, building upon recent work highlighting the potential of stable isotopes of water (SIW) to document water management practices within complex PWSS. We sampled a major water distribution system of the Salt Lake Valley, Utah, measuring SIW of water sources, treatment facilities, and numerous sites within the supply system. We then developed a hierarchical Bayesian (HB) isotope mixing model to quantify the proportion of water supplied by different sources at sites within the supply system. Known production volumes and spatial distance effects were used to define the prior probabilities for each source; however, we did not include other physical information about the supply system. Our results were in general agreement with those obtained by hydrodynamic models and provide quantitative estimates of the contributions of different water sources to a given site, along with robust estimates of uncertainty. Secondary properties of the supply system, such as regions of "static" and "dynamic" sourcing (e.g., regions supplied dominantly by one source vs. those experiencing active mixing between multiple sources), can be inferred from the results.
The HB isotope mixing model offers a new investigative technique for analyzing PWSS and documenting aspects of supply system structure and operation that are otherwise challenging to observe. The method could allow water managers to document spatiotemporal variation in PWSS flow patterns, which is critical for interrogating the distribution system to inform operational decision making or disaster response, optimize water supply, and monitor and enforce water rights.
Eyre, David W.; Cule, Madeleine L.; Griffiths, David; Crook, Derrick W.; Peto, Tim E. A.
2013-01-01
Bacterial whole genome sequencing offers the prospect of rapid and high precision investigation of infectious disease outbreaks. Close genetic relationships between microorganisms isolated from different infected cases suggest transmission is a strong possibility, whereas transmission between cases with genetically distinct bacterial isolates can be excluded. However, undetected mixed infections—infection with ≥2 unrelated strains of the same species where only one is sequenced—potentially impair the exclusion of transmission with certainty, and may therefore limit the utility of this technique. We investigated the problem by developing a computationally efficient method for detecting mixed infection without the need for resource-intensive independent sequencing of multiple bacterial colonies. Given the relatively low density of single nucleotide polymorphisms within bacterial sequence data, direct reconstruction of mixed infection haplotypes from current short-read sequence data is not consistently possible. We therefore use a two-step maximum likelihood-based approach, assuming each sample contains up to two infecting strains. We jointly estimate the proportion of the infection arising from the dominant and minor strains, and the sequence divergence between these strains. In cases where mixed infection is confirmed, the dominant and minor haplotypes are then matched to a database of previously sequenced local isolates. We demonstrate the performance of our algorithm with in silico and in vitro mixed infection experiments, and apply it to transmission of an important healthcare-associated pathogen, Clostridium difficile. Using hospital ward movement data in a previously described stochastic transmission model, 15 pairs of cases enriched for likely transmission events associated with mixed infection were selected.
Our method identified four previously undetected mixed infections, and a previously undetected transmission event, but no direct transmission between the pairs of cases under investigation. These results demonstrate that mixed infections can be detected without additional sequencing effort, and this will be important in assessing the extent of cryptic transmission in our hospitals. PMID:23658511
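The core of the estimation step above is a mixture-proportion likelihood: at sites where the two strains differ, the number of reads supporting the minor strain is approximately binomial in the minor-strain fraction. A much-simplified sketch (known differing sites, fixed read depth, no sequencing error; the paper's method jointly estimates the proportion and the strain divergence):

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
n_sites, depth, eps_true = 200, 50, 0.15    # sites where the two strains differ
alt = rng.binomial(depth, eps_true, n_sites)  # reads supporting the minor strain

def neg_loglik(eps):
    return -binom.logpmf(alt, depth, eps).sum()

res = minimize_scalar(neg_loglik, bounds=(1e-4, 0.5), method="bounded")
print(f"estimated minor-strain fraction: {res.x:.3f}")
```

Because every site shares the same underlying fraction, even modest depth across a few hundred sites pins down the minor-strain proportion quite precisely, which is what makes mixed infection detectable without sequencing multiple colonies.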
Meteorological and air pollution modeling for an urban airport
NASA Technical Reports Server (NTRS)
Swan, P. R.; Lee, I. Y.
1980-01-01
Results are presented of numerical experiments modeling meteorology, multiple pollutant sources, and nonlinear photochemical reactions for the case of an airport in a large urban area with complex terrain. A planetary boundary-layer model which predicts the mixing depth and generates wind, moisture, and temperature fields was used; it utilizes only surface and synoptic boundary conditions as input data. A version of the Hecht-Seinfeld-Dodge chemical kinetics model is integrated with a new, rapid numerical technique; both the San Francisco Bay Area Air Quality Management District source inventory and the San Jose Airport aircraft inventory are utilized. The air quality model results are presented in contour plots; the combined results illustrate that the highly nonlinear interactions which are present require that the chemistry and meteorology be considered simultaneously to make a valid assessment of the effects of individual sources on regional air quality.
Ji, Jin; Yang, Jiun-Chan; Larson, Dale N.
2009-01-01
We demonstrate using nanohole arrays of mixed designs and a microwriting process based on dip-pen nanolithography to monitor multiple, different protein binding events simultaneously in real time based on the intensity of Extraordinary Optical Transmission of nanohole arrays. The microwriting process and small footprint of the individual nanohole arrays enabled us to observe different binding events located only 16 μm apart, achieving high spatial resolution. We also present a novel concept that incorporates nanohole arrays of different designs to improve confidence and accuracy of binding studies. For proof of concept, two types of nanohole arrays, designed to exhibit opposite responses to protein binding, were fabricated on one transducer. Initial studies indicate that the mixed designs could help to screen out artifacts such as protein intrinsic signals, providing improved accuracy of binding interpretation. PMID:19297143
A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.
2016-10-01
Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and, owing to the rather elaborate nature of PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
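The abstract does not specify its improved mixing model, but the role a mixing model plays in the variance equation can be sketched with the classic IEM (interaction by exchange with the mean) closure, in which every notional particle relaxes toward the ensemble mean at rate ω/2, so the concentration variance decays as exp(−ωt). All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, omega, dt, steps = 5000, 1.0, 0.01, 500
c = rng.normal(1.0, 0.3, n)          # notional-particle concentrations

var0, mean0 = c.var(), c.mean()
for _ in range(steps):
    # IEM mixing: relax every particle toward the ensemble mean.
    c += -0.5 * omega * (c - c.mean()) * dt

t = steps * dt
print("simulated variance ratio:", c.var() / var0)
print("analytic exp(-omega*t):  ", np.exp(-omega * t))
```

The mean is conserved while the variance follows the analytic exponential decay; a better mixing model changes the *shape* of the evolving PDF, which pure IEM famously cannot (it preserves the initial PDF shape up to rescaling).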
"Comments on Howe": Toward a More Inclusive "Scientific Research in Education"
ERIC Educational Resources Information Center
Johnson, R. Burke
2009-01-01
In response to Howe (2009), the author argues that educational research needs multiple thoughtful perspectives. The author's standpoint is that of a mixed methods research methodologist. Mixed methods research provides an antidualistic and syncretic philosophy and set of approaches or possibilities for merging insights from diverse perspectives;…
Lagrangian mixed layer modeling of the western equatorial Pacific
NASA Technical Reports Server (NTRS)
Shinoda, Toshiaki; Lukas, Roger
1995-01-01
Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs there due to episodes of strong wind and light precipitation associated with the El Niño-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicates that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.
A conflict analysis of 4D descent strategies in a metered, multiple-arrival route environment
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Harris, C. S.
1990-01-01
A conflict analysis was performed on multiple arrival traffic at a typical metered airport. The Flow Management Evaluation Model (FMEM) was used to simulate arrival operations using Denver Stapleton's arrival route structure. Sensitivities of conflict performance to three different 4-D descent strategies (clean-idle Mach/Constant AirSpeed (CAS), constant descent angle Mach/CAS, and energy optimal) were examined for three traffic mixes represented by those found at Denver Stapleton, John F. Kennedy and typical en route metering (ERM) airports. The Monte Carlo technique was used to generate simulation entry point times. Analysis results indicate that the clean-idle descent strategy offers the best compromise in overall performance. Performance measures primarily include susceptibility to conflict and conflict severity. Fuel usage performance is extrapolated from previous descent strategy studies.
Kiss, Bálint; Fábián, Balázs; Idrissi, Abdenacer; Szőri, Milán; Jedlovszky, Pál
2017-07-27
The thermodynamic changes that occur upon mixing five models of formamide and three models of water, including the miscibility of these model combinations itself, are studied by performing Monte Carlo computer simulations using an appropriately chosen thermodynamic cycle and the method of thermodynamic integration. The results show that the mixing of these two components is close to ideal mixing, as both the energy and entropy of mixing turn out to be rather close to the ideal term in the entire composition range. Concerning the energy of mixing, the OPLS/AA_mod model of formamide behaves in a qualitatively different way than the other models considered. Thus, this model results in negative, while the other ones in positive, energy of mixing values in combination with all three water models considered. Experimental data support this latter behavior. Although the Helmholtz free energy of mixing always turns out to be negative in the entire composition range, the majority of the model combinations tested either show limited miscibility or, at least, approach the miscibility limit very closely at certain compositions. Concerning both the miscibility and the energy of mixing of these model combinations, we recommend the use of the combination of the CHARMM formamide and TIP4P water models in simulations of water-formamide mixtures.
NASA Technical Reports Server (NTRS)
Tilmes, S.; Pan, L. L.; Hoor, P.; Atlas, E.; Avery, M. A.; Campos, T.; Christensen, L. E.; Diskin, G. S.; Gao, R.-S.; Herman, R. L.;
2010-01-01
We present a climatology of O3, CO, and H2O for the upper troposphere and lower stratosphere (UTLS), based on a large collection of high-resolution research aircraft data taken between 1995 and 2008. To group aircraft observations with sparse horizontal coverage, the UTLS is divided into three regimes: the tropics, subtropics, and the polar region. These regimes are defined using a set of simple criteria based on tropopause height and multiple tropopause conditions. Tropopause-referenced tracer profiles and tracer-tracer correlations show distinct characteristics for each regime, which reflect the underlying transport processes. The UTLS climatology derived here shows many features of earlier climatologies. In addition, mixed air masses in the subtropics, identified by O3-CO correlations, show two characteristic modes in tracer-tracer space that are a result of mixed air masses in layers above and below the tropopause (TP). A thin layer of mixed air (1.2 km around the tropopause) is identified for all regions and seasons, where tracer gradients across the TP are largest. The most pronounced influence of mixing between the tropical transition layer and the subtropics was found in spring and summer in the region above 380 K potential temperature. The vertical extent of mixed air masses between UT and LS reaches up to 5 km above the TP. The tracer correlations and distributions in the UTLS derived here can serve as a reference for model and satellite data evaluation.
Basson, Jacob; Sung, Yun Ju; de Las Fuentes, Lisa; Schwander, Karen L; Vazquez, Ana; Rao, Dabeeru C
2016-01-01
Blood pressure (BP) has been shown to be substantially heritable, yet identified genetic variants explain only a small fraction of the heritability. Gene-smoking interactions have detected novel BP loci in cross-sectional family data. Longitudinal family data are available and have additional promise to identify BP loci. However, this type of data presents unique analysis challenges. Although several methods for analyzing longitudinal family data are available, which method is the most appropriate and under what conditions has not been fully studied. Using data from three clinic visits from the Framingham Heart Study, we performed association analysis accounting for gene-smoking interactions in BP at 31,203 markers on chromosome 22. We evaluated three different modeling frameworks: generalized estimating equations (GEE), hierarchical linear modeling, and pedigree-based mixed modeling. The three models performed somewhat comparably, with multiple overlaps in the most strongly associated loci from each model. Loci with the greatest significance were more strongly supported in the longitudinal analyses than in any of the component single-visit analyses. The pedigree-based mixed model was more conservative, with less inflation in the variant main effect and greater deflation in the gene-smoking interactions. The GEE, but not the other two models, resulted in substantial inflation in the tail of the distribution when variants with minor allele frequency <1% were included in the analysis. The choice of analysis method should depend on the model and the structure and complexity of the familial and longitudinal data. © 2015 WILEY PERIODICALS, INC.
Brenner, Stephan; Muula, Adamson S; Robyn, Paul Jacob; Bärnighausen, Till; Sarker, Malabika; Mathanga, Don P; Bossert, Thomas; De Allegri, Manuela
2014-04-22
In this article we present a study design to evaluate the causal impact of providing supply-side performance-based financing incentives in combination with a demand-side cash transfer component on equitable access to and quality of maternal and neonatal healthcare services. This intervention is introduced to selected emergency obstetric care facilities and catchment area populations in four districts in Malawi. We here describe and discuss our study protocol with regard to the research aims, the local implementation context, and our rationale for selecting a mixed methods explanatory design with a quasi-experimental quantitative component. The quantitative research component consists of a controlled pre- and post-test design with multiple post-test measurements. This allows us to quantitatively measure 'equitable access to healthcare services' at the community level and 'healthcare quality' at the health facility level. Guided by a theoretical framework of causal relationships, we determined a number of input, process, and output indicators to evaluate both intended and unintended effects of the intervention. Overall causal impact estimates will result from a difference-in-difference analysis comparing selected indicators across intervention and control facilities/catchment populations over time. To further explain the heterogeneity of quantitatively observed effects and to understand the experiential dimensions of financial incentives on clients and providers, we designed a qualitative component in line with the overall explanatory mixed methods approach. This component consists of in-depth interviews and focus group discussions with providers, service users, non-users, and policy stakeholders.
In this explanatory design, comprehensive understanding of expected and unexpected effects of the intervention on both access and quality will emerge through careful triangulation at two levels: across multiple quantitative elements and across quantitative and qualitative elements. Combining a traditional quasi-experimental controlled pre- and post-test design with an explanatory mixed methods model permits an additional assessment of organizational and behavioral changes affecting complex processes. Through this impact evaluation approach, our design will not only create robust evidence measures for the outcome of interest, but also generate insights into how and why the investigated interventions produce certain intended and unintended effects, allowing for a more in-depth evaluation.
Lee, Chia-Yen; Chang, Chin-Lung; Wang, Yao-Nan; Fu, Lung-Ming
2011-01-01
The aim of microfluidic mixing is to achieve a thorough and rapid mixing of multiple samples in microscale devices. In such devices, sample mixing is essentially achieved by enhancing the diffusion effect between the different species flows. Broadly speaking, microfluidic mixing schemes can be categorized as either “active”, where an external energy force is applied to perturb the sample species, or “passive”, where the contact area and contact time of the species samples are increased through specially-designed microchannel configurations. Many mixers have been proposed to facilitate this task over the past 10 years. Accordingly, this paper commences by providing a high level overview of the field of microfluidic mixing devices before describing some of the more significant proposals for active and passive mixers. PMID:21686184
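The reason passive mixers work can be made concrete with an order-of-magnitude estimate: pure diffusion across a stream of width w takes roughly t ≈ w²/(2D), so geometries that split and fold the streams to halve the striation width quarter the mixing time. A back-of-the-envelope sketch (the diffusivity is a typical small-molecule value in water; the widths are arbitrary examples):

```python
# Order-of-magnitude check of why passive micromixers work: diffusion across a
# stream of width w takes t ~ w^2 / (2D), so halving the striation width
# (e.g., by splitting and folding the streams) quarters the mixing time.
D = 1e-9          # m^2/s, typical small-molecule diffusivity in water
for w_um in (200, 100, 50, 25):
    w = w_um * 1e-6
    t_mix = w**2 / (2 * D)
    print(f"width {w_um:4d} um -> diffusive mixing time ~ {t_mix:7.3f} s")
```

At 200 μm a small molecule needs tens of seconds to diffuse across the channel, which at typical flow speeds implies impractically long channels; this quadratic scaling is what both active perturbation and passive lamination schemes exploit.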
Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena
2013-12-01
There is much discussion in the dental literature regarding the superiority of one impression technique over another using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™, 3M ESPE, and the four impression techniques used were (1) Monophase impression technique using medium body impression material. (2) One step double mix impression technique using heavy body and light body impression materials simultaneously. (3) Two step double mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material). (4) Matrix impression using a matrix of polyether occlusal registration material. The matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance of differences in distances between the master model and the stone models. One way analysis of variance (ANOVA) was used for multiple group comparison, followed by Bonferroni's test for pairwise comparison. The accuracy was tested at α = 0.05. In general, polyether impression material produced stone dies that were smaller, except for the dies produced from the one step double mix impression technique.
The ANOVA revealed a highly significant difference for each dimension measured (except for the inter-abutment distance between the first and the second die) between any two groups of stone models obtained from the four impression techniques. Pairwise comparison for each measurement did not reveal any significant difference (except for the faciolingual distance of the third die) between the casts produced using the two step double mix impression technique and the matrix impression system. The two step double mix impression technique produced stone dies that showed the least dimensional variation. During fabrication of a cast restoration, laboratory procedures should compensate not only for the cement thickness, but also for the increase or decrease in die dimensions.
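The statistical pipeline this abstract describes (two-tailed t tests against the master model, one-way ANOVA across techniques, Bonferroni-corrected pairwise comparisons) can be sketched with SciPy. All measurement values below are fabricated for illustration; they are not the study's data:

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fabricated die measurements (mm) for one dimension, 30 dies per
# technique; the means and spreads are illustrative, not the study's data
techniques = {
    "monophase": rng.normal(7.98, 0.02, 30),
    "one_step_double_mix": rng.normal(8.02, 0.02, 30),
    "two_step_double_mix": rng.normal(8.00, 0.01, 30),
    "matrix": rng.normal(8.00, 0.02, 30),
}
master = 8.00  # corresponding master-model dimension (mm)

# Two-tailed one-sample t test of each technique against the master model
for name, vals in techniques.items():
    t, p = stats.ttest_1samp(vals, popmean=master)
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA across the four techniques
f, p_anova = stats.f_oneway(*techniques.values())
print(f"ANOVA: F = {f:.1f}, p = {p_anova:.3g}")

# Bonferroni correction: multiply each pairwise p-value by the number
# of comparisons (6 here) and cap at 1
pairs = list(combinations(techniques, 2))
for a, b in pairs:
    _, p = stats.ttest_ind(techniques[a], techniques[b])
    print(f"{a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.4f}")
```

With clearly separated group means the ANOVA p-value is very small, and the Bonferroni step then controls the family-wise error rate across the six pairwise tests.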
Research and Guidance on Drinking Water Contaminant Mixtures
Accurate assessment of potential human health risk(s) from multiple-route exposures to multiple chemicals in drinking water is needed because of widespread daily exposure to this complex mixture. Hundreds of chemicals have been identified in drinking water with the mix of chemic...
ERIC Educational Resources Information Center
Kutschera, P. C.; Pelayo, Jose Maria G., III
2012-01-01
Multiple anecdotal accounts and a thin body of extant empirical research on an estimated 250,000 multiple generation, mixed-heritage military Amerasians in the Philippines, and Pan Amerasians residing in other East and Southeast Asian societies, indicate substantial past and present stigmatization and discrimination--particularly Amerasians of…
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example, Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, a stochastic search in each generation. This stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants with the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple-period situation. The equilibrium condition is then developed for fuel resource constraints using discrete-time optimal control. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions.
A traditional optimization model may not be enough to consider the distributed, large-scale, and complex energy market. This research compares the performance and search paths of different artificial life techniques such as the Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a proper method to emulate Generation Companies' (GENCOs') bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
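The dispatch core underlying staircase-offer bidding reduces to a linear program: each offer block is a bounded variable and demand is an equality constraint. A minimal sketch with scipy.optimize.linprog, using invented offer blocks (this is a single dispatch at a fixed demand, not the full parametric sweep of a Parametric Linear Programming treatment):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical staircase offers (price $/MWh, block size MW) pooled from
# several generators; every number here is invented for illustration
block_price = np.array([20.0, 25.0, 22.0, 30.0, 18.0, 28.0])
block_size = np.array([50.0, 30.0, 40.0, 20.0, 60.0, 25.0])
demand = 150.0  # MW

# Economic dispatch as an LP: minimize offered cost subject to meeting
# demand exactly, with each block bounded by its offered size
res = linprog(
    c=block_price,
    A_eq=np.ones((1, block_price.size)),
    b_eq=[demand],
    bounds=[(0.0, s) for s in block_size],
    method="highs",
)
dispatch = res.x
# The marginal (most expensive dispatched) block sets the clearing price
clearing_price = block_price[dispatch > 1e-6].max()
print("dispatch (MW):", dispatch.round(1))
print("clearing price ($/MWh):", clearing_price)
```

Re-solving while sweeping `demand` or a generator's own block prices traces out the profit curve that a parametric treatment of the same LP handles analytically.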
NASA Astrophysics Data System (ADS)
Henze, D.; Noone, D.
2017-12-01
A third of the world's biomass burning aerosol (BBA) particles are generated in southern Africa, and these particles are swept into the midlevel troposphere over the southeast Atlantic Ocean. The presence of these aerosols over the marine environment of the southeast Atlantic offers a unique natural laboratory for studying aerosol effects on climate, and specifically a modification to the hydrologic cycle and microphysical characteristics of clouds. Different rates of condensation with high aerosol numbers change the precipitation rates in drizzling stratiform clouds, while the mixing of aerosols into the cloud layer is synonymous with entrainment from above cloud top near the top of the subtropical inversion. To better understand the magnitude of the aerosol influence on southeast Atlantic boundary layer clouds, we analyze cloud-top entrainment and drizzle as a function of aerosol loading to determine the impact of BBA. Entrainment was determined from mixing line analysis based on profile measurements of moist static energy, total water, and the two most common heavy isotopes of water: HDO and H218O. Data were collected on the P-3 Orion aircraft during the NASA 2017 ORACLES campaign. Using these measurements, a box model was constructed using the combined conservation laws associated with all four of these quantities to estimate the entrainment and rainout of cloud liquid. The population of profiles sampled by the aircraft over the course of the 30-day mission spans varying concentrations of BBA. Initial plots of the water isotope mixing lines show where and to what degree the BBA air mass has mixed into the boundary layer air mass from above. This is demonstrated by the fact that the mixing end-members are the same for the different areas sampled, but the rate at which the various mixing lines are traversed as a function of altitude varies.
Further, the mixing lines as a function of height traverse back and forth between end members multiple times over one profile. This suggests that air masses are mixing by "layering" into each other, and helps us to better represent entrainment in our box model. Meanwhile, isotope ratios measured below versus above the cloud layer show that the air above the clouds is depleted of heavy water isotopes in comparison to below; the degree of depletion could correspond to drizzle amount.
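The mixing-line idea can be illustrated with a two-end-member calculation: if the boundary-layer and above-cloud end members have fixed isotopic compositions, the fraction of free-tropospheric air in a sampled parcel follows from linear mixing. The sketch below treats δD as an approximately conserved tracer and uses invented end-member values, not ORACLES measurements:

```python
import numpy as np

# Assumed end-member dD values (per mil); illustrative, not campaign data
delta_bl = -70.0    # marine boundary layer air
delta_ft = -200.0   # BBA-laden free-tropospheric air above cloud top

def entrained_fraction(delta_obs):
    """Fraction of above-cloud (free-tropospheric) air in a sampled parcel."""
    return (delta_obs - delta_bl) / (delta_ft - delta_bl)

# A hypothetical profile of observed dD with height: the fraction climbs
# from 0 near cloud base to 1 above cloud top
profile = np.array([-70.0, -83.0, -135.0, -200.0])
print(entrained_fraction(profile))
```

Back-and-forth excursions of this fraction with altitude, rather than a monotonic increase, are what the abstract interprets as mixing by layering.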
A flavor symmetry model for bilarge leptonic mixing and the lepton masses
NASA Astrophysics Data System (ADS)
Ohlsson, Tommy; Seidl, Gerhart
2002-11-01
We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimentally observed hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data and the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 - θ13.
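The closing prediction ties the solar angle to the reactor angle; a one-line numerical check, using an assumed (illustrative, not fitted) small θ13:

```python
import math

# Assumed small reactor angle (illustrative value, not a fit to data)
theta13 = math.radians(8.0)

# The model's prediction: theta12 ~ pi/4 - theta13
theta12 = math.pi / 4 - theta13
print(math.degrees(theta12))  # large, but below the maximal 45 degrees
```

Any nonzero θ13 pushes θ12 below maximal mixing, consistent with the "large, but not close to maximal" solar angle described above.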
Leung, Michael; Bassani, Diego G; Racine-Poon, Amy; Goldenberg, Anna; Ali, Syed Asad; Kang, Gagandeep; Premkumar, Prasanna S; Roth, Daniel E
2017-09-10
Conditioning child growth measures on baseline accounts for regression to the mean (RTM). Here, we present the "conditional random slope" (CRS) model, based on a linear mixed-effects model that incorporates a baseline-time interaction term, which can accommodate multiple data points for a child while also directly accounting for RTM. In two birth cohorts, we applied five approaches to estimate child growth velocities from 0 to 12 months to assess the effect of increasing data density (number of measures per child) on the magnitude of RTM of unconditional estimates, and the correlation and concordance between the CRS and four alternative metrics. Further, we demonstrated the differential effect of the choice of velocity metric on the magnitude of the association between infant growth and stunting at 2 years. RTM was minimally attenuated by increasing data density for unconditional growth modeling approaches. The CRS and classical conditional models gave nearly identical estimates with two measures per child. Compared to the CRS estimates, unconditional metrics had moderate correlation (r = 0.65-0.91) but poor agreement in the classification of infants with relatively slow growth (kappa = 0.38-0.78). Estimates of the velocity-stunting association were the same for the CRS and classical conditional models but differed substantially between conditional and unconditional metrics. The CRS can leverage the flexibility of linear mixed models while addressing RTM in longitudinal analyses. © 2017 The Authors. American Journal of Human Biology published by Wiley Periodicals, Inc.
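The contrast between unconditional change scores and the classical conditional metric that the CRS generalizes can be shown with simulated data: measurement error alone makes a simple change score negatively correlated with baseline (RTM), while the conditional residual is not. Everything below is simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Simulated length-for-age z-scores at 0 and 12 months: each child's true
# status is stable, but each measurement carries independent error, which
# is exactly what induces regression to the mean. Illustrative only.
true_laz = rng.normal(0.0, 1.0, n)
z0 = true_laz + rng.normal(0.0, 0.5, n)    # baseline measurement
z12 = true_laz + rng.normal(0.0, 0.5, n)   # 12-month measurement

# Unconditional velocity: a simple change score, contaminated by RTM
v_uncond = z12 - z0

# Classical conditional velocity: residual of z12 regressed on z0
# (the idea the CRS model extends to many measurements per child)
b = np.cov(z0, z12, bias=True)[0, 1] / np.var(z0)
v_cond = z12 - (z12.mean() + b * (z0 - z0.mean()))

print("corr(baseline, unconditional velocity):", np.corrcoef(z0, v_uncond)[0, 1])
print("corr(baseline, conditional velocity):  ", np.corrcoef(z0, v_cond)[0, 1])
```

The change score correlates negatively with baseline (children who measure high at birth appear to "slow down" purely through error), while the conditional residual is uncorrelated with baseline by construction.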
Multi-diversity combining and selection for relay-assisted mixed RF/FSO system
NASA Astrophysics Data System (ADS)
Chen, Li; Wang, Weidong
2017-12-01
We propose and analyze multi-diversity combining and selection to enhance the performance of a relay-assisted mixed radio frequency/free-space optics (RF/FSO) system. We focus on a practical scenario for cellular networks where a single-antenna source communicates with a multi-aperture destination through a relay equipped with multiple receive antennas and multiple transmit apertures. The RF single input multiple output (SIMO) links employ either maximal-ratio combining (MRC) or receive antenna selection (RAS), and the FSO multiple input multiple output (MIMO) links adopt either repetition coding (RC) or transmit laser selection (TLS). The performance is evaluated via an outage probability analysis over Rayleigh fading RF links and Gamma-Gamma atmospheric turbulence FSO links with pointing errors, where a channel state information (CSI) assisted amplify-and-forward (AF) scheme is considered. Asymptotic closed-form expressions at high signal-to-noise ratio (SNR) are also derived. Coding gain and diversity order for different combining and selection schemes are further discussed. Numerical results are provided to verify and illustrate the analytical results.
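The MRC-versus-RAS comparison on the RF hop can be sketched with a Monte Carlo over Rayleigh fading, under which instantaneous branch SNRs are exponentially distributed. This sketch ignores the FSO hop, Gamma-Gamma turbulence, and pointing errors, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
trials, n_rx = 200_000, 4          # Monte Carlo trials, receive antennas (assumed)
mean_snr_db, gamma_th = 10.0, 1.0  # mean branch SNR (dB), outage SNR threshold
mean_snr = 10 ** (mean_snr_db / 10)

# Rayleigh fading: instantaneous branch SNRs are exponential
g = rng.exponential(mean_snr, size=(trials, n_rx))

# MRC sums the branch SNRs; RAS keeps only the strongest branch
out_mrc = np.mean(g.sum(axis=1) < gamma_th)
out_ras = np.mean(g.max(axis=1) < gamma_th)

# Closed-form RAS outage over i.i.d. Rayleigh branches, for comparison
p_ras = (1.0 - np.exp(-gamma_th / mean_snr)) ** n_rx

print(f"outage: MRC = {out_mrc:.2e}, RAS = {out_ras:.2e}, RAS analytic = {p_ras:.2e}")
```

Per trial the MRC combiner output SNR is at least the selected branch's, so MRC outage can never exceed RAS outage; both schemes achieve the full diversity order n_rx and differ only in coding gain, mirroring the coding-gain/diversity-order discussion in the abstract.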
Dagenais, Emmanuelle; Rouleau, Isabelle; Tremblay, Alexandra; Demers, Mélanie; Roger, Élaine; Jobin, Céline; Duquette, Pierre
2016-01-01
Patients diagnosed with multiple sclerosis (MS) often report prospective memory (PM) deficits. Although PM is important for daily functioning, it is not formally assessed in clinical practice. The aim of this study was to examine the role of executive functions in MS patients' PM revealed by the effect of strength of cue-action association on PM performance. Thirty-nine MS patients were compared to 18 healthy controls matched for age, gender, and education on a PM task modulating the strength of association between the cue and the intended action. Deficits in MS patients affecting both prospective and retrospective components of PM were confirmed using 2 × 2 × 2 mixed analyses of variance (ANOVAs). Among patients, multiple regression analyses revealed that the impairment was modulated by the efficiency of executive functions, whereas retrospective memory seemed to have little impact on PM performance, contrary to expectation. More specifically, results of 2 × 2 × 2 mixed-model analyses of covariance (ANCOVAs) showed that low-executive patients had more difficulty detecting and, especially, retrieving the appropriate action when the cue and the action were unrelated, whereas high-executive patients' performance seemed to be virtually unaffected by the cue-action association. Using an objective measure, these findings confirm the presence of PM deficits in MS. They also suggest that such deficits depend on executive functioning and can be reduced when automatic PM processes are engaged through semantic cue-action association. They underscore the importance of assessing PM in clinical settings through a cognitive evaluation and offer an interesting avenue for rehabilitation.