Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in genome-wide association studies using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random-intercept linear mixed models with mean measures as the outcome, and (c) random-intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate of the methods at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
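A minimal sketch of the GRAMMAR-style two-stage strategy this abstract describes, with simulated placeholder data: fit the polygenic mixed model once under the null, strip the family effect from the phenotype, then regress the decorrelated residuals on each SNP. The kinship matrix and the variance components here are stand-ins for quantities that would be estimated (e.g., by REML) in a real analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_snps = 200, 1000
kinship = np.eye(n)                      # placeholder; use an estimated kinship matrix
snps = rng.binomial(2, 0.3, size=(n, n_snps)).astype(float)
y = rng.normal(size=n)                   # phenotype (e.g., baseline blood pressure)

# Stage 1: null polygenic model y = mu + g + e, g ~ N(0, sg2*K), e ~ N(0, se2*I).
# The variance components sg2, se2 are assumed estimated elsewhere (e.g., by REML).
sg2, se2 = 0.5, 0.5
V = sg2 * kinship + se2 * np.eye(n)
Vinv = np.linalg.inv(V)
mu = (np.ones(n) @ Vinv @ y) / (np.ones(n) @ Vinv @ np.ones(n))  # GLS intercept
resid = y - mu - sg2 * kinship @ Vinv @ (y - mu)  # residuals with BLUP of g removed

# Stage 2: rapid per-SNP regression on the decorrelated residuals.
g = snps - snps.mean(axis=0)
beta = (g * resid[:, None]).sum(axis=0) / (g**2).sum(axis=0)
```

Because stage 2 is ordinary least squares against fixed residuals, the per-SNP cost is trivial compared with refitting the mixed model for every variant.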
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage
ERIC Educational Resources Information Center
Galyardt, April
2012-01-01
This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…
Modeling and Analysis of Mixed Synchronous/Asynchronous Systems
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan
2012-01-01
Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted to capture mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.
Finite element modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.
1983-01-01
Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology with high potential for application to tire modeling problems are reviewed, and the analysis and modeling needs for tires are identified. Topics covered include: reduction methods for large-scale nonlinear analysis, with particular emphasis on the treatment of combined loads and of displacement-dependent and nonconservative loadings; the development of simple and efficient mixed finite element models for shell analysis, the identification of equivalent mixed and purely displacement-based models, and the determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems based on a total Lagrangian description of the deformation.
Janssen, Dirk P
2012-03-01
Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
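For readers who want the crossed-random-effects analysis the article recommends, a minimal sketch in Python using statsmodels (data simulated; column names are illustrative). Crossed subject and item intercepts are expressed as variance components within a single all-encompassing group:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_item = 30, 20
df = pd.DataFrame([(s, i) for s in range(n_subj) for i in range(n_item)],
                  columns=["subject", "item"])
df["cond"] = rng.integers(0, 2, len(df))
df["rt"] = (600 + 30 * df["cond"]
            + rng.normal(0, 40, n_subj)[df["subject"]]   # subject intercepts
            + rng.normal(0, 25, n_item)[df["item"]]      # item intercepts
            + rng.normal(0, 50, len(df)))                 # residual noise

# One all-encompassing group; subjects and items enter as variance components,
# which is how crossed random intercepts are expressed in statsmodels.
df["all"] = 1
vc = {"subject": "0 + C(subject)", "item": "0 + C(item)"}
model = smf.mixedlm("rt ~ cond", data=df, groups="all",
                    vc_formula=vc, re_formula="0")
print(model.fit().summary())
```

This single model replaces the separate by-participants (F1) and by-items (F2) approximations with one analysis that treats both random factors simultaneously.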
NASA Technical Reports Server (NTRS)
Noor, A. K.; Peters, J. M.
1981-01-01
Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.
Separate-channel analysis of two-channel microarrays: recovering inter-spot information.
Smyth, Gordon K; Altman, Naomi S
2013-05-26
Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate-channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
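The M-value/A-value transformation at the heart of this reformulation is simple to state; a small sketch with made-up intensities:

```python
import numpy as np

def ma_values(red, green):
    """Per-spot M (log-ratio) and A (average log-intensity) from two channels."""
    log_r, log_g = np.log2(red), np.log2(green)
    m = log_r - log_g            # within-spot contrast, the traditional quantity
    a = 0.5 * (log_r + log_g)    # within-spot mean, ignored by pure log-ratio analysis
    return m, a

red = np.array([1200.0, 430.0, 8000.0])
green = np.array([1100.0, 460.0, 2000.0])
m, a = ma_values(red, green)
```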
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that the vine copula mixed model can improve on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.
ERIC Educational Resources Information Center
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
Data on copula modeling of mixed discrete and continuous neural time series.
Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou
2016-06-01
Copula is an important tool for modeling neural dependence. Recent copula work has been extended to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for the joint analysis of spikes and local field potentials (LFPs) with copula modeling. In particular, we present details of different model orders and the influence of possible spike contamination in LFP data from same-electrode and different-electrode recordings. To further facilitate the use of our copula model for the analysis of mixed data, we provide the MATLAB code together with example data.
Semiparametric mixed-effects analysis of PK/PD models using differential equations.
Wang, Yi; Eskridge, Kent M; Zhang, Shunpu
2008-08-01
Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
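A sketch of the paper's model form dx/dt = A(t)x + B(t) with a spline-represented B(t), using SciPy. Here A is taken as a scalar constant and the spline coefficients are fixed rather than penalized-spline estimates, so this only illustrates the structure, not the estimation:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import BSpline

# B(t) as a cubic B-spline whose coefficients would, in the paper's setting,
# be estimated with a roughness penalty; here they are fixed for illustration.
knots = np.r_[0, 0, 0, 0, 2, 4, 6, 8, 8, 8, 8]
coef = np.array([0.0, 0.5, 1.2, 0.8, 0.3, 0.1, 0.0])
B = BSpline(knots, coef, k=3)

def rhs(t, x, a=-0.7):
    # dx/dt = A(t) x + B(t), with a scalar, time-constant A for simplicity
    return a * x + B(t)

sol = solve_ivp(rhs, (0.0, 8.0), y0=[1.0], dense_output=True)
```

In a population analysis, the structural parameters (here `a`) and the spline coefficients would carry random effects across subjects; the nonparametric B(t) is what flags structural misspecification.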
CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS
Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...
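For the two-source case mentioned here, the mass-balance mixing model reduces to one line; a sketch with illustrative δ13C values:

```python
def two_source_fraction(d_mix, d_a, d_b):
    """Mass-balance estimate of the fraction of food A in the diet,
    from one isotope ratio (delta values), for a two-source system."""
    return (d_mix - d_b) / (d_a - d_b)

# e.g., d13C: consumer -24.0, source A -26.5, source B -20.0
f_a = two_source_fraction(-24.0, -26.5, -20.0)   # ~0.615
```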
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
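A sketch of the kind of truncated-power-basis design matrix that underlies fixed-knot regression splines in a linear mixed model. This is the constant-order base case; the paper's contribution is letting the polynomial order vary across segments. Knot positions and order here are illustrative:

```python
import numpy as np

def tp_spline_design(t, knots, order=2):
    """Truncated-power-basis design for a piecewise polynomial of given order,
    continuous (with smooth lower-order derivatives) at the interior knots."""
    cols = [t**p for p in range(order + 1)]                        # global polynomial
    cols += [np.where(t > k, (t - k)**order, 0.0) for k in knots]  # per-knot terms
    return np.column_stack(cols)

t = np.linspace(0, 24, 100)   # e.g., hours in an ambulatory blood pressure profile
X = tp_spline_design(t, knots=[6.0, 12.0, 18.0], order=2)
```

Columns of such a matrix can be placed in the fixed-effects and/or random-effects design of any standard mixed-model routine, which is why the approach programs easily in SAS or S-plus.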
Visualized analysis of mixed numeric and categorical data via extended self-organizing map.
Hsu, Chung-Chian; Lin, Shu-Han
2012-01-01
Many real-world datasets are of mixed type, having both numeric and categorical attributes. Even though difficult, analyzing mixed-type datasets is important. In this paper, we propose an extended self-organizing map (SOM), called MixSOM, which utilizes a data structure called a distance hierarchy to facilitate the handling of numeric and categorical values in a direct, unified manner. Moreover, the extended model regularizes the prototype distance between neighboring neurons in proportion to their map distance so that the structure of the clusters can be portrayed better on the map. Extensive experiments on several synthetic and real-world datasets are conducted to demonstrate the capability of the model and to compare MixSOM with several existing models, including Kohonen's SOM, the generalized SOM and the visualization-induced SOM. The results show that MixSOM is superior to the other models in reflecting the structure of mixed-type data and facilitates further analysis of the data, such as exploration at various levels of granularity.
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.
Berrios, Raul; Totterdell, Peter; Kellett, Stephen
2015-01-01
The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model-dimensional or discrete-as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
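Two of the quantities discussed, the minimum index and the random-effects pooled effect size, are easy to make concrete. A sketch using the DerSimonian-Laird estimator (the abstract specifies only "a random-effects model", so the estimator choice is an assumption) with invented study values:

```python
import numpy as np

def dersimonian_laird(d, var_d):
    """Random-effects pooled effect size with DerSimonian-Laird tau^2."""
    w = 1.0 / var_d
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed)**2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)      # between-study variance
    w_star = 1.0 / (var_d + tau2)
    return np.sum(w_star * d) / np.sum(w_star)

d = np.array([0.9, 0.6, 1.1, 0.4])               # invented per-study effect sizes
var_d = np.array([0.04, 0.06, 0.05, 0.08])
pooled = dersimonian_laird(d, var_d)

# Minimum index per participant: the weaker of the two opposite affects.
happy, sad = np.array([4.2, 3.1, 5.0]), np.array([2.8, 3.3, 1.9])
mix_min = np.minimum(happy, sad)
```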
Uncertainty in mixing models: a blessing in disguise?
NASA Astrophysics Data System (ADS)
Delsman, J. R.; Oude Essink, G. H. P.
2012-04-01
Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few studies have addressed the uncertainty associated with these studies in much detail. This uncertainty stems from analytical error, from spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km² agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice aims to improve water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for a more sustainable water management practice and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. Applying the end-member mixing analysis within a GLUE-like framework not only quantified the uncertainty associated with the analysis; analysis of the posterior parameter set also identified catchment processes that would otherwise have been overlooked.
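A minimal sketch of end-member mixing analysis inside a Monte Carlo, GLUE-like loop as described: end-member tracer compositions are perturbed, the mass-balance system is solved, and only "behavioural" solutions (fractions between 0 and 1) are retained. The tracers, end-members, and values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
sample = np.array([250.0, 35.0])       # stream sample: two tracer concentrations
em_mean = np.array([[600.0, 60.0],     # end-member 1: brackish seepage
                    [ 30.0, 10.0],     # end-member 2: precipitation
                    [150.0, 40.0]])    # end-member 3: Rhine flushing water
em_sd = 0.15 * em_mean                 # assumed end-member variability

kept = []
for _ in range(10000):
    em = rng.normal(em_mean, em_sd)    # perturb end-member compositions
    A = np.vstack([em.T, np.ones(3)])  # two tracer balances + mass closure
    b = np.r_[sample, 1.0]
    f = np.linalg.solve(A, b)
    if np.all(f >= 0) and np.all(f <= 1):   # behavioural, GLUE-style acceptance
        kept.append(f)

posterior = np.array(kept)             # its spread quantifies mixing-model uncertainty
```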
Significance of the model considering mixed grain-size for inverse analysis of turbidites
NASA Astrophysics Data System (ADS)
Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.
2016-12-01
A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has long been important in sedimentological research. For instance, inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses require forward models, and most turbidity-current models employ particles of uniform grain size. Turbidity currents, however, are best characterized by variation in their grain-size distribution. Although numerical models with mixed grain-size particles exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equation to mixed grain-size particles at low calculation cost and apply the model to inverse analysis. In this study, we compared two forward models, considering uniform and mixed grain-size particles respectively. We adopted an inverse analysis based on the Simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with multi-point starts, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovered the known initial conditions of the reference data even when the optimization started far from the true solution, whereas the inverse analysis using the uniform grain-size model required starting parameters within a quite narrow range near the solution, and often converged to a local optimum significantly different from the true solution. In conclusion, we propose an optimization method based on the model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam
2017-01-01
The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) obtained images over plaster models for the assessment of mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were derived from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant results were obtained on data comparison between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. CBCT-derived images were less reliable as compared to data obtained directly from plaster models for mixed dentition analysis.
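The Tanaka-Johnston analysis applied to both the CBCT images and the plaster models rests on a simple regression with the commonly cited constants (10.5 mm for the lower arch, 11.0 mm for the upper); a one-function sketch:

```python
def tanaka_johnston(lower_incisor_sum_mm, arch="lower"):
    """Predicted mesiodistal width (mm) of the unerupted canine plus premolars
    in one quadrant, from the summed widths of the four lower incisors."""
    constant = 10.5 if arch == "lower" else 11.0
    return lower_incisor_sum_mm / 2.0 + constant

print(tanaka_johnston(23.0, "lower"))   # -> 22.0 mm
```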
NASA Technical Reports Server (NTRS)
Sherif, S.A.; Hunt, P. L.; Holladay, J. B.; Lear, W. E.; Steadham, J. M.
1998-01-01
Jet pumps are devices capable of pumping fluids to a higher pressure by inducing the motion of a secondary fluid by means of a high-speed primary fluid. The main components of a jet pump are a primary nozzle, secondary fluid injectors, a mixing chamber, a throat, and a diffuser. The work described in this paper models the flow of a two-phase primary fluid inducing a secondary liquid (saturated or subcooled) injected into the jet pump mixing chamber. The model is capable of accounting for phase transformations due to compression, expansion, and mixing, and of incorporating the effects of temperature and pressure dependency in the analysis. The approach adopted utilizes isentropic, constant-pressure mixing in the mixing chamber and at times employs iterative techniques to determine the flow conditions in the different parts of the jet pump.
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.; Tanner, J. A.
1984-01-01
An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.
Real longitudinal data analysis for real people: building a good enough mixed model.
Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E
2010-02-20
Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice to build mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to the mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models, and the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
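A small sketch of the centering/scaling and full-rank coding advice in this abstract, applied before fitting a longitudinal mixed model with statsmodels (the file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("alcohol_trial.csv")    # hypothetical data file

# Center and scale continuous predictors before fitting; this improves the
# chances of convergence, computing speed, and numerical accuracy.
df["age_c"] = (df["age"] - df["age"].mean()) / df["age"].std()
df["time_c"] = df["time"] - df["time"].mean()

# Random intercept and slope for time within subject; C() gives full-rank
# coding of the categorical treatment indicator.
m = smf.mixedlm("alcohol_use ~ age_c + time_c * C(treatment)",
                data=df, groups="subject", re_formula="~time_c")
fit = m.fit()
```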
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
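A side-by-side sketch of the two analyses compared in this study, using statsmodels with hypothetical column names; the mixed model adds a random intercept per animal to absorb the intra-class correlation that the simple linear model ignores:

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical columns: animal, sex, radius, intersections (one row per ring per neuron)
df = pd.read_csv("sholl.csv")

# Simple linear model: treats every neuron as independent, so standard errors
# (and p-values) are biased downward when neurons cluster within animals.
ols = smf.ols("intersections ~ radius + C(sex)", data=df).fit()

# Mixed effects model: random intercept per animal models the clustering.
lmm = smf.mixedlm("intersections ~ radius + C(sex)", data=df,
                  groups="animal").fit()

print(ols.summary())
print(lmm.summary())
```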
Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong
2016-01-01
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
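A conceptual sketch of the score-test structure described for GMMAT: the null logistic mixed model is fitted once per GWAS, and each variant is then tested without refitting. The inputs (fitted null probabilities `mu0` and a working covariance `Sigma`) are assumed to come from that null fit; this illustrates the form of the statistic, not the GMMAT implementation itself:

```python
import numpy as np

def score_test(g, y, mu0, Sigma, X):
    """Score test for one variant given a fitted null logistic mixed model.
    g: genotype vector; y: binary trait; X: null-model covariate matrix."""
    Si = np.linalg.inv(Sigma)
    P = Si - Si @ X @ np.linalg.solve(X.T @ Si @ X, X.T @ Si)  # projection matrix
    T = g @ (y - mu0)          # score statistic: no per-SNP model refit needed
    var_T = g @ P @ g
    return T**2 / var_T        # compare to chi-square(1) under the null
```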
A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.
Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon
2007-02-01
Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13-by-13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
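The core computation of any input-output model, mixed-unit or otherwise, is the Leontief inverse; a toy sketch with an invented 3-sector mixed-unit matrix (one row in physical units, two in monetary units):

```python
import numpy as np

# Hypothetical direct-requirements matrix A in mixed units: the first
# row/column is in physical units (kt of lead), the others in $M.
A = np.array([[0.05, 0.002, 0.001],   # lead required per unit of each sector's output
              [0.10, 0.20,  0.05 ],
              [0.02, 0.10,  0.15 ]])
d = np.array([1.0, 50.0, 30.0])       # final demand: 1 kt lead, $50M, $30M

# Total (direct + indirect) output x solves (I - A) x = d.
x = np.linalg.solve(np.eye(3) - A, d)
```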
Analysis of rocket engine injection combustion processes
NASA Technical Reports Server (NTRS)
Salmon, J. W.; Saltzman, D. H.
1977-01-01
Mixing methodology improvements were accomplished for the JANNAF DER and CICM injection/combustion analysis computer programs. The ZOM-plane prediction model was improved for installation into the new standardized DER computer program. An intra-element mixing model development approach was recommended for gas/liquid coaxial injection elements for possible future incorporation into the CICM computer program.
USDA-ARS?s Scientific Manuscript database
Mixing models have been used to predict sediment source contributions. An inherent problem of these mixing models is that they limit the number of sediment sources. The objective of this study is to develop and evaluate a new method using Discriminant Function Analysis (DFA) to fingerprint sediment source contr...
Effects of imperfect mixing on low-density polyethylene reactor dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, C.M.; Dihora, J.O.; Ray, W.H.
1998-07-01
Earlier work considered the effect of feed conditions and controller configuration on the runaway behavior of LDPE autoclave reactors assuming a perfectly mixed reactor. This study provides additional insight on the dynamics of such reactors by using an imperfectly mixed reactor model and bifurcation analysis to show the changes in the stability region when there is imperfect macroscale mixing. The presence of imperfect mixing substantially increases the range of stable operation of the reactor and makes the process much easier to control than for a perfectly mixed reactor. The results of model analysis and simulations are used to identify some of the conditions that lead to unstable reactor behavior and to suggest ways to avoid reactor runaway or reactor extinction during grade transitions and other process operation disturbances.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
MIXOR: a computer program for mixed-effects ordinal regression analysis.
Hedeker, D; Gibbons, R D
1996-03-01
MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
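A compact sketch of the kind of marginal likelihood MIXOR maximizes, for a random-intercept ordinal logistic model with three categories, approximated by Gauss-Hermite quadrature. MIXOR itself uses a Fisher-scoring solution with a Cholesky parameterization of the random-effects covariance; this simplified version just hands the likelihood to a general-purpose optimizer:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, roots_hermite

def neg_loglik(params, y, x, cluster):
    """Random-intercept ordinal (3-category) logistic model; the marginal
    likelihood integrates the random intercept out by Gauss-Hermite quadrature."""
    beta, c1, log_gap, log_sigma = params
    cuts = np.array([-np.inf, c1, c1 + np.exp(log_gap), np.inf])  # ordered cutpoints
    sigma = np.exp(log_sigma)
    z, w = roots_hermite(15)                    # quadrature nodes and weights
    b = np.sqrt(2.0) * sigma * z                # random-intercept values at the nodes
    total = 0.0
    for cid in np.unique(cluster):
        idx = cluster == cid
        eta = x[idx] * beta
        upper = expit(cuts[y[idx] + 1][:, None] - eta[:, None] - b[None, :])
        lower = expit(cuts[y[idx]][:, None] - eta[:, None] - b[None, :])
        lik = np.prod(upper - lower, axis=0) @ (w / np.sqrt(np.pi))
        total += np.log(lik)
    return -total

rng = np.random.default_rng(0)
cluster = np.repeat(np.arange(50), 4)           # 50 clusters of 4 observations
x = rng.normal(size=200)
y = rng.integers(0, 3, size=200)                # ordinal outcome coded 0, 1, 2
fit = minimize(neg_loglik, x0=[0.0, -0.5, 0.0, 0.0],
               args=(y, x, cluster), method="Nelder-Mead")
```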
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
2017-09-03
Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles in the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on comparisons with analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model performs very well for a wide range of flow problems.
Extending existing structural identifiability analysis methods to mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2018-01-01
The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
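The Taylor series (exhaustive summary) idea is easy to demonstrate symbolically; a sketch with SymPy for a one-compartment model, which the mixed-effects extension would treat with parameters as functions of random variables. The model choice here is illustrative, not necessarily one of the paper's examples:

```python
import sympy as sp

# One-compartment example: dx/dt = -k*x, x(0) = d (known dose), y = x/V.
k, V, d, t = sp.symbols('k V d t', positive=True)
x = d * sp.exp(-k * t)          # closed-form solution of the ODE
y = x / V                        # observation function

# Exhaustive summary: successive Taylor coefficients of the output at t = 0.
summary = [sp.simplify(sp.diff(y, t, n).subs(t, 0)) for n in range(3)]
print(summary)   # -> [d/V, -d*k/V, d*k**2/V]

# With d known, d/V determines V and then -d*k/V determines k uniquely,
# so (k, V) is structurally (globally) identifiable from this observation.
# In the mixed-effects setting, k and V become functions of random effects and
# identifiability is judged from the statistical moments of these coefficients.
```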
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
NASA Astrophysics Data System (ADS)
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Because of the diversity of engine failure modes, a single Weibull distribution model carries a large error; by contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimates more accurate, greatly improving the precision of the mixed-distribution reliability model. All of these features are advantageous for popularizing the Weibull distribution model in engineering applications.
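A minimal sketch of fitting a two-component mixed Weibull distribution by maximum likelihood with SciPy; the weight and the shape/scale parameters are transformed to keep them in their valid ranges, and the synthetic failure times mimic an early-failure plus wear-out mix. The paper's weight-coefficient and correlation-coefficient optimization scheme is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_loglik(params, t):
    """Two-component mixed Weibull: weight w on (k1, lam1), 1-w on (k2, lam2)."""
    w = 1.0 / (1.0 + np.exp(-params[0]))       # logistic keeps weight in (0, 1)
    k1, lam1, k2, lam2 = np.exp(params[1:])    # exp keeps shapes/scales positive
    pdf = (w * weibull_min.pdf(t, k1, scale=lam1)
           + (1 - w) * weibull_min.pdf(t, k2, scale=lam2))
    return -np.sum(np.log(pdf + 1e-300))       # guard against log(0) during search

rng = np.random.default_rng(3)
t = np.concatenate([rng.weibull(1.2, 300) * 500,     # early failure mode
                    rng.weibull(3.0, 700) * 2000])   # wear-out failure mode
fit = minimize(neg_loglik,
               x0=[0.0, np.log(1.0), np.log(600.0), np.log(2.5), np.log(1800.0)],
               args=(t,), method="Nelder-Mead")
```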
Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model
NASA Astrophysics Data System (ADS)
Megann, A.; Nurser, G.
2014-12-01
Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al., 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from an analysis of GO5.0 based on the isopycnal watermass analysis of Lee et al (2002), which indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
ERIC Educational Resources Information Center
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Mixing with applications to inertial-confinement-fusion implosions
NASA Astrophysics Data System (ADS)
Rana, V.; Lim, H.; Melvin, J.; Glimm, J.; Cheng, B.; Sharp, D. H.
2017-01-01
Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.
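A dimensionless sketch of a generic buoyancy-drag model of the type coupled to the 1D simulations described here: the mixing-zone edge h(t) evolves under buoyancy (Atwood number times acceleration) opposed by a drag term with a phenomenological coefficient. The exact equation form and coefficient values used in the paper are not reproduced; this only illustrates the structure:

```python
from scipy.integrate import solve_ivp

def buoyancy_drag(t, s, A=0.5, g=1.0, Cd=2.5):
    """State s = (h, v): mixing-zone edge position and velocity.
    dv/dt = A*g - Cd * v*|v| / h  (buoyancy minus drag), dh/dt = v."""
    h, v = s
    return [v, A * g - Cd * v * abs(v) / max(h, 1e-12)]

# Dimensionless illustration: small initial perturbation, constant acceleration.
sol = solve_ivp(buoyancy_drag, (0.0, 10.0), [0.01, 0.0], rtol=1e-8)
bubble_height = sol.y[0]
```

Coupling such an edge model to a 1D hydrodynamics code lets the mix-zone width feed back on the implosion thermodynamics without resolving the instability in 2D or 3D.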
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
NASA Astrophysics Data System (ADS)
Gürcan, Eser Kemal
2017-04-01
The most commonly used methods for analyzing time-dependent data are multivariate analysis of variance (MANOVA) and nonlinear regression models. The aim of this study was to compare several MANOVA techniques with a nonlinear mixed modeling approach for investigating growth differentiation in female and male Japanese quail. Weekly individual body weight data of 352 male and 335 female quail from hatch to 8 weeks of age were used to perform the analyses. When all of the analyses are evaluated together, nonlinear mixed modeling proves superior to the other techniques because it also reveals the individual variation. In addition, the profile analysis provides important information.
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
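To fix the definitions being compared: Dalton's model sums partial pressures at the mixture volume, while Amagat's sums partial volumes at the mixture pressure. For ideal gases the two coincide; the differences the study reports arise with real-gas equations of state. A sketch of the definitions plus the Latin hypercube sampling of the five varied inputs (ranges invented for illustration):

```python
import numpy as np
from scipy.stats import qmc

R = 8.314  # J/(mol K)

def dalton_pressure(n, T, V_total):
    """Dalton: each ideal-gas component fills the whole volume; p = sum of p_i."""
    return sum(ni * R * T / V_total for ni in n)

def amagat_volume(n, T, p):
    """Amagat: each component held at the mixture pressure; V = sum of V_i."""
    return sum(ni * R * T / p for ni in n)

# Latin hypercube over the five varied inputs (illustrative ranges only):
# driver p, driver rho, test p, test rho, He mole fraction.
sampler = qmc.LatinHypercube(d=5, seed=0)
lo = [1e5, 0.1, 1e4, 0.05, 0.3]
hi = [5e6, 5.0, 1e5, 1.00, 0.7]
samples = qmc.scale(sampler.random(256), lo, hi)
```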
Analyzing Mixed-Dyadic Data Using Structural Equation Models
ERIC Educational Resources Information Center
Peugh, James L.; DiLillo, David; Panuzio, Jillian
2013-01-01
Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional…
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; D'Costa, Joseph F.
1991-01-01
This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
Analysis and modeling of subgrid scalar mixing using numerical data
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to the interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.
Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun
2013-09-01
By using branch analysis data from 955 standard branches on 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation at Mengjiagang Forest Farm in Heilongjiang Province, Northeast China, and based on linear mixed-effects model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Accounting for the tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structures. Correlation structures, including the compound symmetry structure (CS), the first-order autoregressive structure [AR(1)], and the first-order autoregressive and moving average structure [ARMA(1,1)], were then added to the optimal branch-size mixed-effects model. AR(1) significantly improved the fitting precision of the branch diameter and length mixed-effects models, but none of the three structures improved the precision of the branch angle mixed-effects model. To describe heteroscedasticity when building the mixed-effects model, the CF1 and CF2 functions were added to the branch mixed-effects model. The CF1 function significantly improved the fitting of the branch angle mixed model, whereas the CF2 function significantly improved the fitting of the branch diameter and length mixed models. Model validation confirmed that the mixed-effects model improves prediction precision compared with the traditional regression model for branch size prediction in Pinus koraiensis plantations.
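A minimal R sketch of this kind of branch model, assuming hypothetical variable names (branch_diam, dbh, rel_height, tree_id) and using nlme, where corAR1 supplies the first-order autoregressive within-tree correlation and varPower stands in for the unnamed CF1/CF2 variance functions:

    library(nlme)
    fit <- lme(branch_diam ~ dbh + rel_height,            # fixed effects
               random = ~ 1 | tree_id,                    # tree-level random intercept
               correlation = corAR1(form = ~ 1 | tree_id),# AR(1) within-tree errors
               weights = varPower(),                      # variance function for
               data = branches)                           # heteroscedasticity
    summary(fit)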
Simulating the Cyclone Induced Turbulent Mixing in the Bay of Bengal using COAWST Model
NASA Astrophysics Data System (ADS)
Prakash, K. R.; Nigam, T.; Pant, V.
2017-12-01
Mixing in the upper oceanic layers (up to a few tens of meters from the surface) is an important process for understanding the evolution of sea surface properties. Enhanced mixing due to strong wind forcing at the surface deepens the mixed layer, which affects the air-sea exchange of heat and momentum fluxes and modulates sea surface temperature (SST). In the present study, we used the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model to demonstrate and quantify the enhanced cyclone-induced turbulent mixing during a severe cyclonic storm. The COAWST model was configured over the Bay of Bengal (BoB) and used to simulate the atmospheric and oceanic conditions prevailing during tropical cyclone (TC) Phailin, which occurred over the BoB during 10-15 October 2013. The simulated cyclone track was validated against the IMD best track, and the simulated SST against daily AVHRR SST data. Validation shows that the simulated track and intensity, SST, and salinity were in good agreement with observations, and that the cyclone-induced cooling of the sea surface was well captured by the model. Model simulations show a considerable deepening (by 10-15 m) of the mixed layer and shoaling of the thermocline during TC Phailin. Power spectrum analysis of the zonal and meridional baroclinic current components shows the strongest energy at 14 m depth. Model results were analyzed to investigate the non-uniform energy distribution in the water column from the surface down to the thermocline depth. Rotary spectra analysis highlights the downward direction of turbulent mixing during the TC Phailin period. Model simulations were used to quantify and interpret the near-inertial mixing generated by the strong cyclone-induced wind stress and the associated near-inertial energy. These near-inertial oscillations are responsible for enhancing the mixing that operates on the strong post-monsoon (October-November) stratification in the BoB.
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
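For orientation, the IOU covariance under one common parameterization (e.g., Taylor, Cumberland and Sy, 1994) can be computed directly; this R sketch illustrates that formula and is not the Stata implementation described above:

    # Cov(W(s), W(t)) for an integrated OU process with decay alpha and
    # diffusion sigma; larger alpha corresponds to weaker derivative tracking.
    iou_cov <- function(s, t, alpha, sigma) {
      sigma^2 / (2 * alpha^3) *
        (2 * alpha * pmin(s, t) + exp(-alpha * s) + exp(-alpha * t) -
           1 - exp(-alpha * abs(s - t)))
    }
    times <- c(0.5, 1, 2, 4)                        # illustrative visit times
    V <- outer(times, times, iou_cov, alpha = 0.8, sigma = 1)  # covariance matrix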
An Analysis of Results of a High-Resolution World Ocean Circulation Model.
1988-03-01
[Table-of-contents fragment only: twenty-level experiments with baseline (Laplacian mixing) and isopycnal mixing integrations, plus one-half degree, twenty-level experiments including a baseline (three-year interior restoring) integration.]
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-03-15
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
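A minimal sketch of the recommended approach in R, assuming a hypothetical data frame durations with one row per observed duration; the coxme package fits a Cox model with a per-subject random intercept (frailty):

    library(survival)
    library(coxme)
    # Mixed-effects Cox model: fixed experimental condition effect plus a
    # random intercept for each subject to handle repeated durations.
    fit <- coxme(Surv(duration, status) ~ condition + (1 | subject),
                 data = durations)
    print(fit)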
Global analysis of fermion mixing with exotics
NASA Technical Reports Server (NTRS)
Nardi, Enrico; Roulet, Esteban; Tommasini, Daniele
1991-01-01
Limits on deviations of the lepton and quark weak couplings from their standard-model values are analyzed in a general class of models in which the known fermions are allowed to mix with new heavy particles having exotic SU(2) x U(1) quantum number assignments (left-handed singlets or right-handed doublets). These mixings appear in many extensions of the electroweak theory, such as models with mirror fermions, E(6) models, etc. The results update previous analyses and considerably improve the existing bounds.
Formulation of Water Quality Models for Streams, Lakes and Reservoirs: Modeler’s Perspective
1989-07-01
...dilution of effluent plumes. These mixing models also address the question of whether a pollutant has been sufficiently diluted to meet discharge... PS releases, e.g., DISPER or TADPOL (Almquist et al. 1977) for passive mixing in the far field, and various jet and plume mixing models in uniform or... Experiment Station, Vicksburg, MS. Harleman, D. R. F. 1982 (Mar). "Hydrothermal Analysis of Lakes and Reservoirs," Journal of the Hydraulics Division...
Fully-coupled analysis of jet mixing problems. Part 1. Shock-capturing model, SCIPVIS
NASA Technical Reports Server (NTRS)
Dash, S. M.; Wolf, D. E.
1984-01-01
A computational model, SCIPVIS, is described that predicts the multiple-cell shock structure in imperfectly expanded, turbulent, axisymmetric jets. The model spatially integrates the parabolized Navier-Stokes jet mixing equations using a shock-capturing approach in supersonic flow regions and a pressure-split approximation in subsonic flow regions. The regions are coupled using a viscous-characteristic procedure. Turbulence processes are represented via the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive analysis of the wake-like mixing process occurring behind Mach discs are handled in a rigorous manner. Calculations are presented exhibiting the fundamental interactive processes occurring in supersonic jets, and the model is assessed via comparisons with detailed laboratory data for a variety of under- and overexpanded jets.
Analysis and testing of high entrainment single nozzle jet pumps with variable mixing tubes
NASA Technical Reports Server (NTRS)
Hickman, K. E.; Hill, P. G.; Gilbert, G. B.
1972-01-01
An analytical model was developed to predict the performance characteristics of axisymmetric single-nozzle jet pumps with variable area mixing tubes. The primary flow may be subsonic or supersonic. The computer program uses integral techniques to calculate the velocity profiles and the wall static pressures that result from the mixing of the supersonic primary jet and the subsonic secondary flow. An experimental program was conducted to measure mixing tube wall static pressure variations, velocity profiles, and temperature profiles in a variable area mixing tube with a supersonic primary jet. Static pressure variations were measured at four different secondary flow rates. These test results were used to evaluate the analytical model. The analytical results compared well to the experimental data. Therefore, the analysis is believed to be ready for use to relate jet pump performance characteristics to mixing tube design.
Correcting for population structure and kinship using the linear mixed model: theory and extensions.
Hoffman, Gabriel E
2013-01-01
Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, the effective degrees of freedom, which serves as a metric of model complexity, and a novel low-rank linear mixed model (LRLMM) that learns the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.
Perspectives On Dilution Jet Mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Srinivasan, R.
1990-01-01
NASA recently completed a program of measurements and modeling of the mixing of transverse jets with a ducted crossflow, motivated by the need to design or tailor the temperature pattern at the combustor exit in gas turbine engines. The objectives of the program were to identify the dominant physical mechanisms governing mixing, to extend empirical models to provide near-term predictive capability, and to compare numerical code calculations with data to guide future analysis improvement efforts.
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of the standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
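As a hedged illustration of the recommended mixed-effects approach (the paper demonstrates it in SAS; this is an equivalent R sketch with hypothetical column names, the data frame eyes having two rows per patient, one per eye):

    library(lme4)
    library(lmerTest)   # adds p-values for the fixed effects
    # Patient-level random intercept captures the inter-eye correlation.
    fit <- lmer(refraction ~ cnv_eye + age + sex + (1 | patient_id),
                data = eyes)
    summary(fit)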
A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of the metabolic interactions occurring during simultaneous inhalation exposures to the organic solvents chloroform and trichloroethylene (TCE). Visualization-based se...
Analysis of lithology: Vegetation mixes in multispectral images
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M.; Adams, J. D.
1982-01-01
Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
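A minimal sketch of the simple linear ("checkerboard") mixing model in R; the end-member matrix and pixel spectrum are made-up numbers, and non-negativity constraints are ignored for brevity:

    # Columns of E are pure end-member spectra (one value per band);
    # a mixed pixel y is modeled as a weighted sum of the columns.
    unmix <- function(y, E) {
      f <- coef(lm(y ~ E - 1))   # unconstrained least-squares abundances
      f / sum(f)                 # renormalize so fractions sum to one
    }
    E <- cbind(rock = c(0.30, 0.45, 0.50), veg = c(0.05, 0.40, 0.20))
    y <- 0.7 * E[, "rock"] + 0.3 * E[, "veg"]   # synthetic 70/30 mixture
    unmix(y, E)                                  # recovers ~0.7 and ~0.3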
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
Development of stable isotope mixing models in ecology - Dublin
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Historical development of stable isotope mixing models in ecology
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Perth
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Fremantle
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Sydney
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun
2007-09-01
Near infrared spectroscopy is widely used as a quantitative method; the main multivariate techniques are regression methods used to build prediction models. However, the accuracy of the results is affected by many factors. In the present paper, the influence of differing sample roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise, the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models, including the exponential, Weibull, and Gompertz proportional hazards (PH) models and the log-logistic, log-normal, and generalized gamma accelerated failure time models, to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models, utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Okong'o, Nora; Bellan, Josette
2005-01-01
Models for large eddy simulation (LES) are assessed on a database obtained from direct numerical simulations (DNS) of supercritical binary-species temporal mixing layers. The analysis is performed at the DNS transitional states for heptane/nitrogen, oxygen/hydrogen and oxygen/helium mixing layers. The incorporation of simplifying assumptions that are validated on the DNS database leads to a set of LES equations that requires only models for the subgrid scale (SGS) fluxes, which arise from filtering the convective terms in the DNS equations. Constant-coefficient versions of three different models for the SGS fluxes are assessed and calibrated. The Smagorinsky SGS-flux model shows poor correlations with the SGS fluxes, while the Gradient and Similarity models have high correlations, as well as good quantitative agreement with the SGS fluxes when the calibrated coefficients are used.
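For reference, in standard LES notation (a sketch of the usual definitions, not equations quoted from the paper): with an overbar denoting filtering at width \Delta, the SGS scalar flux and the gradient and similarity closures for it take the forms

    \tau_j = \overline{u_j \phi} - \bar{u}_j \bar{\phi},
    \qquad
    \tau_j^{\mathrm{grad}} \approx \frac{\Delta^2}{12}\,
      \frac{\partial \bar{u}_j}{\partial x_k}\,
      \frac{\partial \bar{\phi}}{\partial x_k},
    \qquad
    \tau_j^{\mathrm{sim}} \approx C \left(\widehat{\bar{u}_j \bar{\phi}}
      - \hat{\bar{u}}_j \hat{\bar{\phi}}\right),

where the hat denotes a coarser test filter and C is the calibrated model coefficient.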
Hawaii Ocean Mixing Experiment: Program Summary
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)
2002-01-01
It is becoming apparent that insufficient mixing occurs in the pelagic ocean to maintain the large-scale thermohaline circulation. Observed mixing rates fall a factor of ten short of classical indices such as Munk's "Abyssal Recipe." The growing suspicion is that most of the mixing in the sea occurs near topography. Exciting recent observations by Polzin et al., among others, fuel this speculation. If topographic mixing is indeed important, it must be acknowledged that its geographic distribution, both lateral and vertical, is presently unknown. The vertical distribution of mixing plays a critical role in the Stommel-Arons model of the ocean interior circulation. In recent numerical studies, Samelson demonstrates the extreme sensitivity of flow in the abyssal ocean to the spatial distribution of mixing. We propose to study the topographic mixing problem through an integrated program of modeling and observation. We focus on tidally forced mixing, as the global energetics of this process have received (and are receiving) considerable study. Also, the well-defined frequency of the forcing and the unique geometry of tidal scattering serve to focus the experiment design. The Hawaiian Ridge is selected as the study site. Strong interaction between the barotropic tide and the Ridge is known to take place. The goals of the Hawaii Ocean Mixing Experiment (HOME) are to quantify the rate of tidal energy loss to mixing at the Ridge and to identify the mechanisms by which energy is lost and mixing generated. We are challenged to develop a picture sufficiently comprehensive that the results can be generalized from Hawaii to the global ocean. To achieve these goals, investigators from five institutions have designed HOME, a program of historical data analysis, modeling, and field observation. The Analysis and Modeling efforts support the design of the field experiments. As the program progresses, a global model of the barotropic (depth-independent) tide and two models of the baroclinic (depth-varying) tide, all validated with near-Ridge data, will be applied to reveal the mechanisms of tidal energy conversion along the Ridge and to allow spatial and temporal integration of the rate of conversion. Field experiments include a survey to identify "hot spots" of enhanced mixing and barotropic-to-baroclinic conversion, a Nearfield study identifying the dominant mechanisms responsible for topographic mixing, and a Farfield program quantifying the barotropic energy flux convergence at the Ridge and the flux divergence associated with the radiation of low-mode baroclinic waves. The difference is a measure of the tidal power available for mixing at the Ridge. Field work is planned from 2000 through 2002, with analysis and modeling efforts extending through early 2006. If successful, HOME will yield an understanding of the dominant topographic mixing processes applicable throughout the global ocean. It will advance understanding of two central problems in ocean science: the maintenance of the abyssal stratification and the dissipation of the tides. HOME data will be used to improve the parameterization of dissipation in models that presently assimilate TOPEX-POSEIDON observations. The improved understanding of the dynamics and spatial distribution of mixing processes will benefit future long-term programs such as CLIVAR.
CFD simulation of gas and non-Newtonian fluid two-phase flow in anaerobic digesters.
Wu, Binxin
2010-07-01
This paper presents an Eulerian multiphase flow model that characterizes gas mixing in anaerobic digesters. In the model development, liquid manure is assumed to be water or a non-Newtonian fluid whose properties depend on total solids (TS) concentration. To establish appropriate models for different TS levels, twelve turbulence models are evaluated by comparing the frictional pressure drops of gas and non-Newtonian fluid two-phase flow in a horizontal pipe obtained from computational fluid dynamics (CFD) with those from a correlation analysis. The commercial CFD software Fluent 12.0 is employed to simulate the multiphase flow in the digesters. The simulation results for a small-sized digester are validated against experimental data from the literature. Comparison of two gas mixing designs in a medium-sized digester demonstrates that mixing intensity is insensitive to TS in confined gas mixing, whereas it decreases significantly with increasing TS in unconfined gas mixing. Moreover, comparison of three mixing methods indicates that gas mixing is more efficient than mixing by pumped circulation but less efficient than mechanical mixing.
Modeling condensation with a noncondensable gas for mixed convection flow
NASA Astrophysics Data System (ADS)
Liao, Yehong
2007-05-01
This research theoretically developed a novel mixed convection model for condensation with a noncondensable gas. The model comprises three components: a convection regime map, a mixed convection correlation, and a generalized diffusion layer model. These components were developed to be consistent with the three-level methodology in MELCOR. The overall mixed convection model was implemented in MELCOR and satisfactorily validated with data covering a wide variety of test conditions. In the development of the convection regime map, two analyses with approximations of the local similarity method were performed to solve the multi-component two-phase boundary layer equations. The first analysis studied the effects of the bulk velocity on a basic natural convection condensation process and set up conditions to distinguish natural convection from mixed convection. It was found that the superimposed velocity increases condensation heat transfer by sweeping away the noncondensable gas accumulated at the condensation boundary. The second analysis studied the effects of the buoyancy force on a basic forced convection condensation process and set up conditions to distinguish forced convection from mixed convection. It was found that the superimposed buoyancy force increases condensation heat transfer by thinning the liquid film and creating a steeper noncondensable gas concentration profile near the condensation interface. In the development of the mixed convection correlation accounting for suction effects, numerical data were obtained from boundary layer analysis for the three convection regimes and used to fit a curve for the Nusselt number of the mixed convection regime as a function of the Nusselt numbers of the natural and forced convection regimes. In the development of the generalized diffusion layer model, the driving potential for mass transfer was expressed as the temperature difference between the bulk and the liquid-gas interface using the Clausius-Clapeyron equation. The model was developed on a mass basis instead of a molar basis to be consistent with the general conservation equations. It was found that vapor diffusion is driven not only by a gradient of the molar fraction but also by a gradient of the mixture molecular weight across the diffusion layer.
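The abstract does not state the fitted functional form; one commonly used blending of this type (an assumption here, not the dissertation's correlation) is

    \mathrm{Nu}_{\mathrm{mix}} = \left(\mathrm{Nu}_{\mathrm{forced}}^{\,n}
      + \mathrm{Nu}_{\mathrm{natural}}^{\,n}\right)^{1/n}, \qquad n \approx 3,

which recovers the forced and natural limits when one mechanism dominates.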
COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS
Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
The use of mixed effects ANCOVA to characterize vehicle emission profiles
DOT National Transportation Integrated Search
2000-09-01
A mixed effects analysis of covariance model to characterize mileage dependent emissions profiles for any given group of vehicles having a common model design is used in this paper. These types of evaluations are used by the U.S. Environmental Protec...
Joint physical and numerical modeling of water distribution networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Adam; O'Hern, Timothy John; Orear, Leslie Jr.
2009-01-01
This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network, conducted during the last year of a 3-year project. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High-resolution analysis of turbulent mixing is carried out via high-speed photography as well as 3D finite-volume based large eddy simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. Preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.
USDA-ARS?s Scientific Manuscript database
The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...
NASA Technical Reports Server (NTRS)
Menon, Suresh
1992-01-01
An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on pollutant emissions from these engines. A combustor design with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary-zone product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the second stage. Numerical predictions using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used to study multispecies mixing as well as propane-air and hydrogen-air nonpremixed jet flames, and has been used to predict NO(x) production in the mixing region. Comparison with available experimental data shows good agreement, thereby validating the mixing model. With this demonstration, the mixing model is ready to be implemented in conjunction with steady-state prediction methods to provide an improved engineering design analysis tool.
Arnault, Denise Saint; Fetters, Michael D.
2013-01-01
Mixed methods research has made significant inroads in the effort to examine complex health-related phenomena. However, little has been published on the funding of mixed methods research projects. This paper addresses that gap by presenting an example of an NIMH-funded project using a mixed methods QUAL-QUAN triangulation design entitled “The Mixed-Method Analysis of Japanese Depression.” We present the Cultural Determinants of Health Seeking model that framed the study, the specific aims, the quantitative and qualitative data sources informing the study, and an overview of the mixing of the two studies. Finally, we examine reviewers' comments and our insights related to writing a mixed methods proposal that succeeds in achieving R01-level funding. PMID:25419196
Converting isotope ratios to diet composition - the use of mixing models - June 2010
One application of stable isotope analysis is to reconstruct diet composition based on isotopic mass balance. The isotopic value of a consumer’s tissue reflects the isotopic values of its food sources proportional to their dietary contributions. Isotopic mixing models are used ...
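A minimal sketch of the standard two-source, one-isotope mass-balance model in R (the delta values are illustrative numbers only):

    # If d_mix = f * d_a + (1 - f) * d_b, the source-A fraction is:
    two_source_fraction <- function(d_mix, d_a, d_b) (d_mix - d_b) / (d_a - d_b)
    two_source_fraction(d_mix = -24, d_a = -28, d_b = -12)   # -> 0.75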
Experimental Testing and Modeling Analysis of Solute Mixing at Water Distribution Pipe Junctions
Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. Here we have categorized pipe junctions into five hydraulic types, for which flow distribution factors and analytical equations for describing the solute mixing ...
Remedying excessive numerical diapycnal mixing in a global 0.25° NEMO configuration
NASA Astrophysics Data System (ADS)
Megann, Alex; Nurser, George; Storkey, Dave
2016-04-01
If numerical ocean models are to simulate faithfully the upwelling branches of the global overturning circulation, they need a good representation of the diapycnal mixing processes that contribute to the conversion of the bottom and deep waters produced at high latitudes into less dense water masses. It is known that the default class of depth-coordinate ocean models, such as NEMO and MOM5, as used in many state-of-the-art coupled climate models and Earth System Models, has excessive numerical diapycnal mixing resulting from irreversible advection across coordinate surfaces. The GO5.0 configuration of the NEMO ocean model, on an "eddy-permitting" 0.25° global grid, is used in the current UK GC1 and GC2 coupled models. Megann and Nurser (2016) have shown, using the isopycnal watermass analysis of Lee et al. (2002), that spurious numerical mixing is substantially larger than the explicit mixing prescribed by the mixing scheme used by the model. It will be shown that increasing the biharmonic viscosity by a factor of three tends to suppress small-scale noise in the model's vertical velocity. This significantly reduces the numerical mixing in GO5.0, and we shall show that it also leads to large-scale improvements in model biases.
NASA Astrophysics Data System (ADS)
Nakashima, Yoshito; Komatsubara, Junko
Unconsolidated soft sediments deform and mix in complex ways under seismically induced fluidization. Such geological soft-sediment deformation structures (SSDSs) recorded in boring cores were imaged by X-ray computed tomography (CT), which enables visualization of the inhomogeneous spatial distribution of iron-bearing mineral grains, strong X-ray absorbers, in the deformed strata. Multifractal analysis was applied to two-dimensional (2D) CT images with various degrees of deformation and mixing. The results show that the distribution of the iron-bearing mineral grains is multifractal for less deformed/mixed strata and almost monofractal for fully mixed (i.e., almost homogenized) strata. Computer simulations of the deformation of real and synthetic digital images were performed using the egg-beater flow model. The simulations successfully reproduced the transformation from multifractal spectra into almost monofractal spectra (i.e., near-convergence to a single point) as deformation/mixing intensity increases. The present study demonstrates that multifractal analysis coupled with X-ray CT and the mixing flow model is useful for quantifying the complexity of seismically induced SSDSs, providing a novel method for evaluating cores for seismic risk assessment.
Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D
NASA Technical Reports Server (NTRS)
Wolf, D. E.; Sinha, N.; Dash, S. M.
1988-01-01
Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped Cartesian or cylindrical coordinates, employing the explicit MacCormack algorithm. A pressure-split variant of this algorithm is employed in subsonic regions, with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the ks and kW turbulence models, and employs a two-component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid Cartesian/cylindrical grid procedure for rectangular jets that moves the hybrid coordinate origin toward the flow origin as the jet transitions from a rectangular to a circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting the overall capabilities of SCIP3D.
Multivariate Models for Normal and Binary Responses in Intervention Studies
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen
2016-01-01
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
Using Mixed-Effects Structural Equation Models to Study Student Academic Development.
ERIC Educational Resources Information Center
Pike, Gary R.
1992-01-01
A study at the University of Tennessee Knoxville used mixed-effect structural equation models incorporating latent variables as an alternative to conventional methods of analyzing college students' (n=722) first-year-to-senior academic gains. Results indicate, contrary to previous analysis, that coursework and student characteristics interact to…
USDA-ARS?s Scientific Manuscript database
A nondestructive and sensitive method was developed to detect the presence of mixed pesticides of acetamiprid, chlorpyrifos and carbendazim on apples by surface-enhanced Raman spectroscopy (SERS). Self-modeling mixture analysis (SMA) was used to extract and identify the Raman spectra of individual p...
Wu, Guo Hao; Ehm, Alexandra; Bellone, Marco; Pradelli, Lorenzo
2017-01-01
A prior meta-analysis showed favorable metabolic effects of structured triglyceride (STG) lipid emulsions in surgical and critically ill patients compared with mixed medium-chain/long-chain triglyceride (MCT/LCT) emulsions. Limited data on clinical outcomes precluded pharmacoeconomic analysis. We performed an updated meta-analysis and developed a cost model to compare overall costs for STGs vs MCT/LCTs in Chinese hospitals. We searched Medline, Embase, Wanfang Data, the China Hospital Knowledge Database, and Google Scholar for clinical trials comparing STGs to mixed MCT/LCTs in surgical or critically ill adults published between October 10, 2013 and September 19, 2015. Newly identified studies were pooled with the prior studies and an updated meta-analysis was performed. A deterministic simulation model was used to compare the effects of STGs and mixed MCT/LCTs on Chinese hospital costs. The literature search identified six new trials, resulting in a total of 27 studies in the updated meta-analysis. Statistically significant differences favoring STGs were observed for cumulative nitrogen balance, prealbumin and albumin concentrations, plasma triglycerides, and liver enzymes. STGs were also associated with a significant reduction in the length of hospital stay (mean difference, -1.45 days; 95% confidence interval, -2.48 to -0.43; p=0.005) versus mixed MCT/LCTs. Cost analysis demonstrated a net cost benefit of ¥675 compared with mixed MCT/LCTs. STGs are associated with improvements in metabolic function and reduced length of hospitalization in surgical and critically ill patients compared with mixed MCT/LCT emulsions. Cost analysis using data from Chinese hospitals showed a corresponding cost benefit.
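A hedged sketch of the pooling step for one such outcome (length of stay) in R with the metafor package; the data frame and its columns are hypothetical:

    library(metafor)
    # Random-effects meta-analysis of mean differences in hospital stay (days);
    # m/sd/n columns hold each trial's mean, SD, and sample size per arm.
    res <- rma(measure = "MD",
               m1i = m_stg, sd1i = sd_stg, n1i = n_stg,
               m2i = m_mct, sd2i = sd_mct, n2i = n_mct,
               data = trials, method = "REML")
    summary(res)   # pooled mean difference with 95% CI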
Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R_r^2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If R_r^2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R_r^2 away from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects: the model can be estimated using the dummy variable approach. We derive explicit formulas for R_r^2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine. PMID:23750070
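As a concrete special case, for a random-intercept model the proportion of conditional variance attributable to the random effect can be estimated from the fitted variance components; this R sketch is our reduction for illustration, not code or a formula quoted from the paper:

    library(lme4)
    fit <- lmer(y ~ x + (1 | cluster), data = d)   # hypothetical data frame d
    vc  <- as.data.frame(VarCorr(fit))
    s2_u <- vc$vcov[vc$grp == "cluster"]    # random-intercept variance
    s2_e <- vc$vcov[vc$grp == "Residual"]   # residual variance
    s2_u / (s2_u + s2_e)   # proportion of variance due to the random effect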
Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng
2013-05-01
Multivariate meta-analysis is useful for combining evidence from independent studies that involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis in which the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared with the conventional multivariate generalized linear mixed effects models, including the simplicity of its likelihood function, no need to specify a link function, and a closed-form expression for the distribution functions of study-specific risk differences. We investigate the finite-sample performance of this model in simulation studies and illustrate its use with an application to a multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The debonding of a skin/stringer specimen subjected to tension was studied using three-dimensional volume element modeling and computational fracture mechanics. Mixed mode strain energy release rates were calculated from finite element results using the virtual crack closure technique. The simulations revealed an increase in total energy release rate in the immediate vicinity of the free edges of the specimen. Correlation of the computed mixed-mode strain energy release rates along the delamination front contour with a two-dimensional mixed-mode interlaminar fracture criterion suggested that in spite of peak total energy release rates at the free edge the delamination would not advance at the edges first. The qualitative prediction of the shape of the delamination front was confirmed by X-ray photographs of a specimen taken during testing. The good correlation between prediction based on analysis and experiment demonstrated the efficiency of a mixed-mode failure analysis for the investigation of skin/stiffener separation due to delamination in the adherents. The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is also demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents.
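The abstract does not spell out the criterion; one two-parameter form widely used for such mixed-mode assessments (an assumption here) is the Benzeggagh-Kenane law,

    G_c = G_{Ic} + \left(G_{IIc} - G_{Ic}\right)
          \left(\frac{G_{II}}{G_I + G_{II}}\right)^{\eta},

with delamination predicted to advance where the computed total energy release rate G_I + G_II exceeds G_c at the local mode ratio.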
MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.
Hedeker, D; Gibbons, R D
1996-05-01
MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally distributed response data, including autocorrelated errors. This model can be used for the analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. The model can also be used for the analysis of clustered data, where the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is estimated jointly with the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix is reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating the usage and features of MIXREG are provided.
An IR Sounding-Based Analysis of the Saharan Air Layer in North Africa
NASA Technical Reports Server (NTRS)
Nicholls, Stephen D.; Mohr, Karen I.
2018-01-01
Intense daytime surface heating over barren-to-sparsely vegetated surfaces results in dry convective mixing. In the absence of external forcing such as mountain waves, the dry convection can produce a deep, well-mixed, nearly isentropic boundary layer that becomes a well-mixed residual layer in the evening. These well-mixed layers (WMLs) retain their distinctive mid-tropospheric thermal and humidity structure for several days. To detect the Saharan air layer (SAL) and characterize its properties, AIRS Level 2 Ver. 6 temperature and humidity products (2003-present) are evaluated against rawinsondes and compared to model analysis at each of the 55 rawinsonde stations in northern Africa. To distinguish generic WMLs from Saharan air layers (WMLs of Saharan origin), detection involved a two-step process: 1) algorithm-based detection of WMLs in dry environments (mixing ratio less than 7 g per kilogram), and 2) identification of Saharan air layers by applying Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) back trajectories to determine the history of each WML. WML occurrence rates from AIRS closely resemble those from rawinsondes, yet rates from model analysis were up to 30% higher than observations in the Sahara due to model errors. Despite the overly frequent occurrence of WMLs in model analysis, HYSPLIT trajectory analysis showed that SAL occurrence rates (given that a WML exists) from rawinsondes, AIRS, and model analysis were nearly identical. Although the number of WMLs varied among the data sources, the proportion of WMLs classified as SAL was nearly the same. The analysis of SAL bulk properties showed that AIRS and model analysis exhibited a slight warm and moist bias relative to rawinsondes in non-Saharan locations, but model analysis was notably warmer than rawinsondes and AIRS within the Sahara. The latter result is likely associated with the dearth of available data assimilated by model analysis in the Sahara. The variability of SAL thicknesses was reasonably captured by both AIRS and model analysis, but the former favors layers that are thinner than observed. Finally, further analysis of HYSPLIT trajectories revealed that, on average, fewer than 10% and 33% of all SAL back trajectories passed through regions with notable precipitation (>100 mm accumulated along the trajectory path) or aerosol optical depth (AOD greater than 0.4, the 75th percentile of AOD), respectively. Trajectory analysis indicated that only 57% of Saharan and 24% of non-Saharan WMLs are definitively of Saharan origin (Saharan requirement: two consecutive days in the Sahara, with 24 or more of those hours within 72 hours of detection). Non-SAL WMLs either originate from locally-to-regionally generated residual layers or from mid-latitude air streams that do not linger over the Sahara for a sufficient time. Initial analysis shows these non-SAL WMLs tend to be both notably cooler and slightly moister than their SAL counterparts. Continuing analysis will address what role Saharan and non-Saharan air-mass characteristics may play in local and regional environmental conditions.
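An illustrative R sketch of step 1 of the detection logic described above; the 7 g/kg dryness threshold comes from the abstract, while the 1 K near-isentropy tolerance and all variable names are assumptions, and the HYSPLIT step is only indicated:

    # theta: potential temperature (K); q: water vapor mixing ratio (g/kg),
    # both on the levels spanning a candidate layer of a sounding.
    is_wml <- function(theta, q, tol_K = 1.0, q_max = 7.0) {
      dry        <- all(q < q_max)                     # dry environment
      well_mixed <- (max(theta) - min(theta)) < tol_K  # nearly isentropic
      dry && well_mixed
      # Step 2 (not shown): HYSPLIT back trajectories decide whether a
      # detected WML is of Saharan origin, i.e., a SAL.
    }
    is_wml(theta = c(315.2, 315.5, 315.9, 315.6), q = c(3.1, 2.8, 2.5, 2.2))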
Estimating the numerical diapycnal mixing in an eddy-permitting ocean model
NASA Astrophysics Data System (ADS)
Megann, Alex
2018-01-01
Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained for the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and is particularly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2018-01-01
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, not requiring any transformation of the probabilities or any link function, having a closed-form expression for the likelihood function, and imposing no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of the bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model in simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
Using the Mixed Rasch Model to analyze data from the beliefs and attitudes about memory survey.
Smith, Everett V; Ying, Yuping; Brown, Scott W
2012-01-01
In this study, we used the Mixed Rasch Model (MRM) to analyze data from the Beliefs and Attitudes About Memory Survey (BAMS; Brown, Garry, Silver, and Loftus, 1997). We used the original 5-point BAMS data to investigate the functioning of the "Neutral" category via threshold analysis under a 2-class MRM solution. The "Neutral" category was identified as not eliciting the model-expected responses, and observations in the "Neutral" category were subsequently treated as missing data. For the BAMS data without the "Neutral" category, exploratory MRM analyses specifying up to 5 latent classes were conducted to evaluate data-model fit using the consistent Akaike information criterion (CAIC). For each of the three BAMS subscales, a two-latent-class solution provided the best fit to the mixed Rasch rating scale model. Results regarding threshold analysis, person parameters, and item fit based on the final models are presented and discussed, as well as the implications of this study.
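For readers unfamiliar with the fit index used, a minimal sketch of a CAIC comparison follows; the log-likelihoods, parameter counts, and sample size are hypothetical placeholders, not values from the study.

```r
# Consistent AIC (Bozdogan): -2*logLik + p*(log(N) + 1); smaller is better.
caic <- function(loglik, p, N) -2 * loglik + p * (log(N) + 1)
fits <- data.frame(classes = 1:5,
                   loglik  = c(-5210, -5040, -5015, -4995, -4980),  # hypothetical
                   p       = c(12, 25, 38, 51, 64))                 # hypothetical
fits$CAIC <- caic(fits$loglik, fits$p, N = 400)
fits[which.min(fits$CAIC), ]   # retained solution (here the 2-class model)
```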
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kucha, E.I.
1984-01-01
A general method was developed to calculate two-dimensional (axisymmetric) mixing of a compressible jet in a variable cross-sectional area mixing channel of an ejector. The analysis considers mixing of the primary and secondary fluids at constant pressure and incorporates finite difference approximations to the conservation equations. The flow model is based on the mixing length approximations. A detailed study and modeling of the flow phenomenon determine the optimum mixing channel geometry of the ejector. Detailed ejector performance characteristics are predicted by incorporating the flow model into a solar-powered ejector cycle cooling system computer model. Freon-11 is used as both the primary and secondary fluid. Performance of the cooling system is evaluated in terms of its coefficient of performance (COP) under a variety of operating conditions. A study is also conducted on a modified ejector cycle in which a secondary pump is introduced at the exit of the evaporator. Results show a significant improvement in overall performance over that of the conventional ejector cycle (without a secondary pump). Comparison between one- and two-dimensional analyses indicates that the two-dimensional ejector fluid flow analysis predicts better overall system performance. This is true for both the conventional and modified ejector cycles.
Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei
2017-11-01
A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation has often been overlooked in studies of the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e., rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250 Hz using mixed-effects models. Our results indicated that tissues of the rabbit, canine, and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.
Decision-case mix model for analyzing variation in cesarean rates.
Eldenburg, L; Waller, W S
2001-01-01
This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.
Alahverdjieva, V S; Grigoriev, D O; Fainerman, V B; Aksenenko, E V; Miller, R; Möhwald, H
2008-02-21
The competitive adsorption at the air-water interface from mixed adsorption layers of hen egg-white lysozyme with a non-ionic surfactant (C10DMPO) was studied and compared to the mixture with an ionic surfactant (SDS), using bubble and drop shape analysis tensiometry, ellipsometry, and surface dilational rheology. The set of equilibrium and kinetic data of the mixed solutions is described by a recently developed thermodynamic model. The theoretical description of the mixed system is based on the model parameters for the individual components.
NASA Astrophysics Data System (ADS)
Han, Yingying; Gong, Pu; Zhou, Xiang
2016-02-01
In this paper, we first apply time-varying Gaussian and SJC copula models to study the correlations and risk contagion between mixed assets in China: financial (stock), real estate, and commodity (gold) assets. We then study dynamic mixed-asset portfolio risk through VaR measurement based on the correlations computed from the time-varying copulas; this dynamic VaR-copula measurement analysis has not previously been applied to mixed-asset portfolios. The results show that the time-varying estimates fit much better than the static models, both for the correlations and risk contagion based on time-varying copulas and for the VaR-copula measurement. The time-varying VaR-SJC copula models are more accurate than the VaR-Gaussian copula models when measuring riskier portfolios at higher confidence levels. The major findings suggest that real estate and gold play a role in portfolio risk diversification, and that risk contagion and flight to quality between mixed assets occur in extreme cases; by adapting mixed-asset portfolio strategies to the time and environment, portfolio risk can be reduced.
Magezi, David A
2015-01-01
Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
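For orientation, a minimal lme4 call of the kind LMMgui constructs might look as follows; the variable names are illustrative, not from the article.

```r
library(lme4)
# Within-participant design: fixed effect of condition, with by-participant
# random intercepts and random condition slopes.
m <- lmer(rt ~ condition + (1 + condition | participant), data = dat)
summary(m)                      # fixed effects and variance components
confint(m, method = "Wald")     # quick interval estimates
```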
Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P
2009-04-01
Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.
Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R
2018-01-01
This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided.
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] away from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects: the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.
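As a hedged illustration of the simplest special case (the random intercept model), the coefficient reduces to the share of unexplained variance attributable to the random intercepts, which can be read off a fitted model's variance components; the variable names are illustrative, and this is not the paper's general formula.

```r
library(lme4)
m  <- lmer(y ~ x + (1 | cluster), data = dat)
vc <- as.data.frame(VarCorr(m))
s2_b <- vc$vcov[vc$grp == "cluster"]    # random-intercept variance
s2_e <- vc$vcov[vc$grp == "Residual"]   # residual variance
s2_b / (s2_b + s2_e)  # near 0: drop random effects; near 1: treat as fixed
```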
Interpretable inference on the mixed effect model with the Box-Cox transformation.
Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M
2017-07-10
We derived results for inference on the parameters of the marginal model of the mixed effect model with the Box-Cox transformation, based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of model misspecification. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at a specified occasion, in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provides interpretable estimates of the treatment effect. Simulation studies showed that our proposed method controlled the type I error of the statistical test for the model median difference in almost all situations and had moderate to high power compared with existing methods. We illustrate our method with cluster of differentiation 4 (CD4) data from an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
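A rough sketch of the transformation-then-fit step is given below, under the assumption that the Box-Cox parameter is chosen by profile likelihood with the Jacobian term included; the paper's robust variance estimator and median-difference inference are not reproduced, and all data and variable names are illustrative.

```r
library(nlme)
bc  <- function(y, l) if (abs(l) < 1e-8) log(y) else (y^l - 1) / l   # Box-Cox
ibc <- function(z, l) if (abs(l) < 1e-8) exp(z) else (l * z + 1)^(1 / l)
prof_ll <- function(l, dat) {            # profile log-likelihood in lambda
  m <- lme(bc(y, l) ~ group * occasion, random = ~ 1 | id,
           data = dat, method = "ML")
  as.numeric(logLik(m)) + (l - 1) * sum(log(dat$y))   # Jacobian term
}
grid <- seq(-1, 1, by = 0.1)
lhat <- grid[which.max(sapply(grid, prof_ll, dat = dat))]
fit  <- lme(bc(y, lhat) ~ group * occasion, random = ~ 1 | id, data = dat)
# Because the transform is monotone, back-transforming a fitted mean with
# ibc(., lhat) yields a model median on the original scale.
```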
Physiological effects of diet mixing on consumer fitness: a meta-analysis.
Lefcheck, Jonathan S; Whalen, Matthew A; Davenport, Theresa M; Stone, Joshua P; Duffy, J Emmett
2013-03-01
The degree of dietary generalism among consumers has important consequences for population, community, and ecosystem processes, yet the effects on consumer fitness of mixing food types have not been examined comprehensively. We conducted a meta-analysis of 161 peer-reviewed studies reporting 493 experimental manipulations of prey diversity to test whether diet mixing enhances consumer fitness based on the intrinsic nutritional quality of foods and consumer physiology. Averaged across studies, mixed diets conferred significantly higher fitness than the average of single-species diets, but not the best single prey species. More than half of individual experiments, however, showed maximal growth and reproduction on mixed diets, consistent with the predicted benefits of a balanced diet. Mixed diets including chemically defended prey were no better than the average prey type, opposing the prediction that a diverse diet dilutes toxins. Finally, mixed-model analysis showed that the effect of diet mixing was stronger for herbivores than for higher trophic levels. The generally weak evidence for the nutritional benefits of diet mixing in these primarily laboratory experiments suggests that diet generalism is not strongly favored by the inherent physiological benefits of mixing food types, but is more likely driven by ecological and environmental influences on consumer foraging.
The analysis and modelling of dilatational terms in compressible turbulence
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.
1991-01-01
It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.
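For reference, the closure proposed in this line of work is often quoted in the following form, which scales the compressible (dilatational) dissipation with the squared turbulent Mach number; this is a hedged restatement from the surrounding literature, with the constant left as a calibration parameter rather than a value taken from this abstract:

```latex
% Compressible-dissipation model: solenoidal dissipation scaled by the
% squared turbulent Mach number, alpha_1 an order-one calibrated constant.
\varepsilon_c = \alpha_1 \, M_t^2 \, \varepsilon_s ,
\qquad M_t = \frac{\sqrt{2k}}{c}
```

where ε_s is the solenoidal (incompressible) dissipation, k the turbulent kinetic energy, and c the speed of sound.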
NASA Astrophysics Data System (ADS)
Zhu, Wei; Chen, Qianghua; Wang, Yanghong; Luo, Huifu; Wu, Huan; Ma, Binwu
2018-06-01
In the laser self-mixing interference vibration measurement system, the self-mixing interference signal is usually so weak that it can hardly be distinguished from environmental noise. To solve this problem, we present a self-mixing interference optical path with a pre-feedback mirror: the mirror is added between the object and the collimator lens, the corresponding feedback light enters the inner cavity of the laser, and interference due to the pre-feedback mirror occurs, thereby establishing the pre-feedback system. A theoretical model of self-mixing interference with pre-feedback, based on the F-P model, is derived. The theoretical analysis shows that the amplitude of the interference signal intensity can be increased by a factor of 2-4. Factors influencing the system are also discussed. The experimental results show that the signal amplitude is greatly improved, in agreement with the theoretical analysis.
Using mixed methods research in medical education: basic guidelines for researchers.
Schifferdecker, Karen E; Reed, Virginia A
2009-07-01
Mixed methods research involves the collection, analysis and integration of both qualitative and quantitative data in a single study. The benefits of a mixed methods approach are particularly evident when studying new questions or complex initiatives and interactions, which is often the case in medical education research. Basic guidelines for when to use mixed methods research and how to design a mixed methods study in medical education research are not readily available. The purpose of this paper is to remedy that situation by providing an overview of mixed methods research, research design models relevant for medical education research, examples of each research design model in medical education research, and basic guidelines for medical education researchers interested in mixed methods research. Mixed methods may prove superior in increasing the integrity and applicability of findings when studying new or complex initiatives and interactions in medical education research. They deserve an increased presence and recognition in medical education research.
Consequences of an Abelian family symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramond, P.
1996-01-01
The addition of an Abelian family symmetry to the Minimal Supersymmetric Standard Model reproduces the observed hierarchies of quark and lepton masses and quark mixing angles only if it is anomalous. Green-Schwarz compensation of its anomalies requires the electroweak mixing angle to be sin²θ_W = 3/8 at the string scale, without any assumed GUT structure, suggesting a superstring origin for the standard model. The analysis is extended to neutrino masses and the lepton mixing matrix.
Integrating Stomach Content and Stable Isotope Analyses to Quantify the Diets of Pygoscelid Penguins
Polito, Michael J.; Trivelpiece, Wayne Z.; Karnovsky, Nina J.; Ng, Elizabeth; Patterson, William P.; Emslie, Steven D.
2011-01-01
Stomach content analysis (SCA) and, more recently, stable isotope analysis (SIA) integrated with isotopic mixing models have become common methods for dietary studies and provide insight into the foraging ecology of seabirds. However, both methods have drawbacks and biases that may result in difficulties in quantifying inter-annual and species-specific differences in diets. We used these two methods to simultaneously quantify the chick-rearing diet of Chinstrap (Pygoscelis antarctica) and Gentoo (P. papua) penguins and highlight methods of integrating SCA data to increase the accuracy of diet composition estimates using SIA. SCA biomass estimates were highly variable and underestimated the importance of soft-bodied prey such as fish. Two-source isotopic mixing model predictions were less variable and identified inter-annual and species-specific differences in the relative amounts of fish and krill in penguin diets not readily apparent using SCA. In contrast, multi-source isotopic mixing models had difficulty estimating the dietary contribution of fish species occupying similar trophic levels without refinement using SCA-derived otolith data. Overall, our ability to track inter-annual and species-specific differences in penguin diets using SIA was enhanced by integrating SCA data into isotopic mixing models in three ways: 1) selecting appropriate prey sources, 2) weighting combinations of isotopically similar prey in two-source mixing models, and 3) refining predicted contributions of isotopically similar prey in multi-source models. PMID:22053199
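To make the two-source case concrete, a minimal single-isotope mixing sketch is shown below; the trophic discrimination factor and all isotope values are hypothetical illustrations, not data from the study.

```r
# Fraction of fish (vs krill) in the diet from one isotope (e.g. delta-15N),
# after correcting the consumer value by a trophic discrimination factor.
two_source <- function(d_consumer, d_fish, d_krill, tdf = 3.4) {
  f <- (d_consumer - tdf - d_krill) / (d_fish - d_krill)
  pmin(pmax(f, 0), 1)                    # constrain proportions to [0, 1]
}
two_source(d_consumer = 10.1, d_fish = 9.8, d_krill = 4.2)  # ~0.45 fish
```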
Verzhbitskiy, I A; Kouzov, A P; Rachet, F; Chrysos, M
2011-06-14
A line-mixing shape analysis of the isotropic remnant Raman spectrum of the 2ν₃ overtone of CO₂ is reported at room temperature and for densities, ρ, rising up to tens of amagats. The analysis, experimental and theoretical, employs tools of non-resonant light scattering spectroscopy and uses the extended strong collision model (ESCM) to simulate the strong line mixing effects and to evidence motional narrowing. Excellent agreement at any pressure is observed between the calculated spectra and our experiment, which, along with the easy numerical implementation of the ESCM, makes this model stand out clearly above other semiempirical models for band shape calculations. The hitherto undefined, explicit ρ-dependence of the vibrational relaxation rate is given. Our study intends to improve the understanding of pressure-induced phenomena in a gas that is still in the forefront of the news.
Computation of wake/exhaust mixing downstream of advanced transport aircraft
NASA Technical Reports Server (NTRS)
Quackenbush, Todd R.; Teske, Milton E.; Bilanin, Alan J.
1993-01-01
The mixing of engine exhaust with the vortical wake of high speed aircraft operating in the stratosphere can play an important role in the formation of chemical products that deplete atmospheric ozone. An accurate analysis of this type of interaction is therefore necessary as a part of the assessment of the impact of proposed High Speed Civil Transport (HSCT) designs on atmospheric chemistry. This paper describes modifications to the parabolic Navier-Stokes flow field analysis in the UNIWAKE unified aircraft wake model to accommodate the computation of wake/exhaust mixing and the simulation of reacting flow. The present implementation uses a passive chemistry model in which the reacting species are convected and diffused by the fluid dynamic solution but in which the evolution of the species does not affect the flow field. The resulting analysis, UNIWAKE/PCHEM (Passive CHEMistry) has been applied to the analysis of wake/exhaust flows downstream of representative HSCT configurations. The major elements of the flow field model are described, as are the results of sample calculations illustrating the behavior of the thermal exhaust plume and the production of species important to the modeling of condensation in the wake. Appropriate steps for further development of the UNIWAKE/PCHEM model are also outlined.
ERIC Educational Resources Information Center
Collins, Cyleste C.; Dressler, William W.
2008-01-01
This study uses mixed methods and theory from cognitive anthropology to examine the cultural models of domestic violence among domestic violence agency workers, welfare workers, nurses, and a general population comparison group. Data collection and analysis uses quantitative and qualitative techniques, and the findings are integrated for…
A LISREL Model for the Analysis of Repeated Measures with a Patterned Covariance Matrix.
ERIC Educational Resources Information Center
Rovine, Michael J.; Molenaar, Peter C. M.
1998-01-01
Presents a LISREL model for the estimation of the repeated measures analysis of variance (ANOVA) with a patterned covariance matrix. The model is demonstrated for a 5 x 2 (Time x Group) ANOVA in which the data are assumed to be serially correlated. Similarities with the Statistical Analysis System PROC MIXED model are discussed. (SLD)
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models and, in particular, for small area estimation, where linear mixed effect models are the backbone. In this article, we consider area-level data and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors and propose a method of estimating those mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
NASA Astrophysics Data System (ADS)
Prakash, Kumar Ravi; Nigam, Tanuja; Pant, Vimlesh
2018-04-01
A coupled atmosphere-ocean-wave model was used to examine mixing in the upper-oceanic layers under the influence of a very severe cyclonic storm, Phailin, over the Bay of Bengal (BoB) during 10-14 October 2013. The coupled model was found to improve the simulated sea surface temperature relative to the uncoupled model. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in subsurface mixing up to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in the deepening of the thermocline and mixed layer depth by 40 and 15 m, respectively. For the first time over the BoB, a detailed analysis of inertial oscillation kinetic energy generation, propagation, and dissipation was carried out using an atmosphere-ocean-wave coupled model during a cyclone. The kinetic energy in the oceanic water column, its propagation, and its dissipation mechanisms were quantified and explained using the coupled atmosphere-ocean-wave model. The large shear generated by the inertial oscillations was found to overcome the stratification and initiate mixing at the base of the mixed layer. Greater mixing was found at the depths where the eddy kinetic diffusivity was large. The baroclinic current, holding a larger fraction of kinetic energy than the barotropic current, weakened rapidly after the passage of the cyclone. The shear induced by inertial oscillations was found to decrease rapidly with increasing depth below the thermocline. The dampening of the mixing process below the thermocline was explained through the enhanced dissipation rate of turbulent kinetic energy upon approaching the thermocline layer. The wave-current interaction and nonlinear wave-wave interaction were found to affect the process of downward mixing and cause the dissipation of inertial oscillations.
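As a quick check on the timescale involved, the local inertial period follows directly from latitude; 15° N is an illustrative value for the Bay of Bengal, not a figure from the paper.

```r
Omega <- 7.2921e-5                 # Earth's rotation rate, rad/s
lat   <- 15 * pi / 180             # illustrative BoB latitude
f     <- 2 * Omega * sin(lat)      # inertial frequency, rad/s
2 * pi / f / 3600                  # inertial period: roughly 46 hours
```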
Thermal stratification potential in rocket engine coolant channels
NASA Technical Reports Server (NTRS)
Kacynski, Kenneth J.
1992-01-01
The potential for rocket engine coolant channel flow stratification was computationally studied. A conjugate, 3-D, conduction/advection analysis code (SINDA/FLUINT) was used. Core fluid temperatures were predicted to vary by over 360 K across the coolant channel, at the throat section, indicating that the conventional assumption of a fully mixed fluid may be extremely inaccurate. Because of the thermal stratification of the fluid, the walls exposed to the rocket engine exhaust gases will be hotter than an assumption of full mixing would imply. In this analysis, wall temperatures were 160 K hotter in the turbulent mixing case than in the full mixing case. The discrepancy between the full mixing and turbulent mixing analyses increased with increasing heat transfer. Both analysis methods predicted identical channel resistances at the coolant inlet, but in the stratified analysis the thermal resistance was negligible. The implications are significant. Neglect of thermal stratification could lead to underpredictions in nozzle wall temperatures. Even worse, testing at subscale conditions may be inadequate for modeling conditions that would exist in a full scale engine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.
2011-05-17
The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank to ensure uniformity of the discharge stream. Mixing is accomplished with one to four dual-nozzle slurry pumps located within the tank liquid. For this work, a Tank 48 simulation model with a maximum of four slurry pumps in operation was developed to estimate flow patterns for efficient solid mixing. The modeling calculations were performed using two approaches. One is a single-phase Computational Fluid Dynamics (CFD) model used to evaluate flow patterns and qualitative mixing behaviors for a range of modeling conditions, since this model was previously benchmarked against test results. The other is a two-phase CFD model used to estimate solid concentrations quantitatively by solving the Eulerian governing equations for the continuous fluid and discrete solid phases over the entire fluid domain of Tank 48. The two-phase results should be considered preliminary scoping calculations, since that model has not yet been validated against test results. A series of sensitivity calculations for different numbers of pumps and operating conditions was performed to provide operational guidance for solids suspension and mixing in the tank. In the analysis, the pump was assumed to be stationary. Major solid obstructions, including the pump housing, the pump columns, and the 82-inch central support column, were included. Steady-state, three-dimensional analyses with a two-equation turbulence model were performed with FLUENT for the single-phase approach and CFX for the two-phase approach. Recommended operational guidance was developed assuming that local fluid velocity can be used as a measure of sludge suspension and spatial mixing under the single-phase tank model. For quantitative analysis, a two-phase fluid-solid model was developed for the same modeling conditions as the single-phase model. The modeling results show that the flow patterns driven by four-pump operation satisfy the solid suspension requirement, and the average solid concentration at the plane of the transfer pump inlet is about 12% higher than the tank average concentration for the 70-inch tank level and about the same as the tank average value for the 29-inch liquid level. When one of the four pumps is not operated, the flow patterns still satisfy the minimum suspension velocity criterion; however, the solid concentration near the tank bottom increases by about 30%, although the average solid concentrations near the transfer pump inlet remain about the same as in the four-pump baseline results. The flow pattern results show that although the two-pump case satisfies the minimum velocity requirement to suspend the sludge particles, it provides only marginal mixing for the heavier or larger insoluble materials such as MST and KTPB particles. The results demonstrated that when more than one jet is aimed at the same position in the mixing tank domain, inefficient flow patterns result from highly localized momentum dissipation, producing an inactive suspension zone. Thus, after completion of the indexed solids suspension, pump rotation is recommended to avoid producing nonuniform flow patterns.
It is noted that when the tank liquid level is reduced from the highest level of 70 inches to the minimum level of 29 inches for a given number of operating pumps, the solid mixing efficiency improves, because the ratio of pump power to mixing volume becomes larger. These results are consistent with the literature.
Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.
Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng
2014-06-01
Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.
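For contrast with the proposed incomplete-mixing equations, the complete-mixing baseline that network models commonly assume is a simple flow-weighted average; the values below are illustrative.

```r
# Complete mixing at a junction: every outflow carries the flow-weighted
# average concentration of the inflows.
complete_mix <- function(Q_in, C_in) sum(Q_in * C_in) / sum(Q_in)
complete_mix(Q_in = c(2, 1), C_in = c(10, 1))   # 7 mg/L at each outlet
# The study's analytical model instead assigns outlet-specific concentrations
# governed by the flow momentum ratio at the junction.
```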
Controls on Mixing-Dependent Denitrification in Hyporheic Zones
NASA Astrophysics Data System (ADS)
Hester, E. T.; Young, K. I.; Widdowson, M. A.
2013-12-01
Interaction of surface water and groundwater in hyporheic sediments of river systems is known to create unique biogeochemical conditions that can attenuate contaminants flowing downstream. Oxygen, carbon, and the contaminants themselves (e.g., excess nitrate) often advect together through the hyporheic zone from sources in surface water. However, the ability of the hyporheic zone to attenuate contaminants in upwelling groundwater plumes as they exit to rivers is less known. Such reactions may be more dependent on mixing of carbon and oxygen sources from surface water with contaminants from deeper groundwater. We simulated hyporheic flow cells and upwelling groundwater together with mixing-dependent denitrification of an upwelling nitrate plume in shallow riverbed sediments using MODFLOW and SEAM3D. For our first set of model scenarios, we set biogeochemical boundary conditions to be consistent with situations where only mixing-dependent denitrification occurred within the model domain. This occurred where dissolved organic carbon (DOC) advecting from surface water through hyporheic flow cells meets nitrate upwelling from deeper groundwater. This would be common where groundwater is affected by septic systems which contribute nitrate that upwells into streams that do not have significant nitrate sources from upstream. We conducted a sensitivity analysis that showed that mixing-dependent denitrification increased with parameters that increase mixing itself, such as the degree of heterogeneity of sediment hydraulic conductivity (K). Mixing-dependent denitrification also increased with certain biogeochemical boundary concentrations such as increasing DOC or decreasing dissolved oxygen (DO) advecting from surface water. For our second set of model scenarios, we set biogeochemical boundary conditions to be consistent with common situations where non-mixing-dependent denitrification also occurred within the model domain. For example, when nitrate concentrations are substantial in water advecting from surface water, non-mixing-dependent denitrification can occur within the hyporheic flow cells. This would be common where surface water and groundwater have high nitrate concentrations in agricultural areas. We conducted a sensitivity analysis for this set of model scenarios as well, to evaluate controls on the relative balance of mixing-dependent and non-mixing-dependent denitrification. We found that non-mixing-dependent denitrification often has higher potential to consume nitrate than mixing-dependent denitrification. This is because non-mixing-dependent denitrification is not confined to the relatively small mixing zone between upwelling groundwater and hyporheic flow cells, and hence often has longer residence times available for consumption of existing oxygen followed by consumption of nitrate. Nevertheless, the potential for hyporheic zones to attenuate upwelling nitrate plumes appears to be substantial, yet is variable depending on geomorphic, hydraulic, and biogeochemical conditions.
Inferring mixed-culture growth from total biomass data in a wavelet approach
NASA Astrophysics Data System (ADS)
Ibarra-Junquera, V.; Escalante-Minakata, P.; Murguía, J. S.; Rosu, H. C.
2006-10-01
It is shown that the presence of mixed-culture growth in batch fermentation processes can be very accurately inferred from total biomass data by means of wavelet analysis for singularity detection. This is accomplished by considering simple phenomenological models for the mixed growth and the more complicated case of mixed growth on a mixture of substrates. The main quantity provided by the wavelet analysis is the Hölder exponent of the singularity, which we determine for our illustrative examples. The numerical results point to the possibility that Hölder exponents can be used to characterize the nature of mixed-culture growth in batch fermentation processes, with potential industrial applications. Moreover, the analysis of the same data affected by common additive Gaussian noise still leads to wavelet detection of the singularities, although the Hölder exponent is no longer a useful parameter.
Development and Validation of a 3-Dimensional CFB Furnace Model
NASA Astrophysics Data System (ADS)
Vepsäläinen, Arl; Myöhänen, Karl; Hyppäneni, Timo; Leino, Timo; Tourunen, Antti
At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation, and heat transfer. Results of laboratory- and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of three-dimensional process modeling, providing a chain of knowledge that is fed back into phenomenon research. Knowledge gathered in model validation studies and up-to-date parameter databases is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modeling of combustion and of the formation of char and volatiles for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat, and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace, and are used together with lateral temperature profiles at the bed and in the upper parts of the furnace to determine solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by analysis of the oxidizing and reducing regions formed by the lower furnace design and the mixing characteristics of fuel and combustion airs, which shape the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents a CFB process analysis focused on combustion and NO profiles in pilot- and industrial-scale bituminous coal combustion.
Financial modeling/case-mix analysis.
Heck, S; Esmond, T
1983-06-01
The authors describe a case mix system developed by users which goes beyond DRG requirements to respond to management's clinical/financial data needs for marketing, planning, budgeting and financial analysis as well as reimbursement. Lessons learned in development of the system and the clinical/financial base will be helpful to those currently contemplating the implementation of such a system or evaluating available software.
NASA Astrophysics Data System (ADS)
Romano, N.; Petroselli, A.; Grimaldi, S.
2012-04-01
With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the total net rainfall amount computed by the SCS-CN method is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model, so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events, with encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions, so the GA soil hydraulic parameters are expected to be insensitive to the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the limited parameter variability makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
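The SCS-CN step that supplies the total net rainfall is standard and easy to state; a sketch follows, using the conventional initial-abstraction ratio of 0.2 and illustrative inputs.

```r
# Runoff (net rainfall) depth in mm from event rainfall P (mm) and curve
# number CN.
scs_cn_runoff <- function(P, CN, lambda = 0.2) {
  S  <- 25400 / CN - 254                 # potential maximum retention, mm
  Ia <- lambda * S                       # initial abstraction, mm
  ifelse(P > Ia, (P - Ia)^2 / (P - Ia + S), 0)
}
scs_cn_runoff(P = 60, CN = 75)           # about 14.5 mm
# CN4GA then calibrates the Green-Ampt conductivity so the GA infiltration
# model reproduces this event total over the storm duration.
```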
Experimental and theoretical characterization of an AC electroosmotic micromixer.
Sasaki, Naoki; Kitamori, Takehiko; Kim, Haeng-Boo
2010-01-01
We have reported on a novel microfluidic mixer based on AC electroosmosis. To elucidate the mixer characteristics, we performed detailed measurements of mixing under various experimental conditions including applied voltage, frequency and solution viscosity. The results are discussed through comparison with results obtained from a theoretical model of AC electroosmosis. As predicted from the theoretical model, we found that a larger voltage (approximately 20 V(p-p)) led to more rapid mixing, while the dependence of the mixing on frequency (1-5 kHz) was insignificant under the present experimental conditions. Furthermore, the dependence of the mixing on viscosity was successfully explained by the theoretical model, and the applicability of the mixer in viscous solution (2.83 mPa s) was confirmed experimentally. By using these results, it is possible to estimate the mixing performance under given conditions. These estimations can provide guidelines for using the mixer in microfluidic chemical analysis.
Interpreting cost of ownership for mix-and-match lithography
NASA Astrophysics Data System (ADS)
Levine, Alan L.; Bergendahl, Albert S.
1994-05-01
Cost of ownership modeling is a critical and emerging tool that provides significant insight into ways to optimize device manufacturing costs. A model addressing a particular application, mix-and-match lithography, was developed in order to determine the level of cost savings and the optimum ways to achieve them. The use of sensitivity analysis with cost of ownership allows the user to make accurate trade-offs between technology and cost. The use and interpretation of the model results are described in this paper. Parameters analyzed include several manufacturing considerations: depreciation, maintenance, engineering and operator labor, floorspace, resist, consumables, and reticles. Inherent in this study is the ability to customize the analysis for a particular operating environment. Results demonstrate the clear advantages of a mix-and-match approach for three different operating environments. These case studies also illustrate various methods to efficiently optimize cost-savings strategies.
Delamination modeling of laminate plate made of sublaminates
NASA Astrophysics Data System (ADS)
Kormaníková, Eva; Kotrasová, Kamila
2017-07-01
The paper presents the mixed-mode delamination analysis of plates made of sublaminates. To this end, an opening-load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response as an energy release rate. The analysis is based on interface techniques. Within the interface finite element modeling, the individual components of the damage parameters are calculated as spring reaction forces, relative displacements, and energy release rates along the delamination front.
Extension of the Haseman-Elston regression model to longitudinal data.
Won, Sungho; Elston, Robert C; Park, Taesung
2006-01-01
We propose an extension to longitudinal data of the Haseman and Elston regression method for linkage analysis. The proposed model is a mixed model with several random effects. As the response variable, we investigate the sibship sample-mean-corrected cross-product (smHE) and the BLUP-mean-corrected cross-product (pmHE), comparing them with the original squared difference (oHE), the overall mean-corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data. The model can also test for gene × time interaction to discover genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well. Copyright 2006 S. Karger AG, Basel.
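For context, the classical Haseman-Elston step that these mixed models generalize can be written in one line; `pihat` (estimated proportion of alleles shared identical-by-descent) and the trait names are illustrative.

```r
# Original HE regression: squared sib-pair trait difference on IBD sharing;
# a significantly negative slope on pihat suggests linkage.
ohe <- lm(I((y1 - y2)^2) ~ pihat, data = pairs_df)
summary(ohe)$coefficients["pihat", ]
# The longitudinal extension replaces lm() with a mixed model along the lines
# of lme4::lmer(cp ~ pihat * time + (1 | family/pair), data = long_df), where
# cp is a mean-corrected cross-product at each visit and the pihat:time term
# captures gene x time interaction.
```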
Regression analysis of mixed recurrent-event and panel-count data with additive rate models.
Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L
2015-03-01
Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007, The Statistical Analysis of Recurrent Events. New York: Springer-Verlag; Zhao et al., 2011, Test 20, 1-42). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013, Statistics in Medicine 32, 1954-1963). In this article, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study. © 2014, The International Biometric Society.
Multivariate statistical approach to estimate mixing proportions for unknown end members
Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.
2012-01-01
A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T² statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R²) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
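The final optimization step can be sketched compactly: given candidate end-member compositions (for example, samples with extreme PCA scores), each sample's mixing proportions are estimated by least squares under non-negativity and sum-to-one constraints via a softmax parameterization. The matrix `chem` (samples × solutes) and the end-member selection rule are illustrative assumptions, not the paper's exact procedure.

```r
mix_props <- function(x, E) {            # E: solutes x end-members
  k   <- ncol(E)
  obj <- function(a) { p <- exp(a) / sum(exp(a)); sum((E %*% p - x)^2) }
  a   <- optim(rep(0, k), obj, method = "BFGS")$par
  exp(a) / sum(exp(a))                   # proportions: >= 0 and sum to 1
}
pca  <- prcomp(chem, scale. = TRUE)
ends <- chem[c(which.max(pca$x[, 1]), which.min(pca$x[, 1]),
               which.max(pca$x[, 2])), ]         # provisional end members
p    <- apply(chem, 1, mix_props, E = t(ends))   # one column per sample
```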
Examination of turbulent entrainment-mixing mechanisms using a combined approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, C.; Liu, Y.; Niu, S.
2011-10-01
Turbulent entrainment-mixing mechanisms are investigated by applying a combined approach to the aircraft measurements of three drizzling and two nondrizzling stratocumulus clouds collected over the U.S. Department of Energy's Atmospheric Radiation Measurement Southern Great Plains site during the March 2000 cloud Intensive Observation Period. Microphysical analysis shows that the inhomogeneous entrainment-mixing process occurs much more frequently than the homogeneous counterpart, and most cases of the inhomogeneous entrainment-mixing process are close to the extreme scenario, having drastically varying cloud droplet concentration but roughly constant volume-mean radius. It is also found that the inhomogeneous entrainment-mixing process can occur both near the cloud top and in the middle level of a cloud, and in both the nondrizzling clouds and nondrizzling legs in the drizzling clouds. A new dimensionless number, the scale number, is introduced as a dynamical measure for different entrainment-mixing processes, with a larger scale number corresponding to a higher degree of homogeneous entrainment mixing. Further empirical analysis shows that the scale number that separates the homogeneous from the inhomogeneous entrainment-mixing process is around 50, and most legs have smaller scale numbers. Thermodynamic analysis shows that sampling averages of filament structures finer than the instrumental spatial resolution also contribute to the dominance of the inhomogeneous entrainment-mixing mechanism. The combined microphysical-dynamical-thermodynamic analysis sheds new light on developing parameterizations of entrainment-mixing processes and their microphysical and radiative effects in large-scale models.
Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.
Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J
2017-10-15
Many stepped wedge trials (SWTs) are analysed using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and the other two groups during the second period. We simulated period and intervention effects that were either common to all clusters or varied between clusters. Data were analysed with the standard model or with additional random effects for the period effect or the intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
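A hedged sketch of the two analysis models discussed above, on simulated data; the design, effect sizes, and column names (y, treat, period, cluster) are assumptions, not the authors' simulation code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for c in range(9):                                 # three groups of clusters
    g = c % 3
    u = rng.normal(0.0, 0.5)                       # cluster random intercept
    v = rng.normal(0.0, 0.3, 2)                    # cluster-by-period deviation
    for p in range(2):                             # two time periods
        treat = 1 if (g == 0 or p == 1) else 0     # stepped rollout
        for _ in range(20):                        # individuals per cell
            rows.append((1.0 * treat + 0.5 * p + u + v[p] + rng.normal(),
                         treat, p, c))
df = pd.DataFrame(rows, columns=["y", "treat", "period", "cluster"])

# "Standard model": random intercept only.
m0 = smf.mixedlm("y ~ treat + C(period)", df, groups="cluster").fit()
# Adds a random cluster-by-period effect, as the abstract recommends.
m1 = smf.mixedlm("y ~ treat + C(period)", df, groups="cluster",
                 re_formula="1", vc_formula={"cp": "0 + C(period)"}).fit()
print(m0.params["treat"], m1.params["treat"])
```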
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh
2009-05-01
Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP), or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG_obs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
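The two recommended weighting schemes follow standard meta-analysis formulas; a short sketch with illustrative inputs, where the linear-to-log-odds conversion of the form β/[φ(1−φ)] (φ the case fraction) is an assumption stated here rather than code from the paper:

```python
import numpy as np

def n_effective(n_cases, n_controls):
    # Effective sample size for a case-control study.
    return 4.0 / (1.0 / n_cases + 1.0 / n_controls)

def meta_z_sample_size(z, n_eff):
    # (i) Effective-sample-size-weighted Z-score meta-analysis.
    w = np.sqrt(n_eff)
    return np.sum(w * z) / np.sqrt(np.sum(w ** 2))

def linear_beta_to_log_odds(beta, se, case_fraction):
    # Assumed approximate conversion onto the log-odds scale.
    scale = case_fraction * (1.0 - case_fraction)
    return beta / scale, se / scale

def meta_inverse_variance(beta, se):
    # (ii) Inverse-variance-weighted meta-analysis of allelic effects.
    w = 1.0 / se ** 2
    return np.sum(w * beta) / np.sum(w), np.sqrt(1.0 / np.sum(w))

z = np.array([2.1, 1.4, 2.8])
n_eff = n_effective(np.array([500, 1200, 800]), np.array([4500, 1300, 8000]))
beta = np.array([0.020, 0.012, 0.025])        # linear-model allelic effects
se = np.array([0.006, 0.008, 0.007])
phi = np.array([0.10, 0.48, 0.09])            # per-study case fractions
b_lo, se_lo = linear_beta_to_log_odds(beta, se, phi)
print(meta_z_sample_size(z, n_eff), meta_inverse_variance(b_lo, se_lo))
```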
Microfluidic Injector Models Based on Artificial Neural Networks
2005-06-15
...medicine, and chemistry [1], [2]. They generally perform chemical analysis involving sample preparation, mixing, reaction, injection, and separation analysis... algorithms have been validated against many experiments found in the literature demonstrating microfluidic mixing, joule heating, injection, and...
Effects of Crimped Fiber Paths on Mixed Mode Delamination Behaviors in Woven Fabric Composites
2016-09-01
...continuum finite-element models. Three variations of a plain-woven fabric architecture—each of which had different crimped fiber paths—were considered... Subject terms: finite-element analysis, fracture mechanics, fracture toughness, mixed modes, strain energy release rate.
A New Mixing Diagnostic and Gulf Oil Spill Movement
2010-10-01
...paradigm for mixing in fluid flows with simple time dependence. Its skeletal structure is based on analysis of invariant attracting and repelling... continues to the present day. Model analysis and forecasts are compared to independent (nonassimilated) infrared frontal positions and drifter trajectories...
Simulations of Arctic mixed-phase clouds in forecasts with CAM3 and AM2 for M-PACE
NASA Astrophysics Data System (ADS)
Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Liu, Xiaohong; Ghan, Steven
2008-02-01
Simulations of mixed-phase clouds in forecasts with the NCAR Atmosphere Model version 3 (CAM3) and the GFDL Atmospheric Model version 2 (AM2) for the Mixed-Phase Arctic Cloud Experiment (M-PACE) are performed using analysis data from numerical weather prediction centers. CAM3 significantly underestimates the observed boundary layer mixed-phase cloud fraction and cannot realistically simulate the variations of liquid water fraction with temperature and cloud height due to its oversimplified cloud microphysical scheme. In contrast, AM2 reasonably reproduces the observed boundary layer cloud fraction while its clouds contain much less cloud condensate than CAM3 and the observations. The simulation of the boundary layer mixed-phase clouds and their microphysical properties is considerably improved in CAM3 when a new physically based cloud microphysical scheme is used (CAM3LIU). The new scheme also leads to an improved simulation of the surface and top of the atmosphere longwave radiative fluxes. Sensitivity tests show that these results are not sensitive to the analysis data used for model initialization. Increasing model horizontal resolution helps capture the subgrid-scale features in Arctic frontal clouds but does not help improve the simulation of the single-layer boundary layer clouds. AM2 simulated cloud fraction and LWP are sensitive to the change in cloud ice number concentrations used in the Wegener-Bergeron-Findeisen process while CAM3LIU only shows moderate sensitivity in its cloud fields to this change. This paper shows that the Wegener-Bergeron-Findeisen process is important for these models to correctly simulate the observed features of mixed-phase clouds.
Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard
2011-01-01
Image data are increasingly encountered and are of growing importance in many areas of science. Many of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain, and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.
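As a loose illustration of the wavelet-domain strategy (not the authors' Bayesian implementation): transform each image, fit a per-coefficient linear model across images, and map the estimated effect back to image space. PyWavelets is assumed, and all data are synthetic:

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
n_images = 20
treatment = np.repeat([0.0, 1.0], n_images // 2)
bump = np.zeros((64, 64)); bump[20:30, 30:40] = 1.5   # local "treatment" effect
images = [rng.normal(0, 1, (64, 64)) + t * bump for t in treatment]

# Transform each image to the wavelet domain (flattened coefficient array).
coeff_arrays, slices = zip(*[pywt.coeffs_to_array(
    pywt.wavedec2(im, "db2", level=3)) for im in images])
Y = np.stack([a.ravel() for a in coeff_arrays])       # images x coefficients

# Fit a linear model to every coefficient at once; row 1 = treatment effect.
X = np.column_stack([np.ones(n_images), treatment])
beta_treat = np.linalg.lstsq(X, Y, rcond=None)[0][1]

# Map the estimated effect back to image space.
effect = pywt.waverec2(pywt.array_to_coeffs(
    beta_treat.reshape(coeff_arrays[0].shape), slices[0],
    output_format="wavedec2"), "db2")
print("peak recovered effect:", round(float(effect.max()), 2))
```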
NASA Astrophysics Data System (ADS)
Kim, Ji-Hyun; Kim, Kyoung-Ho; Thao, Nguyen Thi; Batsaikhan, Bayartungalag; Yun, Seong-Taek
2017-06-01
In this study, we evaluated the water quality status (especially salinity problems) and hydrogeochemical processes of an alluvial aquifer in a floodplain of the Red River delta, Vietnam, based on the hydrochemical and isotopic data of groundwater samples (n = 23) from the Kien Xuong district of the Thai Binh province. Following the historical inundation by paleo-seawater during coastal progradation, the aquifer has undergone progressive freshening and land reclamation to enable settlements and farming. The hydrochemical data of water samples showed a broad hydrochemical change, from Na-Cl through Na-HCO3 to Ca-HCO3 types, suggesting that groundwater overall evolved through the freshening process accompanying cation exchange. The principal component analysis (PCA) of the hydrochemical data indicates three major hydrogeochemical processes occurring in the aquifer, namely: 1) progressive freshening of remaining paleo-seawater, 2) water-rock interaction (i.e., dissolution of silicates), and 3) redox processes including sulfate reduction, as indicated by heavy sulfur and oxygen isotope compositions of sulfate. To quantitatively assess the hydrogeochemical processes, end-member mixing analysis (EMMA) and forward mixing modeling using the PHREEQC code were conducted. The EMMA results show that the hydrochemical model with the two-dimensional mixing space composed of PC 1 and PC 2 best explains the mixing in the study area; therefore, we consider that the groundwater chemistry mainly evolved by mixing among three end-members (i.e., paleo-seawater, infiltrating rain, and the K-rich groundwater). The distinct depletion of sulfate in groundwater, likely due to bacterial sulfate reduction, can also be explained by EMMA. The evaluation of mass balances using geochemical modeling supports the explanation that the freshening process accompanying direct cation exchange occurs through mixing among three end-members involving the K-rich groundwater. This study shows that the multiple end-member mixing model is useful to more successfully assess complex hydrogeochemical processes occurring in a salinized aquifer under freshening, as compared to the conventional interpretation using the theoretical mixing line based on only two end-members (i.e., seawater and rainwater).
Approximating a nonlinear advanced-delayed equation from acoustics
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2016-10-01
We approximate the solution of a particular nonlinear mixed-type functional differential equation from physiology, the mucosal wave model of the vocal oscillation during phonation. The mathematical equation models a superficial wave propagating through the tissues. The numerical scheme is adapted from the work presented in [1, 2, 3], using the homotopy analysis method (HAM) to solve the nonlinear mixed-type equation under study.
Analysis of collision safety associated with CEM and conventional cars mixed within a consist
DOT National Transportation Integrated Search
2003-11-16
collision dynamics model of a passenger train-to-passenger train collision has been developed to simulate the potential safety hazards and benefits associated with mixing conventional and crash energy management (CEM) cars within a consist. This pape...
Xiao, Qingtai; Xu, Jianxin; Wang, Hua
2016-08-16
A new index, the estimate of the error variance, was proposed to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish. The degree of homogeneity of the luminance spatial distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, an F-test was used to test whether the light was uniform, and a nonlinear method was used to determine the direction and position of a fixed light source. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index was then generalized and applied to a multiphase macro-mixing process driven by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to demonstrate the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is difficult to recognize.
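One plausible reading of the uniformity test, sketched with synthetic data (the published index may be defined differently): regress frame luminance on pixel coordinates; the regression F-test probes lighting uniformity, and the residual mean square serves as the error-variance estimate:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ny, nx = 60, 80
yy, xx = np.mgrid[0:ny, 0:nx]
lum = 100.0 + 0.05 * xx + rng.normal(0.0, 2.0, (ny, nx))  # synthetic frame

X = sm.add_constant(np.column_stack([xx.ravel(), yy.ravel()]))
fit = sm.OLS(lum.ravel(), X).fit()
print("F =", round(float(fit.fvalue), 1), " p =", fit.f_pvalue)
print("error-variance estimate:", round(float(fit.mse_resid), 3))
```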
Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhien
Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing the climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profile for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different, owing to much higher ice concentrations in convective mixed-phase clouds. 6) Systematic evaluations of mixed-phase cloud simulations by CAM5 were performed. Measurement results indicate that ice concentrations control stratiform mixed-phase cloud properties. The improvement of ice concentration parameterization in CAM5 was done in close collaboration with Dr. Xiaohong Liu, PNNL (now at University of Wyoming).
NASA Astrophysics Data System (ADS)
Zhang, Junhua; Lohmann, Ulrike
2003-08-01
The single column model of the Canadian Centre for Climate Modelling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data constrained by rawinsonde observations. Five cloud parameterizations, including three statistical and two explicit schemes, are compared, and the sensitivity to mixed-phase cloud parameterizations is studied. Using the original mixed-phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed-phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed-phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than was observed.
NASA Technical Reports Server (NTRS)
Miller, R. S.; Bellan, J.
1997-01-01
An investigation of the statistical description of binary mixing and/or reaction between a carrier gas and an evaporated vapor species in two-phase gas-liquid turbulent flows is performed through both theoretical analysis and comparisons with results from direct numerical simulations (DNS) of a two-phase mixing layer.
Phylogeny of sipunculan worms: A combined analysis of four gene regions and morphology.
Schulze, Anja; Cutler, Edward B; Giribet, Gonzalo
2007-01-01
The intra-phyletic relationships of sipunculan worms were analyzed based on DNA sequence data from four gene regions and 58 morphological characters. Initially we analyzed the data under direct optimization using parsimony as the optimality criterion. An implied alignment resulting from the direct optimization analysis was subsequently utilized to perform a Bayesian analysis with mixed models for the different data partitions. For this we applied a doublet model for the stem regions of the 18S rRNA. Both analyses support monophyly of Sipuncula and most of the same clades within the phylum. The analyses differ with respect to the relationships among the major groups, but whereas the deep nodes in the direct optimization analysis generally show low jackknife support, they are supported by 100% posterior probability in the Bayesian analysis. Direct optimization has been useful for handling sequences of unequal length and generating conservative phylogenetic hypotheses, whereas the Bayesian analysis under mixed models provided high resolution in the basal nodes of the tree.
Differentiation of mixed biological traces in sexual assaults using DNA fragment analysis
Apostolov, Aleksandar
2014-01-01
During the investigation of sexual abuse, it is not rare that mixed genetic material from two or more persons is detected. In such cases, successful profiling can be achieved using DNA fragment analysis, resulting in individual genetic profiles of offenders and their victims. This has led to an increase in the percentage of identified perpetrators of sexual offenses. The classic and modified genetic models used allowed us to refine and implement appropriate extraction, polymerase chain reaction, and electrophoretic procedures, with individual assessment and approach to conducting the research. Testing mixed biological traces using DNA fragment analysis appears to be the only opportunity for identifying perpetrators in gang rapes. PMID:26019514
Mixing in the shear superposition micromixer: three-dimensional analysis.
Bottausci, Frederic; Mezić, Igor; Meinhart, Carl D; Cardonne, Caroline
2004-05-15
In this paper, we analyse mixing in an active chaotic advection micromixer. The micromixer consists of a main rectangular channel and three cross-stream secondary channels that provide the ability for time-dependent actuation of the flow stream in the direction orthogonal to the main stream. Three-dimensional motion in the mixer is studied. Numerical simulations and modelling of the flow are pursued in order to understand the experiments. It is shown that for some parameter values a simple model can be derived that clearly represents the nature of the flow. Particle image velocimetry measurements of the flow are compared with numerical simulations and the analytical model. A measure for mixing, the mixing variance coefficient (MVC), is analysed. It is shown that mixing is substantially improved with multiple side channels with oscillatory flows whose frequencies increase downstream. The optimization of MVC results for single side-channel mixing is presented. It is shown that the dependence of MVC on frequency is not monotone, and a local minimum is found. Residence time distributions derived from the analytical model are analysed. It is shown that, while the average Lagrangian velocity profile is flattened relative to the steady flow, Taylor-dispersion effects are still present for the current micromixer configuration.
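A toy version of a normalized mixing measure in the spirit of the MVC; the paper's exact definition may differ, so treat this as a hypothetical cross-stream concentration-variance coefficient:

```python
import numpy as np

def mixing_variance_coefficient(c):
    # c: concentration samples across one cross-stream section.
    return np.std(c) / np.mean(c)   # 0 = perfectly mixed

# Toy downstream evolution: concentration profiles homogenize.
rng = np.random.default_rng(3)
for x, spread in zip([0.0, 1.0, 2.0], [0.5, 0.2, 0.05]):
    c = np.clip(0.5 + spread * rng.standard_normal(500), 0.0, 1.0)
    print(f"x = {x:.1f}  MVC = {mixing_variance_coefficient(c):.3f}")
```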
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
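A minimal likelihood-based sketch of one such mixed-effects strategy: a random study-specific baseline log-odds with a fixed treatment effect, the random effect integrated out by Gauss-Hermite quadrature. The data and parameterization are illustrative, not the models compared in the article:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import binom

# Per study: (control events, control n, treated events, treated n).
data = np.array([[3, 120, 1, 118],
                 [10, 300, 4, 290],
                 [2, 80, 2, 85]])
nodes, weights = hermgauss(30)

def neg_loglik(params):
    mu, theta, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = 0.0
    for y0, n0, y1, n1 in data:
        b = np.sqrt(2.0) * sigma * nodes          # random-intercept values
        f = (binom.pmf(y0, n0, expit(mu + b)) *
             binom.pmf(y1, n1, expit(mu + b + theta)))
        ll += np.log(np.sum(weights * f) / np.sqrt(np.pi))
    return -ll

fit = minimize(neg_loglik, x0=[-3.0, -0.5, -1.0], method="Nelder-Mead")
print("estimated treatment log-odds ratio:", round(fit.x[1], 3))
```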
NASA Technical Reports Server (NTRS)
Bishop, James
1995-01-01
Work on our analysis of the Voyager UVS solar occultation data acquired during the Neptune encounter is essentially complete, as evidenced by the attached poster materials. The photochemical modeling addresses the recent revision in branching ratios for radical production in the photolysis of methane at H Lyman alpha implied by the lab measurements of Mordaunt et al. (1993). The software generated in this effort has been useful for checking the degree to which photochemical models addressing other datasets (mainly infrared) are consistent with the UVS data. This work complements the UVS modeling results in that the IR data refer to deeper pressure levels; as regards the modeling of UVS data, the most significant result is the convincing support for the presence of a stagnant lower stratosphere. Evidence for strong dynamical (mixing) transport of minor constituents at shallower pressures is provided by the UVS data analysis.
NASA Astrophysics Data System (ADS)
Zhao, H.; Hao, Y.; Liu, X.; Hou, M.; Zhao, X.
2018-04-01
Hyperspectral remote sensing is a completely non-invasive technology for the measurement of cultural relics and has been successfully applied to the identification and analysis of pigments in Chinese historical paintings. Although the mixing of pigments is very common in Chinese historical paintings, the quantitative analysis of mixed pigments in ancient paintings remains unsolved. In this research, we took two typical mineral pigments, vermilion and stone yellow, as examples, made precisely mixed samples of these two pigments, and measured their spectra in the laboratory. For the mixed spectra, both the fully constrained least squares (FCLS) method and derivative of ratio spectroscopy (DRS) were performed. Experimental results showed that the mixed spectra of vermilion and stone yellow had strong nonlinear mixing characteristics, but at some bands linear unmixing could also achieve satisfactory results. DRS using strongly linear bands can reach much higher accuracy than FCLS using full bands.
Analysis of the mixing processes in the subtropical Advancetown Lake, Australia
NASA Astrophysics Data System (ADS)
Bertone, Edoardo; Stewart, Rodney A.; Zhang, Hong; O'Halloran, Kelvin
2015-03-01
This paper presents an extensive investigation of the mixing processes occurring in the subtropical monomictic Advancetown Lake, which is the main water body supplying the Gold Coast City in Australia. Meteorological, chemical and physical data were collected from weather stations, laboratory analysis of grab samples and an in-situ Vertical Profiling System (VPS) for the period 2008-2012. This comprehensive, high-frequency dataset was utilised to develop a one-dimensional model of the vertical transport and mixing processes occurring along the water column. Multivariate analysis revealed that air temperature and rain forecasts enabled a reliable prediction of the strength of the lake stratification. Vertical diffusion is the main process driving vertical mixing, particularly during winter circulation. However, a high reservoir volume and warm winters can limit the degree of winter mixing, causing only partial circulation to occur, as was the case in 2013. This research study provides a comprehensive approach for understanding and predicting mixing processes for similar lakes, whenever high-frequency data are available from VPS or other autonomous water monitoring systems.
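The vertical-transport component can be illustrated with a one-dimensional diffusion sketch; the diffusivity, grid, and initial profile below are assumptions, not the calibrated Advancetown Lake model:

```python
import numpy as np

nz, dz = 50, 1.0                         # 50 m column, 1 m layers
K = 8.64                                 # eddy diffusivity, m^2/day
dt = 0.2 * dz**2 / K                     # well inside explicit stability limit
T = 16.0 + 8.0 * np.exp(-np.arange(nz) / 10.0)   # stratified initial profile

for _ in range(2000):                    # ~46 days of mixing
    T[1:-1] += K * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = T[1], T[-2]            # no-flux boundary conditions
print("top-bottom temperature difference:", round(float(T[0] - T[-1]), 2))
```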
Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis
Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas
2016-01-01
The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes, including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates into the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select which covariates are most relevant to the model based on observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246
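The ARD mechanism can be demonstrated on a simple univariate stand-in for the shape model, using scikit-learn's ARDRegression rather than the authors' Bayesian shape code; the data are simulated:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(7)
n, p = 200, 6
X = rng.standard_normal((n, p))        # e.g., age, sex, IQ, diagnosis, ...
beta = np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.0])   # only two are relevant
y = X @ beta + rng.normal(0, 0.5, n)

ard = ARDRegression().fit(X, y)
print(np.round(ard.coef_, 2))          # irrelevant covariates shrink to ~0
```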
NASA Astrophysics Data System (ADS)
Leonard, T.; Spence, S.; Early, J.; Filsinger, D.
2013-12-01
Mixed flow turbines represent a potential solution to the increasing requirement for high pressure, low velocity ratio operation in turbocharger applications. While literature exists for the use of these turbines at such operating conditions, there is a lack of detailed design guidance for defining the basic geometry of the turbine, in particular the cone angle, the angle at which the inlet of the mixed flow turbine is inclined to the axis. This paper investigates the effect and interaction of such mixed flow turbine design parameters. Computational fluid dynamics (CFD) was initially used to investigate the performance of a modern radial turbine to create a baseline for subsequent mixed flow designs. Existing experimental data were used to validate this model. Using the CFD model, a number of mixed flow turbine designs were investigated, including studies varying the cone angle and the associated inlet blade angle. The results of this analysis provide insight into the performance of a mixed flow turbine with respect to cone and inlet blade angle.
NASA Technical Reports Server (NTRS)
Kuchar, A. P.; Chamberlin, R.
1980-01-01
A scale model performance test was conducted as part of the NASA Energy Efficient Engine (E3) Program, to investigate the geometric variables that influence the aerodynamic design of exhaust system mixers for high-bypass, mixed-flow engines. Mixer configuration variables included lobe number, penetration and perimeter, as well as several cutback mixer geometries. Mixing effectiveness and mixer pressure loss were determined using measured thrust and nozzle exit total pressure and temperature surveys. Results provide a data base to aid the analysis and design development of the E3 mixed-flow exhaust system.
Materiel Acquisition Management of U.S. Army Attack Helicopters
1989-06-02
...used to evaluate the existing helicopter program periodically in order to determine utility in reference to all evaluation criteria. Definition of... mixed integer linear programming model, the Phoenix model has demonstrated the potential to assist in the analysis of strategic and operational issues in...
QCD sum-rules analysis of vector (1^{--}) heavy quarkonium meson-hybrid mixing
NASA Astrophysics Data System (ADS)
Palameta, A.; Ho, J.; Harnett, D.; Steele, T. G.
2018-02-01
We use QCD Laplace sum rules to study meson-hybrid mixing in vector (1^{--}) heavy quarkonium. We compute the QCD cross-correlator between a heavy meson current and a heavy hybrid current within the operator product expansion. In addition to leading-order perturbation theory, we include four- and six-dimensional gluon condensate contributions as well as a six-dimensional quark condensate contribution. We construct several single- and multiresonance models that take known hadron masses as inputs. We investigate which resonances couple to both currents and so exhibit meson-hybrid mixing. Compared to single-resonance models that include only the ground state, we find that models that also include excited states lead to significantly improved agreement between QCD and experiment. In the charmonium sector, we find that meson-hybrid mixing is consistent with a two-resonance model consisting of the J/ψ and a 4.3 GeV resonance. In the bottomonium sector, we find evidence for meson-hybrid mixing in the ϒ(1S), ϒ(2S), ϒ(3S), and ϒ(4S).
NASA Technical Reports Server (NTRS)
Graf, Wiley E.
1991-01-01
A mixed formulation is chosen to overcome deficiencies of the standard displacement-based shell model. Element development is traced from the incremental variational principle on through to the final set of equilibrium equations. Particular attention is paid to developing specific guidelines for selecting the optimal set of strain parameters. A discussion of constraint index concepts and their predictive capability related to locking is included. Performance characteristics of the elements are assessed in a wide variety of linear and nonlinear plate/shell problems. Despite limiting the study to geometric nonlinear analysis, a substantial amount of additional insight concerning the finite element modeling of thin plate/shell structures is provided. For example, in nonlinear analysis, given the same mesh and load step size, mixed elements converge in fewer iterations than equivalent displacement-based models. It is also demonstrated that, in mixed formulations, lower order elements are preferred. Additionally, meshes used to obtain accurate linear solutions do not necessarily converge to the correct nonlinear solution. Finally, a new form of locking was identified associated with employing elements designed for biaxial bending in uniaxial bending applications.
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found GLMM estimation methods were sensitive to tuning parameters and assumptions and therefore, recommend use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
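A hedged sketch of the logit-normal structure on simulated rainfall-occurrence data, with statsmodels' variational Bayes mixed GLM standing in for the four GLMM algorithms compared in the paper; all column names and parameter values are assumptions:

```python
import numpy as np
import pandas as pd
from scipy.special import expit
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(42)
stations, days = 30, 120
u = rng.normal(0.0, 0.8, stations)            # logit-normal random effects
rows = []
for s in range(stations):
    x = rng.standard_normal(days)             # e.g., a humidity covariate
    y = rng.random(days) < expit(-1.0 + 0.6 * x + u[s])
    rows += [(int(yy), xx, s) for yy, xx in zip(y, x)]
df = pd.DataFrame(rows, columns=["rain", "x", "station"])

model = BinomialBayesMixedGLM.from_formula(
    "rain ~ x", {"station": "0 + C(station)"}, df)
fit = model.fit_vb()                          # variational Bayes fit
print(fit.summary())
```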
Statistical methodology for the analysis of dye-switch microarray experiments
Mary-Huard, Tristan; Aubert, Julie; Mansouri-Attia, Nadera; Sandra, Olivier; Daudin, Jean-Jacques
2008-01-01
Background: In individually dye-balanced microarray designs, each biological sample is hybridized on two different slides, once with Cy3 and once with Cy5. While this strategy ensures an automatic correction of the gene-specific labelling bias, it also induces dependencies between log-ratio measurements that must be taken into account in the statistical analysis. Results: We present two original statistical procedures for the statistical analysis of individually balanced designs. These procedures are compared with the usual ML and REML mixed model procedures proposed in most statistical toolboxes, on both simulated and real data. Conclusion: The UP procedure we propose as an alternative to usual mixed model procedures is more efficient and significantly faster to compute. This result provides some useful guidelines for the analysis of complex designs. PMID:18271965
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method, an MHOST implementation for the mixed method, and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
An operational global-scale ocean thermal analysis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, R. M.; Pollak, K.D.; Phoebus, P.A.
1990-04-01
The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum interpolation data assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented, and results of operational tests of the global-scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existing operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer. 37 refs.
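The optimum-interpolation update at the core of such a system follows the textbook analysis equation x_a = x_b + K(y - Hx_b) with gain K = BH^T(HBH^T + R)^{-1}; the covariances and values below are illustrative, not OTIS's operational settings:

```python
import numpy as np

# Background (first guess) temperatures at four analysis points.
xb = np.array([18.0, 17.5, 16.8, 16.0])
idx = np.arange(4)
B = 0.6 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)  # background cov
H = np.zeros((2, 4)); H[0, 1] = H[1, 3] = 1.0                 # obs operator
R = 0.2 * np.eye(2)                                           # obs-error cov
y = np.array([17.9, 16.5])                                    # observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)                  # OI gain
xa = xb + K @ (y - H @ xb)                                    # analysis
print(np.round(xa, 2))
```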
Groundwater flow processes and mixing in active volcanic systems: the case of Guadalajara (Mexico)
NASA Astrophysics Data System (ADS)
Hernández-Antonio, A.; Mahlknecht, J.; Tamez-Meléndez, C.; Ramos-Leal, J.; Ramírez-Orozco, A.; Parra, R.; Ornelas-Soto, N.; Eastoe, C. J.
2015-02-01
Groundwater chemistry and isotopic data from 40 production wells in the Atemajac and Toluquilla Valleys, located in and around the Guadalajara metropolitan area, were determined to develop a conceptual model of groundwater flow processes and mixing. Multivariate analyses, including cluster analysis and principal component analysis, were used to elucidate distribution patterns of constituents and factors controlling groundwater chemistry. Based on this analysis, groundwater was classified into four groups: cold groundwater, hydrothermal water, polluted groundwater and mixed groundwater. Cold groundwater is characterized by low temperature, salinity, and Cl and Na concentrations and is predominantly of Na-HCO3 type. It originates as recharge at Primavera caldera and is found predominantly in wells in the upper Atemajac Valley. Hydrothermal water is characterized by high salinity, temperature, Cl, Na, HCO3, and the presence of minor elements such as Li, Mn and F. It is a mixed HCO3 type found in wells from Toluquilla Valley and represents regional flow circulation through basaltic and andesitic rocks. Polluted groundwater is characterized by elevated nitrate and sulfate concentrations and is usually derived from urban water cycling and subordinately from agricultural practices. Mixed groundwaters between cold and hydrothermal components are predominantly found in the lower Atemajac Valley. The tritium method showed that practically all of the sampled groundwater contains at least a small fraction of modern water. The multivariate mixing model M3 indicates that the proportion of hydrothermal fluids in sampled well water is between 13% (local groundwater) and 87% (hydrothermal water), and the proportion of polluted water in wells ranges from 0 to 63%. This study may help local water authorities to identify and quantify groundwater contamination and act accordingly.
A mixed-effects regression model for longitudinal multivariate ordinal data.
Liu, Li C; Hedeker, Donald
2006-03-01
A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.
Stellar evolution with turbulent diffusion. I. A new formalism of mixing.
NASA Astrophysics Data System (ADS)
Deng, L.; Bressan, A.; Chiosi, C.
1996-09-01
In this paper we present a new formulation of diffusive mixing in stellar interiors aimed at casting light on the kind of mixing that should take place in the so-called overshoot regions surrounding fully convective zones. Key points of the analysis are the inclusion of the concept of the scale length most effective for mixing, by means of which the diffusion coefficient is formulated, and the inclusion of intermittence and stirring, two properties of turbulence known from laboratory fluid dynamics. The formalism is applied to follow the evolution of a 20 Msun star with composition Z=0.008 and Y=0.25. Depending on the value of the diffusion coefficient holding in the overshoot region, the evolutionary behaviour of the test stars ranges from the case of virtually no mixing (semiconvective-like structures) to that of full mixing there (standard overshoot models). Indeed, the efficiency of mixing in this region drives the extension of the intermediate fully convective shell developing at the onset of shell H-burning, and in turn the path in the HR diagram (HRD). Models with low efficiency of mixing burn helium in the core at high effective temperatures, models with intermediate efficiency perform extended loops in the HRD, and models with high efficiency spend the whole core He-burning phase at low effective temperatures. In order to cast light on this important point of stellar structure, we test whether or not a convective layer can develop in the regions of the H-burning shell. More precisely, we examine whether the Schwarzschild or the Ledoux criterion ought to be adopted in this region. Furthermore, we test the response of stellar models to the kind of mixing supposed to occur in the H-burning shell regions. Finally, comparing the time scale of thermal dissipation to the evolutionary time scale, we conclude that no mixing should occur in this region. The models with intermediate efficiency of mixing and no mixing at all in the shell H-burning regions are of particular interest as they possess at the same time evolutionary characteristics that are separately typical of models calculated with different schemes of mixing. In other words, the new models share the same properties as models with standard overshoot, namely a wider main sequence band, higher luminosity, and longer lifetimes than classical models, but they also possess extended loops that are the main signature of the classical (semiconvective) description of convection at the border of the core.
Lu, Jun; Li, Li-Ming; He, Ping-Ping; Cao, Wei-Hua; Zhan, Si-Yan; Hu, Yong-Hua
2004-06-01
To introduce the application of the mixed linear model to the analysis of the secular trend of blood pressure under antihypertensive treatment. A community-based postmarketing surveillance of benazepril was conducted in 1831 essential hypertensive patients (aged 35 to 88 years) in Shanghai. Blood pressure data were analyzed every 3 months with a mixed linear model to describe the secular trend of blood pressure and its age- and gender-specific changes. The changing trends of systolic blood pressure (SBP) and diastolic blood pressure (DBP) were found to fit curvilinear models. A piecewise model was fit for pulse pressure (PP), i.e., a curvilinear model in the first 9 months and a linear model after 9 months of taking medication. Both the decline in blood pressure and its velocity gradually slowed down. There was significant variation in the curve parameters of intercept, slope, and acceleration. Blood pressure in patients with higher initial levels declined persistently over the 3-year treatment, whereas blood pressure in patients with relatively low initial levels remained low after an initial drop. Elderly patients showed high SBP but low DBP, and hence higher PP. The velocity and size of blood pressure reductions increased with the initial level of blood pressure. The mixed linear model is flexible and robust when applied to the analysis of longitudinal data with missing values and can make maximum use of the available information.
Two-length-scale turbulence model for self-similar buoyancy-, shock-, and shear-driven mixing
Morgan, Brandon E.; Schilling, Oleg; Hartland, Tucker A.
2018-01-10
The three-equation k-L-a turbulence model [B. Morgan and M. Wickett, "Three-equation model for the self-similar growth of Rayleigh-Taylor and Richtmyer-Meshkov instabilities," Phys. Rev. E 91 (2015)] is extended by the addition of a second length scale equation. It is shown that the separation of turbulence transport and turbulence destruction length scales is necessary for simultaneous prediction of the growth parameter and turbulence intensity of a Kelvin-Helmholtz shear layer when model coefficients are constrained by similarity analysis. Constraints on model coefficients are derived that satisfy an ansatz of self-similarity in the low-Atwood-number limit and allow the determination of model coefficients necessary to recover expected experimental behavior. The model is then applied in one-dimensional simulations of Rayleigh-Taylor, reshocked Richtmyer-Meshkov, Kelvin-Helmholtz, and combined Rayleigh-Taylor/Kelvin-Helmholtz instability mixing layers to demonstrate that the expected growth rates are recovered numerically. Finally, it is shown that model behavior in the case of combined instability is to predict a mixing width that is a linear combination of Rayleigh-Taylor and Kelvin-Helmholtz mixing processes.
Coding response to a case-mix measurement system based on multiple diagnoses.
Preyra, Colin
2004-08-01
To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short-stay hospitals for the years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest-complexity cases that was not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost, nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.
Three Dimensional CFD Analysis of the GTX Combustor
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; Bond, R. B.; Edwards, J. R.
2002-01-01
The annular combustor geometry of a combined-cycle engine has been analyzed with three-dimensional computational fluid dynamics. Both subsonic combustion and supersonic combustion flowfields have been simulated. The subsonic combustion analysis was executed in conjunction with a direct-connect test rig. Results from two cold-flow cases and one hot-flow case are presented. The simulations compare favorably with the test data for the two cold-flow calculations; the hot-flow data were not yet available. The hot-flow simulation indicates that the conventional ejector-ramjet cycle would not provide adequate mixing at the conditions tested. The supersonic combustion ramjet flowfield was simulated with a frozen-chemistry model. A five-parameter test matrix was specified according to statistical design-of-experiments theory. Twenty-seven separate simulations were used to assemble surrogate models for combustor mixing efficiency and total pressure recovery. Scramjet injector design parameters (injector angle, location, and fuel split) as well as mission variables (total fuel massflow and freestream Mach number) were included in the analysis. A promising injector design has been identified that provides good mixing characteristics with low total pressure losses. The surrogate models can be used to develop performance maps of different injector designs. Several complex three-way variable interactions appear within the dataset that are not adequately resolved with the current statistical analysis.
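A sketch of the surrogate-modelling step: fit a quadratic response surface to a designed set of runs. The 27-run design, factor scaling, and toy response below are synthetic placeholders, not the GTX data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (27, 5))   # injector angle, location, fuel split,
                                  # fuel massflow, Mach number (scaled)
eta = (0.8 + 0.05 * X[:, 0] - 0.04 * X[:, 1] ** 2
       + 0.03 * X[:, 0] * X[:, 4]
       + rng.normal(0, 0.005, 27))        # mixing efficiency (toy response)

quad = PolynomialFeatures(degree=2, include_bias=False)
surrogate = LinearRegression().fit(quad.fit_transform(X), eta)

x_new = np.array([[0.2, -0.1, 0.0, 0.3, -0.5]])
print("predicted mixing efficiency:",
      surrogate.predict(quad.transform(x_new)))
```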
DOT National Transportation Integrated Search
2010-02-01
A finite element model for analysis of mass concrete was developed in this study. To validate the developed model, large concrete blocks made with four different mixes of concrete, typical of use in mass concrete applications in Florida, were made an...
NASA Astrophysics Data System (ADS)
Galloway, A. W. E.; Eisenlord, M. E.; Brett, M. T.
2016-02-01
Stable isotope (SI) based mixing models are the most common approach used to infer resource pathways in consumers. However, SI based analyses are often underdetermined, and consumer SI fractionation is usually unknown. The use of fatty acid (FA) tracers in mixing models offers an alternative approach that can resolve the underdetermined constraint. A limitation of both methods is the considerable uncertainty about consumer 'trophic modification' (TM) of dietary FA or SI, which occurs as consumers transform dietary resources into tissues. We tested the utility of SI and FA approaches for inferring the diets of the marine benthic isopod (Idotea wosnesenskii) fed various marine macroalgae in controlled feeding trials. Our analyses quantified how the accuracy and precision of Bayesian mixing models were influenced by choice of algorithm (SIAR vs MixSIR), fractionation (assumed or known), and whether the model was under- or overdetermined (seven sources and two vs 26 tracers) for cases where isopods were fed an exclusive diet of one of the seven different macroalgae. Using the conventional approach (i.e., 2 SI with assumed TM) resulted in average model outputs, i.e., the contribution from the exclusive resource = 0.20 ± 0.23 (0.00-0.79), mean ± SD (95% credible interval), that differed only slightly from the prior assumption. Using the FA based approach with known TM greatly improved model performance, i.e., the contribution from the exclusive resource = 0.91 ± 0.10 (0.58-0.99). The choice of algorithm only made a difference when fractionation was known and the model was overdetermined (FA approach). In this case SIAR and MixSIR had outputs of 0.86 ± 0.11 (0.48-0.96) and 0.96 ± 0.05 (0.79-1.00), respectively. This analysis shows that the choice of dietary tracers and the assumption of consumer trophic modification greatly influence the performance of mixing model dietary reconstructions, and ultimately our understanding of what resources actually support aquatic consumers.
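As background for the comparison above, the sketch below illustrates the sampling-importance-resampling logic of a MixSIR-style Bayesian mixing model on synthetic data. The source signatures, Gaussian likelihood, and all numerical values are illustrative assumptions, not the study's isopod measurements.

```python
# Minimal MixSIR-style sketch: infer source proportions p from a consumer's
# tracer values by importance sampling from a Dirichlet prior.
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_tracers = 3, 4
src_mean = rng.normal(0.0, 5.0, (n_sources, n_tracers))  # source tracer means
src_sd = np.full((n_sources, n_tracers), 1.0)            # source tracer SDs
true_p = np.array([0.7, 0.2, 0.1])
consumer = true_p @ src_mean + rng.normal(0.0, 0.5, n_tracers)

n_draws = 200_000
p = rng.dirichlet(np.ones(n_sources), n_draws)           # flat prior draws
mix_mean = p @ src_mean                                  # mixture means
mix_var = (p ** 2) @ (src_sd ** 2)                       # mixture variances
loglik = -0.5 * np.sum((consumer - mix_mean) ** 2 / mix_var
                       + np.log(2 * np.pi * mix_var), axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()                                             # importance weights
print("posterior mean proportions:", np.round(w @ p, 2))
```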
An a priori DNS study of the shadow-position mixing model
Zhao, Xin-Yu; Bhagatwala, Ankit; Chen, Jacqueline H.; ...
2016-01-15
The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly proposed shadow position mixing model (SPMM) is examined using a DNS database for a temporally evolving dimethyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the predicted locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate for evaluating the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.
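For orientation, the sketch below implements the classic unconditional IEM (interaction by exchange with the mean) model, the simplest relative of the IECM and SPMM models compared above, which additionally condition on velocity or shadow position. The particle count, constants, and initial condition are illustrative assumptions.

```python
# IEM particle mixing sketch: each notional particle's composition relaxes
# toward the ensemble mean at a rate set by the mixing frequency omega.
import numpy as np

rng = np.random.default_rng(1)
phi = rng.choice([0.0, 1.0], size=10_000)   # bimodal initial mixture fraction
c_phi, omega, dt = 2.0, 5.0, 1e-3           # model constant, frequency, step

for _ in range(1000):
    phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt

# The mean is conserved while the variance decays as exp(-c_phi*omega*t).
print(phi.mean(), phi.var())
```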
Mixing Ξ-Ξ' Effects and Static Properties of Heavy Ξ's
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aliev, T. M.; Ozpineci, A.; Zamiralov, V. S.
The importance of the mixing of heavy baryons Ξ-Ξ' with the new quantum numbers for the analysis of their characteristics is shown. The quark model of Ono is used as an example. Masses of the new baryons as well as mixing angles of the Ξ-Ξ' states are obtained. The same reasoning is shown to be valid for the interpolating currents of these baryons in the framework of the QCD sum rules.
Multi-objective Analysis for a Sequencing Planning of Mixed-model Assembly Line
NASA Astrophysics Data System (ADS)
Shimizu, Yoshiaki; Waki, Toshiya; Yoo, Jae Kyu
Diversified customer demands are raising the importance of just-in-time and agile manufacturing much more than before. Accordingly, the introduction of mixed-model assembly lines has become popular as a way to realize small-lot, multi-product production. Since various models are produced on the same assembly line, rational management is of special importance. With this point of view, this study focuses on a sequencing problem for a mixed-model assembly line that includes a paint line as its preceding process. By taking the paint line into account, reducing work-in-process (WIP) inventory between these heterogeneous lines becomes a major concern of the sequencing problem, besides improving production efficiency. We have formulated the sequencing problem as a bi-objective optimization problem that aims to prevent various line stoppages and to reduce the volume of WIP inventory simultaneously, and we have proposed a practical method for the multi-objective analysis. For this purpose, we applied the weighting method to derive the Pareto front, solving the resulting single-objective problem with a meta-heuristic, simulated annealing (SA), as sketched below. Through numerical experiments, we verified the validity of the proposed approach and discussed the significance of trade-off analysis between the conflicting objectives.
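A minimal sketch of that weighting-method-plus-SA scheme on a toy instance follows. The two objective terms, a workload-smoothing proxy for line stoppages and a colour-change count as a paint-line WIP proxy, are assumed stand-ins for the objectives formulated in the study.

```python
# Weighted-sum bi-objective sequencing with simulated annealing on a toy
# mixed-model line; sweeping alpha traces an approximate Pareto front.
import math, random

random.seed(0)
models = ["A"] * 4 + ["B"] * 3 + ["C"] * 3      # demand: 4 A, 3 B, 3 C
work = {"A": 1.0, "B": 1.5, "C": 2.0}           # assembly work content

def f_stoppage(seq):   # workload smoothing over adjacent positions
    w = [work[m] for m in seq]
    return sum((w[i] + w[i + 1]) ** 2 for i in range(len(w) - 1))

def f_wip(seq):        # paint-line colour changes as a WIP proxy
    return sum(a != b for a, b in zip(seq, seq[1:]))

def cost(seq, alpha):
    return alpha * f_stoppage(seq) + (1 - alpha) * f_wip(seq)

def anneal(seq, alpha, temp=10.0, cool=0.995, iters=5000):
    best = cur = seq[:]
    for _ in range(iters):
        cand = cur[:]
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]      # swap two positions
        d = cost(cand, alpha) - cost(cur, alpha)
        if d < 0 or random.random() < math.exp(-d / temp):
            cur = cand
        if cost(cur, alpha) < cost(best, alpha):
            best = cur[:]
        temp *= cool
    return best

for alpha in (0.1, 0.5, 0.9):                    # weight sweep
    s = anneal(models[:], alpha)
    print(alpha, f_stoppage(s), f_wip(s))
```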
Turbulent Mixing of Primary and Secondary Flow Streams in a Rocket-Based Combined Cycle Engine
NASA Technical Reports Server (NTRS)
Cramer, J. M.; Greene, M. U.; Pal, S.; Santoro, R. J.; Turner, Jim (Technical Monitor)
2002-01-01
This viewgraph presentation gives an overview of the turbulent mixing of primary and secondary flow streams in a rocket-based combined cycle (RBCC) engine. A significant RBCC ejector mode database has been generated, detailing single and twin thruster configurations and global and local measurements. On-going analysis and correlation efforts include Marshall Space Flight Center computational fluid dynamics modeling and turbulent shear layer analysis. Potential follow-on activities include detailed measurements of air flow static pressure and velocity profiles, investigations into other thruster spacing configurations, performing a fundamental shear layer mixing study, and demonstrating single-shot Raman measurements.
Groundwater flow processes and mixing in active volcanic systems: the case of Guadalajara (Mexico)
NASA Astrophysics Data System (ADS)
Hernández-Antonio, A.; Mahlknecht, J.; Tamez-Meléndez, C.; Ramos-Leal, J.; Ramírez-Orozco, A.; Parra, R.; Ornelas-Soto, N.; Eastoe, C. J.
2015-09-01
Groundwater chemistry and isotopic data from 40 production wells in the Atemajac and Toluquilla valleys, located in and around the Guadalajara metropolitan area, were determined to develop a conceptual model of groundwater flow processes and mixing. Stable water isotopes (δ2H, δ18O) were used to trace hydrological processes and tritium (3H) to evaluate the relative contribution of modern water in samples. Multivariate analyses, including cluster analysis and principal component analysis, were used to elucidate distribution patterns of constituents and factors controlling groundwater chemistry. Based on this analysis, groundwater was classified into four groups: cold groundwater, hydrothermal groundwater, polluted groundwater and mixed groundwater. Cold groundwater is characterized by low temperature, salinity, and Cl and Na concentrations and is predominantly of Na-HCO3-type. It originates as recharge at "La Primavera" caldera and is found predominantly in wells in the upper Atemajac Valley. Hydrothermal groundwater is characterized by high salinity, temperature, Cl, Na and HCO3, and the presence of minor elements such as Li, Mn and F. It is a mixed-HCO3 type found in wells from Toluquilla Valley and represents regional flow circulation through basaltic and andesitic rocks. Polluted groundwater is characterized by elevated nitrate and sulfate concentrations and is usually derived from urban water cycling and subordinately from agricultural return flow. Mixed groundwaters between cold and hydrothermal components are predominantly found in the lower Atemajac Valley. Twenty-seven groundwater samples contain at least a small fraction of modern water. The application of a multivariate mixing model allowed the mixing proportions of hydrothermal fluids, polluted waters and cold groundwater in sampled water to be evaluated. This study will help local water authorities to identify and assess the extent of groundwater contamination, and to act accordingly. It may be broadly applicable to other active volcanic systems on Earth.
Newsome, Seth D.; Yeakel, Justin D.; Wheatley, Patrick V.; Tinker, M. Tim
2012-01-01
Ecologists are increasingly using stable isotope analysis to inform questions about variation in resource and habitat use from the individual to community level. In this study we investigate data sets from 2 California sea otter (Enhydra lutris nereis) populations to illustrate the advantages and potential pitfalls of applying various statistical and quantitative approaches to isotopic data. We have subdivided these tools, or metrics, into 3 categories: IsoSpace metrics, stable isotope mixing models, and DietSpace metrics. IsoSpace metrics are used to quantify the spatial attributes of isotopic data that are typically presented in bivariate (e.g., δ13C versus δ15N) 2-dimensional space. We review IsoSpace metrics currently in use and present a technique by which uncertainty can be included to calculate the convex hull area of consumers or prey, or both. We then apply a Bayesian-based mixing model to quantify the proportion of potential dietary sources to the diet of each sea otter population and compare this to observational foraging data. Finally, we assess individual dietary specialization by comparing a previously published technique, variance components analysis, to 2 novel DietSpace metrics that are based on mixing model output. As the use of stable isotope analysis in ecology continues to grow, the field will need a set of quantitative tools for assessing isotopic variance at the individual to community level. Along with recent advances in Bayesian-based mixing models, we hope that the IsoSpace and DietSpace metrics described here will provide another set of interpretive tools for ecologists.
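As an illustration of the IsoSpace metrics discussed above, the sketch below computes a convex hull area in bivariate isotope space with resampling-based uncertainty; the consumer values and the per-point uncertainty are synthetic assumptions, not the sea otter data.

```python
# Convex-hull IsoSpace metric with resampling uncertainty: hull area of
# consumers in d13C-d15N space, recomputed over jittered draws.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
d13c = rng.normal(-15.0, 1.5, 30)   # hypothetical consumer d13C values
d15n = rng.normal(12.0, 1.0, 30)    # hypothetical consumer d15N values
pts = np.column_stack([d13c, d15n])
sd = 0.2                            # assumed per-point isotopic uncertainty

areas = []
for _ in range(1000):
    jittered = pts + rng.normal(0.0, sd, pts.shape)
    # In 2-D, ConvexHull.volume is the enclosed area
    areas.append(ConvexHull(jittered).volume)

print(f"hull area = {np.mean(areas):.2f} +/- {np.std(areas):.2f}")
```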
NASA Astrophysics Data System (ADS)
Brooks, J. N.; Hassanein, A.; Sizyuk, T.
2013-07-01
Plasma interactions with mixed-material surfaces are being analyzed using advanced modeling of time-dependent surface evolution/erosion. Simulations use the REDEP/WBC erosion/redeposition code package coupled to the HEIGHTS package ITMC-DYN mixed-material formation/response code, with plasma parameter input from codes and data. We report here on analysis for a DIII-D Mo/C containing tokamak divertor. A DIII-D/DiMES probe experiment simulation predicts that sputtered molybdenum from a 1 cm diameter central spot quickly saturates (˜4 s) in the 5 cm diameter surrounding carbon probe surface, with subsequent re-sputtering and transport to off-probe divertor regions, and with high (˜50%) redeposition on the Mo spot. Predicted Mo content in the carbon agrees well with post-exposure probe data. We discuss implications and mixed-material analysis issues for Be/W mixing at the ITER outer divertor, and Li, C, Mo mixing at an NSTX divertor.
A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.
Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin
2017-02-01
The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.
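The computational core of any mixed MNL is a simulated log-likelihood in which the random coefficient is integrated out by averaging standard logit probabilities over Monte Carlo draws. The sketch below shows that core on synthetic data; the paper's nonlinear predictors and covariates are not reproduced.

```python
# Simulated log-likelihood for a one-coefficient mixed logit: average logit
# probabilities over draws of the random coefficient, then take logs.
import numpy as np

rng = np.random.default_rng(3)
N, J = 500, 3                            # observations, outcome categories
X = rng.normal(size=(N, J))              # one attribute per alternative
y = rng.integers(0, J, size=N)           # observed outcomes

def sim_loglik(b, log_s, n_draws=200):
    # In an estimation loop the draws would be held fixed across iterations.
    betas = b + np.exp(log_s) * rng.standard_normal(n_draws)
    u = betas[:, None, None] * X[None, :, :]          # utilities
    eu = np.exp(u - u.max(axis=2, keepdims=True))
    prob = eu / eu.sum(axis=2, keepdims=True)         # logit probabilities
    p_obs = prob[:, np.arange(N), y].mean(axis=0)     # average over draws
    return np.log(p_obs).sum()

print(sim_loglik(b=0.5, log_s=np.log(0.3)))
```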
B decays in an asymmetric left-right model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Mariana; Hayreter, Alper; Turan, Ismail
2010-08-01
Motivated by recently observed disagreements with the standard model predictions in B decays, we study b → d, s transitions in an asymmetric class of SU(2)_L x SU(2)_R x U(1)_{B-L} models, with a simple one-parameter structure of the right-handed mixing matrix for the quarks, which obeys the constraints from kaon physics. We use experimental constraints on the branching ratios of b → sγ, b → c e ν_e, and B^0_{d,s}-B̄^0_{d,s} mixing to restrict the parameters of the model: g_R/g_L, M_{W_2}, M_{H±}, tan β, as well as the elements of the right-handed quark mixing matrix V^R_{CKM}. We present a comparison with the more commonly used (manifest) left-right symmetric model. Our analysis exposes the parameters most sensitive to b transitions and reveals a large parameter space where left- and right-handed quarks mix differently, opening the possibility of observing marked differences in behavior between the standard model and the left-right model.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of four events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
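The "explosion" step is easy to sketch. Assuming illustrative piece boundaries and a toy two-subject data set (column names hypothetical), each subject contributes one row per hazard piece at risk; a Poisson GLMM with a log-normal random intercept per cluster and an offset of log exposure then plays the role of the frailty model.

```python
# Expand survival data into one row per subject per hazard piece, the data
# layout that lets a Poisson (mixed) model estimate a piecewise hazard.
import numpy as np
import pandas as pd

cuts = np.array([0.0, 1.0, 2.0, 4.0])            # piece boundaries (assumed)
subjects = pd.DataFrame({"id": [1, 2], "time": [1.5, 3.2],
                         "event": [1, 0], "cluster": [1, 1]})

rows = []
for _, s in subjects.iterrows():
    for k in range(len(cuts) - 1):
        start, stop = cuts[k], cuts[k + 1]
        if s.time <= start:
            break                                 # no longer at risk
        exposure = min(s.time, stop) - start
        died = int(s.event and s.time <= stop)
        rows.append({"id": s.id, "piece": k, "cluster": s.cluster,
                     "exposure": exposure, "y": died})

expanded = pd.DataFrame(rows)
print(expanded)
# A Poisson regression of y on piece dummies (plus covariates and a random
# intercept per cluster) with offset=np.log(expanded.exposure) recovers the
# piecewise-constant hazard.
```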
Experiment Analysis and Modelling of Compaction Behaviour of Ag60Cu30Sn10 Mixed Metal Powders
NASA Astrophysics Data System (ADS)
Zhou, Mengcheng; Huang, Shangyu; Liu, Wei; Lei, Yu; Yan, Shiwei
2018-03-01
A novel process method combining powder compaction and sintering was employed to fabricate thin sheets of cadmium-free silver-based filler metals, and the compaction densification behaviour of Ag60Cu30Sn10 mixed metal powders was investigated experimentally. Based on the equivalent density method, the density-dependent Drucker-Prager Cap (DPC) model was introduced to model the powder compaction behaviour. Various experimental procedures were completed to determine the model parameters. The friction coefficients in lubricated and unlubricated dies were experimentally determined. The determined material parameters were validated by experiments and numerical simulation of the powder compaction process using a user subroutine (USDFLD) in ABAQUS/Standard. The good agreement between the simulated and experimental results indicates that the determined model parameters are able to describe the compaction behaviour of the multicomponent mixed metal powders and can be further used for process optimization simulations.
Analysis of the type II robotic mixed-model assembly line balancing problem
NASA Astrophysics Data System (ADS)
Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad
2017-06-01
In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.
Applications of MIDAS regression in analysing trends in water quality
NASA Astrophysics Data System (ADS)
Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.
2014-04-01
We discuss novel statistical methods in analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas the rain data is sampled daily. The advantage of using MIDAS regression is in the flexible and parsimonious modelling of the influence of the rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.
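A minimal sketch of the MIDAS building block, assuming the commonly used exponential Almon weight function and illustrative parameter values: roughly 14 daily rainfall lags are collapsed into one regressor per fortnightly water-quality observation.

```python
# MIDAS aggregation: daily lags enter through a parsimonious two-parameter
# weight function rather than 14 free regression coefficients.
import numpy as np

def exp_almon(theta1, theta2, n_lags):
    k = np.arange(n_lags)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()                        # weights sum to one

rng = np.random.default_rng(4)
daily_rain = rng.gamma(2.0, 3.0, size=140)    # 10 fortnights of daily data

w = exp_almon(0.1, -0.05, 14)                 # illustrative parameters
# one MIDAS regressor per fortnightly water-quality observation,
# most recent day first
X_midas = np.array([w @ daily_rain[i * 14:(i + 1) * 14][::-1]
                    for i in range(10)])
print(np.round(X_midas, 2))
```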
Constitutive Behavior of Mixed Sn-Pb/Sn-3.0Ag-0.5Cu Solder Alloys
NASA Astrophysics Data System (ADS)
Tucker, J. P.; Chan, D. K.; Subbarayan, G.; Handwerker, C. A.
2012-03-01
During the transition from Pb-containing solders to Pb-free solders, joints composed of a mixture of Sn-Pb and Sn-Ag-Cu often result from either mixed assemblies or rework. Comprehensive characterization of the mechanical behavior of these mixed solder alloys resulting in a deformationally complete constitutive description is necessary to predict failure of mixed alloy solder joints. Three alloys with 1 wt.%, 5 wt.%, and 20 wt.% Pb were selected so as to represent reasonable ranges of Pb contamination expected from different 63Sn-37Pb components mixed with Sn-3.0Ag-0.5Cu. Creep and displacement-controlled tests were performed on specially designed assemblies at temperatures of 25°C, 75°C, and 125°C using a double lap shear test setup that ensures a nearly homogeneous state of plastic strain at the joint interface. The observed changes in creep and tensile behavior with Pb additions were related to phase equilibria and microstructure differences observed through differential scanning calorimetric and scanning electron microscopic cross-sectional analysis. As Pb content increased, the steady-state creep strain rates increased, and primary creep decreased. Even 1 wt.% Pb addition was sufficient to induce substantially large creep strains relative to the Sn-3.0Ag-0.5Cu alloy. We describe rate-dependent constitutive models for Pb-contaminated Sn-Ag-Cu solder alloys, ranging from the traditional time-hardening creep model to the viscoplastic Anand model. We illustrate the utility of these constitutive models by examining the inelastic response of a chip-scale package (CSP) under thermomechanical loading through finite-element analysis. The models predict that, as Pb content increases, total inelastic dissipation decreases.
Estimating the numerical diapycnal mixing in the GO5.0 ocean model
NASA Astrophysics Data System (ADS)
Megann, Alex; Nurser, George
2014-05-01
Constant-depth (or "z-coordinate") ocean models such as MOM and NEMO have become the de facto workhorse in climate applications, and have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes (e.g. Hofmann and Maqueda, 2006), and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimations have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2013). It uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. Two approaches to quantifying the numerical diapycnal mixing in this model are described: the first is based on the isopycnal watermass analysis of Lee et al (2002), while the second uses a passive tracer to diagnose mixing across density surfaces. Results from these two methods will be compared and contrasted. Hofmann, M. and Maqueda, M. A. M., 2006. Performance of a second-order moments advection scheme in an ocean general circulation model. JGR-Oceans, 111(C5). Lee, M.-M., Coward, A.C., Nurser, A.G., 2002. Spurious diapycnal mixing of deep waters in an eddy-permitting global ocean model. JPO 32, 1522-1535 Megann, A., Storkey, D., Aksenov, Y., Alderson, S., Calvert, D., Graham, T., Hyder, P., Siddorn, J., and Sinha, B., 2013: GO5.0: The joint NERC-Met Office NEMO global ocean model for use in coupled and forced applications, Geosci. Model Dev. Discuss., 6, 5747-5799,.
Current developments in forensic interpretation of mixed DNA samples (Review).
Hu, Na; Cong, Bin; Li, Shujin; Ma, Chunling; Fu, Lihong; Zhang, Xiaojing
2014-05-01
A number of recent improvements have provided contemporary forensic investigations with a variety of tools to improve the analysis of mixed DNA samples in criminal investigations, producing notable improvements in the analysis of complex trace samples in cases of sexual assault and homicide. Mixed DNA contains DNA from two or more contributors, compounding DNA analysis by combining DNA from one or more major contributors with small amounts of DNA from potentially numerous minor contributors. These samples are characterized by a high probability of drop-out or drop-in combined with elevated stutter, significantly increasing analysis complexity. At some loci, minor contributor alleles may be completely obscured due to amplification bias or over-amplification, creating the illusion of additional contributors. Thus, estimating the number of contributors and separating contributor genotypes at a given locus is significantly more difficult in mixed DNA samples, requiring the application of specialized protocols that have only recently been widely commercialized and standardized. Over the last decade, the accuracy and repeatability of mixed DNA analyses available to conventional forensic laboratories has greatly advanced in terms of laboratory technology, mathematical models and biostatistical software, generating more accurate, rapid and readily available data for legal proceedings and criminal cases.
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Foulley, Jean-Louis; Van Dyk, David A
2000-01-01
This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression models. PMID:14736399
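For orientation, here is a minimal EM iteration for maximum likelihood estimation of the variance components in a balanced random-intercept model, a toy stand-in for Henderson's mixed model. PX-EM accelerates exactly this scheme by adding a working scale parameter on the random effects in each M-step; the paper's REML procedures are more general.

```python
# Basic EM for variance components in a balanced random-intercept model:
# y_ij = mu + u_i + e_ij, u_i ~ N(0, su2), e_ij ~ N(0, se2).
import numpy as np

rng = np.random.default_rng(5)
m, n = 200, 5                                   # groups, obs per group
su2, se2 = 2.0, 1.0                             # true variances
u = rng.normal(0.0, np.sqrt(su2), m)
y = 3.0 + u[:, None] + rng.normal(0.0, np.sqrt(se2), (m, n))

mu, su2_h, se2_h = y.mean(), 1.0, 1.0           # starting values
for _ in range(200):
    # E-step: posterior mean and variance of each random intercept u_i
    shrink = n * su2_h / (n * su2_h + se2_h)
    u_hat = shrink * (y.mean(axis=1) - mu)
    v = su2_h * se2_h / (n * su2_h + se2_h)
    # M-step: update the variances and the fixed effect
    su2_h = np.mean(u_hat ** 2 + v)
    resid = y - mu - u_hat[:, None]
    se2_h = np.mean(resid ** 2 + v)
    mu = np.mean(y - u_hat[:, None])

print(f"sigma_u^2 = {su2_h:.2f}, sigma_e^2 = {se2_h:.2f}")
```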
PDF turbulence modeling and DNS
NASA Technical Reports Server (NTRS)
Hsu, A. T.
1992-01-01
The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in the context of probability density function (pdf) methods. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models. The effect of Coriolis forces on compressible homogeneous turbulence is studied using direct numerical simulation (DNS). The numerical method used in this study is an eighth-order compact difference scheme. Contrary to the conclusions reached by previous DNS studies on incompressible isotropic turbulence, the present results show that the Coriolis force increases the dissipation rate of turbulent kinetic energy, and that anisotropy develops as the Coriolis force increases. The Taylor-Proudman theory does apply, since the derivatives in the direction of the rotation axis vanish rapidly. A closer analysis reveals that the dissipation rate of the incompressible component of the turbulent kinetic energy indeed decreases with a higher rotation rate, consistent with incompressible flow simulations (Bardina), while the dissipation rate of the compressible part increases; the net gain is positive. Inertial waves are observed in the simulation results.
NASA Technical Reports Server (NTRS)
Mizukami, M.; Saunders, J. D.
1995-01-01
The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a two-dimensional (2D) Navier-Stokes flow solver. Parametric studies were performed on turbulence models, computational grids, and bleed models. The computed flowfield was substantially different from the original inviscid design, due to interactions of shocks, boundary layers, and bleed. Good agreement with experimental data was obtained in many aspects. Many of the discrepancies were thought to originate primarily from 3D effects. Therefore, a balance should be struck between expending resources on a high-fidelity 2D simulation and the inherent limitations of 2D analysis. The solutions were fairly insensitive to turbulence models, grids, and bleed models. Overall, the k-ε turbulence model and the bleed models based on unchoked bleed-hole discharge coefficients or uniform velocity are recommended. The 2D Navier-Stokes methods appear to be a useful tool for the design and analysis of supersonic inlets, providing a higher-fidelity simulation of the inlet flowfield than inviscid methods in a reasonable turnaround time.
Application of Lidar Data to the Performance Evaluations of ...
The Tropospheric Ozone (O3) Lidar Network (TOLNet) provides time/height O3 measurements from near the surface to the top of the troposphere, describing spatial-temporal distributions in high fidelity, which is uniquely useful for evaluating the temporal evolution of O3 profiles in air quality models. This presentation describes the application of the Lidar data to the performance evaluation of CMAQ-simulated O3 vertical profiles during the summer of 2014. Two-way coupled WRF-CMAQ simulations with 12 km and 4 km domains centered over Boulder, Colorado were performed during this time period. The analysis of the time series of observed and modeled O3 mixing ratios at different vertical layers indicates that the model frequently underestimated the observed values, and the underestimation was amplified in the middle model layers (~1 km above the ground). When the lightning strikes detected by the National Lightning Detection Network (NLDN) were analyzed along with the observed O3 time series, it was found that the daily maximum O3 mixing ratios correlated well with the lightning strikes in the vicinity of the Lidar station. The analysis of temporal vertical profiles of both observed and modeled O3 mixing ratios on episodic days suggests that the model resolutions (12 km and 4 km) do not make any significant difference for this analysis (at this specific location and simulation period), but high O3 levels in the middle layers were linked to lightning activity that occurred in t
Identification and evaluation of composition in food powder using point-scan Raman spectral imaging
USDA-ARS?s Scientific Manuscript database
This study used Raman spectral imaging coupled with self-modeling mixture analysis (SMA) for identification of three components mixed into a complex food powder mixture. Vanillin, melamine, and sugar were mixed together at 10 different concentration levels (spanning 1% to 10%, w/w) into powdered non...
Statistical basis and outputs of stable isotope mixing models: Comment on Fry (2013)
A recent article by Fry (2013; Mar Ecol Prog Ser 472:1−13) reviewed approaches to solving underdetermined stable isotope mixing systems, and presented a new graphical approach and set of summary statistics for the analysis of such systems. In his review, Fry (2013) mis-characteri...
Reed, Frances M; Fitzgerald, Les; Rae, Melanie
2016-01-01
To highlight philosophical and theoretical considerations for planning a mixed methods research design that can inform a practice model to guide rural district nursing end of life care. Conceptual models of nursing in the community are general and lack guidance for rural district nursing care. A combination of pragmatism and nurse agency theory can provide a framework for ethical considerations in mixed methods research in the private world of rural district end of life care. Reflection on experience gathered in a two-stage qualitative research phase, involving rural district nurses who use advocacy successfully, can inform a quantitative phase for testing and complementing the data. Ongoing data analysis and integration result in generalisable inferences to achieve the research objective. Mixed methods research that creatively combines philosophical and theoretical elements to guide design in the particular ethical situation of community end of life care can be used to explore an emerging field of interest and test the findings for evidence to guide quality nursing practice. Combining philosophy and nursing theory to guide mixed methods research design increases the opportunity for sound research outcomes that can inform a nursing model of care.
Systematic analysis of the unique band gap modulation of mixed halide perovskites.
Kim, Jongseob; Lee, Sung-Hoon; Chung, Choong-Heui; Hong, Ki-Ha
2016-02-14
Solar cells based on organic-inorganic hybrid metal halide perovskites have been proven to be one of the most promising candidates for the next generation of thin film photovoltaic cells. Mixing Br or Cl into I-based perovskites has frequently been tried to enhance cell efficiency and stability. One of the advantages of mixed halides is the modulation of the band gap by controlling the composition of the incorporated halides. However, the reported band gap transition behavior has not yet been resolved. Here a theoretical model is presented to understand the electronic structure variation of metal mixed-halide perovskites through hybrid density functional theory. Comparative calculations in this work suggest that the band gap correction including spin-orbit interaction is essential to describe the band gap changes of mixed halides. In our model, both the lattice variation and the orbital interactions between metal and halides play key roles in determining the band gap changes and band alignments of mixed halides. It is also shown that the band gap of mixed halide thin films can be significantly affected by the distribution of halide composition.
Stratified mixing by microorganisms
NASA Astrophysics Data System (ADS)
Wagner, Gregory; Young, William; Lauga, Eric
2013-11-01
Vertical mixing is of fundamental significance to the general circulation, climate, and life in the ocean. In this work we consider whether organisms swimming at low Reynolds numbers might collectively contribute substantially to vertical mixing. Scaling analysis indicates that the mixing efficiency η, or the ratio between the rate of potential energy conversion and the total work done on the fluid, should scale as η ~ (a/l)^3 in the limit a/l → 0, where a is the size of the organism and l = (νκ/N^2)^(1/4) is an intrinsic length scale of a stratified fluid with kinematic viscosity ν, tracer diffusivity κ, and buoyancy frequency N. A regularized singularity model demonstrates this scaling, indicating that in this same limit η ≈ 1.2 (a/l)^3 for vertical swimming and η ≈ 0.14 (a/l)^3 for horizontal swimming. The model further predicts that the absolute maximum mixing efficiency of an ensemble of randomly oriented organisms is around 6% and that the greatest mixing efficiencies in the ocean (in regions of strong salt stratification) are closer to 0.1%, implying that the total contribution of microorganisms to vertical ocean mixing is negligible.
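Plugging assumed open-ocean values into these formulas shows how small the predicted efficiency is for a single 100 µm swimmer; all numbers below are illustrative assumptions.

```python
# Worked numbers for the scaling above: the intrinsic length scale
# l = (nu*kappa/N^2)**(1/4) and the vertical-swimming efficiency
# eta ~ 1.2*(a/l)**3 for an organism of size a.
nu, kappa = 1e-6, 1e-7        # m^2/s: viscosity, (salt) tracer diffusivity
N2 = 1e-4                     # s^-2: squared buoyancy frequency
a = 1e-4                      # m: a 100-micron swimmer

l = (nu * kappa / N2) ** 0.25
eta = 1.2 * (a / l) ** 3
print(f"l = {l * 100:.2f} cm, eta = {eta:.1e}")   # ~0.56 cm, ~7e-6
```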
Implications of Upwells as Hydrodynamic Jets in a Pulse Jet Mixed System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pease, Leonard F.; Bamberger, Judith A.; Minette, Michael J.
2015-08-01
This report evaluates the physics of the upwell flow in pulse jet mixed systems in the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Although the initial downward flow and radial flow from pulse jet mixers (PJMs) have been analyzed in some detail, the upwells have received considerably less attention despite having significant implications for vessel mixing. Do the upwells behave like jets? How do the upwells scale? When will the central upwell break through? What proportion of the vessel is blended by the upwells themselves? Indeed, how the physics of the central upwell is affected by multiple PJMs (e.g., six in the proposed mixing vessels), non-Newtonian rheology, and significant multicomponent solids loadings remains unexplored. The central upwell must satisfy several criteria to be considered a free jet. First, it must travel for several diameters in a nearly constant direction. Second, its velocity must decay with the inverse of elevation. Third, it should have an approximately Gaussian profile. Fourth, the influence of surface or body forces must be negligible. A combination of historical data in a 12.75 ft test vessel, newly analyzed data from the 8 ft test vessel, and conservation of momentum arguments derived specifically for PJM operating conditions demonstrate that the central upwell satisfies these criteria where vigorous breakthrough is achieved. An essential feature of scaling from one vessel to the next is the requirement that the underlying physics does not change adversely. One may have confidence in scaling if (1) correlations and formulas capture the relevant physics; (2) the underlying physics does not change from the conditions under which it was developed to the conditions of interest; (3) all factors relevant to scaling have been incorporated, including flow, material, and geometric considerations; and (4) the uncertainty in the relationships is sufficiently narrow to meet required specifications. Although the central upwell satisfies these criteria when vigorous breakthrough is achieved, not all available data follow the free jet profile for the central upwell, particularly at lower nozzle velocities. Alternative flow regimes are considered and new models for cloud height, “cavern height,” and the rate of jet penetration (jet celerity) are benchmarked against data to anchor scaling analyses. This analytical modeling effort to provide a technical basis for scaling PJM mixed vessels has significant implications for vessel mixing, because jet physics underlies “cavern” height, cloud height, and the volume of mixing considerations. A new four-parameter cloud height model compares favorably to experimental results. This model is predictive of breakthrough in 8 ft vessel tests with the two-part simulant. Analysis of the upwell in the presence of yield stresses finds evidence of expanding turbulent jets, confined turbulent jets, and confined laminar flows. For each, the critical elevation at which jet momentum depletes is predicted, and these predictions compare favorably with experimental cavern height data. Partially coupled momentum and energy balances suggest that these are limiting cases of a gradual transition from a turbulent expanding flow to a confined laminar flow. This analysis of the central upwell alone lays essential groundwork for complete analysis of mode three mixing (i.e., breakthrough with slow peripheral mixing). Consideration of jet celerity shows that the rate of jet penetration is a governing consideration in breakthrough to the surface.
Estimates of the volume of mixing are presented. This analysis shows that flow along the vessel wall is sluggish, such that the central upwell governs the volume of mixing, laying essential groundwork for a complete analysis of mode three mixing and for estimates of hydrogen release rates from first principles.
A Growth Model for Academic Program Life Cycle (APLC): A Theoretical and Empirical Analysis
ERIC Educational Resources Information Center
Acquah, Edward H. K.
2010-01-01
Academic program life cycle concept states each program's life flows through several stages: introduction, growth, maturity, and decline. A mixed-influence diffusion growth model is fitted to enrolment data on academic programs to analyze the factors determining progress of academic programs through their life cycles. The regression analysis yield…
Shi, J Q; Wang, B; Will, E J; West, R M
2012-11-20
We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.
Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam
2016-01-01
Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
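A minimal sketch of the GLLA building block under assumed settings (embedding dimension 7, a noise-free test signal): time-delay embed the series and project each window onto a local polynomial basis, whose weight matrix W = L(L'L)^{-1} returns the signal and its first two derivatives.

```python
# GLLA-style derivative estimation: embed, then project onto a local
# polynomial basis; columns of the result estimate x, dx/dt, d2x/dt2.
from math import factorial
import numpy as np

def glla(x, dt, embed=7, order=2):
    n = len(x) - embed + 1
    X = np.stack([x[i:i + embed] for i in range(n)])    # embedded windows
    k = (np.arange(embed) - (embed - 1) / 2) * dt       # centred time offsets
    L = np.column_stack([k ** j / factorial(j) for j in range(order + 1)])
    W = L @ np.linalg.inv(L.T @ L)                      # projection weights
    return X @ W                                        # (n, order+1) estimates

t = np.linspace(0.0, 10.0, 501)
est = glla(np.sin(t), dt=t[1] - t[0])
# First-derivative estimates should track cos(t) at the window centres.
print(np.abs(est[:, 1] - np.cos(t[3:-3])).max())
```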
On the Choice of Variable for Atmospheric Moisture Analysis
NASA Technical Reports Server (NTRS)
Dee, Dick P.; DaSilva, Arlindo M.; Atlas, Robert (Technical Monitor)
2002-01-01
The implications of using different control variables for the analysis of moisture observations in a global atmospheric data assimilation system are investigated. A moisture analysis based on either mixing ratio or specific humidity is prone to large extrapolation errors, due to the high variability in space and time of these parameters and to the difficulties in modeling their error covariances. Using the logarithm of specific humidity does not alleviate these problems, and has the further disadvantage that very dry background estimates cannot be effectively corrected by observations. Relative humidity is a better choice from a statistical point of view, because this field is spatially and temporally more coherent and error statistics are therefore easier to obtain. If, however, the analysis is designed to preserve relative humidity in the absence of moisture observations, then the analyzed specific humidity field depends entirely on analyzed temperature changes. If the model has a cool bias in the stratosphere, this will lead to an unstable accumulation of excess moisture there. A pseudo-relative humidity can be defined by scaling the mixing ratio by the background saturation mixing ratio. A univariate pseudo-relative humidity analysis will preserve the specific humidity field in the absence of moisture observations. A pseudo-relative humidity analysis is shown to be equivalent to a mixing ratio analysis with flow-dependent covariances. In the presence of multivariate (temperature-moisture) observations it produces analyzed relative humidity values that are nearly identical to those produced by a relative humidity analysis. Based on a time series analysis of radiosonde observed-minus-background differences, it appears to be more justifiable to neglect specific humidity-temperature correlations (in a univariate pseudo-relative humidity analysis) than to neglect relative humidity-temperature correlations (in a univariate relative humidity analysis). A pseudo-relative humidity analysis is easily implemented in an existing moisture analysis system, by simply scaling observed-minus-background moisture residuals prior to solving the analysis equation, and rescaling the analyzed increments afterward.
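The scaling at the heart of the pseudo-relative humidity analysis is simple enough to sketch. The Tetens-type saturation formula and all numerical values below are assumed stand-ins for the assimilation system's own routines.

```python
# Pseudo-relative humidity: scale observed-minus-background moisture
# residuals by the background saturation mixing ratio before the analysis
# solve, and rescale the analyzed increments afterwards.
import numpy as np

def q_sat(T_celsius, p_hpa):
    """Saturation mixing ratio (kg/kg) via a Tetens-type formula (assumed)."""
    es = 6.112 * np.exp(17.67 * T_celsius / (T_celsius + 243.5))  # hPa
    return 0.622 * es / (p_hpa - es)

T_bg, p = 10.0, 850.0                 # background temperature (C), pressure
q_bg, q_ob = 0.004, 0.0055            # background and observed mixing ratio

qs = q_sat(T_bg, p)
resid_prh = (q_ob - q_bg) / qs        # scaled residual fed to the analysis
# ... solve the analysis in pseudo-RH space, then rescale the increment:
incr_q = resid_prh * qs               # (identity here; analysis weights omitted)
print(qs, resid_prh, incr_q)
```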
Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J
2009-06-01
We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any way of improving the mixing will result in both speeding up the methods and more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve mixing of chains and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
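The data expansion described above can be sketched as follows, assuming a hypothetical cows-within-herds layout that echoes the mastitis example; the expanded person-period file is what makes the model a standard multilevel binary response model, and it is on such enlarged data sets that the reparameterisations improve MCMC mixing.

```python
# Expand discrete-time survival data into one row per subject per period at
# risk, with y = 1 only in the period an event occurs.
import pandas as pd

cows = pd.DataFrame({"cow": [1, 2, 3], "herd": [1, 1, 2],
                     "last_period": [3, 2, 4], "event": [1, 0, 1]})

rows = [{"cow": c.cow, "herd": c.herd, "period": t,
         "y": int(c.event and t == c.last_period)}
        for c in cows.itertuples()
        for t in range(1, c.last_period + 1)]

expanded = pd.DataFrame(rows)
print(expanded)
# A multilevel logistic regression of y on period dummies, with random
# effects for cow and herd, then estimates the discrete-time hazard.
```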
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
ERIC Educational Resources Information Center
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
Honnavar, Gajanan V; Ramesh, K P; Bhat, S V
2014-01-23
The mixed alkali metal effect is a long-standing problem in glasses. Electron paramagnetic resonance (EPR) is used by several researchers to study the mixed alkali metal effect, but a detailed analysis of the nearest neighbor environment of the glass former using spin-Hamiltonian parameters was elusive. In this study we have prepared a series of vanadate glasses having general formula (mol %) 40 V2O5-30BaF2-(30 - x)LiF-xRbF with x = 5, 10, 15, 20, 25, and 30. Spin-Hamiltonian parameters of V(4+) ions were extracted by simulating and fitting to the experimental spectra using EasySpin. From the analysis of these parameters it is observed that the replacement of lithium ions by rubidium ions follows a "preferential substitution model". Using this proposed model, we were able to account for the observed variation in the ratio of the g parameter, which goes through a maximum. This reflects an asymmetric to symmetric changeover of the alkali metal ion environment around the vanadium site. Further, this model also accounts for the variation in oxidation state of vanadium ion, which was confirmed from the variation in signal intensity of EPR spectra.
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests become increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7 % and values of GOF above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
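Before any uncertainty analysis, the deterministic core of such a mixing model is a constrained least-squares unmixing. The sketch below uses synthetic source signatures and imposes the sum-to-one constraint through a heavily weighted extra equation; non-negativity comes from the NNLS solver.

```python
# Constrained least-squares unmixing: find non-negative source proportions,
# summing to one, that best reproduce a mixture's tracer concentrations.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
n_sources, n_tracers = 5, 9
S = rng.uniform(10.0, 100.0, (n_sources, n_tracers))  # source signatures
true_p = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
mix = true_p @ S + rng.normal(0.0, 0.5, n_tracers)    # lab-blended mixture

# Solve min ||A p - b|| with p >= 0 and a heavily weighted sum(p) = 1 row
w = 1e3
A = np.vstack([S.T, w * np.ones(n_sources)])
b = np.concatenate([mix, [w]])
p_hat, _ = nnls(A, b)
print(np.round(p_hat, 3))    # compare with true_p
```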
A Laterally-Mobile Mixed Polymer/Polyelectrolyte Brush Undergoes a Macroscopic Phase Separation
NASA Astrophysics Data System (ADS)
Lee, Hoyoung; Park, Hae-Woong; Tsouris, Vasilios; Choi, Je; Mustafa, Rafid; Lim, Yunho; Meron, Mati; Lin, Binhua; Won, You-Yeon
2013-03-01
We studied mixed PEO and PDMAEMA brushes. The question we attempted to answer was: When the chain grafting points are laterally mobile, how will this lateral mobility influence the structure and phase behavior of the mixed brush? Two different model mixed PEO/PDMAEMA brush systems were prepared: a mobile mixed brush by spreading a mixture of two diblock copolymers, PEO-PnBA and PDMAEMA-PnBA, onto the air-water interface, and an inseparable mixed brush using a PEO-PnBA-PDMAEMA triblock copolymer having respective brush molecular weights matched to those of the diblock copolymers. These two systems were investigated by surface pressure-area isotherm, X-ray reflectivity and AFM imaging measurements. The results suggest that the mobile mixed brush undergoes a lateral macroscopic phase separation at high chain grafting densities, whereas the inseparable system is only microscopically phase separated under comparable brush density conditions. We also conducted an SCF analysis of the phase behavior of the mixed brush system. This analysis further supported the experimental findings. The macroscopic phase separation observed in the mobile system is in contrast to the microphase separation behavior commonly observed in two-dimensional laterally-mobile small molecule mixtures.
Drug awareness in adolescents attending a mental health service: analysis of longitudinal data.
Arnau, Jaume; Bono, Roser; Díaz, Rosa; Goti, Javier
2011-11-01
One of the procedures used most recently with longitudinal data is linear mixed models. In the context of health research the increasing number of studies that now use these models bears witness to the growing interest in this type of analysis. This paper describes the application of linear mixed models to a longitudinal study of a sample of Spanish adolescents attending a mental health service, the aim being to investigate their knowledge about the consumption of alcohol and other drugs. More specifically, the main objective was to compare the efficacy of a motivational interviewing programme with a standard approach to drug awareness. The models used to analyse the overall indicator of drug awareness were as follows: (a) unconditional linear growth curve model; (b) growth model with subject-associated variables; and (c) individual curve model with predictive variables. The results showed that awareness increased over time and that the variable 'schooling years' explained part of the between-subjects variation. The effect of motivational interviewing was also significant.
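A hedged sketch of the unconditional linear growth curve model (model (a) above) using Python's statsmodels; the data frame, file name, and column names are placeholders, not the study's actual variables, and the group-by-time interaction term is an illustrative way to test the programme effect.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per assessment, with columns:
# 'awareness' (drug awareness score), 'time' (months since baseline),
# 'group' (motivational interviewing vs. standard), 'subject' (id).
df = pd.read_csv("adolescent_awareness.csv")  # hypothetical file

# Random-intercept, random-slope growth curve: awareness trajectories
# vary by subject; the group-by-time interaction tests whether the
# motivational interviewing arm gains awareness faster over time.
model = smf.mixedlm("awareness ~ time * group", df,
                    groups=df["subject"], re_formula="~time")
result = model.fit(reml=True)
print(result.summary())
```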
Schulz, Vincent; Chen, Min; Tuck, David
2010-01-01
Background Genotyping platforms such as single nucleotide polymorphism (SNP) arrays are powerful tools to study genomic aberrations in cancer samples. Allele specific information from SNP arrays provides valuable information for interpreting copy number variation (CNV) and allelic imbalance including loss-of-heterozygosity (LOH) beyond that obtained from the total DNA signal available from array comparative genomic hybridization (aCGH) platforms. Several algorithms based on hidden Markov models (HMMs) have been designed to detect copy number changes and copy-neutral LOH making use of the allele information on SNP arrays. However, heterogeneity in clinical samples, due to stromal contamination and somatic alterations, complicates analysis and interpretation of these data. Methods We have developed MixHMM, a novel hidden Markov model using hidden states based on chromosomal structural aberrations. MixHMM allows CNV detection for copy numbers up to 7 and allows more complete and accurate description of other forms of allelic imbalance, such as increased copy number LOH or imbalanced amplifications. MixHMM also incorporates a novel sample mixing model that allows detection of tumor CNV events in heterogeneous tumor samples, where cancer cells are mixed with a proportion of stromal cells. Conclusions We validate MixHMM and demonstrate its advantages with simulated samples, clinical tumor samples and a dilution series of mixed samples. We have shown that the CNVs of cancer cells in a tumor sample contaminated with up to 80% of stromal cells can be detected accurately using Illumina BeadChip and MixHMM. Availability MixHMM is available as a Python package, along with some other useful tools, at http://genecube.med.yale.edu:8080/MixHMM. PMID:20532221
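The sample mixing idea can be illustrated with the standard expectation for the B-allele frequency (BAF) of a locus in a tumor-stroma mixture; this is a generic mixture calculation, not MixHMM's actual emission model, and all names are illustrative.

```python
def expected_baf(b_tumor, c_tumor, purity):
    """Expected B-allele frequency for a mixed tumor/stromal sample.

    b_tumor : B-allele copy number in the tumor cells
    c_tumor : total copy number in the tumor cells
    purity  : fraction of tumor cells in the sample (1 - stromal fraction)
    Stromal cells are assumed diploid and heterozygous (1 B allele of 2).
    """
    b = purity * b_tumor + (1 - purity) * 1
    c = purity * c_tumor + (1 - purity) * 2
    return b / c

# Copy-neutral LOH (2 copies, both B) in a sample with 30% tumor cells:
print(expected_baf(b_tumor=2, c_tumor=2, purity=0.3))  # 0.5 shifts to 0.65
```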
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.
Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano
2017-11-08
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. Copyright © 2017 the authors 0270-6474/17/3711021-16$15.00/0.
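A minimal sketch of the circuit idea: a random feedforward layer followed by a simple Hebbian update that strengthens weights between co-active inputs and outputs, with a normalisation step to keep weights bounded. The network sizes, learning rate, and task encoding are arbitrary stand-ins, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, eta = 20, 50, 0.1

# Task conditions encoded as random binary input patterns
conditions = rng.integers(0, 2, size=(8, n_in)).astype(float)

# Random feedforward connectivity
W = rng.normal(0, 1 / np.sqrt(n_in), size=(n_out, n_in))

def responses(W, X):
    return np.maximum(W @ X.T, 0.0)  # rectified-linear responses

for _ in range(100):
    x = conditions[rng.integers(len(conditions))]
    h = np.maximum(W @ x, 0.0)
    W += eta * np.outer(h, x)  # Hebbian: co-activity strengthens weights
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded

# After learning, responses across conditions become more structured
print(responses(W, conditions).round(2))
```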
Flow analysis for efficient design of wavy structured microchannel mixing devices
NASA Astrophysics Data System (ADS)
Kanchan, Mithun; Maniyeri, Ranjith
2018-04-01
Microfluidics is a rapidly growing field of applied research which is strongly driven by the demands of bio-technology and medical innovation. Lab-on-chip (LOC) is one such application which deals with integrating a bio-laboratory on a micro-channel based single fluidic chip. Since fluid flow in such devices is restricted to the laminar regime, designing an efficient passive modulator to induce chaotic mixing in such diffusion-based flow is a major challenge. In the present work, two-dimensional numerical simulation of viscous incompressible flow is carried out using the immersed boundary method (IBM) to obtain an efficient design for wavy structured micro-channel mixing devices. The continuity and Navier-Stokes equations governing the flow are solved by a fractional step based finite volume method on a staggered Cartesian grid system. IBM uses Eulerian co-ordinates to describe fluid flow and Lagrangian co-ordinates to describe the solid boundary. A Dirac delta function is used to couple both these co-ordinate variables. A tether forcing term is used to impose the no-slip boundary condition at the interface between the wavy structure and the fluid. Fluid flow analysis with varying Reynolds number is carried out for four wavy structure models and one straight-line model. By analyzing fluid accumulation zones and flow velocities, it can be concluded that the straight-line structure mixes better at low Reynolds numbers and Model 2 at higher Reynolds numbers. Thus wavy structures can be incorporated in micro-channels to improve mixing efficiency.
Modelling and simulation of passive Lab-on-a-Chip (LoC) based micromixer for clinical application
NASA Astrophysics Data System (ADS)
Saikat, Chakraborty; Sharath, M.; Srujana, M.; Narayan, K.; Pattnaik, Prasant Kumar
2016-03-01
In biomedical applications, the micromixer is an important component because many processes require rapid and efficient mixing. At the micro scale, flow is laminar owing to the small channel size, which enables controlled rapid mixing. Rapid mixing reduces analysis time and allows high throughput. In LoC applications, micromixers are used to mix fluids in devices that require efficient mixing; such rapid micromixers are useful in applications such as DNA/RNA synthesis, drug delivery systems, and biological agent detection. In this work, we design and simulate a microfluidic passive rapid micromixer for lab-on-a-chip applications.
NASA Astrophysics Data System (ADS)
Howells, A. E.; Oiler, J.; Fecteau, K.; Boyd, E. S.; Shock, E.
2014-12-01
The parameters influencing species diversity in natural ecosystems are difficult to assess due to the long and experimentally prohibitive timescales needed to develop causative relationships among measurements. Ecological diversity-disturbance models suggest that disturbance is a mechanism for increased species diversity, allowing for coexistence of species at an intermediate level of disturbance. Observing this mechanism often requires long timescales, such as the succession of a forest after a fire. In this study we evaluated the effect of mixing of two end member hydrothermal fluids on the diversity and structure of a microbial community where disturbance occurs on small temporal and spatial scales. Outflow channels from two hot springs of differing geochemical composition in Yellowstone National Park, one at pH 3.3 and 36 °C and the other at pH 7.6 and 61 °C, flow together to create a mixing zone on the order of a few meters. Geochemical measurements were made at both incoming streams and at a site of complete mixing downstream of the mixing zone, at pH 6.5 and 46 °C. Compositions were estimated across the mixing zone at 1 cm intervals using microsensor temperature and conductivity measurements and a mixing model. Qualitatively, there are four distinct ecotones existing over ranges in temperature and pH across the mixing zone. Community analysis of the 16S rRNA genes of these ecotones shows a peak in diversity at maximal mixing. Principal component analysis of community 16S rRNA genes reflects coexistence of species, with communities at maximal mixing plotting intermediate to communities at the distal ends of the mixing zone. These spatial biological and geochemical observations suggest that the mixing zone is a dynamic ecosystem where geochemistry and biological diversity are governed by changes in the flow rate and geochemical composition of the two hot spring sources. In ecology, understanding how environmental disruption increases species diversity is a foundation for ecosystem conservation. By studying a hot spring environment where detailed measurements of geochemical variation and community diversity can be made at small spatial scales, the mechanisms by which maximal diversity is achieved can be tested and may assist in applications of diversity-disturbance models for larger ecosystems.
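The mixing fraction along the outflow channels can be estimated from any conservative tracer measured in the two end-member springs and at a point in the mixing zone; the sketch below uses conductivity, mirroring the microsensor approach described above, with made-up values.

```python
def mixing_fraction(c_mix, c_a, c_b):
    """Fraction of end-member A in a two-end-member conservative mixture."""
    return (c_mix - c_b) / (c_a - c_b)

# Illustrative conductivities (µS/cm) for the two springs and a mixed site
f = mixing_fraction(c_mix=1450.0, c_a=2100.0, c_b=800.0)
print(f"fraction of spring A: {f:.2f}")  # 0.50

# The same fraction predicts the temperature at the mixed site,
# assuming heat behaves approximately conservatively over a few meters
t_pred = f * 36.0 + (1 - f) * 61.0
print(f"predicted temperature: {t_pred:.1f} °C")
```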
Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus
2017-02-01
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
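A compact sketch of the fixed versus mixed RSA contrast described above: fixed RSA correlates the model's raw representational dissimilarity matrix (RDM) with the brain RDM, while mixed RSA first fits a linear reweighting from model features to response channels on training stimuli. The synthetic arrays and the ridge penalty are placeholders, not the study's data or pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 60, 40, 100, 200

model_feats = rng.normal(size=(n_train + n_test, n_feat))
# Synthetic "brain" responses: a linear remix of model features plus noise
brain = (model_feats @ rng.normal(size=(n_feat, n_vox))
         + rng.normal(size=(n_train + n_test, n_vox)))

def rdm(X):
    return pdist(X, metric="correlation")  # condition-pair dissimilarities

# Fixed RSA: compare model and brain RDMs on the test stimuli as-is
fixed_r = spearmanr(rdm(model_feats[n_train:]), rdm(brain[n_train:]))[0]

# Mixed RSA: fit one weight per feature and voxel on training stimuli,
# then build the predicted RDM for the held-out stimuli
reg = Ridge(alpha=10.0).fit(model_feats[:n_train], brain[:n_train])
mixed_r = spearmanr(rdm(reg.predict(model_feats[n_train:])),
                    rdm(brain[n_train:]))[0]

print(f"fixed RSA: {fixed_r:.2f}, mixed RSA: {mixed_r:.2f}")
```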
How to test validity in orthodontic research: a mixed dentition analysis example.
Donatelli, Richard E; Lee, Shin-Jae
2015-02-01
The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
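A short sketch of the comparison between simple split validation and leave-one-out cross-validation for a regression-based prediction such as a mixed dentition analysis; the synthetic data stand in for tooth-width measurements and are not the study's sample.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split

rng = np.random.default_rng(2)
X = rng.normal(7.0, 0.5, size=(40, 4))                  # e.g. measured incisor widths
y = X @ [1.2, 0.8, 0.5, 0.3] + rng.normal(0, 0.4, 40)   # widths to be predicted

# Traditional simple validation: one random train/test split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
simple_err = np.mean(np.abs(LinearRegression().fit(X_tr, y_tr).predict(X_te) - y_te))

# Leave-one-out cross-validation: every case is held out exactly once
loo_err = -cross_val_score(LinearRegression(), X, y,
                           cv=LeaveOneOut(),
                           scoring="neg_mean_absolute_error").mean()

print(f"simple validation MAE: {simple_err:.3f}, LOOCV MAE: {loo_err:.3f}")
```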
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Analyzing Association Mapping in Pedigree-Based GWAS Using a Penalized Multitrait Mixed Model
Liu, Jin; Yang, Can; Shi, Xingjie; Li, Cong; Huang, Jian; Zhao, Hongyu; Ma, Shuangge
2017-01-01
Genome-wide association studies (GWAS) have led to the identification of many genetic variants associated with complex diseases in the past 10 years. Penalization methods, with significant numerical and statistical advantages, have been extensively adopted in analyzing GWAS. This study has been partly motivated by the analysis of Genetic Analysis Workshop (GAW) 18 data, which have two notable characteristics. First, the subjects are from a small number of pedigrees and hence related. Second, for each subject, multiple correlated traits have been measured. Most of the existing penalization methods assume independence between subjects and traits and can be suboptimal. There are a few methods in the literature based on mixed modeling that can accommodate correlations. However, they cannot fully accommodate the two types of correlations while conducting effective marker selection. In this study, we develop a penalized multitrait mixed modeling approach. It accommodates the two different types of correlations and includes several existing methods as special cases. Effective penalization is adopted for marker selection. Simulation demonstrates its satisfactory performance. The GAW 18 data are analyzed using the proposed method. PMID:27247027
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software
Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for an arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
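The reduction in sample size from adding repeated measures can be illustrated with the standard compound-symmetry approximation for comparing two group means; this is not the paper's generalized-mixed-model formula, but it conveys the same mechanism. Symbols follow the usual conventions (effect size delta, within-subject correlation rho, m post-randomization measures).

```python
from scipy.stats import norm

def n_per_group(delta, sigma, rho, m, alpha=0.05, power=0.8):
    """Sample size per arm for comparing means of m repeated measures.

    Assumes compound symmetry: each subject's m post-randomization
    measures share variance sigma**2 and pairwise correlation rho,
    so the variance of a subject mean is sigma**2 * (1 + (m-1)*rho) / m.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_mean = sigma**2 * (1 + (m - 1) * rho) / m
    return 2 * var_mean * (z / delta) ** 2

for m in (1, 2, 4, 8):
    print(m, round(n_per_group(delta=0.5, sigma=1.0, rho=0.5, m=m), 1))
# more repeated measures -> smaller n, with diminishing returns when rho > 0
```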
Multilevel Models for Binary Data
ERIC Educational Resources Information Center
Powers, Daniel A.
2012-01-01
The methods and models for categorical data analysis cover considerable ground, ranging from regression-type models for binary and binomial data, count data, to ordered and unordered polytomous variables, as well as regression models that mix qualitative and continuous data. This article focuses on methods for binary or binomial data, which are…
Chughtai, A A; Qadeer, E; Khan, W; Hadi, H; Memon, I A
2013-03-01
To improve involvement of the private sector in the national tuberculosis (TB) programme in Pakistan various public-private mix projects were set up between 2004 and 2009. A retrospective analysis of data was made to study 6 different public-private mix models for TB control in Pakistan and estimate the contribution of the various private providers to TB case notification and treatment outcome. The number of TB cases notified through the private sector increased significantly from 77 cases in 2004 to 37,656 in 2009. Among the models, the nongovernmental organization model made the greatest contribution to case notification (58.3%), followed by the hospital-based model (18.9%). Treatment success was highest for the district-led model (94.1%) and lowest for the hospital-based model (74.2%). The private sector made an important contribution to the national data through the various public-private mix projects. Issues of sustainability and the lack of treatment supporters are discussed as reasons for lack of success of some projects.
Zhang, Z; Guillaume, F; Sartelet, A; Charlier, C; Georges, M; Farnir, F; Druet, T
2012-10-01
In many situations, genome-wide association studies are performed in populations presenting stratification. Mixed models including a kinship matrix accounting for genetic relatedness among individuals have been shown to correct for population and/or family structure. Here we extend this methodology to generalized linear mixed models which properly model data under various distributions. In addition we perform association with ancestral haplotypes inferred using a hidden Markov model. The method was shown to properly account for stratification under various simulated scenarios presenting population and/or family structure. Use of ancestral haplotypes resulted in higher power than SNPs on simulated datasets. Application to real data demonstrates the usefulness of the developed model. Full analysis of a dataset with 4600 individuals and 500 000 SNPs was performed in 2 h 36 min and required 2.28 Gb of RAM. The software GLASCOW can be freely downloaded from www.giga.ulg.ac.be/jcms/prod_381171/software. Contact: francois.guillaume@jouy.inra.fr. Supplementary data are available at Bioinformatics online.
A Unified Development of Basis Reduction Methods for Rotor Blade Analysis
NASA Technical Reports Server (NTRS)
Ruzicka, Gene C.; Hodges, Dewey H.; Rutkowski, Michael (Technical Monitor)
2001-01-01
The axial foreshortening effect plays a key role in rotor blade dynamics, but approximating it accurately in reduced basis models has long posed a difficult problem for analysts. Recently, though, several methods have been shown to be effective in obtaining accurate, reduced basis models for rotor blades. These methods are the axial elongation method, the mixed finite element method, and the nonlinear normal mode method. The main objective of this paper is to demonstrate the close relationships among these methods, which are seemingly disparate at first glance. First, the difficulties inherent in obtaining reduced basis models of rotor blades are illustrated by examining the modal reduction accuracy of several blade analysis formulations. It is shown that classical, displacement-based finite elements are ill-suited for rotor blade analysis because they can't accurately represent the axial strain in modal space, and that this problem may be solved by employing the axial force as a variable in the analysis. It is shown that the mixed finite element method is a convenient means for accomplishing this, and the derivation of a mixed finite element for rotor blade analysis is outlined. A shortcoming of the mixed finite element method is that it increases the number of variables in the analysis. It is demonstrated that this problem may be rectified by solving for the axial displacements in terms of the axial forces and the bending displacements. Effectively, this procedure constitutes a generalization of the widely used axial elongation method to blades of arbitrary topology. The procedure is developed first for a single element, and then extended to an arbitrary assemblage of elements of arbitrary type. Finally, it is shown that the generalized axial elongation method is essentially an approximate solution for an invariant manifold that can be used as the basis for a nonlinear normal mode.
Li, Tao; Sun, Guihua; Ma, Shengzhong; Liang, Kai; Yang, Chupeng; Li, Bo; Luo, Weidong
2016-11-15
Concentration, spatial distribution, composition and sources of polycyclic aromatic hydrocarbons (PAHs) were investigated based on measurements of 16 PAH compounds in surface sediments of the western Taiwan Strait. Total PAH concentrations ranged from 2.41 to 218.54 ng g⁻¹. Cluster analysis identified three site clusters representing the northern, central and southern regions. Sedimentary PAHs mainly originated from a mixture of pyrolytic and petrogenic sources in the north, from pyrolytic sources in the central region, and from petrogenic sources in the south. An end-member mixing model was performed using PAH compound data to estimate mixing proportions for unknown end-members (i.e., extreme-value sample points) proposed by principal component analysis (PCA). The results showed that the analyzed samples can be expressed as mixtures of three end-members, and that the mixing of different end-members was strongly related to the transport pathway controlled by two currents, which alternately prevail in the Taiwan Strait during different seasons. Copyright © 2016. Published by Elsevier Ltd.
Analysis of mixed traffic flow with human-driving and autonomous cars based on car-following model
NASA Astrophysics Data System (ADS)
Zhu, Wen-Xing; Zhang, H. M.
2018-04-01
We investigated mixed traffic flow with human-driving and autonomous cars. A new mathematical model with adjustable sensitivity and a smooth factor was proposed to describe the autonomous car's moving behavior, in which the smooth factor is used to balance the front and back headway in a flow. A lemma and a theorem were proved to support the stability criteria of the traffic flow. A series of simulations were carried out to analyze the mixed traffic flow. The fundamental diagrams were obtained from the numerical simulation results. Varying the sensitivity and smooth factor of the autonomous cars affects traffic flux, which shows opposite trends with increasing parameter values before and after the critical density. Moreover, the sensitivity of sensors and the smooth factor play an important role in stabilizing the mixed traffic flow and suppressing traffic jams.
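A hedged sketch of the kind of model described above, since the paper's exact equations are not reproduced here: an optimal-velocity-style car-following rule in which autonomous cars react, with sensitivity a, to a smooth-factor-weighted combination of front and back headways, while human drivers respond to the front headway only. The parameter names and the optimal velocity function are illustrative assumptions.

```python
import numpy as np

def optimal_velocity(headway, v_max=30.0, h_c=25.0):
    # Illustrative optimal-velocity function saturating at v_max
    return v_max * (np.tanh((headway - h_c) / 10.0) + np.tanh(h_c / 10.0)) / 2

def step(x, v, is_auto, a=1.0, p=0.7, dt=0.1, road=1000.0):
    """One Euler step on a ring road; p is the smooth factor weighting
    the front headway against the back headway for autonomous cars."""
    front = (np.roll(x, -1) - x) % road
    back = (x - np.roll(x, 1)) % road
    # Human drivers use the front headway only; autonomous cars balance
    # front and back headways via the smooth factor p.
    eff = np.where(is_auto, p * front + (1 - p) * back, front)
    dv = a * (optimal_velocity(eff) - v)
    return x + v * dt, v + dv * dt

n = 20
x = np.sort(np.random.default_rng(3).uniform(0, 1000.0, n))
v = np.full(n, 15.0)
is_auto = np.arange(n) % 2 == 0  # 50% penetration of autonomous cars
for _ in range(2000):
    x, v = step(x, v, is_auto)
print(f"mean speed after relaxation: {v.mean():.2f} m/s")
```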
NASA Astrophysics Data System (ADS)
Christian, Kenneth E.; Brune, William H.; Mao, Jingqiu; Ren, Xinrong
2018-02-01
Making sense of modeled atmospheric composition requires not only comparison to in situ measurements but also knowing and quantifying the sensitivity of the model to its input factors. Using a global sensitivity method involving the simultaneous perturbation of many chemical transport model input factors, we find the model uncertainty for ozone (O3), hydroxyl radical (OH), and hydroperoxyl radical (HO2) mixing ratios, and apportion this uncertainty to specific model inputs for the DC-8 flight tracks corresponding to the NASA Intercontinental Chemical Transport Experiment (INTEX) campaigns of 2004 and 2006. In general, when uncertainties in modeled and measured quantities are accounted for, we find agreement between modeled and measured oxidant mixing ratios with the exception of ozone during the Houston flights of the INTEX-B campaign and HO2 for the flights over the northernmost Pacific Ocean during INTEX-B. For ozone and OH, modeled mixing ratios were most sensitive to a bevy of emissions, notably lightning NOx, various surface NOx sources, and isoprene. HO2 mixing ratios were most sensitive to CO and isoprene emissions as well as the aerosol uptake of HO2. With ozone and OH being generally overpredicted by the model, we find better agreement between modeled and measured vertical profiles when reducing NOx emissions from surface as well as lightning sources.
Linear Mixed Models: GUM and Beyond
NASA Astrophysics Data System (ADS)
Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens
2014-04-01
In Annex H.5, the Guide to the Evaluation of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues, and to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
NASA Astrophysics Data System (ADS)
Schilling, Oliver S.; Gerber, Christoph; Partington, Daniel J.; Purtschert, Roland; Brennwald, Matthias S.; Kipfer, Rolf; Hunkeler, Daniel; Brunner, Philip
2017-12-01
To provide a sound understanding of the sources, pathways, and residence times of groundwater in alluvial river-aquifer systems, a combined multitracer and modeling experiment was carried out in an important alluvial drinking water wellfield in Switzerland. 222Rn, 3H/3He, atmospheric noble gases, and the novel 37Ar-method were used to quantify residence times and mixing ratios of water from different sources. With a half-life of 35.1 days, 37Ar made it possible to close a critical observational time gap between 222Rn and 3H/3He for residence times of weeks to months. Covering the entire range of residence times of groundwater in alluvial systems revealed that, to quantify the fractions of water from different sources in such systems, atmospheric noble gases and helium isotopes are tracers well suited for end-member mixing analysis. A comparison between the tracer-based mixing ratios and mixing ratios simulated with a fully-integrated, physically-based flow model showed that models calibrated only against hydraulic heads cannot reliably reproduce mixing ratios or residence times of alluvial river-aquifer systems. However, the tracer-based mixing ratios allowed the identification of an appropriate flow model parametrization. Consequently, for alluvial systems, we recommend combining multitracer studies that cover all relevant residence times with fully-coupled, physically-based flow modeling to better characterize the complex interactions of river-aquifer systems.
A mathematical model for the transfer of soil solutes to runoff under water scouring.
Yang, Ting; Wang, Quanjiu; Wu, Laosheng; Zhang, Pengyu; Zhao, Guangxu; Liu, Yanli
2016-11-01
The transfer of nutrients from soil to runoff often causes unexpected pollution in water bodies. In this study, a mathematical model that relates to the detachment of soil particles by water flow and the degree of mixing between overland flow and soil nutrients was proposed. The model assumes that the mixing depth is an integral of average water flow depth, and it was evaluated by experiments with three water inflow rates to bare soil surfaces and to surfaces with eight treatments of different stone coverages. The model-predicted outflow rates were compared with the experimentally observed data to test the accuracy of the infiltration parameters obtained by curve fitting the models to the data. Further analysis showed that the comprehensive mixing coefficient (ke) was linearly correlated with Reynolds number Re (R² > 0.9), and this relationship was verified by comparing the simulated potassium concentration and cumulative mass with observed data, respectively. The best performance in the bias error analysis (Nash-Sutcliffe coefficient of efficiency (NS), relative error (RE) and the coefficient of determination (R²)) showed that the data predicted by the proposed model were in good agreement with the measured data. Thus the model can be used to guide soil-water and fertilization management to minimize nutrient runoff from cropland. Copyright © 2016 Elsevier B.V. All rights reserved.
Large eddy simulation model for wind-driven sea circulation in coastal areas
NASA Astrophysics Data System (ADS)
Petronio, A.; Roman, F.; Nasello, C.; Armenio, V.
2013-12-01
In the present paper a state-of-the-art large eddy simulation model (LES-COAST), suited for the analysis of water circulation and mixing in closed or semi-closed areas, is presented and applied to the study of the hydrodynamic characteristics of the Muggia bay, the industrial harbor of the city of Trieste, Italy. The model solves the non-hydrostatic, unsteady Navier-Stokes equations, under the Boussinesq approximation for temperature and salinity buoyancy effects, using a novel, two-eddy viscosity Smagorinsky model for the closure of the subgrid-scale momentum fluxes. The model employs: a simple and effective technique to take into account wind-stress inhomogeneity related to the blocking effect of emerged structures, which, in turn, can drive local-scale, short-term pollutant dispersion; a new nesting procedure to reconstruct instantaneous, turbulent velocity components, temperature and salinity at the open boundaries of the domain using data coming from large-scale circulation models (LCM). Validation tests have shown that the model reproduces field measurement satisfactorily. The analysis of water circulation and mixing in the Muggia bay has been carried out under three typical breeze conditions. Water circulation has been shown to behave as in typical semi-closed basins, with an upper layer moving along the wind direction (apart from the anti-cyclonic veering associated with the Coriolis force) and a bottom layer, thicker and slower than the upper one, moving along the opposite direction. The study has shown that water vertical mixing in the bay is inhibited by a large level of stable stratification, mainly associated with vertical variation in salinity and, to a minor extent, with temperature variation along the water column. More intense mixing, quantified by sub-critical values of the gradient Richardson number, is present in near-coastal regions where upwelling/downwelling phenomena occur. The analysis of instantaneous fields has detected the presence of large cross-sectional eddies spanning the whole water column and contributing to vertical mixing, associated with the presence of sub-surface horizontal turbulent structures. Analysis of water renewal within the bay shows that, under the typical breeze regimes considered in the study, the residence time of water in the bay is of the order of a few days. Finally, vertical eddy viscosity has been calculated and shown to vary by a couple of orders of magnitude along the water column, with larger values near the bottom surface where density stratification is smaller.
NASA Technical Reports Server (NTRS)
Boulet, C.; Ma, Q.
2016-01-01
Line mixing effects have been calculated in the ν1 parallel band of self-broadened NH3. The theoretical approach is an extension of a semi-classical model to symmetric-top molecules with inversion symmetry developed in the companion paper [Q. Ma and C. Boulet, J. Chem. Phys. 144, 224303 (2016)]. This model takes into account line coupling effects and hence enables the calculation of the entire relaxation matrix. A detailed analysis of the various coupling mechanisms is carried out for Q and R inversion doublets. The model has been applied to the calculation of the shape of the Q branch and of some R manifolds for which an obvious signature of line mixing effects has been experimentally demonstrated. Comparisons with measurements show that the present formalism leads to an accurate prediction of the available experimental line shapes. Discrepancies between the experimental and theoretical sets of first order mixing parameters are discussed as well as some extensions of both theory and experiment.
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of the relationships between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions would simply be treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity of applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
An ideal-typical model for comparing interprofessional relations and skill mix in health care.
Schönfelder, Walter; Nilsen, Elin Anita
2016-11-08
Comparisons of health system performance, including the regulations of interprofessional relations and the skill mix between health professions, are challenging. National strategies for regulating interprofessional relations vary widely across European health care systems. Unambiguously defined and generally accepted performance indicators have to remain generic, with limited power for recognizing the organizational structures regulating interprofessional relations in different health systems. A coherent framework for in-depth comparisons of different models for organizing interprofessional relations and the skill mix between professional groups is currently not available. This study aims to develop an ideal-typical framework for categorizing skill mix and interprofessional relations in health care, and to assess the potential impact of different ideal types on care coordination and integrated service delivery. A document analysis of the Health Systems in Transition (HiT) reports published by the European Observatory on Health Systems and Policies was conducted. The HiT reports for 31 European health systems were analyzed using a qualitative content analysis and a process of meaning condensation. The educational tracks available to nurses have an impact on the professional autonomy of nurses, the hierarchy between professional groups, the emphasis given to negotiating skill mix, interdisciplinary teamwork and the extent of cooperation across the health and social service interface. Based on the results of the document analysis, three ideal types for regulating interprofessional relations and skill mix in health care are delimited. For each ideal type, outcomes on service coordination and holistic service delivery are described. Comparisons of interprofessional relations are necessary for proactive health human resource policies. The proposed ideal-typical framework provides the means for in-depth comparisons of interprofessional relations in the health care workforce beyond what is possible with directly comparable but generic performance indicators.
NASA Astrophysics Data System (ADS)
Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.
2018-03-01
This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boundary conditions derived from hemispheric or global-scale models. The Community Multiscale Air Quality (CMAQ) model simulations supporting this analysis were performed over the continental US for the year 2010 within the context of the Air Quality Model Evaluation International Initiative (AQMEII) and Task Force on Hemispheric Transport of Air Pollution (TF-HTAP) activities. CMAQ process analysis (PA) results highlight the dominant role of horizontal and vertical advection on the ozone burden in the mid-to-upper troposphere and lower stratosphere. Vertical mixing, including mixing by convective clouds, couples fluctuations in free-tropospheric ozone to ozone in lower layers. Hypothetical bounding scenarios were performed to quantify the effects of emissions, boundary conditions, and ozone dry deposition on the simulated ozone burden. Analysis of these simulations confirms that the characterization of ozone outside the regional-scale modeling domain can have a profound impact on simulated regional-scale ozone. This was further investigated by using data from four hemispheric or global modeling systems (Chemistry - Integrated Forecasting Model (C-IFS), CMAQ extended for hemispheric applications (H-CMAQ), the Goddard Earth Observing System model coupled to chemistry (GEOS-Chem), and AM3) to derive alternate boundary conditions for the regional-scale CMAQ simulations. The regional-scale CMAQ simulations using these four different boundary conditions showed that the largest ozone abundance in the upper layers was simulated when using boundary conditions from GEOS-Chem, followed by the simulations using C-IFS, AM3, and H-CMAQ boundary conditions, consistent with the analysis of the ozone fields from the global models along the CMAQ boundaries. Using boundary conditions from AM3 yielded higher springtime ozone columns burdens in the middle and lower troposphere compared to boundary conditions from the other models. For surface ozone, the differences between the AM3-driven CMAQ simulations and the CMAQ simulations driven by other large-scale models are especially pronounced during spring and winter where they can reach more than 10 ppb for seasonal mean ozone mixing ratios and as much as 15 ppb for domain-averaged daily maximum 8 h average ozone on individual days. In contrast, the differences between the C-IFS-, GEOS-Chem-, and H-CMAQ-driven regional-scale CMAQ simulations are typically smaller. Comparing simulated surface ozone mixing ratios to observations and computing seasonal and regional model performance statistics revealed that boundary conditions can have a substantial impact on model performance. Further analysis showed that boundary conditions can affect model performance across the entire range of the observed distribution, although the impacts tend to be lower during summer and for the very highest observed percentiles. The results are discussed in the context of future model development and analysis opportunities.
Ware, John; Kort, Eric A; DeCola, Phil; Duren, Riley
2016-08-27
Atmospheric observations of greenhouse gases provide essential information on sources and sinks of these key atmospheric constituents. To quantify fluxes from atmospheric observations, representation of transport, especially vertical mixing, is a necessity and often a source of error. We report on remotely sensed profiles of vertical aerosol distribution taken over a 2 year period in Pasadena, California. Using an automated analysis system, we estimate daytime mixing layer depth, achieving high confidence in the afternoon maximum on 51% of days with profiles from a Sigma Space Mini Micropulse LiDAR (MiniMPL) and on 36% of days with a Vaisala CL51 ceilometer. We note that considering ceilometer data on a logarithmic scale, a standard method, introduces an offset in mixing height retrievals. The mean afternoon maximum mixing height is 770 m above ground level in summer and 670 m in winter, with significant day-to-day variance (within-season σ = 220 m ≈ 30%). Taking advantage of the MiniMPL's portability, we demonstrate the feasibility of measuring the detailed horizontal structure of the mixing layer by automobile. We compare our observations to planetary boundary layer (PBL) heights from sonde launches, the North American Regional Reanalysis (NARR), and a custom Weather Research and Forecasting (WRF) model developed for greenhouse gas (GHG) monitoring in Los Angeles. NARR and WRF PBL heights at Pasadena are both systematically higher than measured, NARR by 2.5 times; these biases will cause proportional errors in GHG flux estimates using modeled transport. We discuss how sustained lidar observations can be used to reduce flux inversion error by selecting suitable analysis periods, calibrating models, or characterizing bias for correction in post processing.
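A common automated estimate of mixing layer depth from an aerosol lidar profile is the height of the strongest negative vertical gradient in range-corrected backscatter; this gradient-method sketch is a generic illustration, not the authors' analysis system, and the synthetic profile is made up.

```python
import numpy as np

def mixing_height(altitude, backscatter, z_min=100.0):
    """Gradient method: mixing height at the strongest backscatter decrease.

    altitude    : (n,) heights above ground (m), ascending
    backscatter : (n,) range-corrected aerosol backscatter profile
    z_min       : ignore the lowest range gates (overlap/noise region)
    """
    valid = altitude >= z_min
    grad = np.gradient(backscatter[valid], altitude[valid])
    return altitude[valid][np.argmin(grad)]

# Synthetic profile: well-mixed aerosol layer capped by a sharp decrease near 750 m
z = np.arange(0.0, 2000.0, 15.0)
profile = 1.0 / (1.0 + np.exp((z - 750.0) / 50.0)) + 0.05
print(f"estimated mixing height: {mixing_height(z, profile):.0f} m")
```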
Lee, Jaeyoung; Yasmin, Shamsunnahar; Eluru, Naveen; Abdel-Aty, Mohamed; Cai, Qing
2018-02-01
In traffic safety literature, crash frequency variables are analyzed using univariate count models or multivariate count models. In this study, we propose an alternative approach to modeling multiple crash frequency dependent variables. Instead of modeling the frequency of crashes, we propose to analyze the proportion of crashes by vehicle type. A flexible mixed multinomial logit fractional split model is employed for analyzing the proportions of crashes by vehicle type at the macro-level. In this model, the proportion allocated to an alternative is probabilistically determined based on the alternative propensity as well as the propensity of all other alternatives. Thus, exogenous variables directly affect all alternatives. The approach is well suited to accommodate a large number of alternatives without a sizable increase in computational burden. The model was estimated using crash data at the Traffic Analysis Zone (TAZ) level from Florida. The modeling results clearly illustrate the applicability of the proposed framework for crash proportion analysis. Further, the Excess Predicted Proportion (EPP), a screening performance measure analogous to the Highway Safety Manual (HSM) Excess Predicted Average Crash Frequency, is proposed for hot zone identification. Using EPP, a statewide screening exercise by the various vehicle types considered in our analysis was undertaken. The screening results revealed that the spatial pattern of hot zones is substantially different across the various vehicle types considered. Copyright © 2017 Elsevier Ltd. All rights reserved.
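A bare-bones sketch of the fractional split idea: proportions across vehicle-type alternatives are modeled as a softmax of zone covariates and estimated by maximizing the multinomial quasi-likelihood. The covariates are placeholders, and the fixed-coefficient (unmixed) specification is a simplification relative to the paper's mixed model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_zones, n_cov, n_alt = 200, 3, 4   # e.g. 4 vehicle types per traffic analysis zone

X = rng.normal(size=(n_zones, n_cov))
B_true = rng.normal(size=(n_cov, n_alt - 1))  # last alternative is the base

def softmax(U):
    U = np.column_stack([U, np.zeros(len(U))])  # utility of base fixed at 0
    e = np.exp(U - U.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic observed proportions (rows sum to one)
P_obs = softmax(X @ B_true + rng.normal(0, 0.3, size=(n_zones, n_alt - 1)))

def neg_quasi_loglik(beta):
    P = softmax(X @ beta.reshape(n_cov, n_alt - 1))
    return -np.sum(P_obs * np.log(P))  # multinomial quasi-likelihood

fit = minimize(neg_quasi_loglik, np.zeros(n_cov * (n_alt - 1)), method="BFGS")
print(np.round(fit.x.reshape(n_cov, n_alt - 1), 2))  # approximately recovers B_true
```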
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Ill-posedness in modeling mixed sediment river morphodynamics
NASA Astrophysics Data System (ADS)
Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid
2018-04-01
In this paper we analyze the ill-posedness of the Hirano active layer model used in mixed sediment river morphodynamics. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than was found in previous analyses, not only comprising cases of bed degradation into a substrate finer than the active layer but also aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment, for which we show that ill-posedness occurs in a wider range of conditions than in the active layer model.
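The ill-posedness check described above reduces, for a quasi-linear system of the form dQ/dt + A(Q) dQ/dx = 0, to inspecting the eigenvalues of the system matrix: complex eigenvalues mean short-wave perturbations grow without bound and the model is ill-posed. The matrices below are arbitrary illustrations, not the Hirano-model Jacobian.

```python
import numpy as np

def is_well_posed(A, tol=1e-10):
    """A quasi-linear system dQ/dt + A dQ/dx = 0 is well-posed (hyperbolic)
    only if all eigenvalues of A are real; complex eigenvalues imply
    unbounded growth of short-wave perturbations (ill-posedness)."""
    eigvals = np.linalg.eigvals(A)
    return bool(np.all(np.abs(eigvals.imag) < tol)), eigvals

# Illustrative 3x3 system matrices (not the actual active layer Jacobian)
A_hyperbolic = np.array([[2.0, 1.0, 0.0],
                         [1.0, 1.0, 0.5],
                         [0.0, 0.5, 0.5]])   # symmetric -> real eigenvalues
A_elliptic = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])    # rotation block -> complex pair

for A in (A_hyperbolic, A_elliptic):
    ok, ev = is_well_posed(A)
    print("well-posed" if ok else "ill-posed", np.round(ev, 3))
```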
A Monte-Carlo Analysis of Organic Volatility with Aerosol Microphysics
NASA Astrophysics Data System (ADS)
Gao, Chloe; Tsigaridis, Kostas; Bauer, Susanne E.
2017-04-01
A newly developed box model, MATRIX-VBS, includes the volatility-basis set (VBS) framework in the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations and aerosol mixing state. The new scheme advances the representation of organic aerosols in models by improving on the traditional, simplistic treatment of organic aerosols as non-volatile and with a fixed size distribution. Further development includes adding the condensation of organics on coarse-mode aerosols (dust and sea salt), thus making all organics in the system semi-volatile. To test and simplify the model, a Monte-Carlo analysis is performed to pinpoint which processes affect organics the most under varied chemical and meteorological conditions. Since the model's parameterizations have the ability to capture a very wide range of conditions, all possible scenarios on Earth across the whole parameter space, including temperature, humidity, location, emissions and oxidant levels, are examined. The Monte-Carlo simulations provide quantitative information on the sensitivity of the newly developed model and help us understand how organics affect the size distribution, mixing state and volatility distribution under varying meteorological conditions and pollution levels. In addition, these simulations give information on which parameters play a critical role in the aerosol distribution and evolution in the atmosphere and which do not, which will facilitate the simplification of the box model, an important step in its implementation in the global model GISS ModelE as a module.
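The Monte-Carlo screening described above can be sketched generically: draw input factors from their plausible ranges, run the model for each draw, and rank factors by the strength of their monotone association with the output. The toy response function below stands in for MATRIX-VBS, and the factor names and ranges are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_runs = 2000

# Uniform draws over plausible ranges for each input factor
inputs = {
    "temperature_K": rng.uniform(230.0, 310.0, n_runs),
    "rel_humidity":  rng.uniform(0.1, 0.95, n_runs),
    "voc_emission":  rng.uniform(0.1, 10.0, n_runs),
    "oxidant_level": rng.uniform(0.1, 5.0, n_runs),
}

# Stand-in for the box model: organic aerosol mass from a toy response surface
T, rh, voc, ox = (inputs[k] for k in inputs)
output = (voc * ox / (1.0 + np.exp((T - 285.0) / 10.0))
          + 0.1 * rh + rng.normal(0, 0.05, n_runs))

# Rank inputs by absolute Spearman correlation with the output
for name, values in inputs.items():
    rho = spearmanr(values, output)[0]
    print(f"{name:15s} |rho| = {abs(rho):.2f}")
```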
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.
2017-12-01
The efficiency of many hydrogeological applications such as reactive-transport and contaminant remediation depends strongly on the macroscopic mixing occurring in the aquifer. In the case of remediation activities, it is fundamental to enhance and control mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for the mixing process is not well studied. This is partially because, to understand and quantify mixing, one needs to perform multiple runs of high-fidelity numerical simulations for various subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors. As a result, they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need to develop computationally efficient models to accurately predict the desired QoIs for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning. These approaches can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables; however, the method by which ROMs are constructed is different. Here, we present a physics-informed ML framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, SVMs are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield. The dependence of the scaling-law parameters on model inputs is evaluated using cluster analysis. We demonstrate application of the developed method for model analyses of reactive-transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable to analyses of alternative site remediation scenarios.
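A condensed sketch of the two-stage workflow described above, using scikit-learn stand-ins: screen input importance with a random forest, an F-test, and mutual information, then fit an SVM-based reduced-order model on the retained inputs. The synthetic "high-fidelity" outputs replace actual reactive-transport simulations, and the input names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import f_regression, mutual_info_regression
from sklearn.svm import SVR

rng = np.random.default_rng(6)
n_sim, n_inputs = 500, 6   # 500 stand-in high-fidelity runs, 6 model inputs

X = rng.uniform(0, 1, size=(n_sim, n_inputs))   # e.g. pumping rate, anisotropy, ...
y = np.sin(3 * X[:, 0]) + 2 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n_sim)  # degree of mixing

# Step 1: importance screening with three complementary measures
rf_imp = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y).feature_importances_
f_stat, _ = f_regression(X, y)
mi = mutual_info_regression(X, y, random_state=0)
print("RF importances:", np.round(rf_imp, 2))
print("F-statistics  :", np.round(f_stat, 1))
print("Mutual info   :", np.round(mi, 2))

# Step 2: SVM-based reduced-order model on the two most important inputs
keep = np.argsort(rf_imp)[-2:]
rom = SVR(C=10.0, epsilon=0.01).fit(X[:, keep], y)
print("kept inputs:", keep, " ROM R^2:", round(rom.score(X[:, keep], y), 3))
```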
Linear stability analysis of particle-laden hypopycnal plumes
NASA Astrophysics Data System (ADS)
Farenzena, Bruno Avila; Silvestrini, Jorge Hugo
2017-12-01
Gravity-driven riverine outflows are responsible for carrying sediments to coastal waters. The turbulent mixing in these flows is associated with shear and gravitational instabilities such as Kelvin-Helmholtz, Holmboe, and Rayleigh-Taylor. Results from a temporal linear stability analysis of a two-layer stratified flow are presented, investigating the influence of particle settling and mixing-region thickness on flow stability in the presence of ambient shear. The particles are considered suspended in the transport fluid, and their sedimentation is modeled with a constant settling velocity. Three scenarios regarding the mixing-region thickness were identified: a poorly mixed environment, a strongly mixed environment, and an intermediate scenario. In the first scenario, the Kelvin-Helmholtz and settling convection modes are the two fastest-growing modes, depending on the particle settling velocity and the total Richardson number. The second scenario presents a modified Rayleigh-Taylor instability, which is the dominant mode. The third case can have Kelvin-Helmholtz, settling convection, or modified Rayleigh-Taylor modes as the fastest-growing mode, depending on the combination of parameters.
Scalar mixing and strain dynamics methodologies for PIV/LIF measurements of vortex ring flows
NASA Astrophysics Data System (ADS)
Bouremel, Yann; Ducci, Andrea
2017-01-01
Fluid mixing operations are central to nearly all chemical, petrochemical, and pharmaceutical industries, whether for biphasic blending in polymerisation processes, cell suspension in biopharmaceutical production, or fractionation of complex oil mixtures. This work aims at providing a fundamental understanding of the mixing and stretching dynamics occurring in a reactor in the presence of a vortical structure; the vortex ring was selected as a flow paradigm of the vortices commonly encountered in stirred and shaken reactors under laminar flow conditions. High-resolution laser-induced fluorescence and particle image velocimetry measurements were carried out to fully resolve the flow dissipative scales and provide a complete data set for assessing macro- and micro-mixing characteristics. The analysis builds upon the Lamb-Oseen vortex work of Meunier and Villermaux ["How vortices mix," J. Fluid Mech. 476, 213-222 (2003)] and the engulfment model of Baldyga and Bourne ["Simplification of micromixing calculations. I. Derivation and application of new model," Chem. Eng. J. 42, 83-92 (1989); "Simplification of micromixing calculations. II. New applications," ibid. 42, 93-101 (1989)], which are valid for diffusion-free conditions, and a comparison is made between three methodologies for assessing mixing characteristics. The first method is commonly used in macro-mixing studies and is based on a control-area analysis estimating the variation in time of the concentration standard deviation, while the other two are formulated to provide insight into local segregation dynamics, using either an iso-concentration approach or an iso-concentration-gradient approach to take diffusion into account.
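The first, control-area methodology reduces to tracking the decay of the concentration standard deviation toward the fully mixed state. A minimal numpy sketch of such a mixedness index follows; the field and the normalization are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def mixing_index(c, c_inf):
    """Control-area mixedness: 1 when fully mixed (sigma -> 0),
    0 in the fully segregated state (sigma = sigma_0).
    c is a 2-D concentration field, c_inf the fully mixed value."""
    sigma = np.std(c)
    sigma0 = np.sqrt(c_inf * (1.0 - c_inf))   # segregated binary field
    return 1.0 - sigma / sigma0

# Example: a half-dyed control area relaxing toward c_inf = 0.5.
c0 = np.zeros((64, 64)); c0[:, :32] = 1.0         # fully segregated
print(mixing_index(c0, 0.5))                      # ~0.0
print(mixing_index(np.full((64, 64), 0.5), 0.5))  # 1.0, fully mixed
```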
Robust and Sensitive Analysis of Mouse Knockout Phenotypes
Karp, Natasha A.; Melvin, David; Mott, Richard F.
2012-01-01
A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high-throughput phenotyping programs, where operational issues additionally lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student's t-test or a 2-way ANOVA, in these situations gives flawed results and should be avoided. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-Ray Absorptiometry data for the analysis of mouse knockout data, and compare them with a reference-range approach. We show that mixed-model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed-model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype; accounting for it will enhance the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models in in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches, which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained. PMID:23300663
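To make the modelling approach concrete, here is a hedged Python sketch (statsmodels, rather than the R tooling typically used in such pipelines) of a mixed model with a random intercept per batch and body weight as a fixed covariate; the data and column names are synthetic, not from the Sanger project.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "batch": rng.integers(0, 8, n).astype(str),   # collection day
    "genotype": rng.choice(["WT", "KO"], n),
    "weight": rng.normal(25, 3, n),               # body weight covariate
})
df["bmd"] = (0.02 * df.weight + 0.1 * (df.genotype == "KO")
             + df.batch.astype(int) * 0.01 + rng.normal(0, 0.05, n))

# Random intercept per batch separates day-to-day variation from the
# genotype effect; weight enters as a fixed covariate.
model = smf.mixedlm("bmd ~ genotype + weight", df, groups=df["batch"])
fit = model.fit()
print(fit.summary())
```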
Regulation mechanisms in mixed and pure culture microbial fermentation.
Hoelzle, Robert D; Virdis, Bernardino; Batstone, Damien J
2014-11-01
Mixed-culture fermentation is a key central process for enabling next-generation biofuel and biocommodity production, owing to economic and process advantages over pure cultures. However, a key limitation to the application of mixed-culture fermentation is predicting the culture's product response, which is tied to metabolic regulation mechanisms; this is also a limitation in pure-culture bacterial fermentation. This review evaluates recent literature on both pure- and mixed-culture studies, with a focus on understanding how regulation and signaling mechanisms interact with metabolic routes and activity. In particular, we focus on how microorganisms balance electron sinking while maximizing catabolic energy generation. Analysis of these mechanisms and their effect on metabolic dynamics is absent from current models of mixed-culture fermentation. This limits process prediction and control, which in turn limits industrial application of mixed-culture fermentation. A key mechanism appears to be the role of internal electron-mediating cofactors and the related regulatory signaling. This may determine whether electrons are directed towards hydrogen or reduced organics as end products, and may form the basis for future mechanistic models. © 2014 Wiley Periodicals, Inc.
Determining the impact of cell mixing on signaling during development.
Uriu, Koichiro; Morelli, Luis G
2017-06-01
Cell movement and intercellular signaling occur simultaneously to organize morphogenesis during embryonic development. Cell movement can cause relative positional changes between neighboring cells. When intercellular signals are local, such cell mixing may affect signaling, changing the flow of information in developing tissues. Little is known about the effect of cell mixing on intercellular signaling in collective cellular behaviors, and methods to quantify its impact are lacking. Here we discuss how to determine the impact of cell mixing on cell signaling, drawing an example from vertebrate embryogenesis: the segmentation clock, a collective rhythm of interacting genetic oscillators. We argue that comparing cell mixing and signaling timescales is key to determining the influence of mixing. A signaling timescale can be estimated by combining theoretical models with cell-signaling perturbation experiments. A mixing timescale can be obtained by analysis of cell trajectories from live imaging. After comparing cell movement analyses in different experimental settings, we highlight challenges in quantifying cell mixing from embryonic timelapse experiments, especially a reference-frame problem due to embryonic motions and shape changes. We propose statistical observables characterizing cell mixing that do not depend on the choice of reference frame. Finally, we consider situations in which both cell mixing and signaling involve multiple timescales, precluding a direct comparison between single characteristic timescales. In such situations, physical models based on observables of cell mixing and signaling can simulate the flow of information in tissues and reveal the impact of observed cell mixing on signaling. © 2017 Japanese Society of Developmental Biologists.
A random distribution reacting mixing layer model
NASA Technical Reports Server (NTRS)
Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.
1994-01-01
A methodology for simulating molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and the results are compared with experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layers present in the facility, given basic assumptions about turbulence properties.
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, yielding an uncertain ensemble that combines non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs). Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) method and a second-order fuzzy interval perturbation FE/SEA (SFIPFE/SEA) method are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining second-order Taylor terms, neglecting all mixed second-order terms. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extremum solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
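The first-order perturbation idea can be illustrated independently of the FE/SEA machinery: bound a response over a parameter box using gradients evaluated at the interval midpoints. The sketch below is a generic numpy illustration under that assumption, not the authors' FFIPFE/SEA implementation; the response function is a toy.

```python
import numpy as np

def first_order_interval(f, x_mid, x_rad, h=1e-6):
    """First-order interval perturbation: bound f over the box
    [x_mid - x_rad, x_mid + x_rad] via sum_i |df/dx_i| * radius_i."""
    f0 = f(x_mid)
    grad = np.array([(f(x_mid + h * e) - f0) / h
                     for e in np.eye(len(x_mid))])
    spread = np.sum(np.abs(grad) * x_rad)
    return f0 - spread, f0 + spread

# Toy response: a frequency-like function of two uncertain parameters.
f = lambda x: np.sqrt(x[0] / x[1])              # sqrt(stiffness / mass)
lo, hi = first_order_interval(f, np.array([4.0, 1.0]), np.array([0.4, 0.05]))
print(lo, hi)   # interval bounds on the response
```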
Investigation of micromixing by acoustically oscillated sharp-edges
Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco
2016-01-01
Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel. PMID:27158292
NASA Technical Reports Server (NTRS)
Dash, S. M.; Wolf, D. E.
1983-01-01
A new computational model, SCIPVIS, has been developed to predict the multiple-cell wave/shock structure in under- or over-expanded turbulent jets. SCIPVIS solves the parabolized Navier-Stokes jet mixing equations using a shock-capturing approach in supersonic regions of the jet and a pressure-split approach in subsonic regions. Turbulence processes are represented by the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive turbulent mixing process occurring behind the disc are handled in a detailed fashion. SCIPVIS presently analyzes jets exhausting into a quiescent or supersonic external stream, for which a single-pass spatial-marching solution can be obtained. The iterative coupling of SCIPVIS with a potential flow solver for the analysis of subsonic/transonic external streams is under development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Xin -Yu; Bhagatwala, Ankit; Chen, Jacqueline H.
The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly proposed shadow position mixing model (SPMM) is examined using a DNS database for a temporally evolving dimethyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates derived by matching the mixture-fraction scalar dissipation rates. Good qualitative agreement is found in the predicted locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional-diffusion iso-contour similarity and global normalized residual levels. It is found that a suitable value for the model constant c, which controls the mixing frequency, can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate for evaluating the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex
Lindsay, Grace W.
2017-01-01
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463
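A schematic of the circuit motif under study (random feedforward weights reshaped by a simple Hebbian rule) can be written in a few lines of numpy; this is an illustrative toy, not the paper's model, and all sizes, rates, and input statistics are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_out = 40, 30
prototypes = rng.normal(size=(4, n_in))            # recurring task inputs
W = rng.normal(scale=0.1, size=(n_out, n_in))      # random feedforward weights

eta = 0.05
for _ in range(500):
    x = prototypes[rng.integers(4)] + 0.3 * rng.normal(size=n_in)
    r = np.maximum(0.0, W @ x)                     # rectified response (post)
    W += eta * np.outer(r, x)                      # Hebbian update: post x pre
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalize to keep bounded

# Weights drift from random toward the recurring input structure, the
# mechanism proposed to raise mixed selectivity above the random level.
align = np.abs(prototypes @ W.T).mean()
print(round(float(align), 3))
```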
NASA Astrophysics Data System (ADS)
Li, Tanda; Bedding, Timothy R.; Huber, Daniel; Ball, Warrick H.; Stello, Dennis; Murphy, Simon J.; Bland-Hawthorn, Joss
2018-03-01
Stellar models rely on a number of free parameters. High-quality observations of eclipsing binary stars observed by Kepler offer a great opportunity to calibrate model parameters for evolved stars. Our study focuses on six Kepler red giants with the goal of calibrating the mixing-length parameter of convection as well as the asteroseismic surface term in models. We introduce a new method to improve the identification of oscillation modes that exploits theoretical frequencies to guide the mode identification (`peak-bagging') stage of the data analysis. Our results indicate that the convective mixing-length parameter (α) is ≈14 per cent larger for red giants than for the Sun, in agreement with recent results from modelling the APOGEE stars. We found that the asteroseismic surface term (i.e. the frequency offset between the observed and predicted modes) correlates with stellar parameters (Teff, log g) and the mixing-length parameter. This frequency offset generally decreases as giants evolve. The two coefficients a-1 and a3 for the inverse and cubic terms that have been used to describe the surface term correction are found to correlate linearly. The effect of the surface term is also seen in the p-g mixed modes; however, established methods for correcting the effect are not able to properly correct the g-dominated modes in late evolved stars.
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. In particular, there is increasing interest in methods for comparing different diagnostic tests to a common gold standard. Restricting attention to the case of two diagnostic tests, the parameters of interest in these meta-analyses are the differences in sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests, while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model with an example in which two screening methods for the diagnosis of type 2 diabetes are compared.
Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model
Hopkins, John B.; Ferguson, Jake M.
2012-01-01
Using stable isotope mixing models (SIMMs) to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals, each with its benefits and limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we review commonly used SIMMs and describe a comprehensive SIMM, IsotopeR, that includes all features commonly used in SIMM analysis plus two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable in paleontology, archaeology, and forensic studies, as well as for estimating pollution inputs. PMID:22235246
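Underlying any SIMM is a linear mixing system: with two isotopes and three sources, the diet proportions solve a 3×3 linear system (one row per isotope plus the unit-sum constraint). A minimal numpy sketch follows; the signatures are invented for illustration, and none of IsotopeR's Bayesian machinery (error structure, concentration dependence) is included.

```python
import numpy as np

# Source signatures (rows: d13C, d15N, unit-sum constraint).
# Values are illustrative, not from the Yosemite data set.
sources = np.array([[-26.0, -21.0, -12.0],   # d13C of three food sources
                    [  3.0,   6.0,   9.0],   # d15N
                    [  1.0,   1.0,   1.0]])  # proportions sum to 1
mixture = np.array([-21.7, 5.1, 1.0])        # consumer tissue signature

p = np.linalg.solve(sources, mixture)        # diet proportions p1..p3
print(p.round(3), p.sum())                   # -> [0.5 0.3 0.2], sums to 1
```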
Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.
2008-01-01
A shear-loaded, stringer-reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web, and noodle, as well as the panel skin near the delamination front, were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.
Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.
2007-01-01
A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.
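As an illustration of how mixed-mode strain energy release rates become a failure index, the sketch below applies the Benzeggagh-Kenane mixed-mode criterion, a common choice for graphite/epoxy; the papers above do not state that this exact criterion or these constants were used, so treat both as assumptions.

```python
import numpy as np

def failure_index(g1, g2, g1c=0.24, g2c=0.74, eta=2.0):
    """Failure index from mixed-mode strain energy release rates via the
    Benzeggagh-Kenane criterion: Gc = GIc + (GIIc - GIc)*(GII/GT)**eta.
    Units kJ/m^2; the constants are illustrative, not the papers'."""
    gt = g1 + g2
    gc = g1c + (g2c - g1c) * (g2 / gt) ** eta
    return gt / gc          # >= 1 predicts delamination growth

# Mode mix varies point to point across the stringer-foot width.
g1 = np.array([0.05, 0.12, 0.20])   # mode I contributions
g2 = np.array([0.10, 0.15, 0.05])   # mode II contributions
print(failure_index(g1, g2).round(2))
```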
ERIC Educational Resources Information Center
Ho, Hsuan-Fu; Hung, Chia-Chi
2008-01-01
Purpose: The purpose of this paper is to examine how a graduate institute at National Chiayi University (NCYU), by using a model that integrates analytic hierarchy process, cluster analysis and correspondence analysis, can develop effective marketing strategies. Design/methodology/approach: This is primarily a quantitative study aimed at…
Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.
Covarrubias-Pazaran, Giovanny
2016-01-01
Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next-generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance, and epistatic effects infeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: Average Information (AI), Expectation-Maximization (EM), and Efficient Mixed Model Association (EMMA). Kernels for calculating the additive, dominance, and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to other software, but the analysis was faster than Bayesian counterparts by hours to days. In addition, the ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by combining some of the most efficient algorithms to fit models in an accessible environment such as R.
Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar
2015-06-01
Current toxicity protocols relate measures of systemic exposure (i.e., AUC, Cmax) obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure; moreover, it prevents the assessment of variability. The objectives of the current investigation were therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling to the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Simulation scenarios mimicking toxicology protocols in rodents were evaluated. To account for differences in pharmacokinetic properties, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration-versus-time curve (AUC), peak concentration (Cmax), and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of the parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e., AUC, Cmax, and TAT), irrespective of group or treatment duration, compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision, and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
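For reference, the three non-compartmental exposure summaries compared in the study can be computed in a few lines of numpy; the concentration-time profile below is hypothetical.

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # sampling times (h)
c = np.array([0.0, 3.2, 4.1, 3.0, 1.6, 0.4, 0.1])    # conc. (mg/L), illustrative

auc = np.trapz(c, t)                  # area under the curve (mg*h/L)
cmax = c.max()                        # peak concentration
threshold = 1.0
# Time above threshold: evaluating on a fine interpolated grid is a simple
# (approximate) way to handle crossings between sampling times.
tf = np.linspace(t[0], t[-1], 10_001)
tat = np.trapz((np.interp(tf, t, c) > threshold).astype(float), tf)
print(f"AUC={auc:.2f}, Cmax={cmax:.1f}, TAT={tat:.2f} h")
```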
Bursting patterns and mixed-mode oscillations in reduced Purkinje model
NASA Astrophysics Data System (ADS)
Zhan, Feibiao; Liu, Shenquan; Wang, Jing; Lu, Bo
2018-02-01
Bursting discharge is a ubiquitous behavior in neurons, and the rich variety of bursting patterns carries much physiological information. There is a close potential link between bifurcation phenomena and the number of spikes per burst, as well as mixed-mode oscillations (MMOs). In this paper, we explore the dynamical behavior of a reduced Purkinje cell model and the existence of MMOs. First, we adopt codimension-one bifurcation analysis to illustrate the generation mechanism of bursting in the reduced Purkinje cell model via slow-fast dynamics, and we demonstrate the process of spike-adding. Furthermore, we compute the first Lyapunov coefficient of the Hopf bifurcation to determine whether it is subcritical or supercritical, and we depict diagrams of inter-spike intervals (ISIs) to examine chaos. Moreover, the bifurcation diagram near the cusp point is obtained by codimension-two bifurcation analysis of the fast subsystem. Finally, we discuss mixed-mode oscillations and investigate them further using a characteristic index, the Devil's staircase.
NASA Astrophysics Data System (ADS)
Khosravi Parsa, Mohsen; Hormozi, Faramarz
2014-06-01
In the present work, a passive micromixer with sinusoidal side walls, a convergent-divergent cross section, and a T-shaped entrance was fabricated and modeled. The main aim of this modeling was to study the Dean and separation vortices created inside sinusoidal microchannels with a convergent-divergent cross section. To fabricate the microchannels, CO2 laser micromachining was utilized, and the fluid mixing pattern was observed using a digital microscope imaging system. Computational fluid dynamics with the finite element method was also applied to solve the Navier-Stokes equations and the convection-diffusion equation at inlet Reynolds numbers of 0.2-75. The numerical results were in reasonable agreement with the experimental data. According to previous studies, the phase shift and wavelength of the side walls are important parameters in designing sinusoidal microchannels. Increasing the phase shift between the side walls makes the cross section convergent-divergent. The results also show that at inlet Reynolds numbers below 20, molecular diffusion is the dominant mixing factor and the mixing index is nearly identical in all designs. For higher inlet Reynolds numbers (>20), secondary flow is the main mixing factor. Notably, the mixing index depends strongly on the phase shift (ϕ) and the wavelength of the side walls (λ), such that the best mixing is observed at ϕ = 3π/4 and a wavelength-to-amplitude ratio of 3.3. Likewise, the maximum pressure drop occurs at ϕ = π. Therefore, a sinusoidal microchannel with a phase shift between π/2 and 3π/4 is the best choice for biological and chemical analysis, for which a mixing index higher than 90% and a pressure drop of less than 12 kPa are reported.
Gómez, Javier B; Gimeno, María J; Auqué, Luis F; Acero, Patricia
2014-01-15
This paper presents mixing-modelling results for the hydrogeochemical characterisation of groundwaters in the Laxemar area (Sweden). This area is one of the two sites investigated, under the financial patronage of the Swedish Nuclear Fuel and Waste Management Co. (SKB), as possible candidates for hosting the proposed repository for the long-term storage of spent nuclear fuel. Classical geochemical modelling, interpreted in the light of the palaeohydrogeological history of the system, has shown that the driving process in the geochemical evolution of this groundwater system is mixing between four end-member waters: a deep and old saline water, a glacial meltwater, an old marine water, and a meteoric water. In this paper we focus on mixing and its effects on the final chemical composition of the groundwaters, using a comprehensive methodology that combines principal component analysis with mass-balance calculations. This methodology allows us to test several combinations of end-member waters and several combinations of compositional variables in order to find optimal solutions in terms of mixing proportions. We have applied this methodology to a dataset of 287 groundwater samples from the Laxemar area collected and analysed by SKB. The best model found uses four conservative elements (Cl, Br, oxygen-18 and deuterium) and computes mixing proportions with respect to three end-member waters (saline, glacial and meteoric). Once the first-order effect of mixing has been taken into account, water-rock interaction can be used to explain the remaining variability. In this way, the chemistry of each water sample can be obtained by using the mixing proportions for the conservative elements, which are affected only by mixing, or by combining the mixing proportions and the chemical reactions for the non-conservative elements in the system, establishing the basis for predictive calculations. © 2013 Elsevier B.V. All rights reserved.
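The mass-balance step can be sketched as a constrained least-squares problem: given conservative-tracer compositions of the end-members, find non-negative mixing proportions that sum to one. The numpy/scipy sketch below uses invented end-member values, not the Laxemar compositions.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: saline, glacial, meteoric end-members; rows: Cl, Br, d18O, d2H.
# Numbers are illustrative, not the Laxemar end-member compositions.
E = np.array([[ 6500.0,    0.5,    5.0],
              [   40.0,  0.005,   0.02],
              [  -10.0,  -21.0,  -11.0],
              [  -70.0, -158.0,  -80.0]])
sample = np.array([1302.7, 8.0, -13.8, -101.4])

# Append the closure condition sum(p) = 1 with a large weight, then solve
# with non-negative least squares so the proportions stay physical.
w = 1e4
A = np.vstack([E, w * np.ones(3)])
b = np.append(sample, w * 1.0)
p, _ = nnls(A, b)
print(p.round(3))    # mixing proportions of the three end-members
```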
The value of a statistical life: a meta-analysis with a mixed effects regression model.
Bellavance, François; Dionne, Georges; Lebeau, Martin
2009-03-01
The value of a statistical life (VSL) is a very controversial topic, but one which is essential to the optimization of governmental decisions. We see a great variability in the values obtained from different studies. The source of this variability needs to be understood, in order to offer public decision-makers better guidance in choosing a value and to set clearer guidelines for future research on the topic. This article presents a meta-analysis based on 39 observations obtained from 37 studies (from nine different countries) which all use a hedonic wage method to calculate the VSL. Our meta-analysis is innovative in that it is the first to use the mixed effects regression model [Raudenbush, S.W., 1994. Random effects models. In: Cooper, H., Hedges, L.V. (Eds.), The Handbook of Research Synthesis. Russel Sage Foundation, New York] to analyze studies on the value of a statistical life. We conclude that the variability found in the values studied stems in large part from differences in methodologies.
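As a simplified illustration of the random-effects idea (without the regression covariates of the actual meta-analysis), here is a DerSimonian-Laird pooling sketch in numpy on synthetic study-level VSL estimates.

```python
import numpy as np

# Synthetic study-level VSL estimates (millions USD) and their variances.
y = np.array([4.1, 6.8, 2.9, 9.5, 5.2, 7.7])
v = np.array([1.2, 2.0, 0.8, 3.1, 1.5, 2.4])

# DerSimonian-Laird between-study variance tau^2.
w = 1.0 / v
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
c = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects pooled mean and its standard error.
w_re = 1.0 / (v + tau2)
mu = np.sum(w_re * y) / w_re.sum()
se = np.sqrt(1.0 / w_re.sum())
print(f"pooled VSL = {mu:.2f} (SE {se:.2f}), tau^2 = {tau2:.2f}")
```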
Comprehensive renormalization group analysis of the littlest seesaw model
NASA Astrophysics Data System (ADS)
Geib, Tanja; King, Stephen F.
2018-04-01
We present a comprehensive renormalization group analysis of the littlest seesaw model, which involves two right-handed neutrinos and a very constrained Dirac neutrino Yukawa coupling matrix. We perform the first χ² analysis of the low-energy masses and mixing angles, in the presence of renormalization group corrections, for various right-handed neutrino masses and mass orderings, both with and without supersymmetry. We find that the atmospheric angle, which is predicted to be near maximal in the absence of renormalization group corrections, may receive significant corrections for some nonsupersymmetric cases, bringing it into close agreement with the current best-fit value in the first octant. By contrast, in the presence of supersymmetry, the renormalization group corrections are relatively small, and the prediction of a near-maximal atmospheric mixing angle is maintained for the studied cases. Forthcoming results from T2K and NOνA will decisively test these models at a precision comparable to the renormalization group corrections we have calculated.
Transient analysis of a pulsed detonation combustor using the numerical propulsion system simulation
NASA Astrophysics Data System (ADS)
Hasler, Anthony Scott
The performance of a hybrid mixed-flow turbofan (with detonation tubes installed in the bypass duct) is investigated in this study and compared with a baseline model of a mixed-flow turbofan with a standard combustion chamber as a duct burner. Previous studies have shown that pulsed detonation combustors have the potential to be more efficient than standard combustors, but they also present new challenges that must be overcome before they can be utilized. The Numerical Propulsion System Simulation (NPSS) is used to perform the analysis, with a pulsed detonation combustor model based on a numerical simulation by Endo, Fujiwara, et al. Three cases are run using both models, representing take-off, subsonic cruise, and supersonic cruise. Because the study involves transient analysis, the pulsed detonation combustor is first run in a rig setup, and its pressure and temperature are then averaged over the cycle to obtain quasi-steady results.
Lidar observation of marine mixed layer
NASA Technical Reports Server (NTRS)
Yamagishi, Susumu; Yamanouchi, Hiroshi; Tsuchiya, Masayuki
1992-01-01
The marine mixed layer is known to play an important role in the transport of pollution exiting ship funnels. The application of a diffusion model depends critically on a reliable estimate of the lid. However, the processes that form lids are not well understood, though considerable progress toward understanding the marine boundary layer has been achieved. This report describes shipboard lidar observations of the marine mixed layer along the course from Ise-wan to Nii-jima, made with the intention of gaining a better understanding of its structure. These observations were made in the summer of 1991. One interesting feature of the observations was the presence of multiple aerosol layers, a feature rarely captured in numerical models. No attempt is yet made to present a systematic analysis of all the data collected. Instead we focus on observations that seem directly relevant to the structure of the mixed layer.
Heat of mixing and morphological stability
NASA Technical Reports Server (NTRS)
Nandapurkar, P.; Poirier, D. R.
1988-01-01
A mathematical model, which incorporates the heat of mixing in the energy balance, has been developed to analyze the morphological stability of a planar solid-liquid interface during the directional solidification of a binary alloy. The stability behavior closely matches that predicted by the analysis of Mullins and Sekerka (1963) at low growth velocities, while deviations in the critical concentration of about 20-25 percent are observed under rapid-solidification conditions for certain systems. The calculations indicate that a positive heat of mixing makes the planar interface more unstable, whereas a negative heat of mixing makes it more stable, in terms of the critical concentration.
NASA Astrophysics Data System (ADS)
Narasimhan, T. N.; White, A. F.; Tokunaga, T.
1986-12-01
At Riverton, Wyoming, low-pH process waters from an abandoned uranium mill tailings pile have been infiltrating into and contaminating the shallow water-table aquifer. The contamination process has been governed by transient infiltration rates, saturated-unsaturated flow, and transient chemical reactions between the many chemical species present in the mixing waters and the sediments. In the first part of this two-part series [White et al., 1984] we presented field data as well as an interpretation based on a static mixing model. As an upper bound, we estimated that 1.7% of the tailings water had mixed with the native groundwater. In the present work we present the results of a numerical investigation of the dynamic mixing process. The model, DYNAMIX (DYNAmic MIXing), couples a chemical speciation algorithm, PHREEQE, with a modified form of the transport algorithm, TRUMP, specifically designed to handle the simultaneous migration of several chemical constituents. The overall problem of simulating the evolution and migration of the contaminant plume was divided into three subproblems that were solved in sequential stages: the infiltration problem, the reactive mixing problem, and the plume-migration problem. The results of the application agree reasonably well with the detailed field data. The methodology developed in the present study demonstrates the feasibility of analyzing the evolution of natural hydrogeochemical systems through a coupled analysis of transient fluid flow and chemical reactions. It seems worthwhile to devote further effort toward improving the physicochemical capabilities of the model as well as enhancing its computational efficiency.
Social Relations and Resident Health in Assisted Living: An Application of the Convoy Model
ERIC Educational Resources Information Center
Perkins, Molly M.; Ball, Mary M.; Kemp, Candace L.; Hollingsworth, Carole
2013-01-01
Purpose: This article, based on analysis of data from a mixed methods study, builds on a growing body of assisted living (AL) research focusing on the link between residents' social relationships and health. A key aim of this analysis, which uses the social convoy model as a conceptual and methodological framework, was to examine the relative…
An, Shengli; Zhang, Yanhong; Chen, Zheng
2012-12-01
This study analyzed binary repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were tested on a sample of binary repeated measurement data in SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.
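A comparable GEE analysis can be run outside SPSS; the sketch below uses Python's statsmodels with an exchangeable working correlation on synthetic binary repeated measurements, as one hedged illustration of the model class discussed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_visits = 60, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_visits),
    "visit": np.tile(np.arange(n_visits), n_subj),
    "treated": np.repeat(rng.integers(0, 2, n_subj), n_visits),
})
logit = -0.5 + 0.8 * df.treated + 0.2 * df.visit
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GEE: population-averaged effects with an exchangeable working correlation.
gee = smf.gee("y ~ treated + visit", groups="subject", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```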
Lognormal Assimilation of Water Vapor in a WRF-GSI Cycled System
NASA Astrophysics Data System (ADS)
Fletcher, S. J.; Kliewer, A.; Jones, A. S.; Forsythe, J. M.
2015-12-01
Recent publications have shown both the viability of detecting a lognormally distributed signal in water vapor mixing ratio and the improved quality of satellite retrievals from a 1DVAR mixed lognormal-Gaussian assimilation scheme relative to a Gaussian-only system. This mixed scheme is incorporated into the Gridpoint Statistical Interpolation (GSI) assimilation system with the goal of improving forecasts from the Weather Research and Forecasting (WRF) Model in a cycled system. Results are presented on the impact of treating water vapor as a lognormal random variable. Included in the analysis are: 1) the evolution of Tropical Storm Chris from 2006, and 2) an analysis of a "Pineapple Express" water vapor event from 2005 in which a lognormal signal had previously been detected.
Studying mixing in Non-Newtonian blue maize flour suspensions using color analysis.
Trujillo-de Santiago, Grissel; Rojas-de Gante, Cecilia; García-Lara, Silverio; Ballescá-Estrada, Adriana; Alvarez, Mario Moisés
2014-01-01
Non-Newtonian fluids occur in many relevant flow and mixing scenarios at the lab and industrial scale. The addition of acid or basic solutions to a non-Newtonian fluid is not an infrequent operation, particularly in biotechnology applications, where the pH of non-Newtonian culture broths is usually regulated using this strategy. We conducted mixing experiments in agitated vessels using non-Newtonian blue maize flour suspensions. Acid or basic pulses were injected to reveal mixing patterns and flow structures and to follow their time evolution. No foreign pH indicator was used, as blue maize flours naturally contain anthocyanins that act as a native, wide-spectrum pH indicator. We describe a novel method to quantitate mixedness and mixing evolution in this system through Dynamic Color Analysis (DCA). Color readings corresponding to different times and locations within the mixing vessel were taken with a digital camera (or a colorimeter) and translated to the CIELab color space. We use distances in the Lab space, a 3D color space, between a particular mixing state and the final mixing point to characterize segregation/mixing in the system. Blue maize suspensions represent an adequate and flexible model for studying mixing (and fluid mechanics in general) in non-Newtonian suspensions using acid/base tracer injections. Simple strategies based on the evaluation of color distances in the CIELab space (or other scales such as HSB) can be adapted to characterize mixedness and mixing evolution in experiments using blue maize suspensions.
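The core quantity in DCA is a distance in CIELab between a mixing state and the final mixed color; a minimal numpy sketch using the CIE76 Euclidean distance follows, with hypothetical readings (the authors may have used a different distance variant).

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELab space."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

# Hypothetical readings at one vessel location over time versus the final
# fully mixed color; a decaying distance indicates mixing progress.
final = np.array([52.0, 8.0, -34.0])          # (L*, a*, b*) fully mixed
series = np.array([[70.0, 25.0, -5.0],        # just after pulse injection
                   [60.0, 15.0, -20.0],
                   [53.0, 9.0, -32.0]])
print(delta_e(series, final).round(1))        # segregation decaying toward 0
```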
The Robustness of LISREL Estimates in Structural Equation Models with Categorical Variables.
ERIC Educational Resources Information Center
Ethington, Corinna A.
1987-01-01
This study examined the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical variables. The analysis of mixed matrices produced estimates that closely approximated the model parameters except where dichotomous variables were…
NASA Astrophysics Data System (ADS)
Kuć, Marta; Cieślik-Boczula, Katarzyna; Rospenk, Maria
2018-06-01
The influence of cholesterol on the structure of model lipid bilayers treated with inhalation anesthetics (enflurane, isoflurane, sevoflurane, and halothane) was investigated using near-infrared (NIR) spectroscopy combined with principal component analysis (PCA). The conformational changes occurring in the hydrophobic region of the lipid bilayers were analyzed using the first overtones of the symmetric (2νs) and antisymmetric (2νas) stretching vibrations of the CH2 groups of the lipid aliphatic chains. The chain-melting phase-transition temperatures (Tm) of anesthetic-mixed dipalmitoylphosphatidylcholine (DPPC)/cholesterol and dipalmitoylphosphatidylglycerol (DPPG)/cholesterol membranes, obtained from the PCA, were compared with those of cholesterol-free DPPC and DPPG bilayers mixed with inhalation anesthetics.
ITS impacts assessment for Seattle MMDI evaluation : modeling methodology and results
DOT National Transportation Integrated Search
1999-01-01
This document presents a modeling analysis of Intelligent Transportation Systems (ITS) impacts from the SmartTrek program in Seattle, Washington. This report describes the methodology of the study and presents the finding for a mixed freeway/arterial...
Sto Domingo, N D; Refsgaard, A; Mark, O; Paludan, B
2010-01-01
The potentially devastating effects of urban flooding have given high importance to thorough understanding and management of water movement within catchments, and computer modelling tools have found widespread use for this purpose. The state of the art in urban flood modelling is the use of a coupled 1D pipe and 2D overland flow model to simultaneously represent pipe and surface flows. This method has been found to be accurate for highly paved areas but inappropriate when land hydrology is important. The objectives of this study are to introduce a new urban flood modelling procedure that reflects system interactions with hydrology, to verify that the new procedure operates well, and to underline the importance of considering the complete water cycle in urban flood analysis. A physically based, distributed hydrological model was linked to a drainage network model for urban flood analysis, and the essential components and concepts used are described in this study. The procedure was then applied to a catchment previously modelled with the traditional 1D-2D procedure to determine whether the new method performs similarly well. Results from applying the new method in a mixed-urban area were then analyzed to determine how important hydrologic contributions are to flooding in the area.
Rouphail, Nagui M.
2011-01-01
This paper presents behavior-based models for describing pedestrian gap acceptance at unsignalized crosswalks in a mixed-priority environment, where some drivers yield and some pedestrians cross in gaps. Logistic regression models are developed to predict the probability of pedestrian crossings as a function of vehicle dynamics, pedestrian assertiveness, and other factors. In combination with prior work on probabilistic yielding models, the results can be incorporated in a simulation environment, where they can more fully describe the interaction of these two modes. The approach is intended to supplement the Highway Capacity Manual (HCM) analytical procedure for locations where significant interaction occurs between drivers and pedestrians, including modern roundabouts. PMID:21643488
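A minimal version of such a gap-acceptance model is a logistic regression of the crossing decision on gap size and vehicle behavior. The statsmodels sketch below uses synthetic data and invented covariates, purely to illustrate the model form, not the paper's fitted coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "gap": rng.uniform(1, 12, n),        # available gap (s)
    "decel": rng.integers(0, 2, n),      # vehicle decelerating (proxy for yielding)
})
logit = -4.0 + 0.6 * df.gap + 1.5 * df.decel
df["crossed"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.logit("crossed ~ gap + decel", data=df).fit(disp=0)
print(fit.params.round(2))
# Predicted crossing probability for a 5 s gap with a decelerating vehicle:
print(fit.predict(pd.DataFrame({"gap": [5.0], "decel": [1]})).round(2))
```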
A novel scale for measuring mixed states in bipolar disorder.
Cavanagh, Jonathan; Schwannauer, Matthias; Power, Mick; Goodwin, Guy M
2009-01-01
Conventional descriptions of bipolar disorder tend to treat the mixed state as something of an afterthought, and no existing scale specifically measures the phenomena of the mixed state. This study aimed to test a novel scale for the mixed state in a clinical and community population of bipolar patients. The scale captured clinically relevant symptoms of both mania and depression in bivariate form. Recovered respondents were asked to recall their last manic episode, and the scale allowed endorsement of one or more of the manic and depressive symptoms. Internal consistency analyses were carried out using Cronbach's alpha. Factor analysis was carried out using a standard principal components analysis followed by varimax rotation, and a confirmatory factor analytic method was used to validate the scale structure in a representative clinical sample. The reliability analysis gave a Cronbach's alpha of 0.950, with corrected item-total correlations ranging from 0.546 (weight change) to 0.830 (mood). The factor analysis revealed a two-factor solution for the manic and depressive items, accounting for 61.2% of the variance in the data. Factor 1 represented physical activity, verbal activity, thought processes, and mood; Factor 2 represented eating habits, weight change, passage of time, and pain sensitivity. This novel scale appears to capture the key features of mixed states. The two-factor solution fits well with previous models of bipolar disorder and concurs with the view that mixed states may be more than the sum of their parts.
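Cronbach's alpha, the reliability statistic reported above, is straightforward to compute from an items-by-respondents score matrix; a numpy sketch on synthetic data follows.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(4)
latent = rng.normal(size=(100, 1))                   # shared severity factor
scores = latent + 0.5 * rng.normal(size=(100, 8))    # 8 correlated items
print(round(cronbach_alpha(scores), 3))              # high internal consistency
```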
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
Traditional methods for simulating mechanical gear drives include the gear-pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy; the latter usually achieves higher precision, but the calculation is complex and convergence is difficult. Currently, most research focuses on the description of geometric models and the definition of boundary conditions, but neither addresses these problems fundamentally. To improve simulation efficiency while ensuring accurate results, a mixed-model method is presented that uses gear tooth profiles in place of solid gears to simulate gear motion. In the modeling process, solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected using the SolidWorks API, and fitted curves are created in Adams from these coordinates; next, the fitted curves are positioned according to the location of the contact area; finally, the loading conditions, boundary conditions, and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates meshing through curve-to-curve contact, and obtains mass and inertia data from the solid gear models. This simulation process combines the two models to complete the gear-driving analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed-model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed-model method is of high practical value for studying the dynamics of gear mechanisms.
AlKindi, N A; Nunn, J
2016-04-22
Access to health services is a right for every individual. However, there is evidence that people with disabilities face barriers in accessing dental care. One reason is the unclear referral pathway in the Irish dental health service. The appropriate assignment of patients to relevant services is important for ensuring better access to healthcare, all the more so because few trained dental practitioners provide dental treatment for people with disabilities, and even fewer are qualified specialists in special care dentistry. The aim of this part of the study was to assess the use of the BDA Case Mix Model to determine the need for referral of patients to specialist dental services, and to determine any association between patient complexity and the need for adjunct measures, such as sedation and general anaesthesia, in the management of people with disabilities and complex needs. The study was a retrospective analysis of dental records using the BDA Case Mix Model. Results: Patients with different levels of complexity were being referred to the special care dentistry clinic at the Dublin Dental University Hospital, and the need for supportive adjunct measures such as sedation and general anaesthesia was not necessarily the main reason for referring patients to specialist services. The assessment with the BDA Case Mix Model was comprehensive, as it considered many factors contributing to case complexity; not all categories in the Case Mix Model had a significant association with the need for an adjunct. Conclusion: The BDA Case Mix Model can be used to measure the need for supportive adjunct measures, such as sedation and general anaesthesia.
A review of some problems in global-local stress analysis
NASA Technical Reports Server (NTRS)
Nelson, Richard B.
1989-01-01
The various types of local-global finite-element problems point out the need to develop a new generation of software. First, this new software needs to have a complete analysis capability, encompassing linear and nonlinear analysis of 1-, 2-, and 3-dimensional finite-element models, as well as mixed dimensional models. The software must be capable of treating static and dynamic (vibration and transient response) problems, including the stability effects of initial stress, and the software should be able to treat both elastic and elasto-plastic materials. The software should carry a set of optional diagnostics to assist the program user during model generation in order to help avoid obvious structural modeling errors. In addition, the program software should be well documented so the user has a complete technical reference for each type of element contained in the program library, including information on such topics as the type of numerical integration, use of underintegration, and inclusion of incompatible modes, etc. Some packaged information should also be available to assist the user in building mixed-dimensional models. An important advancement in finite-element software should be in the development of program modularity, so that the user can select from a menu various basic operations in matrix structural analysis.
NASA Technical Reports Server (NTRS)
Wang, C. R.; Papell, S. S.
1983-01-01
Three dimensional mixing length models of a flow field immediately downstream of coolant injection through a discrete circular hole at a 30 deg angle into a crossflow were derived from the measurements of turbulence intensity. To verify their effectiveness, the models were used to estimate the anisotropic turbulent effects in a simplified theoretical and numerical analysis to compute the velocity and temperature fields. With small coolant injection mass flow rate and constant surface temperature, numerical results of the local crossflow streamwise velocity component and surface heat transfer rate are consistent with the velocity measurement and the surface film cooling effectiveness distributions reported in previous studies.
Phenomenology of NMSSM in TeV scale mirage mediation
NASA Astrophysics Data System (ADS)
Hagimoto, Kei; Kobayashi, Tatsuo; Makino, Hiroki; Okumura, Ken-ichi; Shimomura, Takashi
2016-02-01
We study the next-to-minimal supersymmetric standard model (NMSSM) with TeV scale mirage mediation, which is known as a solution for the little hierarchy problem in supersymmetry. Our previous study showed that the 125 GeV Higgs boson is realized with O(10)% fine-tuning for a 1.5 TeV gluino (1 TeV stop) mass. The μ term can be as large as 500 GeV without sacrificing the fine-tuning, thanks to a cancellation mechanism, and the singlet-doublet mixing is suppressed by tan β. In this paper, we further extend this analysis. We argue that approximate scale symmetries play a role behind the suppression of the singlet-doublet mixing; they reduce the mixing matrix to a simple form that is useful for understanding the results of the numerical analysis. We perform a comprehensive analysis of the fine-tuning, including the singlet sector, by introducing a simple formula for the fine-tuning measure. This shows that the singlet mass of least fine-tuning is favored by the LEP anomaly for moderate tan β. We also discuss prospects for precision measurements of the Higgs couplings at LHC and ILC and for direct/indirect dark matter searches in the model.
Linear Instability Analysis of non-uniform Bubbly Mixing layer with Two-Fluid model
NASA Astrophysics Data System (ADS)
Sharma, Subash; Chetty, Krishna; Lopez de Bertodano, Martin
We examine the inviscid instability of a non-uniform adiabatic bubbly shear layer with a Two-Fluid model. The Two-Fluid model is made well-posed with closure relations for the interfacial forces. First, a characteristic analysis is carried out to study the well-posedness of the model over a range of void fractions, with interfacial forces for virtual mass, interfacial drag, and interfacial pressure. A dispersion analysis then allows us to obtain growth rates and wavelengths. Finally, the well-posed Two-Fluid model is solved using CFD to validate the results obtained with the linear stability analysis. The effect of the void fraction and of the distribution profile on stability is analyzed.
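To make the characteristic (well-posedness) step concrete, the sketch below checks whether a quasi-linear system A du/dt + B du/dx = S has real characteristic speeds, i.e. real generalized eigenvalues of (B, A). The 4x4 matrices are illustrative placeholders, not the paper's two-fluid closures.

```python
# Sketch of a characteristic analysis: the model is hyperbolic (well-posed
# as an initial-value problem) when all generalized eigenvalues are real.
import numpy as np
from scipy.linalg import eig

def characteristic_speeds(A, B):
    return eig(B, A, right=False)  # generalized eigenvalues of B x = lam A x

A = np.eye(4)                      # placeholder coefficient matrices
B = np.array([[0.5, 1.0, 0.0, 0.0],
              [0.2, 0.5, 0.1, 0.0],
              [0.0, 0.1, 0.4, 1.0],
              [0.0, 0.0, 0.3, 0.4]])

lam = characteristic_speeds(A, B)
print("characteristic speeds:", lam)
print("well-posed (all real):", np.allclose(lam.imag, 0.0, atol=1e-12))
```

In practice the matrices would be rebuilt at each void fraction of interest, tracing out the well-posed region the abstract describes.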
Atomization and dense-fluid breakup regimes in liquid rocket engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oefelein, Joseph; Dahms, Rainer Norbert Uwe
2015-04-20
Until recently, modern theory has lacked a fundamentally based model to predict the operating pressures where classical sprays transition to dense-fluid mixing with diminished surface tension. In this paper, such a model is presented to quantify this transition for liquid-oxygen–hydrogen and n-decane–gaseous-oxygen injection processes. The analysis reveals that the respective molecular interfaces break down not necessarily because of vanishing surface tension forces but instead because of the combination of broadened interfaces and a reduction in the mean free molecular path. When this occurs, the interfacial structure itself enters the continuum regime, where transport processes rather than intermolecular forces dominate. Using this model, regime diagrams for the respective systems are constructed that show the range of operating pressures and temperatures where this transition occurs. The analysis also reveals the conditions where classical spray dynamics persists even at high supercritical pressures. As a result, it demonstrates that, depending on the composition and temperature of the injected fluids, the injection process can exhibit either classical spray atomization, dense-fluid diffusion-dominated mixing, or supercritical mixing phenomena at chamber pressures encountered in state-of-the-art liquid rocket engines.
NASA Astrophysics Data System (ADS)
Shobana, Sutha; Dharmaraja, Jeyaprakash; Selvaraj, Shanmugaperumal
2013-04-01
Equilibrium studies of Ni(II), Cu(II) and Zn(II) mixed ligand complexes involving the primary ligand 5-fluorouracil (5-FU; A) and the imidazoles imidazole (him), benzimidazole (bim), histamine (hist) and L-histidine (his) as co-ligands (B) were carried out pH-metrically in aqueous medium at 310 ± 0.1 K with I = 0.15 M (NaClO4). In the solution state, MABH, MAB and MAB2 species were detected. The primary ligand (A) binds the central M(II) ions in a monodentate manner, whereas the him, bim, hist and his co-ligands (B) bind in mono-, mono-, bi- and tridentate modes, respectively. The calculated Δ log K, log X and log X' values indicate higher stability of the mixed ligand complexes in comparison to the binary species, and the stability of the mixed ligand complex equilibria follows the Irving-Williams order. In vitro biological evaluations of the free ligand (A) and its metal complexes by the well diffusion technique show moderate activities against common bacterial and fungal strains. Oxidative cleavage interaction of the ligand (A) and its copper complexes with CT DNA was also studied by the gel electrophoresis method in the presence of an oxidant, and in vitro antioxidant evaluations of the primary ligand (A) and the CuA and CuAB complexes were carried out with the DPPH free radical scavenging model. In the solid state, the MAB type M(II)–5-FU(A)–his(B) complexes were isolated and characterized by various physico-chemical and spectral techniques. Both the magnetic susceptibility and electronic spectral analyses suggest distorted octahedral geometry. Thermal studies on the synthesized mixed ligand complexes show loss of the coordinated water molecule in the first step, followed by decomposition of the organic residues. XRD and SEM analyses suggest a microcrystalline nature and homogeneous morphology of the MAB complexes. Further, 3D molecular modeling and analysis of the mixed ligand MAB complexes have also been carried out.
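For readers unfamiliar with the Δ log K comparison invoked above, the arithmetic is simple enough to show directly; the sketch below uses hypothetical stability constants (not the measured values) to illustrate the sign convention.

```python
# Illustrative arithmetic: Delta log K compares binding of B to MA versus
# binding of B to the free metal ion; a positive value means the ternary
# (mixed-ligand) complex is favored. All constants are hypothetical.
log_beta_MA  = 5.2    # M + A <-> MA
log_beta_MB  = 7.8    # M + B <-> MB
log_beta_MAB = 13.6   # M + A + B <-> MAB

log_K_MA_to_MAB = log_beta_MAB - log_beta_MA   # MA + B <-> MAB
delta_log_K = log_K_MA_to_MAB - log_beta_MB
print(f"Delta log K = {delta_log_K:+.2f}")     # > 0: mixed complex more stable
```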
Genetic mixed linear models for twin survival data.
Ha, Il Do; Lee, Youngjo; Pawitan, Yudi
2007-07-01
Twin studies are useful for separating the genetic or heritable component of a trait from the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated with survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.
Heterosis and outbreeding depression: A multi-locus model and an application to salmon production
Emlen, John M.
1991-01-01
Both artificial propagation and efforts to preserve or augment natural populations sometimes involve, wittingly or unwittingly, the mixing of different gene pools. The advantages of such mixing vis-à-vis the alleviation of inbreeding depression are well known. Acknowledged, but less well understood, are the complications posed by outbreeding depression. This paper derives a simple model of outbreeding depression and demonstrates that it is reasonably possible to predict the generation-to-generation fitness course of hybrids derived from parents from different origins. Genetic difference, or distance between parental types, is defined by the drop in fitness experienced by one type reared at the site to which the other is locally adapted. For situations where decisions involving stock mixing must be made in the absence of complete information, a sensitivity analysis-based conflict resolution method (the Good-Bad-Ugly model) is described.
Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G
2014-07-02
Diallel crossing methods provide information regarding the performance of genitors between themselves and in their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated becomes limiting. One option for handling the number of parents involved is the adoption of circulant diallels; however, information is lacking regarding diallel analysis using mixed models. This study aimed to evaluate the efficacy of linear mixed models for estimating components of general and specific combining ability for resistance to foliar fungal diseases in a circulant diallel with different s values. Fifty diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in classifying genitors by their combining abilities relative to complete diallels. The number of crosses (s) in which each genitor participates in the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.
Koerner, Tess K.; Zhang, Yang
2017-01-01
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity for, applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422
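The contrast drawn above is easy to demonstrate on simulated data: a naive Pearson correlation pools repeated measures across subjects, while a random-intercept LME model absorbs between-subject baselines. A minimal sketch with statsmodels follows; all variable names and effect sizes are invented.

```python
# Sketch: Pearson correlation (ignores repeated measures) vs. a
# random-intercept linear mixed-effects model on simulated data.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_cond = 20, 4
subj = np.repeat(np.arange(n_subj), n_cond)
neural = rng.normal(size=n_subj * n_cond)
baseline = rng.normal(scale=2.0, size=n_subj)[subj]   # between-subject offsets
behavior = 0.5 * neural + baseline + rng.normal(scale=0.5, size=n_subj * n_cond)
df = pd.DataFrame({"subject": subj, "neural": neural, "behavior": behavior})

r, p = pearsonr(df["neural"], df["behavior"])   # treats all rows as independent
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

lme = smf.mixedlm("behavior ~ neural", df, groups=df["subject"]).fit()
print("LME slope:", lme.params["neural"], "p =", lme.pvalues["neural"])
```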
Hierarchical Bayes approach for subgroup analysis.
Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C
2017-01-01
In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of the drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the prior selection for variance components in hierarchical Bayes, estimation and decision making of the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed response and repeated measurements.
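The shrinkage at the heart of such hierarchical estimates can be shown in a few lines: with a normal-normal model and a fixed between-subgroup variance, each subgroup effect is pulled toward the overall effect in proportion to its sampling variance. All numbers below are hypothetical, and a full analysis would place a prior on the between-subgroup variance as the article discusses.

```python
# Sketch of normal-normal shrinkage toward an overall treatment effect.
import numpy as np

y = np.array([0.30, 0.10, 0.45, 0.20])   # observed subgroup effects (hypothetical)
se = np.array([0.10, 0.15, 0.20, 0.12])  # their standard errors
tau2 = 0.02                              # between-subgroup variance (fixed here)

w = 1.0 / (se**2 + tau2)
mu = np.sum(w * y) / np.sum(w)           # precision-weighted overall effect
shrink = tau2 / (tau2 + se**2)           # weight on each subgroup's own estimate
post_mean = mu + shrink * (y - mu)       # posterior means of subgroup effects
print(f"overall = {mu:.3f}; subgroup posteriors = {post_mean.round(3)}")
```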
Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets
NASA Technical Reports Server (NTRS)
Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.
1978-01-01
A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.
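The core loop described above (adjust key parameters so that predicted response matches measured response) is a nonlinear least-squares problem; the sketch below tunes the stiffness and damping of a one-degree-of-freedom oscillator, a deliberately simple stand-in for the mixed hydraulic-mechanical system.

```python
# Sketch: minimize response differences between model and "measurement"
# by adjusting key parameters (stiffness k, damping c).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t = np.linspace(0.0, 5.0, 200)

def response(params):
    k, c = params  # the "key parameters" of the model
    sol = solve_ivp(lambda t_, y: [y[1], -k * y[0] - c * y[1]],
                    (t[0], t[-1]), [1.0, 0.0], t_eval=t)
    return sol.y[0]

true = (4.0, 0.3)
measured = response(true) + np.random.default_rng(2).normal(0, 0.01, t.size)

fit = least_squares(lambda p: response(p) - measured, x0=[1.0, 1.0])
print("estimated (k, c):", fit.x)  # should approach (4.0, 0.3)
```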
Breaking from binaries - using a sequential mixed methods design.
Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan
2014-03-01
To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.
Seol, Hyon-Woo; Heo, Seong-Joo; Koak, Jai-Young; Kim, Seong-Kyun; Kim, Shin-Koo
2015-01-01
To analyze the axial displacement of external and internal implant-abutment connections after cyclic loading. Three groups of external abutments (Ext group), an internal tapered one-piece-type abutment (Int-1 group), and an internal tapered two-piece-type abutment (Int-2 group) were prepared. Cyclic loading was applied to implant-abutment assemblies at 150 N with a frequency of 3 Hz. The amount of axial displacement, the Periotest values (PTVs), and the removal torque values (RTVs) were measured. Both a repeated measures analysis of variance and pattern analysis based on the linear mixed model were used for statistical analysis. Scanning electron microscopy (SEM) was used to evaluate the surface of the implant-abutment connection. The mean axial displacements after 1,000,000 cycles were 0.6 μm in the Ext group, 3.7 μm in the Int-1 group, and 9.0 μm in the Int-2 group. Pattern analysis revealed a breakpoint at 171 cycles. The Ext group showed no declining pattern, and the Int-1 group showed no declining pattern after the breakpoint (171 cycles); however, the Int-2 group experienced continuous axial displacement. After cyclic loading, the PTV decreased in the Int-2 group, and the RTV decreased in all groups. SEM imaging revealed surface wear in all groups. Axial displacement and surface wear occurred in all groups. The PTVs remained stable, but the RTVs decreased after cyclic loading. Based on linear mixed model analysis, the Ext and Int-1 groups' axial displacements plateaued after little cyclic loading, whereas the Int-2 group's rate of axial displacement slowed after 100,000 cycles.
Renormalization group equation analysis of a pseudoscalar portal dark matter model
NASA Astrophysics Data System (ADS)
Ghorbani, Karim
2017-10-01
We investigate the vacuum stability and perturbativity of a pseudoscalar portal dark matter (DM) model with a Dirac DM candidate, through the renormalization group equation analysis at one-loop order. The model has a particular feature which can evade the direct detection upper bounds measured by XENON100 and even that from the planned experiment XENON1T. We first find the viable regions in the parameter space which will give rise to correct DM relic density and comply with the constraints from Higgs physics. We show that for a given mass of the pseudoscalar, the mixing angle plays no significant role in the running of the couplings. Then we study the running of the couplings for various pseudoscalar masses at mixing angle θ = 6°, and find the scale of validity in terms of the dark coupling λ_d. Depending on our choice of the cutoff scale, the resulting viable parameter space will be determined.
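As a cartoon of the running-coupling analysis, the sketch below integrates a one-loop beta function for a single quartic coupling and tests a perturbativity criterion at a high scale; the beta-function coefficient, initial value, and 4π criterion are all hypothetical simplifications of the full coupled RGEs.

```python
# Toy one-loop running of a single quartic coupling; all numbers hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def beta(t, lam, b=6.0):                 # t = ln(mu / 1 TeV)
    return b * lam**2 / (16 * np.pi**2)  # one-loop beta function (toy)

lam0 = 0.6                               # "dark coupling" at 1 TeV (hypothetical)
t_end = np.log(1e16 / 1e3)               # run up to ~1e16 GeV
sol = solve_ivp(beta, (0.0, t_end), [lam0], dense_output=True, rtol=1e-8)

lam_high = sol.sol(t_end)[0]
print(f"coupling at 1e16 GeV: {lam_high:.3f}; "
      f"perturbative (< 4*pi): {lam_high < 4 * np.pi}")
```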
A mixing timescale model for TPDF simulations of turbulent premixed flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...
2017-02-06
Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled, and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime, while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.
Estimated population mixing by country and risk cohort for the HIV/AIDS epidemic in Western Europe
NASA Astrophysics Data System (ADS)
Thomas, Richard
This paper applies a compartmental epidemic model to estimate the mixing relations that support the transfer of HIV infection between risk populations within the countries of Western Europe. To this end, a space-time epidemic model is described whose compartments represent, within each country, populations at high risk (gay men and intravenous drug injectors ever with AIDS) and at low risk (the remainder who are sexually active). The model also allows for contacts between susceptible and infectious individuals through both local and international travel. The system is calibrated to recorded AIDS incidence, and the best-fit solution provides estimates of variations in the rates of mixing between the compartments together with a reconstruction of the transmission pathway. This solution indicates that, for all the countries, AIDS incidence among those at low risk is expected to remain extremely small relative to their total number. A sensitivity analysis of the low-risk partner acquisition rate, however, suggests this endemic state might be fragile within Europe during this century. The discussion examines the relevance of these mixing relationships for the maintenance of disease control.
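To fix ideas, here is a toy two-group (high/low risk) susceptible-infectious model with a mixing matrix, in the spirit of the compartmental structure above but without the country-by-country and travel terms; every rate and population size below is hypothetical.

```python
# Toy two-group SI model: a mixing matrix couples infection across groups.
import numpy as np
from scipy.integrate import solve_ivp

beta = np.array([[0.50, 0.05],   # transmission/mixing rates: row = susceptible
                 [0.05, 0.01]])  # group, column = infectious group
N = np.array([1e5, 1e7])         # high-risk and low-risk population sizes

def rhs(t, y):
    S, I = y[:2], y[2:]
    foi = beta @ (I / N)         # force of infection on each group
    return np.concatenate([-foi * S, foi * S])

y0 = np.array([N[0] - 10, N[1], 10.0, 0.0])
sol = solve_ivp(rhs, (0, 20), y0, t_eval=np.linspace(0, 20, 5))
print(sol.y[2:].round(0))        # infections in each group over time
```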
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piri, Mohammad
2014-03-31
Under this project, a multidisciplinary team of researchers at the University of Wyoming combined state-of-the-art experimental studies, numerical pore- and reservoir-scale modeling, and high performance computing to investigate trapping mechanisms relevant to geologic storage of mixed scCO2 in deep saline aquifers. The research included investigations in three fundamental areas: (i) the experimental determination of two-phase flow relative permeability functions, relative permeability hysteresis, and residual trapping under reservoir conditions for mixed scCO2-brine systems; (ii) improved understanding of permanent trapping mechanisms; and (iii) scientifically correct, fine-grid numerical simulations of CO2 storage in deep saline aquifers taking into account the underlying rock heterogeneity. The specific activities included: (1) measurement of reservoir-conditions drainage and imbibition relative permeabilities, irreducible brine and residual mixed scCO2 saturations, and relative permeability scanning curves (hysteresis) in rock samples from RSU; (2) characterization of wettability through measurements of contact angles and interfacial tensions under reservoir conditions; (3) development of a physically based dynamic core-scale pore network model; (4) development of new, improved high-performance modules for the UW-team simulator to add capabilities for hysteresis in the relative permeability functions, geomechanical deformation, and an equilibrium calculation (both pore- and core-scale models were rigorously validated against well-characterized core-flooding experiments); and (5) an analysis of long-term permanent trapping of mixed scCO2 through high-resolution numerical experiments and analytical solutions. The analysis takes into account formation heterogeneity, capillary trapping, and relative permeability hysteresis.
Wienke, B R; O'Leary, T R
2008-05-01
Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, the USS Perry deep rebreather (RB) exploration dive, a world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, in both recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized, and parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and an additional check on the canonical approach to estimating diving risk; the method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
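The fitting step named above (Levenberg-Marquardt with an L2 norm) is standard; the sketch below fits a hypothetical parametric risk curve to synthetic data with SciPy's LM solver. The exponential risk form is an illustrative stand-in, not the RGBM risk function.

```python
# Sketch: Levenberg-Marquardt least-squares fit of a parametric risk curve.
import numpy as np
from scipy.optimize import least_squares

exposure = np.linspace(0.5, 3.0, 12)               # normalized dive severity (toy)
observed = 1.0 - np.exp(-0.02 * exposure**2.1)     # synthetic "risk" data
observed += np.random.default_rng(3).normal(0, 1e-4, exposure.size)

def risk(p, x):
    a, b = p
    return 1.0 - np.exp(-a * x**b)

fit = least_squares(lambda p: risk(p, exposure) - observed,
                    x0=[0.01, 1.0], method="lm")   # Levenberg-Marquardt
print("fitted (a, b):", fit.x)
```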
Convective Overshoot in Stellar Interior
NASA Astrophysics Data System (ADS)
Zhang, Q. S.
2015-07-01
In stellar interiors, turbulent thermal convection transports matter and energy, and dominates the structure and evolution of stars. Convective overshoot, which results from non-local convective transport from the convection zone into the radiative zone, is one of the most uncertain and difficult factors in stellar physics at present. The classical method for studying convective overshoot is the non-local mixing-length theory (NMLT). However, the NMLT is based on phenomenological assumptions and leads to contradictions, for which it has been criticized in the literature. Helioseismic studies have shown that the NMLT cannot satisfy the helioseismic requirements, and have pointed out that only turbulent convection models (TCMs) can be accepted. In the first part of this thesis, the models and derivations of both the NMLT and the TCM are introduced. In the second part, the work part, studies of the TCM (theoretical analysis and applications) and the development of a new model of convective overshoot mixing are described in detail. In the theoretical analysis of the TCM, approximate and asymptotic solutions were obtained under certain assumptions, and the structure of the overshoot region was discussed. Over a large space of the free parameters, the approximate/asymptotic solutions are in good agreement with the numerical results. An important result is that the scale of the overshoot region in which thermal energy transport is effective is 1 H_K (where H_K is the scale height of the turbulent kinetic energy), independent of the free parameters of the TCM. We applied the TCM and a simple overshoot mixing model in three cases. In the solar case, the temperature gradient in the overshoot region is in agreement with the helioseismic requirements, and the profiles of the lithium abundance, sound speed, and density of the solar models are also improved. In low-mass stars of the open clusters Hyades, Praesepe, NGC 6633, NGC 752, NGC 3680, and M67, using the same model and parameter as in the solar case to treat convective envelope overshoot mixing, the surface lithium abundances of the stellar models were consistent with the observations. In the case of the binary HY Vir, the same model and parameter also bring the radii and effective temperatures of the HY Vir stars, which have convective cores, into agreement with the observations. These results imply that the simple overshoot mixing model may need to be improved significantly. Motivated by this, we established a new model of overshoot mixing based on the fluid dynamic equations and worked out the diffusion coefficient of convective mixing. The diffusion coefficient shows different behaviors in the convection zone and in the overshoot region. In the overshoot region, buoyancy does negative work on the flows, so fluid elements oscillate around their equilibrium location, which leads to a small scale and low efficiency of overshoot mixing. These physical properties differ significantly from the classical NMLT and are consistent with helioseismic studies and numerical simulations. The new model was tested in stellar evolution calculations, and its parameter was calibrated.
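To illustrate the kind of object at stake (a diffusion coefficient for overshoot mixing that drops sharply beyond the convective boundary), here is a toy exponential-decay profile; the functional form and numbers are illustrative only and are not the new model derived in the thesis.

```python
# Toy overshoot-mixing diffusion coefficient: large inside the convection
# zone, decaying over a fraction of a scale height beyond the boundary.
import numpy as np

D0 = 1e14          # diffusion coefficient at the convective boundary (cm^2/s)
H = 5e9            # scale height near the boundary (cm)
f = 0.1            # overshoot extent parameter (dimensionless, illustrative)

def D_overshoot(z):
    """z = distance past the convective boundary (cm)."""
    return D0 * np.exp(-2.0 * z / (f * H))

for z in (0.0, 0.05 * H, 0.1 * H):
    print(f"z/H = {z/H:.2f}  D = {D_overshoot(z):.3e}")
```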
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women's Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
pong: fast analysis and visualization of latent clusters in population genetic data.
Behr, Aaron A; Liu, Katherine Z; Liu-Fang, Gracie; Nakka, Priyanka; Ramachandran, Sohini
2016-09-15
A series of methods in population genetics use multilocus genotype data to assign individuals membership in latent clusters. These methods belong to a broad class of mixed-membership models, such as latent Dirichlet allocation used to analyze text corpora. Inference from mixed-membership models can produce different output matrices when repeatedly applied to the same inputs, and the number of latent clusters is a parameter that is often varied in the analysis pipeline. For these reasons, quantifying, visualizing, and annotating the output from mixed-membership models are bottlenecks for investigators across multiple disciplines from ecology to text data mining. We introduce pong, a network-graphical approach for analyzing and visualizing membership in latent clusters with a native interactive D3.js visualization. pong leverages efficient algorithms for solving the Assignment Problem to dramatically reduce runtime while increasing accuracy compared with other methods that process output from mixed-membership models. We apply pong to 225,705 unlinked genome-wide single-nucleotide variants from 2426 unrelated individuals in the 1000 Genomes Project, and identify previously overlooked aspects of global human population structure. We show that pong outpaces current solutions by more than an order of magnitude in runtime while providing a customizable and interactive visualization of population structure that is more accurate than those produced by current tools. pong is freely available and can be installed using the Python package management system pip. pong's source code is available at https://github.com/abehr/pong aaron_behr@alumni.brown.edu or sramachandran@brown.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
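The label-alignment trick at the heart of pong (matching clusters across runs by solving the Assignment Problem) can be reproduced with SciPy's Hungarian-algorithm solver; the membership matrices below are synthetic, with the second run's labels deliberately permuted.

```python
# Sketch: align cluster labels across two runs via the Assignment Problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
Q1 = rng.dirichlet(np.ones(3), size=100)   # run 1: membership matrix (N x K)
Q2 = Q1[:, [2, 0, 1]]                      # run 2: same clusters, permuted labels

# similarity of cluster columns across runs (here: correlation)
sim = np.corrcoef(Q1.T, Q2.T)[:3, 3:]
row, col = linear_sum_assignment(1.0 - sim)  # minimize total dissimilarity
print(dict(zip(row, col)))                   # recovers {0: 1, 1: 2, 2: 0}
```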
Casellas, J; Bach, R
2012-06-01
Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
FORMAL UNCERTAINTY ANALYSIS OF A LAGRANGIAN PHOTOCHEMICAL AIR POLLUTION MODEL. (R824792)
This study applied Monte Carlo analysis with Latin hypercube sampling to evaluate the effects of uncertainty in air parcel trajectory paths, emissions, rate constants, deposition affinities, mixing heights, and atmospheric stability on predictions from a vertically...
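A minimal version of this uncertainty propagation, assuming SciPy's quasi-Monte Carlo module, draws Latin hypercube samples for a few uncertain inputs and pushes them through a toy surrogate; the input names, ranges, and surrogate are invented, not the actual photochemical model.

```python
# Sketch: Latin hypercube sampling of uncertain inputs + uncertainty spread.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=5)
unit = sampler.random(n=500)
# scale to [low, high]: emissions factor, rate-constant factor, mixing height (m)
lo, hi = [0.5, 0.8, 300.0], [1.5, 1.2, 1500.0]
X = qmc.scale(unit, lo, hi)

def model(x):  # toy surrogate: ozone ~ emissions * kinetics / mixing depth
    return 40.0 * x[:, 0] * x[:, 1] / (x[:, 2] / 1000.0)

pred = model(X)
print(f"mean = {pred.mean():.1f}, sd = {pred.std():.1f}, "
      f"2.5-97.5%: {np.percentile(pred, 2.5):.1f}-{np.percentile(pred, 97.5):.1f}")
```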
Using existing case-mix methods to fund trauma cases.
Monakova, Julia; Blais, Irene; Botz, Charles; Chechulin, Yuriy; Picciano, Gino; Basinski, Antoni
2010-01-01
Policymakers frequently face the need to increase funding in isolated and frequently heterogeneous (clinically and in terms of resource consumption) patient subpopulations. This article presents a methodologic solution for testing the appropriateness of using existing grouping and weighting methodologies for funding subsets of patients in the scenario where a case-mix approach is preferable to a flat-rate based payment system. Using as an example the subpopulation of trauma cases of Ontario lead trauma hospitals, the statistical techniques of linear and nonlinear regression models, regression trees, and spline models were applied to examine the fit of the existing case-mix groups and reference weights for the trauma cases. The analyses demonstrated that for funding Ontario trauma cases, the existing case-mix systems can form the basis for rational and equitable hospital funding, decreasing the need to develop a different grouper for this subset of patients. This study confirmed that Injury Severity Score is a poor predictor of costs for trauma patients. Although our analysis used the Canadian case-mix classification system and cost weights, the demonstrated concept of using existing case-mix systems to develop funding rates for specific subsets of patient populations may be applicable internationally.
Ju, Jin Hyun; Shenoy, Sushila A; Crystal, Ronald G; Mezey, Jason G
2017-05-01
Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL.
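The covariance-construction idea described above can be sketched in a few lines: run ICA on the expression matrix, keep the components treated as confounders, and form a sample-by-sample covariance for the mixed model's random effect. The component-selection rule below is a placeholder; CONFETI's actual criterion for separating confounders from broad-impact genetic signal is more involved.

```python
# Sketch: ICA-based construction of a sample covariance for a mixed model.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
expr = rng.normal(size=(100, 2000))     # samples x genes (synthetic)

ica = FastICA(n_components=10, random_state=0)
S = ica.fit_transform(expr)             # per-sample component activations

keep = np.ones(10, dtype=bool)          # placeholder: treat all as confounders
K = np.cov(S[:, keep])                  # sample x sample covariance (100 x 100)
print(K.shape)                          # used as the random-effect covariance
```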
Li, Qike; Schissler, A Grant; Gardeux, Vincent; Achour, Ikbel; Kenost, Colleen; Berghout, Joanne; Li, Haiquan; Zhang, Hao Helen; Lussier, Yves A
2017-05-24
Transcriptome analytic tools are commonly used across patient cohorts to develop drugs and predict clinical outcomes. However, as precision medicine pursues more accurate and individualized treatment decisions, these methods are not designed to address single-patient transcriptome analyses. We previously developed and validated the N-of-1-pathways framework using two methods, Wilcoxon and Mahalanobis Distance (MD), for personal transcriptome analysis derived from a pair of samples of a single patient. Although both methods uncover concordantly dysregulated pathways, they are not designed to detect dysregulated pathways containing both up- and down-regulated genes (bidirectional dysregulation), which are ubiquitous in biological systems. We developed N-of-1-pathways MixEnrich, a mixture model followed by a gene set enrichment test, to uncover bidirectionally and concordantly dysregulated pathways one patient at a time. We assess its accuracy in a comprehensive simulation study and in an RNA-Seq data analysis of head and neck squamous cell carcinomas (HNSCCs). In the presence of bidirectionally dysregulated genes in the pathway or of high background noise, MixEnrich substantially outperforms previous single-subject transcriptome analysis methods, both in the simulation study and in the HNSCC data analysis (ROC curves; higher true positive rates; lower false positive rates). Bidirectional and concordant dysregulated pathways uncovered by MixEnrich in each patient largely overlapped with the quasi-gold standard compared to other single-subject and cohort-based transcriptome analyses. The greater performance of MixEnrich presents an advantage over previous methods in meeting the promise of accurate personal transcriptome analysis to support precision medicine at the point of care.
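The second stage of such a pipeline (testing whether a pathway is enriched for genes flagged as dysregulated, regardless of direction) can be illustrated with a one-sided Fisher exact test; in the sketch, a simple absolute log-ratio cutoff stands in for the paper's mixture-model assignment, and all data are synthetic.

```python
# Sketch: enrichment of (bidirectionally) dysregulated genes in a pathway.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(7)
logratio = rng.normal(0, 0.3, size=5000)        # one patient's paired samples
pathway = np.arange(50)                         # hypothetical member gene indices
logratio[pathway[:25]] += 1.2                   # up-regulated members
logratio[pathway[25:]] -= 1.2                   # down-regulated members

dys = np.abs(logratio) > 0.8                    # stand-in for mixture assignment
in_path = np.zeros(5000, dtype=bool); in_path[pathway] = True

table = [[np.sum(dys & in_path), np.sum(~dys & in_path)],
         [np.sum(dys & ~in_path), np.sum(~dys & ~in_path)]]
odds, p = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds:.1f}, p = {p:.2e}")
```

Because the cutoff is applied to the absolute log-ratio, the up- and down-regulated halves of the pathway both count as dysregulated, which is exactly what a concordance-only test would miss.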
Testing the Grossman model of medical spending determinants with macroeconomic panel data.
Hartwig, Jochen; Sturm, Jan-Egbert
2018-02-16
Michael Grossman's human capital model of the demand for health has been argued to be one of the major achievements in theoretical health economics. Attempts to test this model empirically have been sparse, however, and have produced mixed results. These attempts have so far relied on (mostly cross-sectional) micro data from household surveys. For the first time in the literature, we bring in macroeconomic panel data for 29 OECD countries over the period 1970-2010 to test the model. To check the robustness of the results for the determinants of medical spending identified by the model, we include additional covariates in an extreme bounds analysis (EBA) framework. The preferred model specifications (including the robust covariates) do not lend much empirical support to the Grossman model, in line with the mixed results of earlier studies.
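Extreme bounds analysis reduces, at its core, to re-estimating the focus coefficient over many combinations of doubtful covariates and reporting the extremes; a minimal sketch follows using synthetic data and invented variable names rather than the OECD panel.

```python
# Sketch: extreme bounds on a focus coefficient across covariate subsets.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 300
income = rng.normal(size=n)                    # focus regressor (invented)
Z = rng.normal(size=(n, 4))                    # doubtful covariates (invented)
spending = 0.4 * income + 0.2 * Z[:, 0] + rng.normal(size=n)

coefs = []
for k in range(5):
    for idx in itertools.combinations(range(4), k):
        cols = [income] + [Z[:, j] for j in idx]
        X = sm.add_constant(np.column_stack(cols))
        coefs.append(sm.OLS(spending, X).fit().params[1])  # income coefficient
print(f"extreme bounds on income effect: [{min(coefs):.2f}, {max(coefs):.2f}]")
```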
Studying Mixing in Non-Newtonian Blue Maize Flour Suspensions Using Color Analysis
Trujillo-de Santiago, Grissel; Rojas-de Gante, Cecilia; García-Lara, Silverio; Ballescá-Estrada, Adriana; Alvarez, Mario Moisés
2014-01-01
Background: Non-Newtonian fluids occur in many relevant flow and mixing scenarios at the lab and industrial scale. The addition of acid or basic solutions to a non-Newtonian fluid is not an infrequent operation, particularly in biotechnology applications, where the pH of non-Newtonian culture broths is usually regulated using this strategy. Methodology and Findings: We conducted mixing experiments in agitated vessels using non-Newtonian blue maize flour suspensions. Acid or basic pulses were injected to reveal mixing patterns and flow structures and to follow their time evolution. No foreign pH indicator was used, as blue maize flours naturally contain anthocyanins that act as a native, wide-spectrum pH indicator. We describe a novel method to quantitate mixedness and mixing evolution through Dynamic Color Analysis (DCA) in this system. Color readings corresponding to different times and locations within the mixing vessel were taken with a digital camera (or a colorimeter) and translated to the CIELab scale of colors. We use distances in the Lab space, a 3D color space, between a particular mixing state and the final mixing point to characterize segregation/mixing in the system. Conclusion and Relevance: Blue maize suspensions represent an adequate and flexible model to study mixing (and fluid mechanics in general) in non-Newtonian suspensions using acid/base tracer injections. Simple strategies based on the evaluation of color distances in the CIELab space (or other scales such as HSB) can be adapted to characterize mixedness and mixing evolution in experiments using blue maize suspensions. PMID:25401332
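The DCA metric described above (a distance in CIELab between the current state and the fully mixed end point) is straightforward to compute; the sketch below assumes scikit-image is available for the RGB-to-Lab conversion, and the two color readings are invented.

```python
# Sketch: Euclidean distance in CIELab between a mixing state and the
# fully mixed reference color (Delta E, CIE76).
import numpy as np
from skimage.color import rgb2lab

def lab(rgb):  # rgb as floats in [0, 1]
    return rgb2lab(np.asarray(rgb, dtype=float).reshape(1, 1, 3))[0, 0]

current = lab([0.35, 0.20, 0.55])   # partially mixed region (hypothetical)
final   = lab([0.28, 0.25, 0.50])   # fully mixed end point (hypothetical)

segregation = np.linalg.norm(current - final)  # distance in Lab space
print(f"distance to mixed state: {segregation:.2f}")
```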
Solutions of the chemical kinetic equations for initially inhomogeneous mixtures.
NASA Technical Reports Server (NTRS)
Hilst, G. R.
1973-01-01
Following the recent discussions by O'Brien (1971) and Donaldson and Hilst (1972) of the effects of inhomogeneous mixing and turbulent diffusion on simple chemical reaction rates, the present report provides a more extensive analysis of when inhomogeneous mixing has a significant effect on chemical reaction rates. The analysis is then extended to the development of an approximate chemical sub-model which provides much improved predictions of chemical reaction rates over a wide range of inhomogeneities and pathological distributions of the concentrations of the reacting chemical species. In particular, the development of an approximate representation of the third-order correlations of the joint concentration fluctuations permits closure of the chemical sub-model at the level of the second-order moments of these fluctuations and the mean concentrations.
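The closure problem described above is visible already in the mean rate of a second-order reaction A + B -> products: averaging the product of fluctuating concentrations introduces a covariance term that perfectly mixed estimates omit. The synthetic "field" below makes that term negative, as for segregated reactants.

```python
# Sketch: mean reaction rate = k*(mean_A*mean_B + cov(A,B)); negative
# concentration correlations (unmixed reactants) depress the true rate.
import numpy as np

rng = np.random.default_rng(9)
# synthetic field: A-rich and B-rich parcels that are poorly mixed
A = np.clip(rng.normal(1.0, 0.6, 10000), 0, None)
B = np.clip(2.0 - A + rng.normal(0, 0.1, 10000), 0, None)

k = 1.0
perfectly_mixed = k * A.mean() * B.mean()
actual = k * np.mean(A * B)   # = k*(mean_A*mean_B + cov(A,B))
print(f"perfectly mixed estimate: {perfectly_mixed:.3f}, true mean rate: {actual:.3f}")
print(f"cov(A,B) = {np.cov(A, B)[0, 1]:.3f}  (negative => slower reaction)")
```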
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Gao, Zheng; Liu, Yangang; Li, Xiaolin; ...
2018-02-19
Here, a new particle-resolved three dimensional direct numerical simulation (DNS) model is developed that combines Lagrangian droplet tracking with the Eulerian field representation of turbulence near the Kolmogorov microscale. Six numerical experiments are performed to investigate the processes of entrainment of clear air and subsequent mixing with cloudy air and their interactions with cloud microphysics. The experiments are designed to represent different combinations of three configurations of initial cloudy area and two turbulence modes (decaying and forced turbulence). Five existing measures of microphysical homogeneous mixing degree are examined, modified, and compared in terms of their ability as a unifying measure to represent the effect of various entrainment-mixing mechanisms on cloud microphysics. Also examined and compared are the conventional Damköhler number and transition scale number as a dynamical measure of different mixing mechanisms. Relationships between the various microphysical measures and dynamical measures are investigated in search for a unified parameterization of entrainment-mixing processes. The results show that even with the same cloud water fraction, the thermodynamic and microphysical properties are different, especially for the decaying cases. Further analysis confirms that despite the detailed differences in cloud properties among the six simulation scenarios, the variety of turbulent entrainment-mixing mechanisms can be reasonably represented with power-law relationships between the microphysical homogeneous mixing degrees and the dynamical measures.
Jayachandrababu, Krishna C; Verploegh, Ross J; Leisen, Johannes; Nieuwendaal, Ryan C; Sholl, David S; Nair, Sankar
2016-06-15
Mixed-linker zeolitic imidazolate frameworks (ZIFs) are nanoporous materials that exhibit continuous and controllable tunability of properties like effective pore size, hydrophobicity, and organophilicity. The structure of mixed-linker ZIFs has been studied on macroscopic scales using gravimetric and spectroscopic techniques. However, it has so far not been possible to obtain information on unit-cell-level linker distribution, an understanding of which is key to predicting and controlling their adsorption and diffusion properties. We demonstrate the use of ¹H combined rotation and multiple pulse spectroscopy (CRAMPS) NMR spin exchange measurements in combination with computational modeling to elucidate potential structures of mixed-linker ZIFs, particularly the ZIF-8-90 series. All of the compositions studied have structures that have linkers mixed at a unit-cell-level as opposed to separated or highly clustered phases within the same crystal. Direct experimental observations of linker mixing were accomplished by measuring the proton spin exchange behavior between functional groups on the linkers. The data were then fitted to a kinetic spin exchange model using proton positions from candidate mixed-linker ZIF structures that were generated computationally using the short-range order (SRO) parameter as a measure of the ordering, clustering, or randomization of the linkers. The present method offers the advantages of sensitivity without requiring isotope enrichment, a straightforward NMR pulse sequence, and an analysis framework that allows one to relate spin diffusion behavior to proposed atomic positions. We find that structures close to equimolar composition of the two linkers show a greater tendency for linker clustering than what would be predicted based on random models. Using computational modeling we have also shown how the window-type distribution in experimentally synthesized mixed-linker ZIF-8-90 materials varies as a function of their composition. The structural information thus obtained can be further used for predicting, screening, or understanding the tunable adsorption and diffusion behavior of mixed-linker ZIFs, for which the knowledge of linker distributions in the framework is expected to be important.
NASA Astrophysics Data System (ADS)
Ware, John; Kort, Eric A.; DeCola, Phil; Duren, Riley
2016-08-01
Atmospheric observations of greenhouse gases provide essential information on sources and sinks of these key atmospheric constituents. To quantify fluxes from atmospheric observations, representation of transport—especially vertical mixing—is a necessity and often a source of error. We report on remotely sensed profiles of vertical aerosol distribution taken over a 2 year period in Pasadena, California. Using an automated analysis system, we estimate daytime mixing layer depth, achieving high confidence in the afternoon maximum on 51% of days with profiles from a Sigma Space Mini Micropulse LiDAR (MiniMPL) and on 36% of days with a Vaisala CL51 ceilometer. We note that considering ceilometer data on a logarithmic scale, a standard method, introduces an offset in mixing height retrievals. The mean afternoon maximum mixing height is 770 m above ground level in summer and 670 m in winter, with significant day-to-day variance (within-season σ = 220 m ≈ 30%). Taking advantage of the MiniMPL's portability, we demonstrate the feasibility of measuring the detailed horizontal structure of the mixing layer by automobile. We compare our observations to planetary boundary layer (PBL) heights from sonde launches, the North American Regional Reanalysis (NARR), and a custom Weather Research and Forecasting (WRF) model developed for greenhouse gas (GHG) monitoring in Los Angeles. NARR and WRF PBL heights at Pasadena are both systematically higher than measured, NARR by 2.5 times; these biases will cause proportional errors in GHG flux estimates using modeled transport. We discuss how sustained lidar observations can be used to reduce flux inversion error by selecting suitable analysis periods, calibrating models, or characterizing bias for correction in post-processing.
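For readers unfamiliar with mixing-height retrievals, a minimal sketch of one common gradient-style method (not necessarily the automated system used above) is given below: the layer top is taken as the height of the strongest negative vertical gradient of the backscatter profile; the profile here is synthetic.

```python
import numpy as np

def mixing_height(z, backscatter):
    """Estimate mixing-layer depth as the height of the strongest negative
    vertical gradient of an aerosol backscatter profile (gradient method)."""
    dbdz = np.gradient(np.asarray(backscatter, float), np.asarray(z, float))
    return z[np.argmin(dbdz)]

z = np.arange(100.0, 3000.0, 30.0)                  # range gates, m AGL (synthetic)
profile = 1.0 / (1.0 + np.exp((z - 770.0) / 60.0))  # idealized aerosol-laden layer
print(mixing_height(z, profile))                    # ~770 m for this profile
```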
Wave models for turbulent free shear flows
NASA Technical Reports Server (NTRS)
Liou, W. W.; Morris, P. J.
1991-01-01
New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time-dependent motion of the large scale structure of the mixing region are made. The predictions show good agreement with experimental observations.
Modeling reactive transport with particle tracking and kernel estimators
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-04-01
Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to actually model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect in most cases should be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
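The contrast between binned ("box-counting") and kernel concentration estimates is easy to reproduce; the sketch below uses SciPy's gaussian_kde on synthetic particle positions as a stand-in for the reactive-transport setting, and is not the authors' code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=500)  # particle positions after some transport time
x = np.linspace(-4.0, 4.0, 81)

# Binned estimate: noisy with few particles, mimicking apparent incomplete mixing.
hist, _ = np.histogram(particles, bins=40, range=(-4.0, 4.0), density=True)

# Kernel estimate: each particle's mass is spread over a region of influence,
# recovering a smooth, well-mixed concentration field from the same particles.
c_kde = gaussian_kde(particles)(x)
print(hist.max(), c_kde.max())  # the KDE peak is much closer to the true 1/sqrt(2*pi)
```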
ERIC Educational Resources Information Center
Gardner, Susan K.
2012-01-01
A mixed methods analysis of women faculty departure at one research institution was conducted using Hagedorn's model of faculty job satisfaction. Findings from an institution-wide survey and interviews with women faculty who had left the institution resulted in several themes: (a) a lack of resources to support faculty work, (b) a lack of…
The Promotion Strategy of Green Construction Materials: A Path Analysis Approach.
Huang, Chung-Fah; Chen, Jung-Lu
2015-10-14
As one of the major materials used in construction, cement can be very resource-consuming and polluting to produce and use. Compared with traditional cement processing, dry-mix mortar is more environmentally friendly, reducing waste production and carbon emissions. Despite the continuous development and promotion of green construction materials, only a few of them are accepted or widely used in the market. In addition, the majority of existing research on green construction materials focuses more on their physical or chemical characteristics than on their promotion. Without effective promotion, their benefits cannot be fully appreciated and realized. Therefore, this study was conducted to explore the promotion of dry-mix mortars, one of these green materials. This study uses both qualitative and quantitative methods. First, through a case study, the potential for reducing carbon emissions is verified. Then a path analysis is conducted to verify the validity and predictability of the samples based on the technology acceptance model (TAM). According to the findings of this research, to ensure better promotion results and wider application of dry-mix mortar, it is suggested that more systematic efforts be invested in promoting the usefulness and benefits of dry-mix mortar. The model developed in this study can provide helpful references for future research and the promotion of other green materials.
NASA Astrophysics Data System (ADS)
Garambois, Pierre; Besset, Sebastien; Jézéquel, Louis
2015-07-01
This paper presents a methodology for the multi-objective (MO) shape optimization of plate structures under stress criteria, based on a mixed Finite Element Model (FEM) enhanced with a sub-structuring method. The optimization is performed with a classical Genetic Algorithm (GA) based on Pareto-optimal solutions and considers thickness distribution parameters and antagonistic objectives, among them stress criteria. We implement a displacement-stress Dynamic Mixed FEM (DM-FEM) for the vibration analysis of plate structures. Such a model gives privileged access to the stress within the plate structure compared to a primal classical FEM, and features a linear dependence on the thickness parameters. A sub-structuring reduction method is also implemented in order to reduce the size of the mixed FEM and split the given structure into smaller ones with their own thickness parameters. Combined, these methods enable a fast and stress-wise efficient structural analysis, and improve the performance of the repetitive GA. A few cases of minimizing the mass and the maximum Von Mises stress within a plate structure under dynamic load demonstrate the relevance of our method, with promising results: it is able to satisfy multiple damage criteria with different thickness distributions, while using a smaller FEM.
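For readers unfamiliar with Pareto-optimal selection inside a GA, a minimal sketch of the non-dominated filtering step is shown below over hypothetical (mass, maximum Von Mises stress) pairs; the values are invented and this is not the authors' optimizer.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points when minimizing all columns."""
    pts = np.asarray(objectives, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some point is <= in every objective and < in at least one
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidate designs: (mass in kg, max Von Mises stress in MPa)
designs = [(12.0, 310.0), (14.5, 250.0), (13.0, 260.0), (16.0, 240.0), (13.5, 330.0)]
print(pareto_front(designs))  # indices of Pareto-optimal thickness distributions
```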
NASA Astrophysics Data System (ADS)
Nicholls, Stephen D.; Decker, Steven G.; Tao, Wei-Kuo; Lang, Stephen E.; Shi, Jainn J.; Mohr, Karen I.
2017-03-01
This study evaluated the impact of five single- or double-moment bulk microphysics schemes (BMPSs) on Weather Research and Forecasting model (WRF) simulations of seven intense wintertime cyclones impacting the mid-Atlantic United States; 5-day long WRF simulations were initialized roughly 24 h prior to the onset of coastal cyclogenesis off the North Carolina coastline. In all, 35 model simulations (five BMPSs and seven cases) were run and their associated microphysics-related storm properties (hydrometeor mixing ratios, precipitation, and radar reflectivity) were evaluated against model analysis and available gridded radar and ground-based precipitation products. Inter-BMPS comparisons of column-integrated mixing ratios and mixing ratio profiles reveal little variability in non-frozen hydrometeor species due to their shared programming heritage, yet their assumptions concerning snow and graupel intercepts, ice supersaturation, snow and graupel density maps, and terminal velocities led to considerable variability in both simulated frozen hydrometeor species and radar reflectivity. WRF-simulated precipitation fields exhibit minor spatiotemporal variability amongst BMPSs, yet their spatial extent is largely conserved. Compared to ground-based precipitation data, WRF simulations demonstrate low-to-moderate (0.217-0.414) threat scores and a rainfall distribution shifted toward higher values. Finally, an analysis of WRF and gridded radar reflectivity data via contoured frequency with altitude diagrams (CFADs) reveals notable variability amongst BMPSs, where better performing schemes favored lower graupel mixing ratios and better underlying aggregation assumptions.
ERIC Educational Resources Information Center
Kwok, Oi-man; West, Stephen G.; Green, Samuel B.
2007-01-01
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general misspecification of the Σ matrix usually resulted in overestimation of the variances of the random…
One hundred years of Arctic ice cover variations as simulated by a one-dimensional, ice-ocean model
NASA Astrophysics Data System (ADS)
Hakkinen, S.; Mellor, G. L.
1990-09-01
A one-dimensional ice-ocean model consisting of a second moment, turbulent closure, mixed layer model and a three-layer snow-ice model has been applied to the simulation of Arctic ice mass and mixed layer properties. The results for the climatological seasonal cycle are discussed first and include the salt and heat balance in the upper ocean. The coupled model is then applied to the period 1880-1985, using the surface air temperature fluctuations from Hansen et al. (1983) and from Wigley et al. (1981). The analysis of the simulated large variations of the Arctic ice mass during this period (with similar changes in the mixed layer salinity) shows that the variability in the summer melt determines to a high degree the variability in the average ice thickness. The annual oceanic heat flux from the deep ocean and the maximum freezing rate and associated nearly constant minimum surface salinity flux did not vary significantly interannually. This also implies that the oceanic influence on the Arctic ice mass is minimal for the range of atmospheric variability tested.
Bayesian analysis of volcanic eruptions
NASA Astrophysics Data System (ADS)
Ho, Chih-Hsiang
1990-10-01
The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and the simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
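The gamma-Poisson mixture and its negative binomial form are straightforward to verify numerically; in the sketch below the gamma shape/rate and the window length are arbitrary illustrative values, not parameters fitted to Mauna Loa or Etna.

```python
import numpy as np
from scipy.stats import gamma, nbinom, poisson

alpha, beta, t = 2.0, 5.0, 10.0  # gamma shape/rate prior on the eruption rate; window length

# Monte Carlo: draw a rate for each replicate volcano, then a Poisson count.
rng = np.random.default_rng(1)
lam = gamma.rvs(alpha, scale=1.0 / beta, size=200_000, random_state=rng)
counts = poisson.rvs(lam * t, random_state=rng)

# Closed form: the gamma-mixed Poisson is negative binomial with
# n = alpha and success probability p = beta / (beta + t).
k = np.arange(8)
empirical = np.bincount(counts, minlength=8)[:8] / counts.size
print(np.round(empirical, 4))
print(np.round(nbinom.pmf(k, alpha, beta / (beta + t)), 4))  # matches the simulation
```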
Water sources and mixing in riparian wetlands revealed by tracers and geospatial analysis.
Lessels, Jason S; Tetzlaff, Doerthe; Birkel, Christian; Dick, Jonathan; Soulsby, Chris
2016-01-01
Mixing of waters within riparian zones has been identified as an important influence on runoff generation and water quality. Improved understanding of the controls on the spatial and temporal variability of water sources and how they mix in riparian zones is therefore of both fundamental and applied interest. In this study, we have combined topographic indices derived from a high-resolution Digital Elevation Model (DEM) with repeated, spatially high-resolution synoptic sampling of multiple tracers to investigate such dynamics of source water mixing. We use geostatistics to estimate concentrations of three different tracers (deuterium, alkalinity, and dissolved organic carbon) across an extended riparian zone in a headwater catchment in NE Scotland, to identify spatial and temporal influences on mixing of source waters. The various biogeochemical tracers and stable isotopes helped constrain the sources of runoff and their temporal dynamics. Results show that spatial variability in all three tracers was evident in all sampling campaigns, but more pronounced in warmer, drier periods. The extent of mixing areas within the riparian area reflected strong hydroclimatic controls and showed large degrees of expansion and contraction that were not strongly related to topographic indices. The integrated approach of using multiple tracers, geospatial statistics, and topographic analysis allowed us to classify three main riparian source areas and mixing zones. This study underlines the importance of riparian zones for mixing soil water and groundwater and introduces a novel approach by which this mixing can be quantified and its effect on downstream chemistry assessed.
Effects of preheat and mix on the fuel adiabat of an imploding capsule
Cheng, B.; Kwan, T. J. T.; Wang, Y. M.; ...
2016-12-01
We demonstrate the effect of preheat, hydrodynamic mix and vorticity on the adiabat of the deuterium-tritium (DT) fuel in fusion capsule experiments. We show that the adiabat of the DT fuel increases as a result of hydrodynamic mixing, due to the entropy of mixing. An upper limit of mix, M_clean/M_DT ≥ 0.98, is found necessary to keep the DT fuel on a low adiabat. We demonstrate in this study that the use of a high adiabat for the DT fuel in theoretical analysis, with the aid of 1D code simulations, could explain some aspects of 3D effects and mix in capsule implosion. Furthermore, we can infer from our physics model and the observed neutron images the adiabat of the DT fuel in the capsule and the amount of mix produced in the hot spot.
NASA Astrophysics Data System (ADS)
Guerin, Marianne
2001-10-01
An analysis of tritium and 36Cl data collected at Yucca Mountain, Nevada suggests that fracture flow may occur at high velocities through the thick unsaturated zone. The mechanisms and extent of this "fast flow" in fractures at Yucca Mountain are investigated with data analysis, mixing models and several one-dimensional modeling scenarios. The model results and data analysis provide evidence substantiating the weeps model [Gauthier, J.H., Wilson, M.L., Lauffer, F.C., 1992. Proceedings of the Third Annual International High-level Radioactive Waste Management Conference, vol. 1, Las Vegas, NV. American Nuclear Society, La Grange Park, IL, pp. 891-989] and suggest that fast flow in fractures with minimal fracture-matrix interaction may comprise a substantial proportion of the total infiltration through Yucca Mountain. Mixing calculations suggest that bomb-pulse tritium measurements, in general, represent the tail end of travel times for thermonuclear-test-era (bomb-pulse) infiltration. The data analysis shows that bomb-pulse tritium and 36Cl measurements are correlated with discrete features such as horizontal fractures and areas where lateral flow may occur. The results presented here imply that fast flow in fractures may be ubiquitous at Yucca Mountain, occurring when transient infiltration (storms) generates flow in the connected fracture network.
Analysis of the stochastic excitability in the flow chemical reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bashkirtseva, Irina
2015-11-30
A dynamic model of the thermochemical process in a flow reactor is considered. We study the influence of random disturbances on the stationary regime of this model. A phenomenon of noise-induced excitability is demonstrated. For the analysis of this phenomenon, a constructive technique based on stochastic sensitivity functions and confidence domains is applied. It is shown how the elaborated technique can be used for the probabilistic analysis of the generation of mixed-mode stochastic oscillations in the flow chemical reactor.
2014-07-01
powder X-ray diffraction (PXRD), thermogravimetric analysis (TGA), and Fourier transform infrared (FTIR) spectroscopy. SUBJECT TERMS: Metal organic framework...the inclusion by using a variety of analytical techniques, such as powder X-ray diffraction (PXRD), thermogravimetric analysis (TGA), Fourier...Characterizations: Analysis of the MOF and the complexes with the MOF and the guest molecules was performed using an Agilent GC-MS (Model 6890N GC and Model 5973N
Application of mixsep software package: Performance verification of male-mixed DNA analysis
HU, NA; CONG, BIN; GAO, TAO; CHEN, YU; SHEN, JUNYI; LI, SHUJIN; MA, CHUNLING
2015-01-01
An experimental model of male-mixed DNA (n=297) was constructed according to the mixed DNA construction principle. This comprised the use of the Applied Biosystems (ABI) 7500 quantitative polymerase chain reaction system, with scientific validation of mixture proportion (Mx; root-mean-square error ≤0.02). Statistical analysis was performed on locus separation accuracy using mixsep, a DNA mixture separation R-package, and the analytical performance of mixsep was assessed by examining the data distribution pattern of different mixed gradients, short tandem repeat (STR) loci and mixed DNA types. The results showed that locus separation accuracy had a negative linear correlation with the mixed gradient (R2=−0.7121). With increasing mixed gradient imbalance, locus separation accuracy first increased and then decreased, with the highest value detected at a gradient of 1:3 (≥90%). The mixed gradient, which is the theoretical Mx, was one of the primary factors that influenced the success of mixed DNA analysis. Among the 16 STR loci detected by Identifiler®, the separation accuracy was relatively high (>88%) for loci D5S818, D8S1179 and FGA, whereas the median separation accuracy value was lowest for the D7S820 locus. STR loci with relatively large numbers of allelic drop-out (ADO; >15) were all located in the yellow and red channels, including loci D18S51, D19S433, FGA, TPOX and vWA. These five loci featured low allele peak heights, which was consistent with the low sensitivity of the ABI 3130xl Genetic Analyzer to yellow and red fluorescence. The locus separation accuracy of the mixsep package was substantially different with and without the inclusion of ADO loci; inclusion of ADO significantly reduced the analytical performance of the mixsep package, which was consistent with the lack of an ADO functional module in this software. The present study demonstrated that the mixsep software had a number of advantages and was recommended for analysis of mixed DNA. This software was easy to operate and produced understandable results with a degree of controllability. PMID:25936428
Irregular-regular-irregular mixed mode oscillations in a glow discharge plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Sabuj, E-mail: sabuj.ghosh@saha.ac.in; Shaw, Pankaj Kumar, E-mail: pankaj.shaw@saha.ac.in; Saha, Debajyoti, E-mail: debajyoti.saha@saha.ac.in
2015-05-15
Floating potential fluctuations of a glow discharge plasma are found to exhibit different kinds of mixed mode oscillations (MMOs). Power spectrum analysis reveals that with a change in the nature of the MMO, there occurs a transfer of power between the different harmonics and subharmonics. The variation in the chaoticity of the different types of MMO was observed through the study of Lyapunov exponents. Estimates of the correlation dimension and the Hurst exponent suggest that these MMOs are of a low-dimensional nature with an anti-persistent character. Numerical modeling also reflects the experimentally found transitions between the different MMOs.
Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models
NASA Astrophysics Data System (ADS)
Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph; Johnson, Jennifer A.
2017-02-01
Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]-[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]-[Fe/H] unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]-[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.
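Principal component abundance analysis of this kind can be sketched with a generic PCA on a mock [X/Fe] table; the element list, loadings and scatter below are synthetic stand-ins rather than flexCE output or survey data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_stars = 1000
elements = ["O", "Mg", "Si", "Ca", "Na", "Ni"]

# Mock [X/Fe] table: a shared alpha-like component plus element-specific scatter.
alpha = rng.normal(0.0, 0.15, size=(n_stars, 1))
loadings = np.array([1.0, 0.9, 0.8, 0.7, 0.1, 0.2])  # alpha elements load strongly
xfe = alpha * loadings + rng.normal(0.0, 0.03, size=(n_stars, len(elements)))

pca = PCA(n_components=2).fit(xfe)
print(np.round(pca.explained_variance_ratio_, 3))  # variance share per component
print(np.round(pca.components_[0], 2))             # PC1: alpha elements dominate here
```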
NASA Technical Reports Server (NTRS)
Downes, Stephanie M.; Farneti, Riccardo; Uotila, Petteri; Griffies, Stephen M.; Marsland, Simon J.; Bailey, David; Behrens, Erik; Bentsen, Mats; Bi, Daohua; Biastoch, Arne;
2015-01-01
We characterise the representation of the Southern Ocean water mass structure and sea ice within a suite of 15 global ocean-ice models run with the Coordinated Ocean-ice Reference Experiment Phase II (CORE-II) protocol. The main focus is the representation of the present (1988-2007) mode and intermediate waters, thus framing an analysis of winter and summer mixed layer depths; temperature, salinity, and potential vorticity structure; and temporal variability of sea ice distributions. We also consider the interannual variability over the same 20 year period. Comparisons are made between models as well as to observation-based analyses where available. The CORE-II models exhibit several biases relative to Southern Ocean observations, including an underestimation of the model mean mixed layer depths of mode and intermediate water masses in March (associated with greater ocean surface heat gain), and an overestimation in September (associated with greater high latitude ocean heat loss and a more northward winter sea-ice extent). In addition, the models have cold and fresh/warm and salty water column biases centred near 50 deg S. Over the 1988-2007 period, the CORE-II models consistently simulate spatially variable trends in sea-ice concentration, surface freshwater fluxes, mixed layer depths, and 200-700 m ocean heat content. In particular, sea-ice coverage around most of the Antarctic continental shelf is reduced, leading to a cooling and freshening of the near surface waters. The shoaling of the mixed layer is associated with increased surface buoyancy gain, except in the Pacific where sea ice is also influential. The models are in disagreement, despite the common CORE-II atmospheric state, in their spatial pattern of the 20-year trends in the mixed layer depth and sea-ice.
Dynamic Behavior of Wind Turbine by a Mixed Flexible-Rigid Multi-Body Model
NASA Astrophysics Data System (ADS)
Wang, Jianhong; Qin, Datong; Ding, Yi
A mixed flexible-rigid multi-body model is presented to study the dynamic behavior of a horizontal axis wind turbine. Special attention is given to the flexible bodies: the flexible rotor is modeled by a newly developed blade finite element, while support bearing elasticities, variations in the number of teeth in contact, and contact tooth elasticities are the main flexible components in the power train. The coupling conditions between different subsystems are established by constraint equations. The wind turbine model is generated by coupling the models of the rotor, power train and generator with constraint equations. Based on this model, an eigenproblem analysis is carried out to show the mode shapes of the rotor and power train at a few natural frequencies. The dynamic responses and contact forces among gears under constant wind speed and fixed pitch angle are analyzed.
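An eigenproblem analysis of this kind amounts to solving the generalized problem K x = ω² M x; a toy three-degree-of-freedom sketch is given below, with made-up mass and stiffness values standing in for the assembled turbine matrices.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF stand-in for an assembled rotor/power-train model: mode shapes
# and natural frequencies come from K @ x = w^2 * M @ x.
M = np.diag([2.0, 1.0, 1.0])              # mass matrix (illustrative values)
K = np.array([[ 400.0, -200.0,    0.0],
              [-200.0,  500.0, -300.0],
              [   0.0, -300.0,  300.0]])  # stiffness matrix (illustrative values)

w2, modes = eigh(K, M)                    # eigenvalues are squared angular frequencies
print(np.sqrt(w2) / (2.0 * np.pi))        # natural frequencies in Hz
print(modes)                              # columns are the corresponding mode shapes
```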
Residual estuarine circulation in the Mandovi, a monsoonal estuary: A three-dimensional model study
NASA Astrophysics Data System (ADS)
Vijith, V.; Shetye, S. R.; Baetens, K.; Luyten, P.; Michael, G. S.
2016-05-01
Observations in the Mandovi estuary, located on the central west coast of India, have shown that the salinity field in this estuary is remarkably time-dependent and passes through all possible states of stratification (riverine, highly-stratified, partially-mixed and well-mixed) during a year as the runoff into the estuary varies from high values (∼1000 m³ s⁻¹) in the wet season to negligible values (∼1 m³ s⁻¹) at the end of the dry season. The time-dependence is forced by the Indian Summer Monsoon (ISM) and hence the estuary is referred to as a monsoonal estuary. In this paper, we use a three-dimensional, open source, hydrodynamic, numerical model to reproduce the observed annual salinity field in the Mandovi. We then analyse the model results to define characteristics of residual estuarine circulation in the Mandovi. Our motivation to study this aspect of the Mandovi's dynamics derives from the following three considerations. First, residual circulation is important to the long-term evolution of an estuary; second, we need to understand how this circulation responds to the strongly time-dependent runoff forcing experienced by a monsoonal estuary; and third, the Mandovi is among the best studied estuaries that come under the influence of the ISM, and has observations that can be used to validate the model. Our analysis shows that the residual estuarine circulation in the Mandovi exhibits four distinct phases during a year: a river-like flow that is oriented downstream throughout the estuary; a salt-wedge type circulation, with flow into the estuary near the bottom and out of the estuary near the surface, restricted close to the mouth of the estuary; the circulation associated with a partially-mixed estuary; and the circulation associated with a well-mixed estuary. Dimensional analysis of the field of residual circulation helped us establish the link between the strength of residual circulation at a location and the magnitude of river runoff and rate of mixing at that location. We then derive an analytical expression that approximates the exchange velocity (bottom velocity minus near-surface freshwater velocity at a location) as a function of freshwater velocity and rate of mixing.
Hurtado, F J; Kaiser, A S; Zamora, B
2015-03-15
Continuous stirred tank reactors (CSTR) are widely used in wastewater treatment plants to reduce the organic matter and microorganisms present in sludge by anaerobic digestion. The present study carries out a numerical analysis of the fluid dynamic behaviour of a CSTR in order to optimize the process energetically. The characterization of the sludge flow inside the digester tank, the residence time distribution, and the active volume of the reactor under different criteria are determined. The effects of the design and power of the mixing system on the active volume of the CSTR are analyzed. The numerical model is solved under non-steady conditions by examining the evolution of the flow during the stop and restart of the mixing system. An intermittent regime of the mixing system, which kept the active volume between 94% and 99%, is achieved. The results obtained can lead to the eventual energy optimization of the mixing system of the CSTR. Copyright © 2014 Elsevier Ltd. All rights reserved.
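Residence time distributions of the kind characterized here follow from a pulse tracer curve as E(t) = C(t)/∫C dt, with mean residence time t_m = ∫t E(t) dt; the sketch below applies this to a synthetic outlet curve, not to the paper's CFD results.

```python
import numpy as np
from scipy.integrate import trapezoid

# Residence time distribution (RTD) from a pulse tracer test.
t = np.linspace(0.0, 50.0, 501)   # time, h (synthetic grid)
c = t * np.exp(-t / 8.0)          # made-up outlet tracer concentration curve

e = c / trapezoid(c, t)           # normalized RTD: E(t) = C(t) / integral(C dt)
tm = trapezoid(t * e, t)          # mean residence time: integral(t * E dt)
print(round(float(tm), 1))        # ~16 h for this synthetic curve
```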
Flowers, Tracey C.; Hunt, James R.
2010-01-01
The transport of fluids miscible with water arises in groundwater contamination and during remediation of the subsurface environment. For concentrated salt solutions, i.e., brines, the increased density and viscosity determine mixing processes between these fluids and ambient groundwater. Under downward flow conditions, gravitational and viscous forces work against each other to determine the interfacial mixing processes. Historically, mixing has been modeled as a dispersive process, as viscous fingering, and as a combination of both using approaches that were both analytical and numerical. A compilation of previously reported experimental data on vertical miscible displacements by fluids with significant density and viscosity contrasts reveals some agreement with a stability analysis presented by Hill (1952). Additional experimental data on one-dimensional dispersion during downward displacement of concentrated salt solutions by freshwater and freshwater displacement by brines support the stability analysis and provides an empirical representation for dispersion coefficients as functions of a gravity number and a mobility ratio. PMID:20300476
Wu, Xiaoping; Guldbrandtsen, Bernt; Lund, Mogens Sandø; Sahana, Goutam
2016-09-01
Identification of genetic variants associated with feet and legs disorders (FLD) will aid in the genetic improvement of these traits by providing knowledge on genes that influence trait variations. In Denmark, FLD in cattle has been recorded since the 1990s. In this report, we used deregressed breeding values as response variables for a genome-wide association study. Bulls (5,334 Danish Holstein, 4,237 Nordic Red Dairy Cattle, and 1,180 Danish Jersey) with deregressed estimated breeding values were genotyped with the Illumina Bovine 54k single nucleotide polymorphism (SNP) genotyping array. Genotypes were imputed to whole-genome sequence variants, and then 22,751,039 SNP on 29 autosomes were used for an association analysis. A modified linear mixed-model approach (efficient mixed-model association eXpedited, EMMAX) and a linear mixed model were used for association analysis. We identified 5 (3,854 SNP), 3 (13,642 SNP), and 0 quantitative trait locus (QTL) regions associated with the FLD index in Danish Holstein, Nordic Red Dairy Cattle, and Danish Jersey populations, respectively. We did not identify any QTL that were common among the 3 breeds. In a meta-analysis of the 3 breeds, 4 QTL regions were significant, but no additional QTL region was identified compared with within-breed analyses. Comparison between top SNP locations within these QTL regions and known genes suggested that RASGRP1, LCORL, MOS, and MITF may be candidate genes for FLD in dairy cattle. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
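As a highly simplified stand-in for the association step above, the sketch below runs a per-SNP regression of a synthetic (already deregressed and decorrelated) phenotype on allele dosage; the EMMAX-style kinship correction is deliberately omitted, and all data are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, m = 2000, 500                                    # animals, SNPs (all synthetic)
geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)
y = 0.25 * geno[:, 42] + rng.normal(0.0, 1.0, n)    # one causal SNP at index 42

# Per-SNP simple regression: t-test on the slope of phenotype vs. allele dosage.
x = geno - geno.mean(axis=0)
yc = y - y.mean()
sxx = (x ** 2).sum(axis=0)
beta = x.T @ yc / sxx
resid = yc[:, None] - x * beta
se = np.sqrt((resid ** 2).sum(axis=0) / (n - 2) / sxx)
pvals = 2.0 * stats.t.sf(np.abs(beta / se), df=n - 2)
print(int(pvals.argmin()), float(pvals.min()))      # should flag SNP 42
```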
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.
1982-01-01
A model for predicting the distribution of liquid fuel droplets and fuel vapor in premixing-prevaporizing fuel-air mixing passages of the direct injection type is reported. This model consists of three computer programs: a calculation of the two-dimensional or axisymmetric air flow field, neglecting the effects of fuel; a calculation of the three-dimensional fuel droplet trajectories and evaporation rates in a known, moving air flow; and a calculation of fuel vapor diffusing into a moving three-dimensional air flow, with source terms dependent on the droplet evaporation rates. The fuel droplets are treated as individual particle classes, each satisfying Newton's law, a heat transfer equation, and a mass transfer equation. This fuel droplet model treats multicomponent fuels and incorporates the physics required for the treatment of elastic droplet collisions, droplet shattering, droplet coalescence and droplet-wall interactions. The vapor diffusion calculation treats three-dimensional, gas phase, turbulent diffusion processes. The analysis includes a model for the autoignition of the fuel-air mixture based upon the rate of formation of an important intermediate chemical species during the preignition period.
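The droplet-class equations of motion reduce to Newton's law with a drag relaxation toward the local air velocity; the sketch below integrates one droplet in a made-up two-dimensional flow, ignoring evaporation, collisions and wall interactions, and is not the reported program.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One droplet class obeying Newton's law with Stokes-like drag toward the
# local air velocity; the flow field and response time are placeholders.
def air_velocity(x):
    return np.array([10.0, 2.0 * np.sin(0.5 * x[0])])  # synthetic 2-D duct flow, m/s

def rhs(t, s, tau=0.05):                               # tau: droplet response time, s
    x, v = s[:2], s[2:]
    return np.concatenate([v, (air_velocity(x) - v) / tau])

# State: [x, y, vx, vy]; the droplet is injected slower than the surrounding air.
sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0, 5.0, 0.0], max_step=1e-3)
print(sol.y[:2, -1])                                   # droplet position at t = 0.5 s
```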
A Monte-Carlo Analysis of Organic Aerosol Volatility with Aerosol Microphysics
NASA Astrophysics Data System (ADS)
Gao, C. Y.; Tsigaridis, K.; Bauer, S. E.
2016-12-01
A newly developed box model scheme, MATRIX-VBS, includes the volatility basis set (VBS) framework in the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations and aerosol mixing state. The new scheme advances the representation of organic aerosols in Earth system models by improving on the traditional, simplistic treatment of organic aerosols as non-volatile and with a fixed size distribution. Further development includes adding the condensation of organics on coarse mode aerosols (dust and sea salt), thus making all organics in the system semi-volatile. To test and simplify the model, a Monte-Carlo analysis is performed to pinpoint which processes affect organics the most and under which chemical and meteorological conditions. Since the model's parameterizations can capture a very wide range of conditions, from very clean to very polluted, and a wide range of meteorological conditions, all possible scenarios on Earth across the whole parameter space, including temperature, location, emissions and oxidant levels, are examined. The Monte-Carlo simulations provide quantitative information on the sensitivity of the newly developed model and help us understand how organics affect the size distribution, mixing state and volatility distribution under varying meteorological conditions and pollution levels. In addition, these simulations indicate which parameters play a critical role in the aerosol distribution and evolution in the atmosphere and which do not, which will facilitate the simplification of the box model, an important step toward its implementation in the global model.
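A Monte-Carlo parameter scan of this general shape can be sketched generically: sample the input space, evaluate a response function, and inspect input-output co-variation. The toy function below stands in for one box-model evaluation and does not represent the MATRIX-VBS scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

def box_model(temperature, emission, oxidant):
    """Toy scalar 'organic aerosol mass' response (stand-in, not the real scheme)."""
    return emission * np.exp(-0.03 * (temperature - 273.0)) * (1.0 + 0.5 * oxidant)

# Sample wide ranges covering clean to polluted conditions.
n = 10_000
T = rng.uniform(240.0, 310.0, n)     # temperature, K
E = rng.uniform(0.1, 100.0, n)       # emission strength, arbitrary units
OX = rng.uniform(0.0, 1.0, n)        # normalized oxidant level
out = box_model(T, E, OX)

# Crude screening: which input does the output co-vary with most?
for name, p in [("temperature", T), ("emission", E), ("oxidant", OX)]:
    print(name, round(float(np.corrcoef(p, out)[0, 1]), 2))
```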
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. The performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
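A mixed kernel of this general kind is straightforward to use as a callable kernel in scikit-learn; the sketch below only fits an SVR surrogate on an Ishigami-like test function with an assumed 50/50 kernel weighting, leaving out the paper's coefficient-based Sobol post-processing.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def mixed_kernel(X, Y, w=0.5, degree=3, gamma=0.5):
    """Convex mix of a global polynomial kernel and a local Gaussian RBF kernel."""
    return w * polynomial_kernel(X, Y, degree=degree) + (1.0 - w) * rbf_kernel(X, Y, gamma=gamma)

rng = np.random.default_rng(5)
X = rng.uniform(-np.pi, np.pi, size=(300, 3))
y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2   # Ishigami-like test function

surrogate = SVR(kernel=mixed_kernel, C=10.0).fit(X, y)
print(round(surrogate.score(X, y), 3))             # surrogate fit quality (R^2)
```

Sobol indices could then be estimated by sampling the fitted surrogate, which is a generic stand-in for the coefficient-based derivation described above.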
Small signal analysis of four-wave mixing in InAs/GaAs quantum-dot semiconductor optical amplifiers
NASA Astrophysics Data System (ADS)
Ma, Shaozhen; Chen, Zhe; Dutta, Niloy K.
2009-02-01
A model to study four-wave mixing (FWM) wavelength conversion in InAs-GaAs quantum-dot semiconductor optical amplifiers (QD-SOAs) is proposed. Rate equations involving two QD states are solved to simulate the carrier density modulation in the system; results show that the existence of the QD excited state contributes to the ultrafast recovery time of the single-pulse response by serving as a carrier reservoir for the QD ground state, and its speed limitations are also studied. A nondegenerate four-wave mixing process with a small intensity-modulated probe signal injected is simulated using this model; a set of coupled wave equations describing the evolution of all frequency components in the active region of the QD-SOA is derived and solved numerically. Results show that better FWM conversion efficiency can be obtained compared with a regular bulk SOA, and the four-wave mixing bandwidth can exceed 1.5 THz when the detuning between the pump and probe lights is 0.5 nm.
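Two-state QD rate equations of this general shape can be integrated directly; every coefficient below is an illustrative placeholder rather than the paper's parameter set, and the optical field is reduced to a scalar pump term that depletes the ground state.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic two-level quantum-dot rate equations (excited state feeding the
# ground state); all coefficients are illustrative placeholders.
def rates(t, n, pump):
    n_e, n_g = n                                  # ES and GS occupation probabilities
    tau_rel, tau_sp, inj = 2e-12, 1e-9, 1.0e9     # relaxation, spontaneous, injection (1/s)
    relax = n_e * (1.0 - n_g) / tau_rel           # Pauli-blocked ES -> GS relaxation
    dn_e = inj * (1.0 - n_e) - relax - n_e / tau_sp
    dn_g = relax - n_g / tau_sp - pump(t) * n_g   # pump(t): stimulated GS depletion
    return [dn_e, dn_g]

pulse = lambda t: 5e10 * np.exp(-((t - 50e-12) / 5e-12) ** 2)  # ps-scale optical pulse
sol = solve_ivp(rates, (0.0, 200e-12), [0.5, 0.5], args=(pulse,), max_step=2e-13)
print(sol.y[1, -1])  # GS occupation recovers after the pulse via the ES reservoir
```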
Experiments in dilution jet mixing effects of multiple rows and non-circular orifices
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Srinivasan, R.; Coleman, E. B.; Meyers, G. D.; White, C. D.
1985-01-01
Experimental and empirical model results are presented that extend previous studies of the mixing of single-sided and opposed rows of jets in a confined duct flow to include effects of non-circular orifices and double rows of jets. Analysis of the mean temperature data obtained in this investigation showed that the effects of orifice shape and double rows are significant only in the region close to the injection plane, provided that the orifices are symmetric with respect to the main flow direction. The penetration and mixing of jets from 45-degree slanted slots is slightly less than that from equivalent-area symmetric orifices. The penetration from 2-dimensional slots is similar to that from equivalent-area closely-spaced rows of holes, but the mixing is slower for the 2-D slots. Calculated mean temperature profiles downstream of jets from non-circular and double rows of orifices, made using an extension developed for a previous empirical model, are shown to be in good agreement with the measured distributions.
NASA Astrophysics Data System (ADS)
Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.
2014-06-01
A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), the action of internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
Analysis of membrane fusion as a two-state sequential process: evaluation of the stalk model.
Weinreb, Gabriel; Lentz, Barry R
2007-06-01
We propose a model that accounts for the time courses of PEG-induced fusion of membrane vesicles of varying lipid compositions and sizes. The model assumes that fusion proceeds from an initial, aggregated vesicle state (A, membrane contact) through two sequential intermediate states (I1 and I2) and then on to a fusion pore state (FP). Using this model, we interpreted data on the fusion of seven different vesicle systems. We found that the initial aggregated state involved no lipid or content mixing but did produce leakage. The final state (FP) was not leaky. Lipid mixing normally dominated the first intermediate state (I1), but a content mixing signal was also observed in this state for most systems. The second intermediate state (I2) exhibited both lipid and content mixing signals and leakage, and was sometimes the only leaky state. In some systems, the first and second intermediates were indistinguishable and converted directly to the FP state. Having also tested a parallel, two-intermediate model subject to different assumptions about the nature of the intermediates, we conclude that a sequential, two-intermediate model is the simplest model sufficient to describe PEG-mediated fusion in all vesicle systems studied. We conclude as well that a fusion intermediate "state" should not be thought of as a fixed structure (e.g., "stalk" or "transmembrane contact") of uniform properties. Rather, a fusion "state" describes an ensemble of similar structures that can have different mechanical properties. Thus, a "state" can have varying probabilities of having a given functional property such as content mixing, lipid mixing, or leakage. Our data show that the content mixing signal may occur through two processes, one correlated and one not correlated with leakage. Finally, we consider the implications of our results in terms of the "modified stalk" hypothesis for the mechanism of lipid pore formation. We conclude that our results not only support this hypothesis but also provide a means of analyzing fusion time courses so as to test it and gauge the mechanism of action of fusion proteins in the context of the lipidic hypothesis of fusion.
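To make the sequential scheme concrete, here is a minimal sketch with made-up rate constants: the chain A -> I1 -> I2 -> FP treated as first-order kinetics and integrated with SciPy. The paper fits richer observables (lipid mixing, content mixing, leakage) on top of such state occupancies.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.8, 0.3, 0.1          # hypothetical rate constants (1/min)

def rhs(t, y):
    A, I1, I2, FP = y               # fractional occupancy of each state
    return [-k1 * A,
            k1 * A - k2 * I1,
            k2 * I1 - k3 * I2,
            k3 * I2]

sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0, 0.0, 0.0], dense_output=True)
for ti, (A, I1, I2, FP) in zip(np.linspace(0, 60, 7),
                               sol.sol(np.linspace(0, 60, 7)).T):
    print(f"t={ti:5.1f}  A={A:.2f}  I1={I1:.2f}  I2={I2:.2f}  FP={FP:.2f}")
```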
Basson, Jacob; Sung, Yun Ju; de Las Fuentes, Lisa; Schwander, Karen L; Vazquez, Ana; Rao, Dabeeru C
2016-01-01
Blood pressure (BP) has been shown to be substantially heritable, yet identified genetic variants explain only a small fraction of the heritability. Gene-smoking interactions have detected novel BP loci in cross-sectional family data. Longitudinal family data are available and have additional promise to identify BP loci. However, this type of data presents unique analysis challenges. Although several methods for analyzing longitudinal family data are available, which method is the most appropriate and under what conditions has not been fully studied. Using data from three clinic visits from the Framingham Heart Study, we performed association analysis accounting for gene-smoking interactions in BP at 31,203 markers on chromosome 22. We evaluated three different modeling frameworks: generalized estimating equations (GEE), hierarchical linear modeling, and pedigree-based mixed modeling. The three models performed somewhat comparably, with multiple overlaps in the most strongly associated loci from each model. Loci with the greatest significance were more strongly supported in the longitudinal analyses than in any of the component single-visit analyses. The pedigree-based mixed model was more conservative, with less inflation in the variant main effect and greater deflation in the gene-smoking interactions. The GEE, but not the other two models, resulted in substantial inflation in the tail of the distribution when variants with minor allele frequency <1% were included in the analysis. The choice of analysis method should depend on the model and the structure and complexity of the familial and longitudinal data. © 2015 WILEY PERIODICALS, INC.
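For readers unfamiliar with how these frameworks differ in practice, here is a minimal sketch on simulated data (not the Framingham analysis): the same gene-smoking interaction fitted with GEE and with a subject-level random-intercept model (a simplified stand-in for the pedigree-based mixed model) via statsmodels. All variable names and effect sizes are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_visits = 200, 3
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_visits),
    "snp": np.repeat(rng.binomial(2, 0.3, n_subj), n_visits),   # allele count
    "smoke": rng.binomial(1, 0.25, n_subj * n_visits),          # time-varying
})
u = np.repeat(rng.normal(0, 5, n_subj), n_visits)               # subject effect
df["bp"] = (120 + 1.5 * df.snp + 3 * df.smoke
            + 0.8 * df.snp * df.smoke + u + rng.normal(0, 6, len(df)))

# Marginal model with exchangeable working correlation (GEE).
gee = smf.gee("bp ~ snp * smoke", groups="id", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()
# Subject-level random-intercept linear mixed model.
lmm = smf.mixedlm("bp ~ snp * smoke", data=df, groups="id").fit()
print(gee.params["snp:smoke"], lmm.params["snp:smoke"])
```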
Coker, Freya; Williams, Cylie M; Taylor, Nicholas F; Caspers, Kirsten; McAlinden, Fiona; Wilton, Anita; Shields, Nora; Haines, Terry P
2018-05-10
This protocol considers three allied health staffing models across public health subacute hospitals. This quasi-experimental mixed-methods study, including a qualitative process evaluation, aims to evaluate the impact of additional allied health services in subacute care, in rehabilitation and geriatric evaluation management settings, on patient, health service and societal outcomes. This health services research will analyse outcomes of patients exposed to different allied health models of care at three health services. Each health service will have a control ward (routine care) and an intervention ward (additional allied health). This project has two parts. Part 1: a whole-of-site data extraction for included wards. Outcome measures will include: length of stay, rate of readmissions, discharge destinations, community referrals, patient feedback and staff perspectives. Part 2: Functional Independence Measure scores will be collected every 2-3 days for the duration of 60 patient admissions. Data from part 1 will be analysed by linear regression analysis for continuous outcomes using patient-level data and logistic regression analysis for binary outcomes. Qualitative data will be analysed using a deductive thematic approach. For part 2, a linear mixed model analysis will be conducted using therapy service delivery and days since admission to subacute care as fixed factors in the model and individual participant as a random factor. Graphical analysis will be used to examine the growth curve of the model and transformations. The days-since-admission factor will be used to examine non-linear growth trajectories to determine whether they lead to better model fit. Findings will be disseminated through local reports and to the Department of Health and Human Services Victoria. Results will be presented at conferences and submitted to peer-reviewed journals. The Monash Health Human Research Ethics committee approved this multisite research (HREC/17/MonH/144 and HREC/17/MonH/547). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A Systems Approach to Planning a Tele-Education System.
ERIC Educational Resources Information Center
Vazquez-Abad, Jesus; Mitchell, P. David
1983-01-01
Presents a systems analysis for transforming an educational system operating under a conventional scheme into a tele-education system. Particular attention is paid to developing and analyzing a preferred media mix and to the use of models and simulations as part of conducting a systems analysis. (Author)
Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C
2017-01-01
Background: Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective: To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods: Urinary concentrations of 16 types of metals were examined and 'acceleration capacity' (AC) and 'deceleration capacity' (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results: Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with 'selected' metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, −0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion: Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
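A minimal sketch of the selection step on synthetic data (not the welders cohort, and with the mixed-effects refit approximated by OLS for brevity): LASSO with a BIC criterion screens correlated exposures, then the selected terms are refitted. The sample sizes mirror the abstract; everything else is invented.

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, p = 54, 16                          # 54 subjects, 16 urinary metals
X = rng.lognormal(size=(n, p))         # skewed exposure-like features
y = -0.6 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(0, 1, n)  # 2 metals matter

# Stage 1: LASSO path with the BIC used to pick the penalty.
lasso = LassoLarsIC(criterion="bic").fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print("selected exposure columns:", selected)

# Stage 2: unpenalized refit of the selected terms only.
refit = sm.OLS(y, sm.add_constant(X[:, selected])).fit()
print(refit.params)
```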
Starchenka, S; Bell, A J; Mwange, J; Skinner, M A; Heath, M D
2017-01-01
Subcutaneous allergen immunotherapy (SCIT) is a well-documented treatment for allergic disease which involves injections of native allergen or modified (allergoid) extracts. The use of allergoid vaccines is a growing sector of the allergy immunotherapy market, associated with shorter-course therapy. The aim of this study was the structural and immunological characterisation of group 1 (Lol p 1) IgG-binding epitopes within a complex mix grass allergoid formulation containing rye grass. HP-SEC was used to resolve a mix grass allergoid preparation of high molecular weight into several distinct fractions with defined molecular weight and elution profiles. Allergen verification of the HP-SEC allergoid fractions was confirmed by mass spectrometry analysis. IgE and IgG immunoreactivity of the allergoid preparations was explored and Lol p 1 specific IgG-binding epitopes mapped by SPOT synthesis technology (PepSpot™) with structural analysis based on a Lol p 1 homology model. Grass specific IgE reactivity of the mix grass modified extract (allergoid) was diminished in comparison with the mix grass native extract. A difference in IgG profiles was observed between an intact mix grass allergoid preparation and HP-SEC allergoid fractions, which indicated enhancement of accessible reactive IgG epitopes across size distribution profiles of the mix grass allergoid formulation. Detailed analysis of the epitope specificity showed retention of six Lol p 1 IgG-binding epitopes in the mix grass modified extract. The structural and immunological changes which take place following the grass allergen modification process was further unravelled revealing distinct IgG immunological profiles. All epitopes were mapped on the solvent exposed area of Lol p 1 homology model accessible for IgG binding. One of the epitopes was identified as an 'immunodominant' Lol p 1 IgG-binding epitope (62-IFKDGRGCGSCFEIK-76) and classified as a novel epitope. The results from this study support the concept that modification allows shorter-course therapy options as a result of providing an IgG epitope repertoire important for efficacy. Additionally, the work paves the way to help further develop methods for standardising allergoid platforms.
Tree mortality risk of oak due to gypsy moth
K.W. Gottschalk; J.J. Colbert; D.L. Feicht
1998-01-01
We present prediction models for estimating tree mortality resulting from gypsy moth, Lymantria dispar, defoliation in mixed oak, Quercus sp., forests. These models differ from previous work by including defoliation as a factor in the analysis. Defoliation intensity, initial tree crown condition (crown vigour), crown position, and...
MixSIAR: advanced stable isotope mixing models in R
Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...
Upper Ocean Response to Hurricanes Katrina and Rita (2005) from Multi-sensor Satellites
NASA Astrophysics Data System (ADS)
Gierach, M. M.; Bulusu, S.
2006-12-01
Analysis of satellite observations and model simulations of the mixed layer provided an opportunity to assess the biological and physical effects of hurricanes Katrina and Rita (2005) in the Gulf of Mexico. Oceanic cyclonic circulation was intensified by the hurricanes' wind field, maximizing upwelling, surface cooling, and deepening the mixed layer. Two areas of maximum surface chlorophyll-a concentration and sea surface cooling were detected with peak intensities ranging from 2-3 mg m⁻³ and 4-6°C, along the tracks of Katrina and Rita. The temperature of the mixed layer cooled approximately 2°C and the depth of the mixed layer deepened by approximately 33-52 m. The forced deepening of the mixed layer injected nutrients into the euphotic zone, generating phytoplankton blooms 3-5 days after the passage of Katrina and Rita (2005).
Time and frequency domain analysis of sampled data controllers via mixed operation equations
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1981-01-01
Specification of the mathematical equations required to define the dynamic response of a linear continuous plant, subject to sampled data control, is complicated by the fact that the digital components of the control system cannot be modeled via linear ordinary differential equations. This complication can be overcome by introducing two new mathematical operations, namely, the operations of zero-order hold and digital delay. It is shown that by direct utilization of these operations, a set of linear mixed operation equations can be written and used to define the dynamic response characteristics of the controlled system. It is also shown how these linear mixed operation equations lead, in an automatable manner, directly to a set of finite difference equations which are in a format compatible with follow-on time and frequency domain analysis methods.
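A minimal sketch of the two operations the paper introduces, using SciPy's standard tooling rather than the paper's mixed operation equations: a continuous plant is discretized under a zero-order hold, and a one-sample digital delay is inserted in the feedback path. The plant matrices, sample time, and gain are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous plant x' = Ax + Bu, y = Cx (an arbitrary damped oscillator).
A = np.array([[0.0, 1.0], [-4.0, -0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt = 0.05
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt, method="zoh")

x = np.array([[1.0], [0.0]])       # initial condition
u_prev = 0.0                       # digital delay: control lags one sample
for k in range(200):
    y = (Cd @ x).item()
    u = -0.5 * y                   # proportional digital control law
    x = Ad @ x + Bd * u_prev       # plant sees the delayed control sample
    u_prev = u
print(f"output after 10 s: {(Cd @ x).item():.4f}")
```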
Physics of rotation: problems and challenges
NASA Astrophysics Data System (ADS)
Maeder, Andre; Meynet, Georges
2015-01-01
We examine some debated points in current discussions about rotating stars: the shape, the gravity darkening, the critical velocities, the mass loss rates, the hydrodynamical instabilities, the internal mixing and N-enrichments. The study of rotational mixing requires high quality data and careful analysis. From recent studies where such conditions are fulfilled, rotational mixing is well confirmed. Magnetic coupling with stellar winds may produce an apparent contradiction, i.e. stars with a low rotation and a high N-enrichment. We point out that it rather confirms the large role of shears in differentially rotating stars for the transport processes. New models of interacting binaries also show how shears and mixing may be enhanced in close binaries which are either spun up or down by tidal interactions.
Wang, Yuanjia; Chen, Huaihou
2012-12-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10⁸ simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
A refined shear deformation theory for the analysis of laminated plates
NASA Technical Reports Server (NTRS)
Reddy, J. N.
1986-01-01
A refined, third-order plate theory that accounts for the transverse shear strains is presented, the Navier solutions are derived for certain simply supported cross-ply and antisymmetric angle-ply laminates, and finite-element models are developed for general laminates. The new theory does not require the shear correction factors of the first-order theory (i.e., the Reissner-Mindlin plate theory) because the transverse shear stresses are represented parabolically in the present theory. A mixed finite-element model that uses independent approximations of the generalized displacements and generalized moments, and a displacement model that uses only the generalized displacements as degrees of freedom are developed. The displacement model requires C¹-continuity of the transverse deflection across the inter-element boundaries, whereas the mixed model requires a C⁰ element. Also, the mixed model does not require continuous approximations (between elements) of the bending moments. Numerical results are presented to show the accuracy of the present theory in predicting the transverse stresses. Numerical results are also presented for the nonlinear bending of plates, and the results compare well with the experimental results available in the literature.
Rabouille, Sophie; Edwards, Christopher A; Zehr, Jonathan P
2007-10-01
A simple model was developed to examine the vertical distribution of Prochlorococcus and Synechococcus ecotypes in the water column, based on their adaptation to light intensity. Model simulations were compared with a 14-year time series of Prochlorococcus and Synechococcus cell abundances at Station ALOHA in the North Pacific Subtropical Gyre. Data were analysed to examine spatial and temporal patterns in abundances and their ranges of variability in the euphotic zone, the surface mixed layer and the layer in the euphotic zone but below the base of the mixed layer. Model simulations show that the apparent occupation of the whole euphotic zone by a genus can be the result of a co-occurrence of different ecotypes that segregate vertically. The segregation of ecotypes can result simply from differences in light response. A sensitivity analysis of the model, performed on the parameter alpha (initial slope of the light-response curve) and the DIN concentration in the upper water column, demonstrates that the model successfully reproduces the observed range of vertical distributions. Results support the idea that intermittent mixing events may have important ecological and geochemical impacts on the phytoplankton community at Station ALOHA.
Does the U.S. exercise contagion on Italy? A theoretical model and empirical evidence
NASA Astrophysics Data System (ADS)
Cerqueti, Roy; Fenga, Livio; Ventura, Marco
2018-06-01
This paper deals with the theme of contagion in financial markets. To this aim, we develop a model based on Mixed Poisson Processes to describe the abnormal returns of the financial markets of the two countries considered. In so doing, the article defines the theoretical conditions to be satisfied in order to state that one of them - the so-called leader - exercises contagion on the others - the followers. Specifically, we employ a probabilistic invariance result stating that a suitable transformation of a Mixed Poisson Process is still a Mixed Poisson Process. The theoretical claim is validated by implementing an extensive simulation analysis grounded on empirical data. The countries considered are the U.S. (as the leader) and Italy (as the follower), and the period under scrutiny is very large, ranging from 1970 to 2014.
A mixing-model approach to quantifying sources of organic matter to salt marsh sediments
NASA Astrophysics Data System (ADS)
Bowles, K. M.; Meile, C. D.
2010-12-01
Salt marshes are highly productive ecosystems, where autochthonous production controls an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include 1) autochthonous production of vascular plant matter, 2) import of allochthonous plant material, and 3) phytoplankton biomass. Quantifying the relative contribution of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also impacts the timing and magnitude of carbon export to the coastal ocean. A common approach to quantify organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data utilized in such mixing models, the uncertainties in endmember compositions and the temporal dynamics of non-conservative entities can each affect the results. Making use of a comprehensive data set that encompasses several endmember characteristics - including a yearlong degradation experiment - we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e., endmember characteristics such as δ13C of the organic carbon, C/N ratios or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations and accounting for the uncertainty in endmember characteristics. Finally, as biogeochemical processes can alter endmember characteristics over time, we investigate the effect of early diagenesis on the chosen parameters, an analysis that entails an assessment of the organic matter age distribution. Thus, estimates of the relative contributions of phytoplankton, C3 and C4 plants to bulk sediment organic matter depend not only on environmental characteristics that impact reactivity, but also on sediment mixing processes.
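A minimal sketch of the constrained linear least-squares formulation described above, with hypothetical endmember values: the sum-to-one constraint on source fractions is enforced through a heavily weighted augmentation row, and the fractions are bounded to [0, 1].

```python
import numpy as np
from scipy.optimize import lsq_linear

# Columns: phytoplankton, C3 plants, C4 plants (hypothetical endmembers).
E = np.array([[-21.0, -28.0, -13.0],   # delta13C (per mil)
              [  7.0,  20.0,  40.0]])  # C/N ratio
obs = np.array([-24.0, 16.0])          # measured bulk sediment values

w = 1e4                                # weight forcing fractions to sum to one
A = np.vstack([E, w * np.ones((1, 3))])
b = np.append(obs, w * 1.0)
res = lsq_linear(A, b, bounds=(0.0, 1.0))
print("estimated source fractions:", np.round(res.x, 3))
```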
A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications.
Austin, Peter C
2017-08-01
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log-log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata).
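A minimal sketch of the second family described above, under the simplifying assumptions of a single follow-up interval and no clustering: the piecewise exponential model then reduces to a Poisson GLM for the event indicator with log person-time as offset. Data are simulated; adding cluster-specific random effects would turn this into the GLMM the tutorial describes.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({"treated": rng.binomial(1, 0.5, n)})
rate = 0.10 * np.exp(-0.5 * df.treated)       # true hazard, log HR = -0.5
df["time"] = rng.exponential(1.0 / rate)
df["event"] = (df.time < 5.0).astype(int)     # administrative censoring at t = 5
df["exposure"] = df.time.clip(upper=5.0)      # person-time at risk

fit = smf.glm("event ~ treated", data=df, family=sm.families.Poisson(),
              offset=np.log(df.exposure)).fit()
print(fit.params)   # intercept ~ log baseline hazard; treated ~ log hazard ratio
```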
NASA Technical Reports Server (NTRS)
1975-01-01
The costs and benefits of the NASA Aircraft Fuel Conservation Technology Program are discussed. Consideration is given to a present worth analysis of the planned program expenditures, an examination of the fuel savings to be obtained by the year 2005 and the worth of this fuel savings relative to the investment required, a comparison of the program funding with that planned by other Federal agencies for energy conservation, an examination of the private industry aeronautical research and technology financial posture for the period FY 76 - FY 85, and an assessment of the potential impacts on air and noise pollution. To aid in this analysis, a computerized fleet mix forecasting model was developed. This model enables the estimation of fuel consumption and present worth of fuel expenditures for selected commercial aircraft fleet mix scenarios.
Isak, I; Patel, M; Riddell, M; West, M; Bowers, T; Wijeyekoon, S; Lloyd, J
2016-08-01
Fourier transform infrared (FTIR) spectroscopy was used in this study for the rapid quantification of polyhydroxyalkanoates (PHA) in mixed- and pure-culture bacterial biomass. Three different statistical analysis methods (regression, partial least squares (PLS) and nonlinear) were applied to the FTIR data and the results were plotted against the PHA values measured with the reference gas chromatography technique. All methods predicted PHA content in mixed-culture biomass with comparable efficiency, indicated by similar residual values. The PHA in these cultures ranged from low to medium concentration (0-44 wt% of dried biomass content). However, for the analysis of the combined mixed- and pure-culture biomass with PHA concentrations ranging from low to high (0-93% of dried biomass content), the PLS method was the most efficient. This paper reports, for the first time, the use of a single calibration model constructed with a combination of mixed and pure cultures covering a wide PHA range for predicting PHA content in biomass. Currently, no single universal method exists for processing FTIR data for PHA quantification. This study compares three different methods of analysing FTIR data for quantification of PHA in biomass. A new data-processing approach was proposed and the results were compared against existing literature methods. Most publications report PHA quantification of medium range in pure culture; in our study, however, both mixed- and pure-culture biomass containing a broader range of PHA were encompassed in the calibration curve. The resulting prediction model is useful for rapid quantification of a wider range of PHA content in biomass. © 2016 The Society for Applied Microbiology.
IsoWeb: A Bayesian Isotope Mixing Model for Diet Analysis of the Whole Food Web
Kadoya, Taku; Osada, Yutaka; Takimoto, Gaku
2012-01-01
Quantitative description of food webs provides fundamental information for the understanding of population, community, and ecosystem dynamics. Recently, stable isotope mixing models have been widely used to quantify dietary proportions of different food resources to a focal consumer. Here we propose a novel mixing model (IsoWeb) that estimates diet proportions of all consumers in a food web based on stable isotope information. IsoWeb requires a topological description of a food web, and stable isotope signatures of all consumers and resources in the web. A merit of IsoWeb is that it takes into account variation in trophic enrichment factors among different consumer-resource links. Sensitivity analysis using realistic hypothetical food webs suggests that IsoWeb is applicable to a wide variety of food webs differing in the number of species, connectance, sample size, and data variability. Sensitivity analysis based on real topological webs showed that IsoWeb can allow for a certain level of topological uncertainty in target food webs, including erroneously assuming false links, omission of existent links and species, and trophic aggregation into trophospecies. Moreover, using an illustrative application to a real food web, we demonstrated that IsoWeb can compare the plausibility of different candidate topologies for a focal web. These results suggest that IsoWeb provides a powerful tool to analyze food-web structure from stable isotope data. We provide R and BUGS codes to aid efficient applications of IsoWeb. PMID:22848427
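To make the underlying mixing algebra concrete (IsoWeb itself is Bayesian and fits the whole web jointly), here is a minimal sketch of a single consumer-resource mixing equation: with two tracers and three sources, the enrichment-corrected system plus the sum-to-one constraint is square and can be solved directly. All isotope values are hypothetical.

```python
import numpy as np

# delta15N (row 1) and delta13C (row 2) of three sources, already corrected
# for trophic enrichment (hypothetical values, per mil).
S = np.array([[  8.0,   4.0,  11.0],
              [-26.0, -20.0, -14.0]])
mixture = np.array([7.4, -20.9])      # consumer tissue signature

A = np.vstack([S, np.ones(3)])        # append the sum-to-one constraint row
b = np.append(mixture, 1.0)
p = np.linalg.solve(A, b)             # exact solve of the square system
print("diet proportions:", np.round(p, 3))
```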
Verheggen, Bram G; Westerhout, Kirsten Y; Schreder, Carl H; Augustin, Matthias
2015-01-01
Allergoids are chemically modified allergen extracts administered to reduce allergenicity and to maintain immunogenicity. Oralair® (the 5-grass tablet) is a sublingual native grass allergen tablet for pre- and co-seasonal treatment. Based on a literature review, meta-analysis, and cost-effectiveness analysis the relative effects and costs of the 5-grass tablet versus a mix of subcutaneous allergoid compounds for grass pollen allergic rhinoconjunctivitis were assessed. A Markov model with a time horizon of nine years was used to assess the costs and effects of three-year immunotherapy treatment. Relative efficacy expressed as standardized mean differences was estimated using an indirect comparison on symptom scores extracted from available clinical trials. The Rhinitis Symptom Utility Index (RSUI) was applied as a proxy to estimate utility values for symptom scores. Drug acquisition and other medical costs were derived from published sources as well as estimates for resource use, immunotherapy persistence, and occurrence of asthma. The analysis was executed from the German payer's perspective, which includes payments of the Statutory Health Insurance (SHI) and additional payments by insurants. Comprehensive deterministic and probabilistic sensitivity analyses and different scenarios were performed to test the uncertainty concerning the incremental model outcomes. The applied model predicted a cost-utility ratio of the 5-grass tablet versus a market mix of injectable allergoid products of € 12,593 per QALY in the base case analysis. Predicted incremental costs and QALYs were € 458 (95% confidence interval, CI: € 220; € 739) and 0.036 (95% CI: 0.002; 0.078), respectively. Compared to the allergoid mix the probability of the 5-grass tablet being the most cost-effective treatment option was predicted to be 76% at a willingness-to-pay threshold of € 20,000. The results were most sensitive to changes in efficacy estimates, duration of the pollen season, and immunotherapy persistence rates. This analysis suggests the sublingual native 5-grass tablet to be cost-effective relative to a mix of subcutaneous allergoid compounds. The robustness of these statements has been confirmed in extensive sensitivity and scenario analyses.
Two- and three-dimensional natural and mixed convection simulation using modular zonal models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurtz, E.; Nataf, J.M.; Winkelmann, F.
We demonstrate the use of the zonal model approach, which is a simplified method for calculating natural and mixed convection in rooms. Zonal models use a coarse grid and use balance equations, state equations, hydrostatic pressure drop equations and power-law equations of the form m = C(ΔP)^n. The advantages of the zonal approach and its modular implementation are discussed. The zonal model resolution of nonlinear equation systems is demonstrated for three cases: a 2-D room, a 3-D room and a pair of 3-D rooms separated by a partition with an opening. A sensitivity analysis with respect to physical parameters and grid coarseness is presented. Results are compared to computational fluid dynamics (CFD) calculations and experimental data.
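A worked example of the power-law flow equation quoted above, under illustrative values (the flow coefficient C, the exponent n, and the zone pressures are assumptions, with n = 0.5 the common orifice-like choice):

```python
# Zonal-model flow law m = C * |dP|**n between two adjacent zones,
# with a hydrostatic correction for the height of the interface.
RHO, G = 1.2, 9.81                 # air density (kg/m3), gravity (m/s2)
C, n = 0.8, 0.5                    # flow coefficient and exponent (assumed)

p1, p2 = 101325.0, 101323.5        # zone reference pressures (Pa)
dz = 1.5                           # height of the interface above zone 1 ref (m)
dP = (p1 - RHO * G * dz) - p2      # pressure difference at the interface
m = C * abs(dP) ** n               # mass flow magnitude (kg/s)
print(f"dP = {dP:.2f} Pa, m = {m:.3f} kg/s (sign of dP gives direction)")
```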
Modeling the outcomes of nursing home care.
Rohrer, J E; Hogan, A J
1987-01-01
In this exploratory analysis using data on 290 patients, we use regression analysis to model patient outcomes in two Veterans Administration nursing homes. We find resource use, as measured with minutes of nursing time, to be associated with outcomes when case mix is controlled. Our results suggest that, under case-based reimbursement systems, nursing homes could increase their revenues by withholding unskilled and psychosocial care and discouraging physicians' visits. Implications for nursing home policy are discussed.
On thermal conductivity of gas mixtures containing hydrogen
NASA Astrophysics Data System (ADS)
Zhukov, Victor P.; Pätz, Markus
2017-06-01
A brief review of formulas used for the thermal conductivity of gas mixtures in CFD simulations of rocket combustion chambers is carried out in the present work. In most cases, the transport properties of mixtures are calculated from the properties of the individual components using special mixing rules. The analysis of different mixing rules starts from basic equations and ends with very complex semi-empirical expressions. The formulas for the thermal conductivity are taken for the analysis from works on the modelling of rocket combustion chambers. H2-O2 mixtures are chosen for the evaluation of the accuracy of the considered mixing rules. The analysis shows that two of them, of Mathur et al. (Mol Phys 12(6):569-579, 1967),
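As an illustration of how simple such mixing rules can be, here is a sketch of the combination rule commonly attributed to Mathur et al., k_mix = 0.5*(Σ xᵢkᵢ + (Σ xᵢ/kᵢ)⁻¹), evaluated for an H2-O2 mixture; the attribution and the conductivity values (rough room-temperature numbers) are assumptions used only for illustration.

```python
def k_mix(x, k):
    """Combination mixing rule: average of the mole-weighted and series means."""
    lin = sum(xi * ki for xi, ki in zip(x, k))        # mole-weighted mean
    rec = 1.0 / sum(xi / ki for xi, ki in zip(x, k))  # reciprocal (series) mean
    return 0.5 * (lin + rec)

x = [0.7, 0.3]          # mole fractions of H2 and O2
k = [0.18, 0.026]       # approximate conductivities near 300 K (W/m/K)
print(f"k_mix = {k_mix(x, k):.4f} W/m/K")
```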
Zhou, Lan; Yang, Jin-Bo; Liu, Dan; Liu, Zhan; Chen, Ying; Gao, Bo
2008-06-01
To analyze the possible damage to the remaining tooth and composite restorations when various mixing ratios of base cements are used. The elastic modulus and Poisson's ratio of the glass-ionomer Vitrebond and the self-cured calcium hydroxide Dycal were tested at mixing ratios of 1:1, 3:4 and 4:3. Micro-CT was used to scan the first mandibular molar, and a three-dimensional finite element model of the first permanent mandibular molar with a class I cavity was established. The stress in the tooth structure, composite and base cement under physical load was analyzed for the different mixing ratios of base cement. The elastic modulus of the base cement differed significantly across mixing ratios. The magnitude and location of stress in the restored tooth did not change when the mixing ratios of Vitrebond and Dycal were varied. The peak stress and its spreading area in the model with Dycal were greater than those with Vitrebond. Changing the mixing ratio of the base cement can partially influence its mechanical character, but makes no difference to the magnitude and location of stress in the restored tooth. In the treatment of deep caries, a base cement whose elastic modulus is close to those of the dentin and the restoration should be chosen to avoid fracture of the tooth or restoration.
Omedo, Irene; Mogeni, Polycarp; Bousema, Teun; Rockett, Kirk; Amambua-Ngwa, Alfred; Oyier, Isabella; C. Stevenson, Jennifer; Y. Baidjoe, Amrish; de Villiers, Etienne P.; Fegan, Greg; Ross, Amanda; Hubbart, Christina; Jeffreys, Anne; N. Williams, Thomas; Kwiatkowski, Dominic; Bejon, Philip
2017-01-01
Background: The first models of malaria transmission assumed a completely mixed and homogeneous population of parasites. Recent models include spatial heterogeneity and variably mixed populations. However, there are few empirical estimates of parasite mixing with which to parameterize such models. Methods: Here we genotype 276 single nucleotide polymorphisms (SNPs) in 5199 P. falciparum isolates from two Kenyan sites (Kilifi county and Rachuonyo South district) and one Gambian site (Kombo coastal districts) to determine the spatio-temporal extent of parasite mixing, and use Principal Component Analysis (PCA) and linear regression to examine the relationship between genetic relatedness and distance in space and time for parasite pairs. Results: Using 107, 177 and 82 SNPs that were successfully genotyped in 133, 1602, and 1034 parasite isolates from The Gambia, Kilifi and Rachuonyo South district, respectively, we show that there are no discrete geographically restricted parasite sub-populations, but instead a diffuse spatio-temporal structure to parasite genotypes. Genetic relatedness of sample pairs is predicted by relatedness in space and time. Conclusions: Our findings suggest that targeted malaria control will benefit the surrounding community, but unfortunately also that emerging drug resistance will spread rapidly through the population. PMID:28612053
Evaluating methods to visualize patterns of genetic differentiation on a landscape.
House, Geoffrey L; Hahn, Matthew W
2018-05-01
With advances in sequencing technology, research in the field of landscape genetics can now be conducted at unprecedented spatial and genomic scales. This has been especially evident when using sequence data to visualize patterns of genetic differentiation across a landscape due to demographic history, including changes in migration. Two recent model-based visualization methods that can highlight unusual patterns of genetic differentiation across a landscape, SpaceMix and EEMS, are increasingly used. While SpaceMix's model can infer long-distance migration, EEMS' model is more sensitive to short-distance changes in genetic differentiation, and it is unclear how these differences may affect their results in various situations. Here, we compare SpaceMix and EEMS side by side using landscape genetics simulations representing different migration scenarios. While both methods excel when patterns of simulated migration closely match their underlying models, they can produce either un-intuitive or misleading results when the simulated migration patterns match their models less well, and this may be difficult to assess in empirical data sets. We also introduce unbundled principal components (un-PC), a fast, model-free method to visualize patterns of genetic differentiation by combining principal components analysis (PCA), which is already used in many landscape genetics studies, with the locations of sampled individuals. Un-PC has characteristics of both SpaceMix and EEMS and works well with simulated and empirical data. Finally, we introduce msLandscape, a collection of tools that streamline the creation of customizable landscape-scale simulations using the popular coalescent simulator ms and conversion of the simulated data for use with un-PC, SpaceMix and EEMS. © 2017 John Wiley & Sons Ltd.
Transient Ejector Analysis (TEA) code user's guide
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1993-01-01
A FORTRAN computer program for the semi-analytic prediction of unsteady thrust-augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in modeling the physics of each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA a research code (not a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.
Analysis of mixing in high-explosive fireballs using small-scale pressurised spheres
NASA Astrophysics Data System (ADS)
Courtiaud, S.; Lecysyn, N.; Damamme, G.; Poinsot, T.; Selle, L.
2018-02-01
After the detonation of an oxygen-deficient homogeneous high explosive, a phase of turbulent combustion, called afterburning, takes place at the interface between the rich detonation products and air. Its modelling is instrumental for the accurate prediction of the performance of these explosives. Because of the high temperature of detonation products, the chemical reactions are mixing-driven. Modelling afterburning thus relies on the precise description of the mixing process inside fireballs. This work presents a joint numerical and experimental study of a non-reacting reduced-scale set-up, which uses the compressed balloon analogy and does not involve the detonation of a high explosive. The set-up produces a flow similar to the one caused by a spherical detonation and allows focusing on the mixing process. The numerical work is composed of 2D and 3D LES simulations of the set-up. It is shown that grid independence can be reached by imposing perturbations at the edge of the fireball. The results compare well with the existing literature and give new insights on the mixing process inside fireballs. In particular, they highlight the fact that the mixing layer development follows an energetic scaling law but remains sensitive to the density ratio between the detonation products and air.
Peng, Dan; Bi, Yanlan; Ren, Xiaona; Yang, Guolong; Sun, Shangde; Wang, Xuede
2015-12-01
This study was performed to develop a hierarchical approach for the detection and quantification of adulteration of sesame oil with vegetable oils using gas chromatography (GC). First, a model was constructed to discriminate between authentic and adulterated sesame oils using the support vector machine (SVM) algorithm. Then, another SVM-based model was developed to identify the type of adulterant in the mixed oil. Finally, prediction models were built for each kind of oil using the partial least squares method. To validate this approach, 746 samples were prepared by mixing authentic sesame oils with five types of vegetable oil. The prediction results show that the detection limit for authentication is as low as 5% in mixing ratio and the root-mean-square errors of prediction range from 1.19% to 4.29%, meaning that this approach is a valuable tool to detect and quantify the adulteration of sesame oil. Copyright © 2015 Elsevier Ltd. All rights reserved.
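A minimal sketch of the two-stage idea on synthetic data (the paper uses GC fatty-acid profiles and a three-stage hierarchy): an SVM classifier first flags adulterated samples, then a PLS regression estimates the adulteration level. Feature dimensions and effect sizes are invented.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
pure = rng.normal(0.0, 1.0, (60, 12))                 # authentic "profiles"
level = rng.uniform(0.05, 0.4, 60)                    # adulteration fraction
adult = rng.normal(0.0, 1.0, (60, 12)) + 3.0 * level[:, None]

X = np.vstack([pure, adult])
y_class = np.r_[np.zeros(60), np.ones(60)]

clf = SVC(kernel="rbf").fit(X, y_class)               # stage 1: authentic or not?
pls = PLSRegression(n_components=3).fit(adult, level) # stage 2: how much adulterant?
print(clf.score(X, y_class), float(pls.score(adult, level)))
```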
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L.L.; Trent, D.S.
The TEMPEST computer program was used to simulate fluid and thermal mixing in the cold leg and downcomer of a pressurized water reactor under emergency core cooling high-pressure injection (HPI), which is of concern to the pressurized thermal shock (PTS) problem. Application of the code was made in performing an analysis simulation of a full-scale Westinghouse three-loop plant design cold leg and downcomer. Verification/assessment of the code was performed and analysis procedures developed using data from Creare 1/5-scale experimental tests. Results of three simulations are presented. The first is a no-loop-flow case with high-velocity, low-negative-buoyancy HPI in a 1/5-scale model of a cold leg and downcomer. The second is a no-loop-flow case with low-velocity, high-negative density (modeled with salt water) injection in a 1/5-scale model. Comparison of TEMPEST code predictions with experimental data for these two cases show good agreement. The third simulation is a three-dimensional model of one loop of a full size Westinghouse three-loop plant design. Included in this latter simulation are loop components extending from the steam generator to the reactor vessel and a one-third sector of the vessel downcomer and lower plenum. No data were available for this case. For the Westinghouse plant simulation, thermally coupled conduction heat transfer in structural materials is included. The cold leg pipe and fluid mixing volumes of the primary pump, the stillwell, and the riser to the steam generator are included in the model. In the reactor vessel, the thermal shield, pressure vessel cladding, and pressure vessel wall are thermally coupled to the fluid and thermal mixing in the downcomer. The inlet plenum mixing volume is included in the model. A 10-min (real time) transient beginning at the initiation of HPI is computed to determine temperatures at the beltline of the pressure vessel wall.
NASA Astrophysics Data System (ADS)
Adachi, Kouji; Zaizen, Yuji; Kajino, Mizuo; Igarashi, Yasuhito
2014-05-01
Soot particles influence the global climate through interactions with sunlight. A coating on soot particles increases their light absorption by increasing their absorption cross section and cloud condensation nuclei activity when mixed with other hygroscopic aerosol components. Therefore, it is important to understand how soot internally mixes with other materials to accurately simulate its effects in climate models. In this study, we used a transmission electron microscope (TEM) with an auto particle analysis system, which enables more particles to be analyzed than a conventional TEM. Using the TEM, soot particle size and shape (shape factor) were determined with and without coating from samples collected at a remote mountain site in Japan. The results indicate that ~10% of aerosol particles between 60 and 350 nm in aerodynamic diameter contain or consist of soot particles and ~75% of soot particles were internally mixed with nonvolatile ammonium sulfate or other materials. In contrast to an assumption that coatings change soot shape, both internally and externally mixed soot particles had similar shape and size distributions. Larger aerosol particles had higher soot mixing ratios, i.e., more than 40% of aerosol particles with diameters >1 µm had soot inclusions, whereas <20% of aerosol particles with diameters <1 µm included soot. Our results suggest that climate models may use the same size distributions and shapes for both internally and externally mixed soot; however, changing the soot mixing ratios in the different aerosol size bins is necessary.
Knuiman, Matthew W; Christian, Hayley E; Divitini, Mark L; Foster, Sarah A; Bull, Fiona C; Badland, Hannah M; Giles-Corti, Billie
2014-09-01
The purpose of the present analysis was to use longitudinal data collected over 7 years (from 4 surveys) in the Residential Environments (RESIDE) Study (Perth, Australia, 2003-2012) to more carefully examine the relationship of neighborhood walkability and destination accessibility with walking for transportation that has been seen in many cross-sectional studies. We compared effect estimates from 3 types of logistic regression models: 2 that utilize all available data (a population marginal model and a subject-level mixed model) and a third subject-level conditional model that exclusively uses within-person longitudinal evidence. The results support the evidence that neighborhood walkability (especially land-use mix and street connectivity), local access to public transit stops, and variety in the types of local destinations are important determinants of walking for transportation. The similarity of subject-level effect estimates from logistic mixed models and those from conditional logistic models indicates that there is little or no bias from uncontrolled time-constant residential preference (self-selection) factors; however, confounding by uncontrolled time-varying factors, such as health status, remains a possibility. These findings provide policy makers and urban planners with further evidence that certain features of the built environment may be important in the design of neighborhoods to increase walking for transportation and meet the health needs of residents. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham
2014-09-01
A fractional 2⁵ two-level factorial design of experiments (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. A process model of rubber blend preparation was developed that correlates the mixer process input parameters with the output response of blend compatibility. Model analysis of variance (ANOVA) and model fitting through curve evaluation finalized an R² of 99.60% with the proposed parametric combination of A = 30/70 NR/EPDM blend ratio; B = 70°C mixing temperature; C = 70 rpm rotor speed; D = 5 minutes mixing period; and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with an overall desirability of 0.966. Model validation with a small deviation of +2.09% confirmed the repeatability of the mixing strategy, with a valid maximum tensile strength output representing the blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides a brief insight into the utilization of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.
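For readers unfamiliar with fractional two-level designs, here is a minimal sketch (not the authors' workflow): a 2⁵⁻¹ half-fraction built from a full 2⁴ design with the generator E = ABCD, with main effects estimated from a synthetic response. The factor labels mirror the abstract; the response model is invented.

```python
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=4)))  # factors A, B, C, D
E = base.prod(axis=1, keepdims=True)                         # generator E = ABCD
design = np.hstack([base, E])                                # 16 runs, 5 factors

rng = np.random.default_rng(5)
# Synthetic tensile-strength response: A and D matter, plus noise.
y = 14 + 2.0 * design[:, 0] - 1.2 * design[:, 3] + rng.normal(0, 0.5, 16)

X = np.hstack([np.ones((16, 1)), design])                    # intercept + mains
effects = np.linalg.lstsq(X, y, rcond=None)[0]
print("intercept and main effects (A-E):", np.round(effects, 2))
```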
Semmens, Brice X; Ward, Eric J; Moore, Jonathan W; Darimont, Chris T
2009-07-09
Variability in resource use defines the width of a trophic niche occupied by a population. Intra-population variability in resource use may occur across hierarchical levels of population structure from individuals to subpopulations. Understanding how levels of population organization contribute to population niche width is critical to ecology and evolution. Here we describe a hierarchical stable isotope mixing model that can simultaneously estimate both the prey composition of a consumer diet and the diet variability among individuals and across levels of population organization. By explicitly estimating variance components for multiple scales, the model can deconstruct the niche width of a consumer population into relevant levels of population structure. We apply this new approach to stable isotope data from a population of gray wolves from coastal British Columbia, and show support for extensive intra-population niche variability among individuals, social groups, and geographically isolated subpopulations. The analytic method we describe improves mixing models by accounting for diet variability, and improves isotope niche width analysis by quantitatively assessing the contribution of levels of organization to the niche width of a population.
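The mass-balance layer underneath such a model can be sketched briefly. The code below solves only the non-hierarchical core, nonnegative source proportions that sum to one and reproduce a consumer's δ13C/δ15N signature, with invented isotope values; the paper's hierarchical variance components sit on top of this.

```python
# Minimal sketch, invented isotope values: the mass-balance core of a stable
# isotope mixing model, solved as a nonnegative least-squares problem.
import numpy as np
from scipy.optimize import nnls

sources = np.array([[-21.0,  8.0],     # prey source 1 (d13C, d15N)
                    [-17.5, 14.0],     # prey source 2
                    [-19.0, 11.0]])    # prey source 3
consumer = np.array([-19.15, 11.0])

A = np.vstack([sources.T, 1e3 * np.ones(3)])   # heavily weighted sum-to-one row
b = np.concatenate([consumer, [1e3]])
p, _ = nnls(A, b)
print((p / p.sum()).round(2))                  # here [0.3, 0.3, 0.4]
```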
The WRF-CMAQ modeling system was applied over a domain encompassing the northern hemisphere and a nested domain over the U.S. Model simulations for 1990-2010 were performed to examine trends in various air pollutant concentrations. Trends in O3 mixing ratios over the U.S. are...
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
Selection of latent variables for multiple mixed-outcome models
Zhou, Ling; Lin, Huazhen; Song, Xinyuan; Li, Yi
2014-01-01
Latent variable models have been widely used for modeling the dependence structure of multiple outcomes data. However, the formulation of a latent variable model is often unknown a priori, and misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes of varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them. PMID:27642219
Atella, Vincenzo; Bhattacharya, Jay; Carbonari, Lorenzo
2012-01-01
Objective: This article examines the relationship between drug price and drug quality and how it varies across two of the most common regulatory regimes in the pharmaceutical market: minimum efficacy standards (MES) and a mix of MES and price control mechanisms (MES + PC). Data Sources: Our primary data source is the Tufts-New England Medical Center Cost-Effectiveness Analysis Registry, which has been merged with price data taken from MEPS (for the United States) and AIFA (for Italy). Study Design: Through a simple model of adverse selection we model the interaction between firms, heterogeneous buyers, and the regulator. Principal Findings: The theoretical analysis provides two results. First, an MES regime provides greater incentives to produce high-quality drugs. Second, an MES + PC mix reduces the difference in price between the highest and lowest quality drugs on the market. Conclusion: The empirical analysis based on United States and Italian data corroborates these results. PMID:22091623
Zeeshan, Farrukh; Tabbassum, Misbah; Jorgensen, Lene; Medlicott, Natalie J
2018-02-01
Protein drugs may encounter conformational perturbations during the formulation processing of lipid-based solid dosage forms. In aqueous protein solutions, attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy can investigate these conformational changes following the subtraction of the spectral interference of the solvent with the protein amide I band. In solid dosage forms, however, the possible spectral contribution of lipid carriers to the protein amide I band may be an obstacle to determining conformational alterations. The objective of this study was to develop an ATR FT-IR spectroscopic method for the analysis of protein secondary structure embedded in solid lipid matrices. Bovine serum albumin (BSA) was chosen as the model protein, while Precirol ATO 5 (glycerol palmitostearate, melting point 58 °C) was employed as the model lipid matrix. BSA was incorporated into the lipid using physical mixing, melting and mixing, or wet granulation mixing methods. ATR FT-IR spectroscopy and size exclusion chromatography (SEC) were performed for the analysis of BSA secondary structure and its dissolution in aqueous media, respectively. The results showed significant interference of Precirol ATO 5 with the BSA amide I band, which could be subtracted at lipid contents of up to 90% w/w to analyze BSA secondary structure. In addition, ATR FT-IR spectroscopy detected thermally denatured BSA solids both alone and in the presence of the lipid matrix, indicating its suitability for the detection of denatured protein solids in lipid matrices. Although BSA remained in the solid state, conformational changes occurred upon its incorporation into solid lipid matrices. The extent of these conformational alterations was found to depend on the mixing method employed, as indicated by area overlap calculations. For instance, the melting and mixing method imparted a negligible effect on BSA secondary structure, whereas the wet granulation mixing method promoted more changes. SEC analysis showed the complete dissolution of BSA in the aqueous media employed in the wet granulation method. In conclusion, an ATR FT-IR spectroscopic method was successfully developed to investigate BSA secondary structure in solid lipid matrices following the subtraction of lipid spectral interference. The method could further be applied to investigate secondary structure perturbations of therapeutic proteins during formulation development.
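The two numerical operations the method rests on, spectral subtraction and area-overlap scoring, can be sketched with synthetic spectra; the band positions, widths, and the protein-free fitting window below are assumptions for illustration only.

```python
# Minimal sketch with synthetic spectra: (1) subtract a scaled lipid reference
# from the mixture spectrum, nulling a window assumed protein-free, then
# (2) score the residual against a reference band by area overlap.
import numpy as np

wn = np.linspace(1600.0, 1700.0, 400)                # amide I window, cm^-1
band = lambda c, w: np.exp(-0.5 * ((wn - c) / w) ** 2)

protein_ref = band(1654.0, 10.0)                     # alpha-helix-like amide I
lipid_ref = 0.6 * band(1685.0, 30.0)                 # lipid band overlapping amide I
mixture = 0.3 * protein_ref + 0.9 * lipid_ref        # formulation spectrum

# (1) Fit the lipid scale over the assumed protein-free edge, then subtract.
edge = wn > 1690.0
k = mixture[edge] @ lipid_ref[edge] / (lipid_ref[edge] @ lipid_ref[edge])
residual = np.clip(mixture - k * lipid_ref, 0.0, None)

# (2) Area overlap of unit-area spectra (1.0 would mean identical band shapes).
norm = lambda s: s / np.trapz(s, wn)
print(round(np.trapz(np.minimum(norm(residual), norm(protein_ref)), wn), 3))
```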
Review and developments of dissemination models for airborne carbon fibers
NASA Technical Reports Server (NTRS)
Elber, W.
1980-01-01
Dissemination prediction models were reviewed to determine their applicability to a risk assessment for airborne carbon fibers. The review showed that the Gaussian prediction models using partial reflection at the ground agreed very closely with a more elaborate diffusion analysis developed for the study. For distances beyond 10,000 m the Gaussian models predicted a slower fall-off in exposure levels than the diffusion models. This resulting level of conservatism was preferred for the carbon fiber risk assessment. The results also showed that the perfect vertical-mixing models developed herein agreed very closely with the diffusion analysis for all except the most stable atmospheric conditions.
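The Gaussian model with partial ground reflection that the review found adequate has a compact closed form; the sketch below implements it with illustrative power-law dispersion curves standing in for stability-class fits, so only the structure, not the numbers, should be read as meaningful.

```python
# Minimal sketch: Gaussian plume with a partial ground-reflection image term,
# weighted by alpha (alpha = 1 total reflection, 0 total deposition).
import numpy as np

def plume(x, y, z, Q=1.0, u=5.0, H=50.0, alpha=0.7):
    """Concentration at (x, y, z) m downwind of a point source of strength Q."""
    sig_y = 0.08 * x ** 0.9           # illustrative sigma curves, not class fits
    sig_z = 0.06 * x ** 0.85
    lateral = np.exp(-0.5 * (y / sig_y) ** 2)
    direct = np.exp(-0.5 * ((z - H) / sig_z) ** 2)
    image = alpha * np.exp(-0.5 * ((z + H) / sig_z) ** 2)
    return Q / (2 * np.pi * u * sig_y * sig_z) * lateral * (direct + image)

for x in (1_000, 10_000, 50_000):
    print(x, plume(x, 0.0, 1.5))      # ground-level exposure fall-off with range
```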
NASA Astrophysics Data System (ADS)
Petroselli, A.; Grimaldi, S.; Romano, N.
2012-12-01
The Soil Conservation Service Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but it is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed; it incorporates the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is here applied to a real case study, and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
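A simplified reading of the CN4GA idea can be sketched in a few lines: compute the SCS-CN event totals, then tune the Green-Ampt conductivity so that cumulative GA infiltration matches the CN-implied losses. All parameter values below are illustrative.

```python
# Minimal sketch: SCS-CN event totals plus a Green-Ampt conductivity calibrated
# so that cumulative infiltration matches the CN-implied losses.
import numpy as np
from scipy.optimize import brentq

def scs_cn_runoff(P_mm, CN):
    S = 25400.0 / CN - 254.0               # potential retention (mm)
    Ia = 0.2 * S                           # initial abstraction (mm)
    return 0.0 if P_mm <= Ia else (P_mm - Ia) ** 2 / (P_mm - Ia + S)

def ga_cumulative(Ks, t_h, psi=110.0, dtheta=0.3, F0=1.0):
    """Cumulative Green-Ampt infiltration F (mm) after t_h ponded hours."""
    F = F0
    for _ in range(int(t_h * 3600)):       # explicit stepping, dt = 1 s
        F += Ks * (1.0 + psi * dtheta / F) / 3600.0
    return F

P, CN, T = 60.0, 80.0, 6.0                 # storm depth (mm), curve number, hours
losses = P - scs_cn_runoff(P, CN) - 0.2 * (25400.0 / CN - 254.0)
Ks = brentq(lambda k: ga_cumulative(k, T) - losses, 1e-3, 50.0)
print(f"calibrated Ks ~ {Ks:.2f} mm/h for {losses:.1f} mm of infiltration")
```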
Simulation of fluid flows during growth of organic crystals in microgravity
NASA Technical Reports Server (NTRS)
Roberts, Gary D.; Sutter, James K.; Balasubramaniam, R.; Fowlis, William K.; Radcliffe, M. D.; Drake, M. C.
1987-01-01
Several counter-diffusion type crystal growth experiments were conducted in space. Improvements in crystal size and quality are attributed to reduced natural convection in the microgravity environment. One series of experiments, called DMOS (Diffusive Mixing of Organic Solutions), was designed and conducted by researchers at the 3M Corporation and flown by NASA on the space shuttle. Since only limited information about the mixing process is available from the space experiments, a series of ground-based experiments was conducted to further investigate the fluid dynamics within the DMOS crystal growth cell. Solutions with density differences in the range of 10^-7 to 10^-4 g/cc were used to simulate microgravity conditions. The small density differences were obtained by mixing D2O and H2O. Methylene blue dye was used to enhance flow visualization. The extent of mixing was measured photometrically using the 662 nm absorbance peak of the dye. Results indicate that extensive mixing by natural convection can occur even under microgravity conditions. This is qualitatively consistent with the results of a simple scaling analysis. Quantitative results are in close agreement with ongoing computational modeling analysis.
Talsma, A K; Reedijk, A M J; Damhuis, R A M; Westenend, P J; Vles, W J
2011-04-01
The re-resection rate after breast-conserving surgery (BCS) has been introduced as an indicator of the quality of surgical treatment in the international literature. The present study aims to develop a case-mix model for re-resection rates and to evaluate its performance in comparing results between hospitals. Electronic records of eligible patients diagnosed with in-situ and invasive breast cancer in 2006 and 2007 were derived from 16 hospitals in the Rotterdam Cancer Registry (RCR) (n = 961). A model was built in which prognostic factors for re-resection after BCS were identified, and the expected re-resection rate could be assessed for each hospital based on its case mix. To illustrate the opportunities of monitoring re-resections over time, after risk adjustment for patient profile, a VLAD chart was drawn for patients in one hospital. In general, three out of every ten women had re-surgery; in about 50% of cases this meant an additional mastectomy. Independent prognostic factors of re-resection after multivariate analysis were histological type, sublocalisation, tumour size, lymph node involvement and multifocal disease. After correction for case mix, one hospital performed significantly fewer re-resections than the reference hospital. On the other hand, two performed significantly more re-resections than expected based on their patient mix. Our population-based study confirms earlier reports that re-resection is frequently required after an initial breast-conserving operation. Case-mix models such as the one we constructed can be used to correct for case-mix variation when comparing hospital performance. VLAD charts are valuable tools to monitor quality of care within individual hospitals. Copyright © 2011 Elsevier Ltd. All rights reserved.
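A VLAD chart of the kind mentioned is simple to compute once a case-mix model supplies expected risks: it is the running sum of expected minus observed events. The sketch below uses synthetic risks and outcomes, not RCR data.

```python
# Minimal sketch: a VLAD (variable life-adjusted display) trace from case-mix
# predicted risks and observed binary outcomes.
import numpy as np

rng = np.random.default_rng(7)
n = 200
risk = rng.uniform(0.1, 0.5, n)        # case-mix predicted re-resection risk
observed = rng.binomial(1, risk)       # 1 = re-resection occurred

vlad = np.cumsum(risk - observed)      # climbs when doing better than expected
print(vlad[-1])                        # net events avoided relative to expectation
```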
NASA Astrophysics Data System (ADS)
Ruggles, Adam J.
2015-11-01
This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the second order established in the literature) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments, demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot-noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thus estimate intermittency. Four-parameter beta-distributions are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta-distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent agreement. This was attributed to the high quality of the measurements, which reduced the width of the correctly identified, noise-affected pure air distribution relative to the turbulent mixing distribution. The ignitability of the atmospheric jet is determined using the flammability factor calculated both from kernel density estimated (KDE) PDFs and from PDFs generated using the newly proposed model. Agreement between contours from both approaches is excellent. Ignitability of the under-expanded jet is also calculated using KDE PDFs. Contours are compared with those calculated by applying the atmospheric model to the under-expanded jet. Once again, agreement is excellent. This work demonstrates that self-similar scalar mixing statistics and ignitability of atmospheric jets can be accurately described by the proposed model. This description can be applied with confidence to under-expanded jets, which are more representative of leak and fuel injection scenarios.
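The proposed model's two steps can be sketched directly: fit a beta distribution to the turbulent samples by the method of moments (bounds fixed at [0, 1] here, a simplification of the four-parameter fit), then integrate the PDF between flammability limits to obtain the flammability factor. All sample values and limits below are illustrative.

```python
# Minimal sketch: moment-fit beta PDF for the turbulent mixture fraction plus a
# flammability factor as the probability mass between flammability limits.
import numpy as np
from scipy import stats

samples = np.clip(np.random.default_rng(3).normal(0.20, 0.07, 20_000), 1e-4, 1.0)

m, v = samples.mean(), samples.var()
common = m * (1 - m) / v - 1.0             # method of moments for a beta on [0, 1]
a, b = m * common, (1 - m) * common

LFL, UFL = 0.04, 0.75                      # nominal H2 flammability limits
flammability_factor = stats.beta.cdf(UFL, a, b) - stats.beta.cdf(LFL, a, b)
print(round(flammability_factor, 3))
```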
A comparison of methods for estimating the random effects distribution of a linear mixed model.
Ghidey, Wendimagegn; Lesaffre, Emmanuel; Verbeke, Geert
2010-12-01
This article reviews various recently suggested approaches to estimate the random effects distribution in a linear mixed model: (1) the smoothing-by-roughening approach of Shen and Louis, (2) the semi-nonparametric approach of Zhang and Davidian, (3) the heterogeneity model of Verbeke and Lesaffre, and (4) the flexible approach of Ghidey et al. These four approaches are compared via an extensive simulation study. We conclude that for the considered cases, the approach of Ghidey et al. often has the smallest integrated mean squared error for estimating the random effects distribution. An analysis of a longitudinal dental data set illustrates the performance of the methods in a practical example.
Nguyen, Huy Truong; Lee, Dong-Kyu; Choi, Young-Geun; Min, Jung-Eun; Yoon, Sang Jun; Yu, Yun-Hyun; Lim, Johan; Lee, Jeongmi; Kwon, Sung Won; Park, Jeong Hill
2016-05-30
Ginseng, the root of Panax ginseng, has long been the subject of adulteration, especially regarding its origins. Here, 60 ginseng samples from Korea and China initially displayed similar genetic makeup when investigated by a DNA-based technique with 23 chloroplast intergenic spacer regions. Hence, (1)H NMR-based metabolomics with orthogonal projections to latent structures discriminant analysis (OPLS-DA) was applied and successfully distinguished between samples from the two countries using seven primary metabolites as discrimination markers. Furthermore, to recreate adulteration in practice, 21 mixed samples of various Korea/China ratios were tested with the newly built OPLS-DA model. The results showed satisfactory separation according to the proportion of mixing. Finally, a procedure for assessing the mixing proportion of intentionally blended samples that achieved good predictability (adjusted R² = 0.8343) was constructed, thus verifying its promising application to the quality control of herbal foods by pointing out the possible mixing ratio of falsified samples. Copyright © 2016 Elsevier B.V. All rights reserved.
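A stand-in for the discrimination step can be sketched with scikit-learn, which offers plain PLS-DA but not OPLS-DA (the orthogonal signal correction step is omitted here); the spectral matrix and the injected class difference are synthetic.

```python
# Minimal sketch with synthetic spectra: PLS-DA as a stand-in for OPLS-DA.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 120))            # 60 samples x 120 spectral bins
y = np.repeat([0.0, 1.0], 30)             # 0 = Korea, 1 = China (dummy coding)
X[y == 1, :7] += 0.8                      # class shift in 7 "marker" bins

pls = PLSRegression(n_components=2).fit(X, y)
scores = pls.transform(X)[:, 0]
print(scores[y == 0].mean(), scores[y == 1].mean())   # classes separate

blends = 0.5 * X[y == 0] + 0.5 * X[y == 1]            # simulated 50:50 mixtures
print(pls.predict(blends).ravel().mean())             # ~0.5 for blended samples
```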
An Amino Acid Code for Irregular and Mixed Protein Packing
Joo, Hyun; Chavan, Archana; Fraga, Keith; Tsai, Jerry
2015-01-01
To advance our understanding of protein tertiary structure, the development of the knob-socket model is completed with an analysis of packing in irregular coil and turn secondary structure, as well as between mixed secondary structures. The knob-socket model simplifies packing based on repeated patterns of 2 motifs: a 3-residue socket for packing within secondary structure and a 4-residue knob-socket for tertiary packing. For coil and turn secondary structure, knob-sockets allow identification of a correlation between amino acid composition and tertiary arrangements in space. Coil contributes almost as much as α-helices to tertiary packing. Irregular secondary structure involves 3-residue cliques of consecutive contacting residues, or XYZ sockets. In irregular sockets, Gly, Pro, Asp and Ser are favored, while Cys, His, Met and Trp are not. For irregular knobs, the preference order is Arg, Asp, Pro, Asn, Thr, Leu, and Gly, while Cys, His, Met and Trp are again disfavored. In mixed packing, the knob amino acid preferences are a function of the socket that they pack into, whereas the amino acid composition of the sockets does not depend on the secondary structure of the knob. A unique motif of a coil knob with an XYZ β-sheet socket may potentially function to inhibit β-sheet extension. In addition, analysis of the preferred crossing angles for strands within a β-sheet and mixed α-helices/β-sheets identifies canonical packing patterns useful in protein design. Lastly, the knob-socket model abstracts the complexity of protein tertiary structure into an intuitive packing surface topology map. PMID:26370334
The Influence of Atomic Diffusion on Stellar Ages and Chemical Tagging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dotter, Aaron; Conroy, Charlie; Cargile, Phillip
2017-05-10
In the era of large stellar spectroscopic surveys, there is an emphasis on deriving not only stellar abundances but also the ages for millions of stars. In the context of Galactic archeology, stellar ages provide a direct probe of the formation history of the Galaxy. We use the stellar evolution code MESA to compute models with atomic diffusion—with and without radiative acceleration—and extra mixing in the surface layers. The extra mixing consists of both density-dependent turbulent mixing and envelope overshoot mixing. Based on these models we argue that it is important to distinguish between initial, bulk abundances (parameters) and current, surface abundances (variables) in the analysis of individual stellar ages. In stars that maintain radiative regions on evolutionary timescales, atomic diffusion modifies the surface abundances. We show that when initial, bulk metallicity is equated with current, surface metallicity in isochrone age analysis, the resulting stellar ages can be systematically overestimated by up to 20%. The change of surface abundances with evolutionary phase also complicates chemical tagging, which is the concept that dispersed star clusters can be identified through unique, high-dimensional chemical signatures. Stars from the same cluster, but in different evolutionary phases, will show different surface abundances. We speculate that calibration of stellar models may allow us to estimate not only stellar ages but also initial abundances for individual stars. In the meantime, analyzing the chemical properties of stars in similar evolutionary phases is essential to minimize the effects of atomic diffusion in the context of chemical tagging.
ERIC Educational Resources Information Center
Kliem, Soren; Kroger, Christoph; Kosfelder, Joachim
2010-01-01
Objective: At present, the most frequently investigated psychosocial intervention for borderline personality disorder (BPD) is dialectical behavior therapy (DBT). We conducted a meta-analysis to examine the efficacy and long-term effectiveness of DBT. Method: Systematic bibliographic research was undertaken to find relevant literature from online…
USDA-ARS?s Scientific Manuscript database
Mixed model analysis of data from 32 studies (122 diets) was used to validate omasal sampling for quantifying ruminal-N metabolism and to assess the relationships between nonammonia-N flow at the omasal canal and milk protein yield. Data were derived from experiments in cattle fed North American die...
Air pollution from aircraft. [jet exhaust - aircraft fuels/combustion efficiency
NASA Technical Reports Server (NTRS)
Heywood, J. B.; Chigier, N. A.
1975-01-01
A model which predicts nitric oxide and carbon monoxide emissions from a swirl-can modular combustor is discussed. A detailed analysis of the turbulent fuel-air mixing process in the swirl-can module wake region is reviewed. Hot-wire anemometry was employed, and gas sampling analysis of fuel combustion emissions was performed.
Fumanelli, Laura; Ajelli, Marco; Manfredi, Piero; Vespignani, Alessandro; Merler, Stefano
2012-01-01
Social contact patterns among individuals encode the transmission route of infectious diseases and are a key ingredient in the realistic characterization and modeling of epidemics. Unfortunately, the gathering of high quality experimental data on contact patterns in human populations is a very difficult task even at the coarse level of mixing patterns among age groups. Here we propose an alternative route to the estimation of mixing patterns that relies on the construction of virtual populations parametrized with highly detailed census and demographic data. We present the modeling of the population of 26 European countries and the generation of the corresponding synthetic contact matrices among the population age groups. The method is validated by a detailed comparison with the matrices obtained in six European countries by the most extensive survey study on mixing patterns. The methodology presented here allows a large scale comparison of mixing patterns in Europe, highlighting general common features as well as country-specific differences. We find clear relations between epidemiologically relevant quantities (reproduction number and attack rate) and socio-demographic characteristics of the populations, such as the average age of the population and the duration of primary school cycle. This study provides a numerical approach for the generation of human mixing patterns that can be used to improve the accuracy of mathematical models in the absence of specific experimental data. PMID:23028275
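Once a contact matrix is in hand, the epidemiologically relevant quantities mentioned follow from standard machinery; as a sketch, R0 is the spectral radius of a next-generation matrix built from the contact matrix, with toy numbers throughout.

```python
# Minimal sketch, toy values: R0 as the spectral radius of a next-generation
# matrix K = (beta / gamma) * C built from an age-structured contact matrix.
import numpy as np

C = np.array([[18.0, 6.0, 3.0],      # mean daily contacts: children, adults, elderly
              [6.0,  9.0, 3.0],
              [3.0,  3.0, 5.0]])     # illustrative, roughly survey-shaped
beta = 0.03                          # per-contact transmission probability (assumed)
gamma = 1 / 7                        # recovery rate, 1/days

K = beta / gamma * C                 # next-generation matrix, uniform susceptibility
R0 = max(abs(np.linalg.eigvals(K)))
print(round(R0, 2))
```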
The operating room case-mix problem under uncertainty and nurses capacity constraints.
Yahia, Zakaria; Eltawil, Amr B; Harraz, Nermine A
2016-12-01
Surgery is one of the key functions in hospitals; it generates significant revenue and admissions to hospitals. In this paper we address the decision of choosing a case mix for a surgery department. The objective of this study is to generate an optimal case-mix plan of surgery patients under uncertain surgery operations, including uncertainty in surgery durations, length of stay, surgery demand and the availability of nurses. In order to obtain an optimal case-mix plan, a stochastic optimization model is proposed and the sample average approximation method is applied. The proposed model is used to determine the number of surgery cases to be served weekly, the amount of operating room time dedicated to each specialty and the number of ward beds dedicated to each specialty. The optimal case-mix selection criterion is based upon a weighted score taking into account both the waiting list and the historical demand of each patient category. The score aims to maximize the service level of the operating rooms by increasing the total number of surgery cases that can be served. A computational experiment is presented to demonstrate the performance of the proposed method. The results show that the stochastic model solution outperforms the expected value problem solution. Additional analysis is conducted to study the effect of varying the number of ORs and nurse capacity on overall OR performance.
Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.
1997-01-01
The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from this paper is that even in the presence of growing, unstable waves, the mixing barriers around
An Efficient Alternative Mixed Randomized Response Procedure
ERIC Educational Resources Information Center
Singh, Housila P.; Tarray, Tanveer A.
2015-01-01
In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than the Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…
Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.
Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W
2017-06-01
Attrition is a common occurrence in cluster randomised trials, which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group.
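The individual-level analysis the paper recommends can be sketched with statsmodels: a linear mixed model with a random cluster intercept and the intervention-by-baseline interaction included, fitted to complete records under covariate-dependent missingness. Data below are simulated.

```python
# Minimal sketch, simulated cluster RCT: mixed model with random cluster
# intercept and treatment-by-baseline interaction, fitted to complete records.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
clusters, m = 30, 20
cl = np.repeat(np.arange(clusters), m)
treat = np.repeat(rng.permutation([0, 1] * (clusters // 2)), m)
x = rng.normal(size=clusters * m)                    # baseline covariate
y = (1.0 + 0.5 * treat + 0.8 * x + 0.4 * treat * x
     + np.repeat(rng.normal(0, 0.5, clusters), m)    # cluster random effect
     + rng.normal(0, 1.0, clusters * m))
df = pd.DataFrame({"y": y, "treat": treat, "x": x, "cluster": cl})

# Covariate-dependent missingness: outcomes more likely missing when x > 0.
df.loc[rng.random(len(df)) < 0.3 * (df["x"] > 0), "y"] = np.nan

fit = sm.MixedLM.from_formula("y ~ treat * x", groups="cluster",
                              data=df.dropna()).fit()
print(fit.params[["treat", "treat:x"]])              # ~0.5 and ~0.4 recovered
```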
2008-03-01
investigated, as well as the methodology used... Chapter IV presents the data collection and analysis procedures, and the resulting analysis and... interpolate the data, although a non-interpolating model is possible. For this research, Design and Analysis of Computer Experiments (DACE) is used... followed by the analysis. 4.1. Testing Approach. The initial SMOMADS algorithm used for this research was acquired directly from Walston [70].
Search for sterile neutrino mixing in the νμ → ντ appearance channel with the OPERA detector
NASA Astrophysics Data System (ADS)
Mauri, N.
2016-11-01
The OPERA experiment has observed muon neutrino to tau neutrino oscillations in the atmospheric sector in appearance mode. Five ντ candidate events have been detected, a number consistent with the expectation from the "standard" 3ν framework. Based on this result new limits on the mixing parameters of a massive sterile neutrino have been set. The analysis is performed in the 3+1 neutrino model.
The Promotion Strategy of Green Construction Materials: A Path Analysis Approach
Huang, Chung-Fah; Chen, Jung-Lu
2015-01-01
As one of the major materials used in construction, cement can be very resource-consuming and polluting to produce and use. Compared with traditional cement processing methods, dry-mix mortar is more environmentally friendly, reducing waste production and carbon emissions. Despite the continuous development and promotion of green construction materials, only a few of them are accepted or widely used in the market. In addition, the majority of existing research on green construction materials focuses more on their physical or chemical characteristics than on their promotion. Without effective promotion, their benefits cannot be fully appreciated and realized. Therefore, this study explores the promotion of dry-mix mortar, one such green material. This study uses both qualitative and quantitative methods. First, through a case study, the potential for reducing carbon emissions is verified. Then a path analysis based on the technology acceptance model (TAM) is conducted to verify the validity and predictability of the samples. According to the findings of this research, to ensure better promotion results and wider application of dry-mix mortar, it is suggested that more systematic efforts be invested in promoting the usefulness and benefits of dry-mix mortar. The model developed in this study can provide helpful references for future research and promotion of other green materials. PMID:28793613
Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models
NASA Astrophysics Data System (ADS)
Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Shibata, Kiyotaka; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn
2018-05-01
The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
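As a back-of-envelope sketch, one simple reading of the diagnostics defined above is: aging by mixing equals AoA minus RCTT, and mixing efficiency is that difference taken relative to RCTT (the paper's formal definition is more involved); all values below are invented.

```python
# Back-of-envelope sketch, invented values: aging by mixing and a simple
# relative mixing-efficiency reading for two hypothetical models.
aoa  = {"model_A": 4.4, "model_B": 5.6}   # mean age of air (years)
rctt = {"model_A": 3.1, "model_B": 2.9}   # residual circulation transit time

for name in aoa:
    aging_by_mixing = aoa[name] - rctt[name]
    print(name, round(aging_by_mixing / rctt[name], 2))   # cf. spread 0.24-1.02
```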
NASA Astrophysics Data System (ADS)
Lutz, Stefanie; Van Breukelen, Boris
2014-05-01
Natural attenuation can represent a complementary or alternative approach to engineered remediation of polluted sites. In this context, compound specific stable isotope analysis (CSIA) has proven a useful tool, as it can provide evidence of natural attenuation and assess the extent of in-situ degradation based on changes in isotope ratios of pollutants. Moreover, CSIA can allow for source identification and apportionment, which might help to identify major emission sources in complex contamination scenarios. However, degradation and mixing processes in aquifers can lead to changes in isotopic compositions, such that their simultaneous occurrence might complicate combined source apportionment (SA) and assessment of the extent of degradation (ED). We developed a mathematical model (stable isotope sources and sinks model; SISS model) based on the linear stable isotope mixing model and the Rayleigh equation that allows for simultaneous SA and quantification of the ED in a scenario of two emission sources and degradation via one reaction pathway. It was shown that the SISS model with CSIA of at least two elements contained in the pollutant (e.g., C and H in benzene) allows for unequivocal SA even in the presence of degradation-induced isotope fractionation. In addition, the model enables precise quantification of the ED provided degradation follows instantaneous mixing of two sources. If mixing occurs after two sources have degraded separately, the model can still yield a conservative estimate of the overall extent of degradation. The SISS model was validated against virtual data from a two-dimensional reactive transport model. The model results for SA and ED were in good agreement with the simulation results. The application of the SISS model to field data of benzene contamination was, however, challenged by large uncertainties in measured isotope data. Nonetheless, the use of the SISS model provided a better insight into the interplay of mixing and degradation processes at the field site, as it revealed the prevailing contribution of one emission source and a low overall ED. The model can be extended to a larger number of sources and sinks. It may aid in forensics and natural attenuation assessment of soil, groundwater, surface water, or atmospheric pollution.
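For the instantaneous-mixing case, the SISS equations reduce to a small nonlinear system: for each element, the measured delta is the mixing term plus a Rayleigh shift. The sketch below solves for the source fraction and remaining fraction with scipy; all signatures and enrichment factors are illustrative.

```python
# Minimal sketch, illustrative numbers: SISS-style simultaneous source
# apportionment and extent-of-degradation estimation for two sources, one
# pathway, two elements, assuming degradation follows instantaneous mixing:
#   d_E = x*dA_E + (1 - x)*dB_E + eps_E * ln(f)   for each element E.
import numpy as np
from scipy.optimize import fsolve

dA = np.array([-28.0, -60.0])      # source A (d13C, d2H), permil, invented
dB = np.array([-24.0, -110.0])     # source B
eps = np.array([-2.0, -30.0])      # enrichment factors per element, permil
d_obs = np.array([-24.9, -72.0])   # measured downstream signatures

def residual(u):
    x, f = u                       # source A fraction, remaining fraction
    return dA * x + dB * (1.0 - x) + eps * np.log(f) - d_obs

x, f = fsolve(residual, [0.5, 0.5])
print(f"source A fraction ~ {x:.2f}, extent of degradation ~ {1 - f:.2f}")
```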
A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models.
Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S; Wu, Xiaowei; Müller, Rolf
2018-01-01
Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design.
Symon, Andrew; Winter, Clare; Cochrane, Lynda
2015-06-01
Preterm birth represents a significant personal, clinical, organisational and financial burden. Strategies to reduce the preterm birth rate have had limited success. Limited evidence indicates that certain antenatal care models may offer some protection, although the causal mechanism is not understood. We sought to compare preterm birth rates for mixed-risk pregnant women accessing antenatal care organised at a freestanding midwifery unit (FMU) and mixed-risk pregnant women attending an obstetric unit (OU) with related community-based antenatal care. This was an unmatched retrospective 4-year Scottish cohort analysis (2008-2011) of mixed-risk pregnant women accessing (i) FMU antenatal care (n=1107); (ii) combined community-based and OU antenatal care (n=7567). Data were accessed via the Information and Statistics Division of the NHS in Scotland. Aggregate analysis and binary logistic regression were used to compare the cohorts' rates of preterm birth, and of spontaneous labour onset, use of pharmacological analgesia, unassisted vertex birth, and low birth weight. Odds ratios were adjusted for age, parity, deprivation score and smoking status in pregnancy. After adjustment, the mixed-risk FMU cohort had a statistically significantly reduced risk of preterm birth (5.1% [n=57] versus 7.7% [n=583]; AOR 0.73 [95% CI 0.55-0.98]; p=0.034). Differences in these secondary outcome measures were also statistically significant: spontaneous labour onset (FMU 83.9% versus OU 74.6%; AOR 1.74 [95% CI 1.46-2.08]; p<0.001); minimal intrapartum analgesia (FMU 53.7% versus OU 34.4%; AOR 2.17 [95% CI 1.90-2.49]; p<0.001); spontaneous vertex delivery (FMU 71.9% versus OU 63.5%; AOR 1.46 [95% CI 1.32-1.78]; p<0.001). The incidence of low birth weight was not statistically significant after adjustment for other variables. There was no significant difference in the rate of perinatal or neonatal death. Given this study's methodological limitations, we can only claim associations between the care model and our chosen outcomes. Although both cohorts were mixed risk, differences in risk levels could have contributed to these findings. Nevertheless, the significant difference in preterm birth rates in this study resonates with other research, including the recent Cochrane review of midwife-led continuity models. Because of the multiplicity of risk factors for preterm birth, we need to explore the salient features of the FMU model which may be contributing to this apparent protective effect. Because a randomised controlled trial would necessarily restrict choice to pregnant women, we feel that this option is problematic in exploring this further. We therefore plan to conduct a prospective matched cohort analysis together with a survey of unit practices and experiences. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chae, Gi-Tak; Yun, Seong-Taek; Kim, Kangjoo; Mayer, Bernhard
2006-04-01
The Pocheon spa-land area, South Korea lies in a topographically steep, fault-bounded basin and is characterized by a hydraulic upwelling flow zone of thermal water (up to 44 °C) in its central part. Hydrogeochemical and environmental isotope data for groundwater in the study area suggested the occurrence of two distinct water types, a Ca-HCO3 type and a Na-HCO3 type. The former is characterized by relatively high concentrations of Ca, SO4 and NO3, which show significant temporal variation indicating a strong influence by surface processes. In contrast, the Na-HCO3 type waters have high and temporally constant temperature, pH, TDS, Na, Cl, HCO3 and F, indicating the attainment of a chemical steady state with respect to the host rocks (granite and gneiss). Oxygen, hydrogen and tritium isotope data also indicate the differences in hydrologic conditions between the two groups: the relatively lower δ18O, δD and tritium values for Na-HCO3 type waters suggest that they recharged at higher elevations and have comparatively long mean residence times. Considering the geologic and hydrogeologic conditions of the study area, Na-HCO3 type waters possibly evolved from Ca-HCO3 type waters. Mass balance modeling revealed that the chemistry of Na-HCO3 type water was regulated by dissolution of silicates and carbonates and concurrent ion exchange. In particular, low Ca concentrations in Na-HCO3 water were mainly caused by cation exchange. Multivariate mixing and mass balance modeling (M3 modeling) was performed to evaluate the hydrologic mixing and mass transfer between discrete water masses occurring in the shallow peripheral part of the central spa-land area, where hydraulic upwelling occurs. Based on Q-mode factor analysis and mixing modeling using PHREEQC, an ideal mixing among three major water masses (surface water, shallow groundwater of Ca-HCO3 type, deep groundwater of Na-HCO3 type) was proposed. M3 modeling suggests that all the groundwaters in the spa area can be described as mixtures of these end-members. After mixing, the net mole transfer by geochemical reaction was less than that without mixing. Therefore, it is likely that geochemical reactions are of minor importance in the hydraulic mixing zone and that mixing regulates the groundwater geochemistry.
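The mixing step of such M3-style modelling reduces to expressing each sample as fractions of the three end-members via conservative tracers; the sketch below solves the three-fraction balance by least squares with a sum-to-one row, using invented tracer values.

```python
# Minimal sketch, invented tracer values: express one groundwater sample as
# fractions of three end-members via conservative tracers.
import numpy as np

# columns: surface water, shallow Ca-HCO3 water, deep Na-HCO3 water
E = np.array([[ 4.0, 10.0, 25.0],    # Cl (mg/L)
              [-7.0, -8.5, -10.0],   # d18O (permil)
              [ 1.0,  1.0,  1.0]])   # sum-to-one row
sample = np.array([12.0, -8.6, 1.0])

frac, *_ = np.linalg.lstsq(E, sample, rcond=None)
print(frac.round(2))                 # mixing fractions, here ~[0.11, 0.71, 0.18]
```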
Traveltime-based descriptions of transport and mixing in heterogeneous domains
NASA Astrophysics Data System (ADS)
Luo, Jian; Cirpka, Olaf A.
2008-09-01
Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters including mixing-related quantities such as dispersivities and kinetic mass transfer coefficients. In most applications, breakthrough curves (BTCs) of conservative and reactive compounds are measured at only a few locations and spatially explicit models are calibrated by matching these BTCs. A common difficulty in such applications is that the individual BTCs differ too strongly to justify the assumption of spatial homogeneity, whereas the number of observation points is too small to identify the spatial distribution of the decisive parameters. The key objective of the current study is to characterize physical transport by the analysis of conservative tracer BTCs and predict the macroscopic BTCs of compounds that react upon mixing from the interpretation of conservative tracer BTCs and reactive parameters determined in the laboratory. We do this in the framework of traveltime-based transport models which do not require spatially explicit, costly aquifer characterization. By considering BTCs of a conservative tracer measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the traveltime-based framework, the BTC of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct traveltime value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of traveltimes, which also determines the weights associated with each stream tube. Key issues in using the traveltime-based framework include the description of mixing mechanisms and the estimation of the traveltime distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach of determining the traveltime distribution, given a BTC integrated over an observation plane and estimated mixing parameters. The latter approach is superior to fitting parametric models in cases wherein the true traveltime distribution exhibits multiple peaks or long tails. It is demonstrated that there is freedom for the combinations of mixing parameters and traveltime distributions to fit conservative BTCs and describe the tailing. A reactive transport case of a dual Michaelis-Menten problem demonstrates that the reactive mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated by local BTCs.
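The framework's central superposition can be sketched compactly: a flux-averaged BTC as the traveltime-weighted integral of per-streamtube transfer functions, here an inverse-Gaussian (advective-dispersive) kernel. The bimodal traveltime density is invented to illustrate what a nonparametric estimate must capture.

```python
# Minimal sketch, invented traveltime density: flux-averaged BTC as the
# traveltime-weighted superposition of inverse-Gaussian streamtube kernels.
import numpy as np

t = np.linspace(0.01, 20.0, 4000)

def ig_kernel(t, tau, Pe=50.0):
    """Inverse-Gaussian BTC of a streamtube with mean traveltime tau."""
    return np.sqrt(Pe * tau / (4.0 * np.pi * t ** 3)) * \
        np.exp(-Pe * (t - tau) ** 2 / (4.0 * t * tau))

taus = np.linspace(0.2, 8.0, 120)                  # streamtube traveltimes
p = (0.6 * np.exp(-0.5 * ((taus - 1.5) / 0.3) ** 2)
     + 0.4 * np.exp(-0.5 * ((taus - 4.0) / 0.8) ** 2))   # bimodal p(tau)
p /= np.trapz(p, taus)

btc = np.trapz(p[None, :] * ig_kernel(t[:, None], taus[None, :]), taus, axis=1)
print(np.trapz(btc, t))                            # ~1: mass recovery check
```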
Fermion masses and mixings and dark matter constraints in a model with radiative seesaw mechanism
NASA Astrophysics Data System (ADS)
Bernal, Nicolás; Cárcamo Hernández, A. E.; de Medeiros Varzielas, Ivo; Kovalenko, Sergey
2018-05-01
We formulate a predictive model of fermion masses and mixings based on a Δ(27) family symmetry. In the quark sector the model leads to the viable mixing inspired texture where the Cabibbo angle comes from the down quark sector and the other angles come from both up and down quark sectors. In the lepton sector the model generates a predictive structure for charged leptons and, after radiative seesaw, an effective neutrino mass matrix with only one real and one complex parameter. We carry out a detailed analysis of the predictions in the lepton sector, where the model is only viable for inverted neutrino mass hierarchy, predicting a strict correlation between θ23 and θ13. We show a benchmark point that leads to the best-fit values of θ12, θ13, predicting a specific sin²θ23 ≃ 0.51 (within the 3σ range), a leptonic CP-violating Dirac phase δ ≃ 281.6° and for neutrinoless double-beta decay mee ≃ 41.3 meV. We turn then to an analysis of the dark matter candidates in the model, which are stabilized by an unbroken ℤ2 symmetry. We discuss the possibility of scalar dark matter, which can generate the observed abundance through the Higgs portal by the standard WIMP mechanism. An interesting possibility arises if the lightest heavy Majorana neutrino is the lightest ℤ2-odd particle. The model can produce a viable fermionic dark matter candidate, but only as a feebly interacting massive particle (FIMP), with the smallness of the coupling to the visible sector protected by a symmetry and directly related to the smallness of the light neutrino masses.
NASA Astrophysics Data System (ADS)
Peck, Jaron Joshua
Water is used in power generation for cooling processes in thermoelectric power plants, and power generation currently withdraws more water than any other sector in the U.S. Reducing water use from power generation will help to alleviate water stress in at-risk areas, where droughts have the potential to strain water resources. The amount of water used for power varies depending on many climatic aspects as well as plant operation factors. This work presents a model that quantifies the water use for power generation for two regions representing different generation fuel portfolios, California and Utah. The analysis of the California Independent System Operator introduces the methods of water-energy modeling by creating an overall water use factor in volume of water per unit of energy produced, based on the fuel generation mix of the area. The idea of water monitoring based on energy used by a building or region is explored based on live fuel mix data. This is for the purposes of increasing public awareness of the water associated with personal energy use and helping to promote greater energy efficiency. The Utah case study explores the effects more renewable, and less water-intensive, forms of energy will have on the overall water use from power generation for the state. Using a similar model to that of the California case study, total water savings are quantified based on power reduction scenarios involving increased use of renewable energy. The plausibility of implementing more renewable energy into Utah's power grid is also discussed. Data resolution, as well as dispatch methods, economics, and solar variability, introduces some uncertainty into the analysis.
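The overall water-use factor described here is a generation-weighted mean of per-fuel water intensities; the sketch below applies it to a made-up hourly fuel mix, with placeholder intensity values rather than the study's figures.

```python
# Minimal sketch with placeholder intensities: the overall water-use factor as
# a generation-weighted mean of per-fuel water intensities for one hour of mix.
intensity_L_per_MWh = {"gas": 700.0, "coal": 1900.0, "nuclear": 2500.0,
                       "hydro": 0.0, "wind": 0.0, "solar_pv": 3.0}  # assumed values
gen_MWh = {"gas": 5200.0, "coal": 800.0, "nuclear": 2100.0,
           "hydro": 900.0, "wind": 600.0, "solar_pv": 400.0}        # assumed mix

total_water_L = sum(gen_MWh[f] * intensity_L_per_MWh[f] for f in gen_MWh)
print(f"{total_water_L / sum(gen_MWh.values()):.0f} L/MWh for this fuel mix")
```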
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the difference in effects between two treatments. A Bayesian node-splitting model was first proposed, and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments, or splitting the parameter symmetrically between the two treatments, can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of the design-by-treatment interaction model, and that different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Cohen, D E; Carey, M C
1991-08-01
We determined the distribution of lecithin molecular species between vesicles and mixed micelles in cholesterol-supersaturated model biles (molar taurocholate-lecithin-cholesterol ratio 67:23:10, 3 g/dl, 0.15 M NaCl, pH approximately 6-7) that contained equimolar synthetic lecithin mixtures or egg yolk or soybean lecithins. After apparent equilibration (48 h), biles were fractionated by Superose 6 gel filtration chromatography at 20 degrees C, and lecithin molecular species in the vesicle and mixed micellar fractions were quantified as benzoyl diacylglycerides by high performance liquid chromatography. With binary lecithin mixtures, vesicles were enriched with lecithins containing the most saturated sn-1 or sn-2 chains by as much as 2.4-fold, whereas mixed micelles were enriched in the more unsaturated lecithins. Vesicles isolated from model biles composed of egg yolk (primarily sn-1 16:0 and 18:0 acyl chains) or soybean (mixed saturated and unsaturated sn-1 acyl chains) lecithins were selectively enriched (6.5-76%) in lecithins with saturated sn-1 acyl chains, whereas mixed micelles were enriched with lecithins composed of either sn-1 18:1, 18:2, and 18:3 unsaturated or sn-2 20:4, 22:4, and 22:6 polyunsaturated chains. Gel filtration, lipid analysis, and quasielastic light scattering revealed that apparent micellar cholesterol solubilities and metastable vesicle cholesterol/lecithin molar ratios were as much as 60% and 100% higher, respectively, in biles composed of unsaturated lecithins. Acyl chain packing constraints imposed by distinctly different particle geometries most likely explain the asymmetric distribution of lecithin molecular species between vesicles and mixed micelles in model bile as well as the variations in apparent micellar cholesterol solubilities and vesicle cholesterol/lecithin molar ratios.
Analysis/forecast experiments with a multivariate statistical analysis scheme using FGGE data
NASA Technical Reports Server (NTRS)
Baker, W. E.; Bloom, S. C.; Nestler, M. S.
1985-01-01
A three-dimensional, multivariate statistical analysis method, optimal interpolation (OI), is described for modeling meteorological data from widely dispersed sites. The model was developed to analyze FGGE data at the NASA-Goddard Laboratory of Atmospherics. The model features a multivariate surface analysis over the oceans, including maintenance of the Ekman balance and a geographically dependent correlation function. Preliminary comparisons are made between the OI model and similar schemes employed at the European Centre for Medium-Range Weather Forecasts and the National Meteorological Center. The OI scheme is used to provide input to a GCM, and model error correlations are calculated for forecasts of 500 mb vertical water mixing ratios and the wind profiles. Comparisons are made between the predictions and measured data. The model is shown to be as accurate as a successive corrections model out to 4.5 days.
A continuous mixing model for pdf simulations and its applications to combusting shear flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Chen, J.-Y.
1991-01-01
The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.
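To make the contrast concrete, here is a minimal particle sketch of the two mixing-model families discussed above: a Curl-type coalescence/dispersion step, in which randomly selected particle pairs jump discontinuously to their common mean, and a continuous relax-to-the-mean step. Particle counts and time scales are arbitrary, and this is a generic sketch, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, tau, steps = 20000, 0.01, 1.0, 300
phi_cd = rng.choice([0.0, 1.0], size=n)    # unmixed scalar: double-delta pdf
phi_cont = phi_cd.copy()

for _ in range(steps):
    # Classic C/D step: a few random pairs jump instantaneously to their mean
    n_pairs = max(1, int(n * dt / tau))
    i, j = rng.integers(0, n, size=(2, n_pairs))
    m = 0.5 * (phi_cd[i] + phi_cd[j])
    phi_cd[i], phi_cd[j] = m, m            # discontinuous in time
    # Continuous alternative: every particle relaxes smoothly toward the mean
    phi_cont += -0.5 * (phi_cont - phi_cont.mean()) * dt / tau

print(phi_cd.var(), phi_cont.var())        # both scalar variances decay in time
```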
Psychometric analysis in support of shortening the Scale for the Assessment of Negative Symptoms.
Levine, Stephen Z; Leucht, Stefan
2013-09-01
Despite recent emphasis on the measurement and treatment of negative symptoms, studies of the Scale for the Assessment of Negative Symptoms (SANS) identify different symptom clusters, offer mixed support for its psychometric properties, and suggest that it be shortened. The current study objective is to examine the psychometric properties of the SANS and the feasibility of a short research version of the SANS. Data were re-analyzed from three clinical trials that compared placebo and amisulpride over 60 days. Participants had chronic schizophrenia and predominantly negative symptoms (n=487). Baseline data were examined with exploratory factor analysis and Item Response Theory (IRT) to identify a short SANS. The short and original SANS were compared with confirmatory factor analysis at endpoint, and on symptom response with mixed modeling. Results showed that at baseline the SANS consisted of three factors labeled Affective-flattening, Asociality and Alogia-inattentiveness. IRT suggested a short SANS with 11 items and 3 response options. Comparisons of the original and short SANS showed: the short version was a better fit to the data based on confirmatory factor analysis at endpoint; similar significant (p<.001) correlations between the baseline and subsequent scores; similar reliability; and similar significance (p<.05) on response based on mixed modeling. It is concluded that a short SANS is feasible to assess predominantly negative symptoms in chronic schizophrenia in research settings. Copyright © 2012 Elsevier B.V. and ECNP. All rights reserved.
Morais, João; Aguiar, Carlos; McLeod, Euan; Chatzitheofilou, Ismini; Fonseca Santos, Isabel; Pereira, Sónia
2014-09-01
To project the long-term cost-effectiveness of treating non-valvular atrial fibrillation (AF) patients for stroke prevention with rivaroxaban compared to warfarin in Portugal. A Markov model was used that included health and treatment states describing the management and consequences of AF and its treatment. The model's time horizon was set at a patient's lifetime and each cycle at three months. The analysis was conducted from a societal perspective and a 5% discount rate was applied to both costs and outcomes. Treatment effect data were obtained from the pivotal phase III ROCKET AF trial. The model was also populated with utility values obtained from the literature and with cost data derived from official Portuguese sources. The outcomes of the model included life-years, quality-adjusted life-years (QALYs), incremental costs, and associated incremental cost-effectiveness ratios (ICERs). Extensive sensitivity analyses were undertaken to further assess the findings of the model. As there is evidence indicating underuse and underprescription of warfarin in Portugal, an additional analysis was performed using a mixed comparator composed of no treatment, aspirin, and warfarin, which better reflects real-world prescribing in Portugal. This cost-effectiveness analysis produced an ICER of €3895/QALY for the base-case analysis (vs. warfarin) and of €6697/QALY for the real-world prescribing analysis (vs. mixed comparator). The findings were robust when tested in sensitivity analyses. The results showed that rivaroxaban may be a cost-effective alternative compared with warfarin or real-world prescribing in Portugal. Copyright © 2014 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.
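For readers unfamiliar with Markov cost-effectiveness models, the sketch below shows the general mechanics described above (quarterly cycles, discounting of costs and QALYs, and an ICER) on an invented three-state example. The states, transition probabilities, costs and utilities are placeholders, not the published model inputs.

```python
import numpy as np

states = ["stable_AF", "stroke", "dead"]
P = {  # 3-month transition matrices per strategy (hypothetical values)
    "rivaroxaban": np.array([[0.985, 0.005, 0.010],
                             [0.000, 0.970, 0.030],
                             [0.000, 0.000, 1.000]]),
    "warfarin":    np.array([[0.980, 0.009, 0.011],
                             [0.000, 0.970, 0.030],
                             [0.000, 0.000, 1.000]]),
}
cost_per_cycle = {"rivaroxaban": np.array([220.0, 1200.0, 0.0]),
                  "warfarin":    np.array([60.0, 1200.0, 0.0])}
utility = np.array([0.78, 0.55, 0.0])         # annual QALY weights per state

def run(strategy, n_cycles=160):              # roughly a lifetime horizon
    cohort = np.array([1.0, 0.0, 0.0])
    disc = (1 + 0.05) ** (-np.arange(n_cycles) / 4)   # 5%/year, quarterly cycles
    cost = qaly = 0.0
    for t in range(n_cycles):
        cost += disc[t] * cohort @ cost_per_cycle[strategy]
        qaly += disc[t] * cohort @ utility / 4        # utilities accrue quarterly
        cohort = cohort @ P[strategy]
    return cost, qaly

c_r, q_r = run("rivaroxaban")
c_w, q_w = run("warfarin")
print("ICER (euro/QALY):", (c_r - c_w) / (q_r - q_w))
```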
Intuitive Logic Revisited: New Data and a Bayesian Mixed Model Meta-Analysis
Singmann, Henrik; Klauer, Karl Christoph; Kellen, David
2014-01-01
Recent research on syllogistic reasoning suggests that the logical status (valid vs. invalid) of even difficult syllogisms can be intuitively detected via differences in conceptual fluency between logically valid and invalid syllogisms when participants are asked to rate how much they like a conclusion following from a syllogism (Morsanyi & Handley, 2012). These claims of an intuitive logic are at odds with most theories on syllogistic reasoning, which posit that detecting the logical status of difficult syllogisms requires effortful and deliberate cognitive processes. We present new data replicating the effects reported by Morsanyi and Handley, but show that this effect is eliminated when controlling for a possible confound in terms of conclusion content. Additionally, we reanalyze three studies without this confound with a Bayesian mixed model meta-analysis (i.e., controlling for participant and item effects), which provides evidence for the null hypothesis and against Morsanyi and Handley's claim. PMID:24755777
NASA Astrophysics Data System (ADS)
Hashmi, M. S.; Khan, N.; Ullah Khan, Sami; Rashidi, M. M.
In this study, we have constructed a mathematical model to investigate the heat source/sink effects in mixed convection axisymmetric flow of an incompressible, electrically conducting Oldroyd-B fluid between two infinite isothermal stretching disks. The effects of viscous dissipation and Joule heating are also considered in the heat equation. The governing partial differential equations are converted into ordinary differential equations by using appropriate similarity variables. The series solution of these dimensionless equations is constructed by using the homotopy analysis method. The convergence of the obtained solution is carefully examined. The effects of various involved parameters on pressure, velocity and temperature profiles are comprehensively studied. A graphical analysis has been presented for various values of the problem parameters. The numerical values of wall shear stress and Nusselt number are computed at both the upper and lower disks. Moreover, a graphical and tabular explanation is provided for the critical values of the Frank-Kamenetskii parameter with respect to the other flow parameters.
A computer model of long-term salinity in San Francisco Bay: Sensitivity to mixing and inflows
Uncles, R.J.; Peterson, D.H.
1995-01-01
A two-level model of the residual circulation and tidally-averaged salinity in San Francisco Bay has been developed in order to interpret long-term (days to decades) salinity variability in the Bay. Applications of the model to biogeochemical studies are also envisaged. The model has been used to simulate daily-averaged salinity in the upper and lower levels of a 51-segment discretization of the Bay over the 22-y period 1967–1988. Observed, monthly-averaged surface salinity data and monthly averages of the daily-simulated salinity are in reasonable agreement, both near the Golden Gate and in the upper reaches, close to the delta. Agreement is less satisfactory in the central reaches of North Bay, in the vicinity of Carquinez Strait. Comparison of daily-averaged data at Station 5 (Pittsburg, in the upper North Bay) with modeled data indicates close agreement with a correlation coefficient of 0.97 for the 4110 daily values. The model successfully simulates the marked seasonal variability in salinity as well as the effects of rapidly changing freshwater inflows. Salinity variability is driven primarily by freshwater inflow. The sensitivity of the modeled salinity to variations in the longitudinal mixing coefficients is investigated. The modeled salinity is relatively insensitive to the calibration factor for vertical mixing and relatively sensitive to the calibration factor for longitudinal mixing. The optimum value of the longitudinal calibration factor is 1.1, compared with the physically-based value of 1.0. Linear time-series analysis indicates that the observed and dynamically-modeled salinity-inflow responses are in good agreement in the lower reaches of the Bay.
NASA Astrophysics Data System (ADS)
Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie
2018-05-01
Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and single-level modeling of soil units and sample points, respectively. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heteroscedasticity, and an additional three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity, via CPP, and spatiotemporal correlation, via ARMA(1,1), showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R2 = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
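A rough Python analogue of the nested two-level structure can be written with statsmodels' MixedLM, as sketched below on synthetic data. Note that the CPP variance function and the ARMA(1,1) residual correlation used in the paper are not available in this interface and are omitted; the data, sizes and coefficients are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the NDVI-rainfall data: sample points nested in soil units.
rng = np.random.default_rng(0)
soil = np.repeat(np.arange(6), 10 * 12)        # 6 soil units
point = np.repeat(np.arange(60), 12)           # 10 points per unit, 12 years each
rain = rng.gamma(4.0, 60.0, size=soil.size)
u_soil = rng.normal(0, 0.05, 6)[soil]          # soil-unit random intercepts
u_pt = rng.normal(0, 0.03, 60)[point]          # point-level random intercepts
ndvi = 0.2 + 0.0008 * rain + u_soil + u_pt + rng.normal(0, 0.02, soil.size)
df = pd.DataFrame({"ndvi": ndvi, "rain": rain, "soil": soil, "point": point})

# Random intercept for soil unit (grouping factor), plus a nested variance
# component for sample points within each soil unit.
model = smf.mixedlm("ndvi ~ rain", df, groups="soil",
                    vc_formula={"point": "0 + C(point)"})
fit = model.fit()
print(fit.summary())
```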
Hamad, Eradah O; Savundranayagam, Marie Y; Holmes, Jeffrey D; Kinsella, Elizabeth Anne; Johnson, Andrew M
2016-03-08
Twitter's 140-character microblog posts are increasingly used to access information and facilitate discussions among health care professionals and between patients with chronic conditions and their caregivers. Recently, efforts have emerged to investigate the content of health care-related posts on Twitter. This marks a new area for researchers to investigate and apply content analysis (CA). In current infodemiology, infoveillance and digital disease detection research initiatives, quantitative and qualitative Twitter data are often combined, and there are no clear guidelines for researchers to follow when collecting and evaluating Twitter-driven content. The aim of this study was to identify studies on health care and social media that used Twitter feeds as a primary data source and CA as an analysis technique. We evaluated the resulting 18 studies based on a narrative review of previous methodological studies and textbooks to determine the criteria and main features of quantitative and qualitative CA. We then used the key features of CA and mixed-methods research designs to propose the combined content-analysis (CCA) model as a solid research framework for designing, conducting, and evaluating investigations of Twitter-driven content. We conducted a PubMed search to collect studies published between 2010 and 2014 that used CA to analyze health care-related tweets. The PubMed search and reference list checks of selected papers identified 21 papers. We excluded 3 papers and further analyzed 18. Results suggest that the methods used in these studies were not purely quantitative or qualitative, and the mixed-methods design was not explicitly chosen for data collection and analysis. A solid research framework is needed for researchers who intend to analyze Twitter data through the use of CA. We propose the CCA model as a useful framework that provides a straightforward approach to guide Twitter-driven studies and that adds rigor to health care social media investigations. We provide suggestions for the use of the CCA model in elder care-related contexts.
Hamad, Eradah O; Savundranayagam, Marie Y; Holmes, Jeffrey D; Kinsella, Elizabeth Anne
2016-01-01
Background Twitter’s 140-character microblog posts are increasingly used to access information and facilitate discussions among health care professionals and between patients with chronic conditions and their caregivers. Recently, efforts have emerged to investigate the content of health care-related posts on Twitter. This marks a new area for researchers to investigate and apply content analysis (CA). In current infodemiology, infoveillance and digital disease detection research initiatives, quantitative and qualitative Twitter data are often combined, and there are no clear guidelines for researchers to follow when collecting and evaluating Twitter-driven content. Objective The aim of this study was to identify studies on health care and social media that used Twitter feeds as a primary data source and CA as an analysis technique. We evaluated the resulting 18 studies based on a narrative review of previous methodological studies and textbooks to determine the criteria and main features of quantitative and qualitative CA. We then used the key features of CA and mixed-methods research designs to propose the combined content-analysis (CCA) model as a solid research framework for designing, conducting, and evaluating investigations of Twitter-driven content. Methods We conducted a PubMed search to collect studies published between 2010 and 2014 that used CA to analyze health care-related tweets. The PubMed search and reference list checks of selected papers identified 21 papers. We excluded 3 papers and further analyzed 18. Results Results suggest that the methods used in these studies were not purely quantitative or qualitative, and the mixed-methods design was not explicitly chosen for data collection and analysis. A solid research framework is needed for researchers who intend to analyze Twitter data through the use of CA. Conclusions We propose the CCA model as a useful framework that provides a straightforward approach to guide Twitter-driven studies and that adds rigor to health care social media investigations. We provide suggestions for the use of the CCA model in elder care-related contexts. PMID:26957477
Strategic Analysis of Terrorism
NASA Astrophysics Data System (ADS)
Arce, Daniel G.; Sandler, Todd
Two areas that are increasingly studied in the game-theoretic literature on terrorism and counterterrorism are collective action and asymmetric information. One contribution of this chapter is a survey and extension of continuous policy models with differentiable payoff functions. In this way, policies can be characterized as strategic substitutes (e.g., proactive measures) or strategic complements (e.g., defensive measures). Mixed substitute-complement models are also introduced. We show that the efficiency of counterterror policy depends upon (i) the strategic substitutes-complements characterization, and (ii) who initiates the action. Surprisingly, in mixed models the dichotomy between individual and collective action may disappear. A second contribution is the consideration of a signaling model where indiscriminate spectacular terrorist attacks may erode terrorists' support among their constituency, and proactive government responses can create a backlash effect in favor of terrorists. A novel equilibrium of this model reflects the well-documented ineffectiveness of terrorism in achieving its stated goals.
Domnich, Alexander; Arata, Lucia; Amicizia, Daniela; Signori, Alessio; Gasparini, Roberto; Panatto, Donatella
2016-11-16
Geographical accessibility is an important determinant for the utilisation of community pharmacies. The present study explored patterns of spatial accessibility with respect to pharmacies in Liguria, Italy, a region with particular geographical and demographic features. Municipal density of pharmacies was proxied as the number of pharmacies per capita and per km2, and spatial autocorrelation analysis was performed to identify spatial clusters. Both non-spatial and spatial models were constructed to predict the study outcome. Spatial autocorrelation analysis showed a highly significant clustered pattern in the density of pharmacies per capita (I=0.082) and per km2 (I=0.295). Potentially under-supplied areas were mostly located in the mountainous hinterland. Ordinary least-squares (OLS) regressions established a significant positive relationship between the density of pharmacies and income among municipalities located at high altitudes, while no such association was observed in lower-lying areas. However, residuals of the OLS models were spatially auto-correlated. The best-fitting mixed geographically weighted regression (GWR) models outperformed the corresponding OLS models. Pharmacies per capita were best predicted by two local predictors (altitude and proportion of immigrants) and two global ones (proportion of elderly residents and income), while the local terms population, mean altitude and rural status and the global term income functioned as independent variables predicting pharmacies per km2. The density of pharmacies in Liguria was found to be associated with both socio-economic and landscape factors. Mapping of mixed GWR results would be helpful to policy-makers.
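The global Moran's I statistic used in the spatial autocorrelation analysis above is simple to compute directly. The sketch below does so in plain numpy on an invented five-municipality contiguity matrix; the weights and density values are placeholders, not the Ligurian data.

```python
import numpy as np

def morans_I(y, W):
    """Global Moran's I for values y under a binary contiguity weight matrix W."""
    y = np.asarray(y, dtype=float)
    z = y - y.mean()
    n = y.size
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: 5 municipalities on a line, rook contiguity
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
density = np.array([2.1, 2.0, 1.8, 0.6, 0.5])  # pharmacies per 10,000 inhabitants
print(round(morans_I(density, W), 3))          # positive value suggests clustering
```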
NASA Technical Reports Server (NTRS)
Chang, Y. V.
1986-01-01
The effects of external parameters on the surface heat and vapor fluxes into the marine atmospheric boundary layer (MABL) during cold-air outbreaks are investigated using the numerical model of Stage and Businger (1981a). These fluxes are nondimensionalized using the horizontal heat (g1) and vapor (g2) transfer coefficient method first suggested by Chou and Atlas (1982) and further formulated by Stage (1983a). In order to simplify the problem, the boundary layer is assumed to be well mixed and horizontally homogeneous, and to have linear shoreline soundings of equivalent potential temperature and mixing ratio. Modifications of initial surface flux estimates, time step limitation, and termination conditions are made to the MABL model to obtain accurate computations. The dependence of g1 and g2 in the cloud topped boundary layer on the external parameters (wind speed, divergence, sea surface temperature, radiative sky temperature, cloud top radiation cooling, and initial shoreline soundings of temperature, and mixing ratio) is studied by a sensitivity analysis, which shows that the uncertainties of horizontal transfer coefficients caused by changes in the parameters are reasonably small.
A novel approach to mixing qualitative and quantitative methods in HIV and STI prevention research.
Penman-Aguilar, Ana; Macaluso, Maurizio; Peacock, Nadine; Snead, M Christine; Posner, Samuel F
2014-04-01
Mixed-method designs are increasingly used in sexually transmitted infection (STI) and HIV prevention research. The authors designed a mixed-method approach and applied it to estimate and evaluate a predictor of continued female condom use (6+ uses, among those who used it at least once) in a 6-month prospective cohort study. The analysis included 402 women who received an intervention promoting use of female and male condoms for STI prevention and completed monthly quantitative surveys; 33 also completed a semistructured qualitative interview. The authors identified a qualitative theme (couples' female condom enjoyment [CFCE]), applied discriminant analysis techniques to estimate CFCE for all participants, and added CFCE to a multivariable logistic regression model of continued female condom use. CFCE related to comfort, naturalness, pleasure, feeling protected, playfulness, ease of use, intimacy, and feeling in control of protection. CFCE was associated with continued female condom use (adjusted odds ratio: 2.8, 95% confidence interval: 1.4-5.6) and significantly improved model fit (p < .001). CFCE predicted continued female condom use. Mixed-method approaches for "scaling up" qualitative findings from small samples to larger numbers of participants can benefit HIV and STI prevention research.
NASA Astrophysics Data System (ADS)
Horochowska, Martyna; Cieślik-Boczula, Katarzyna; Rospenk, Maria
2018-03-01
It has been shown that Prodan emission-excitation fluorescence spectroscopy supported by Parallel Factor (PARAFAC) analysis is a fast, simple and sensitive method for studying the phase transition from the noninterdigitated gel (Lβ‧) state to the interdigitated gel (LβI) phase, triggered by ethanol and 2,2,2-trifluoroethanol (TFE) molecules in dipalmitoylphosphatidylcholine (DPPC) membranes. The relative contribution of lipid phases with the spectral characteristics of each pure phase component is presented as a function of increasing alcohol concentration. Both alcohols can induce formation of the LβI phase, but TFE is a more than six-fold stronger inducer of the interdigitated phase in DPPC membranes than ethanol. Moreover, in the TFE-mixed DPPC membranes, the transition from the Lβ‧ to the LβI phase is accompanied by the formation of a fluid phase, which most probably serves as a boundary phase between the Lβ‧ and LβI regions. In contrast to the three phase-state model of TFE-mixed DPPC membranes, only a two phase-state model was detected in ethanol-mixed DPPC membranes.
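PARAFAC itself is a trilinear least-squares decomposition, and a compact alternating-least-squares version can be sketched in plain numpy on a synthetic excitation-emission-sample cube with two hypothetical phase components. Real analyses typically use dedicated, better-safeguarded implementations, and the recovered factors carry the usual scaling and permutation indeterminacy.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def parafac(T, rank, n_iter=300, seed=0):
    """Alternating least squares for a 3-way CP/PARAFAC decomposition."""
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((d, rank)) for d in T.shape]
    for _ in range(n_iter):
        for n in range(3):
            i, j = [m for m in range(3) if m != n]
            kr = khatri_rao(A[i], A[j])
            gram = (A[i].T @ A[i]) * (A[j].T @ A[j])
            A[n] = unfold(T, n) @ kr @ np.linalg.pinv(gram)
    return A

# Synthetic two-component "spectra": excitation x emission x samples cube
rng = np.random.default_rng(1)
exc, emi = rng.random((40, 2)), rng.random((60, 2))
conc = np.array([[1 - a, a] for a in np.linspace(0, 1, 15)])  # phase fractions
cube = np.einsum('ir,jr,kr->ijk', exc, emi, conc)
A = parafac(cube, rank=2)
scores = A[2]   # relative contribution of each phase component per sample
```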
A study of reacting free and ducted hydrogen/air jets
NASA Technical Reports Server (NTRS)
Beach, H. L., Jr.
1975-01-01
The mixing and reaction of a supersonic jet of hydrogen in coaxial free and ducted high temperature test gases were investigated. The importance of chemical kinetics on computed results, and the utilization of free-jet theoretical approaches to compute enclosed flow fields were studied. Measured pitot pressure profiles were correlated by use of a parabolic mixing analysis employing an eddy viscosity model. All computations, including free, ducted, reacting, and nonreacting cases, use the same value of the empirical constant in the viscosity model. Equilibrium and finite rate chemistry models were utilized. The finite rate assumption allowed prediction of observed ignition delay, but the equilibrium model gave the best correlations downstream from the ignition location. Ducted calculations were made with finite rate chemistry; correlations were, in general, as good as the free-jet results until problems with the boundary conditions were encountered.
An Exploratory Study of the Role of Human Resource Management in Models of Employee Turnover
ERIC Educational Resources Information Center
Ozolina-Ozola, Iveta
2016-01-01
The purpose of this paper is to present the study results of the human resource management role in the voluntary employee turnover models. The mixed methods design was applied. On the basis of the results of the search and evaluation of publications, the 16 models of employee turnover were selected. Applying the method of content analysis, the…
CLUSTERING SOUTH AFRICAN HOUSEHOLDS BASED ON THEIR ASSET STATUS USING LATENT VARIABLE MODELS
McParland, Damien; Gormley, Isobel Claire; McCormick, Tyler H.; Clark, Samuel J.; Kabudula, Chodziwadziwa Whiteson; Collinson, Mark A.
2014-01-01
The Agincourt Health and Demographic Surveillance System has since 2001 conducted a biannual household asset survey in order to quantify household socio-economic status (SES) in a rural population living in northeast South Africa. The survey contains binary, ordinal and nominal items. In the absence of income or expenditure data, the SES landscape in the study population is explored and described by clustering the households into homogeneous groups based on their asset status. A model-based approach to clustering the Agincourt households, based on latent variable models, is proposed. In the case of modeling binary or ordinal items, item response theory models are employed. For nominal survey items, a factor analysis model, similar in nature to a multinomial probit model, is used. Both model types have an underlying latent variable structure—this similarity is exploited and the models are combined to produce a hybrid model capable of handling mixed data types. Further, a mixture of the hybrid models is considered to provide clustering capabilities within the context of mixed binary, ordinal and nominal response data. The proposed model is termed a mixture of factor analyzers for mixed data (MFA-MD). The MFA-MD model is applied to the survey data to cluster the Agincourt households into homogeneous groups. The model is estimated within the Bayesian paradigm, using a Markov chain Monte Carlo algorithm. Intuitive groupings result, providing insight to the different socio-economic strata within the Agincourt region. PMID:25485026
NASA Astrophysics Data System (ADS)
Soulsby, Chris; Dunn, Sarah M.
2003-02-01
Hydrochemical tracers (alkalinity and silica) were used in an end-member mixing analysis (EMMA) of runoff sources in the 10 km2 Allt a' Mharcaidh catchment. A three-component mixing model was used to separate the hydrograph and estimate, to a first approximation, the range of likely contributions of overland flow, shallow subsurface storm flow, and groundwater to the annual hydrograph. A conceptual, catchment-scale rainfall-runoff model (DIY) was also used to separate the annual hydrograph in an equivalent set of flow paths. The two approaches produced independent representations of catchment hydrology that exhibited reasonable agreement. This showed the dominance of overland flow in generating storm runoff and the important role of groundwater inputs throughout the hydrological year. Moreover, DIY was successfully adapted to simulate stream chemistry (alkalinity) at daily time steps. Sensitivity analysis showed that whilst a distinct groundwater source at the catchment scale could be identified, there was considerable uncertainty in differentiating between overland flow and subsurface storm flow in both the EMMA and DIY applications. Nevertheless, the study indicated that the complementary use of tracer analysis in EMMA can increase the confidence in conceptual model structure. However, conclusions are restricted to the specific spatial and temporal scales examined.
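With two tracers and three end-members, the mixing fraction of each runoff component for a stream sample follows from a 3x3 linear system: one water-balance equation plus one mass-balance equation per tracer. The end-member values below are invented for illustration, not the Allt a' Mharcaidh data.

```python
import numpy as np

# Rows: water balance, alkalinity (ueq/L), silica (mg/L); values hypothetical.
end_members = np.array([
    #  overland  subsurface  groundwater
    [   1.0,       1.0,        1.0],
    [  10.0,      40.0,      120.0],
    [   0.5,       2.0,        4.5],
])
stream = np.array([1.0, 55.0, 2.4])   # observed stream-water sample

fractions = np.linalg.solve(end_members, stream)
print(dict(zip(["overland", "subsurface", "groundwater"], fractions.round(3))))
# -> roughly 12% overland, 64% subsurface, 23% groundwater for these numbers
```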
Stability analysis of magnetized neutron stars - a semi-analytic approach
NASA Astrophysics Data System (ADS)
Herbrik, Marlene; Kokkotas, Kostas D.
2017-04-01
We implement a semi-analytic approach for stability analysis, addressing the ongoing uncertainty about stability and structure of neutron star magnetic fields. Applying the energy variational principle, a model system is displaced from its equilibrium state. The related energy density variation is set up analytically, whereas its volume integration is carried out numerically. This facilitates the consideration of more realistic neutron star characteristics within the model compared to analytical treatments. At the same time, our method retains the possibility to yield general information about neutron star magnetic field and composition structures that are likely to be stable. In contrast to numerical studies, classes of parametrized systems can be studied at once, finally constraining realistic configurations for interior neutron star magnetic fields. We apply the stability analysis scheme on polytropic and non-barotropic neutron stars with toroidal, poloidal and mixed fields testing their stability in a Newtonian framework. Furthermore, we provide the analytical scheme for dropping the Cowling approximation in an axisymmetric system and investigate its impact. Our results confirm the instability of simple magnetized neutron star models as well as a stabilization tendency in the case of mixed fields and stratification. These findings agree with analytical studies whose spectrum of model systems we extend by lifting former simplifications.
Development and numerical analysis of low specific speed mixed-flow pump
NASA Astrophysics Data System (ADS)
Li, H. F.; Huo, Y. W.; Pan, Z. B.; Zhou, W. C.; He, M. H.
2012-11-01
With the development of cities, the market prospects for mixed-flow pumps with large flow rate and high head are promising. The KSB Shanghai Pump Co., LTD decided to develop a low-specific-speed mixed-flow pump to meet market requirements. Based on centrifugal pump and axial-flow pump models, and aiming at the characteristics of large flow rate and high head, a new type of guide-vane mixed-flow pump was designed. The computational fluid dynamics method was adopted to analyze the internal flow of the new model and predict its performance. The time-averaged Navier-Stokes equations were closed by the SST k-ω turbulence model to adapt to the internal flow of guide vanes with large curvatures. The multi-reference frame (MRF) method was used to deal with the coupling of the rotating impeller and the static guide vane, and the SIMPLEC method was adopted to achieve the coupled solution of velocity and pressure. The computational results show that there is strong flow impact on the leading edges of the vanes and strong flow separation at the trailing edges of the guide vanes at different working conditions, both of which affect the performance of the pump. Based on the computational results, optimizations were carried out to decrease the impact on the vane leading edges and the flow separation at the trailing edges of the guide vanes. The optimized model was simulated and its performance was predicted. The computational results show that the impact on the vanes and the separation at the trailing edges of the guide vanes disappeared. The high-efficiency range of the optimized pump is wide, and it meets the original design target. The newly designed mixed-flow pump is now being built, and its experimental performance will be available soon.
Water mass mixing: The dominant control on the zinc distribution in the North Atlantic Ocean
NASA Astrophysics Data System (ADS)
Roshan, Saeed; Wu, Jingfeng
2015-07-01
Dissolved zinc (dZn) concentration was determined in the North Atlantic during the U.S. GEOTRACES 2010 and 2011 cruises (GEOTRACES GA03). A relatively poor linear correlation (R2 = 0.756) was observed between dZn and silicic acid (Si), with a slope of 0.0577 nM/µmol/kg. We attribute the relatively poor dZn-Si correlation to the following processes: (a) differential regeneration of zinc relative to silicic acid, (b) mixing of multiple water masses that have different Zn/Si, and (c) zinc sources such as sedimentary or hydrothermal inputs. To quantitatively distinguish these possibilities, we use the results of the Optimum Multi-Parameter Water Mass Analysis by Jenkins et al. (2015) to model the zinc distribution below 500 m. We hypothesized two scenarios: conservative mixing and regenerative mixing. The first (conservative) scenario reproduced the observations with a correlation of R2 = 0.846. In the second scenario, we took a Si-related regeneration into account, which modeled the observations with R2 = 0.867. Through this regenerative mixing scenario, we estimated Zn/Si = 0.0548 nM/µmol/kg, which may be more realistic than the linear regression slope because it accounts for process (b). However, this did not improve the model substantially (R2 = 0.867 versus 0.846), which may indicate an insignificant effect of remineralization on the zinc distribution in this region. The relative weakness of the model-observation correlation (R2 ~ 0.85 for both scenarios) implies that processes (a) and (c) may be plausible. Furthermore, dZn in the upper 500 m exhibited a very poor correlation with apparent oxygen utilization, suggesting a minimal role for the organic matter-associated remineralization process.
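The conservative-mixing scenario amounts to expressing observed dZn as a linear blend of water-mass end-member concentrations weighted by the mixing fractions; a synthetic sketch using non-negative least squares is given below. The fractions and end-member values are invented, not the GA03 results.

```python
import numpy as np
from scipy.optimize import nnls

# F: water-mass fractions per sample (rows sum to 1), as an OMP analysis would
# provide. Fractions and "true" end-member Zn values here are synthetic.
rng = np.random.default_rng(0)
F = rng.dirichlet(alpha=[2, 2, 2], size=200)     # 3 hypothetical water masses
zn_end_true = np.array([0.8, 2.5, 6.0])          # nM, invented
dzn_obs = F @ zn_end_true + rng.normal(0, 0.1, 200)

# Conservative mixing: fit non-negative end-member Zn concentrations
zn_end_fit, _ = nnls(F, dzn_obs)
pred = F @ zn_end_fit
r2 = 1 - np.sum((dzn_obs - pred)**2) / np.sum((dzn_obs - dzn_obs.mean())**2)
print(zn_end_fit.round(2), round(r2, 3))
```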
Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
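The fixed-effects core of a zero-inflated Poisson model is easy to fit by direct likelihood maximization, as sketched below on synthetic data. The compound Poisson random effects of the paper are omitted, and the single covariate is a placeholder, not a study variable.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

# Synthetic counts with structural zeros
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
pi_true = expit(-1.0 + 0.8 * x)        # zero-inflation probability
mu_true = np.exp(0.5 + 0.6 * x)        # Poisson mean
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(mu_true))

def negloglik(params):
    b0, b1, g0, g1 = params
    mu = np.exp(b0 + b1 * x)
    pi = expit(g0 + g1 * x)
    logp_pois = -mu + y * np.log(mu) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-mu)),   # structural or Poisson zero
                  np.log1p(-pi) + logp_pois)             # positive counts
    return -ll.sum()

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
print(fit.x.round(2))   # estimates of (b0, b1, g0, g1)
```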
Analysis of messy data with heteroscedastic in mean models
NASA Astrophysics Data System (ADS)
Trianasari, Nurvita; Sumarni, Cucu
2016-02-01
In data analysis, we are often faced with data that do not meet certain assumptions; such data are often called messy. This problem arises from outliers, which bias estimation. To analyze messy data, there are three approaches: standard analysis, transformation of the data, and non-standard methods of analysis. Simulations were conducted to compare the performance of three procedures for testing equality of means when the model variances are not homogeneous. Each scenario was simulated 500 times. The comparison of means was then carried out using three methods: the Welch test, mixed models, and the Welch test on ranks (Welch-r). Data generation was done in R version 3.1.2. Based on the simulation results, all three methods can be used in both the normal (homoscedastic) and non-normal cases. The three methods work very well on balanced or unbalanced data when there is no violation of the homogeneity-of-variance assumption. For balanced data, the three methods still show excellent performance despite violation of the homogeneity assumption, even when the degree of heterogeneity is high; this is shown by power above 90 percent, with the Welch method (98.4%) and the Welch-r method (97.8%) performing best. For unbalanced data, the Welch method performs very well in the case of positively paired heterogeneity, with 98.2% power, and the mixed models method performs very well in the case of high negatively paired heterogeneity. The Welch-r method works very well in both cases. However, if the level of heterogeneity of variance is very high, the power of all methods decreases, especially for the mixed models method. The methods that still work well enough (power above 50%) are the Welch-r method (62.6%) and the Welch method (58.6%) in the case of balanced data. If the data are unbalanced, the Welch-r method works well enough in the cases of highly heterogeneous positive-positive or negative-negative pairings, with power of 68.8% and 51%, respectively. The Welch method performs well enough only in the case of highly heterogeneous positive-positive pairings, with 64.8% power, while the mixed models method works well in the case of highly heterogeneous negative pairings, with 54.6% power. In general, when variances are not homogeneous, the Welch method applied to ranked data (Welch-r) performs better than the other methods.
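The Welch portion of such a simulation is straightforward with scipy, as sketched below for unbalanced groups with positively paired heteroscedasticity (the larger group has the larger variance). Sample sizes and effect sizes are arbitrary; the mixed-models arm is not reproduced, and Welch-r would simply apply the same Welch test to rank-transformed data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, alpha = 500, 0.05
rej_welch = rej_student = 0
for _ in range(n_sim):
    a = rng.normal(0.0, 1.0, size=20)    # small group, small variance
    b = rng.normal(0.5, 3.0, size=60)    # large group, large variance
    _, p_w = stats.ttest_ind(a, b, equal_var=False)  # Welch test
    _, p_s = stats.ttest_ind(a, b, equal_var=True)   # standard Student t
    rej_welch += p_w < alpha
    rej_student += p_s < alpha
print("power, Welch:", rej_welch / n_sim)
print("power, Student:", rej_student / n_sim)
```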
Bus accident analysis of routes with/without bus priority.
Goh, Kelvin Chun Keong; Currie, Graham; Sarvi, Majid; Logan, David
2014-04-01
This paper summarises findings on road safety performance and bus-involved accidents in Melbourne along roads where bus priority measures had been applied. Results from an empirical analysis of the accident types revealed significant reduction in the proportion of accidents involving buses hitting stationary objects and vehicles, which suggests the effect of bus priority in addressing manoeuvrability issues for buses. A mixed-effects negative binomial (MENB) regression and back-propagation neural network (BPNN) modelling of bus accidents considering wider influences on accident rates at a route section level also revealed significant safety benefits when bus priority is provided. Sensitivity analyses done on the BPNN model showed general agreement in the predicted accident frequency between both models. The slightly better performance recorded by the MENB model results suggests merits in adopting a mixed effects modelling approach for accident count prediction in practice given its capability to account for unobserved location and time-specific factors. A major implication of this research is that bus priority in Melbourne's context acts to improve road safety and should be a major consideration for road management agencies when implementing bus priority and road schemes. Copyright © 2013 Elsevier Ltd. All rights reserved.
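A fixed-effects negative binomial regression, the non-mixed core of the MENB model above, can be sketched as follows on synthetic route-section data. Variable names and coefficients are invented, and the location/time random effects and the BPNN comparison are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
bus_priority = rng.integers(0, 2, n)                    # 1 = priority measures present
traffic = rng.lognormal(mean=9.0, sigma=0.4, size=n)    # exposure proxy
eta = 0.3 - 0.4 * bus_priority + 0.5 * (np.log(traffic) - 9.0)
# Overdispersed counts: NB with dispersion k=2 and mean exp(eta)
y = rng.negative_binomial(n=2, p=2 / (2 + np.exp(eta)))

X = sm.add_constant(np.column_stack([bus_priority, np.log(traffic)]))
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(fit.params.round(3))   # a negative coefficient on bus_priority is expected
```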
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Myhre, Cathrine Lund; Platt, Stephen Matthew; Eckhardt, Sabine; Hermansen, Ove; Schmidbauer, Norbert; Mienert, Jurgen; Vadakkepuliyambatta, Sunil; Bauguitte, Stephane; Pitt, Joseph; Allen, Grant; Bower, Keith; O'Shea, Sebastian; Gallagher, Martin; Percival, Carl; Pyle, John; Cain, Michelle; Stohl, Andreas
2017-04-01
Methane stored in seabed reservoirs such as methane hydrates can reach the atmosphere in the form of bubbles or dissolved in water. Hydrates could destabilize with rising temperature, further increasing greenhouse gas emissions in a warming climate. To assess the impact of oceanic emissions from the area west of Svalbard, where methane hydrates are abundant, we used measurements collected with a research aircraft (FAAM) and a ship (Helmer Hansen) during summer 2014, and from the Zeppelin Observatory for the full year. We present a model-supported analysis of the atmospheric CH4 mixing ratios measured by the different platforms. To address uncertainty about where CH4 emissions actually occur, we explored three scenarios: areas with known seeps, a hydrate stability model and an ocean depth criterion. We then used a budget analysis and a Lagrangian particle dispersion model to compare measurements taken upwind and downwind of the potential CH4 emission areas. We found small differences between the CH4 mixing ratios measured upwind and downwind of the potential emission areas during the campaign. By taking into account measurement and sampling uncertainties and by determining the sensitivity of the measured mixing ratios to potential oceanic emissions, we provide upper limits for the CH4 fluxes. The CH4 flux during the campaign was small, with an upper limit of 2.5 nmol m⁻² s⁻¹ in the stability model scenario. The Zeppelin Observatory data for 2014 suggest CH4 fluxes from the Svalbard continental platform below 0.2 Tg/yr. All estimates are in the lower range of values previously reported.
A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.
2016-10-01
Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
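The connection exploited here, that the same mixing model drives both the concentration PDF and the concentration variance, can be illustrated with the textbook IEM model, for which the variance decays exponentially. The particle sketch below checks the simulated variance against the analytic rate; it is only the standard IEM model, not the improved mixing model proposed in the paper, and all parameters are arbitrary.

```python
import numpy as np

# IEM/LMSE mixing: each particle concentration relaxes toward the mean,
#   dc/dt = -0.5 * (c - <c>) / tau,  so  d(var)/dt = -var / tau.
rng = np.random.default_rng(0)
c = rng.choice([0.0, 1.0], size=50000)   # unmixed initial condition
dt, tau, t_end = 0.001, 0.5, 2.0
var0 = c.var()
t = 0.0
while t < t_end:
    c += -0.5 * (c - c.mean()) * dt / tau
    t += dt
print(c.var(), var0 * np.exp(-t_end / tau))   # particle vs analytic variance
```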
NASA Astrophysics Data System (ADS)
Ancellet, Gerard; Pelon, Jacques; Totems, Julien; Chazette, Patrick; Bazureau, Ariane; Sicard, Michaël; Di Iorio, Tatiana; Dulac, Francois; Mallet, Marc
2016-04-01
Long-range transport of biomass burning (BB) aerosols between North America and the Mediterranean region took place in June 2013. A large number of ground-based and airborne lidar measurements were deployed in the western Mediterranean during the Chemistry-AeRosol Mediterranean EXperiment (ChArMEx) intensive observation period. A detailed analysis of the potential North American aerosol sources is conducted including the assessment of their transport to Europe using forward simulations of the FLEXPART Lagrangian particle dispersion model initialized using satellite observations by MODIS and CALIOP. The three-dimensional structure of the aerosol distribution in the ChArMEx domain observed by the ground-based lidars (Minorca, Barcelona and Lampedusa), a Falcon-20 aircraft flight and three CALIOP tracks, agrees very well with the model simulation of the three major sources considered in this work: Canadian and Colorado fires, a dust storm from western US and the contribution of Saharan dust streamers advected from the North Atlantic trade wind region into the westerlies region. Four aerosol types were identified using the optical properties of the observed aerosol layers (aerosol depolarization ratio, lidar ratio) and the transport model analysis of the contribution of each aerosol source: (i) pure BB layer, (ii) weakly dusty BB, (iii) significant mixture of BB and dust transported from the trade wind region, and (iv) the outflow of Saharan dust by the subtropical jet and not mixed with BB aerosol. The contribution of the Canadian fires is the major aerosol source during this episode while mixing of dust and BB is only significant at an altitude above 5 km. The mixing corresponds to a 20-30 % dust contribution in the total aerosol backscatter. The comparison with the MODIS aerosol optical depth horizontal distribution during this episode over the western Mediterranean Sea shows that the Canadian fire contributions were as large as the direct northward dust outflow from Sahara.
NASA Astrophysics Data System (ADS)
Ancellet, G.; Pelon, J.; Totems, J.; Chazette, P.; Bazureau, A.; Sicard, M.; Di Iorio, T.; Dulac, F.; Mallet, M.
2015-11-01
Long-range transport of biomass burning (BB) aerosols between North America and the Mediterranean region took place in June 2013. A large number of ground-based and airborne lidar measurements were deployed in the Western Mediterranean during the Chemistry-AeRosol Mediterranean EXperiment (ChArMEx) intensive observation period. A detailed analysis of the potential North American aerosol sources is conducted including the assessment of their transport to Europe using forward simulations of the FLEXPART Lagrangian particle dispersion model initialized using satellite observations by MODIS and CALIOP. The three-dimensional structure of the aerosol distribution in the ChArMEx domain observed by the ground-based lidars (Menorca, Barcelona and Lampedusa), a Falcon-20 aircraft flight and three CALIOP tracks agrees very well with the model simulation of the three major sources considered in this work: Canadian and Colorado fires, a dust storm from the Western US and the contribution of Saharan dust streamers advected from the North Atlantic trade wind region into the Westerlies region. Four aerosol types were identified using the optical properties of the observed aerosol layers (aerosol depolarization ratio, lidar ratio) and the transport model analysis of the contribution of each aerosol source: (I) pure BB layer, (II) weakly dusty BB, (III) significant mixture of BB and dust transported from the trade wind region, and (IV) the outflow of Saharan dust by the subtropical jet and not mixed with BB aerosol. The contribution of the Canadian fires is the major aerosol source during this episode, while mixing of dust and BB is only significant at altitudes above 5 km. The mixing corresponds to a 20-30 % dust contribution in the total aerosol backscatter. The comparison with the MODIS AOD horizontal distribution during this episode over the Western Mediterranean Sea shows that the Canadian fire contributions were as large as the direct northward dust outflow from the Sahara.
Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin
2017-01-01
Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed-model (WGLIMM) was used to deal with the hierarchical data structure. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1% while the prevalence in four residence groups - urban, second city, suburban, and town/rural, were 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of WGLIMM analysis showed that there was a residence effect (p<0.0001) and residence groups had significant interactions with gender, age group, education level, and employment status (p<0.05). Multiple logistic regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence regions, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in second city and suburban groups. Infrequent binge drinking was associated with CRC screening in urban and suburban groups, while current smoking was a protective factor in urban and town/rural groups. Conclusions: Mixed models are useful to deal with clustered survey data. Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions. PMID:28952708
Unifying error structures in commonly used biotracer mixing models.
Stock, Brian C; Semmens, Brice X
2016-10-01
Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
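The basic Bayesian mixing-model computation, a prior over source proportions updated by the likelihood of the observed tracer values, can be illustrated with simple importance sampling. The one-tracer, three-source sketch below uses invented values and the simplest residual-only error formulation, not the unified error structure introduced in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
src_mean = np.array([-28.0, -21.0, -14.0])         # source delta-13C means (per mil)
mix_obs = np.array([-22.5, -23.1, -21.8, -22.0])   # consumer observations
sigma = 1.0                                        # residual SD (assumed known)

props = rng.dirichlet(np.ones(3), size=200_000)    # uniform prior on the simplex
mu = props @ src_mean                              # predicted mixture signature
# Log-likelihood of all observations under each candidate proportion vector
loglik = (-0.5 * ((mix_obs[None, :] - mu[:, None]) / sigma) ** 2).sum(axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()
posterior_mean = w @ props
print(posterior_mean.round(3))   # estimated diet proportions of the 3 sources
```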
Lagrangian mixed layer modeling of the western equatorial Pacific
NASA Technical Reports Server (NTRS)
Shinoda, Toshiaki; Lukas, Roger
1995-01-01
Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.
Functional mixed effects spectral analysis
KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG
2011-01-01
In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
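The raw material of this framework is the log-periodogram of each time series segment. The sketch below computes segment log-periodograms for several units with unit-specific dynamics and forms crude fixed-effect and unit-deviation estimates by simple averaging; the penalized spline and BLUP machinery of the paper is omitted, and all sizes and AR coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_units, n_segs = 256, 5, 8
freqs = np.fft.rfftfreq(n)

log_pgrams = np.empty((n_units, n_segs, freqs.size))
for u in range(n_units):
    ar = 0.5 + 0.08 * u                      # unit-specific AR(1) dynamics
    for s in range(n_segs):
        x = np.zeros(n)
        e = rng.standard_normal(n)
        for t in range(1, n):
            x[t] = ar * x[t - 1] + e[t]
        pgram = np.abs(np.fft.rfft(x)) ** 2 / n
        log_pgrams[u, s] = np.log(pgram + 1e-12)

# Population-average ("fixed effect") estimate and unit-level deviations
fixed = log_pgrams.mean(axis=(0, 1))
unit_dev = log_pgrams.mean(axis=1) - fixed   # crude random-effect estimates
```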
Numerical analysis of mixing by sharp-edge-based acoustofluidic micromixer
NASA Astrophysics Data System (ADS)
Nama, Nitesh; Huang, Po-Hsun; Jun Huang, Tony; Costanzo, Francesco
2015-11-01
Recently, acoustically oscillated sharp edges have been employed to realize rapid and homogeneous mixing at microscales (Huang, Lab on a Chip, 13, 2013). Here, we present a numerical model, qualitatively validated by experimental results, to analyze the acoustic mixing inside a sharp-edge-based micromixer. We extend our previous numerical model (Nama, Lab on a Chip, 14, 2014) to combine the Generalized Lagrangian Mean (GLM) theory with the convection-diffusion equation, while also allowing for the presence of a background flow as observed in a typical sharp-edge-based micromixer. We employ a perturbation approach to divide the flow variables into zeroth-, first-, and second-order fields, which are successively solved to obtain the Lagrangian mean velocity. The Lagrangian mean velocity and the background flow velocity are then employed with the convection-diffusion equation to obtain the concentration profile. We characterize the effects of various operational and geometrical parameters to suggest potential design changes for improving the mixing performance of the sharp-edge-based micromixer. Lastly, we investigate the possibility of generating a spatio-temporally controllable concentration gradient by placing sharp-edge structures inside the microchannel.
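To make the last modeling step concrete, here is a minimal explicit finite-difference sketch of a convection-diffusion solve of the kind the authors couple to the Lagrangian mean velocity; the uniform velocity, grid, diffusivity, and periodic boundaries are placeholder assumptions rather than the paper's GLM-derived fields.

```python
import numpy as np

# Minimal 2-D convection-diffusion step: dc/dt + u . grad(c) = D laplacian(c).
# In the paper, u would be the Lagrangian mean velocity plus the background
# flow; here a uniform placeholder velocity and periodic boundaries (via
# np.roll) keep the sketch short.

nx, ny = 200, 60
dx = 1e-5                 # 10 micron grid spacing (assumed)
D = 1e-9                  # dye diffusivity (m^2/s, assumed)
u = 1e-3                  # placeholder mean velocity (m/s)
dt = 0.2 * min(dx**2 / (4 * D), dx / u)   # respect diffusive and CFL limits

c = np.zeros((ny, nx))
c[:ny // 2, 0] = 1.0      # dyed fluid enters through half the inlet

def step(c):
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0)
           + np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    dcdx = (c - np.roll(c, 1, 1)) / dx    # upwind difference in x for u > 0
    cn = c + dt * (D * lap - u * dcdx)
    cn[:ny // 2, 0], cn[ny // 2:, 0] = 1.0, 0.0   # re-impose inlet condition
    return cn

for _ in range(2000):
    c = step(c)
# Smaller std across the outlet means more complete mixing.
print("outlet mixing index (std across channel):", c[:, -1].std())
```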
Experimental study of stratified jet by simultaneous measurements of velocity and density fields
NASA Astrophysics Data System (ADS)
Xu, Duo; Chen, Jun
2012-07-01
Stratified flows with small density differences commonly exist in geophysical and engineering applications, and often involve the interaction of turbulence and buoyancy effects. A combined particle image velocimetry (PIV) and planar laser-induced fluorescence (PLIF) system is developed to measure the velocity and density fields in a dense jet discharged horizontally into a tank filled with light fluid. The illumination of PIV particles and excitation of PLIF dye are achieved by a dual-head pulsed Nd:YAG laser and two CCD cameras with a set of optical filters. The procedure for matching the refractive indexes of the two fluids and calibrating the combined system is presented, along with a quantitative analysis of the measurement uncertainties. The flow structures and mixing dynamics within the central vertical plane are studied by examining the averaged parameters, the turbulent kinetic energy budget, and models of the momentum and buoyancy fluxes. Downstream, the velocity and density profiles display strong asymmetry about the jet centerline, which is attributed to the fact that stable stratification reduces mixing while unstable stratification enhances it. In the stably stratified region, most of the turbulence production is consumed by mean-flow convection, whereas in the unstably stratified region turbulence production is nearly balanced by viscous dissipation. The experimental data also indicate that, at downstream locations, the mixing-length model performs better in the mixing zone of the stably stratified region, whereas in other regions eddy viscosity/diffusivity models with static model coefficients effectively represent the momentum and buoyancy flux terms. The measured turbulent Prandtl number displays strong spatial variation in the stratified jet.
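The post-processing chain implied above (Reynolds decomposition, turbulent fluxes, TKE budget terms) can be sketched in a few lines; the random arrays below merely stand in for the measured PIV/PLIF field sequences, and the grid spacing is an assumed value.

```python
import numpy as np

# Sketch of the post-processing applied to simultaneous PIV/PLIF fields:
# Reynolds-decompose velocity and density, then form the turbulent fluxes
# and a shear-production term of the TKE budget. The arrays stand in for a
# time series of instantaneous planar fields (nt snapshots of ny x nx).
rng = np.random.default_rng(2)
nt, ny, nx = 500, 64, 64
u = rng.normal(0.5, 0.05, (nt, ny, nx))      # streamwise velocity (placeholder)
w = rng.normal(0.0, 0.03, (nt, ny, nx))      # vertical velocity (placeholder)
rho = rng.normal(1010.0, 0.5, (nt, ny, nx))  # density from PLIF (placeholder)
dz = 1e-3                                    # vertical grid spacing (m, assumed)

U, W, R = u.mean(0), w.mean(0), rho.mean(0)  # time-averaged fields
up, wp, rp = u - U, w - W, rho - R           # fluctuating fields

uw = (up * wp).mean(0)                       # Reynolds stress <u'w'>
rw = (rp * wp).mean(0)                       # turbulent density flux <rho'w'>
tke = 0.5 * (up**2 + wp**2).mean(0)          # planar turbulent kinetic energy
dUdz = np.gradient(U, dz, axis=0)
production = -uw * dUdz                      # shear production term

g, rho0 = 9.81, 1000.0
buoyancy_flux = -(g / rho0) * rw             # buoyancy production/destruction
print(tke.mean(), production.mean(), buoyancy_flux.mean())
```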
Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs
2018-01-01
Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome of a coordinate-based meta-analysis. More specifically, we consider the influence of the chosen group-level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which uses only peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme to a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and type II errors. However, it requires more studies than other procedures to achieve comparable activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344
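For readers unfamiliar with the pooling step, a minimal DerSimonian-Laird random effects meta-analysis, the generic form of the random effects model class compared above, can be sketched as follows; the per-study effects and variances are made up for illustration.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random effects pooling of per-study effect sizes at
    one location; inputs here are illustrative, not from the paper."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    k = effects.size
    w = 1.0 / variances                              # fixed effects weights
    theta_fe = (w * effects).sum() / w.sum()
    Q = (w * (effects - theta_fe) ** 2).sum()        # heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (Q - (k - 1)) / c)               # between-study variance
    w_re = 1.0 / (variances + tau2)                  # random effects weights
    theta_re = (w_re * effects).sum() / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())
    return theta_re, se, tau2

theta, se, tau2 = random_effects_meta(
    effects=[0.42, 0.30, 0.55, 0.12, 0.38],          # made-up study effects
    variances=[0.02, 0.03, 0.04, 0.02, 0.05])
print(f"pooled effect {theta:.3f} +/- {1.96 * se:.3f}, tau^2 = {tau2:.3f}")
```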
NASA Astrophysics Data System (ADS)
Iakshina, D. F.; Golubeva, E. N.
2017-11-01
The vertical distribution of hydrological characteristics in the upper ocean layer is largely shaped by turbulent and convective mixing, processes that are not resolved by the equations for the large-scale ocean. It is therefore necessary to include additional parameterizations of these processes in numerical models. In this paper we carry out a comparative analysis of different vertical mixing parameterizations in simulations of the climatic variability of Arctic water and sea ice circulation. The 3D regional numerical model for the Arctic and North Atlantic developed at the ICMMG SB RAS (Institute of Computational Mathematics and Mathematical Geophysics of the Siberian Branch of the Russian Academy of Sciences) and the GOTM package (General Ocean Turbulence Model, http://www.gotm.net/) were used as the numerical instruments. NCEP/NCAR reanalysis data were used to determine the surface fluxes for ice and ocean. The following turbulence closure schemes were used for the vertical mixing parameterizations: (1) an integration scheme based on the Richardson number criterion (RI); (2) a second-order TKE scheme with Canuto-A coefficients (CANUTO); (3) a first-order TKE scheme with Schumann and Gerz coefficients (TKE-1); and (4) the K-profile parameterization (KPP). In addition, we investigated some important characteristics of the Arctic Ocean state, including the intensity of Atlantic water inflow, the ice cover state, and the freshwater content of the Beaufort Sea.
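The abstract does not spell out the RI scheme's exact form; as a representative example of this family, here is a sketch of the Richardson-number-dependent mixing coefficients of Pacanowski and Philander (1981), which may differ from the scheme actually used in the study.

```python
import numpy as np

def pacanowski_philander(ri, nu0=1e-2, nu_b=1e-4, kappa_b=1e-5,
                         alpha=5.0, n=2):
    """Richardson-number-dependent vertical mixing coefficients in the
    spirit of Pacanowski & Philander (1981); a standard example of the
    'RI'-type closures, not necessarily the exact scheme compared above."""
    ri = np.maximum(ri, 0.0)                    # stable shear flow regime
    nu = nu0 / (1.0 + alpha * ri) ** n + nu_b   # eddy viscosity (m^2/s)
    kappa = nu / (1.0 + alpha * ri) + kappa_b   # eddy diffusivity (m^2/s)
    return nu, kappa

# Ri = N^2 / (du/dz)^2 from the model's stratification and shear:
N2, shear2 = 1e-5, 4e-5                         # illustrative values
print(pacanowski_philander(np.array([N2 / shear2])))
```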
NASA Astrophysics Data System (ADS)
Kasi Viswanath, A.; Smith, Wayne L.; Patterson, H.
1982-04-01
Crystals of K2Pt(CN)6 doped with Pt(CN)4^2- show an absorption band at 337 nm, which is assigned to a mixed-valence (MV) transition from Pt(II) to Pt(IV). From a Hush model analysis, the absorption band is interpreted as class II in the Robin-Day scheme. When the MV band is laser excited at 337 nm, emission is observed from Pt(CN)4^2- clusters.
Designing a mixed methods study in primary care.
Creswell, John W; Fetters, Michael D; Ivankova, Nataliya V
2004-01-01
Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research.
Comparison of an Agent-based Model of Disease Propagation with the Generalised SIR Epidemic Model
2009-08-01
has become a practical method for conducting epidemiological modelling. In the agent-based approach the whole township can be modelled as a system of...SIR system was initially developed based on a very simplified model of social interaction. For instance, an assumption of uniform population mixing was...simulating the progress of a disease within a host and of transmission between hosts is based upon Transportation Analysis and Simulation System
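The generalized SIR model referenced here reduces, in its classical form, to three coupled ODEs under the uniform-mixing assumption the snippet mentions; a minimal integration sketch (with illustrative parameter values) follows.

```python
import numpy as np
from scipy.integrate import odeint

# Classical SIR model under the uniform population mixing assumption:
# every individual contacts every other at the same rate, in contrast to
# the spatially explicit contacts of an agent-based model.

def sir(y, t, beta, gamma, N):
    S, I, R = y
    dS = -beta * S * I / N          # new infections
    dI = beta * S * I / N - gamma * I
    dR = gamma * I                  # recoveries
    return [dS, dI, dR]

N = 10_000                          # township population (illustrative)
beta, gamma = 0.3, 0.1              # contact and recovery rates (assumed)
t = np.linspace(0, 160, 161)
sol = odeint(sir, [N - 1, 1, 0], t, args=(beta, gamma, N))
print("peak infected:", int(sol[:, 1].max()))
```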
Physical Controls on Biogeochemical Processes in Intertidal Zones of Beach Aquifers
NASA Astrophysics Data System (ADS)
Heiss, James W.; Post, Vincent E. A.; Laattoe, Tariq; Russoniello, Christopher J.; Michael, Holly A.
2017-11-01
Marine ecosystems are sensitive to inputs of chemicals from submarine groundwater discharge. Tidally influenced saltwater-freshwater mixing zones in beach aquifers can host biogeochemical transformations that modify chemical loads prior to discharge. A numerical variable-density groundwater flow and reactive transport model was used to evaluate the physical controls on reactivity for mixing-dependent and mixing-independent reactions in beach aquifers, represented as denitrification and sulfate reduction, respectively. A sensitivity analysis was performed across typical values of tidal amplitude, hydraulic conductivity, terrestrial freshwater flux, beach slope, dispersivity, and DOC reactivity. For the model setup and conditions tested, the simulations demonstrate that denitrification can remove up to 100% of terrestrially derived nitrate, and sulfate reduction can transform up to 8% of seawater-derived sulfate prior to discharge. Tidally driven mixing between saltwater and freshwater promotes denitrification along the boundary of the intertidal saltwater circulation cell in pore water between 1 and 10 ppt. The denitrification zone occupies on average 49% of the mixing zone. Denitrification rates are highest on the landward side of the circulation cell and decrease along circulating flow paths. Reactivity for mixing-dependent reactions increases with the size of the mixing zone and solute supply, while mixing-independent reactivity is controlled primarily by solute supply. The results provide insights into the types of beaches most efficient in altering fluxes of chemicals prior to discharge and could be built upon to help engineer beaches to enhance reactivity. The findings have implications for management to protect coastal ecosystems and the estimation of chemical fluxes to the ocean.
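The contrast between mixing-dependent and mixing-independent reactivity can be illustrated with a toy end-member mixing calculation; the rate laws, constants, and end-member concentrations below are illustrative only and are not taken from the paper's reactive transport model.

```python
import numpy as np

# A mixing-dependent reaction (denitrification) needs reactants carried by
# different end-members, so its rate peaks where fresh and saline pore
# water mix; a mixing-independent reaction (sulfate reduction) needs only
# one end-member's solute, so its rate simply tracks supply.
salinity = np.linspace(0, 35, 200)           # pore-water salinity (ppt)
f_sea = salinity / 35.0                      # conservative seawater fraction

no3 = 100.0 * (1 - f_sea)                    # nitrate from fresh groundwater (uM)
doc_marine = 200.0 * f_sea                   # labile DOC from seawater (uM)
so4 = 28000.0 * f_sea                        # sulfate from seawater (uM)

k1, k2 = 1e-4, 1e-6                          # made-up rate constants
denitrification = k1 * no3 * doc_marine      # bimolecular: peaks mid-mixing
sulfate_reduction = k2 * so4                 # first-order: supply-limited

peak = salinity[np.argmax(denitrification)]
print(f"toy denitrification rate peaks near {peak:.1f} ppt")
```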
Modeled forest inventory data suggest climate benefits from fuels management
Jeremy S. Fried; Theresa B. Jain; Jonathan. Sandquist
2013-01-01
As part of a recent synthesis addressing fuel management in dry, mixed-conifer forests we analyzed more than 5,000 Forest Inventory and Analysis (FIA) plots, a probability sample that represents 33 million acres of these forests throughout Washington, Oregon, Idaho, Montana, Utah, and extreme northern California. We relied on the BioSum analysis framework that...
Analysis of High School English Curriculum Materials through Rasch Measurement Model and Maxqda
ERIC Educational Resources Information Center
Batdi, Veli; Elaldi, Senel
2016-01-01
The purpose of the study is to analyze high school English curriculum materials (ECM) through the FACETS analysis and MAXQDA-11 programs. A mixed methods approach, combining quantitative and qualitative methods, was used with three samples of English teachers in Elazig during the 2014-2015 academic year. While the quantitative phase of the study…
Improving Treatment Plan Implementation in Schools: A Meta-Analysis of Single Subject Design Studies
ERIC Educational Resources Information Center
Noell, George H.; Gansle, Kristin A.; Mevers, Joanna Lomas; Knox, R. Maria; Mintz, Joslyn Cynkus; Dahir, Amanda
2014-01-01
Twenty-nine peer-reviewed journal articles that analyzed intervention implementation in schools using single-case experimental designs were meta-analyzed. These studies reported 171 separate data paths and provided 3,991 data points. The meta-analysis was accomplished by fitting mixed linear growth models to data extracted from graphs. This…
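A sketch of the kind of mixed linear growth model described, written with statsmodels; the data file and column names are hypothetical, and the exact fixed- and random-effects specification used in the meta-analysis may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Mixed linear growth model for meta-analyzing single-case graphs: outcome
# over time within each data path, with a phase (baseline vs intervention)
# effect, and random intercepts/slopes per case. File and column names are
# hypothetical placeholders.
data = pd.read_csv("extracted_points.csv")   # columns: case, time, phase, y

model = smf.mixedlm(
    "y ~ time * phase",                      # fixed effects: growth + phase shift
    data,
    groups=data["case"],                     # one random-effects cluster per case
    re_formula="~time",                      # random intercept and slope
)
result = model.fit()
print(result.summary())
```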
Verification of Orthogrid Finite Element Modeling Techniques
NASA Technical Reports Server (NTRS)
Steeve, B. E.
1996-01-01
The stress analysis of orthogrid structures, specifically those with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process while still adequately capturing the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques against test results from a loaded orthogrid panel. The finite element models include a beam model, a shell model, and a mixed beam-and-shell element model. Results show that the shell element model performs best, but that the simpler beam and mixed beam-and-shell models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.
THRSTER: A THRee-STream Ejector Ramjet Analysis and Design Tool
NASA Technical Reports Server (NTRS)
Chue, R. S.; Sabean, J.; Tyll, J.; Bakos, R. J.
2000-01-01
An engineering tool for analyzing ejectors in rocket-based combined cycle (RBCC) engines has been developed. A key technology for multi-cycle RBCC propulsion systems is the ejector, which functions as the compression stage of the ejector ramjet cycle. The THRee STream Ejector Ramjet analysis tool was developed to analyze the complex aerothermodynamic and combustion processes that occur in this device. The formulated model consists of three quasi-one-dimensional streams, one each for the ejector primary flow, the secondary flow, and the mixed region. The model space-marches through the mixer, combustor, and nozzle to evaluate the solution along the engine. In its present form, the model is intended for an analysis mode in which the diffusion rates of the primary and secondary streams into the mixed stream are stipulated. The model offers the ability to analyze the highly two-dimensional ejector flowfield while still benefiting from the simplicity and speed of an engineering tool. To validate the developed code, wall static pressure measurements from the Penn State and NASA-ART RBCC experiments were compared with the results generated by the code. The calculated solutions were generally found to be in satisfactory agreement with the pressure measurements along the engines, although further modeling effort may be required when a strong shock train forms at the rocket exhaust. The range of parameters over which the code generates valid results is presented and discussed.
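As a much-simplified illustration of the conservation bookkeeping underlying ejector analysis, the following sketch computes an idealized constant-pressure mixed-out state for two streams; THRSTER's quasi-one-dimensional marching with stipulated diffusion rates is far more detailed, and all inlet values here are illustrative.

```python
# Idealized constant-pressure mixed-out state for a primary and a secondary
# stream, the textbook core of ejector analysis. Only mass, momentum, and
# energy bookkeeping is shown; combustion, area variation, and diffusion
# rates are omitted. Inlet values are illustrative assumptions.

CP = 1004.5                      # specific heat of air, J/(kg K) (assumed)

def mixed_out(mdot1, u1, T01, mdot2, u2, T02):
    mdot = mdot1 + mdot2                             # mass conservation
    u = (mdot1 * u1 + mdot2 * u2) / mdot             # momentum (constant p)
    T0 = (mdot1 * T01 + mdot2 * T02) / mdot          # energy (constant cp)
    return u, T0

# Primary (rocket exhaust) and secondary (entrained air) streams:
u_m, T0_m = mixed_out(mdot1=5.0, u1=2500.0, T01=3200.0,
                      mdot2=20.0, u2=200.0, T02=300.0)
print(f"mixed velocity {u_m:.0f} m/s, mixed total temperature {T0_m:.0f} K")
```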