NASA Technical Reports Server (NTRS)
Kuchar, A. P.; Chamberlin, R.
1983-01-01
As part of the NASA Energy Efficient Engine program, scale-model performance tests of a mixed flow exhaust system were conducted. The tests were used to evaluate the performance of exhaust system mixers for high-bypass, mixed-flow turbofan engines. The tests indicated that: (1) mixer penetration has the most significant effect on both mixing effectiveness and mixer pressure loss; (2) increasing mixing/tailpipe length improves mixing effectiveness; (3) reducing the gap between the mixer and centerbody increases mixing effectiveness; (4) mixer cross-sectional shape influences mixing effectiveness; (5) lobe number affects the degree of mixing; and (6) mixer aerodynamic pressure losses are a function of the secondary flows inherent to the lobed mixer concept.
NASA Astrophysics Data System (ADS)
Gibson, Angus H.; Hogg, Andrew McC.; Kiss, Andrew E.; Shakespeare, Callum J.; Adcroft, Alistair
2017-11-01
We examine the separate contributions to spurious mixing from horizontal and vertical processes in an ALE ocean model, MOM6, using reference potential energy (RPE). The RPE is a global diagnostic which changes only due to mixing between density classes. We extend this diagnostic to a sub-timestep timescale in order to individually separate contributions to spurious mixing through horizontal (tracer advection) and vertical (regridding/remapping) processes within the model. We both evaluate the overall spurious mixing in MOM6 against previously published output from other models (MOM5, MITGCM and MPAS-O), and investigate impacts on the components of spurious mixing in MOM6 across a suite of test cases: a lock exchange, internal wave propagation, and a baroclinically-unstable eddying channel. The split RPE diagnostic demonstrates that the spurious mixing in a lock exchange test case is dominated by horizontal tracer advection, due to the spatial variability in the velocity field. In contrast, the vertical component of spurious mixing dominates in an internal waves test case. MOM6 performs well in this test case owing to its quasi-Lagrangian implementation of ALE. Finally, the effects of model resolution are examined in a baroclinic eddies test case. In particular, the vertical component of spurious mixing dominates as horizontal resolution increases, an important consideration as global models evolve towards higher horizontal resolutions.
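The RPE diagnostic referred to above can be illustrated with a minimal sketch (assuming a flat-bottomed domain of uniform horizontal area, with density, height and volume given per grid cell; this is a generic illustration, not MOM6's implementation):

```python
import numpy as np

def reference_potential_energy(rho, z, cell_volume, area, g=9.81):
    """Reference potential energy (RPE) of a closed basin.

    rho         : 1-D array of cell densities (kg m^-3)
    z           : 1-D array of cell-centre heights (m, increasing upward)
    cell_volume : 1-D array of cell volumes (m^3)
    area        : horizontal area of the (assumed flat-bottomed) domain (m^2)

    The fluid is adiabatically re-sorted so that the densest parcels lie at the
    bottom; the potential energy of that sorted state is the RPE.  In a closed,
    adiabatic system RPE increases only through mixing between density classes,
    so any increase beyond the physically prescribed mixing is spurious.
    """
    order = np.argsort(rho)[::-1]              # densest first -> placed deepest
    rho_s, vol_s = rho[order], cell_volume[order]

    thickness = vol_s / area                   # layer thickness of each sorted parcel
    z_top = z.min() + np.cumsum(thickness)     # stack parcels upward from the bottom
    z_mid = z_top - 0.5 * thickness

    return g * np.sum(rho_s * vol_s * z_mid)
```

Differencing this quantity between time steps, and further splitting the difference between the tracer-advection and regridding/remapping stages of a step, is the essence of the sub-timestep decomposition described above.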
Quantifying spatial distribution of spurious mixing in ocean models.
Ilıcak, Mehmet
2016-12-01
Numerical mixing is inevitable in ocean models due to tracer advection schemes. Until now, there has been no robust way to identify the regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of spurious diapycnal mixing in an ocean model. This new method is an extension of the available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic eddies test cases and can quantify both the amount and the location of numerical mixing. We find that high-shear areas are the main regions susceptible to numerical truncation errors. We also use the new method to quantify the numerical mixing associated with different horizontal momentum closures, and conclude that the Smagorinsky viscosity produces less numerical mixing than the Leith viscosity when the same non-dimensional constant is used.
A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.
2016-10-01
Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and, because PDF methods are rather elaborate, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much more tractable concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
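For readers unfamiliar with mixing models in PDF methods, the sketch below shows the classical IEM (interaction by exchange with the mean) model, the simplest member of the family such work builds on; the time step, mixing timescale and initial ensemble are illustrative values, not taken from the study:

```python
import numpy as np

def iem_mixing_step(c, dt, t_mix):
    """One IEM step: each particle concentration relaxes toward the ensemble mean,
       dc_i/dt = -(c_i - <c>) / (2 * t_mix), which preserves the mean and decays
       the concentration variance at the rate 1/t_mix."""
    c_mean = c.mean()
    return c_mean + (c - c_mean) * np.exp(-dt / (2.0 * t_mix))

rng = np.random.default_rng(0)
c = rng.normal(1.0, 0.3, size=10_000)      # initial concentration ensemble
for _ in range(200):
    c = iem_mixing_step(c, dt=0.01, t_mix=0.5)
print(c.mean(), c.var())                   # mean preserved, variance strongly reduced
```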
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite three dimensionality.
The Performance of IRT Model Selection Methods with Mixed-Format Tests
ERIC Educational Resources Information Center
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.
2012-01-01
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
A mixing timescale model for TPDF simulations of turbulent premixed flames
Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...
2017-02-06
Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.
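For context, the baseline closure that the new algebraic model replaces ties the scalar mixing rate to the turbulence frequency through a constant mechanical-to-scalar timescale ratio (standard textbook form; the abstract does not give the new model's expression):
\[
\frac{1}{\tau_\phi} = C_\phi \,\frac{\varepsilon}{k},
\]
where $k$ is the turbulent kinetic energy, $\varepsilon$ its dissipation rate, and $C_\phi$ the constant ratio; the proposed model in effect lets this ratio vary between the turbulence-dominated and flamelet mixing limits.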
Pulse Jet Mixing Tests With Noncohesive Solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Perry A.; Bamberger, Judith A.; Enderlin, Carl W.
2012-02-17
This report summarizes results from pulse jet mixing (PJM) tests with noncohesive solids in Newtonian liquid. The tests were conducted during FY 2007 and 2008 to support the design of mixing systems for the Hanford Waste Treatment and Immobilization Plant (WTP). Tests were conducted at three geometric scales using noncohesive simulants, and the test data were used to develop models predicting two measures of mixing performance for full-scale WTP vessels. The models predict the cloud height (the height to which solids will be lifted by the PJM action) and the critical suspension velocity (the minimum velocity needed to ensure all solids are suspended off the floor, though not fully mixed). From the cloud height, the concentration of solids at the pump inlet can be estimated. The predicted critical suspension velocity for lifting all solids is not precisely the same as the mixing requirement for 'disturbing' a sufficient volume of solids, but the values will be similar and closely related. These predictive models were successfully benchmarked against larger scale tests and compared well with results from computational fluid dynamics simulations. The application of the models to assess mixing in WTP vessels is illustrated in examples for 13 distinct designs and selected operational conditions. The values selected for these examples are not final; thus, the estimates of performance should not be interpreted as final conclusions of design adequacy or inadequacy. However, this work does reveal that several vessels may require adjustments to design, operating features, or waste feed properties to ensure confidence in operation. The models described in this report will prove to be valuable engineering tools to evaluate options as designs are finalized for the WTP. Revision 1 refines data sets used for model development and summarizes models developed since the completion of Revision 0.
Wang, Yuanjia; Chen, Huaihou
2012-01-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves of milk fat-to-protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes, collected from 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, the Wood, Dhanoa and Sikka mixed models provided the best fit of the lactation curve for FPR in third-parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was at its minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
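As an illustration of the kind of curve fitting and information-criterion comparison described, the sketch below fits one of the seven candidate curves (Wood) to synthetic monthly FPR records with an ordinary least-squares fit; the data values are invented, and the study itself fitted nonlinear mixed models in SAS (PROC NLMIXED), not this simplified fixed-effects fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: y(t) = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

def aic(y_obs, y_fit, n_params):
    """Gaussian AIC computed from the residual sum of squares."""
    n = len(y_obs)
    rss = np.sum((y_obs - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * n_params

# invented monthly test-day data: days in milk and fat-to-protein ratio (FPR)
dim = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255, 285], dtype=float)
fpr = np.array([1.35, 1.22, 1.15, 1.12, 1.11, 1.12, 1.14, 1.17, 1.20, 1.24])

params, _ = curve_fit(wood, dim, fpr, p0=[1.5, -0.05, -0.001], maxfev=20_000)
print("Wood parameters:", params)
print("AIC:", aic(fpr, wood(dim, *params), n_params=3))
```

Fitting each of the seven curves in the same way and ranking them by AIC/BIC mirrors the model comparison reported above.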
Modelling rainfall amounts using mixed-gamma model for Kuantan district
NASA Astrophysics Data System (ADS)
Zakaria, Roslinazairimah; Moslim, Nor Hafizah
2017-05-01
Efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae of mean and variance for the sum of two and three independent mixed-gamma variables are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness of fit test, the results demonstrate that the descriptive statistics of the observed sum of rainfall amounts are not significantly different at the 5% significance level from the generated sum of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
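The moment formulae at the core of the paper can be stated and checked directly. For a mixed-gamma variable that is zero with probability $1-p$ and Gamma(shape $k$, scale $\theta$) otherwise, $E[X]=pk\theta$ and $\mathrm{Var}[X]=pk\theta^2+p(1-p)(k\theta)^2$; for independent variables the means and variances of the sum simply add. The sketch below verifies this by simulation with invented station parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def mixed_gamma_sample(p, shape, scale, size):
    """Zero with probability 1-p, Gamma(shape, scale) with probability p."""
    wet = rng.random(size) < p
    return np.where(wet, rng.gamma(shape, scale, size), 0.0)

def mixed_gamma_moments(p, shape, scale):
    """Analytical mean and variance of a single mixed-gamma variable."""
    mu = shape * scale
    return p * mu, p * shape * scale**2 + p * (1.0 - p) * mu**2

# three stations with invented monthly parameters (p, shape, scale in mm)
stations = [(0.85, 2.0, 90.0), (0.80, 1.6, 110.0), (0.90, 2.4, 75.0)]

mean_sum = sum(mixed_gamma_moments(*s)[0] for s in stations)
var_sum = sum(mixed_gamma_moments(*s)[1] for s in stations)   # independence: variances add

total = sum(mixed_gamma_sample(*s, size=200_000) for s in stations)
print("analytical:", round(mean_sum, 1), round(var_sum, 1))
print("simulated :", round(total.mean(), 1), round(total.var(), 1))
```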
A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.
2015-01-01
Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…
ERIC Educational Resources Information Center
Aryadoust, Vahid; Zhang, Limei
2016-01-01
The present study used the mixed Rasch model (MRM) to identify subgroups of readers within a sample of students taking an EFL reading comprehension test. Six hundred and two (602) Chinese college students took a reading test and a lexico-grammatical knowledge test and completed a Metacognitive and Cognitive Strategy Use Questionnaire (MCSUQ)…
The use of Argo for validation and tuning of mixed layer models
NASA Astrophysics Data System (ADS)
Acreman, D. M.; Jeffery, C. D.
We present results from validation and tuning of 1-D ocean mixed layer models using data from Argo floats and data from Ocean Weather Station Papa (145°W, 50°N). Model tests at Ocean Weather Station Papa showed that a bulk model could perform well provided it was tuned correctly. The Large et al. [Large, W.G., McWilliams, J.C., Doney, S.C., 1994. Oceanic vertical mixing: a review and a model with a nonlocal boundary layer parameterisation. Rev. Geophys. 32 (November), 363-403] K-profile parameterisation (KPP) model also gave a good representation of mixed layer depth provided the vertical resolution was sufficiently high. Model tests using data from a single Argo float indicated a tendency for the KPP model to deepen insufficiently over an annual cycle, whereas the tuned bulk model and general ocean turbulence model (GOTM) gave a better representation of mixed layer depth. The bulk model was then tuned using data from a sample of Argo floats and a set of optimum parameters was found; these optimum parameters were consistent with the tuning at OWS Papa.
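A common first step in such comparisons is diagnosing the mixed layer depth from each Argo profile. The sketch below uses a simple density-threshold criterion (0.03 kg m^-3 relative to the density at 10 m is a widely used choice, but the threshold, reference depth and profile values here are illustrative assumptions rather than the tuning procedure of the study):

```python
import numpy as np

def mixed_layer_depth(depth, sigma, d_ref=10.0, threshold=0.03):
    """Mixed layer depth from a potential-density profile.

    depth, sigma : 1-D arrays (m, kg m^-3), ordered from the surface downward
    d_ref        : reference depth for the near-surface density (m)
    threshold    : density offset defining the mixed-layer base (kg m^-3)
    """
    sigma_ref = np.interp(d_ref, depth, sigma)
    below = np.where(sigma > sigma_ref + threshold)[0]
    if below.size == 0:
        return depth[-1]                       # profile never exceeds the threshold
    i = below[0]
    # interpolate between the last level inside and the first level below the ML
    return np.interp(sigma_ref + threshold, sigma[i - 1:i + 1], depth[i - 1:i + 1])

depth = np.array([0, 10, 20, 30, 40, 50, 60, 80, 100], dtype=float)
sigma = np.array([25.50, 25.50, 25.51, 25.52, 25.56, 25.70, 25.90, 26.10, 26.30])
print(mixed_layer_depth(depth, sigma))         # ~32.5 m for this synthetic profile
```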
On the validity of effective formulations for transport through heterogeneous porous media
NASA Astrophysics Data System (ADS)
de Dreuzy, Jean-Raynald; Carrera, Jesus
2016-04-01
Geological heterogeneity enhances spreading of solutes and causes transport to be anomalous (i.e., non-Fickian), with much less mixing than suggested by dispersion. This implies that modeling transport requires adopting either stochastic approaches that model heterogeneity explicitly or effective transport formulations that acknowledge the effects of heterogeneity. A number of such formulations have been developed and tested as upscaled representations of enhanced spreading. However, their ability to represent mixing has not been formally tested, which is required for proper reproduction of chemical reactions and which motivates our work. We propose that, for an effective transport formulation to be considered a valid representation of transport through heterogeneous porous media (HPM), it should honor mean advection, mixing and spreading. It should also be flexible enough to be applicable to real problems. We test the capacity of the multi-rate mass transfer (MRMT) model to reproduce mixing observed in HPM, as represented by the classical multi-Gaussian log-permeability field with a Gaussian correlation pattern. Non-dispersive mixing comes from heterogeneity structures in the concentration fields that are not captured by macrodispersion. These fine structures limit mixing initially, but eventually enhance it. Numerical results show that, relative to HPM, MRMT models display a much stronger memory of initial conditions on mixing than on dispersion because of the sensitivity of the mixing state to the actual values of concentration. Because MRMT does not restitute the local concentration structures, it induces smaller non-dispersive mixing than HPM. However, long-lived trapping in the immobile zones may sustain the deviation from dispersive mixing over much longer times. While spreading can be well captured by MRMT models, in general non-dispersive mixing cannot.
Application of a Mixed Consequential Ethical Model to a Problem Regarding Test Standards.
ERIC Educational Resources Information Center
Busch, John Christian
The work of the ethicist Charles Curran and the problem-solving strategy of the mixed consequentialist ethical model are applied to a traditional social science measurement problem--that of how to adjust a recommended standard in order to be fair to the test-taker and society. The focus is on criterion-referenced teacher certification tests.…
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; ...
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
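Underlying all three probabilistic methods is the basic isotope mass balance: the mixture's δ15N and δ18O are fraction-weighted averages of the source signatures, with the fractions summing to one. The sketch below solves that deterministic balance exactly for three sources and two tracers (source signatures and the sample value are invented); the probabilistic approaches compared in the study essentially relax this by treating the source compositions and fractions as uncertain:

```python
import numpy as np

# invented nitrate source signatures (per mil): (d15N, d18O)
sources = {
    "fertilizer":    (0.0, 22.0),
    "soil_N":        (5.0,  3.0),
    "manure_septic": (12.0, 5.0),
}
sample = (6.5, 6.0)                      # measured mixture (d15N, d18O)

names = list(sources)
A = np.array([[sources[n][0] for n in names],   # d15N mass balance
              [sources[n][1] for n in names],   # d18O mass balance
              [1.0] * len(names)])              # fractions sum to one
b = np.array([sample[0], sample[1], 1.0])

fractions = np.linalg.solve(A, b)
print(dict(zip(names, np.round(fractions, 3))))
```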
A Parameter Subset Selection Algorithm for Mixed-Effects Models
Schmidt, Kathleen L.; Smith, Ralph C.
2016-01-01
Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong
2016-01-01
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
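The central computational idea, fitting the null model once and then score-testing every variant against it, can be illustrated with ordinary logistic regression (random effects omitted for brevity, simulated data, and statsmodels rather than the GMMAT software; this is a conceptual sketch, not the GMMAT implementation):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 2000
covar = rng.normal(size=n)                      # e.g. an ancestry principal component
X = sm.add_constant(covar)                      # null-model design matrix (no genotype)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(0.5 - 0.3 * covar)))

null_fit = sm.Logit(y, X).fit(disp=0)           # fit the null model once per GWAS
mu = null_fit.predict(X)
w = mu * (1.0 - mu)

def score_test(g):
    """Score test for one variant g against the already-fitted null model."""
    U = g @ (y - mu)                                            # score
    xtwx_inv = np.linalg.inv(X.T * w @ X)
    V = (g * w) @ g - (g * w) @ X @ xtwx_inv @ (X.T @ (w * g))  # efficient information
    return chi2.sf(U**2 / V, df=1)                              # 1-d.o.f. p-value

genotype = rng.binomial(2, 0.3, size=n).astype(float)           # one SNP, MAF 0.3
print(score_test(genotype))
```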
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2016-11-01
The mixing volume model (MVM), which is a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of the molecular diffusion under various conditions. However, a predicted value of the molecular diffusion term is positively correlated to the exact value in the DNS only when the number of the mixing particles is larger than two. Furthermore, the MVM is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.
NASA Astrophysics Data System (ADS)
Solomon, D. Kip; Genereux, David P.; Plummer, L. Niel; Busenberg, Eurybiades
2010-04-01
We tested three models of mixing between old interbasin groundwater flow (IGF) and young, locally derived groundwater in a lowland rain forest in Costa Rica using a large suite of environmental tracers. We focus on the young fraction of water using the transient tracers CFC-11, CFC-12, CFC-113, SF6, 3H, and bomb 14C. We measured 3He, but 3H/3He dating is generally problematic due to the presence of mantle 3He. Because of their unique concentration histories in the atmosphere, combinations of transient tracers are sensitive not only to subsurface travel times but also to mixing between waters having different travel times. Samples fall into three distinct categories: (1) young waters that plot along a piston flow line, (2) old samples that have near-zero concentrations of the transient tracers, and (3) mixtures of 1 and 2. We have modeled the concentrations of the transient tracers using (1) a binary mixing model (BMM) of old and young water with the young fraction transported via piston flow, (2) an exponential mixing model (EMM) with a distribution of groundwater travel times characterized by a mean value, and (3) an exponential mixing model for the young fraction followed by binary mixing with an old fraction (EMM/BMM). In spite of the mathematical differences in the mixing models, they all lead to a similar conceptual model of young (0 to 10 year) groundwater that is locally derived mixing with old (>1000 years) groundwater that is recharged beyond the surface water boundary of the system.
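The sketch below illustrates the machinery behind the EMM and BMM: the young fraction's tracer concentration is the convolution of the atmospheric input history with an exponential travel-time distribution, and the sample is a binary mixture of that young water with tracer-free old IGF water. The input curve, mean age and young fraction are invented placeholders, not values from the study:

```python
import numpy as np

def emm_output(times, c_in, mean_age):
    """Exponential mixing model: convolve the tracer input history with an
       exponential travel-time distribution g(tau) = exp(-tau / T) / T."""
    dt = times[1] - times[0]
    tau = np.arange(0.0, 6.0 * mean_age, dt)
    g = np.exp(-tau / mean_age) / mean_age
    return np.convolve(c_in, g * dt)[: len(times)]

years = np.arange(1940.0, 2011.0)
cfc12_atm = np.clip((years - 1950.0) * 12.0, 0.0, 540.0)   # crude ramp-then-plateau input (pptv)

c_young = emm_output(years, cfc12_atm, mean_age=5.0)       # young, locally recharged water
f_young = 0.4                                              # BMM: mix with tracer-free old IGF water
c_sample = f_young * c_young[-1] + (1.0 - f_young) * 0.0
print(round(c_young[-1], 1), round(c_sample, 1))
```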
Stellar evolution with turbulent diffusion. I. A new formalism of mixing.
NASA Astrophysics Data System (ADS)
Deng, L.; Bressan, A.; Chiosi, C.
1996-09-01
In this paper we present a new formulation of diffusive mixing in stellar interiors aimed at casting light on the kind of mixing that should take place in the so-called overshoot regions surrounding fully convective zones. Key points of the analysis are the inclusion of the concept of the scale length most effective for mixing, by means of which the diffusion coefficient is formulated, and the inclusion of intermittence and stirring, two properties of turbulence known from laboratory fluid dynamics. The formalism is applied to follow the evolution of a 20 Msun star with composition Z=0.008 and Y=0.25. Depending on the value of the diffusion coefficient holding in the overshoot region, the evolutionary behaviour of the test stars goes from the case of virtually no mixing (semiconvective-like structures) to that of full mixing there (standard overshoot models). Indeed, the efficiency of mixing in this region drives the extension of the intermediate fully convective shell developing at the onset of shell H-burning, and in turn the path in the HR Diagram (HRD). Models with low efficiency of mixing burn helium in the core at high effective temperatures, models with intermediate efficiency perform extended loops in the HRD, and models with high efficiency spend the whole core He-burning phase at low effective temperatures. In order to cast light on this important point of stellar structure, we test whether or not a convective layer can develop in the regions of the H-burning shell. More precisely, we examine whether the Schwarzschild or the Ledoux criterion ought to be adopted in this region. Furthermore, we test the response of stellar models to the kind of mixing supposed to occur in the H-burning shell regions. Finally, comparing the time scale of thermal dissipation to the evolutionary time scale, we conclude that no mixing should occur in this region. The models with intermediate efficiency of mixing and no mixing at all in the shell H-burning regions are of particular interest, as they possess at the same time evolutionary characteristics that are separately typical of models calculated with different schemes of mixing. In other words, the new models share the same properties as models with standard overshoot, namely a wider main sequence band, higher luminosity, and longer lifetimes than classical models, but they also possess extended loops that are the main signature of the classical (semiconvective) description of convection at the border of the core.
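In this diffusive description, the mixing of a species with mass fraction $X_i$ is treated schematically (standard form in Lagrangian mass coordinates; the paper's contribution lies in how the diffusion coefficient is constructed, which is not reproduced here) as
\[
\frac{\partial X_i}{\partial t} \;=\; \frac{\partial}{\partial m}\!\left[\left(4\pi r^{2}\rho\right)^{2} D \,\frac{\partial X_i}{\partial m}\right],
\]
so the evolutionary differences described above all trace back to the value of $D$ (through the adopted mixing scale length, intermittence and stirring) in the overshoot region.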
Application of mixing-controlled combustion models to gas turbine combustors
NASA Technical Reports Server (NTRS)
Nguyen, Hung Lee
1990-01-01
Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.
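A generic example of such a mixing-controlled closure (an eddy break-up form is shown purely for illustration; the abstract does not specify the exact expression used) is
\[
\overline{\dot\omega}_F \;=\; -\,C_{\mathrm{EBU}}\;\bar\rho\,\frac{\varepsilon}{k}\,
\min\!\left(\tilde Y_F,\; \frac{\tilde Y_{O_2}}{s}\right),
\]
where $k$ and $\varepsilon$ come from the two-equation turbulence model, $\tilde Y$ are mean mass fractions, $s$ is the stoichiometric oxygen-to-fuel ratio, and $C_{\mathrm{EBU}}$ is a model constant; the mean reaction rate is thus limited by the turbulent mixing frequency $\varepsilon/k$ rather than by the chemistry.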
Effect of shroud geometry on the effectiveness of a short mixing stack gas eductor model
NASA Astrophysics Data System (ADS)
Kavalis, A. E.
1983-06-01
An existing apparatus for testing models of gas eductor systems using high temperature primary flow was modified to provide improved control and performance over a wide range of gas temperatures and flow rates. Secondary flow pumping, temperature and pressure data were recorded for two gas eductor system models. The first, previously tested under hot flow conditions, consists of a primary plate with four tilted-angled nozzles and a slotted, shrouded mixing stack with two diffuser rings (overall L/D = 1.5). A portable pyrometer with a surface probe was used for the second model in order to identify any hot spots at the external surface of the mixing stack, shroud and diffuser rings. The second model is shown to have almost the same mixing and pumping performance as the first one but to exhibit much lower shroud and diffuser surface temperatures.
MIXING STUDY FOR JT-71/72 TANKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.
2013-11-26
All modeling calculations for the mixing operations of miscible fluids contained in HB-Line tanks, JT-71/72, were performed by taking a three-dimensional Computational Fluid Dynamics (CFD) approach. The CFD modeling results were benchmarked against the literature results and the previous SRNL test results to validate the model. Final performance calculations were performed by using the validated model to quantify the mixing time for the HB-Line tanks. The mixing study results for the JT-71/72 tanks show that, for the cases modeled, the mixing time required for blending of the tank contents is no more than 35 minutes, which is well below 2.5 hours of recirculation pump operation. Therefore, the results demonstrate that 2.5 hours of mixing by one recirculation pump is adequate to achieve well-mixed tank contents.
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments in which a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6) was shocked. Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
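The two mixing rules differ in which partial quantity is summed when the components' equations of state are combined:
\[
\text{Dalton:}\;\; p_{\mathrm{mix}}(T,V)=\sum_i p_i(T,V,n_i), \qquad
\text{Amagat:}\;\; V_{\mathrm{mix}}(T,p)=\sum_i V_i(T,p,n_i),
\]
i.e., Dalton adds the partial pressures of components each occupying the full mixture volume, while Amagat adds the partial volumes of components each held at the full mixture pressure. The two coincide for ideal gases, which is consistent with the growing differences reported at higher shock speeds, where the gas states are far from ideal.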
Assessing Discriminative Performance at External Validation of Clinical Prediction Models
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.
2016-01-01
Introduction: External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results: The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
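One plausible construction of such a permutation test (a sketch under assumptions, not necessarily the exact procedure evaluated in the paper) pools the development and validation observations, repeatedly permutes the set labels, and compares the observed difference in c-statistic with the resulting null distribution:

```python
import numpy as np

def c_statistic(y, p):
    """c-statistic (AUC): probability that a random event is ranked above a random non-event."""
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (len(pos) * len(neg))

def permutation_test(y_dev, p_dev, y_val, p_val, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the development-validation difference in c-statistic."""
    rng = np.random.default_rng(seed)
    y = np.concatenate([y_dev, y_val])
    p = np.concatenate([p_dev, p_val])
    n_dev = len(y_dev)
    observed = c_statistic(y_dev, p_dev) - c_statistic(y_val, p_val)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(y))
        null[i] = c_statistic(y[idx[:n_dev]], p[idx[:n_dev]]) - \
                  c_statistic(y[idx[n_dev:]], p[idx[n_dev:]])
    return observed, float(np.mean(np.abs(null) >= abs(observed)))
```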
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...
Multivariate statistical approach to estimate mixing proportions for unknown end members
Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.
2012-01-01
A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
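The second stage of the method, estimating mixing proportions once candidate end members have been selected (the PCA stage is omitted here), reduces to a constrained least-squares problem. The sketch below solves it with non-negative least squares and a heavily weighted sum-to-one row; solute names, compositions and the use of scipy's nnls are illustrative assumptions, not the authors' optimization scheme:

```python
import numpy as np
from scipy.optimize import nnls

def mixing_proportions(sample, end_members):
    """Fractions f >= 0 with sum(f) ~ 1 such that end_members.T @ f reproduces the sample.

    sample      : (n_solutes,) concentrations of the mixed water
    end_members : (n_members, n_solutes) end-member compositions
    """
    A = np.vstack([end_members.T, 1e3 * np.ones(end_members.shape[0])])  # weighted sum-to-one row
    b = np.concatenate([sample, [1e3]])
    f, _ = nnls(A, b)
    return f / f.sum()

# toy example: three end members characterised by Ca, Na, Cl, SO4 (mg/L)
end_members = np.array([[80.0,  5.0, 10.0, 20.0],
                        [10.0, 60.0, 90.0,  5.0],
                        [30.0, 20.0, 15.0, 70.0]])
true_f = np.array([0.5, 0.3, 0.2])
sample = true_f @ end_members
print(mixing_proportions(sample, end_members))   # recovers ~[0.5, 0.3, 0.2]
```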
Tests of Parameterized Langmuir Circulation Mixing in the Ocean's Surface Mixed Layer II
2017-08-11
inertial oscillations in the ocean are governed by three-dimensional processes that are not accounted for in a one-dimensional simulation, and it was... Recent large-eddy simulations (LES) of Langmuir circulation (LC) within the surface mixed layer (SML) of... used in the Navy Coastal Ocean Model (NCOM) and tested for (a) a simple wind-mixing case, (b) simulations of the upper ocean thermal structure at Ocean
Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus
2017-02-01
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
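The fixed versus mixed RSA comparison can be sketched with synthetic data: representational dissimilarity matrices (RDMs) are computed from the model features and from the response patterns and correlated directly (fixed RSA), or per-feature weights are first fitted on training stimuli and the reweighted predictions are compared (a simplified stand-in for the mixed-RSA fitting; all data below are random placeholders, and ridge regression is an assumption, not the exact estimator of the study):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge

def rdm(patterns):
    """Condensed representational dissimilarity matrix: correlation distance
       between the response patterns of every pair of stimuli."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 60, 30, 200, 100
feat_train = rng.normal(size=(n_train, n_feat))
feat_test = rng.normal(size=(n_test, n_feat))
W_true = rng.normal(size=(n_feat, n_vox))
fmri_train = feat_train @ W_true + rng.normal(scale=5.0, size=(n_train, n_vox))
fmri_test = feat_test @ W_true + rng.normal(scale=5.0, size=(n_test, n_vox))

# fixed RSA: compare the model feature space as-is with the measured patterns
fixed_r = spearmanr(rdm(feat_test), rdm(fmri_test)).correlation

# "mixed" RSA (simplified): learn a linear mixing of the model features per voxel
# on training stimuli, then compare predicted and measured test RDMs
ridge = Ridge(alpha=1.0).fit(feat_train, fmri_train)
mixed_r = spearmanr(rdm(ridge.predict(feat_test)), rdm(fmri_test)).correlation
print(fixed_r, mixed_r)
```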
Analysis and testing of high entrainment single nozzle jet pumps with variable mixing tubes
NASA Technical Reports Server (NTRS)
Hickman, K. E.; Hill, P. G.; Gilbert, G. B.
1972-01-01
An analytical model was developed to predict the performance characteristics of axisymmetric single-nozzle jet pumps with variable area mixing tubes. The primary flow may be subsonic or supersonic. The computer program uses integral techniques to calculate the velocity profiles and the wall static pressures that result from the mixing of the supersonic primary jet and the subsonic secondary flow. An experimental program was conducted to measure mixing tube wall static pressure variations, velocity profiles, and temperature profiles in a variable area mixing tube with a supersonic primary jet. Static pressure variations were measured at four different secondary flow rates. These test results were used to evaluate the analytical model. The analytical results compared well to the experimental data. Therefore, the analysis is believed to be ready for use to relate jet pump performance characteristics to mixing tube design.
Kiss, Bálint; Fábián, Balázs; Idrissi, Abdenacer; Szőri, Milán; Jedlovszky, Pál
2017-07-27
The thermodynamic changes that occur upon mixing five models of formamide and three models of water, including the miscibility of these model combinations itself, are studied by performing Monte Carlo computer simulations using an appropriately chosen thermodynamic cycle and the method of thermodynamic integration. The results show that the mixing of these two components is close to ideal mixing, as both the energy and entropy of mixing turn out to be rather close to the ideal term in the entire composition range. Concerning the energy of mixing, the OPLS/AA_mod model of formamide behaves in a qualitatively different way than the other models considered. Thus, this model yields negative energy of mixing values, while the other ones yield positive values, in combination with all three water models considered. Experimental data supports this latter behavior. Although the Helmholtz free energy of mixing always turns out to be negative in the entire composition range, the majority of the model combinations tested either show limited miscibility, or, at least, approach the miscibility limit very closely at certain compositions. Concerning both the miscibility and the energy of mixing of these model combinations, we recommend the use of the combination of the CHARMM formamide and TIP4P water models in simulations of water-formamide mixtures.
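The free energy differences entering such a calculation are obtained by thermodynamic integration over a coupling parameter $\lambda$ that switches the system between the end states of the chosen cycle (generic formulae; the specific cycle is described in the paper):
\[
\Delta A=\int_0^1\left\langle \frac{\partial H(\lambda)}{\partial \lambda}\right\rangle_{\lambda}\,d\lambda,
\qquad
\Delta A_{\mathrm{mix}}=\Delta U_{\mathrm{mix}}-T\,\Delta S_{\mathrm{mix}},
\]
so that, with the energy of mixing taken directly from the simulations, the entropy of mixing follows from the two quantities above.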
Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun
2017-03-01
In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; the concentration combinations of the three UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. The interaction between the UV-filters was also assessed by the model deviation ratio (MDR), using observed and predicted toxicity values obtained from the mixture-exposure tests and the CA model. The results indicated that the observed ECx,mix values (e.g., EC10,mix, EC25,mix, or EC50,mix) obtained from the mixture-exposure tests were higher than the ECx,mix values predicted by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings will provide important information for hazard or risk assessment of organic UV-filters when they occur together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies on various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
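The two quantities used above, the concentration-addition (CA) prediction and the model deviation ratio (MDR), have simple closed forms; the sketch below evaluates them for invented EC50 values (the actual UV-filter data are in the paper):

```python
import numpy as np

def ca_predicted_ecx(fractions, ecx_components):
    """Concentration addition: 1 / ECx_mix = sum_i( p_i / ECx_i ),
       with p_i the fraction of component i in the mixture."""
    p = np.asarray(fractions, dtype=float)
    ecx = np.asarray(ecx_components, dtype=float)
    return 1.0 / np.sum(p / ecx)

def model_deviation_ratio(ecx_predicted, ecx_observed):
    """MDR = predicted / observed ECx; values below 1 indicate the mixture is less
       toxic than the CA prediction (antagonism), values above 1 more toxic."""
    return ecx_predicted / ecx_observed

# invented EC50 values (mg/L) for three UV filters and an equal-fraction mixture
ec50 = [0.30, 0.55, 0.80]
p = [1 / 3, 1 / 3, 1 / 3]
ec50_mix_pred = ca_predicted_ecx(p, ec50)
ec50_mix_obs = 0.70                         # hypothetical observed mixture EC50
print(round(ec50_mix_pred, 3), round(model_deviation_ratio(ec50_mix_pred, ec50_mix_obs), 2))
```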
NASA Technical Reports Server (NTRS)
Kuchar, A. P.; Chamberlin, R.
1980-01-01
A scale model performance test was conducted as part of the NASA Energy Efficient Engine (E3) Program, to investigate the geometric variables that influence the aerodynamic design of exhaust system mixers for high-bypass, mixed-flow engines. Mixer configuration variables included lobe number, penetration and perimeter, as well as several cutback mixer geometries. Mixing effectiveness and mixer pressure loss were determined using measured thrust and nozzle exit total pressure and temperature surveys. Results provide a data base to aid the analysis and design development of the E3 mixed-flow exhaust system.
Yang, Ting; Wang, Quanjiu; Wu, Laosheng; Zhao, Guangxu; Liu, Yanli; Zhang, Pengyu
2016-07-01
Nutrient transport is a main source of water pollution. Several models describing the transport of soil nutrients such as potassium, phosphate and nitrate in runoff water have been developed. The objectives of this research were to describe nutrient transport processes by considering the effect of rainfall detachment, and to evaluate the factors that have the greatest influence on nutrient transport into runoff. In this study, an existing mass-conservation equation and a rainfall detachment process were combined and augmented to predict nutrient runoff in surface water for a Loess Plateau soil in Yangling, northwestern China. The mixing depth is treated as a function of time as a result of rainfall impact, not as a constant as in previous models. The new model was tested using two different sub-models: complete mixing and incomplete mixing. The complete-mixing model is more widely used because of its simplicity; it captured the runoff trends of the highly adsorbed nutrients and of nutrient transport along steep slopes, while the incomplete-mixing model better predicted the highest observed concentrations of the test nutrients. Parameters inversely estimated by the models were applied to simulate nutrient transport, and the results suggested that both models can be adopted to describe nutrient transport in runoff under the impact of rainfall. Copyright © 2016 Elsevier B.V. All rights reserved.
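As a rough illustration of the complete-mixing idea only (not the authors' augmented model, which additionally couples a time-dependent mixing depth to rainfall detachment), a surface mixing layer that exchanges solute with runoff can be stepped forward as in the sketch below; all parameter values are hypothetical.

```python
# Minimal complete-mixing sketch: a surface mixing layer of depth d_mix (m) and
# volumetric water content theta loses solute to runoff at rate q (m/h).
# Complete mixing means the runoff concentration equals the layer concentration.
# Hypothetical parameters; not the augmented model of the study.

d_mix, theta = 0.01, 0.40   # mixing depth (m), water content (-)
q = 0.02                    # runoff rate (m/h)
c = 50.0                    # initial layer concentration (mg/L)
dt, t_end = 0.01, 1.0       # time step and duration (h)

t = 0.0
while t < t_end:
    # mass balance of the mixing layer: d(d_mix * theta * c)/dt = -q * c
    c += dt * (-q * c) / (d_mix * theta)
    t += dt

print(f"runoff concentration after {t_end} h: {c:.1f} mg/L")
```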
Development and Validation of a 3-Dimensional CFB Furnace Model
NASA Astrophysics Data System (ADS)
Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti
At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory- and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results from industrial-scale CFB boilers, including furnace profile measurements, are carried out in parallel with the development of three-dimensional process modeling, providing a chain of knowledge that feeds back into the phenomenon research. Knowledge gathered through model validation studies and up-to-date parameter databases is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to the modeling of combustion and of char and volatiles formation for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of the mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and biofuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace and are used, together with lateral temperature profiles at the bed and in the upper parts of the furnace, to determine the solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by an analysis of the oxidizing and reducing regions formed by the lower furnace design and by the mixing characteristics of fuel and combustion air, which affect the formation of the NO furnace profile through reduction and volatile-nitrogen reactions. The paper presents a CFB process analysis focused on combustion and NO profiles in pilot- and industrial-scale bituminous coal combustion.
Transition mixing study empirical model report
NASA Technical Reports Server (NTRS)
Srinivasan, R.; White, C.
1988-01-01
The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates than the numerical model. Both models show drift of the jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer-wall jets or for jets injected into rectangular ducts.
NASA Technical Reports Server (NTRS)
Hawk, C. W.; Landrum, D. B.; Muller, S.; Turner, M.; Parkinson, D.
1998-01-01
The Strutjet approach to Rocket Based Combined Cycle (RBCC) propulsion depends upon fuel-rich flows from the rocket nozzles and turbine exhaust products mixing with the ingested air for successful operation in the ramjet and scramjet modes. It is desirable to delay this mixing process in the air-augmented mode of operation present during low speed flight. A model of the Strutjet device has been built and is undergoing test to investigate the mixing of the streams as a function of distance from the Strutjet exit plane during simulated low speed flight conditions. Cold flow testing of a 1/6 scale Strutjet model is underway and nearing completion. Planar Laser Induced Fluorescence (PLIF) diagnostic methods are being employed to observe the mixing of the turbine exhaust gas with the gases from both the primary rockets and the ingested air simulating low speed, air augmented operation of the RBCC. The ratio of the pressure in the turbine exhaust duct to that in the rocket nozzle wall at the point of their intersection is the independent variable in these experiments. Tests were accomplished at values of 1.0, 1.5 and 2.0 for this parameter. Qualitative results illustrate the development of the mixing zone from the exit plane of the model to a distance of about 10 rocket nozzle exit diameters downstream. These data show the mixing to be confined in the vertical plane for all cases. The lateral expansion is more pronounced at a pressure ratio of 1.0 and suggests that mixing with the ingested flow would likely begin at a distance of 7 nozzle exit diameters downstream of the nozzle exit plane.
NASA Astrophysics Data System (ADS)
Sherwood, Christopher R.; Aretxabaleta, Alfredo L.; Harris, Courtney K.; Rinehimer, J. Paul; Verney, Romaric; Ferré, Bénédicte
2018-05-01
We describe and demonstrate algorithms for treating cohesive and mixed sediment that have been added to the Regional Ocean Modeling System (ROMS version 3.6), as implemented in the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System (COAWST Subversion repository revision 1234). These include the following: floc dynamics (aggregation and disaggregation in the water column); changes in floc characteristics in the seabed; erosion and deposition of cohesive and mixed (combination of cohesive and non-cohesive) sediment; and biodiffusive mixing of bed sediment. These routines supplement existing non-cohesive sediment modules, thereby increasing our ability to model fine-grained and mixed-sediment environments. Additionally, we describe changes to the sediment bed layering scheme that improve the fidelity of the modeled stratigraphic record. Finally, we provide examples of these modules implemented in idealized test cases and a realistic application.
Experimental Testing and Modeling Analysis of Solute Mixing at Water Distribution Pipe Junctions
Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. Here we have categorized pipe junctions into five hydraulic types, for which flow distribution factors and analytical equations for describing the solute mixing ...
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
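In schematic form (our notation, not necessarily the authors' exact parameterization), such a quadrivariate generalized linear mixed model lets the four logit-transformed accuracies of study i share one multivariate normal random effect,

\[
\operatorname{logit}\begin{pmatrix}\mathrm{Se}_{1i}\\ \mathrm{Sp}_{1i}\\ \mathrm{Se}_{2i}\\ \mathrm{Sp}_{2i}\end{pmatrix} \sim \mathcal{N}(\boldsymbol{\mu},\,\boldsymbol{\Sigma}),
\]

with the observed true/false positive and negative counts treated as binomial given these study-specific probabilities; the contrasts of interest, such as the difference in sensitivities, are then functions of \(\boldsymbol{\mu}\) (e.g., \(\operatorname{logit}^{-1}(\mu_3)-\operatorname{logit}^{-1}(\mu_1)\)).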
Tests of two convection theories for red giant and red supergiant envelopes
NASA Technical Reports Server (NTRS)
Stothers, Richard B.; Chin, Chao-Wen
1995-01-01
Two theories of stellar envelope convection are considered here in the context of red giants and red supergiants of intermediate to high mass: Boehm-Vitense's standard mixing-length theory (MLT) and Canuto & Mazzitelli's new theory incorporating the full spectrum of turbulence (FST). Both theories assume incompressible convection. Two formulations of the convective mixing length are also evaluated: l proportional to the local pressure scale height (H_P) and l proportional to the distance from the upper boundary of the convection zone (z). Applications to test both theories are made by calculating stellar evolutionary sequences into the red phase of core helium burning. Since the theoretically predicted effective temperatures for cool stars are known to be sensitive to the assigned value of the mixing length, this quantity has been individually calibrated for each evolutionary sequence. The calibration is done in a composite Hertzsprung-Russell diagram for the red giant and red supergiant members of well-observed Galactic open clusters. The MLT model requires the constant of proportionality for the convective mixing length to vary by a small but statistically significant amount with stellar mass, whereas the FST model succeeds in all cases with the mixing length simply set equal to z. The structure of the deep stellar interior, however, remains very nearly unaffected by the choices of convection theory and mixing length. Inside the convective envelope itself, a density inversion always occurs, but is somewhat smaller for the convectively more efficient MLT model. On physical grounds the FST model is preferable, and seems to alleviate the problem of finding the proper mixing length.
Barnett, J Matthew; Yu, Xiao-Ying; Recknagle, Kurtis P; Glissmeyer, John A
2016-11-01
A planned laboratory space and exhaust system modification to the Pacific Northwest National Laboratory Material Science and Technology Building indicated that a new evaluation of the mixing at the air sampling system location would be required for compliance to ANSI/HPS N13.1-2011. The modified exhaust system would add a third fan, thereby increasing the overall exhaust rate out the stack, thus voiding the previous mixing study. Prior to modifying the radioactive air emissions exhaust system, a three-dimensional computational fluid dynamics computer model was used to evaluate the mixing at the sampling system location. Modeling of the original three-fan system indicated that not all mixing criteria could be met. A second modeling effort was conducted with the addition of an air blender downstream of the confluence of the three fans, which then showed satisfactory mixing results. The final installation included an air blender, and the exhaust system underwent full-scale tests to verify velocity, cyclonic flow, gas, and particulate uniformity. The modeling results and those of the full-scale tests show agreement between each of the evaluated criteria. The use of a computational fluid dynamics code was an effective aid in the design process and allowed the sampling system to remain in its original location while still meeting the requirements for sampling at a well mixed location.
The Apollo 16 regolith - A petrographically-constrained chemical mixing model
NASA Technical Reports Server (NTRS)
Kempa, M. J.; Papike, J. J.; White, C.
1980-01-01
A mixing model for Apollo 16 regolith samples has been developed, which differs from other A-16 mixing models in that it is both petrographically constrained and statistically sound. The model was developed using three components representative of rock types present at the A-16 site, plus a representative mare basalt. A linear least-squares fitting program employing the chi-squared test and sum of components was used to determine goodness of fit. Results for surface soils indicate that either there are no significant differences between Cayley and Descartes material at the A-16 site or, if differences do exist, they have been obscured by meteoritic reworking and mixing of the lithologies.
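The fitting step can be pictured with the small sketch below: end-member compositions (columns of A) are combined with non-negative proportions constrained to sum to one, and fit quality is judged by a chi-squared statistic. The oxide names, abundances, and uncertainties are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical end-member compositions (three highland rock types plus mare basalt).
# Rows: oxide abundances in wt% (SiO2, Al2O3, FeO, MgO, CaO) -- illustrative numbers only.
A = np.array([
    [45.0, 44.5, 46.0, 45.5],   # SiO2
    [28.0, 26.0, 18.0,  9.0],   # Al2O3
    [ 4.5,  5.5, 10.0, 19.5],   # FeO
    [ 4.0,  6.5, 11.0,  9.5],   # MgO
    [16.0, 15.0, 11.5, 10.5],   # CaO
])
b = np.array([45.2, 26.5, 6.0, 5.5, 15.3])   # hypothetical regolith composition
sigma = np.full_like(b, 0.5)                  # assumed 1-sigma uncertainties

# Enforce sum(x) = 1 with a heavily weighted extra equation, then solve with x >= 0.
w = 1e3
A_aug = np.vstack([A / sigma[:, None], w * np.ones((1, A.shape[1]))])
b_aug = np.concatenate([b / sigma, [w]])
x, _ = nnls(A_aug, b_aug)

chi2 = np.sum(((b - A @ x) / sigma) ** 2)     # goodness-of-fit statistic
print("mixing proportions:", np.round(x, 3), " chi-squared:", round(chi2, 2))
```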
Testing the Grossman model of medical spending determinants with macroeconomic panel data.
Hartwig, Jochen; Sturm, Jan-Egbert
2018-02-16
Michael Grossman's human capital model of the demand for health has been argued to be one of the major achievements in theoretical health economics. Attempts to test this model empirically have been sparse, however, and have produced mixed results. These attempts have so far relied on (mostly cross-sectional) micro data from household surveys. For the first time in the literature, we bring in macroeconomic panel data for 29 OECD countries over the period 1970-2010 to test the model. To check the robustness of the results for the determinants of medical spending identified by the model, we include additional covariates in an extreme bounds analysis (EBA) framework. The preferred model specifications (including the robust covariates) do not lend much empirical support to the Grossman model. This is in line with the mixed results of earlier studies.
Reed, Frances M; Fitzgerald, Les; Rae, Melanie
2016-01-01
To highlight philosophical and theoretical considerations for planning a mixed methods research design that can inform a practice model to guide rural district nursing end of life care. Conceptual models of nursing in the community are general and lack guidance for rural district nursing care. A combination of pragmatism and nurse agency theory can provide a framework for ethical considerations in mixed methods research in the private world of rural district end of life care. Reflection on experience gathered in a two-stage qualitative research phase, involving rural district nurses who use advocacy successfully, can inform a quantitative phase for testing and complementing the data. Ongoing data analysis and integration result in generalisable inferences to achieve the research objective. Mixed methods research that creatively combines philosophical and theoretical elements to guide design in the particular ethical situation of community end of life care can be used to explore an emerging field of interest and test the findings for evidence to guide quality nursing practice. Combining philosophy and nursing theory to guide mixed methods research design increases the opportunity for sound research outcomes that can inform a nursing model of care.
Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test
ERIC Educational Resources Information Center
Ho, Tsung-Han; Dodd, Barbara G.
2012-01-01
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Mixed reality temporal bone surgical dissector: mechanical design.
Hochman, Jordan Brent; Sepehri, Nariman; Rampersad, Vivek; Kraut, Jay; Khazraee, Milad; Pisa, Justyn; Unger, Bertram
2014-08-08
The Development of a Novel Mixed Reality (MR) Simulation. An evolving training environment emphasizes the importance of simulation. Current haptic temporal bone simulators have difficulty representing realistic contact forces and while 3D printed models convincingly represent vibrational properties of bone, they cannot reproduce soft tissue. This paper introduces a mixed reality model, where the effective elements of both simulations are combined; haptic rendering of soft tissue directly interacts with a printed bone model. This paper addresses one aspect in a series of challenges, specifically the mechanical merger of a haptic device with an otic drill. This further necessitates gravity cancelation of the work assembly gripper mechanism. In this system, the haptic end-effector is replaced by a high-speed drill and the virtual contact forces need to be repositioned to the drill tip from the mid wand. Previous publications detail generation of both the requisite printed and haptic simulations. Custom software was developed to reposition the haptic interaction point to the drill tip. A custom fitting, to hold the otic drill, was developed and its weight was offset using the haptic device. The robustness of the system to disturbances and its stable performance during drilling were tested. The experiments were performed on a mixed reality model consisting of two drillable rapid-prototyped layers separated by a free-space. Within the free-space, a linear virtual force model is applied to simulate drill contact with soft tissue. Testing illustrated the effectiveness of gravity cancellation. Additionally, the system exhibited excellent performance given random inputs and during the drill's passage between real and virtual components of the model. No issues with registration at model boundaries were encountered. These tests provide a proof of concept for the initial stages in the development of a novel mixed-reality temporal bone simulator.
Testing Modeling Assumptions in the West Africa Ebola Outbreak
NASA Astrophysics Data System (ADS)
Burghardt, Keith; Verzijl, Christopher; Huang, Junming; Ingram, Matthew; Song, Binyang; Hasne, Marie-Pierre
2016-10-01
The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance.
Open-target sparse sensing of biological agents using DNA microarray
2011-01-01
Background Current biosensors are designed to target and react to specific nucleic acid sequences or structural epitopes. These 'target-specific' platforms require creation of new physical capture reagents when new organisms are targeted. An 'open-target' approach to DNA microarray biosensing is proposed and substantiated using laboratory generated data. The microarray consisted of 12,900 25 bp oligonucleotide capture probes derived from a statistical model trained on randomly selected genomic segments of pathogenic prokaryotic organisms. Open-target detection of organisms was accomplished using a reference library of hybridization patterns for three test organisms whose DNA sequences were not included in the design of the microarray probes. Results A multivariate mathematical model based on the partial least squares regression (PLSR) was developed to detect the presence of three test organisms in mixed samples. When all 12,900 probes were used, the model correctly detected the signature of three test organisms in all mixed samples (mean R2 = 0.76, CI = 0.95), with a 6% false positive rate. A sampling algorithm was then developed to sparsely sample the probe space for a minimal number of probes required to capture the hybridization imprints of the test organisms. The PLSR detection model was capable of correctly identifying the presence of the three test organisms in all mixed samples using only 47 probes (mean R2 = 0.77, CI = 0.95) with nearly 100% specificity. Conclusions We conceived an 'open-target' approach to biosensing, and hypothesized that a relatively small, non-specifically designed, DNA microarray is capable of identifying the presence of multiple organisms in mixed samples. Coupled with a mathematical model applied to laboratory generated data, and sparse sampling of capture probes, the prototype microarray platform was able to capture the signature of each organism in all mixed samples with high sensitivity and specificity. It was demonstrated that this new approach to biosensing closely follows the principles of sparse sensing. PMID:21801424
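A toy version of the detection step, assuming nothing about the actual microarray data beyond what the abstract states, could look like the sketch below: a partial least squares regression is trained on hybridization intensity vectors and then used to score the presence of each organism in a mixed sample. The array sizes, signatures, and data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_probes, n_train, n_organisms = 200, 60, 3      # toy sizes, not the real 12,900-probe array

# Synthetic library: each organism has a characteristic hybridization pattern.
signatures = rng.gamma(2.0, 1.0, size=(n_organisms, n_probes))
Y = rng.integers(0, 2, size=(n_train, n_organisms)).astype(float)    # presence/absence labels
X = Y @ signatures + 0.3 * rng.standard_normal((n_train, n_probes))  # mixed-sample intensities

model = PLSRegression(n_components=5).fit(X, Y)

# Score a new mixed sample containing organisms 0 and 2.
x_new = signatures[0] + signatures[2] + 0.3 * rng.standard_normal(n_probes)
print("presence scores:", np.round(model.predict(x_new[None, :])[0], 2))
```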
NASA Astrophysics Data System (ADS)
Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.
2017-07-01
Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.
NASA Astrophysics Data System (ADS)
Lu, Guoping; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.
2008-12-01
The standard dual-component and two-member linear mixing model is often used to quantify the mixing of water from different sources. However, it is no longer applicable whenever actual mixture concentrations are not exactly known because of dilution. For example, low-water-content (low-porosity) rock samples are leached for pore-water chemical compositions, which therefore are diluted in the leachates. A multicomponent, two-member mixing model of dilution has been developed to quantify mixing of water sources and multiple chemical components experiencing dilution in leaching. This extended mixing model was used to quantify fracture-matrix interaction in construction-water migration tests along the Exploratory Studies Facility (ESF) tunnel at Yucca Mountain, Nevada, USA. The model effectively recovers the spatial distribution of water and chemical compositions released from the construction water, and provides invaluable data on the fracture-matrix interaction. The methodology and formulations described here are applicable to many sorts of mixing-dilution problems, including dilution in petroleum reservoirs, hydrospheres, chemical constituents in rocks and minerals, monitoring of drilling fluids, and leaching, as well as to environmental science studies.
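The essence of the extension can be sketched as follows (our notation; the end-member concentrations below are hypothetical): with two end-members A and B mixed in proportion f and the leachate further diluted by an unknown factor d, each measured component j satisfies c_j = d [ f a_j + (1 - f) b_j ], and with several components both f and d can be recovered by least squares.

```python
import numpy as np

# Hypothetical end-member concentrations (mg/L) for several conservative components
a = np.array([120.0, 35.0, 10.0, 2.0])   # end-member A (e.g., construction water)
b = np.array([ 20.0,  5.0, 60.0, 9.0])   # end-member B (e.g., ambient pore water)

# Synthetic "leachate" measurements with true mixing fraction 0.3 and dilution factor 0.05
f_true, d_true = 0.3, 0.05
noise = 1 + 0.02 * np.random.default_rng(1).standard_normal(4)
c = d_true * (f_true * a + (1 - f_true) * b) * noise

# c = d*[f*a + (1-f)*b] = (d*f)*(a - b) + d*b  is linear in (u, v) = (d*f, d)
M = np.column_stack([a - b, b])
u, v = np.linalg.lstsq(M, c, rcond=None)[0]
d_est, f_est = v, u / v
print(f"estimated dilution d = {d_est:.3f}, mixing fraction f = {f_est:.2f}")
```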
Mixed Phase Modeling in GlennICE with Application to Engine Icing
NASA Technical Reports Server (NTRS)
Wright, William B.; Jorgenson, Philip C. E.; Veres, Joseph P.
2011-01-01
A capability for modeling ice crystals and mixed phase icing has been added to GlennICE. Modifications have been made to the particle trajectory algorithm and energy balance to model this behavior. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to four mixed phase ice accretions performed in the Cox icing tunnel in order to calibrate an ice erosion model. A sample ice ingestion case was performed using the Energy Efficient Engine (E3) model in order to illustrate current capabilities. Engine performance characteristics were supplied using the Numerical Propulsion System Simulation (NPSS) model for this test case.
Budget model can aid group practice planning.
Bender, A D
1991-12-01
A medical practice can enhance its planning by developing a budgetary model to test effects of planning assumptions on its profitability and cash requirements. A model focusing on patient visits, payment mix, patient mix, and fee and payment schedules can help assess effects of proposed decisions. A planning model is not a substitute for planning but should complement a plan that includes mission, goals, values, strategic issues, and different outcomes.
Modeling and Analysis of Mixed Synchronous/Asynchronous Systems
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan
2012-01-01
Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract but representative test specimen system was created as the system to be modeled.
Yang, James J; Williams, L Keoki; Buu, Anne
2017-08-24
A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
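One way to picture the second step (a sketch under our own simplifying assumptions, not the authors' exact implementation): given marginal association p-values for k correlated phenotypes and the residual correlation matrix from the mixed models, the null distribution of the Fisher combination statistic can be approximated by simulation. All numbers below are hypothetical.

```python
import numpy as np
from scipy import stats

def fisher_stat(pvals):
    """Fisher combination statistic T = -2 * sum(log p_i)."""
    return -2.0 * np.sum(np.log(pvals))

def combined_pvalue(pvals, resid_corr, n_sim=100_000, seed=0):
    """Empirical null for correlated tests: simulate z-scores with the residual
    correlation, convert to two-sided p-values, and recompute T under the null."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(pvals)), resid_corr, size=n_sim)
    p_null = 2.0 * stats.norm.sf(np.abs(z))
    t_null = -2.0 * np.sum(np.log(p_null), axis=1)
    return np.mean(t_null >= fisher_stat(pvals))

# Hypothetical marginal p-values for 4 correlated phenotypes at one SNP
pvals = np.array([0.01, 0.04, 0.20, 0.03])
resid_corr = np.array([[1.0, 0.5, 0.3, 0.2],
                       [0.5, 1.0, 0.4, 0.3],
                       [0.3, 0.4, 1.0, 0.5],
                       [0.2, 0.3, 0.5, 1.0]])
print("combined p-value:", combined_pvalue(pvals, resid_corr))
```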
A MULTI-STREAM MODEL FOR VERTICAL MIXING OF A PASSIVE TRACER IN THE CONVECTIVE BOUNDARY LAYER
We study a multi-stream model (MSM) for vertical mixing of a passive tracer in the convective boundary layer, in which the tracer is advected by many vertical streams with different probabilities and diffused by small scale turbulence. We test the MSM algorithm for investigatin...
NASA Technical Reports Server (NTRS)
Herkes, William
2000-01-01
Acoustic and propulsion performance testing of a model-scale Axisymmetric Coannular Ejector nozzle was conducted in the Boeing Low-speed Aeroacoustic Facility. This nozzle is a plug nozzle with an ejector designed to provide aspiration of about 20% of the engine flow. A variety of mixing enhancers were designed to promote mixing of the engine and the aspirated flows. These included delta tabs, tone-injection rods, and wheeler ramps. This report addresses the acoustic aspects of the testing. The spectral characteristics of the various configurations of the nozzle are examined on a model-scale basis, including identification of the particular noise sources contributing to the spectra. The data are then projected to full-scale flyover conditions to evaluate the effectiveness of the nozzle, and of the various mixing enhancers, in reducing the Effective Perceived Noise Levels.
Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm
NASA Astrophysics Data System (ADS)
Kania, Adhe; Sidarto, Kuntjoro Adji
2016-02-01
Many engineering and practical problems can be modeled by mixed integer nonlinear programming. This paper proposes to solve such problems with a modified spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
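A bare-bones version of the underlying spiral dynamics update (continuous variables only; the integer handling and other modifications of the paper are not reproduced here) rotates and contracts every search point around the current best point, x <- x* + r R(theta) (x - x*). The test function and parameter values below are illustrative.

```python
import numpy as np

def spiral_optimize(f, bounds, n_points=30, r=0.95, theta=np.pi / 4, n_iter=200, seed=0):
    """Minimal 2-D spiral dynamics optimization: x <- x* + r*R(theta) @ (x - x*)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0]), np.array(bounds[1])
    X = lo + (hi - lo) * rng.random((n_points, 2))
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    best = X[np.argmin([f(x) for x in X])]
    for _ in range(n_iter):
        X = best + (X - best) @ (r * R).T          # rotate and contract toward the best point
        cand = X[np.argmin([f(x) for x in X])]
        if f(cand) < f(best):
            best = cand
    return best, f(best)

# Toy continuous test problem (not one of the paper's test cases)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 1.0
print(spiral_optimize(f, bounds=([-5, -5], [5, 5])))
```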
Testing of a Shrouded, Short Mixing Stack Gas Eductor Model Using High Temperature Primary Flow.
1982-10-01
...problem, but of less significance than the heated surfaces of shipboard structure. Various types of electronic equipment and sensors carried by a combatant... here was to validate current procedures by comparison with previous data; it was not considered essential to reinstall these sensors or duplicate... (Table XIX of the report tabulates mixing stack temperature data for Model B: thermocouple number, axial position, and mixing stack temperature in degrees F.)
Mixed Model Association with Family-Biased Case-Control Ascertainment.
Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L
2017-01-05
Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shamberger, Patrick J.; Garcia, Michael O.
2007-02-01
Geochemical modeling of magma mixing allows for evaluation of volumes of magma storage reservoirs and magma plumbing configurations. A new analytical expression is derived for a simple two-component box-mixing model describing the proportions of mixing components in erupted lavas as a function of time. Four versions of this model are applied to a mixing trend spanning episodes 3-31 of Kilauea Volcano's Puu Oo eruption, each testing different constraints on magma reservoir input and output fluxes. Unknown parameters (e.g., magma reservoir influx rate, initial reservoir volume) are optimized for each model using a non-linear least squares technique to fit model trends to geochemical time-series data. The modeled mixing trend closely reproduces the observed compositional trend. The two models that match measured lava effusion rates have constant magma input and output fluxes and suggest a large pre-mixing magma reservoir (46±2 and 49±1 million m3), with little or no volume change over time. This volume is much larger than a previous estimate for the shallow, dike-shaped magma reservoir under the Puu Oo vent, which grew from ~3 to ~10-12 million m3. These volumetric differences are interpreted as indicating that mixing occurred first in a larger, deeper reservoir before the magma was injected into the overlying smaller reservoir.
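In its simplest form (our notation, not the exact parameterization of the four model versions), a two-component box-mixing model of this kind tracks the fraction f of the new, recharging magma in a reservoir of volume V fed at rate Q_in and drained at rate Q_out:

\[
\frac{dV}{dt}=Q_{\mathrm{in}}-Q_{\mathrm{out}},\qquad \frac{d(Vf)}{dt}=Q_{\mathrm{in}}-Q_{\mathrm{out}}\,f,
\]

so that for equal, constant fluxes (constant V) the erupted proportion of the new component relaxes exponentially, \(f(t)=1-(1-f_0)\,e^{-Qt/V}\), which is the kind of time series fitted to the geochemical data.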
Separate-channel analysis of two-channel microarrays: recovering inter-spot information.
Smyth, Gordon K; Altman, Naomi S
2013-05-26
Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
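For concreteness, the quantities involved are the standard two-channel ones: with red and green spot intensities R and G,

\[
M=\log_2\frac{R}{G},\qquad A=\tfrac{1}{2}\bigl(\log_2 R+\log_2 G\bigr),
\]

so a log-ratio analysis models M alone, whereas the separate-channel approaches also model A; the intraspot correlation referred to above is the correlation between the two log-intensities measured on the same spot.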
An R2 statistic for fixed effects in the linear mixed model.
Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver
2008-12-20
Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
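The 1-1 mapping mentioned above is the familiar correspondence between an F statistic with \(\nu_1\) numerator and \(\nu_2\) denominator degrees of freedom and an R² measure (written here in generic form; the mixed-model statistic uses the appropriate approximate F test for the fixed effects):

\[
R^2_\beta=\frac{\nu_1 F}{\nu_1 F+\nu_2},\qquad\text{equivalently}\qquad F=\frac{\nu_2}{\nu_1}\,\frac{R^2_\beta}{1-R^2_\beta}.
\]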
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
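The iterative machinery itself is standard preconditioned conjugate gradient; a compact matrix-free sketch is shown below (a generic implementation, not the three-step iteration-on-data scheme of the paper), where apply_A stands for the coefficient-matrix product that an iteration-on-data program would assemble on the fly from the records.

```python
import numpy as np

def pcg(apply_A, b, M_inv_diag, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for A x = b with a diagonal (Jacobi)
    preconditioner; apply_A(v) returns A @ v without forming A explicitly."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small symmetric positive definite demo system (illustrative only)
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(lambda v: A @ v, b, M_inv_diag=1.0 / np.diag(A))
print(x, A @ x)
```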
Low Velocity Difference Thermal Shear Layer Mixing Rate Measurements
NASA Technical Reports Server (NTRS)
Bush, Robert H.; Culver, Harry C. M.; Weissbein, Dave; Georgiadis, Nicholas J.
2013-01-01
Current CFD modeling techniques are known to do a poor job of predicting the mixing rate and persistence of slot film flow in co-annular flowing ducts with relatively small velocity differences but large thermal gradients. A co-annular test was devised to empirically determine the mixing rate of slot film flow in a constant area circular duct (D approx. 1ft, L approx. 10ft). The axial rate of wall heat-up is a sensitive measure of the mixing rate of the two flows. The inflow conditions were varied to simulate a variety of conditions characteristic of moderate by-pass ratio engines. A series of air temperature measurements near the duct wall provided a straightforward means to measure the axial temperature distribution and thus infer the mixing rate. This data provides a characterization of the slot film mixing rates encountered in typical jet engine environments. The experimental geometry and entrance conditions, along with the sensitivity of the results as the entrance conditions vary, make this a good test for turbulence models in a regime important to modern air-breathing propulsion research and development.
Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration
NASA Technical Reports Server (NTRS)
Groce, Alex; Joshi, Rajeev
2008-01-01
Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.
NASA Astrophysics Data System (ADS)
Tai, Y.; Watanabe, T.; Nagata, K.
2018-03-01
A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with the direct numerical simulation of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM well predicts molecular diffusion and thermal conduction for a wide range of the size of mixing volume and the number of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at a length scale of the mixing volume. The mixing time scale is well correlated for passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale because the mixing time scale at small scales is easily affected by different distributions of intermittent small-scale structures between passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in the modeling of the thermal conduction when the modeling of the dissipation rate of temperature fluctuations is difficult.
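As a loose illustration of the particle-interaction idea only (not the MVM closure itself), scalar values carried by notional particles inside one mixing volume can be relaxed toward their in-volume mean over a mixing time scale tau, which conserves the mean while reducing the variance. The values below are hypothetical.

```python
import numpy as np

def mix_particles(phi, tau, dt):
    """Relax particle scalar values toward the mean of the particles sharing a
    mixing volume; a generic sketch, not the MVM formulation of the paper."""
    return phi + (dt / tau) * (phi.mean() - phi)

phi = np.array([0.0, 0.2, 0.9, 1.0])   # scalar values of particles in one mixing volume
tau, dt = 0.5, 0.05                    # mixing time scale and time step
for _ in range(10):
    phi = mix_particles(phi, tau, dt)
print(np.round(phi, 3))                # values contract toward the (conserved) mean
```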
Mixed infections reveal virulence differences between host-specific bee pathogens.
Klinger, Ellen G; Vojvodic, Svjetlana; DeGrandi-Hoffman, Gloria; Welker, Dennis L; James, Rosalind R
2015-07-01
Dynamics of host-pathogen interactions are complex, often influencing the ecology, evolution and behavior of both the host and pathogen. In the natural world, infections with multiple pathogens are common, yet due to their complexity, interactions can be difficult to predict and study. Mathematical models help facilitate our understanding of these evolutionary processes, but empirical data are needed to test model assumptions and predictions. We used two common theoretical models regarding mixed infections (superinfection and co-infection) to determine which model assumptions best described a group of fungal pathogens closely associated with bees. We tested three fungal species, Ascosphaera apis, Ascosphaera aggregata and Ascosphaera larvis, in two bee hosts (Apis mellifera and Megachile rotundata). Bee survival was not significantly different in mixed infections vs. solo infections with the most virulent pathogen for either host, but fungal growth within the host was significantly altered by mixed infections. In the host A. mellifera, only the most virulent pathogen was present in the host post-infection (indicating superinfective properties). In M. rotundata, the most virulent pathogen co-existed with the lesser-virulent one (indicating co-infective properties). We demonstrated that the competitive outcomes of mixed infections were host-specific, indicating strong host specificity among these fungal bee pathogens. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Bartkus, Tadas P.; Struk, Peter M.; Tsao, Jen-Ching
2017-01-01
This paper builds on previous work that compares numerical simulations of mixed-phase icing clouds with experimental data. The model couples the thermal interaction between ice particles and water droplets of the icing cloud with the flowing air of an icing wind tunnel for simulation of NASA Glenn Research Center's (GRC) Propulsion Systems Laboratory (PSL). Measurements were taken during the Fundamentals of Ice Crystal Icing Physics Tests at the PSL tunnel in March 2016. The tests simulated ice-crystal and mixed-phase icing that relate to ice accretions within turbofan engines. Experimentally measured air temperature, humidity, total water content, liquid and ice water content, as well as cloud particle size, are compared with model predictions. The model showed good trend agreement with experimentally measured values, but often over-predicted aero-thermodynamic changes. This discrepancy is likely attributed to radial variations that this one-dimensional model does not address. One of the key findings of this work is that greater aero-thermodynamic changes occur when humidity conditions are low. In addition, a range of mixed-phase clouds can be achieved by varying only the tunnel humidity conditions, but the range of humidities to generate a mixed-phase cloud becomes smaller when clouds are composed of smaller particles. In general, the model predicted melt fraction well, in particular with clouds composed of larger particle sizes.
ERIC Educational Resources Information Center
Livingstone, Holly A.; Day, Arla L.
2005-01-01
Despite the popularity of the concept of emotional intelligence (EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer Salovey Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…
Development and validation of a turbulent-mix model for variable-density and compressible flows.
Banerjee, Arindam; Gore, Robert A; Andrews, Malcolm J
2010-10-01
The modeling of buoyancy driven turbulent flows is considered in conjunction with an advanced statistical turbulence model referred to as the BHR (Besnard-Harlow-Rauenzahn) k-S-a model. The BHR k-S-a model is focused on variable-density and compressible flows such as Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM), and Kelvin-Helmholtz (KH) driven mixing. The BHR k-S-a turbulence mix model has been implemented in the RAGE hydro-code, and model constants are evaluated based on analytical self-similar solutions of the model equations. The results are then compared with a large test database available from experiments and direct numerical simulations (DNS) of RT, RM, and KH driven mixing. Furthermore, we describe research to understand how the BHR k-S-a turbulence model operates over a range of moderate to high Reynolds number buoyancy driven flows, with a goal of placing the modeling of buoyancy driven turbulent flows at the same level of development as that of single phase shear flows.
Statistical models of global Langmuir mixing
NASA Astrophysics Data System (ADS)
Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean
2017-05-01
The effects of Langmuir mixing on the surface ocean mixing may be parameterized by applying an enhancement factor which depends on wave, wind, and ocean state to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
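The wave dependence usually enters through the turbulent Langmuir number, and enhancement factors are commonly fitted to large-eddy simulations in a form similar to the one below (the coefficients c_1 and c_2 are fit-dependent and are not quoted from this study):

\[
\mathrm{La}_t=\sqrt{\frac{u_*}{u_s}},\qquad \varepsilon(\mathrm{La}_t)=\sqrt{1+c_1\,\mathrm{La}_t^{-2}+c_2\,\mathrm{La}_t^{-4}},
\]

where u_* is the friction velocity, u_s the surface Stokes drift, and \(\varepsilon\) multiplies the turbulent velocity scale in the K-Profile Parameterization.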
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
Hot-flow tests of a series of 10-percent-scale turbofan forced mixing nozzles
NASA Technical Reports Server (NTRS)
Head, V. L.; Povinelli, L. A.; Gerstenmaier, W. H.
1984-01-01
An approximately 1/10-scale model of a mixed-flow exhaust system was tested in a static facility with fully simulated hot-flow cruise and takeoff conditions. Nine mixer geometries with 12 to 24 lobes were tested. The areas of the core and fan stream were held constant to maintain a bypass ratio of approximately 5. The research results presented in this report were obtained as part of a program directed toward developing an improved mixer design methodology by using a combined analytical and experimental approach. The effects of lobe spacing, lobe penetration, lobe-to-centerbody gap, lobe contour, and scalloping of the radial side walls were investigated. Test measurements included total pressure and temperature surveys, flow angularity surveys, and wall and centerbody surface static pressure measurements. Contour plots at various stations in the mixing region are presented to show the mixing effectiveness for the various lobe geometries.
Toward Verification of USM3D Extensions for Mixed Element Grids
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Frink, Neal T.; Ding, Ejiang; Parlette, Edward B.
2013-01-01
The unstructured tetrahedral grid cell-centered finite volume flow solver USM3D has been recently extended to handle mixed element grids composed of hexahedral, prismatic, pyramidal, and tetrahedral cells. Presently, two turbulence models, namely, baseline Spalart-Allmaras (SA) and Menter Shear Stress Transport (SST), support mixed element grids. This paper provides an overview of the various numerical discretization options available in the newly enhanced USM3D. Using the SA model, the flow solver extensions are verified on three two-dimensional test cases available on the Turbulence Modeling Resource website at the NASA Langley Research Center. The test cases are zero pressure gradient flat plate, planar shear, and bump-in-channel. The effect of cell topologies on the flow solution is also investigated using the planar shear case. Finally, the assessment of various cell and face gradient options is performed on the zero pressure gradient flat plate case.
Mixed reality temporal bone surgical dissector: mechanical design
2014-01-01
Objective: Development of a novel mixed reality (MR) simulation. An evolving training environment emphasizes the importance of simulation. Current haptic temporal bone simulators have difficulty representing realistic contact forces, and while 3D printed models convincingly represent the vibrational properties of bone, they cannot reproduce soft tissue. This paper introduces a mixed reality model in which the effective elements of both simulations are combined; haptic rendering of soft tissue directly interacts with a printed bone model. This paper addresses one aspect in a series of challenges, specifically the mechanical merger of a haptic device with an otic drill. This further necessitates gravity cancellation of the work assembly gripper mechanism. In this system, the haptic end-effector is replaced by a high-speed drill, and the virtual contact forces need to be repositioned from the mid wand to the drill tip. Previous publications detail generation of both the requisite printed and haptic simulations. Method: Custom software was developed to reposition the haptic interaction point to the drill tip. A custom fitting to hold the otic drill was developed and its weight was offset using the haptic device. The robustness of the system to disturbances and its stable performance during drilling were tested. The experiments were performed on a mixed reality model consisting of two drillable rapid-prototyped layers separated by a free space. Within the free space, a linear virtual force model is applied to simulate drill contact with soft tissue. Results: Testing illustrated the effectiveness of gravity cancellation. Additionally, the system exhibited excellent performance given random inputs and during the drill's passage between real and virtual components of the model. No issues with registration at model boundaries were encountered. Conclusion: These tests provide a proof of concept for the initial stages in the development of a novel mixed-reality temporal bone simulator. PMID:25927300
Chandra Observations and Models of the Mixed Morphology Supernova Remnant W44: Global Trends
NASA Technical Reports Server (NTRS)
Shelton, R. L.; Kuntz, K. D.; Petre, R.
2004-01-01
We report on the Chandra observations of the archetypical mixed morphology (or thermal composite) supernova remnant, W44. As with other mixed morphology remnants, W44's projected center is bright in thermal X-rays. It has an obvious radio shell, but no discernible X-ray shell. In addition, X-ray bright knots dot W44's image. The spectral analysis of the Chandra data shows that the remnant's hot, bright projected center is metal-rich and that the bright knots are regions of comparatively elevated elemental abundances. Neon is among the affected elements, suggesting that ejecta contribute to the abundance trends. Furthermore, some of the emitting iron atoms appear to be underionized with respect to the other ions, providing the first potential X-ray evidence for dust destruction in a supernova remnant. We use the Chandra data to test the following explanations for W44's X-ray bright center: (1) entropy mixing due to bulk mixing or thermal conduction, (2) evaporation of swept-up clouds, and (3) a metallicity gradient, possibly due to dust destruction and ejecta enrichment. In these tests, we assume that the remnant has evolved beyond the adiabatic evolutionary stage, which explains the X-ray dimness of the shell. The entropy mixed model spectrum was tested against the Chandra spectrum for the remnant's projected center and found to be a good match. The evaporating clouds model was constrained by the finding that the ionization parameters of the bright knots are similar to those of the surrounding regions. While both the entropy mixed and the evaporating clouds models are known to predict centrally bright X-ray morphologies, their predictions fall short of the observed brightness gradient. The resulting brightness gap can be largely filled in by emission from the extra metals in and near the remnant's projected center. The preponderance of evidence (including that drawn from other studies) suggests that W44's remarkable morphology can be attributed to dust destruction and ejecta enrichment within an entropy mixed, adiabatic phase supernova remnant. The Chandra data prompt a new question: by what astrophysical mechanisms are the metals distributed so inhomogeneously in the supernova remnant?
On the Effect of an Anisotropy-Resolving Subgrid-Scale Model on Turbulent Vortex Motions
2014-09-19
...sense, the model by Abe (2013) can be named the "stabilized mixed model" (SMM, hereafter). Furthermore, considering the basic concept of the mixed model... with SMM. Further investigations of this extended anisotropic SGS model will be necessary in future studies. 3 Computational Conditions Although the... basic capability of the SMM was validated by application to some test cases (Abe, 2013; Abe, 2014), there still remain several points to be fur...
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e., assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g., Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results from the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g., P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume that grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for the source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
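A common formulation of the multivariate mixing model used in fingerprinting studies estimates source proportions by constrained least squares. The sketch below is a generic illustration of that idea, not the specific model or tracer set used in the study; the source matrix and tracer values are invented for demonstration.

import numpy as np
from scipy.optimize import minimize

def unmix(sources, mixture):
    # Estimate source proportions p (p >= 0, sum(p) = 1) that minimise
    # || sources.T @ p - mixture ||^2 for one sediment mixture.
    # sources: (n_sources, n_tracers) mean tracer concentrations per source
    # mixture: (n_tracers,) tracer concentrations of the mixture
    n = sources.shape[0]
    res = minimize(
        lambda p: np.sum((sources.T @ p - mixture) ** 2),
        np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
    )
    return res.x

# toy example: 3 sources, 4 tracers, known 50/30/20 mixture
sources = np.array([[10.0, 5.0, 1.0, 0.2],
                    [2.0, 8.0, 4.0, 0.6],
                    [1.0, 1.0, 9.0, 1.5]])
mixture = sources.T @ np.array([0.5, 0.3, 0.2])
print(unmix(sources, mixture))             # recovers ~[0.5, 0.3, 0.2]

Comparing such modelled proportions against laboratory mixtures with known percentages, as done in the experiments above, is what allows the sensitivity of the tracer selection to be quantified.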
Xiao, Qingtai; Xu, Jianxin; Wang, Hua
2016-08-16
A new index, the estimate of the error variance, which can be used to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish, is proposed. The degree of homogeneity of the luminance distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index was then extended and applied to a multiphase macro-mixing process driven by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to describe the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is very difficult to recognize.
Baqué, Michèle; Amendt, Jens
2013-01-01
Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). According to the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.
Transport and mixing in strongly coupled dusty plasma medium
NASA Astrophysics Data System (ADS)
Dharodi, Vikram; Das, Amita; Patel, Bhavesh
2016-10-01
The generalized hydrodynamic (GHD) fluid model has been employed to study the transport and mixing properties of a dusty plasma medium in the strong coupling limit. The response of the lighter electron and ion species to the dust motion is taken to be instantaneous, i.e., inertia-less. Thus the electron and ion densities are presumed to follow the Boltzmann relation. In the incompressible limit (i-GHD), the model supports transverse shear waves, in contrast to hydrodynamic fluids. It has been shown that the presence of these waves leads to a better mixing of the fluid in this case. Several flow configurations have been considered for the study. The transport and mixing attributes have been quantified by studying the dynamical evolution of tracer particles in the system. The diffusion and clustering of these test particles are directly linked to the mixing characteristics of the medium. The displacement of these particles provides a quantitative estimate of the diffusion coefficient of the medium. It is shown that these test particles often organize themselves into spatially inhomogeneous patterns, leading to the phenomenon of clustering.
Jansa, Václav
2017-01-01
Height to crown base (HCB) of a tree is an important variable often included as a predictor in various forest models that serve as fundamental tools for decision-making in forestry. We developed spatially explicit and spatially inexplicit mixed-effects HCB models using measurements from a total of 19,404 trees of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.) on permanent sample plots located across the Czech Republic. Variables describing site quality, stand density or competition, and species mixing effects were included in the HCB model through dominant height (HDOM), basal area of trees larger in diameter than the subject tree (BAL, a spatially inexplicit measure) or Hegyi's competition index (HCI, a spatially explicit measure), and basal area proportion of the species of interest (BAPOR), respectively. Parameters describing sample plot-level random effects were included in the HCB model by applying the mixed-effects modelling approach. Among several functional forms evaluated, the logistic function was found most suited to our data. The HCB model for Norway spruce was tested against data originating from different inventory designs, while the model for European beech was tested using a partitioned dataset (a part of the main dataset). The variance heteroscedasticity in the residuals was substantially reduced through inclusion of a power variance function in the HCB model. The results showed that the spatially explicit model described a significantly larger part of the HCB variation [R2adj = 0.86 (spruce), 0.85 (beech)] than its spatially inexplicit counterpart [R2adj = 0.84 (spruce), 0.83 (beech)]. The HCB increased with increasing competitive interactions described by the tree-centered competition measures BAL or HCI, and with the species mixing effects described by BAPOR. A test of the mixed-effects HCB model with the random effects estimated using at least four trees per sample plot in the validation data confirmed that the model was precise enough for the prediction of HCB across a range of site quality, tree size, stand density, and stand structure. We therefore recommend measuring HCB on four randomly selected trees of the species of interest on each sample plot for localizing the mixed-effects model and predicting HCB of the remaining trees on the plot. Growth simulations can be made from data that lack values for either crown ratio or HCB using the HCB models. PMID:29049391
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. The test uses a unidirectional composite specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be created using the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions under which the nonlinear error will remain below 5%.
Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.
Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng
2014-06-01
Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.
Generalized linear mixed models with varying coefficients for longitudinal data.
Zhang, Daowen
2004-03-01
The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.
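The key device behind the DPQL approach, representing a smooth function as fixed effects plus penalized (random-effect) spline coefficients, can be illustrated with a minimal Gaussian-response sketch. This is a generic penalized-spline fit standing in for the mixed-model representation, not the authors' estimator; the basis, knots, and smoothing parameter are illustrative choices.

import numpy as np

def spline_design(t, knots):
    # Fixed part [1, t] and random part max(t - knot, 0) of a truncated-line basis.
    X = np.column_stack([np.ones_like(t), t])
    Z = np.maximum(t[:, None] - knots[None, :], 0.0)
    return X, Z

def fit_penalized_spline(t, y, knots, lam=1.0):
    # Treating spline coefficients as random effects with variance sigma^2/lam
    # is equivalent to ridge-penalising them; returns the fitted smooth curve.
    X, Z = spline_design(t, knots)
    C = np.hstack([X, Z])
    P = np.diag(np.r_[np.zeros(X.shape[1]), np.full(Z.shape[1], lam)])
    coef = np.linalg.solve(C.T @ C + P, C.T @ y)
    return C @ coef

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=t.size)
fit = fit_penalized_spline(t, y, knots=np.linspace(0.05, 0.95, 15), lam=5.0)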
Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing
NASA Astrophysics Data System (ADS)
Watanabe, T.; Nagata, K.
2016-08-01
We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on multi-particle interaction within a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and in the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of molecular diffusion under various numerical and flow parameters. A large number of mixing particles is required for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important to the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.
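The core mechanism of a multi-particle mixing step can be sketched generically: particles that fall inside a mixing volume relax their scalar values toward the local multi-particle mean over a mixing timescale, which conserves the mean while destroying variance. This is an illustrative stand-in for the idea, not the exact MVM formulation; positions, scalar values, and timescales are invented.

import numpy as np

def mix_particles(x, phi, centre, radius, dt, tau_m):
    # Relax the scalar phi of all particles inside a spherical mixing volume
    # toward their common mean over the mixing timescale tau_m.
    inside = np.linalg.norm(x - centre, axis=1) < radius
    if inside.sum() >= 2:                  # mixing needs several particles
        phi[inside] += (dt / tau_m) * (phi[inside].mean() - phi[inside])
    return phi

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=(500, 3))   # Lagrangian particle positions
phi = rng.uniform(0.0, 1.0, size=500)      # scalar carried by each particle
phi = mix_particles(x, phi, centre=np.array([0.5, 0.5, 0.5]),
                    radius=0.1, dt=1e-3, tau_m=1e-2)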
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; D'Costa, Joseph F.
1991-01-01
This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
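Non-Fourier conduction with a finite relaxation time is commonly written as the Cattaneo (telegraph) equation, tau*T_tt + T_t = alpha*T_xx, whose solutions propagate at the finite speed sqrt(alpha/tau). The explicit three-level finite-difference sketch below illustrates that hyperbolic character; it is not the mixed implicit-explicit alpha-method formulation evaluated in the paper, and all parameter values are illustrative.

import numpy as np

def cattaneo_step(T, T_prev, alpha, tau, dx, dt):
    # One explicit step of tau*T_tt + T_t = alpha*T_xx with central differences.
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T_new = (alpha * dt**2 * lap + tau * (2.0 * T - T_prev)
             + 0.5 * dt * T_prev) / (tau + 0.5 * dt)
    T_new[0], T_new[-1] = T[0], T[-1]      # fixed-temperature boundaries
    return T_new, T

# illustrative pulse propagating at finite speed c = sqrt(alpha/tau)
x = np.linspace(0.0, 1.0, 201)
T = np.exp(-((x - 0.5) / 0.02) ** 2)
T_prev = T.copy()
for _ in range(200):
    T, T_prev = cattaneo_step(T, T_prev, alpha=1e-3, tau=0.05,
                              dx=x[1] - x[0], dt=1e-3)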
NASA Astrophysics Data System (ADS)
Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter
2018-05-01
This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, the restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original versions and the benchmark methods, and are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
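The restart idea described for the simulated annealing metaheuristic can be shown with a generic skeleton for a permutation problem: whenever the temperature has cooled past a threshold it is reset and the search continues from the best solution found so far. The cost function, neighbourhood move, and model sequence below are illustrative placeholders, not the authors' RMALB/S encoding.

import math
import random

def restarted_sa(cost, initial, neighbour, t0=100.0, alpha=0.95,
                 steps_per_temp=50, t_min=1e-3, restarts=5, seed=0):
    # Generic restarted simulated annealing for a minimisation problem.
    random.seed(seed)
    best = current = initial
    best_cost = current_cost = cost(current)
    for _ in range(restarts):
        t = t0
        while t > t_min:
            for _ in range(steps_per_temp):
                cand = neighbour(current)
                cand_cost = cost(cand)
                delta = cand_cost - current_cost
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current, current_cost = cand, cand_cost
                    if current_cost < best_cost:
                        best, best_cost = current, current_cost
            t *= alpha
        current, current_cost = best, best_cost   # restart from the best found
    return best, best_cost

# toy usage: sequence models so adjacent workloads are similar (illustrative)
workload = {"A": 3, "B": 5, "C": 2}
seq0 = ["A", "B", "C", "A", "B", "C"]
cost = lambda s: sum(abs(workload[a] - workload[b]) for a, b in zip(s, s[1:]))

def swap(s):
    i, j = sorted(random.sample(range(len(s)), 2))
    s = list(s)
    s[i], s[j] = s[j], s[i]
    return s

best_seq, best_val = restarted_sa(cost, seq0, swap)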
NASA Technical Reports Server (NTRS)
Bartkus, Tadas; Tsao, Jen-Ching; Struk, Peter
2017-01-01
This paper builds on previous work that compares numerical simulations of mixed-phase icing clouds with experimental data. The model couples the thermal interaction between the ice particles and water droplets of the icing cloud and the flowing air of an icing wind tunnel for simulation of NASA Glenn Research Center's (GRC) Propulsion Systems Laboratory (PSL). Measurements were taken during the Fundamentals of Ice Crystal Icing Physics Tests at the PSL tunnel in March 2016. The tests simulated ice-crystal and mixed-phase icing conditions that relate to ice accretions within turbofan engines.
NASA Technical Reports Server (NTRS)
Harrington, Douglas (Technical Monitor); Schweiger, P.; Stern, A.; Gamble, E.; Barber, T.; Chiappetta, L.; LaBarre, R.; Salikuddin, M.; Shin, H.; Majjigi, R.
2005-01-01
Hot flow aero-acoustic tests were conducted with Pratt & Whitney's High-Speed Civil Transport (HSCT) Mixer-Ejector Exhaust Nozzles by General Electric Aircraft Engines (GEAE) in the GEAE Anechoic Freejet Noise Facility (Cell 41) located in Evendale, Ohio. The tests evaluated the impact of various geometric and design parameters on the noise generated by a two-dimensional (2-D) shrouded, 8-lobed, mixer-ejector exhaust nozzle. The shrouded mixer-ejector provides noise suppression by mixing relatively low energy ambient air with the hot, high-speed primary exhaust jet. Additional attenuation was obtained by lining the shroud internal walls with acoustic panels, which absorb acoustic energy generated during the mixing process. Two mixer designs were investigated, the high mixing "vortical" and aligned flow "axial", along with variations in the shroud internal mixing area ratios and shroud length. The shrouds were tested as hardwall or lined with acoustic panels packed with a bulk absorber. A total of 21 model configurations at 1:11.47 scale were tested. The models were tested over a range of primary nozzle pressure ratios and primary exhaust temperatures representative of typical HSCT aero thermodynamic cycles. Static as well as flight simulated data were acquired during testing. A round convergent unshrouded nozzle was tested to provide an acoustic baseline for comparison to the test configurations. Comparisons were made to previous test results obtained with this hardware at NASA Glenn's 9- by 15-foot low-speed wind tunnel (LSWT). Laser velocimetry was used to investigate external as well as ejector internal velocity profiles for comparison to computational predictions. Ejector interior wall static pressure data were also obtained. A significant reduction in exhaust system noise was demonstrated with the 2-D shrouded nozzle designs.
Multi-Fluid Interpenetration Mixing in X-ray and Directly Laser driven ICF Capsule Implosions
NASA Astrophysics Data System (ADS)
Wilson, Douglas
2003-10-01
Mix between a surrounding shell and the fuel leads to degradation in ICF capsule performance. Both indirectly (X-ray) and directly laser driven implosions provide a wealth of data to test mix models. One model, the multi-fluid interpenetration mix model of Scannapieco and Cheng (Phys. Lett. A 299, 49, 2002), was implemented in an ICF code and applied to a wide variety of experiments (e.g., J. D. Kilkenny et al., Proc. Conf. Plasm. Phys. Contr. Nuc. Fus. Res. 3, 29 (1988); P. Amendt, R. E. Turner, O. L. Landen, Phys. Rev. Lett. 89, 165001 (2002); or Li et al., Phys. Rev. Lett. 89, 165002 (2002)). With its single adjustable parameter fixed, it replicates well the yield degradation with increasing convergence ratio for both directly and indirectly driven capsules. Often, but not always, the ion temperatures with mixing are calculated to be higher than in an unmixed implosion, agreeing with observations. Comparison with measured directly driven implosion yield rates (from the neutron temporal diagnostic, or NTD) shows that mixing increases rapidly during the burn. The model also reproduces the decrease of the fuel "rho-r" with fill gas pressure, measured by observing escaping deuterons or secondary neutrons. The mix model assumes fully atomically mixed constituents, but when experiments with deuterated plastic layers and 3He fuel are modeled, less than full atomic mix is appropriate. Applying the mix model to the ablator - solid DT interface in indirectly driven ignition capsules for the NIF or LMJ suggests that the capsules will ignite, but that burn after ignition may be somewhat degraded. Situations in which the Scannapieco and Cheng model fails to agree with experiments can guide us to improvements or the development of other models. Some directly driven symmetric implosions suggest that in highly mixed situations a higher value of the mix parameter may be needed. Others show the model underestimating the fuel burn temperature. This work was performed by the Los Alamos National Laboratory under DOE contract number W-7405-Eng-36.
Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.
Berrios, Raul; Totterdell, Peter; Kellett, Stephen
2015-01-01
The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis, using a random-effects model, revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value of a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
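A random-effects meta-analysis of the kind described pools per-study effect sizes with weights that include a between-study variance component. The sketch below uses the standard DerSimonian-Laird estimator as an illustration; the effect sizes and variances are invented and the moderator analyses are not reproduced.

import numpy as np

def random_effects_pooled(d, v):
    # DerSimonian-Laird random-effects pooled effect size.
    # d: per-study effect sizes, v: their within-study variances.
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)                  # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)             # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

d = np.array([0.9, 0.6, 1.1, 0.4, 0.8])    # illustrative study effect sizes
v = np.array([0.04, 0.06, 0.09, 0.05, 0.07])
print(random_effects_pooled(d, v))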
How to test validity in orthodontic research: a mixed dentition analysis example.
Donatelli, Richard E; Lee, Shin-Jae
2015-02-01
The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
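The leave-one-out procedure favoured above is easy to state in code: each observation is predicted from a model fitted to all remaining observations, and the errors are averaged. The sketch below applies it to a simple regression, with predictor and response values invented purely for illustration rather than taken from the mixed dentition data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n = 60
x = rng.normal(22.0, 1.5, size=(n, 1))             # e.g. sum of erupted tooth widths (mm)
y = 0.6 * x[:, 0] + 7.0 + rng.normal(0.0, 0.5, n)  # e.g. width of unerupted teeth (mm)

model = LinearRegression()
loo_mse = -cross_val_score(model, x, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
print(f"leave-one-out validation error (MSE): {loo_mse:.3f}")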
Solving large test-day models by iteration on data and preconditioned conjugate gradient.
Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A
1999-12-01
A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method and the other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (of size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised the diagonal blocks of the coefficient matrix. Comparison of the algorithms was based on solutions of mixed model equations obtained with a single-trait animal model and a single-trait random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data comprised 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
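A minimal preconditioned conjugate gradient loop makes the memory argument concrete: besides the data read each round, only a handful of solution-length vectors are kept. The sketch below uses a diagonal (Jacobi) preconditioner as a simple stand-in for the diagonal-block preconditioner described above, and a small random SPD system instead of mixed model equations.

import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    # Preconditioned conjugate gradients for A x = b, A symmetric positive
    # definite; M_inv(r) applies the inverse of the preconditioner to r.
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        step = rz / (p @ Ap)
        x += step * p
        r -= step * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
B = rng.normal(size=(50, 50))
A = B @ B.T + 50.0 * np.eye(50)                    # illustrative SPD system
b = rng.normal(size=50)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))      # Jacobi preconditioner

In a breeding-value application the matrix-vector product A @ p would be formed by iterating over the data rather than by storing A explicitly, which is the essence of iteration on data.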
ERIC Educational Resources Information Center
Yao, Lihua; Schwarz, Richard D.
2006-01-01
Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…
Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S
2016-01-01
Maternal-offspring gene interactions, also known as maternal-fetal genotype (MFG) incompatibilities, are neglected in complex disease and quantitative trait studies. They are implicated in diseases with onsets ranging from birth to adulthood, but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model in which maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show that they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which only considers the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.
Quantitative computer simulations of extraterrestrial processing operations
NASA Technical Reports Server (NTRS)
Vincent, T. L.; Nikravesh, P. E.
1989-01-01
The automation of a small, solid propellant mixer was studied. Temperature control is under investigation. A numerical simulation of the system is under development and will be tested using different control options. Control system hardware is currently being put into place. The construction of mathematical models and simulation techniques for understanding various engineering processes is also studied. Computer graphics packages were utilized for better visualization of the simulation results. The mechanical mixing of propellants is examined. Simulation of the mixing process is being done to study how one can control for chaotic behavior to meet specified mixing requirements. An experimental mixing chamber is also being built. It will allow visual tracking of particles under mixing. The experimental unit will be used to test ideas from chaos theory, as well as to verify simulation results. This project has applications to extraterrestrial propellant quality and reliability.
NASA Astrophysics Data System (ADS)
Verma, Shashi Kant; Sinha, S. L.; Chandraker, D. K.
2018-05-01
Numerical simulation has been carried out to study the natural mixing of a tracer (passive scalar), describing the development of turbulent diffusion in an injected sub-channel and, subsequently, cross-mixing between adjacent sub-channels. In this investigation, a post-benchmark evaluation of the inter-subchannel mixing was initiated to test the ability of state-of-the-art Computational Fluid Dynamics (CFD) codes to numerically predict the important turbulence parameters downstream of a ring-type spacer grid in a rod bundle. A three-dimensional CFD tool (STAR-CCM+) was used to model the single-phase flow through a 30° segment, or 1/12th of the cross section, of a 54-rod bundle with a ring-shaped spacer grid. Polyhedral cells were used to discretize the computational domain, along with prismatic cells near the walls, with an overall mesh count of 5.2 M cell volumes. The Reynolds stress model (RSM) was tested because it accounts for turbulence anisotropy, to assess its capability in predicting the velocities as well as the mass fraction of potassium nitrate measured in the experiment. Line probes located at different positions in the subchannels were used to characterize the progress of the mixing along the flow direction, and the degree of cross-mixing was assessed using the quantity of tracer arriving in the neighbouring sub-channels. The predicted dimensionless mixing scalar along the length was in good agreement with the measurements downstream of the spacers.
Rheology and Extrusion of Cement-Fly Ashes Pastes
NASA Astrophysics Data System (ADS)
Micaelli, F.; Lanos, C.; Levita, G.
2008-07-01
The addition of fly ashes to cement pastes is tested to optimize the forming of cement-based materials by extrusion. Two sizes of fly ash grains are examined. The rheology of concentrated suspensions of ash mixes is studied with a parallel-plate rheometer. In the stationary flow state, the viscosities of the tested suspensions are satisfactorily described by the Krieger-Dougherty model. An "overlapped grain" suspension model able to describe the behaviour of bimodal suspensions is proposed. For higher values of the solid volume fraction, Bingham viscoplastic behaviour is identified. Results show that the plastic viscosity and plastic yield value reach their minima for the same optimal formulation of bimodal mixes. The rheological study is extended to more concentrated systems using an extruder. Finally, it is observed that the addition of 30 vol.% of the optimized ash mix produces a significant reduction of the required extrusion load.
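The Krieger-Dougherty relation referred to above expresses the relative viscosity of a suspension as eta_r = (1 - phi/phi_max)^(-[eta]*phi_max). A minimal sketch is given below; the maximum packing fraction and intrinsic viscosity are generic textbook values for spheres, not the fitted parameters of the ash pastes studied.

import numpy as np

def krieger_dougherty(phi, phi_max=0.64, intrinsic_visc=2.5):
    # Relative viscosity eta_r = (1 - phi/phi_max)**(-[eta]*phi_max).
    # phi_max ~ 0.64 (random close packing) and [eta] = 2.5 (hard spheres)
    # are illustrative defaults only.
    return (1.0 - phi / phi_max) ** (-intrinsic_visc * phi_max)

phi = np.linspace(0.0, 0.55, 12)                   # solid volume fractions
print(np.round(krieger_dougherty(phi), 2))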
NASA Astrophysics Data System (ADS)
Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.
2017-09-01
This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL: line balancing and model sequencing. In previous studies, many researchers considered these problems separately, and only a few studied them simultaneously, and then only for one-sided lines. In this study, the two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is formulated, considering a variable launching interval and an assignment restriction constraint. The problem is analysed using small-size test cases to validate the integrated model. Throughout this paper, numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the CPLEX solver. Experimental results indicate that integrating the problems of model sequencing and line balancing helps to minimise the proposed objective function.
NASA Technical Reports Server (NTRS)
Atlas, R. M.
1976-01-01
An advective mixed layer ocean model was developed by eliminating the assumption of horizontal homogeneity in an already existing mixed layer model, and then superimposing a mean and anomalous wind driven current field. This model is based on the principle of conservation of heat and mechanical energy and utilizes a box grid for the advective part of the calculation. Three phases of experiments were conducted: evaluation of the model's ability to account for climatological sea surface temperature (SST) variations in the cooling and heating seasons, sensitivity tests in which the effect of hypothetical anomalous winds was evaluated, and a thirty-day synoptic calculation using the model. For the case studied, the accuracy of the predictions was improved by the inclusion of advection, although nonadvective effects appear to have dominated.
Three Dimensional CFD Analysis of the GTX Combustor
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; Bond, R. B.; Edwards, J. R.
2002-01-01
The annular combustor geometry of a combined-cycle engine has been analyzed with three-dimensional computational fluid dynamics. Both subsonic combustion and supersonic combustion flowfields have been simulated. The subsonic combustion analysis was executed in conjunction with a direct-connect test rig. Two cold-flow and one hot-flow results are presented. The simulations compare favorably with the test data for the two cold-flow calculations; the hot-flow data were not yet available. The hot-flow simulation indicates that the conventional ejector-ramjet cycle would not provide adequate mixing at the conditions tested. The supersonic combustion ramjet flowfield was simulated with a frozen chemistry model. A five-parameter test matrix was specified according to statistical design-of-experiments theory. Twenty-seven separate simulations were used to assemble surrogate models for combustor mixing efficiency and total pressure recovery. Scramjet injector design parameters (injector angle, location, and fuel split) as well as mission variables (total fuel massflow and freestream Mach number) were included in the analysis. A promising injector design has been identified that provides good mixing characteristics with low total pressure losses. The surrogate models can be used to develop performance maps of different injector designs. Several complex three-way variable interactions appear within the dataset that are not adequately resolved with the current statistical analysis.
NASA Astrophysics Data System (ADS)
Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael
2015-06-01
Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first principle models and solving large system of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time-scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.
Analysis of membrane fusion as a two-state sequential process: evaluation of the stalk model.
Weinreb, Gabriel; Lentz, Barry R
2007-06-01
We propose a model that accounts for the time courses of PEG-induced fusion of membrane vesicles of varying lipid compositions and sizes. The model assumes that fusion proceeds from an initial, aggregated vesicle state ((A) membrane contact) through two sequential intermediate states (I(1) and I(2)) and then on to a fusion pore state (FP). Using this model, we interpreted data on the fusion of seven different vesicle systems. We found that the initial aggregated state involved no lipid or content mixing but did produce leakage. The final state (FP) was not leaky. Lipid mixing normally dominated the first intermediate state (I(1)), but content mixing signal was also observed in this state for most systems. The second intermediate state (I(2)) exhibited both lipid and content mixing signals and leakage, and was sometimes the only leaky state. In some systems, the first and second intermediates were indistinguishable and converted directly to the FP state. Having also tested a parallel, two-intermediate model subject to different assumptions about the nature of the intermediates, we conclude that a sequential, two-intermediate model is the simplest model sufficient to describe PEG-mediated fusion in all vesicle systems studied. We conclude as well that a fusion intermediate "state" should not be thought of as a fixed structure (e.g., "stalk" or "transmembrane contact") of uniform properties. Rather, a fusion "state" describes an ensemble of similar structures that can have different mechanical properties. Thus, a "state" can have varying probabilities of having a given functional property such as content mixing, lipid mixing, or leakage. Our data show that the content mixing signal may occur through two processes, one correlated and one not correlated with leakage. Finally, we consider the implications of our results in terms of the "modified stalk" hypothesis for the mechanism of lipid pore formation. We conclude that our results not only support this hypothesis but also provide a means of analyzing fusion time courses so as to test it and gauge the mechanism of action of fusion proteins in the context of the lipidic hypothesis of fusion.
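The sequential scheme A -> I1 -> I2 -> FP can be written as a small system of first-order rate equations whose state populations are then weighted by per-state probabilities of lipid mixing, content mixing, or leakage to reproduce the observed time courses. The sketch below integrates such a scheme with invented rate constants and signal weights; it illustrates the model structure only, not the fitted parameters of the study.

import numpy as np
from scipy.integrate import solve_ivp

def sequential_fusion(t, y, k1, k2, k3):
    # A -> I1 -> I2 -> FP with first-order rate constants k1, k2, k3.
    A, I1, I2, FP = y
    return [-k1 * A,
            k1 * A - k2 * I1,
            k2 * I1 - k3 * I2,
            k3 * I2]

k1, k2, k3 = 0.05, 0.02, 0.01                      # illustrative rates (1/s)
sol = solve_ivp(sequential_fusion, (0.0, 600.0), [1.0, 0.0, 0.0, 0.0],
                args=(k1, k2, k3), t_eval=np.linspace(0.0, 600.0, 200))

# an observable such as content mixing modelled as a weighted sum of states,
# e.g. partial signal from the intermediates and full signal from FP
content_mixing = 0.1 * sol.y[1] + 0.3 * sol.y[2] + 1.0 * sol.y[3]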
Pulse Jet Mixing Tests With Noncohesive Solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Perry A.; Bamberger, Judith A.; Enderlin, Carl W.
2009-05-11
This report summarizes results from pulse jet mixing (PJM) tests with noncohesive solids in Newtonian liquid conducted during FY 2007 and 2008 to support the design of mixing systems for the Hanford Waste Treatment and Immobilization Plant (WTP). Tests were conducted at three geometric scales using noncohesive simulants. The test data were used to independently develop mixing models that can be used to predict full-scale WTP vessel performance and to rate current WTP mixing system designs against two specific performance requirements. One requirement is to ensure that all solids have been disturbed during the mixing action, which is important to release gas from the solids. The second requirement is to maintain a suspended solids concentration below 20 weight percent at the pump inlet. The models predict the height to which solids will be lifted by the PJM action and the minimum velocity needed to ensure all solids have been lifted from the floor. From the cloud height estimate we can calculate the concentration of solids at the pump inlet. The velocity needed to lift the solids is slightly more demanding than "disturbing" the solids, and is used as a surrogate for this metric. We applied the models to assess WTP mixing vessel performance with respect to the two performance requirements. Each mixing vessel was evaluated against these two criteria for two defined waste conditions. One of the wastes was defined by design limits and one was derived from Hanford waste characterization reports. The assessment predicts that three vessel types will satisfy the design criteria for all conditions evaluated. Seven vessel types will not satisfy the performance criteria used for any of the conditions evaluated. The remaining three vessel types provide varying assessments when the different particle characteristics are evaluated. The HLP-022 vessel was also evaluated using 12 m/s pulse jet velocity with 6-in. nozzles, and this design also did not satisfy the criteria for all of the conditions evaluated.
Building out a Measurement Model to Incorporate Complexities of Testing in the Language Domain
ERIC Educational Resources Information Center
Wilson, Mark; Moore, Stephen
2011-01-01
This paper provides a summary of a novel and integrated way to think about the item response models (most often used in measurement applications in social science areas such as psychology, education, and especially testing of various kinds) from the viewpoint of the statistical theory of generalized linear and nonlinear mixed models. In addition,…
Mixed-Mode Decohesion Elements for Analyses of Progressive Delamination
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Camanho, Pedro P.; deMoura, Marcelo F.
2001-01-01
A new 8-node decohesion element with mixed-mode capability is proposed and demonstrated. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a strain-softening law to track the damage state of the interface. The method can be used in conjunction with conventional material degradation procedures to account for in-plane and intra-laminar damage modes. The accuracy of the predictions is evaluated in single-mode delamination tests, in the mixed-mode bending test, and in a structural configuration consisting of the debonding of a stiffener flange from its skin.
NASA Technical Reports Server (NTRS)
Janardan, B. A.; Hoff, G. E.; Barter, J. W.; Martens, S.; Gliebe, P. R.; Mengle, V.; Dalton, W. N.; Saiyed, Naseem (Technical Monitor)
2000-01-01
This report describes the work performed by General Electric Aircraft Engines (GEAE) and Allison Engine Company (AEC) on NASA Contract NAS3-27720 AoI 14.3. The objective of this contract was to generate quality jet noise acoustic data for separate-flow nozzle models and to design and verify new jet-noise-reduction concepts over a range of simulated engine cycles and flight conditions. Five baseline axisymmetric separate-flow nozzle models having bypass ratios of five and eight with internal and external plugs and 11 different mixing-enhancer model nozzles (including chevrons, vortex-generator doublets, and a tongue mixer) were designed and tested in model scale. Using available core and fan nozzle hardware in various combinations, 28 GEAE/AEC separate-flow nozzle/mixing-enhancer configurations were acoustically evaluated in the NASA Glenn Research Center Aeroacoustic and Propulsion Laboratory. This report describes model nozzle features, facility and data acquisition/reduction procedures, the test matrix, and measured acoustic data analyses. A number of tested core and fan mixing enhancer devices and combinations of devices gave significant jet noise reduction relative to separate-flow baseline nozzles. Inward-flip and alternating-flip core chevrons combined with a straight-chevron fan nozzle exceeded the NASA stretch goal of 3 EPNdB jet noise reduction at typical sideline certification conditions.
Design of a rapid magnetic microfluidic mixer
NASA Astrophysics Data System (ADS)
Ballard, Matthew; Owen, Drew; Mills, Zachary Grant; Hanasoge, Srinivas; Hesketh, Peter; Alexeev, Alexander
2015-11-01
Using three-dimensional simulations and experiments, we demonstrate rapid mixing of fluid streams in a microchannel using orbiting magnetic microbeads. We use a lattice Boltzmann model coupled to a Brownian dynamics model to perform numerical simulations that study in depth the effect of system parameters such as channel configuration and fluid and bead velocities. We use our findings to aid the design of an experimental micromixer. Using this experimental device, we demonstrate rapid microfluidic mixing over a compact channel length, and validate our numerical simulation results. Finally, we use numerical simulations to study the physical mechanisms leading to microfluidic mixing in our system. Our findings demonstrate a promising method of rapid microfluidic mixing over a short distance, with applications in lab-on-a-chip sample testing.
NASA Astrophysics Data System (ADS)
Robati, Masoud
This doctoral program focuses on evaluating and improving the rutting resistance of micro-surfacing mixtures. Many problems related to the rutting resistance of micro-surfacing mixtures still require further work to be solved. The main objective of this Ph.D. program is to experimentally and analytically study and improve the rutting resistance of micro-surfacing mixtures. The major aspects investigated during this Ph.D. program are presented as follows: 1) evaluation of a modification of current micro-surfacing mix design procedures: on the basis of this effort, a new mix design procedure is proposed for type III micro-surfacing mixtures as rut-fill materials on the road surface; unlike the current mix design guidelines and specifications, the new mix design is capable of selecting the optimum mix proportions for micro-surfacing mixtures; 2) evaluation of test methods and selection of aggregate grading for type III application of micro-surfacing: within this study, a new specification for the selection of aggregate grading for type III application of micro-surfacing is proposed; 3) evaluation of repeatability and reproducibility of micro-surfacing mixture design tests: in this study, limits for the repeatability and reproducibility of micro-surfacing mix design tests are presented; 4) a new conceptual model for the filler stiffening effect on the asphalt mastic of micro-surfacing: a new model is proposed that is able to establish limits for minimum and maximum filler concentrations in the micro-surfacing mixture based only on the filler's important physical and chemical properties; 5) incorporation of reclaimed asphalt pavement and post-fabrication asphalt shingles in micro-surfacing mixtures: the effectiveness of the newly developed mix design procedure for micro-surfacing mixtures is further validated using recycled materials, and the results define limits for the amounts of RAP and RAS in micro-surfacing mixtures; 6) new colored micro-surfacing formulations with improved durability and performance: a significant improvement of around 45% in the rutting resistance of colored and conventional micro-surfacing mixtures is achieved by employing low-penetration-grade bitumen polymer-modified asphalt emulsion stabilized using nanoparticles.
NASA Astrophysics Data System (ADS)
Manolakis, Dimitris G.
2004-10-01
The linear mixing model is widely used in hyperspectral imaging applications to model the reflectance spectra of mixed pixels in the SWIR atmospheric window or the radiance spectra of plume gases in the LWIR atmospheric window. In both cases it is important to detect the presence of materials or gases and then estimate their amount, if they are present. The detection and estimation algorithms available for these tasks are related but not identical. The objective of this paper is to theoretically investigate how the heavy tails observed in hyperspectral background data affect the quality of abundance estimates and how robust the F-test, used for endmember selection, is to the presence of heavy tails when the model fits the data.
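The abundance-estimation step under the linear mixing model can be illustrated with a short sketch. The example below is not the paper's estimator; it simply solves y = E a for non-negative abundances using non-negative least squares on synthetic endmember spectra, with all values chosen arbitrarily for illustration.

```python
# Minimal sketch of abundance estimation under the linear mixing model,
# y = E @ a + noise. Endmember spectra E and the mixed pixel y are synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_endmembers = 50, 3
E = np.abs(rng.normal(size=(n_bands, n_endmembers)))   # endmember spectra (columns)
a_true = np.array([0.6, 0.3, 0.1])                     # true abundances
y = E @ a_true + 0.01 * rng.normal(size=n_bands)       # observed mixed-pixel spectrum

a_hat, _ = nnls(E, y)          # non-negative least-squares abundance estimates
a_hat /= a_hat.sum()           # optional sum-to-one renormalisation
print("estimated abundances:", np.round(a_hat, 3))
```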
Fitting the Mixed Rasch Model to a Reading Comprehension Test: Identifying Reader Types
ERIC Educational Resources Information Center
Baghaei, Purya; Carstensen, Claus H.
2013-01-01
Standard unidimensional Rasch models assume that persons with the same ability parameters are comparable. That is, the same interpretation applies to persons with identical ability estimates as regards the underlying mental processes triggered by the test. However, research in cognitive psychology shows that persons at the same trait level may…
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, M.; Zhang, D.; Wang, Z.; Wang, Y.
2017-12-01
Mixed-phase clouds are persistently observed over the Arctic and the phase partitioning between cloud liquid and ice hydrometeors in mixed-phase clouds has important impacts on the surface energy budget and Arctic climate. In this study, we test the NCAR Community Atmosphere Model Version 5 (CAM5) with the single-column and weather forecast configurations and evaluate the model performance against observation data from the DOE Atmospheric Radiation Measurement (ARM) Program's M-PACE field campaign in October 2004 and long-term ground-based multi-sensor remote sensing measurements. We find that, like most global climate models, CAM5 poorly simulates the phase partitioning in mixed-phase clouds by significantly underestimating the cloud liquid water content. Assuming pocket structures in the distribution of cloud liquid and ice in mixed-phase clouds, as suggested by in situ observations, provides a plausible solution to improve the model performance by reducing the Wegener-Bergeron-Findeisen (WBF) process rate. In this study, the modification of the WBF process in the CAM5 model has been achieved by applying a stochastic perturbation to the time scale of the WBF process relevant to both ice and snow to account for the heterogeneous mixture of cloud liquid and ice. Our results show that this modification of the WBF process improves the modeled phase partitioning in mixed-phase clouds. The seasonal variation of mixed-phase cloud properties is also better reproduced in the model in comparison with the long-term ground-based remote sensing observations. Furthermore, the phase partitioning is insensitive to the reassignment time step of perturbations.
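A stochastic perturbation of a process timescale, of the kind described in the abstract, can be sketched in a few lines. The snippet below is not the CAM5 implementation; the lognormal multiplier with median 1 and the nominal timescale are assumptions made purely for illustration.

```python
# Illustrative sketch (not the CAM5 code): multiply a nominal WBF depletion
# timescale by a random factor, as one simple way to represent heterogeneous
# ("pocket") mixtures of liquid and ice. The lognormal form is an assumption.
import numpy as np

def perturb_wbf_timescale(tau_wbf, sigma=0.5, rng=None):
    """Return a randomly perturbed WBF timescale [s] and the implied rate [1/s]."""
    rng = rng or np.random.default_rng()
    factor = rng.lognormal(mean=0.0, sigma=sigma)   # median-1 multiplier (assumed form)
    tau = tau_wbf * factor
    return tau, 1.0 / tau

rng = np.random.default_rng(42)
taus = [perturb_wbf_timescale(600.0, rng=rng)[0] for _ in range(5)]  # 600 s nominal (assumed)
print("perturbed timescales [s]:", [round(t, 1) for t in taus])
```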
Decohesion Elements using Two and Three-Parameter Mixed-Mode Criteria
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Camanho, Pedro P.
2001-01-01
An eight-node decohesion element implementing different criteria to predict delamination growth under mixed-mode loading is proposed. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a softening law to track the damage state of the interface. The power law criterion and a three-parameter mixed-mode criterion are used to predict delamination growth. The accuracy of the predictions is evaluated in single mode delamination and in the mixed-mode bending tests.
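The power-law criterion named in the abstract can be written as a one-line failure index. The sketch below uses placeholder toughness values and exponent; it does not reproduce the paper's calibrated parameters or its three-parameter criterion.

```python
# Sketch of the power-law mixed-mode propagation criterion: growth is predicted
# when (G_I/G_Ic)**alpha + (G_II/G_IIc)**alpha >= 1. All numbers are placeholders.
def power_law_failure_index(G_I, G_II, G_Ic, G_IIc, alpha=1.0):
    return (G_I / G_Ic) ** alpha + (G_II / G_IIc) ** alpha

# Example mode-I and mode-II energy release rates (placeholder values, N/mm)
index = power_law_failure_index(G_I=0.15, G_II=0.30, G_Ic=0.3, G_IIc=0.8, alpha=1.0)
print("failure index:", round(index, 3), "-> growth" if index >= 1.0 else "-> no growth")
```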
Extension of the Haseman-Elston regression model to longitudinal data.
Won, Sungho; Elston, Robert C; Park, Taesung
2006-01-01
We propose an extension to longitudinal data of the Haseman and Elston regression method for linkage analysis. The proposed model is a mixed model having several random effects. As response variable, we investigate the sibship sample mean corrected cross-product (smHE) and the BLUP-mean corrected cross product (pmHE), comparing them with the original squared difference (oHE), the overall mean corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data. Also, the model can test for gene x time interaction to discover genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well. Copyright 2006 S. Karger AG, Basel.
Study of the 190Hg Nucleus: Testing the Existence of U(5) Symmetry
NASA Astrophysics Data System (ADS)
Jahangiri Tazekand, Z.; Mohseni, M.; Mohammadi, M. A.; Sabri, H.
2018-06-01
In this paper, we have considered the energy spectra, quadrupole transition probabilities, energy surface, charge radii, and quadrupole moment of the 190Hg nucleus to describe the interplay between phase transitions and configuration mixing of intruder excitations. To this aim, we have used four different formalisms: (i) the interacting boson model including configuration mixing, (ii) the Z(5) critical symmetry, (iii) a U(6)-based transitional Hamiltonian, and (iv) a transitional interacting boson model Hamiltonian in both interacting boson model (IBM)-1 and IBM-2 versions, which are based on the affine SU(1,1) Lie algebra. The results show the advantages of the configuration-mixing and transitional Hamiltonians, in particular the IBM-2 formalism, in reproducing the experimental counterparts when the weight of the spherical symmetry is increased.
Statistical inference methods for sparse biological time series data.
Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita
2011-04-25
Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed effect regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and associate significance levels to these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had been--or had not been--preconditioned by heat during growth. Furthermore, pair-wise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
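The fixed-effects part of the preferred model, a three-parameter logistic curve, can be fitted to a single profile in a few lines. The sketch below uses synthetic data and scipy's curve_fit; it omits the subject-level random effects that the study's nonlinear mixed-effects analysis adds on top of such a curve.

```python
# Minimal sketch: fit a three-parameter logistic, y(t) = A / (1 + exp(-k*(t - t0))),
# to one synthetic, sparse time profile. Values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic3(t, A, k, t0):
    return A / (1.0 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 15)                                    # sparse time points (assumed units)
y = logistic3(t, A=10.0, k=0.15, t0=25.0) + 0.3 * rng.normal(size=t.size)

popt, pcov = curve_fit(logistic3, t, y, p0=[8.0, 0.1, 20.0])  # initial guess required
print("A, k, t0 =", np.round(popt, 3))
```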
Sonntag, Darrell B; Gao, H Oliver; Holmén, Britt A
2008-08-01
A linear mixed model was developed to quantify the variability of particle number emissions from transit buses tested in real-world driving conditions. Two conventional diesel buses and two hybrid diesel-electric buses were tested throughout 2004 under different aftertreatments, fuels, drivers, and bus routes. The mixed model controlled the confounding influence of factors inherent to on-board testing. Statistical tests showed that particle number emissions varied significantly according to the aftertreatment, bus route, driver, bus type, and daily temperature, with only minor variability attributable to differences between fuel types. The daily setup and operation of the sampling equipment (electrical low pressure impactor) and mini-dilution system contributed 30-84% of the total random variability of particle measurements among tests with diesel oxidation catalysts. By controlling for the sampling-day variability, the model better defined the differences in particle emissions among bus routes. In contrast, the low particle number emissions measured with diesel particle filters (decreased by over 99%) did not vary according to operating conditions or bus type but did vary substantially with ambient temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohne, Thomas; Kliem, Soren; Rohde, Ulrich
2006-07-01
Coolant mixing in the cold leg, downcomer and the lower plenum of pressurized water reactors is an important phenomenon mitigating the reactivity insertion into the core. Therefore, mixing of the de-borated slugs with the ambient coolant in the reactor pressure vessel was investigated at the four-loop 1:5 scaled ROCOM mixing test facility. Thermal hydraulics analyses showed that weakly borated condensate can accumulate, in particular, in the pump loop seal of those loops which do not receive safety injection. After refilling of the primary circuit, natural circulation in the stagnant loops can re-establish simultaneously and the de-borated slugs are shifted towards the reactor pressure vessel (RPV). In the ROCOM experiments, the length of the flow ramp and the initial density difference between the slugs and the ambient coolant were varied. From the test matrix, experiments with 0% and 2% density difference between the de-borated slugs and the ambient coolant were used to validate the CFD software ANSYS CFX. To model the effects of turbulence on the mean flow, a higher-order Reynolds stress turbulence model was employed and a mesh consisting of 6.4 million hybrid elements was utilized. Only the experiments and CFD calculations with modeled density differences show a stratification in the downcomer. Depending on the degree of density difference, the less dense slugs flow around the core barrel at the top of the downcomer. At the opposite side, the lower-borated coolant is entrained by the colder safety injection water and transported to the core. The validation proves that ANSYS CFX is able to appropriately simulate the flow field and mixing effects of coolant with different densities. (authors)
Internal Mixing Studied for GE/ARL Ejector Nozzle
NASA Technical Reports Server (NTRS)
Zaman, Khairul
2005-01-01
To achieve jet noise reduction goals for the High Speed Civil Transport aircraft, researchers have been investigating the mixer-ejector nozzle concept. For this concept, a primary nozzle with multiple chutes is surrounded by an ejector. The ejector mixes low-momentum ambient air with the hot engine exhaust to reduce the jet velocity and, hence, the jet noise. It is desirable to mix the two streams as fast as possible in order to minimize the length and weight of the ejector. An earlier model of the mixer-ejector nozzle was tested extensively in the Aerodynamic Research Laboratory (ARL) of GE Aircraft Engines at Cincinnati, Ohio. While testing was continuing with later generations of the nozzle, the earlier model was brought to the NASA Lewis Research Center for relatively fundamental measurements. Goals of the Lewis study were to obtain details of the flow field to aid computational fluid dynamics (CFD) efforts and obtain a better understanding of the flow mechanisms, as well as to experiment with mixing enhancement devices, such as tabs. The measurements were made in an open jet facility for cold (unheated) flow without a surrounding coflowing stream.
Energy Efficient Engine exhaust mixer model technology report addendum; phase 3 test program
NASA Technical Reports Server (NTRS)
Larkin, M. J.; Blatt, J. R.
1984-01-01
The Phase 3 exhaust mixer test program was conducted to explore the trends established during previous Phases 1 and 2. Combinations of mixer design parameters were tested. Phase 3 testing showed that the best performance achievable within tailpipe length and diameter constraints is 2.55 percent better than an optimized separate flow base line. A reduced penetration design achieved about the same overall performance level at a substantially lower level of excess pressure loss but with a small reduction in mixing. To improve reliability of the data, the hot and cold flow thrust coefficient analysis used in Phases 1 and 2 was augmented by calculating percent mixing from traverse data. Relative change in percent mixing between configurations was determined from thrust and flow coefficient increments. The calculation procedure developed was found to be a useful tool in assessing mixer performance. Detailed flow field data were obtained to facilitate calibration of computer codes.
Qu, Long; Guennel, Tobias; Marshall, Scott L
2013-12-01
Following the rapid development of genome-scale genotyping technologies, genetic association mapping has become a popular tool to detect genomic regions responsible for certain (disease) phenotypes, especially in early-phase pharmacogenomic studies with limited sample size. In response to such applications, a good association test needs to be (1) applicable to a wide range of possible genetic models, including, but not limited to, the presence of gene-by-environment or gene-by-gene interactions and non-linearity of a group of marker effects, (2) accurate in small samples, fast to compute on the genomic scale, and amenable to large scale multiple testing corrections, and (3) reasonably powerful to locate causal genomic regions. The kernel machine method represented in linear mixed models provides a viable solution by transforming the problem into testing the nullity of variance components. In this study, we consider score-based tests by choosing a statistic linear in the score function. When the model under the null hypothesis has only one error variance parameter, our test is exact in finite samples. When the null model has more than one variance parameter, we develop a new moment-based approximation that performs well in simulations. Through simulations and analysis of real data, we demonstrate that the new test possesses most of the aforementioned characteristics, especially when compared to existing quadratic score tests or restricted likelihood ratio tests. © 2013, The International Biometric Society.
Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari
2014-01-01
A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total make-span and minimize the setup number simultaneously. Trapezoidal fuzzy numbers are implemented for variables such as operation and travelling time in order to generate results with higher accuracy that are representative of real-case data. An improved genetic algorithm called the fuzzy adaptive genetic algorithm (FAGA) is proposed in order to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised, in which a fuzzy expert experience controller (FEEC) is integrated with an automatic learning dynamic fuzzy controller (ALDFC). The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidates, crossover rate, and mutation rate, rather than using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of these five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test-beds and testing on a multiobjective fuzzy mixed-model production assembly line sequencing optimization problem. The simulation results highlight that the proposed optimization algorithm is more efficient than the standard genetic algorithm on the mixed-model assembly line sequencing problem. PMID:24982962
Constitutive Behavior of Mixed Sn-Pb/Sn-3.0Ag-0.5Cu Solder Alloys
NASA Astrophysics Data System (ADS)
Tucker, J. P.; Chan, D. K.; Subbarayan, G.; Handwerker, C. A.
2012-03-01
During the transition from Pb-containing solders to Pb-free solders, joints composed of a mixture of Sn-Pb and Sn-Ag-Cu often result from either mixed assemblies or rework. Comprehensive characterization of the mechanical behavior of these mixed solder alloys resulting in a deformationally complete constitutive description is necessary to predict failure of mixed alloy solder joints. Three alloys with 1 wt.%, 5 wt.%, and 20 wt.% Pb were selected so as to represent reasonable ranges of Pb contamination expected from different 63Sn-37Pb components mixed with Sn-3.0Ag-0.5Cu. Creep and displacement-controlled tests were performed on specially designed assemblies at temperatures of 25°C, 75°C, and 125°C using a double lap shear test setup that ensures a nearly homogeneous state of plastic strain at the joint interface. The observed changes in creep and tensile behavior with Pb additions were related to phase equilibria and microstructure differences observed through differential scanning calorimetric and scanning electron microscopic cross-sectional analysis. As Pb content increased, the steady-state creep strain rates increased, and primary creep decreased. Even 1 wt.% Pb addition was sufficient to induce substantially large creep strains relative to the Sn-3.0Ag-0.5Cu alloy. We describe rate-dependent constitutive models for Pb-contaminated Sn-Ag-Cu solder alloys, ranging from the traditional time-hardening creep model to the viscoplastic Anand model. We illustrate the utility of these constitutive models by examining the inelastic response of a chip-scale package (CSP) under thermomechanical loading through finite-element analysis. The models predict that, as Pb content increases, total inelastic dissipation decreases.
Eliciting mixed emotions: a meta-analysis comparing models, types, and measures
Berrios, Raul; Totterdell, Peter; Kellett, Stephen
2015-01-01
The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805
Generalized functional linear models for gene-based case-control association studies.
Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao
2014-11-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.
NASA Technical Reports Server (NTRS)
Conley, Julianne M.; Leonard, B. P.
1994-01-01
The modified mixing length (MML) turbulence model was installed in the Proteus Navier-Stokes code, then modified to make it applicable to a wider range of flows typical of aerospace propulsion applications. The modifications are based on experimental data for three flat-plate flows having zero, mild adverse, and strong adverse pressure gradients. Three transonic diffuser test cases were run with the new version of the model in order to evaluate its performance. All results are compared with experimental data and show improvements over calculations made using the Baldwin-Lomax turbulence model, the standard algebraic model in Proteus.
Matacchiera, F; Manes, C; Beaven, R P; Rees-White, T C; Boano, F; Mønster, J; Scheutz, C
2018-02-13
The measurement of methane emissions from landfills is important to the understanding of landfills' contribution to greenhouse gas emissions. The Tracer Dispersion Method (TDM) is becoming widely accepted as a technique, which allows landfill emissions to be quantified accurately provided that measurements are taken where the plumes of a released tracer-gas and landfill-gas are well-mixed. However, the distance at which full mixing of the gases occurs is generally unknown prior to any experimental campaign. To overcome this problem the present paper demonstrates that, for any specific TDM application, a simple Gaussian dispersion model (AERMOD) can be run beforehand to help determine the distance from the source at which full mixing conditions occur, and the likely associated measurement errors. An AERMOD model was created to simulate a series of TDM trials carried out at a UK landfill, and was benchmarked against the experimental data obtained. The model was used to investigate the impact of different factors (e.g. tracer cylinder placements, wind directions, atmospheric stability parameters) on TDM results to identify appropriate experimental set ups for different conditions. The contribution of incomplete vertical mixing of tracer and landfill gas on TDM measurement error was explored using the model. It was observed that full mixing conditions at ground level do not imply full mixing over the entire plume height. However, when full mixing conditions were satisfied at ground level, then the error introduced by variations in mixing higher up were always less than 10%. Copyright © 2018. Published by Elsevier Ltd.
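The full-mixing idea behind the Tracer Dispersion Method can be illustrated with a textbook Gaussian plume rather than AERMOD. In the sketch below, the tracer-to-landfill concentration ratio at the centreline approaches the true emission ratio once the plumes overlap sufficiently; the power-law dispersion coefficients, source strengths, and source separation are all assumptions for illustration.

```python
# Sketch of the TDM full-mixing concept using a ground-level Gaussian plume
# (not AERMOD). Where the tracer/landfill concentration ratio stops changing
# with downwind distance, the plumes can be treated as well mixed.
import numpy as np

def sigma_y(x):  # lateral dispersion [m], crude neutral-stability power law (assumed)
    return 0.08 * x ** 0.9

def sigma_z(x):  # vertical dispersion [m] (assumed)
    return 0.06 * x ** 0.85

def ground_conc(Q, x, y, u=4.0):
    """Ground-level concentration of a ground-level point source of strength Q [g/s]
    at downwind distance x [m] and crosswind offset y [m], wind speed u [m/s]."""
    sy, sz = sigma_y(x), sigma_z(x)
    return Q / (np.pi * u * sy * sz) * np.exp(-0.5 * (y / sy) ** 2)

Q_tracer, Q_landfill, dx_src = 0.5, 5.0, 150.0   # g/s, g/s, source separation [m] (assumed)
for x in [200, 500, 1000, 2000, 4000]:
    ratio = ground_conc(Q_tracer, x, 0.0) / ground_conc(Q_landfill, x + dx_src, 0.0)
    print(f"x = {x:5d} m   tracer/landfill ratio = {ratio:.4f}")
```

At large distances the printed ratio converges towards the true emission ratio (0.1 here), which is the condition the TDM relies on when converting measured concentration ratios into emission rates.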
Study of the abrasive resistance of foundry models obtained using additive technology
NASA Astrophysics Data System (ADS)
Ol'khovik, Evgeniy
2017-10-01
The present study considers the abrasive wear resistance of foundry models and patterns made from ABS and PLA plastics produced by 3D printing with FDM additive technology, and their durability in a foundry sand mould environment. The article describes the technique and equipment used to test casting models and patterns for wear, as well as the manufacture of the models on a 3D printer. A vibration-loading scheme was applied to the test samples, and tests were organized under realistic abrasive-wear conditions to best characterize the influence of the sand mix on the plastic. The application of an acrylic paint coat and of a two-component coating to the plastic models was also examined. Practical recommendations are given for producing master models with FDM technology that achieve durability exceeding 2000 moulding cycles in foundry sand mix.
NASA Astrophysics Data System (ADS)
Le Borgne, T.; Bochet, O.; Klepikova, M.; Kang, P. K.; Shakas, A.; Aquilina, L.; Dufresne, A.; Linde, N.; Dentz, M.; Bour, O.
2016-12-01
Transport processes in fractured media and associated reactions are governed by multiscale heterogeneity ranging from fracture wall roughness at small scale to broadly distributed fracture lengths at network scale. This strong disorder induces a variety of emerging phenomena, including flow channeling, anomalous transport and heat transfer, enhanced mixing and reactive hotspot development. These processes are generally difficult to isolate and monitor in the field because of the high degree of complexity and coupling between them. We report in situ experimental observations from the Ploemeur fractured rock observatory (http://hplus.ore.fr/en/ploemeur) that provide new insights on the dynamics of transport and reaction processes in fractured media. These include dipole and push pull tracer tests that allow understanding and modelling anomalous transport processes characterized by heavy-tailed residence time distributions (Kang et al. 2015), thermal push pull tests that show the existence of highly channeled flow with a strong control on fracture matrix exchanges (Klepikova et al. 2016) and time lapse hydrogeophysical monitoring of saline tracer tests that allow quantifying the distribution of transport length scales governing dispersion processes (Shakas et al. 2016). These transport processes are then shown to induce rapid oxygen delivery and mixing at depth leading to massive biofilm development (Bochet et al., in prep.). Hence, this presentation will attempt to link these observations made at different scales to quantify and model the coupling between flow channeling, non-Fickian transport, mixing and chemical reactions in fractured media. References: Bochet et al., Biofilm blooms driven by enhanced mixing in fractured rock, in prep.; Klepikova et al. (2016), Heat as a tracer for understanding transport processes in fractured media: theory and field assessment from multi-scale thermal push-pull tracer tests, Water Resour. Res., 52; Shakas et al. (2016), Hydrogeophysical characterization of transport processes in fractured rock by combining push-pull and single-hole ground penetrating radar experiments, Water Resour. Res., 52; Kang et al. (2015), Impact of velocity correlation and distribution on transport in fractured media: Field evidence and theoretical model, Water Resour. Res., 51.
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Bushnell, D. M.
1973-01-01
Prandtl's basic mixing length model was used to compute 22 test cases on free turbulent shear flows. The calculations employed appropriate algebraic length scale equations and single values of mixing length constant for planar and axisymmetric flows, respectively. Good agreement with data was obtained except for flows, such as supersonic free shear layers, where large sustained sensitivity changes occur. The inability to predict the more gradual mixing in these flows is tentatively ascribed to the presence of a significant turbulence-induced transverse static pressure gradient which is neglected in conventional solution procedures. Some type of an equation for length scale development was found to be necessary for successful computation of highly nonsimilar flow regions such as jet or wake development from thick wall flows.
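The mixing-length closure referred to in the abstract can be sketched generically for a planar free shear layer: the eddy viscosity is computed as nu_t = l_m^2 |dU/dy| with the mixing length tied to the local layer width. The constant, the width estimate, and the tanh mean profile below are generic illustrations, not the report's calibrated values or flows.

```python
# Generic mixing-length eddy-viscosity evaluation for a planar free shear layer.
# l_m = c * b, where b is the 10%-90% velocity-thickness of the layer (assumed definition).
import numpy as np

def eddy_viscosity(y, U, c=0.07):
    dUdy = np.gradient(U, y)
    dU = U.max() - U.min()
    lo = y[np.argmax(U > U.min() + 0.1 * dU)]   # 10% crossing (U assumed monotonic)
    hi = y[np.argmax(U > U.min() + 0.9 * dU)]   # 90% crossing
    l_m = c * (hi - lo)                         # mixing length proportional to layer width
    return l_m ** 2 * np.abs(dUdy)

y = np.linspace(-1.0, 1.0, 201)                 # cross-stream coordinate [m]
U = 50.0 + 25.0 * np.tanh(y / 0.2)              # assumed tanh mean profile, 25-75 m/s
nu_t = eddy_viscosity(y, U)
print("peak eddy viscosity [m^2/s]:", round(float(nu_t.max()), 4))
```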
An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames
NASA Astrophysics Data System (ADS)
Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin
2015-11-01
Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of Probability Density Function (PDF) method still remain challenging due to the deficiency in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, this new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, which are a set of highly shear turbulent premixed flames and feature strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.
Analysis of the type II robotic mixed-model assembly line balancing problem
NASA Astrophysics Data System (ADS)
Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad
2017-06-01
In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.
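The beam-search idea behind the proposed heuristic can be illustrated with a stripped-down line-balancing sketch: tasks are assigned one at a time while only the most promising partial assignments are kept. Precedence constraints and the mixed-model weighting of task times are omitted here for brevity, so this is only a stand-in for the algorithm described in the paper.

```python
# Generic beam search for a type-II flavour of line balancing: assign tasks with
# given processing times to a fixed number of stations so that the cycle time
# (maximum station load) is minimised. Precedence relations are ignored (assumption).
def beam_search_balance(times, n_stations, beam_width=5):
    tasks = sorted(range(len(times)), key=lambda i: -times[i])   # longest-first ordering
    beam = [tuple([0] * n_stations)]                              # partial station loads
    for t in tasks:
        candidates = set()
        for loads in beam:
            for s in range(n_stations):
                new = list(loads)
                new[s] += times[t]
                candidates.add(tuple(sorted(new)))                # sorting removes station symmetry
        beam = sorted(candidates, key=max)[:beam_width]           # keep the best partial states
    best = beam[0]
    return max(best), best

task_times = [7, 3, 5, 2, 6, 4, 4, 8, 1, 5]                       # illustrative processing times
cycle_time, loads = beam_search_balance(task_times, n_stations=3, beam_width=10)
print("cycle time:", cycle_time, "station loads:", loads)
```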
A Non-Fickian Mixing Model for Stratified Turbulent Flows
2013-09-30
lateral gradients in the mixed layer, indicative of surface fronts, and with the magnitude of mixed layer depth MLD. Direct testing with our results shows...both are induced by atmospheric forcing. In our case, atmospheric fluxes and wind forcing are still the cause of SM occurrence, but mostly through their...California upwelling simulations, where MLD did not change significantly between HR and LR simulations. As suggested by Capet et al. (2008b), this is likely
DOT National Transportation Integrated Search
1997-11-01
Various agencies have used the Corps of Engineers gyratory testing machine (GTM) to design and test asphalt mixes. Materials properties such as shear strength and strain are measured during the compaction process. However, a compaction process duplic...
Alternative mathematical programming formulations for FSS synthesis
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J. A.; Levis, C. A.
1986-01-01
A variety of mathematical programming models and two solution strategies are suggested for the problem of allocating orbital positions to (synthesizing) satellites in the Fixed Satellite Service. Mixed integer programming and almost linear programming formulations are presented in detail for each of two objectives: (1) positioning satellites as closely as possible to specified desired locations, and (2) minimizing the total length of the geostationary arc allocated to the satellites whose positions are to be determined. Computational results for mixed integer and almost linear programming models, with the objective of positioning satellites as closely as possible to their desired locations, are reported for three six-administration test problems and a thirteen-administration test problem.
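The first objective, positioning satellites as closely as possible to desired locations subject to a minimum angular separation, can be sketched with an off-the-shelf modelling layer. PuLP is used here only for illustration and is not the tool of the report; the sketch also assumes the desired east-to-west ordering is retained, which keeps it a pure linear program, whereas the report's mixed integer formulations additionally decide the ordering.

```python
# Sketch of the "as close as possible to desired locations" objective with a
# minimum-separation constraint, assuming a fixed satellite ordering.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

desired = [10.0, 13.0, 15.0, 22.0]   # desired orbital longitudes [deg E], illustrative
sep = 3.0                            # required minimum separation [deg], assumed

prob = LpProblem("fss_synthesis_sketch", LpMinimize)
x = [LpVariable(f"x_{i}", lowBound=0, upBound=60) for i in range(len(desired))]
d = [LpVariable(f"d_{i}", lowBound=0) for i in range(len(desired))]   # |x_i - desired_i|
prob += lpSum(d)                     # minimise total deviation from desired locations

for i, want in enumerate(desired):   # linearise the absolute deviations
    prob += d[i] >= x[i] - want
    prob += d[i] >= want - x[i]
for i in range(len(desired) - 1):    # keep the desired ordering with minimum spacing
    prob += x[i + 1] - x[i] >= sep

prob.solve()
print([round(value(v), 2) for v in x], "total deviation:", round(value(prob.objective), 2))
```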
An a priori DNS study of the shadow-position mixing model
Zhao, Xin -Yu; Bhagatwala, Ankit; Chen, Jacqueline H.; ...
2016-01-15
The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found in the predicted locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate for evaluating the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
2017-09-03
Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles in the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model can perform very well for a wide range of flow problems.
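One of the verification tests listed, laminar flow in a channel of parallel plates, is typically checked against the analytic plane-Poiseuille profile. The sketch below shows that kind of comparison; the "numerical" profile is a placeholder (analytic profile plus noise), not SAM output, and all dimensions are assumed.

```python
# Sketch of a parallel-plate verification check: compare a computed laminar
# velocity profile with the analytic plane-Poiseuille solution
# u(y) = 1.5 * U_mean * (1 - (y/h)**2), where 2h is the channel gap.
import numpy as np

h, U_mean = 0.01, 0.2                               # half-gap [m], mean velocity [m/s] (assumed)
y = np.linspace(-h, h, 41)
u_exact = 1.5 * U_mean * (1.0 - (y / h) ** 2)

rng = np.random.default_rng(3)
u_num = u_exact + 0.002 * rng.normal(size=y.size)   # placeholder standing in for code output

l2_error = np.sqrt(np.mean((u_num - u_exact) ** 2)) / U_mean
print(f"normalised L2 error: {l2_error:.3e}")
```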
Rätz, H-J; Charef, A; Abella, A J; Colloca, F; Ligas, A; Mannini, A; Lloret, J
2013-10-01
A medium-term (10 year) stochastic forecast model is developed and presented for mixed fisheries that can provide estimations of age-specific parameters for a maximum of 10 stocks and 10 fisheries. Designed to support fishery managers dealing with complex, multi-annual management plans, the model can be used to quantitatively test the consequences of various stock-specific and fishery-specific decisions, using non-equilibrium stock dynamics. Such decisions include fishing restrictions and other strategies aimed at achieving sustainable mixed fisheries consistent with the concept of maximum sustainable yield (MSY). In order to test the model, recently gathered data on seven stocks and four fisheries operating in the Ligurian and North Tyrrhenian Seas are used to generate quantitative, 10 year predictions of biomass and catch trends under four different management scenarios. The results show that using the fishing mortality at MSY as the biological reference point for the management of all stocks would be a strong incentive to reduce the technical interactions among concurrent fishing strategies. This would optimize the stock-specific exploitation and be consistent with sustainability criteria. © 2013 The Fisheries Society of the British Isles.
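One building block of such a medium-term forecast is an age-structured projection with exponential survival and the Baranov catch equation. The sketch below projects a single stock under one fleet with constant recruitment; all parameter values are illustrative, and the published model couples many such stocks across several fisheries with stochastic elements.

```python
# Minimal single-stock projection: N_{a+1,t+1} = N_{a,t} * exp(-Z_a) with
# Z_a = F*sel_a + M, and catch from the Baranov equation
# C_a = (F*sel_a / Z_a) * (1 - exp(-Z_a)) * N_a. Values are illustrative.
import numpy as np

M = 0.2                                              # natural mortality (per year, assumed)
sel = np.array([0.1, 0.5, 0.9, 1.0, 1.0])            # selectivity at age (assumed)
N0 = np.array([1000.0, 600.0, 350.0, 200.0, 120.0])  # initial numbers at age (assumed)
recruitment = 1000.0                                 # constant recruitment (assumed)

def project(N, F_mult, years=10):
    N = N.copy()
    yields = []
    for _ in range(years):
        Z = F_mult * sel + M
        catch = (F_mult * sel / Z) * (1.0 - np.exp(-Z)) * N
        yields.append(catch.sum())
        survivors = N * np.exp(-Z)
        N[1:] = survivors[:-1]                       # age the population forward
        N[-1] += survivors[-1]                       # plus group accumulates
        N[0] = recruitment
    return N, yields

N10, annual_yield = project(N0, F_mult=0.3)
print("year-10 numbers at age:", np.round(N10, 1))
print("year-10 yield:", round(annual_yield[-1], 1))
```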
Runtime and Pressurization Analyses of Propellant Tanks
NASA Technical Reports Server (NTRS)
Field, Robert E.; Ryan, Harry M.; Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chung P.
2007-01-01
Multi-element unstructured CFD has been utilized at NASA SSC to carry out analyses of propellant tank systems in different modes of operation. The three regimes of interest at SSC include (a) tank chill down (b) tank pressurization and (c) runtime propellant draw-down and purge. While tank chill down is an important event that is best addressed with long time-scale heat transfer calculations, CFD can play a critical role in the tank pressurization and runtime modes of operation. In these situations, problems with contamination of the propellant by inclusion of the pressurant gas from the ullage causes a deterioration of the quality of the propellant delivered to the test article. CFD can be used to help quantify the mixing and propellant degradation. During tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant including heat transfer and phase change effects and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. It should be noted that traditional CFD modeling is inadequate for such simulations because the fluids in the tank are in a range of different sub-critical and supercritical states and elaborate phase change and mixing rules have to be developed to accurately model the interaction between the ullage gas and the propellant. We show a typical run-time simulation of a spherical propellant tank, containing RP-1 in this case, being pressurized with room-temperature nitrogen at 540 R. Nitrogen, shown in blue on the right-hand side of the figures, enters the tank from the diffuser at the top of the figures and impinges on the RP-1, shown in red, while the propellant is being continuously drained at the rate of 1050 lbs/sec through a pipe at the bottom of the tank. The sequence of frames in Figure 1 shows the resultant velocity fields and mixing between nitrogen and RP-1 in a cross-section of the tank at different times. A vortex is seen to form in the incoming nitrogen stream that tends to entrain propellant, mixing it with the pressurant gas. The RP-1 mass fraction contours in Figure 1 are also indicative of the level of mixing and contamination of the propellant. The simulation is used to track the propagation of the pure propellant front as it is drawn toward the exit with the evolution of the mixing processes in the tank. The CFD simulation modeled a total of 10 seconds of run time. As is seen from Figure 1d, after 5.65 seconds the propellant front is nearing the drain pipe, especially near the center of the tank. Behind this pure propellant front is a mixed fluid of compromised quality that would require the test to end when it reaches the exit pipe. Such unsteady simulations provide an estimate of the time that a high-quality propellant supply to the test article can be guaranteed at the modeled mass flow rate. In the final paper, we will discuss simulations of the LOX and propellant tanks at NASA SSC being pressurized by an inert ullage. 
Detailed comparisons will be made between the CFD simulations and lower order models as well as with test data. Conditions leading to cryo collapse in the tank will also be identified.
Computer design of microfluidic mixers for protein/RNA folding studies.
Inguva, Venkatesh; Kathuria, Sagar V; Bilsel, Osman; Perot, Blair James
2018-01-01
Kinetic studies of biological macromolecules increasingly use microfluidic mixers to initiate and monitor reaction progress. A motivation for using microfluidic mixers is to reduce sample consumption and decrease mixing time to microseconds. Some applications, such as small-angle x-ray scattering, also require large (>10 micron) sampling areas to ensure high signal-to-noise ratios and to minimize parasitic scattering. Chaotic to marginally turbulent mixers are well suited for these applications because this class of mixers provides a good middle ground between existing laminar and turbulent mixers. In this study, we model various chaotic to marginally turbulent mixing concepts such as flow turning, flow splitting, and vortex generation using computational fluid dynamics for optimization of mixing efficiency and observation volume. Design iterations show flow turning to be the best candidate for chaotic/marginally turbulent mixing. A qualitative experimental test is performed on the finalized design with mixing of 10 M urea and water to validate the flow turning unsteady mixing concept as a viable option for RNA and protein folding studies. A comparison of direct numerical simulations (DNS) and turbulence models suggests that the applicability of turbulence models to these flow regimes may be limited.
Transient thermal analysis for radioactive liquid mixing operations in a large-scaled tank
Lee, S. Y.; Smith, III, F. G.
2014-07-25
A transient heat balance model was developed to assess the impact of a Submersible Mixer Pump (SMP) on radioactive liquid temperature during the process of waste mixing and removal for the high-level radioactive materials stored in Savannah River Site (SRS) tanks. The model results will be mainly used to determine the SMP design impacts on the waste tank temperature during operations and to develop a specification for a new SMP design to replace the existing long-shaft mixer pumps used during waste removal. The present model was benchmarked against test data obtained from tank measurements to examine the quantitative thermal response of the tank and to establish the reference conditions of the operating variables under no SMP operation. The results showed that the model predictions agreed with the measured waste temperatures to within about 10%.
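A lumped transient heat balance of the kind described can be sketched as a single ordinary differential equation integrated in time. The coefficients below (thermal mass, pump and decay heat, loss coefficient) are illustrative assumptions, not tank-specific values from the report.

```python
# Sketch of a lumped transient heat balance: the bulk waste temperature responds
# to mixer-pump heat input, decay heat, and losses to the surroundings,
#     m*cp * dT/dt = Q_pump + Q_decay - U*A*(T - T_amb).
import numpy as np

m_cp = 4.0e9          # thermal mass of tank contents [J/K] (assumed)
Q_pump = 2.0e5        # heat added by submersible mixer pumps [W] (assumed)
Q_decay = 1.0e5       # radiolytic decay heat [W] (assumed)
UA = 1.5e4            # overall loss coefficient times area [W/K] (assumed)
T_amb = 25.0          # ambient / cooling temperature [C]

def simulate(T0=35.0, days=30, dt=3600.0):
    T, history = T0, []
    for _ in range(int(days * 86400 / dt)):
        dTdt = (Q_pump + Q_decay - UA * (T - T_amb)) / m_cp
        T += dTdt * dt                      # explicit Euler step (dt well below the time constant)
        history.append(T)
    return np.array(history)

T_hist = simulate()
print(f"temperature after 30 days of pump operation: {T_hist[-1]:.2f} C")
```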
ERIC Educational Resources Information Center
Edwards, Autumn; Edwards, Chad
2013-01-01
The purpose of this experiment was to test the influence of mixed reviews appearing as computer-mediated word-of-mouth communication (WOM) on student perceptions of instructors (attractiveness and credibility) and attitudes toward learning course content (affective learning and state motivation). Using the heuristic-systematic processing model, it…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.
2011-05-17
The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank to ensure uniformity of the discharge stream. Mixing is accomplished with one to four dual-nozzle slurry pumps located within the tank liquid. For this work, a Tank 48 simulation model with a maximum of four slurry pumps in operation has been developed to estimate flow patterns for efficient solid mixing. The modeling calculations were performed using two modeling approaches. One approach is a single-phase Computational Fluid Dynamics (CFD) model to evaluate the flow patterns and qualitative mixing behaviors for a range of different modeling conditions, since the model was previously benchmarked against the test results. The other is a two-phase CFD model to estimate solid concentrations in a quantitative way by solving the Eulerian governing equations for the continuous fluid and discrete solid phases over the entire fluid domain of Tank 48. The two-phase results should be considered preliminary scoping calculations since the model has not yet been validated against test results. A series of sensitivity calculations for different numbers of pumps and operating conditions has been performed to provide operational guidance for solids suspension and mixing in the tank. In the analysis, the pump was assumed to be stationary. Major solid obstructions including the pump housing, the pump columns, and the 82 inch central support column were included. The steady-state and three-dimensional analyses with a two-equation turbulence model were performed with FLUENT(TM) for the single-phase approach and CFX for the two-phase approach. Recommended operational guidance was developed assuming that local fluid velocity can be used as a measure of sludge suspension and spatial mixing under the single-phase tank model. For quantitative analysis, a two-phase fluid-solid model was developed for the same modeling conditions as the single-phase model. The modeling results show that the flow patterns driven by four-pump operation satisfy the solid suspension requirement, and the average solid concentration at the plane of the transfer pump inlet is about 12% higher than the tank average concentration for the 70 inch tank level and about the same as the tank average value for the 29 inch liquid level. When one of the four pumps is not operated, the flow patterns still satisfy the minimum suspension velocity criterion. However, the solid concentration near the tank bottom is increased by about 30%, although the average solid concentrations near the transfer pump inlet have about the same value as the four-pump baseline results. The flow pattern results show that although the two-pump case satisfies the minimum velocity requirement to suspend the sludge particles, it provides only marginal mixing results for the heavier or larger insoluble materials such as MST and KTPB particles. The results demonstrated that when more than one jet is aimed at the same position in the mixing tank domain, inefficient flow patterns result from the highly localized momentum dissipation, producing an inactive suspension zone. Thus, after completion of the indexed solids suspension, pump rotations are recommended to avoid producing nonuniform flow patterns.
It is noted that when the tank liquid level is reduced from the highest level of 70 inches to the minimum level of 29 inches for a given number of operating pumps, the solid mixing efficiency improves since the ratio of the pump power to the mixing volume becomes larger. These results are consistent with results reported in the literature.
New solutions to the constant-head test performed at a partially penetrating well
NASA Astrophysics Data System (ADS)
Chang, Y. C.; Yeh, H. D.
2009-05-01
The mathematical model describing the aquifer response to a constant-head test performed at a fully penetrating well can be easily solved by the conventional integral transform technique. In addition, the Dirichlet-type condition should be chosen as the boundary condition along the rim of wellbore for such a test well. However, the boundary condition for a test well with partial penetration must be considered as a mixed-type condition. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann type no-flow condition is specified over the unscreened part of the test well. The model for such a mixed boundary problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the dual series equations and perturbation method. This approach provides analytical results for the drawdown in the partially penetrating well and the well discharge along the screen. The semi-analytical solutions are particularly useful for the practical applications from the computational point of view.
1982-08-01
Fragment of report front matter: list-of-figures entries (IV.2 through IV.5) covering plume trajectory and concentration, the tank and cargo geometry assumed for discharge-rate calculations using the HACS venting rate model, and the original and final test plans for validating the continuous spill model, together with nomenclature excerpts defining the turbulent diffusivities (Ex, Ey, Ez), the water density, and the chemical density for the continuous-spill (steady river) models.
Yu, Ping; Pan, Yuesong; Wang, Yongjun; Wang, Xianwei; Liu, Liping; Ji, Ruijun; Meng, Xia; Jing, Jing; Tong, Xu; Guo, Li; Wang, Yilong
2016-01-01
A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to Model A but includes only the consciousness component of the NIHSS score. Both model A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by c-statistic. Calibration was assessed using Pearson's correlation coefficient. The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79-0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81-0.84). Excellent calibration was reported in the two models with Pearson's correlation coefficient (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke.
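The two validation metrics reported, discrimination (c-statistic) and calibration (Pearson correlation between observed and predicted mortality), can be computed with a short sketch. The predicted risks and outcomes below are synthetic, not China National Stroke Registry data, and the decile-based calibration grouping is an assumed convention.

```python
# Sketch of discrimination (c-statistic = area under the ROC curve) and a
# Pearson calibration check across deciles of predicted 30-day mortality risk.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
risk = rng.beta(2, 18, size=5000)                 # predicted 30-day mortality risk (synthetic)
died = rng.binomial(1, risk)                      # simulated outcomes

c_statistic = roc_auc_score(died, risk)

bins = np.quantile(risk, np.linspace(0, 1, 11))   # decile boundaries of predicted risk
group = np.digitize(risk, bins[1:-1])
observed = np.array([died[group == g].mean() for g in range(10)])
predicted = np.array([risk[group == g].mean() for g in range(10)])
r, p = pearsonr(observed, predicted)

print(f"c-statistic = {c_statistic:.3f}, calibration r = {r:.3f}")
```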
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
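The contrast drawn in the paper, a pooled Pearson correlation versus a linear mixed-effects model with a subject-level random intercept, can be sketched on synthetic data. The variable names and the simulated structure below are assumptions standing in for a neurophysiological predictor and speech-in-noise recognition scores measured under several listening conditions.

```python
# Sketch: pooled Pearson correlation (treats repeated measures as independent)
# versus a linear mixed-effects model with a random intercept per subject.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_subj, n_cond = 20, 3
subj = np.repeat(np.arange(n_subj), n_cond)
cond = np.tile(np.arange(n_cond), n_subj)
subj_offset = rng.normal(0, 2.0, n_subj)[subj]          # between-subject baseline shifts
neural = rng.normal(0, 1.0, n_subj * n_cond) + 0.5 * cond
behavior = 1.5 * neural + subj_offset + rng.normal(0, 1.0, neural.size)
df = pd.DataFrame({"subj": subj, "cond": cond, "neural": neural, "behavior": behavior})

r, p = pearsonr(df["neural"], df["behavior"])            # ignores the repeated-measures structure
mixed = smf.mixedlm("behavior ~ neural + C(cond)", df, groups=df["subj"]).fit()

print(f"pooled Pearson r = {r:.3f} (p = {p:.3g})")
print(mixed.summary())
```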
Testing homogeneity in Weibull-regression models.
Bolfarine, Heleno; Valença, Dione M
2005-10-01
In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous with respect to given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model and, in the uncensored situation, a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.
Steady state RANS simulations of temperature fluctuations in single phase turbulent mixing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kickhofel, J.; Fokken, J.; Kapulla, R.
2012-07-01
Single phase turbulent mixing in nuclear power plant circuits where a strong temperature gradient is present is known to precipitate pipe failure due to thermal fatigue. Experiments in a square mixing channel offer the opportunity to study the phenomenon under simple and easily reproducible boundary conditions. Measurements of this kind have been performed extensively at the Paul Scherrer Institute in Switzerland with a high density of instrumentation in the Generic Mixing Experiment (GEMIX). As a fundamental study of mixing phenomena closely related to the thermal fatigue problem, the experimental results from GEMIX are valuable for the validation of CFD codes striving to accurately simulate both the temperature and velocity fields in single phase turbulent mixing. In the experiments two iso-kinetic streams meet at a shallow angle of 3 degrees and mix in a straight channel of square cross-section under various degrees of density, temperature, and viscosity stratification over a range of Reynolds numbers from 5×10^3 to 1×10^5. Conductivity measurements, using wire-mesh and wall sensors, as well as optical measurements, using particle image velocimetry, were conducted with high temporal and spatial resolutions (up to 2.5 kHz and 1 mm in the case of the wire-mesh sensor) in the mixing zone, downstream of a splitter plate. The present paper communicates the results of RANS modeling of selected GEMIX tests. Steady-state CFD calculations using a RANS turbulence model represent an inexpensive method for analyzing large and complex components in commercial nuclear reactors, such as the downcomer and reactor pressure vessel heads. Crucial to real-world applicability, however, is the ability to model turbulent heat fluctuations in the flow; the Turbulent Heat Flux Transport model developed by ANSYS CFX is capable, by implementation of a transport equation for turbulent heat fluxes, of readily modeling these values. Furthermore, the closure of the turbulent heat flux transport equation evokes a transport equation for the variance of the enthalpy. It is therefore possible to compare the modeled fluctuations of the liquid temperature directly with the scalar fluctuations recorded experimentally with the wire-mesh. Combined with a working Turbulent Heat Flux Transport model, complex mixing problems in large geometries could be better understood. We aim for the validation of Reynolds Stress based RANS simulations extended by the Turbulent Heat Flux Transport model by modeling the GEMIX experiments in detail. Numerical modeling has been performed using both BSL and SSG Reynolds Stress Models in a test matrix comprising experimental trials at the GEMIX facility. We expand on the turbulent mixing RANS CFD results of Manera (2009) in a few ways. In the GEMIX facility we introduce density stratification in the flow while removing the characteristic large scale vorticity encountered in T-junctions and therefore find better conditions to check the diffusive conditions in the model. Furthermore, we study the performance of the model in a very different, simpler scalar fluctuation spectrum. The paper discusses the performance of the model regarding the dissipation of the turbulent kinetic energy and dissipation of the enthalpy variance. A novel element is the analysis of cases with density stratification.
Computational Analyses of Pressurization in Cryogenic Tanks
NASA Technical Reports Server (NTRS)
Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chun P.; Field, Robert E.; Ryan, Harry
2010-01-01
A comprehensive numerical framework utilizing multi-element unstructured CFD and rigorous real fluid property routines has been developed to carry out analyses of propellant tank and delivery systems at NASA SSC. Traditionally, CFD modeling of pressurization and mixing in cryogenic tanks has been difficult primarily because the fluids in the tank co-exist in different sub-critical and supercritical states with largely varying properties that have to be accurately accounted for in order to predict the correct mixing and phase change between the ullage and the propellant. For example, during tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant including heat transfer and phase change effects and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. In our modeling framework, we incorporated two different approaches to real fluids modeling: (a) the first approach is based on the HBMS model developed by Hirschfelder, Buehler, McGee and Sutton and (b) the second approach is based on a cubic equation of state developed by Soave, Redlich and Kwong (SRK). Both approaches cover fluid properties and property variation spanning sub-critical gas and liquid states as well as the supercritical states. Both models were rigorously tested, and properties for common fluids such as oxygen, nitrogen, and hydrogen were compared against NIST data in both the sub-critical as well as supercritical regimes.
Highlights from High Energy Neutrino Experiments at CERN
NASA Astrophysics Data System (ADS)
Schlatter, W.-D.
2015-07-01
Experiments with high energy neutrino beams at CERN provided early quantitative tests of the Standard Model. This article describes results from studies of the nucleon quark structure and of the weak current, together with the precise measurement of the weak mixing angle. These results have established a new quality for tests of the electroweak model. In addition, the measurements of the nucleon structure functions in deep inelastic neutrino scattering allowed first quantitative tests of QCD.
Mixed-Effects Models for Count Data with Applications to Educational Research
ERIC Educational Resources Information Center
Shin, Jihyung
2012-01-01
This research is motivated by an analysis of reading research data. We are interested in modeling the test outcome of ability to fluently recode letters into sounds of kindergarten children aged between 5 and 7. The data showed excessive zero scores (more than 30% of children) on the test. In this dissertation, we carefully examine the models…
VCE early acoustic test results of General Electric's high-radius ratio coannular plug nozzle
NASA Technical Reports Server (NTRS)
Knott, P. R.; Brausch, J. F.; Bhutiani, P. K.; Majjigi, R. K.; Doyle, V. L.
1980-01-01
Results of variable cycle engine (VCE) early acoustic engine and model scale tests are presented. An extensive series of far-field acoustic, advanced acoustic, and laser velocimeter exhaust plume velocity measurements of inverted-velocity-and-temperature-profile, high-radius-ratio coannular plug nozzles on a YJ101 VCE static engine test vehicle is summarized. Select model scale simulated flight acoustic measurements for an unsuppressed and a mechanically suppressed coannular plug nozzle are also discussed. The engine acoustic nozzle tests verify previous model scale noise reduction measurements. The engine measurements show 4 to 6 PNdB aft quadrant jet noise reduction and up to 7 PNdB forward quadrant shock noise reduction relative to a fully mixed conical nozzle at the same specific thrust and mixed pressure ratio. The influences of outer nozzle radius ratio, inner stream velocity ratio, and area ratio are discussed. Also, laser velocimeter measurements of mean velocity and turbulent velocity of the YJ101 engine are illustrated. Select model scale static and simulated flight acoustic measurements are shown which corroborate that coannular suppression is maintained at forward speed.
ERIC Educational Resources Information Center
Kim, Jiseon
2010-01-01
Classification testing has been widely used to make categorical decisions by determining whether an examinee has a certain degree of ability required by established standards. As computer technologies have developed, classification testing has become more computerized. Several approaches have been proposed and investigated in the context of…
System equivalent model mixing
NASA Astrophysics Data System (ADS)
Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis
2018-05-01
This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency-based models of either numerical or experimental nature can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure, which emphasizes the practicality of the method.
Neutrino masses and mixing from S4 flavor twisting
NASA Astrophysics Data System (ADS)
Ishimori, Hajime; Shimizu, Yusuke; Tanimoto, Morimitsu; Watanabe, Atsushi
2011-02-01
We discuss a neutrino mass model based on the S4 discrete symmetry where the symmetry breaking is triggered by the boundary conditions of the bulk right-handed neutrino in the fifth spatial dimension. The three generations of the left-handed lepton doublets and the right-handed neutrinos are assigned to be the triplets of S4. The magnitudes of the lepton mixing angles, especially the reactor angle, are related to the neutrino mass patterns, and the model will be tested in future neutrino experiments, e.g., an early discovery of the reactor angle favors the normal hierarchy. For the inverted hierarchy, the lepton mixing is predicted to be almost the tribimaximal mixing. The size of the extra dimension has a connection to the possible mass spectrum; a small (large) volume corresponds to the normal (inverted) mass hierarchy.
THE ROLE OF THERMOHALINE MIXING IN INTERMEDIATE- AND LOW-METALLICITY GLOBULAR CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelou, George C.; Stancliffe, Richard J.; Church, Ross P.
It is now widely accepted that globular cluster red giant branch (RGB) stars owe their strange abundance patterns to a combination of pollution from progenitor stars and in situ extra mixing. In this hybrid theory a first generation of stars imprints abundance patterns into the gas from which a second generation forms. The hybrid theory suggests that extra mixing is operating in both populations and we use the variation of [C/Fe] with luminosity to examine how efficient this mixing is. We investigate the observed RGBs of M3, M13, M92, M15, and NGC 5466 as a means to test a theory of thermohaline mixing. The second-parameter pair M3 and M13 are of intermediate metallicity and our models are able to account for the evolution of carbon along the RGB in both clusters, although in order to fit the most carbon-depleted main-sequence stars in M13 we require a model whose initial [C/Fe] abundance leads to a carbon abundance lower than is observed. Furthermore, our results suggest that stars in M13 formed with some primary nitrogen (higher C+N+O than stars in M3). In the metal-poor regime only NGC 5466 can be tentatively explained by thermohaline mixing operating in multiple populations. We find thermohaline mixing unable to model the depletion of [C/Fe] with magnitude in M92 and M15. It appears as if extra mixing is occurring before the luminosity function bump in these clusters. To reconcile the data with the models would require first dredge-up to be deeper than found in extant models.
Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam
2017-01-01
The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images relative to plaster models for mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant results were obtained on data comparison between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
Using generalized additive (mixed) models to analyze single case designs.
Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J
2014-04-01
This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
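The article provides annotated R syntax; as a rough, hedged analogue in Python, a smooth session-by-session trend with a phase effect can be fit with the statsmodels GAM interface. The column names ('session', 'phase', 'y') and file name are hypothetical, and this simplified sketch omits the random effects and quasibinomial dispersion adjustment discussed above.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.gam.api import GLMGam, BSplines

    data = pd.read_csv("single_case_data.csv")            # hypothetical long-format data set

    # Smooth trend over sessions via B-splines; df and degree are arbitrary illustrative choices
    spline = BSplines(data[["session"]], df=[6], degree=[3])

    # Binomial GAM with a baseline/treatment phase effect; a quasibinomial correction
    # would rescale the standard errors by the estimated dispersion afterwards
    gam = GLMGam.from_formula("y ~ phase", data=data, smoother=spline,
                              family=sm.families.Binomial())
    res = gam.fit()
    print(res.summary())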
Schellenberg, Benjamin J I; Verner-Filion, Jérémie; Gaudreau, Patrick; Bailis, Daniel S; Lafrenière, Marc-André K; Vallerand, Robert J
2018-03-10
Passion research has focused extensively on the unique effects of both harmonious passion and obsessive passion (Vallerand, 2015). We adopted a quadripartite approach (Gaudreau & Thompson, 2010) to test whether physical and psychological well-being are distinctly related to subtypes of passion with varying within-person passion combinations: pure harmonious passion, pure obsessive passion, mixed passion, and non-passion. In four studies (total N = 3,122), we tested whether passion subtypes were differentially associated with self-reported general health (Study 1; N = 1,218 undergraduates), health symptoms in video gamers (Study 2; N = 269 video game players), global psychological well-being (Study 3; N = 1,192 undergraduates), and academic burnout (Study 4; N = 443 undergraduates) using latent moderated structural equation modeling. Pure harmonious passion was generally associated with more positive levels of physical health and psychological well-being compared to pure obsessive passion, mixed passion, and non-passion. In contrast, outcomes were more negative for pure obsessive passion compared to both mixed passion and non-passion subtypes. This research underscores the theoretical and empirical usefulness of a quadripartite approach for the study of passion. Overall, the results demonstrate the benefits of having harmonious passion, even when obsessive passion is also high (i.e., mixed passion), and highlight the costs associated with a pure obsessive passion. © 2018 Wiley Periodicals, Inc.
Characterization and Modeling of Atmospheric Flow Within and Above Plant Canopies
NASA Astrophysics Data System (ADS)
Souza Freire Grion, Livia
The turbulent flow within and above plant canopies is responsible for the exchange of momentum, heat, gases and particles between vegetation and the atmosphere. Turbulence is also responsible for the mixing of air inside the canopy, playing an important role in chemical and biophysical processes occurring in the plants' environment. In the last fifty years, research has significantly advanced the understanding of and ability to model the flow field within and above the canopy, but important issues remain unsolved. In this work, we focus on (i) the estimation of turbulent mixing timescales within the canopy from field data; and (ii) the development of new computationally efficient modeling approaches for the coupled canopy-atmosphere flow field. The turbulent mixing timescale represents how quickly turbulence creates a well-mixed environment within the canopy. When the mixing timescale is much smaller than the timescale of other relevant processes (e.g. chemical reactions, deposition), the system can be assumed to be well-mixed and detailed modeling of turbulence is not critical to predict the system evolution. Conversely, if the mixing timescale is comparable or larger than the other timescales, turbulence becomes a controlling factor for the concentration of the variables involved; hence, turbulence needs to be taken into account when studying and modeling such processes. In this work, we used a combination of ozone concentration and high-frequency velocity data measured within and above the canopy in the Amazon rainforest to characterize turbulent mixing. The eddy diffusivity parameter (used as a proxy for mixing efficiency) was applied in a simple theoretical model of one-dimensional diffusion, providing an estimate of turbulent mixing timescales as a function of height within the canopy and time-of-day. Results showed that, during the day, the Amazon rainforest is characterized by well-mixed conditions with mixing timescales smaller than thirty minutes in the upper-half of the canopy, and partially mixed conditions in the lower half of the canopy. During the night, most of the canopy (except for the upper 20%) is either partially or poorly mixed, resulting in mixing timescales of up to several hours. For the specific case of ozone, the mixing timescales observed during the day are much lower than the chemical and deposition timescales, whereas chemical processes and turbulence have comparable timescales during the night. In addition, the high day-to-day variability in mixing conditions and the fast increase in mixing during the morning transition period indicate that turbulence within the canopy needs to be properly investigated and modeled in many studies involving plant-atmosphere interactions. Motivated by the findings described above, this work proposes and tests a new approach for modeling canopy flows. Typically, vertical profiles of flow statistics are needed to represent canopy-atmosphere exchanges in chemical and biophysical processes happening within the canopy. Current single-column models provide only steady-state (equilibrium) profiles, and rely on closure assumptions that do not represent the dominant non-local turbulent fluxes present in canopy flows. We overcome these issues by adapting the one-dimensional turbulent (ODT) model to represent atmospheric flows from the ground up to the top of the atmospheric boundary layer (ABL). 
The ODT model numerically resolves the one-dimensional diffusion equation along a vertical line (representing a horizontally homogeneous ABL column), and the presence of three-dimensional turbulence is added through the effect of stochastic eddies. Simulations of ABL without canopy were performed for different atmospheric stabilities and a diurnal cycle, to test the capabilities of this modeling approach in representing unsteady flows with strong non-local transport. In addition, four different types of canopies were simulated, one of them including the transport of scalar with a point source located inside the canopy. The comparison of all simulations with theory and field data provided satisfactory results. The main advantages of using ODT compared to typical 1D canopy-flow models are the ability to represent the coupled canopy-ABL flow with one single modeling approach, the presence of non-local turbulent fluxes, the ability to simulate transient conditions, the straightforward representation of multiple scalar fields, and the presence of only one adjustable parameter (as opposed to the several adjustable constants and boundary conditions needed for other modeling approaches). The results obtained with ODT as a stand-alone model motivated its use as a surface parameterization for Large-Eddy Simulation (LES). In this two-way coupling between LES and ODT, the former is used to simulate the ABL in a case where a canopy is present but cannot be resolved by the LES (i.e., the LES first vertical grid point is above the canopy). ODT is used to represent the flow field between the ground and the first LES grid point, including the region within and just above the canopy. In this work, we tested the ODT-LES model for three different types of canopies and obtained promising results. Although more work is needed in order to improve first and second-order statistics within the canopy (i.e. in the ODT domain), the results obtained for the flow statistics in the LES domain and for the third order statistics in the ODT domain demonstrate that the ODT-LES model is capable of capturing some important features of the canopy-atmosphere interaction. This new surface superparameterization approach using ODT provides a new alternative for simulations that require complex interactions between the flow field and near-surface processes (e.g. sand and snow drift, waves over water surfaces) and can potentially be extended to other large-scale models, such as mesoscale and global circulation models.
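As a hedged illustration of the diffusivity-based timescale estimate described above (the numbers here are assumed round values, not those reported in the dissertation), a scale analysis of one-dimensional diffusion gives \tau_{mix} \sim (\Delta z)^2 / K_z, where \Delta z is the depth of the canopy layer considered and K_z the eddy diffusivity. For example, \Delta z = 15 m with K_z = 0.2 m^2/s yields \tau_{mix} \approx 1.1 x 10^3 s (roughly 20 minutes), of the order of the well-mixed daytime values quoted for the upper canopy, whereas K_z = 0.01 m^2/s yields about 6 hours, comparable to the partially or poorly mixed nocturnal conditions.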
Experimental Applications of Automatic Test Markup Language (ATML)
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris
2012-01-01
The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.
Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu
2016-01-01
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts: a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the model over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. Now, a dataset with half a million individuals and half a million markers can be analyzed within three days. PMID:26828793
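A heavily simplified, runnable sketch of the iterative fixed-effect/covariate idea summarized above, written in Python with statsmodels; the real FarmCPU replaces the naive covariate-selection step below with a random-effect model that chooses associated markers by optimizing a marker-defined kinship, so this is a toy illustration only, not the released implementation.

    import numpy as np
    import statsmodels.api as sm

    def marker_pvalues(y, X, selected):
        """OLS p-value for each marker, conditioning on the currently selected covariate markers."""
        n, m = X.shape
        covars = X[:, selected]
        pvals = np.ones(m)
        for j in range(m):
            if j in selected:
                continue                      # covariate markers are handled separately in FarmCPU
            D = sm.add_constant(np.column_stack([covars, X[:, j]]))
            pvals[j] = sm.OLS(y, D).fit().pvalues[-1]
        return pvals

    def toy_iteration(y, X, n_iter=5, max_covars=10, alpha=1e-5):
        """Alternate between testing markers (FEM-like step) and updating the covariate set."""
        selected, pvals = [], None
        for _ in range(n_iter):
            pvals = marker_pvalues(y, X, selected)
            candidates = [j for j in np.argsort(pvals)[:max_covars] if pvals[j] < alpha]
            if set(candidates) == set(selected):   # covariate set is stable: stop iterating
                break
            selected = candidates
        return pvals, selected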
Microstructural effects on constitutive and fatigue fracture behavior of Tin-Silver-Copper solder
NASA Astrophysics Data System (ADS)
Tucker, Jonathon P.
As microelectronic package construction becomes more diverse and complex, the need for accurate, geometry-independent material constitutive and failure models increases. Evaluations of packages based on accelerated environmental tests (such as accelerated thermal cycling or power cycling) only provide package-dependent reliability information. In addition, extrapolations of such test data to life predictions under field conditions are often empirical. Besides geometry, accelerated environmental test data must account for microstructural factors such as alloy composition or isothermal aging condition, resulting in expensive experimental variation. In this work, displacement-controlled, creep, and fatigue lap shear tests are conducted on specially designed SnAgCu test specimens with microstructures representative to those found in commercial microelectronic packages. The data are used to develop constitutive and fatigue fracture material models capable of describing deformation and fracture behavior for the relevant temperature and strain rate ranges. Furthermore, insight is provided into the microstructural variation of solder joints and the subsequent effect on material behavior. These models are appropriate for application to packages of any geometrical construction. The first focus of the thesis is on Pb-mixed SnAgCu solder alloys. During the transition from Pb-containing solders to Pb-free solders, joints composed of a mixture of SnPb and SnAgCu often result from either mixed assemblies or rework. Three alloys of 1, 5 and 20 weight percent Pb were selected so as to represent reasonable ranges of Pb contamination expected from different 63Sn37Pb components mixed with Sn3.0Ag0.5Cu. Displacement-controlled (constant strain rate) and creep tests were performed at temperatures of 25°C, 75°C, and 125°C using a double lap shear test setup that ensures a nearly homogeneous state of plastic strain at the joint interface. Rate-dependent constitutive models for Pb-contaminated SnAgCu solder alloys ranging from the traditional time-hardening creep model to the viscoplastic Anand model are described. The second focus of the thesis is on fatigue damage accumulation in SnAgCu solder alloys. While, typical fatigue fracture models are empirical, recently a non-empirical model termed Maximum Entropy Fracture Model (MEFM) was proposed. MEFM is a thermodynamically consistent and information theory inspired damage accumulation theory for ductile solids. This model has been validated recently for Sn3.8Ag0.7Cu solder alloy, and uses a single damage accumulation parameter to relate the probability of fracture to accumulated entropic dissipation. Isothermal cycling fatigue tests on Sn3.0Ag0.5Cu and mixed SnPb/Sn3.0Ag0.5Cu solder alloys at varying strain rates and temperatures are conducted using a custom-built microscale mechanical tester capable of submicron displacement resolution. MEFM is applied here in conjunction with the Anand viscoplasticity model to predict the softening occurring over successive cycles as a result of damage accumulation. The damage accumulation parameters for Sn3.0Ag0.5Cu in different aged states are related to a microstructural parameter which quantitatively describes the state of coarsening. In addition, damage accumulation parameters for the three mixed solder alloys are reported. This approach allows for a non-empirical prediction of both constitutive and fracture behavior of packages of different geometries and different microstructural states under thermo-mechanical fatigue. 
Approaches to solder joint reliability predictions from materials science and mechanics perspectives differ dramatically. Materials science methods identify key failure mechanisms, but most models cannot predict failure. In contrast, mechanics approaches often provide estimates of joint lifetime, but fail to provide insight into microstructural influences. This work attempts to connect the two fields by relating constitutive behavior and fatigue fracture models for different alloys and aging conditions to one or more microstructural parameters.
HYDRAULICS AND MIXING EVALUATIONS FOR NT-21/41 TANKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.; Barnes, O.
2014-11-17
The hydraulic results demonstrate that a pump head pressure of 20 psi recirculates about 5.6 liters/min through the existing 0.131-inch orifice when the valve connected to NT-41 is closed. When the valve to NT-41 is open, the solution flowrates to the HB-Line tanks NT-21 and NT-41 are found to be about 0.5 lpm and 5.2 lpm, respectively. The modeling calculations for the mixing operations of miscible fluids contained in the HB-Line tank NT-21 were performed by taking a three-dimensional Computational Fluid Dynamics (CFD) approach. The CFD modeling results were benchmarked against the literature results and the previous SRNL test results to validate the model. Final performance calculations were performed for the nominal case by using the validated model to quantify the mixing time for the HB-Line tank. The results demonstrate that when a pump recirculates a solution volume of 5.7 liters every minute out of the 72-liter tank contents containing two acid solutions of 2.7 M and 0 M concentrations (i.e., water), a minimum mixing time of 1.5 hours is adequate to get the tank contents adequately mixed. In addition, the sensitivity results for tank contents of 8 M existing solution and 1.5 M incoming species show that about 2 hours are required to get the solutions mixed.
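A back-of-the-envelope check of the reported mixing time, using only the quantities quoted above; the turnover count (recirculated volume divided by tank volume) is a common rule-of-thumb measure of blending, not a figure taken from the report.

    tank_volume_l = 72.0         # liters of solution in the tank
    recirc_rate_lpm = 5.7        # liters per minute recirculated by the pump
    mixing_time_min = 1.5 * 60   # the 1.5-hour minimum mixing time, in minutes

    turnovers = recirc_rate_lpm * mixing_time_min / tank_volume_l
    print(f"Tank turnovers during the 1.5 h mixing period: {turnovers:.1f}")   # about 7.1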
Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei
2017-11-01
A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation had often been overlooked in previous studies in the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.
Crown structure and growth efficiency of red spruce in uneven-aged, mixed-species stands in Maine
Douglas A. Maguire; John C. Brissette; Lianhong. Gu
1998-01-01
Several hypotheses about the relationships among individual tree growth, tree leaf area, and relative tree size or position were tested with red spruce (Picea rubens Sarg.) growing in uneven-aged, mixed-species forests of south-central Maine, U.S.A. Based on data from 65 sample trees, predictive models were developed to (i)...
Mushquash, Aislin R; Sherry, Simon B
2013-04-01
The perfectionism model of binge eating is an integrative model explaining why perfectionism is tied to binge eating. This study extended and tested this emerging model by proposing daughters' socially prescribed perfectionism (i.e., perceiving one's mother is harshly demanding perfection of oneself) and mothers' psychological control (i.e., a negative parenting style involving control and demandingness) contribute indirectly to daughters' binge eating by generating situations or experiences that trigger binge eating. These binge triggers include discrepancies (i.e., viewing oneself as falling short of one's mother's expectations), depressive affect (i.e., feeling miserable and sad), and dietary restraint (i.e., behaviors aimed at reduced caloric intake). This model was tested in 218 mother-daughter dyads studied using a mixed longitudinal and daily diary design. Daughters were undergraduate students. Results largely supported hypotheses, with bootstrapped tests of mediation suggesting daughters' socially prescribed perfectionism and mothers' psychological control contribute to binge eating through binge triggers. For undergraduate women who believe their mothers rigidly require them to be perfect and whose mothers are demanding and controlling, binge eating may provide a means of coping with or escaping from an unhealthy, unsatisfying mother-daughter relationship. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sulcus reproduction with elastomeric impression materials: a new in vitro testing method.
Finger, Werner J; Kurokawa, Rie; Takahashi, Hidekazu; Komatsu, Masashi
2008-12-01
The aim of this study was to investigate the depth reproduction of differently wide sulci with elastomeric impression materials by single- and double-mix techniques using a tooth and sulcus model, simulating clinical conditions. Impressions with one vinyl polysiloxane (VPS; FLE), two polyethers (PE; IMP and P2), and one hybrid VPS/PE elastomer (FUS) were taken from a truncated steel cone with a circumferential 2 mm deep sulcus, 50, 100 or 200 microm wide. The "root surface" was in steel and the "periodontal tissue" in reversible hydrocolloid. Single-mix impressions were taken with light-body (L) or monophase (M) pastes, double-mix impressions with L as syringe and M or heavy-body (H) as tray materials (n=8). Sulcus reproduction was determined by 3D laser topography of impressions at eight locations, 45 degrees apart. Statistical data analysis was performed by ANOVA and multiple comparison tests (p<0.05). For 200 microm wide sulci, significant differences were found between impression materials only: FLE=IMP>FUS=P2. At 50 and 100 microm width, significant differences were found between materials (IMP>FUS=FLE>P2) and techniques (L+H=L+M>M>L). The sulcus model is considered useful for screening evaluation of elastomeric impression materials' ability to reproduce narrow sulci. All tested materials and techniques reproduced 200 microm wide sulci to almost nominal depth. Irrespective of the impression technique used, IMP showed the best penetration ability in 50 and 100 microm sulci. Double-mix techniques are more suitable for reproducing narrow sulci than single-mix techniques.
NASA Astrophysics Data System (ADS)
Phanikumar, Mantha S.; McGuire, Jennifer T.
2010-08-01
Push-pull tests are a popular technique to investigate various aquifer properties and microbial reaction kinetics in situ. Most previous studies have interpreted push-pull test data using approximate analytical solutions to estimate (generally first-order) reaction rate coefficients. Though useful, these analytical solutions may not be able to describe important complexities in rate data. This paper reports the development of a multi-species, radial coordinate numerical model (PPTEST) that includes the effects of sorption, reaction lag time and arbitrary reaction order kinetics to estimate rates in the presence of mixing interfaces such as those created between injected "push" water and native aquifer water. The model has the ability to describe an arbitrary number of species and user-defined reaction rate expressions including Monod/Michaelis-Menten kinetics. The FORTRAN code uses a finite-difference numerical model based on the advection-dispersion-reaction equation and was developed to describe the radial flow and transport during a push-pull test. The accuracy of the numerical solutions was assessed by comparing numerical results with analytical solutions and field data available in the literature. The model described the observed breakthrough data for tracers (chloride and iodide-131) and reactive components (sulfate and strontium-85) well and was found to be useful for testing hypotheses related to the complex set of processes operating near mixing interfaces.
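For orientation, a generic radial-coordinate advection-dispersion-reaction equation of the kind solved by such a finite-difference model can be written as (a textbook form with the sorption and lag-time terms omitted, not necessarily the exact formulation coded in PPTEST)

    \frac{\partial C}{\partial t} = \frac{1}{r}\frac{\partial}{\partial r}\left( r\,\alpha_L\,|v(r)|\,\frac{\partial C}{\partial r} \right) - v(r)\,\frac{\partial C}{\partial r} - R(C), \qquad v(r) = \frac{Q}{2\pi r b \theta},

where C is the solute concentration, \alpha_L the longitudinal dispersivity, R(C) a user-defined (e.g., Monod/Michaelis-Menten) reaction term, Q the injection or extraction rate, b the aquifer thickness, and \theta the effective porosity.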
Introducing "Emotioncy" as a Potential Source of Test Bias: A Mixed Rasch Modeling Study
ERIC Educational Resources Information Center
Pishghadam, Reza; Baghaei, Purya; Seyednozadi, Zahra
2017-01-01
This article attempts to present emotioncy as a potential source of test bias to inform the analysis of test item performance. Emotioncy is defined as a hierarchy, ranging from "exvolvement" (auditory, visual, and kinesthetic) to "involvement" (inner and arch), to emphasize the emotions evoked by the senses. This study…
Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena
2013-12-01
There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™, 3 M ESPE and the four impression techniques used were (1) Monophase impression technique using medium body impression material. (2) One step double mix impression technique using heavy body and light body impression materials simultaneously. (3) Two step double mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material). (4) Matrix impression using a matrix of polyether occlusal registration material. The matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance in difference of the distances between the master model and the stone models. One way analysis of variance (ANOVA) was used for multiple group comparison followed by the Bonferroni's test for pair wise comparison. The accuracy was tested at α = 0.05. In general, polyether impression material produced stone dies that were smaller except for the dies produced from the one step double mix impression technique. The ANOVA revealed a highly significant difference for each dimension measured (except for the inter-abutment distance between the first and the second die) between any two groups of stone models obtained from the four impression techniques. Pair wise comparison for each measurement did not reveal any significant difference (except for the faciolingual distance of the third die) between the casts produced using the two step double mix impression technique and the matrix impression system. The two step double mix impression technique produced stone dies that showed the least dimensional variation. During fabrication of a cast restoration, laboratory procedures should not only compensate for the cement thickness, but also for the increase or decrease in die dimensions.
Chatterjee, Manavi; Verma, Pinki; Palit, Gautam
2010-03-01
The present study was undertaken to compare medicinal plants against mixed anxiety-depressive disorder (MAD) to evaluate their potency in combating MAD. Previous studies from our lab have shown that Bacopa monniera (BM) and Panax quinquefolium (PQ) have significant adaptogenic properties. Hence, we have further confirmed their activity in stress-related disorders such as anxiety and depression in a rodent model and assessed their efficacy. In our experimental protocol, gross behaviour was observed with a Digiscan animal activity monitor. Anxiety was studied through the light-dark test, elevated plus maze test and holeboard test. Depression experiments were conducted following the tail suspension test and forced swim test. Further, the rotarod test was also used to study any defects in motor in-coordination in mice. It was observed that BM at the dose of 80 mg/kg (po) and PQ at 100 mg/kg (po) showed significant anti-anxiety as well as anti-depressant activity without causing motor in-coordination in mice. Hence, these extracts can be used as potent therapeutic agents in treating mixed anxiety-depressive disorder (MAD).
Rime-, mixed- and glaze-ice evaluations of three scaling laws
NASA Technical Reports Server (NTRS)
Anderson, David N.
1994-01-01
This report presents the results of tests at NASA Lewis to evaluate three icing scaling relationships or 'laws' for an unheated model. The laws were LWC x time = constant, one proposed by a Swedish-Russian group and one used at ONERA in France. Icing tests were performed in the NASA Lewis Icing Research Tunnel (IRT) with cylinders ranging from 2.5- to 15.2-cm diameter. Reference conditions were chosen to provide rime, mixed and glaze ice. Scaled conditions were tested for several scenarios of size and velocity scaling, and the resulting ice shapes compared. For rime-ice conditions, all three of the scaling laws provided scaled ice shapes which closely matched reference ice shapes. For mixed ice and for glaze ice none of the scaling laws produced consistently good simulation of the reference ice shapes. Explanations for the observed results are proposed, and scaling issues requiring further study are identified.
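As a worked illustration of the first scaling law quoted above, holding LWC x time constant implies t_scale = t_ref (LWC_ref / LWC_scale): halving the liquid water content in a scaled test would require doubling the spray time, all else being equal. The size and velocity scaling scenarios examined under the other two laws involve additional parameters and are not reproduced here.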
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.
Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano
2017-11-08
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. Copyright © 2017 the authors.
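A minimal runnable sketch of the circuit idea described above: random feedforward weights produce some mixed selectivity, and a simple Hebbian update on those weights increases it. The layer sizes, gain function, learning rate, and selectivity index below are arbitrary illustrative choices, not the parameters or measures fitted in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_pfc, eta = 40, 200, 0.05

    # Four task conditions: all combinations of two binary task variables (e.g., cue x context)
    conditions = np.array([[a, b] for a in (0, 1) for b in (0, 1)], dtype=float)
    patterns = rng.normal(size=(2, n_in))              # input-population vector for each variable
    inputs = conditions @ patterns                     # condition-specific input vectors (4 x n_in)

    W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_pfc, n_in))   # random feedforward weights

    def responses(W):
        return np.tanh(inputs @ W.T)                   # model PFC responses to the four conditions

    # Hebbian update: strengthen weights in proportion to pre/post coactivation, then renormalize
    for _ in range(50):
        r = responses(W)
        W += eta * (r.T @ inputs) / len(inputs)
        W /= np.linalg.norm(W, axis=1, keepdims=True)

    # Crude index of nonlinear mixed selectivity: the interaction contrast across the four conditions
    r = responses(W)
    interaction = r[0] - r[1] - r[2] + r[3]
    print("variance of the interaction contrast across cells:", interaction.var())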
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation
NASA Technical Reports Server (NTRS)
Holt, James B.; Ruf, Joe
1999-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.
Interpretable inference on the mixed effect model with the Box-Cox transformation.
Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M
2017-07-10
We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of model misspecification. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled the type I error of the statistical test for the model median difference in almost all the situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
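For reference, the Box-Cox transformation underlying the model discussed above is y^{(\lambda)} = (y^{\lambda} - 1)/\lambda for \lambda \neq 0 and y^{(\lambda)} = \log y for \lambda = 0. Because the transformation is monotone, medians are preserved under back-transformation; since the transformed response is modeled as normally distributed, its fitted mean equals its median, and applying the inverse transformation to that fitted value gives the model median on the original scale. This is what makes the between-group difference in model medians an interpretable treatment-effect measure.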
ERIC Educational Resources Information Center
Artun, Huseyin; Costu, Bayram
2013-01-01
The aim of this study was to explore a group of prospective primary teachers' conceptual understanding of diffusion and osmosis as they implemented a 5E constructivist model and related materials in a science methods course. Fifty prospective primary teachers' ideas were elicited using a pre- and post-test and delayed post-test survey consisting…
ERIC Educational Resources Information Center
Elaldi, Senel
2016-01-01
This study aimed to determine the effect of mastery learning model supported with reflective thinking activities on the fifth grade medical students' academic achievement. Mixed methods approach was applied in two samples (n = 64 and n = 6). Quantitative part of the study was based on a pre-test-post-test control group design with an experiment…
TEMPEST code modifications and testing for erosion-resisting sludge simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onishi, Y.; Trent, D.S.
The TEMPEST computer code has been used to address many waste retrieval operational and safety questions regarding waste mobilization, mixing, and gas retention. Because the amount of sludge retrieved from the tank is directly related to the sludge yield strength and the shear stress acting upon it, it is important to incorporate the sludge yield strength into simulations of erosion-resisting tank waste retrieval operations. This report describes current efforts to modify the TEMPEST code to simulate pump jet mixing of erosion-resisting tank wastes and the models used to test for erosion of waste sludge with yield strength. Test results for solid deposition and diluent/slurry jet injection into sludge layers in simplified tank conditions show that the modified TEMPEST code has a basic ability to simulate both the mobility and immobility of sludges with yield strength. Further testing, modification, calibration, and verification of the sludge mobilization/immobilization model are planned using erosion data as they apply to waste tank sludges.
Yu, Ping; Pan, Yuesong; Wang, Yongjun; Wang, Xianwei; Liu, Liping; Ji, Ruijun; Meng, Xia; Jing, Jing; Tong, Xu; Guo, Li; Wang, Yilong
2016-01-01
Background and Purpose A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. Methods The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to Model A but includes only the consciousness component of the NIHSS score. Both model A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by c-statistic. Calibration was assessed using Pearson’s correlation coefficient. Results The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79–0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81–0.84). Excellent calibration was reported in the two models with Pearson’s correlation coefficient (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). Conclusions The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke. PMID:27846282
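A hedged sketch of how the two reported metrics can be computed in Python with scikit-learn and SciPy; the file and column names are hypothetical, and grouping patients into risk deciles for the calibration correlation is an assumption about how that correlation was formed, not a detail taken from the study.

    import pandas as pd
    from sklearn.metrics import roc_auc_score
    from scipy.stats import pearsonr

    df = pd.read_csv("validation_cohort.csv")      # one row per patient (hypothetical)

    # Discrimination: for a binary outcome, the c-statistic equals the area under the ROC curve
    c_stat = roc_auc_score(df["died_30d"], df["pred_mortality_model_a"])

    # Calibration: correlate observed and predicted 30-day mortality across risk deciles
    df["decile"] = pd.qcut(df["pred_mortality_model_a"], 10, labels=False)
    grouped = df.groupby("decile").agg(observed=("died_30d", "mean"),
                                       predicted=("pred_mortality_model_a", "mean"))
    r, p = pearsonr(grouped["observed"], grouped["predicted"])
    print(f"c-statistic = {c_stat:.2f}, calibration r = {r:.3f} (p = {p:.3g})")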
Spray Bar Zero-Gravity Vent System for On-Orbit Liquid Hydrogen Storage
NASA Technical Reports Server (NTRS)
Hastings, L. J.; Flachbart, R. H.; Martin, J. J.; Hedayat, A.; Fazah, M.; Lak, T.; Nguyen, H.; Bailey, J. W.
2003-01-01
During zero-gravity orbital cryogenic propulsion operations, a thermodynamic vent system (TVS) concept is expected to maintain tank pressure control without propellant resettling. In this case, a longitudinal spray bar mixer system, coupled with a Joule-Thomson (J-T) valve and heat exchanger, was evaluated in a series of TVS tests using the 18 cu m multipurpose hydrogen test bed. Tests performed at fill levels of 90, 50, and 25 percent, coupled with tank heat leaks of about 20 and 50 W, successfully demonstrated tank pressure control within a 7-kPa band. Based on limited testing, the presence of helium constrained the energy exchange between the gaseous and liquid hydrogen (LH2) during the mixing cycles. A transient analytical model, formulated to characterize TVS performance, was used to correlate the test data. During self-pressurization cycles following tank lockup, the model predicted faster pressure rise rates than were measured; however, once the system entered the cyclic self-pressurization/mixing/venting operational mode, the modeled and measured data were quite similar. During a special test at the 25-percent fill level, the J-T valve was allowed to remain open and successfully reduced the bulk LH2 saturation pressure from 133 to 70 kPa in 188 min.
Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses
Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah
2015-01-01
Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481
NASA Technical Reports Server (NTRS)
Yin, Q.-Z.; Sanborn, M. E.; Goodrich, C. A.; Zolensky, M.; Fioretti, A. M.; Shaddad, M.; Kohl, I. E.; Young, E. D.
2018-01-01
There is an increasing number of Cr-O-Ti isotope studies showing that solar system materials are divided into two main populations, one carbonaceous chondrite (CC)-like and the other non-carbonaceous (NCC)-like, with minimal mixing between them attributed to a gap opened in the protoplanetary disk due to Jupiter's formation. The Grand Tack model suggests that there should be a particular time in the disk history when this gap is breached, ensuring subsequent large-scale mixing between S- and C-type asteroids (inner solar system and outer solar system materials), an idea supported by our recent work on chondrule Δ17O-ε54Cr isotope systematics.
Experimental and computational fluid dynamics studies of mixing of complex oral health products
NASA Astrophysics Data System (ADS)
Cortada-Garcia, Marti; Migliozzi, Simona; Weheliye, Weheliye Hashi; Dore, Valentina; Mazzei, Luca; Angeli, Panagiota; ThAMes Multiphase Team
2017-11-01
Highly viscous non-Newtonian fluids are largely used in the manufacturing of specialized oral care products. Mixing often takes place in mechanically stirred vessels where the flow fields and mixing times depend on the geometric configuration and the fluid physical properties. In this research, we study the mixing performance of complex non-Newtonian fluids using Computational Fluid Dynamics models and validate them against experimental laser-based optical techniques. To this aim, we developed a scaled-down version of an industrial mixer. As test fluids, we used mixtures of glycerol and a Carbomer gel. The viscosities of the mixtures against shear rate at different temperatures and phase ratios were measured and found to be well described by the Carreau model. The numerical results were compared against experimental measurements of velocity fields from Particle Image Velocimetry (PIV) and concentration profiles from Planar Laser Induced Fluorescence (PLIF).
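The Carreau model mentioned above relates the apparent viscosity to the shear rate as \mu(\dot{\gamma}) = \mu_\infty + (\mu_0 - \mu_\infty)\,[1 + (\lambda\dot{\gamma})^2]^{(n-1)/2}, with zero-shear viscosity \mu_0, infinite-shear viscosity \mu_\infty, relaxation time \lambda, and power-law index n; the temperature and phase-ratio dependence reported above enters through the fitted values of these parameters.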
NASA Astrophysics Data System (ADS)
Stöckl, Stefan; Rotach, Mathias W.; Kljun, Natascha
2018-01-01
We discuss the results of Gibson and Sailor (Boundary-Layer Meteorol 145:399-406, 2012) who suggest several corrections to the mathematical formulation of the Lagrangian particle dispersion model of Rotach et al. (Q J R Meteorol Soc 122:367-389, 1996). While most of the suggested corrections had already been implemented in the 1990s, one suggested correction raises a valid point, but results in a violation of the well-mixed criterion. Here we improve their idea and test the impact on model results using a well-mixed test and a comparison with wind-tunnel experimental data. The new approach results in similar dispersion patterns as the original approach, while the approach suggested by Gibson and Sailor leads to erroneously reduced concentrations near the ground in convective and especially forced convective conditions.
A novel iterative mixed model to remap three complex orthopedic traits in dogs
Huang, Meng; Hayward, Jessica J.; Corey, Elizabeth; Garrison, Susan J.; Wagner, Gabriela R.; Krotscheck, Ursula; Hayashi, Kei; Schweitzer, Peter A.; Lust, George; Boyko, Adam R.; Todhunter, Rory J.
2017-01-01
Hip dysplasia (HD), elbow dysplasia (ED), and rupture of the cranial (anterior) cruciate ligament (RCCL) are the most common complex orthopedic traits of dogs and all result in debilitating osteoarthritis. We reanalyzed previously reported data: the Norberg angle (a quantitative measure of HD) in 921 dogs, ED in 113 cases and 633 controls, and RCCL in 271 cases and 399 controls and their genotypes at ~185,000 single nucleotide polymorphisms. A novel fixed and random model with a circulating probability unification (FarmCPU) function, with marker-based principal components and a kinship matrix to correct for population stratification, was used. A Bonferroni correction at p<0.01 resulted in a P< 6.96 ×10−8. Six loci were identified; three for HD and three for RCCL. An associated locus at CFA28:34,369,342 for HD was described previously in the same dogs using a conventional mixed model. No loci were identified for RCCL in the previous report but the two loci for ED in the previous report did not reach genome-wide significance using the FarmCPU model. These results were supported by simulation which demonstrated that the FarmCPU held no power advantage over the linear mixed model for the ED sample but provided additional power for the HD and RCCL samples. Candidate genes for HD and RCCL are discussed. When using FarmCPU software, we recommend a resampling test, that a positive control be used to determine the optimum pseudo quantitative trait nucleotide-based covariate structure of the model, and a negative control be used consisting of permutation testing and the identical resampling test as for the non-permuted phenotypes. PMID:28614352
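The genome-wide threshold quoted above is a Bonferroni-style correction of p < 0.01 across the tested markers. A trivial sketch of that calculation follows; the effective marker count used to obtain P < 6.96 × 10−8 is not stated above, so the number below is an assumption for illustration.

```python
# Sketch of a Bonferroni genome-wide threshold; the marker count is assumed.
alpha = 0.01
n_markers = 143_000  # illustrative effective number of tests, not from the paper
threshold = alpha / n_markers
print(f"genome-wide significance threshold: {threshold:.2e}")
```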
Simulated Nitrogen Cycling Response to Elevated CO2 in Pinus taeda and Mixed Deciduous Forests
D.W. Johnson
1999-01-01
Interactions between elevated CO2 and N cycling were explored with a nutrient cycling model (NuCM, Johnson et al. 1993, 1995) for a Pinus taeda L. site at Duke University, North Carolina, and a mixed deciduous site at Walker Branch, Tennessee. The simulations tested whether N limitation would prevent growth increases in response to elevated CO...
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.
2017-12-01
The efficiency of many hydrogeological applications such as reactive transport and contaminant remediation depends strongly on the macroscopic mixing occurring in the aquifer. For remediation activities, it is essential to enhance and control mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for the mixing process is not well studied, partly because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations across many subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors. As a result, they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need for computationally efficient models that accurately predict the quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct such models is reduced-order modeling using machine learning, which can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables; what differs is how they are constructed. Here, we present a physics-informed machine-learning framework to construct ROMs from high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, SVMs are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield, and evaluate the dependence of the scaling-law parameters on model inputs using cluster analysis. We demonstrate application of the developed method to model analyses of reactive transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable to analyses of alternative site remediation scenarios.
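A hedged sketch of the described two-step workflow follows: rank model inputs with random forests, the F-test, and mutual information, then fit an SVM-based reduced-order model for a mixing quantity of interest. The data, input names, and target are synthetic placeholders, not the LANL chromium-site inputs or QoIs.

```python
# Hedged sketch of the feature-ranking + SVM reduced-order-model workflow.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import f_regression, mutual_info_regression
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 4))  # e.g. pumping rate, heterogeneity, anisotropy, ... (toy)
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)  # degree of mixing (toy)

# Step 1: three measures of input importance
rf_importance = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y).feature_importances_
f_scores, _ = f_regression(X, y)
mi_scores = mutual_info_regression(X, y, random_state=0)

# Step 2: SVM-based reduced-order model on the ranked inputs
rom = SVR(kernel="rbf", C=10.0).fit(X, y)
print(rf_importance, f_scores, mi_scores, rom.score(X, y))
```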
Impact of Antarctic mixed-phase clouds on climate.
Lawson, R Paul; Gettelman, Andrew
2014-12-23
Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. We modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 Wm−2, and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. These sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than −20 °C.
Impact of Antarctic mixed-phase clouds on climate
Lawson, R. Paul; Gettelman, Andrew
2014-01-01
Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. We modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 Wm−2, and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. These sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than −20 °C. PMID:25489069
ERIC Educational Resources Information Center
Birnbaum, Michael H.
2007-01-01
Four experiments with 1391 participants compared descriptive models of risky decision making. The first replicated and extended evidence refuting cumulative prospect theory (CPT) as an explanation of Allais paradoxes. The second and third experiments used a new design to unconfound tests of upper and lower coalescing, which allows tests of…
Examination of Test and Item Statistics from Visual and Verbal Mathematics Questions
ERIC Educational Resources Information Center
Alpayar, Cagla; Gulleroglu, H. Deniz
2017-01-01
The aim of this research is to determine whether students' test performance and approaches to test questions change based on the type of mathematics questions (visual or verbal) administered to them. This research is based on a mixed-design model. The quantitative data are gathered from 297 seventh grade students, attending seven different middle…
Ouma, Paul O; Agutu, Nathan O; Snow, Robert W; Noor, Abdisalan M
2017-09-18
Precise quantification of health service utilisation is important for the estimation of disease burden and allocation of health resources. Current approaches to mapping health facility utilisation rely on spatial accessibility alone as the predictor. However, other spatially varying social, demographic and economic factors may affect the use of health services. The exclusion of these factors can lead to inaccurate estimation of health facility utilisation. Here, we compare the accuracy of a univariate spatial model, developed only from estimated travel time, to a multivariate model that also includes relevant social, demographic and economic factors. A theoretical surface of travel time to the nearest public health facility was developed, and these travel times were assigned to each child reported to have had fever in the Kenya Demographic and Health Survey of 2014 (KDHS 2014). The relationship of child treatment seeking for fever with travel time and with household and individual factors from the KDHS 2014 was determined using multilevel mixed modelling. The Bayesian information criterion (BIC) and likelihood ratio tests (LRT) were used to measure how the selected factors improve parsimony and goodness of fit of the travel-time model. Using the mixed model, a univariate spatial model of health facility utilisation was fitted with travel time as the predictor. The mixed model was also used to compute a multivariate spatial model of utilisation, using travel time and modelled surfaces of selected household and individual factors as predictors. The univariate and multivariate spatial models were then compared using the area under the receiver operating characteristic curve (AUC) and a percent correct prediction (PCP) test. The best-fitting multivariate model had travel time, household wealth index and number of children in the household as predictors. These factors reduced the BIC of the travel-time model from 4008 to 2959, a change confirmed by the LRT. Although the two modelled probability surfaces were highly correlated (adjusted R2 = 88%), the multivariate model had a better AUC than the univariate model (0.83 versus 0.73) and a better PCP (0.61 versus 0.45). Our study shows that a model using travel time as well as household and individual-level socio-demographic factors results in a more accurate estimation of the use of health facilities for the treatment of childhood fever than one that relies on travel time alone.
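A minimal sketch of the univariate-versus-multivariate comparison described above follows: predict treatment seeking from travel time alone, then add household wealth and number of children, and compare AUC. Synthetic data stand in for the KDHS 2014, and plain logistic regression stands in for the multilevel mixed model; all variable names and coefficients are assumptions.

```python
# Hedged sketch: compare a travel-time-only model to one with extra covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 1000
travel_time = rng.exponential(60, n)   # minutes to nearest facility (toy)
wealth = rng.normal(0, 1, n)
n_children = rng.poisson(3, n)
logit = -0.02 * travel_time + 0.8 * wealth - 0.15 * n_children + 1.0
used = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # sought facility treatment

X_uni = travel_time.reshape(-1, 1)
X_multi = np.column_stack([travel_time, wealth, n_children])
uni = LogisticRegression().fit(X_uni, used)
multi = LogisticRegression().fit(X_multi, used)

auc_uni = roc_auc_score(used, uni.predict_proba(X_uni)[:, 1])
auc_multi = roc_auc_score(used, multi.predict_proba(X_multi)[:, 1])
print(f"AUC univariate {auc_uni:.2f} vs multivariate {auc_multi:.2f}")
```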
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Xin -Yu; Bhagatwala, Ankit; Chen, Jacqueline H.
The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly-proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the prediction of the locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.
Koerner, Tess K.; Zhang, Yang
2017-01-01
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422
Steel Containment Vessel Model Test: Results and Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costello, J.F.; Hashimoto, T.; Hessheimer, M.F.
A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. A concentric steel contact structure (CS), installed over the SCV model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. The SCV model and contact structure were instrumented with strain gages and displacement transducers to record the deformation behavior of the SCV model during the high pressure test. This paper summarizes the conduct and the results of the high pressure test and discusses the posttest metallurgical evaluation results on specimens removed from the SCV model.
Lin, Monica H; Kwan, Virginia S Y; Cheung, Anna; Fiske, Susan T
2005-01-01
The Stereotype Content Model hypothesizes anti-Asian American stereotypes differentiating two dimensions: (excessive) competence and (deficient) sociability. The Scale of Anti-Asian American Stereotypes (SAAAS) shows this envious mixed prejudice in six studies. Study 1 began with 131 racial attitude items. Studies 2 and 3 tested 684 respondents on a focused 25-item version. Studies 4 and 5 tested the final 25-item SAAAS on 222 respondents at three campuses; scores predicted outgroup friendships, cultural experiences, and (over)estimated campus presence. Study 6 showed that allegedly low sociability, rather than excessively high competence, drives rejection of Asian Americans, consistent with system justification theory. The SAAAS demonstrates mixed, envious anti-Asian American prejudice, contrasting with more-often-studied contemptuous racial prejudices (i.e., against Blacks).
Lobréaux, Stéphane; Melodelima, Christelle
2015-02-01
We tested the use of Generalized Linear Mixed Models to detect associations between genetic loci and environmental variables, taking into account the population structure of sampled individuals. We used a simulation approach to generate datasets under demographically and selectively explicit models. These datasets were used to analyze and optimize GLMM capacity to detect the association between markers and selective coefficients as environmental data in terms of false and true positive rates. Different sampling strategies were tested, maximizing the number of populations sampled, sites sampled per population, or individuals sampled per site, and the effect of different selective intensities on the efficiency of the method was determined. Finally, we apply these models to an Arabidopsis thaliana SNP dataset from different accessions, looking for loci associated with spring minimal temperature. We identified 25 regions that exhibit unusual correlations with the climatic variable and contain genes with functions related to temperature stress. Copyright © 2014 Elsevier Inc. All rights reserved.
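A hedged sketch of testing a marker-environment association while accounting for population structure with a random intercept per population follows. The study used GLMMs; this illustration uses a Gaussian linear mixed model from statsmodels as a simplification, and all column names and data are synthetic placeholders rather than the Arabidopsis dataset.

```python
# Hedged sketch: locus-environment association with a per-population random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_pops, n_per = 20, 15
pop = np.repeat(np.arange(n_pops), n_per)
pop_effect = rng.normal(0, 1, n_pops)[pop]                 # shared population structure
genotype = rng.binomial(2, 0.3, n_pops * n_per)            # 0/1/2 allele counts at one SNP
spring_tmin = 0.4 * genotype + pop_effect + rng.normal(0, 1, n_pops * n_per)

df = pd.DataFrame({"spring_tmin": spring_tmin, "genotype": genotype, "pop": pop})
fit = smf.mixedlm("spring_tmin ~ genotype", df, groups=df["pop"]).fit()
print(fit.pvalues["genotype"])                              # association test for the locus
```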
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex
Lindsay, Grace W.
2017-01-01
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463
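A minimal sketch of the circuit idea in the abstract follows: random feedforward input weights followed by a simple Hebbian update that correlates PFC-like unit responses with their inputs. Network sizes, learning rate, nonlinearity, and normalization are illustrative assumptions, not the published model parameters.

```python
# Hedged sketch: Hebbian learning applied to random feedforward connectivity.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_pfc, n_trials, lr = 40, 100, 500, 1e-3
W = rng.normal(0, 1 / np.sqrt(n_in), size=(n_pfc, n_in))   # random feedforward weights

for _ in range(n_trials):
    x = rng.normal(size=n_in)                               # stimulus/task input pattern
    r = np.tanh(W @ x)                                      # PFC-like unit responses
    W += lr * np.outer(r, x)                                # Hebbian: co-active pre/post strengthen
    W /= np.linalg.norm(W, axis=1, keepdims=True)           # normalization keeps weights bounded
```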
Plasma transport in an Eulerian AMR code
Vold, E. L.; Rauenzahn, R. M.; Aldrich, C. H.; ...
2017-04-04
A plasma transport model has been implemented in an Eulerian AMR radiation-hydrodynamics code, xRage, which includes plasma viscosity in the momentum tensor, viscous dissipation in the energy equations, and binary species mixing with consistent species mass and energy fluxes driven by concentration gradients, ion and electron baro-diffusion terms and temperature gradient forces. The physics basis, computational issues, numeric options, and results from several test problems are discussed. The transport coefficients are found to be relatively insensitive to the kinetic correction factors when the concentrations are expressed with the molar fractions and the ion mass differences are large. The contributions to flow dynamics from plasma viscosity and mass diffusion were found to increase significantly as scale lengths decrease in an inertial confinement fusion relevant Kelvin-Helmholtz instability mix layer. The mixing scale lengths in the test case are on the order of 100 μm and smaller for viscous effects to appear and 10 μm or less for significant ion species diffusion, evident over durations on the order of nanoseconds. The temperature gradient driven mass flux is seen to deplete a high Z tracer ion at the ion shock front. The plasma transport model provides the generation of the atomic mix per unit of interfacial area between two species with no free parameters. The evolution of the total atomic mix then depends also on an accurate resolution or estimate of the interfacial area between the species mixing by plasma transport. High resolution simulations or a more Lagrangian-like treatment of species interfaces may be required to distinguish plasma transport and numerical diffusion in an Eulerian computation of complex and dynamically evolving mix regions.
Plasma transport in an Eulerian AMR code
NASA Astrophysics Data System (ADS)
Vold, E. L.; Rauenzahn, R. M.; Aldrich, C. H.; Molvig, K.; Simakov, A. N.; Haines, B. M.
2017-04-01
A plasma transport model has been implemented in an Eulerian AMR radiation-hydrodynamics code, xRage, which includes plasma viscosity in the momentum tensor, viscous dissipation in the energy equations, and binary species mixing with consistent species mass and energy fluxes driven by concentration gradients, ion and electron baro-diffusion terms and temperature gradient forces. The physics basis, computational issues, numeric options, and results from several test problems are discussed. The transport coefficients are found to be relatively insensitive to the kinetic correction factors when the concentrations are expressed with the molar fractions and the ion mass differences are large. The contributions to flow dynamics from plasma viscosity and mass diffusion were found to increase significantly as scale lengths decrease in an inertial confinement fusion relevant Kelvin-Helmholtz instability mix layer. The mixing scale lengths in the test case are on the order of 100 μm and smaller for viscous effects to appear and 10 μm or less for significant ion species diffusion, evident over durations on the order of nanoseconds. The temperature gradient driven mass flux is seen to deplete a high Z tracer ion at the ion shock front. The plasma transport model provides the generation of the atomic mix per unit of interfacial area between two species with no free parameters. The evolution of the total atomic mix then depends also on an accurate resolution or estimate of the interfacial area between the species mixing by plasma transport. High resolution simulations or a more Lagrangian-like treatment of species interfaces may be required to distinguish plasma transport and numerical diffusion in an Eulerian computation of complex and dynamically evolving mix regions.
Development of fuel oil management system software: Phase 1, Tank management module. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lange, H.B.; Baker, J.P.; Allen, D.
1992-01-01
The Fuel Oil Management System (FOMS) is a micro-computer based software system being developed to assist electric utilities that use residual fuel oils with oil purchase and end-use decisions. The Tank Management Module (TMM) is the first FOMS module to be produced. TMM enables the user to follow the mixing status of oils contained in a number of oil storage tanks. The software contains a computational model of residual fuel oil mixing which addresses mixing that occurs as one oil is added to another in a storage tank and also purposeful mixing of the tank by propellers, recirculation or convection. The model also addresses the potential for sludge formation due to incompatibility of oils being mixed. Part 1 of the report presents a technical description of the mixing model and a description of its development. Steps followed in developing the mixing model included: (1) definition of ranges of oil properties and tank design factors used by utilities; (2) review and adaption of prior applicable work; (3) laboratory development; and (4) field verification. Also, a brief laboratory program was devoted to exploring the suitability of suggested methods for predicting viscosities, flash points and pour points of oil mixtures. Part 2 of the report presents a functional description of the TMM software and a description of its development. The software development program consisted of the following steps: (1) on-site interviews at utilities to prioritize needs and characterize user environments; (2) construction of the user interface; and (3) field testing the software.
Development of fuel oil management system software: Phase 1, Tank management module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lange, H.B.; Baker, J.P.; Allen, D.
1992-01-01
The Fuel Oil Management System (FOMS) is a micro-computer based software system being developed to assist electric utilities that use residual fuel oils with oil purchase and end-use decisions. The Tank Management Module (TMM) is the first FOMS module to be produced. TMM enables the user to follow the mixing status of oils contained in a number of oil storage tanks. The software contains a computational model of residual fuel oil mixing which addresses mixing that occurs as one oil is added to another in a storage tank and also purposeful mixing of the tank by propellers, recirculation or convection. The model also addresses the potential for sludge formation due to incompatibility of oils being mixed. Part 1 of the report presents a technical description of the mixing model and a description of its development. Steps followed in developing the mixing model included: (1) definition of ranges of oil properties and tank design factors used by utilities; (2) review and adaption of prior applicable work; (3) laboratory development; and (4) field verification. Also, a brief laboratory program was devoted to exploring the suitability of suggested methods for predicting viscosities, flash points and pour points of oil mixtures. Part 2 of the report presents a functional description of the TMM software and a description of its development. The software development program consisted of the following steps: (1) on-site interviews at utilities to prioritize needs and characterize user environments; (2) construction of the user interface; and (3) field testing the software.
Gómez, Javier B; Gimeno, María J; Auqué, Luis F; Acero, Patricia
2014-01-15
This paper presents the mixing modelling results for the hydrogeochemical characterisation of groundwaters in the Laxemar area (Sweden). This area is one of the two sites that have been investigated, under the financial patronage of the Swedish Nuclear Fuel and Waste Management Co. (SKB), as possible candidates for hosting the proposed repository for the long-term storage of spent nuclear fuel. The classical geochemical modelling, interpreted in the light of the palaeohydrogeological history of the system, has shown that the driving process in the geochemical evolution of this groundwater system is the mixing between four end-member waters: a deep and old saline water, a glacial meltwater, an old marine water, and a meteoric water. In this paper we put the focus on mixing and its effects on the final chemical composition of the groundwaters using a comprehensive methodology that combines principal component analysis with mass balance calculations. This methodology allows us to test several combinations of end-member waters and several combinations of compositional variables in order to find optimal solutions in terms of mixing proportions. We have applied this methodology to a dataset of 287 groundwater samples from the Laxemar area collected and analysed by SKB. The best model found uses four conservative elements (Cl, Br, oxygen-18 and deuterium), and computes mixing proportions with respect to three end-member waters (saline, glacial and meteoric). Once the first order effect of mixing has been taken into account, water-rock interaction can be used to explain the remaining variability. In this way, the chemistry of each water sample can be obtained by using the mixing proportions for the conservative elements, only affected by mixing, or combining the mixing proportions and the chemical reactions for the non-conservative elements in the system, establishing the basis for predictive calculations. © 2013 Elsevier B.V. All rights reserved.
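A hedged sketch of the mixing-proportion step follows: solve for end-member fractions from the conservative tracers named above (Cl, Br, oxygen-18, deuterium) with a sum-to-one constraint, using non-negative least squares on an augmented system. The end-member compositions and sample values are made-up placeholders, not SKB/Laxemar data, and the published PCA step is not reproduced.

```python
# Hedged sketch: mixing proportions of three end members from four conservative tracers.
import numpy as np
from scipy.optimize import nnls

# columns = end members (saline, glacial, meteoric); rows = Cl, Br, d18O, d2H (toy values)
E = np.array([[6000.0,   0.5,   10.0],
              [  40.0, 0.003,   0.05],
              [  -9.0, -21.0,  -11.0],
              [ -66.0, -158.0, -80.0]])
sample = np.array([1500.0, 10.0, -13.0, -95.0])

scale = 10.0                                   # weight on the sum-to-one row
A = np.vstack([E, scale * np.ones(E.shape[1])])
b = np.append(sample, scale * 1.0)
fractions, _ = nnls(A, b)                      # mixing proportions, >= 0, ~sum to 1
print(fractions, fractions.sum())
```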
Testing cloud microphysics parameterizations in NCAR CAM5 with ISDAC and M-PACE observations
NASA Astrophysics Data System (ADS)
Liu, Xiaohong; Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Shi, Xiangjun; Wang, Zhien; Lin, Wuyin; Ghan, Steven J.; Earle, Michael; Liu, Peter S. K.; Zelenyuk, Alla
2011-01-01
Arctic clouds simulated by the National Center for Atmospheric Research (NCAR) Community Atmospheric Model version 5 (CAM5) are evaluated with observations from the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Indirect and Semi-Direct Aerosol Campaign (ISDAC) and Mixed-Phase Arctic Cloud Experiment (M-PACE), which were conducted at its North Slope of Alaska site in April 2008 and October 2004, respectively. Model forecasts for the Arctic spring and fall seasons performed under the Cloud-Associated Parameterizations Testbed framework generally reproduce the spatial distributions of cloud fraction for single-layer boundary-layer mixed-phase stratocumulus and multilayer or deep frontal clouds. However, for low-level stratocumulus, the model significantly underestimates the observed cloud liquid water content in both seasons. As a result, CAM5 significantly underestimates the surface downward longwave radiative fluxes by 20-40 W m-2. Introducing a new ice nucleation parameterization slightly improves the model performance for low-level mixed-phase clouds by increasing cloud liquid water content through the reduction of the conversion rate from cloud liquid to ice by the Wegener-Bergeron-Findeisen process. The CAM5 single-column model testing shows that changing the instantaneous freezing temperature of rain to form snow from -5°C to -40°C causes a large increase in modeled cloud liquid water content through the slowing down of cloud liquid and rain-related processes (e.g., autoconversion of cloud liquid to rain). The underestimation of aerosol concentrations in CAM5 in the Arctic also plays an important role in the low bias of cloud liquid water in the single-layer mixed-phase clouds. In addition, numerical issues related to the coupling of model physics and time stepping in CAM5 are responsible for the model biases and will be explored in future studies.
Verdon, Megan; Morrison, R S; Hemsworth, P H
2018-05-01
This experiment examined the effects of group composition on sow aggressive behaviour and welfare. Over 6 time replicates, 360 sows (parity 1-6) were mixed into groups (10 sows per pen, 1.8 m2/sow) composed of animals that were predicted to be aggressive (n = 18 pens) or groups composed of animals that were randomly selected (n = 18 pens). Predicted aggressive sows were selected based on a model-pig test that has been shown to be related to the aggressive behaviour of parity 2 sows when subsequently mixed in groups. Measurements were taken on aggression delivered post-mixing, and aggression delivered around feeding, fresh skin injuries and plasma cortisol concentrations at days 2 and 24 post-mixing. Live weight gain, litter size (born alive, total born, stillborn piglets), and farrowing rate were also recorded. Manipulating the group composition based on predicted sow aggressiveness had no effect (P > 0.05) on sow aggression delivered at mixing or around feeding, fresh injuries, cortisol, weight gain from day 2 to day 24, farrowing rate, or litter size. The lack of treatment effects in the present experiment could be attributed to (1) a failure of the model-pig test to predict aggression in older sows in groups, or (2) the dependence of the expression of the aggressive phenotype on factors such as social experience and characteristics (e.g., physical size and aggressive phenotype) of pen mates. This research draws attention to the intrinsic difficulties associated with predicting behaviour across contexts, particularly when the behaviour is highly dependent on interactions with conspecifics, and highlights the social complexities involved in the presentation of a behavioural phenotype. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valocchi, Albert; Werth, Charles; Liu, Wen-Tso
Bioreduction is being actively investigated as an effective strategy for subsurface remediation and long-term management of DOE sites contaminated by metals and radionuclides (i.e. U(VI)). These strategies require manipulation of the subsurface, usually through injection of chemicals (e.g., electron donor) which mix at varying scales with the contaminant to stimulate metal reducing bacteria. There is evidence from DOE field experiments suggesting that mixing limitations of substrates at all scales may affect biological growth and activity for U(VI) reduction. Although current conceptual models hold that biomass growth and reduction activity is limited by physical mixing processes, a growing body of literature suggests that reaction could be enhanced by cell-to-cell interaction occurring over length scales extending tens to thousands of microns. Our project investigated two potential mechanisms of enhanced electron transfer. The first is the formation of single- or multiple-species biofilms that transport electrons via direct electrical connection such as conductive pili (i.e. ‘nanowires') through biofilms to where the electron acceptor is available. The second is through diffusion of electron carriers from syntrophic bacteria to dissimilatory metal reducing bacteria (DMRB). The specific objectives of this work are (i) to quantify the extent and rate that electrons are transported between microorganisms in physical mixing zones between an electron donor and electron acceptor (e.g. U(IV)), (ii) to quantify the extent that biomass growth and reaction are enhanced by interspecies electron transport, and (iii) to integrate mixing across scales (e.g., microscopic scale of electron transfer and macroscopic scale of diffusion) in an integrated numerical model to quantify these mechanisms on overall U(VI) reduction rates. We tested these hypotheses with five tasks that integrate microbiological experiments, unique micro-fluidics experiments, flow cell experiments, and multi-scale numerical models. Continuous fed-batch reactors were used to derive kinetic parameters for DMRB, and to develop an enrichment culture for elucidation of syntrophic relationships in a complex microbial community. Pore and continuum scale experiments using microfluidic and bench top flow cells were used to evaluate the impact of cell-to-cell and microbial interactions on reaction enhancement in mixing-limited bioactive zones, and the mechanisms of this interaction. Some of the microfluidic experiments were used to develop and test models that consider direct cell-to-cell interactions during metal reduction. Pore scale models were incorporated into a multi-scale hybrid modeling framework that combines pore scale modeling at the reaction interface with continuum scale modeling. New computational frameworks for combining continuum and pore-scale models were also developed.
IRT-Estimated Reliability for Tests Containing Mixed Item Formats
ERIC Educational Resources Information Center
Shu, Lianghua; Schwarz, Richard D.
2014-01-01
As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
Random Effects Structure for Confirmatory Hypothesis Testing: Keep It Maximal
ERIC Educational Resources Information Center
Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.
2013-01-01
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…
A model of the saturation of coupled electron and ion scale gyrokinetic turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staebler, Gary M.; Howard, Nathan T.; Candy, Jeffrey M.
A new paradigm of zonal flow mixing as the mechanism by which zonal E × B fluctuations impact the saturation of gyrokinetic turbulence has recently been deduced from the nonlinear 2D spectrum of electric potential fluctuations in gyrokinetic simulations. These state of the art simulations span the physical scales of both ion and electron turbulence. It was found that the zonal flow mixing rate, rather than zonal flow shearing rate, competes with linear growth at both electron and ion scales. A model for saturation of the turbulence by the zonal flow mixing was developed and applied to the quasilinear trapped gyro-Landau fluid transport model (TGLF). The first validation tests of the new saturation model are reported in this paper with data from L-mode and high-βp regime discharges from the DIII-D tokamak. Lastly, the shortfall in the predicted L-mode edge electron energy transport is improved with the new saturation model for these discharges but additional multiscale simulations are required in order to verify the safety factor and collisionality dependencies found in the modeling.
A model of the saturation of coupled electron and ion scale gyrokinetic turbulence
Staebler, Gary M.; Howard, Nathan T.; Candy, Jeffrey M.; ...
2017-05-09
A new paradigm of zonal flow mixing as the mechanism by which zonal E × B fluctuations impact the saturation of gyrokinetic turbulence has recently been deduced from the nonlinear 2D spectrum of electric potential fluctuations in gyrokinetic simulations. These state of the art simulations span the physical scales of both ion and electron turbulence. It was found that the zonal flow mixing rate, rather than zonal flow shearing rate, competes with linear growth at both electron and ion scales. A model for saturation of the turbulence by the zonal flow mixing was developed and applied to the quasilinear trapped gyro-Landau fluid transport model (TGLF). The first validation tests of the new saturation model are reported in this paper with data from L-mode and high-βp regime discharges from the DIII-D tokamak. Lastly, the shortfall in the predicted L-mode edge electron energy transport is improved with the new saturation model for these discharges but additional multiscale simulations are required in order to verify the safety factor and collisionality dependencies found in the modeling.
Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam
2017-01-01
Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) obtained image over plaster model for the assessment of mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were derived from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data and P < 0.05 was considered statistically significant. Results: Statistically, significant results were obtained on data comparison between CBCT-derived images and plaster model; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. Conclusion: CBCT-derived images were less reliable as compared to data obtained directly from plaster model for mixed dentition analysis. PMID:28852639
Application of DPIV to Enhanced Mixing Heated Nozzle Flows
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Bridges, James
2002-01-01
Digital Particle Image Velocimetry (DPIV) is a planar velocity measurement technique that continues to be applied to new and challenging engineering research facilities while significantly reducing facility test time. DPIV was used in the GRC Nozzle Acoustic Test Rig (NATR) to characterize the high temperature (560 °C), high speed (>500 m/s) flow field properties of mixing enhanced jet engine nozzles. The instantaneous velocity maps obtained using DPIV were used to determine mean velocity, rms velocity and two-point correlation statistics to verify the true turbulence characteristics of the flow. These measurements will ultimately be used to properly validate aeroacoustic model predictions by verifying CFD input to these models. These turbulence measurements have previously not been possible in hot supersonic jets. Mapping the nozzle velocity field using point based techniques requires over 60 hours of test time, compared to less than 45 minutes using DPIV, yielding a significant reduction in testing time. A dual camera DPIV configuration was used to maximize the field of view and further minimize the testing time required to map the nozzle flow. The DPIV system field of view covered 127 by 267 mm. Data were acquired at 19 axial stations providing coverage of the flow from the nozzle exit to 2.37 m downstream. At each measurement station, 400 image frame pairs were acquired from each camera. The DPIV measurements of the mixing enhanced nozzle designs illustrate the changes in the flow field resulting in the reduced noise signature.
New Hampshire binder and mix review.
DOT National Transportation Integrated Search
2012-08-01
This review was initiated to compare relative rut testing and simple performance tests (now known as Asphalt Mix Performance Tests) for the New Hampshire inch mix with 15% Recycled Asphalt Pavement (RAP). The tested mixes were made from ...
González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier
2008-01-01
Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from different disorders such as Prader-Willi, DiGeorge or autism, for which the method showed the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
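A hedged sketch of a per-sample threshold built from a normal tolerance interval on that sample's reference-probe ratios follows, used to flag altered target probes. The tolerance factor uses a standard approximation; the probe values are synthetic, and the published mixed-model normalization step is not reproduced here.

```python
# Hedged sketch: per-sample tolerance-interval threshold for flagging altered probes.
import numpy as np
from scipy.stats import norm, chi2

def tolerance_factor(n, coverage=0.95, conf=0.95):
    """Approximate two-sided normal tolerance factor (Howe-type approximation)."""
    z = norm.ppf((1 + coverage) / 2)
    return np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2.ppf(1 - conf, n - 1))

rng = np.random.default_rng(5)
reference_ratios = rng.normal(1.0, 0.05, 12)       # this sample's reference probes (toy)
k = tolerance_factor(len(reference_ratios))
mean, sd = reference_ratios.mean(), reference_ratios.std(ddof=1)
lo, hi = mean - k * sd, mean + k * sd

target_ratios = np.array([1.02, 0.55, 1.48])       # e.g. normal, deletion, duplication (toy)
print([(r, not lo <= r <= hi) for r in target_ratios])   # True = flagged as altered
```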
Burnett, B P; Jia, Q; Zhao, Y; Levy, R M
2007-09-01
A mixed extract containing two naturally occurring flavonoids, baicalin from Scutellaria baicalensis and catechin from Acacia catechu, was tested for cyclooxygenase (COX) and 5-lipoxygenase (5-LOX) inhibition via enzyme, cellular, and in vivo models. The 50% inhibitory concentration for inhibition of both ovine COX-1 and COX-2 peroxidase enzyme activities was 15 microg/mL, while the mixed extract showed a value for potato 5-LOX enzyme activity of 25 microg/mL. Prostaglandin E2 generation was inhibited by the mixed extract in human osteosarcoma cells expressing COX-2, while leukotriene production was inhibited in both human cell lines, immortalized THP-1 monocyte and HT-29 colorectal adenocarcinoma. In an arachidonic acid-induced mouse ear swelling model, the extract decreased edema in a dose-dependent manner. When arachidonic acid was injected directly into the intra-articular space of mouse ankle joints, the mixed extract abated the swelling and restored function in a rotary drum walking model. These results suggest that this natural, flavonoid mixture acts via "dual inhibition" of COX and LOX enzymes to reduce production of pro-inflammatory eicosanoids and attenuate edema in an in vivo model of inflammation.
Estimating proportions in petrographic mixing equations by least-squares approximation.
Bryan, W B; Finger, L W; Chayes, F
1969-02-28
Petrogenetic hypotheses involving fractional crystallization, assimilation, or mixing of magmas may be expressed and tested as problems in least-squares approximation. The calculation uses all of the data and yields a unique solution for each model, thus avoiding the ambiguity inherent in graphical or trial-and-error procedures. The compositional change in the 1960 lavas of Kilauea Volcano, Hawaii, is used to illustrate the method of calculation.
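A minimal sketch of the least-squares mixing calculation described above follows: express a daughter composition as a linear combination of a parent magma and mineral phases and solve for the proportions. The oxide compositions below are made-up placeholders, not the Kilauea 1960 data.

```python
# Sketch of a petrographic mixing equation solved by least squares.
import numpy as np

# rows = oxides (SiO2, MgO, CaO, FeO); columns = parent magma, olivine, plagioclase (toy wt%)
phases = np.array([[50.0, 40.5, 47.5],
                   [ 7.5, 48.0,  0.2],
                   [11.0,  0.3, 16.5],
                   [11.5, 11.0,  0.6]])
daughter = np.array([50.5, 6.0, 11.5, 11.3])

coeffs, residuals, rank, _ = np.linalg.lstsq(phases, daughter, rcond=None)
print(coeffs)   # proportions of each phase; sum ideally ~1, signs indicate addition/removal
```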
NASA Astrophysics Data System (ADS)
Ullman, D. J.; Schmittner, A.; Danabasoglu, G.; Norton, N. J.; Müller, M.
2016-02-01
Oscillations in the moon's orbit around the earth modulate regional tidal dissipation with a periodicity of 18.6 years. In regions where the diurnal tidal constituents dominate diapycnal mixing, this Lunar Nodal Cycle (LNC) may be significant enough to influence ocean circulation, sea surface temperature, and climate variability. Such periodicity in the LNC as an external forcing may provide a mechanistic source for Pacific decadal variability (i.e. Pacific Decadal Oscillation, PDO) where diurnal tidal constituents are strong. We have introduced three enhancements to the latest version of the Community Earth System Model (CESM) to better simulate tidal-forced mixing. First, we have produced a sub-grid scale bathymetry scheme that better resolves the vertical distribution of the barotropic energy flux in regions where the native CESM grid does not resolve high spatial-scale bathymetric features. Second, we test a number of alternative barotropic tidal constituent energy flux fields that are derived from various satellite altimeter observations and tidal models. Third, we introduce modulations of the individual diurnal and semi-diurnal tidal constituents, ranging from monthly to decadal periods, as derived from the full lunisolar tidal potential. Using both ocean-only and fully-coupled configurations, we test the influence of these enhancements, particularly the LNC modulations, on ocean mixing and bidecadal climate variability in CESM.
Mixing of Supersonic Jets in a RBCC Strutjet Propulsion System
NASA Technical Reports Server (NTRS)
Muller, S.; Hawk, Clark W.; Bakker, P. G.; Parkinson, D.; Turner, M.
1998-01-01
The Strutjet approach to Rocket Based Combined Cycle (RBCC) propulsion depends upon fuel-rich flows from the rocket nozzles and turbine exhaust products mixing with the ingested air for successful operation in the ramjet and scramjet modes. It is desirable to delay this mixing process in the air-augmented mode of operation present during take-off and low speed flight. A scale model of the Strutjet device was built and tested to investigate the mixing of the streams as a function of distance from the Strut exit plane in simulated sea level take-off conditions. The Planar Laser Induced Fluorescence (PLIF) diagnostic method has been employed to observe the mixing of the turbine exhaust gas with the gases from both the primary rockets and the ingested air. The ratio of the pressure in the turbine exhaust to that in the rocket nozzle wall at the point where the two jets meet, is the independent variable in these experiments. Tests were accomplished at values of 1.0 (the original design point), 1.5 and 2.0 for this parameter at 8 locations downstream of the rocket nozzle exit. The results illustrate the development of the mixing zone from the exit plane of the strut to a distance of about 18 equivalent rocket nozzle exit diameters downstream (18"). These images show the turbine exhaust to be confined until a short distance downstream. The expansion into the ingested air is more pronounced at a pressure ratio of 1.0 and 1.5 and shows that mixing with this air would likely begin at a distance of 2" downstream of the nozzle exit plane. Of the pressure ratios tested in this research, 2.0 is the best value for delaying the mixing at the operating conditions considered.
Mei, J.; Dong, P.; Kalnaus, S.; ...
2017-07-21
It has been well established that the fatigue damage process is load-path dependent under non-proportional multi-axial loading conditions. Most studies to date have focused on the interpretation of S-N based test data by constructing a path-dependent fatigue damage model. Our paper presents a two-parameter mixed-mode fatigue crack growth model which takes into account the dependency of crack growth on both the load path traversed and a maximum effective stress intensity attained in a stress intensity factor plane (e.g., the KI-KIII plane). Furthermore, by taking advantage of a path-dependent maximum range (PDMR) cycle definition (Dong et al., 2010; Wei and Dong, 2010), the two parameters are formulated by introducing a moment of load path (MLP) based equivalent stress intensity factor range (ΔKNP) and a maximum effective stress intensity parameter KMax incorporating an interaction term KI·KIII. To examine the effectiveness of the proposed model, two sets of crack growth rate test data are considered. The first set is obtained as a part of this study using 304 stainless steel disk specimens subjected to three combined non-proportional mode I and III loading conditions (i.e., with a phase angle of 0°, 90°, and 180°). The second set was obtained by Feng et al. (2007) using 1070 steel disk specimens subjected to similar types of non-proportional mixed-mode conditions. When the proposed two-parameter non-proportional mixed-mode crack growth model is used, it is shown that a good correlation can be achieved for both sets of the crack growth rate test data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, J.; Dong, P.; Kalnaus, S.
It has been well established that the fatigue damage process is load-path dependent under non-proportional multi-axial loading conditions. Most studies to date have focused on the interpretation of S-N based test data by constructing a path-dependent fatigue damage model. Our paper presents a two-parameter mixed-mode fatigue crack growth model which takes into account the dependency of crack growth on both the load path traversed and a maximum effective stress intensity attained in a stress intensity factor plane (e.g., the KI-KIII plane). Furthermore, by taking advantage of a path-dependent maximum range (PDMR) cycle definition (Dong et al., 2010; Wei and Dong, 2010), the two parameters are formulated by introducing a moment of load path (MLP) based equivalent stress intensity factor range (ΔKNP) and a maximum effective stress intensity parameter KMax incorporating an interaction term KI·KIII. To examine the effectiveness of the proposed model, two sets of crack growth rate test data are considered. The first set is obtained as a part of this study using 304 stainless steel disk specimens subjected to three combined non-proportional mode I and III loading conditions (i.e., with a phase angle of 0°, 90°, and 180°). The second set was obtained by Feng et al. (2007) using 1070 steel disk specimens subjected to similar types of non-proportional mixed-mode conditions. When the proposed two-parameter non-proportional mixed-mode crack growth model is used, it is shown that a good correlation can be achieved for both sets of the crack growth rate test data.
Yang, Yao Bin; Swithenbank, Jim
2008-01-01
Packed bed combustion is still the most common way to burn municipal solid wastes. In this paper, a dispersion model for particle mixing, mainly caused by the movement of the grate in a moving-burning bed, has been proposed and transport equations for the continuity, momentum, species, and energy conservation are described. Particle-mixing coefficients obtained from model tests range from 2.0×10−6 to 3.0×10−5 m2/s. A numerical solution is sought to simulate the combustion behaviour of a full-scale 12-tonne-per-h waste incineration furnace at different levels of bed mixing. It is found that an increase in mixing causes a slight delay in the bed ignition but greatly enhances the combustion processes during the main combustion period in the bed. A medium-level mixing produces a combustion profile that is positioned more at the central part of the combustion chamber, and any leftover combustible gases (mainly CO) enter directly into the most intensive turbulence area created by the opposing secondary-air jets and thus are consumed quickly. Generally, the specific arrangement of the impinging secondary-air jets dumps most of the non-uniformity in temperature and CO into the gas flow coming from the bed-top, while medium-level mixing results in the lowest CO emission at the furnace exit and the highest combustion efficiency in the bed.
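A sketch of how a particle-mixing (dispersion) coefficient in the quoted 2.0×10−6 to 3.0×10−5 m2/s range acts on a solids property along the bed follows, using an explicit 1-D diffusion update. The grid, time step, and initial profile are illustrative assumptions, not taken from the incinerator model.

```python
# Hedged sketch: 1-D dispersion of a solids property with a grate-mixing coefficient.
import numpy as np

D = 1.0e-5                 # particle-mixing coefficient, m^2/s (within the quoted range)
dx, dt = 0.05, 10.0        # m, s  (dt <= dx^2 / (2 D) for explicit-scheme stability)
x = np.arange(0.0, 5.0, dx)
c = np.where(x < 2.5, 1.0, 0.0)   # step profile, e.g. char fraction along the grate (toy)

for _ in range(600):       # ~100 minutes of bed residence (toy)
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

print(c[::20])             # smoothed profile showing the effect of grate-induced mixing
```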
Modeling condensation with a noncondensable gas for mixed convection flow
NASA Astrophysics Data System (ADS)
Liao, Yehong
2007-05-01
This research theoretically developed a novel mixed convection model for condensation with a noncondensable gas. The model developed herein is comprised of three components: a convection regime map; a mixed convection correlation; and a generalized diffusion layer model. These components were developed in a way to be consistent with the three-level methodology in MELCOR. The overall mixed convection model was implemented into MELCOR and satisfactorily validated with data covering a wide variety of test conditions. In the development of the convection regime map, two analyses with approximations of the local similarity method were performed to solve the multi-component two-phase boundary layer equations. The first analysis studied effects of the bulk velocity on a basic natural convection condensation process and setup conditions to distinguish natural convection from mixed convection. It was found that the superimposed velocity increases condensation heat transfer by sweeping away the noncondensable gas accumulated at the condensation boundary. The second analysis studied effects of the buoyancy force on a basic forced convection condensation process and setup conditions to distinguish forced convection from mixed convection. It was found that the superimposed buoyancy force increases condensation heat transfer by thinning the liquid film thickness and creating a steeper noncondensable gas concentration profile near the condensation interface. In the development of the mixed convection correlation accounting for suction effects, numerical data were obtained from boundary layer analysis for the three convection regimes and used to fit a curve for the Nusselt number of the mixed convection regime as a function of the Nusselt numbers of the natural and forced convection regimes. In the development of the generalized diffusion layer model, the driving potential for mass transfer was expressed as the temperature difference between the bulk and the liquid-gas interface using the Clausius-Clapeyron equation. The model was developed on a mass basis instead of a molar basis to be consistent with general conservation equations. It was found that vapor diffusion is not only driven by a gradient of the molar fraction but also a gradient of the mixture molecular weight at the diffusion layer.
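The abstract describes a mixed convection correlation built by fitting the mixed-regime Nusselt number as a function of the natural and forced convection regime values. A hedged sketch of one common blending form follows; the power-law form and exponent are textbook choices used purely for illustration, since the dissertation's actual fitted correlation (including suction effects) is not given above.

```python
# Hedged sketch: power-law blend of forced and natural convection Nusselt numbers.
def nu_mixed(nu_forced, nu_natural, n=3.0):
    """Blend the two regime Nusselt numbers; recovers each limit when one dominates."""
    return (nu_forced**n + nu_natural**n) ** (1.0 / n)

print(nu_mixed(40.0, 15.0))   # close to the forced-convection value
print(nu_mixed(5.0, 30.0))    # close to the natural-convection value
```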
Using existing case-mix methods to fund trauma cases.
Monakova, Julia; Blais, Irene; Botz, Charles; Chechulin, Yuriy; Picciano, Gino; Basinski, Antoni
2010-01-01
Policymakers frequently face the need to increase funding in isolated and frequently heterogeneous (clinically and in terms of resource consumption) patient subpopulations. This article presents a methodologic solution for testing the appropriateness of using existing grouping and weighting methodologies for funding subsets of patients in the scenario where a case-mix approach is preferable to a flat-rate based payment system. Using as an example the subpopulation of trauma cases of Ontario lead trauma hospitals, the statistical techniques of linear and nonlinear regression models, regression trees, and spline models were applied to examine the fit of the existing case-mix groups and reference weights for the trauma cases. The analyses demonstrated that for funding Ontario trauma cases, the existing case-mix systems can form the basis for rational and equitable hospital funding, decreasing the need to develop a different grouper for this subset of patients. This study confirmed that Injury Severity Score is a poor predictor of costs for trauma patients. Although our analysis used the Canadian case-mix classification system and cost weights, the demonstrated concept of using existing case-mix systems to develop funding rates for specific subsets of patient populations may be applicable internationally.
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov Chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results under varied settings are presented, and the method is applied to the KIRBY21 test-retest dataset.
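For orientation (a schematic variance-components definition, not the paper's exact estimator), intra-class correlations of this family take the form

$$ \mathrm{ICC} = \frac{\sigma^2_{\mathrm{between}}}{\sigma^2_{\mathrm{between}} + \sigma^2_{\mathrm{within}}}, $$

and the GICC generalizes this ratio to graph-valued (connectivity) measurements by placing the between-subject and within-subject (test-retest) variability on the latent scale of the multivariate probit-linear mixed effect model.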
Very large eddy simulation of the Red Sea overflow
NASA Astrophysics Data System (ADS)
Ilıcak, Mehmet; Özgökmen, Tamay M.; Peters, Hartmut; Baumert, Helmut Z.; Iskandarani, Mohamed
Mixing between overflows and ambient water masses is a critical problem of deep-water mass formation in the downwelling branch of the meridional overturning circulation of the ocean. Modeling approaches that have been tested so far rely either on algebraic parameterizations in hydrostatic ocean circulation models, or on large eddy simulations that resolve most of the mixing using nonhydrostatic models. In this study, we examine the performance of a set of turbulence closures that have not previously been tested against observational data for overflows. We employ the so-called very large eddy simulation (VLES) technique, which allows the use of k-ɛ models in nonhydrostatic models. This is done by applying a dynamic spatial filtering to the k-ɛ equations. To our knowledge, this is the first time that the VLES approach has been adopted for an ocean modeling problem. The performance of the k-ɛ and VLES models is evaluated by conducting numerical simulations of the Red Sea overflow and comparing them to observations from the Red Sea Outflow Experiment (REDSOX). The computations are constrained to one of the main channels transporting the overflow, which is narrow enough to permit the use of a two-dimensional (and nonhydrostatic) model. A large set of experiments is conducted using different closure models, Reynolds numbers and spatial resolutions. It is found that, when no turbulence closure is used, the basic structure of the overflow, consisting of a well-mixed bottom layer (BL) and an entraining interfacial layer (IL), cannot be reproduced. The k-ɛ model leads to unrealistic thicknesses for both the BL and IL, while VLES results in the most realistic reproduction of the REDSOX observations.
Round Robin Analyses of the Steel Containment Vessel Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costello, J.F.; Hashimote, T.; Klamerus, E.W.
A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. Several organizations from the US, Europe, and Asia were invited to participate in a Round Robin analysis to perform independent pretest predictions and posttest evaluations of the behavior of the SCV model during the high pressure test. Both pretest and posttest analysis results from all Round Robin participants were compared to the high pressure test data. This paper summarizes the Round Robin analysis activities and discusses the lessons learned from the collective effort.
Thavamani, Palanisami; Megharaj, Mallavarapu; Naidu, Ravi
2015-06-01
The use of metal-tolerant polyaromatic hydrocarbon (PAH)-degrading bacteria is viable for mitigating metal inhibition of organic compound biodegradation in the remediation of mixed contaminated sites. Many microbial growth media used for toxicity testing contain high concentrations of metal-binding components such as phosphates that can reduce solution-phase metal concentrations, thereby underestimating the real toxicity. In this study, we isolated two PAH-degrading bacterial consortia from long-term mixed contaminated soils. We developed a new mineral medium by optimising the concentrations of medium components to allow bacterial growth while maintaining a high bioavailable metal concentration (Cd2+ as a model metal) in the medium. This medium has more than 60% of the Cd present as Cd2+ at pH 6.5, as measured by an ion selective electrode and the Visual MINTEQ model. The Cd-tolerance patterns of the consortia were tested and minimum inhibitory concentrations (MIC) derived. Consortium-5 had the highest MIC of 5 mg/l Cd, followed by consortium-9. Both cultures were able to completely metabolise 200 mg/l phenanthrene in less than 4 days in the presence of 5 mg/l Cd. The isolated metal-tolerant PAH-degrading bacterial cultures have great potential for bioremediation of mixed contaminated soils.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luk, V.K.; Hessheimer, M.F.; Matsumoto, T.
A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented.
Development of an Empirical Method for Predicting Jet Mixing Noise of Cold Flow Rectangular Jets
NASA Technical Reports Server (NTRS)
Russell, James W.
1999-01-01
This report presents an empirical method for predicting the jet mixing noise levels of cold flow rectangular jets. The report presents a detailed analysis of the methodology used in development of the prediction method. The empirical correlations used are based on narrow band acoustic data for cold flow rectangular model nozzle tests conducted in the NASA Langley Jet Noise Laboratory. There were 20 separate nozzle test operating conditions. For each operating condition 60 Hz bandwidth microphone measurements were made over a frequency range from 0 to 60,000 Hz. Measurements were performed at 16 polar directivity angles ranging from 45 degrees to 157.5 degrees. At each polar directivity angle, measurements were made at 9 azimuth directivity angles. The report shows the methods employed to remove screech tones and shock noise from the data in order to obtain the jet mixing noise component. The jet mixing noise was defined in terms of one third octave band spectral content, polar and azimuth directivity, and overall power level. Empirical correlations were performed over the range of test conditions to define each of these jet mixing noise parameters as a function of aspect ratio, jet velocity, and polar and azimuth directivity angles. The report presents the method for predicting the overall power level, the average polar directivity, the azimuth directivity and the location and shape of the spectra for jet mixing noise of cold flow rectangular jets.
Development of Eulerian Code Modeling for ICF Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, Paul A.
2014-02-27
One of the most pressing unexplained phenomena standing in the way of ICF ignition is mix and how it interacts with burn. Experiments were designed and fielded as part of the Defect-Induced Mix Experiment (DIME) project to obtain data about the extent of material mix and how this mix influenced burn. Experiments on the Omega laser and the National Ignition Facility (NIF) provided detailed data for comparison to the Eulerian code RAGE. The Omega experiments were able to resolve the mix and provide "proof of principle" support for subsequent NIF experiments, which were fielded from July 2012 through June 2013. The Omega shots were fired at least once per year between 2009 and 2012. RAGE was not originally designed to model inertial confinement fusion (ICF) implosions and lacks a laser package, so the code has been validated using an energy source. To test RAGE, the simulation output is compared to data by means of postprocessing tools that were developed for this purpose. Here, the various postprocessing tools are described with illustrative examples.
NASA Astrophysics Data System (ADS)
Galewsky, Joseph
2018-01-01
In situ measurements of water vapor isotopic composition from Mauna Loa, Hawaii, are merged with soundings from Hilo to show an inverse relationship between the estimated inversion strength (EIS) and isotopically derived measures of lower-tropospheric mixing. Remote sensing estimates of cloud fraction, cloud liquid water path, and cloud top pressure were all found to be higher (lower) under low (high) EIS. Inverse modeling of the isotopic data corresponding to terciles of EIS conditions provides quantitative constraints on the last-saturation temperatures and mixing fractions that govern the humidity above the trade inversion. The mixing fraction of water vapor transported from the boundary layer to Mauna Loa decreases with respect to EIS at a rate of about 3% K-1, corresponding to a mixing ratio decrease of 0.6 g kg-1 K-1. A last-saturation temperature of 240 K can match all observations. This approach can be applied in other settings and may be used to test models of low-cloud climate feedbacks.
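A minimal sketch of the two-end-member mixing idea underlying such estimates (all numbers are assumptions for illustration, not the paper's retrieval, which inverts a last-saturation/mixing model): specific humidity q and the isotope-weighted quantity q·deltaD both mix linearly, so an observed q fixes the mixing fraction and implies an isotopic composition that can be compared with the measurement.

q_bl, dD_bl = 15.0, -80.0   # moist boundary-layer end member: q (g/kg), deltaD (per mil) -- assumed
q_ft, dD_ft = 1.5, -350.0   # dry free-tropospheric (last-saturation) end member -- assumed
q_obs = 4.0                 # humidity observed above the trade inversion -- assumed

f = (q_obs - q_ft) / (q_bl - q_ft)                            # boundary-layer mixing fraction
dD_mix = (f * q_bl * dD_bl + (1 - f) * q_ft * dD_ft) / q_obs  # implied deltaD of the mixture
print(f, dD_mix)  # comparing dD_mix with the measured deltaD constrains f and the end members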
A Theory-Driven Model of Community College Student Engagement
ERIC Educational Resources Information Center
Schuetz, Pam
2008-01-01
This mixed-methods study develops, operationalizes, and tests a new conceptual model of community college student engagement. Themes emerging from participant observations and semistructured interviews with 30 adult students enrolled at a Large Best Practices Community College (LBPCC) over the 2005-2006 academic year are used to guide selection of…
Grading System and Student Effort
ERIC Educational Resources Information Center
Paredes, Valentina
2017-01-01
Several papers have proposed that the grading system affects students' incentives to exert effort. In particular, the previous literature has compared student effort under relative and absolute grading systems, but the results are mixed and the implications of the models have not been empirically tested. In this paper, I build a model where…
Model of non-stationary, inhomogeneous turbulence
Bragg, Andrew D.; Kurien, Susan; Clark, Timothy T.
2016-07-08
Here, we compare results from a spectral model for non-stationary, inhomogeneous turbulence (Besnard et al. in Theor Comp Fluid Dyn 8:1–35, 1996) with direct numerical simulation (DNS) data of a shear-free mixing layer (SFML) (Tordella et al. in Phys Rev E 77:016309, 2008). The SFML is used as a test case in which the efficacy of the model closure for the physical-space transport of the fluid velocity field can be tested in a flow with inhomogeneity, without the additional complexity of mean-flow coupling. The model is able to capture certain features of the SFML quite well for intermediate to long times, including the evolution of the mixing-layer width and turbulent kinetic energy. At short times, and for more sensitive statistics such as the generation of the velocity field anisotropy, the model is less accurate. We propose two possible causes for the discrepancies. The first is the local approximation to the pressure-transport and the second is the a priori spherical averaging used to reduce the dimensionality of the solution space of the model, from wavevector to wavenumber space. DNS data are then used to gauge the relative importance of both possible deficiencies in the model.
Postma, Erik; Siitari, Heli; Schwabl, Hubert; Richner, Heinz; Tschirren, Barbara
2014-03-01
Egg components are important mediators of prenatal maternal effects in birds and other oviparous species. Because different egg components can have opposite effects on offspring phenotype, selection is expected to favour their mutual adjustment, resulting in a significant covariation between egg components within and/or among clutches. Here we tested for such correlations between maternally derived yolk immunoglobulins and yolk androgens in great tit (Parus major) eggs using a multivariate mixed-model approach. We found no association between yolk immunoglobulins and yolk androgens within clutches, indicating that within clutches the two egg components are deposited independently. Across clutches, however, there was a significant negative relationship between yolk immunoglobulins and yolk androgens, suggesting that selection has co-adjusted their deposition. Furthermore, an experimental manipulation of ectoparasite load affected patterns of covariance among egg components. Yolk immunoglobulins are known to play an important role in nestling immune defence shortly after hatching, whereas yolk androgens, although having growth-enhancing effects under many environmental conditions, can be immunosuppressive. We therefore speculate that variation in the risk of parasitism may play an important role in shaping optimal egg composition and may lead to the observed pattern of yolk immunoglobulin and yolk androgen deposition across clutches. More generally, our case study exemplifies how multivariate mixed-model methodology presents a flexible tool to not only quantify, but also test patterns of (co)variation across different organisational levels and environments, allowing for powerful hypothesis testing in ecophysiology.
One hundred years of Arctic ice cover variations as simulated by a one-dimensional, ice-ocean model
NASA Astrophysics Data System (ADS)
Hakkinen, S.; Mellor, G. L.
1990-09-01
A one-dimensional ice-ocean model consisting of a second moment, turbulent closure, mixed layer model and a three-layer snow-ice model has been applied to the simulation of Arctic ice mass and mixed layer properties. The results for the climatological seasonal cycle are discussed first and include the salt and heat balance in the upper ocean. The coupled model is then applied to the period 1880-1985, using the surface air temperature fluctuations from Hansen et al. (1983) and from Wigley et al. (1981). The analysis of the simulated large variations of the Arctic ice mass during this period (with similar changes in the mixed layer salinity) shows that the variability in the summer melt determines to a high degree the variability in the average ice thickness. The annual oceanic heat flux from the deep ocean and the maximum freezing rate and associated nearly constant minimum surface salinity flux did not vary significantly interannually. This also implies that the oceanic influence on the Arctic ice mass is minimal for the range of atmospheric variability tested.
ERIC Educational Resources Information Center
Pilten, Gulhiz
2016-01-01
The purpose of the present research is investigating the effects of reciprocal teaching in comprehending expository texts. The research was designed with mixed method. The quantitative dimension of the present research was designed in accordance with pre-test-post-test control group experiment model. The quantitative dimension of the present…
Automated Simultaneous Assembly of Multistage Testlets for a High-Stakes Licensing Examination
ERIC Educational Resources Information Center
Breithaupt, Krista; Hare, Donovan R.
2007-01-01
Many challenges exist for high-stakes testing programs offering continuous computerized administration. The automated assembly of test questions to exactly meet content and other requirements, provide uniformity, and control item exposure can be modeled and solved by mixed-integer programming (MIP) methods. A case study of the computerized…
Impact of Antarctic mixed-phase clouds on climate
Lawson, R. Paul; Gettelman, Andrew
2014-12-08
Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. In this paper, we modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 W m-2, and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. Finally, these sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than -20 °C.
Jet-Surface Interaction - High Aspect Ratio Nozzle Test: Test Summary
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
The Jet-Surface Interaction High Aspect Ratio Nozzle Test was conducted in the Aero-Acoustic Propulsion Laboratory at the NASA Glenn Research Center in the fall of 2015. There were four primary goals specified for this test: (1) extend the current noise database for rectangular nozzles to higher aspect ratios, (2) verify data previously acquired at small scale with data from a larger model, (3) acquire jet-surface interaction noise data suitable for creating and verifying empirical noise models, and (4) investigate the effect of nozzle septa on the jet-mixing and jet-surface interaction noise. These slides give a summary of the test with representative results for each goal.
Row, Jeffrey R.; Knick, Steven T.; Oyler-McCance, Sara J.; Lougheed, Stephen C.; Fedy, Bradley C.
2017-01-01
Dispersal can impact population dynamics and geographic variation, and thus, genetic approaches that can establish which landscape factors influence population connectivity have ecological and evolutionary importance. Mixed models that account for the error structure of pairwise datasets are increasingly used to compare models relating genetic differentiation to pairwise measures of landscape resistance. A model selection framework based on information criteria metrics or explained variance may help disentangle the ecological and landscape factors influencing genetic structure, yet there is currently no consensus on the best protocols. Here, we develop landscape-directed simulations and test a series of replicates that emulate independent empirical datasets of two species with different life history characteristics (greater sage-grouse; eastern foxsnake). We determined that in our simulated scenarios, AIC and BIC were the best model selection indices and that marginal R2 values were biased toward more complex models. The model coefficients for landscape variables generally reflected the underlying dispersal model, with confidence intervals that did not overlap with zero across the entire model set. When we controlled for geographic distance, variables not in the underlying dispersal models (i.e., nontrue) typically overlapped zero. Our study helps establish methods for using linear mixed models to identify the features underlying patterns of dispersal across a variety of landscapes.
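For reference, the information criteria compared in this framework take their standard forms,

$$ \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}, $$

where $k$ is the number of estimated parameters, $n$ the number of observations (here, pairwise comparisons), and $\hat{L}$ the maximized likelihood; smaller values indicate better-supported landscape resistance models.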
NASA Technical Reports Server (NTRS)
Spring, Samuel D.
2006-01-01
This report documents the results of an experimental program conducted on two advanced metallic alloy systems (Rene' 142 directionally solidified (DS) alloy and Rene' N6 single crystal alloy) and the characterization of two distinct internal state variable inelastic constitutive models. The long term objective of the study was to develop a computational life prediction methodology that can integrate the obtained material data. A specialized test matrix for characterizing advanced unified viscoplastic models was specified and conducted. This matrix included strain-controlled tensile tests with intermittent relaxation tests with 2-hr hold times, constant stress creep tests, stepped creep tests, mixed creep and plasticity tests, cyclic temperature creep tests, and tests in which temperature overloads were present to simulate actual operating conditions for validation of the models. The selected internal state variable models were shown to be capable of representing the material behavior exhibited by the experimental results; however, the program ended prior to final validation of the models.
Functional linear models for association analysis of quantitative traits.
Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao
2013-11-01
Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants, common variants, or a combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants, adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher-order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study.
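A schematic form of the fixed effect functional linear model described here (notation assumed for illustration) is

$$ y_i = \alpha_0 + \mathbf{z}_i^{\top}\boldsymbol{\alpha} + \int_a^b X_i(t)\,\beta(t)\,dt + \varepsilon_i, $$

where $X_i(t)$ encodes the genetic variants of individual $i$ as a function of chromosomal position $t$, $\mathbf{z}_i$ are covariates, and $\beta(t)$ is expanded in a smooth basis so that testing association reduces to an F-test of the basis coefficients; the mixed effect version treats the genetic effect as random, leading to the functional kernel score tests mentioned above.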
Simulations of Arctic mixed-phase clouds in forecasts with CAM3 and AM2 for M-PACE
NASA Astrophysics Data System (ADS)
Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Liu, Xiaohong; Ghan, Steven
2008-02-01
Simulations of mixed-phase clouds in forecasts with the NCAR Atmosphere Model version 3 (CAM3) and the GFDL Atmospheric Model version 2 (AM2) for the Mixed-Phase Arctic Cloud Experiment (M-PACE) are performed using analysis data from numerical weather prediction centers. CAM3 significantly underestimates the observed boundary layer mixed-phase cloud fraction and cannot realistically simulate the variations of liquid water fraction with temperature and cloud height due to its oversimplified cloud microphysical scheme. In contrast, AM2 reasonably reproduces the observed boundary layer cloud fraction while its clouds contain much less cloud condensate than CAM3 and the observations. The simulation of the boundary layer mixed-phase clouds and their microphysical properties is considerably improved in CAM3 when a new physically based cloud microphysical scheme is used (CAM3LIU). The new scheme also leads to an improved simulation of the surface and top of the atmosphere longwave radiative fluxes. Sensitivity tests show that these results are not sensitive to the analysis data used for model initialization. Increasing model horizontal resolution helps capture the subgrid-scale features in Arctic frontal clouds but does not help improve the simulation of the single-layer boundary layer clouds. AM2 simulated cloud fraction and LWP are sensitive to the change in cloud ice number concentrations used in the Wegener-Bergeron-Findeisen process while CAM3LIU only shows moderate sensitivity in its cloud fields to this change. This paper shows that the Wegener-Bergeron-Findeisen process is important for these models to correctly simulate the observed features of mixed-phase clouds.
NASA Astrophysics Data System (ADS)
Glazkov, S. A.; Gorbushin, A. R.; Osipova, S. L.; Semenov, A. V.
2016-10-01
The report describes the results of flow field experimental research in the TsAGI T-128 transonic wind tunnel. During the tests, the Mach number, stagnation pressure, test section wall perforation ratio, and the angles between the test section panels and the mixing chamber flaps were varied. Based on the test results, corrections to the free-stream Mach number were determined, accounting for the difference between the flow speed at the model location and in the zone of static pressure measurement on the test section walls; the nonuniformity of the longitudinal velocity component at the model location was quantified; and the optimal positions of the movable test section elements were identified to provide flow field uniformity in the test section and to minimize the test leg drag.
Posttest Analyses of the Steel Containment Vessel Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costello, J.F.; Hessheimer, M.F.; Ludwigsen, J.S.
A high pressure test of a scale model of a steel containment vessel (SCV) was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. This test is part of a program to investigate the response of representative models of nuclear containment structures to pressure loads beyond the design basis accident. The posttest analyses of this test focused on three areas where the pretest analysis effort did not adequately predict the model behavior during the test. These areas are the onset of global yielding, the strain concentrations around the equipment hatch, and the strain concentrations that led to a small tear near a weld relief opening that was not modeled in the pretest analysis.
Larson, Diane L.; Bright, J.B.; Drobney, Pauline; Larson, Jennifer L.; Palaia, Nicholas; Rabie, Paul A.; Vacek, Sara; Wells, Douglas
2013-01-01
Theory has predicted, and many experimental studies have confirmed, that resident plant species richness is inversely related to invasibility. Likewise, potential invaders that are functionally similar to resident plant species are less likely to invade than are those from different functional groups. Neither of these ideas has been tested in the context of an operational prairie restoration. Here, we tested the hypotheses that within tallgrass prairie restorations (1) as seed mix species richness increased, cover of the invasive perennial forb, Canada thistle (Cirsium arvense), would decline; and (2) guilds (both planted and arising from the seedbank) most similar to Canada thistle would have a larger negative effect on it than less similar guilds. Each hypothesis was tested on six former agricultural fields restored to tallgrass prairie in 2005; all were within the tallgrass prairie biome in Minnesota, USA. A mixed model with repeated measures (years) in a randomized block (fields) design indicated that seed mix richness had no effect on cover of Canada thistle. Structural equation models assessing effects of cover of each planted and non-planted guild on cover of Canada thistle in 2006, 2007, and 2010 revealed that planted Asteraceae never had a negative effect on Canada thistle. In contrast, planted cool-season grasses and non-Asteraceae forbs, and many non-planted guilds, had negative effects on Canada thistle cover. We conclude that early, robust establishment of native species, regardless of guild, is of greater importance in resistance to Canada thistle than is similarity of guilds in new prairie restorations.
Bottom quark anti-quark production and mixing in proton anti-proton collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Zhaoou
2003-03-01
The studies of bottom quark-antiquark production in proton-antiproton collisions play an important role in testing perturbative QCD. Measuring the mixing parameter of B mesons imposes constraints on the quark mixing (CKM) matrix and enhances the understanding of the Standard Model. Multi-GeV p$\bar{p}$ colliders produce a significant number of b$\bar{b}$ pairs and thus enable studies in both of these fields. This thesis presents results for the b$\bar{b}$ production cross section from p$\bar{p}$ collisions at √s = 1.8 TeV and the time-integrated average B$\bar{B}$ mixing parameter ($\bar{\chi}$) using high-mass dimuon data collected by CDF during Run IB.
An operational global-scale ocean thermal analysis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, R. M.; Pollak, K.D.; Phoebus, P.A.
1990-04-01
The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum interpolation data assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented, and results of operational tests of global-scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existing operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer.
Design of Training Systems (DOTS) Project: Test and Evaluation of Phase II Models
1976-04-01
When the process being modeled is very much dependent upon human resources, precise requirement formulas are usually unavailable. In this ... mixed integer formulation options. The SGRR, in a sense, is an automation of what is currently being done mentally by instructors and training ... test and evaluation (T&E).
Magnitude and sources of bias in the detection of mixed strain M. tuberculosis infection.
Plazzotta, Giacomo; Cohen, Ted; Colijn, Caroline
2015-03-07
High resolution tests for genetic variation reveal that individuals may simultaneously host more than one distinct strain of Mycobacterium tuberculosis. Previous studies find that this phenomenon, which we will refer to as "mixed infection", may affect the outcomes of treatment for infected individuals and may influence the impact of population-level interventions against tuberculosis. In areas where the incidence of TB is high, mixed infections have been found in nearly 20% of patients; these studies may underestimate the actual prevalence of mixed infection given that tests may not be sufficiently sensitive for detecting minority strains. Specific reasons for failing to detect mixed infections would include low initial numbers of minority strain cells in sputum, stochastic growth in culture and the physical division of initial samples into parts (typically only one of which is genotyped). In this paper, we develop a mathematical framework that models the study designs aimed to detect mixed infections. Using both a deterministic and a stochastic approach, we obtain posterior estimates of the prevalence of mixed infection. We find that the posterior estimate of the prevalence of mixed infection may be substantially higher than the fraction of cases in which it is detected. We characterize this bias in terms of the sensitivity of the genotyping method and the relative growth rates and initial population sizes of the different strains collected in sputum.
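A hedged sketch of the qualitative point (not the paper's deterministic or stochastic framework): if each truly mixed infection is detected only with sensitivity s, the detected fraction underestimates the true prevalence, and a simple grid posterior with a flat prior recovers an estimate near k/(n·s). All numbers below are assumptions for illustration.

import numpy as np
from scipy.stats import binom

n, k, s = 200, 38, 0.6             # patients, detected mixed infections, assumed test sensitivity
pi = np.linspace(0.0, 1.0, 1001)   # candidate true prevalence values
post = binom.pmf(k, n, s * pi)     # likelihood of k detections times a flat prior
post /= post.sum()                 # normalise over the grid
print(pi[np.argmax(post)])         # posterior mode, roughly k/(n*s) ~= 0.32 here, versus k/n = 0.19 detected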
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The debonding of a skin/stringer specimen subjected to tension was studied using three-dimensional volume element modeling and computational fracture mechanics. Mixed mode strain energy release rates were calculated from finite element results using the virtual crack closure technique. The simulations revealed an increase in total energy release rate in the immediate vicinity of the free edges of the specimen. Correlation of the computed mixed-mode strain energy release rates along the delamination front contour with a two-dimensional mixed-mode interlaminar fracture criterion suggested that in spite of peak total energy release rates at the free edge the delamination would not advance at the edges first. The qualitative prediction of the shape of the delamination front was confirmed by X-ray photographs of a specimen taken during testing. The good correlation between prediction based on analysis and experiment demonstrated the efficiency of a mixed-mode failure analysis for the investigation of skin/stiffener separation due to delamination in the adherends. The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is also demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front, was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherends.
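For context, the virtual crack closure technique yields the individual mode components of the strain energy release rate along the delamination front, which are combined as

$$ G_T = G_I + G_{II} + G_{III}, $$

and compared against a mixed-mode toughness that depends on the mode ratio. One common interpolation (shown purely as an illustration; the paper refers to a two-dimensional mixed-mode interlaminar fracture criterion determined from test data) is the Benzeggagh-Kenane form

$$ G_c = G_{Ic} + \bigl(G_{IIc} - G_{Ic}\bigr)\left(\frac{G_{II}+G_{III}}{G_T}\right)^{\eta}, $$

with delamination onset predicted wherever $G_T \ge G_c$ along the front.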
Tsuruta, S; Misztal, I; Strandén, I
2001-05-01
Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
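A minimal sketch of a diagonally preconditioned conjugate gradient solver of the kind described (illustrative only: production animal-breeding codes iterate "on data" without ever forming the coefficient matrix of the mixed-model equations explicitly, and the tolerance and iteration limit here are assumptions):

import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    # A: symmetric positive (semi)definite coefficient matrix of the mixed-model equations
    M_inv = 1.0 / np.diag(A)            # Jacobi (diagonal) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) / np.linalg.norm(b) < tol:   # relative residual stopping rule
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

The stopping rule here uses the relative residual; the study's criterion based on relative differences between left- and right-hand sides plays the same role.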
Kaya, M S; Güçlü, B; Schimmel, M; Akyüz, S
2017-11-01
The unappealing taste of the chewing material and the time-consuming repetitive task in masticatory performance tests using artificial foodstuff may discourage children from performing natural chewing movements. Therefore, the aim was to determine the validity and reliability of a two-colour chewing gum mixing ability test for masticatory performance (MP) assessment in mixed dentition children. Masticatory performance was tested in two groups: systemically healthy fully dentate young adults and children in mixed dentition. Median particle size was assessed using a comminution test, and a two-colour chewing gum mixing ability test was applied for MP analysis. Validity was tested with Pearson correlation, and reliability was tested with intra-class correlation coefficient, Pearson correlation and Bland-Altman plots. Both comminution and two-colour chewing gum mixing ability tests revealed statistically significant MP differences between children (n = 25) and adults (n = 27; both P < 0.01). Pearson correlation between the comminution and two-colour chewing gum mixing ability tests was positive and significant (r = 0.418, P = 0.002). Correlations for interobserver reliability and test-retest values were significant (r = 0.990, P = 0.0001 and r = 0.995, P = 0.0001). Although both methods could discriminate MP differences, the comminution test detected these differences generally over a wider range compared to the two-colour chewing gum mixing ability test. However, considering the high reliability of the results, the two-colour chewing gum mixing ability test can be used to assess masticatory performance in children, especially in non-clinical settings.
NASA Technical Reports Server (NTRS)
Deliyannis, Constantine P.; Ryan, Sean G.; Beers, Timothy C.; Thorburn, Julie A.
1994-01-01
Lithium abundances in halo stars, when interpreted correctly, hold the key to uncovering the primordial Li abundance, Li_p. However, whereas standard stellar evolutionary models imply consistency with standard big bang nucleosynthesis (BBN), models with rotationally induced mixing imply a higher Li_p, possibly implying an inconsistency in standard BBN. We report here Li detections in two cool halo dwarfs, Gmb 1830 and HD 134439. These are the coolest and lowest Li detections in halo dwarfs to date, and are consistent with the metallicity dependence of Li depletion in published models. If the recent report of a beryllium deficiency in Gmb 1830 represents a real Be depletion, then the rotational models would be favored. We propose tests to reduce critical uncertainties.
USDA-ARS?s Scientific Manuscript database
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises t...
ERIC Educational Resources Information Center
Zhou, Hong; Muellerleile, Paige; Ingram, Debra; Wong, Seok P.
2011-01-01
Intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement and psychometrics when a researcher is interested in the relationship among variables of a common class. The formulas for deriving ICCs, or generalizability coefficients, vary depending on which models are specified. This article gives the equations for…
Quasi 1D Modeling of Mixed Compression Supersonic Inlets
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.
2012-01-01
The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code, written in FORTRAN, is quite extensive and complex in terms of the amount of software and the number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.
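Two standard compressible-flow relations of the kind such a quasi-1D inlet model leans on (listed for illustration; the exact forms implemented in the Simulink model may differ) are the isentropic area-Mach number relation used along the ducting and the theta-beta-M relation for the oblique shocks:

$$ \frac{A}{A^*} = \frac{1}{M}\left[\frac{2}{\gamma+1}\left(1+\frac{\gamma-1}{2}M^2\right)\right]^{\frac{\gamma+1}{2(\gamma-1)}}, \qquad \tan\theta = 2\cot\beta\,\frac{M_1^2\sin^2\beta - 1}{M_1^2(\gamma+\cos 2\beta)+2}. $$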
Testing the Porcelli Sawtooth Trigger Module
NASA Astrophysics Data System (ADS)
Bateman, G.; Nave, M. F. F.; Parail, V.
2005-10-01
The Porcelli sawtooth trigger model [1] is implemented as a module for the National Transport Code Collaboration Module Library [2] and is tested using BALDUR and JETTO integrated modeling simulations of JET and other tokamak discharges. Statistical techniques are used to compute the average sawtooth period and the random scatter in sawtooth periods obtained during selected time intervals in the simulations compared with the corresponding statistical measures obtained from experimental data. It is found that the results are affected systematically by the fraction of magnetic reconnection during each sawtooth crash and by the model that is used for transport within the sawtooth mixing region. The physical processes that affect the sawtooth cycle in the simulations are found to involve an interaction among magnetic diffusion, reheating within the sawtooth mixing region, the instabilities that trigger a sawtooth crash in the Porcelli model, and the magnetic reconnection produced by each sawtooth crash. [1] F. Porcelli, et al., Plasma Phys. Control. Fusion 38 (1996) 2163. [2] A.H. Kritz, et al., Comput. Phys. Commun. 164 (2004) 108; http://w3.pppl.gov/NTCC. Supported by DOE DE-FG02-92-ER-54141.
Linear mixed model for heritability estimation that explicitly addresses environmental variation.
Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S
2016-07-05
The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
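A hedged sketch of the two-random-effect idea (not the FaST-LMM implementation; the coordinates, length scale and kernels are assumptions): the phenotypic covariance is modelled as a genomic component plus a spatial environmental component built from a Gaussian radial basis function, plus noise.

import numpy as np

def rbf_env_kernel(coords, length_scale=50.0):
    # coords: (n, 2) array of locations (e.g. km east/north); returns an n x n environmental kernel
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def neg_log_lik(log_var, y, K_geno, K_env):
    # simple (non-optimised) Gaussian likelihood in the three variance components
    s2g, s2e, s2n = np.exp(log_var)                       # log-parametrised to stay positive
    V = s2g * K_geno + s2e * K_env + s2n * np.eye(len(y))
    _, logdet = np.linalg.slogdet(V)
    resid = y - y.mean()
    return 0.5 * (logdet + resid @ np.linalg.solve(V, resid))

Minimising this over the three variance components (e.g. with scipy.optimize.minimize) gives heritability as s2g divided by the total variance; including K_env absorbs spatially structured environmental variance that a genomic-only model would otherwise misattribute to the genetic component.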
A Multi-wavenumber Theory for Eddy Diffusivities: Applications to the DIMES Region
NASA Astrophysics Data System (ADS)
Chen, R.; Gille, S. T.; McClean, J.; Flierl, G.; Griesel, A.
2014-12-01
Climate models are sensitive to the representation of ocean mixing processes. This has motivated recent efforts to collect observations aimed at improving mixing estimates and parameterizations. The US/UK field program Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES), begun in 2009, is providing such estimates upstream of and within the Drake Passage. This region is characterized by topography, and strong zonal jets. In previous studies, mixing length theories, based on the assumption that eddies are dominated by a single wavenumber and phase speed, were formulated to represent the estimated mixing patterns in jets. However, in spite of the success of the single wavenumber theory in some other scenarios, it does not effectively predict the vertical structures of observed eddy diffusivities in the DIMES area. Considering that eddy motions encompass a wide range of wavenumbers, which all contribute to mixing, in this study we formulated a multi-wavenumber theory to predict eddy mixing rates. We test our theory for a domain encompassing the entire Southern Ocean. We estimated eddy diffusivities and mixing lengths from one million numerical floats in a global eddying model. These float-based mixing estimates were compared with the predictions from both the single-wavenumber and the multi-wavenumber theories. Our preliminary results in the DIMES area indicate that, compared to the single-wavenumber theory, the multi-wavenumber theory better predicts the vertical mixing structures in the vast areas where the mean flow is weak; however in the intense jet region, both theories have similar predictive skill.
Wave–turbulence interaction-induced vertical mixing and its effects in ocean and climate models
Qiao, Fangli; Yuan, Yeli; Deng, Jia; Dai, Dejun; Song, Zhenya
2016-01-01
Heated from above, the oceans are stably stratified. Therefore, the performance of general ocean circulation models and climate studies through coupled atmosphere–ocean models depends critically on vertical mixing of energy and momentum in the water column. Many of the traditional general circulation models are based on total kinetic energy (TKE), in which the roles of waves are averaged out. Although theoretical calculations suggest that waves could greatly enhance coexisting turbulence, no field measurements on turbulence have ever validated this mechanism directly. To address this problem, a specially designed field experiment has been conducted. The experimental results indicate that the wave–turbulence interaction-induced enhancement of the background turbulence is indeed the predominant mechanism for turbulence generation and enhancement. Based on this understanding, we propose a new parametrization for vertical mixing as an additive part to the traditional TKE approach. This new result reconfirmed the past theoretical model that had been tested and validated in numerical model experiments and field observations. It firmly establishes the critical role of wave–turbulence interaction effects in both general ocean circulation models and atmosphere–ocean coupled models, which could greatly improve the understanding of the sea surface temperature and water column properties distributions, and hence model-based climate forecasting capability. PMID:26953182
Recent Advances in the LEWICE Icing Model
NASA Technical Reports Server (NTRS)
Wright, William B.; Addy, Gene; Struk, Peter; Bartkus, Tadas
2015-01-01
This paper will describe two recent modifications to the Glenn ICE software. First, a capability for modeling ice crystals and mixed phase icing has been modified based on recent experimental data. Modifications have been made to the ice particle bouncing and erosion model. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to ice crystal ice accretions performed in the NRC Research Altitude Test Facility (RATFac). Second, modifications were made to the run back model based on data and observations from thermal scaling tests performed in the NRC Altitude Icing Tunnel.
Zooming in on neutrino oscillations with DUNE
NASA Astrophysics Data System (ADS)
Srivastava, Rahul; Ternes, Christoph A.; Tórtola, Mariam; Valle, José W. F.
2018-05-01
We examine the capabilities of the DUNE experiment as a probe of the neutrino mixing paradigm. Taking the current status of neutrino oscillations and the design specifications of DUNE, we determine the experiment's potential to probe the structure of neutrino mixing and CP violation. We focus on the poorly determined parameters θ23 and δCP and consider both two and seven years of running. We take various benchmarks as our true values, such as the current preferred values of θ23 and δCP, as well as several theory-motivated choices. We determine quantitatively DUNE's potential to perform a precision measurement of θ23, as well as to test the CP violation hypothesis in a model-independent way. We find that, after running for seven years, DUNE will make a substantial step in the precise determination of these parameters, bringing the predictions of various theories of neutrino mixing to quantitative test.
Evaluation of a metering, mixing, and dispensing system for mixing polysulfide adhesive
NASA Technical Reports Server (NTRS)
Evans, Kurt B.
1989-01-01
Tests were performed to evaluate whether a metered mixing system can mix PR-1221 polysulfide adhesive as well as or better than batch-mixed adhesive; also, to evaluate the quality of meter-mixed PR-1860 and PS-875 polysulfide adhesives. These adhesives are candidate replacements for PR-1221 which will not be manufactured in the future. The following material properties were evaluated: peel strength, specific gravity and adhesive components of mixed adhesives, Shore A hardness, tensile adhesion strength, and flow rate. Finally, a visual test called the butterfly test was performed to observe for bubbles and unmixed adhesive. The results of these tests are reported and discussed.
Mixed Effects Modeling of Morris Water Maze Data: Advantages and Cautionary Notes
ERIC Educational Resources Information Center
Young, Michael E.; Clark, M. H.; Goffus, Andrea; Hoane, Michael R.
2009-01-01
Morris water maze data are most commonly analyzed using repeated measures analysis of variance in which daily test sessions are analyzed as an unordered categorical variable. This approach, however, may lack power, relies heavily on post hoc tests of daily performance that can complicate interpretation, and does not target the nonlinear trends…
An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests
ERIC Educational Resources Information Center
Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.
2013-01-01
Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included the PARSCALE's G[squared],…
NASA Astrophysics Data System (ADS)
Yoon, Seung Chul; Windham, William R.; Ladely, Scott; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Narang, Neelam; Cray, William C.
2012-05-01
We investigated the feasibility of visible and near-infrared (VNIR) hyperspectral imaging for rapid presumptive-positive screening of six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) on spread plates of mixed cultures. Although the traditional culture method is still the "gold standard" for presumptive-positive pathogen screening, it is time-consuming, labor-intensive, not effective in testing large amount of food samples, and cannot completely prevent unwanted background microflora from growing together with target microorganisms on agar media. A previous study was performed using the data obtained from pure cultures individually inoculated on spot and/or spread plates in order to develop multivariate classification models differentiating each colony of the six non-O157 STEC serogroups and to optimize the models in terms of parameters. This study dealt with the validation of the trained and optimized models with a test set of new independent samples obtained from colonies on spread plates of mixed cultures. A new validation protocol appropriate to a hyperspectral imaging study for mixed cultures was developed. One imaging experiment with colonies obtained from two serial dilutions was performed. A total of six agar plates were prepared, where O45, O111 and O121 serogroups were inoculated into all six plates and each of O45, O103 and O145 serogroups was added into the mixture of the three common bacterial cultures. The number of colonies grown after 24-h incubation was 331 and the number of pixels associated with the grown colonies was 16,379. The best model found from this validation study was based on pre-processing with standard normal variate and detrending (SNVD), first derivative, spectral smoothing, and k-nearest neighbor classification (kNN, k=3) of scores in the principal component subspace spanned by 6 principal components. The independent testing results showed 95% overall detection accuracy at pixel level and 97% at colony level. The developed model was proven to be still valid even for the independent samples although the size of a test set was small and only one experiment was performed. This study was an important first step in validating and updating multivariate classification models for rapid screening of ground beef samples contaminated by non-O157 STEC pathogens using hyperspectral imaging.
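A hedged sketch of the reported processing chain (SNV with detrending, first derivative, smoothing, six-component principal component scores, and kNN with k=3); implementation details such as the Savitzky-Golay window are assumptions rather than values taken from the study.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

def snv_detrend(X):
    # standard normal variate, then remove a per-spectrum linear baseline (detrending)
    X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    idx = np.arange(X.shape[1])
    coef = np.polyfit(idx, X.T, deg=1)                    # slope and intercept per spectrum
    return X - (coef[0][:, None] * idx + coef[1][:, None])

def deriv_smooth(X):
    # first derivative with smoothing, both via a Savitzky-Golay filter (window assumed)
    return savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

model = make_pipeline(
    FunctionTransformer(snv_detrend),
    FunctionTransformer(deriv_smooth),
    PCA(n_components=6),
    KNeighborsClassifier(n_neighbors=3),
)
# model.fit(training_pixel_spectra, training_serogroup_labels)
# predicted = model.predict(colony_pixel_spectra)  # pixel-level calls, then a per-colony decision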
Davidov, Ori; Rosen, Sophia
2011-04-01
In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds, which reflect hearing acuity, will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation/conditional maximization either (ECME) algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean square error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypothesis testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.
Heavy neutrino mixing and single production at linear collider
NASA Astrophysics Data System (ADS)
Gluza, J.; Maalampi, J.; Raidal, M.; Zrałek, M.
1997-02-01
We study the single production of heavy neutrinos via the processes e⁻e⁺ → νN and e⁻γ → W⁻N at future linear colliders. As a basis for our considerations we take a wide class of models, both with vanishing and non-vanishing left-handed Majorana neutrino mass matrix mL. We perform a model-independent analysis of the existing experimental data and find connections between the characteristics of heavy neutrinos (masses, mixings, CP eigenvalues) and the mL parameters. We show that with the present experimental constraints, heavy neutrino masses almost up to the collision energy can be tested in the future experiments.
NASA Technical Reports Server (NTRS)
Struk, Peter; Tsao, Jen-Ching; Bartkus, Tadas
2017-01-01
This paper describes plans and preliminary results for using the NASA Propulsion Systems Lab (PSL) to experimentally study the fundamental physics of ice-crystal ice accretion. NASA is evaluating whether this facility, in addition to full-engine and motor-driven-rig tests, can be used for more fundamental ice-accretion studies that simulate the different mixed-phase icing conditions along the core flow passage of a turbo-fan engine compressor. The data from such fundamental accretion tests will be used to help develop and validate models of the accretion process. This paper presents data from some preliminary testing performed in May 2015 which examined how a mixed-phase cloud could be generated at PSL using evaporative cooling in a warmer-than-freezing environment.
NASA Technical Reports Server (NTRS)
Struk, Peter; Tsao, Jen-Ching; Bartkus, Tadas
2016-01-01
This presentation accompanies the paper titled Plans and Preliminary Results of Fundamental Studies of Ice Crystal Icing Physics in the NASA Propulsion Systems Laboratory. NASA is evaluating whether PSL, in addition to full-engine and motor-driven-rig tests, can be used for more fundamental ice-accretion studies that simulate the different mixed-phase icing conditions along the core flow passage of a turbo-fan engine compressor. The data from such fundamental accretion tests will be used to help develop and validate models of the accretion process. This presentation (and accompanying paper) presents data from some preliminary testing performed in May 2015 which examined how a mixed-phase cloud could be generated at PSL using evaporative cooling in a warmer-than-freezing environment.
NASA Astrophysics Data System (ADS)
Buchholz, Max; Grossmann, Frank; Ceotto, Michele
2018-03-01
We present and test an approximate method for the semiclassical calculation of vibrational spectra. The approach is based on the mixed time-averaging semiclassical initial value representation method, which is simplified to a form that contains a filter to remove contributions from approximately harmonic environmental degrees of freedom. This filter comes at no additional numerical cost, and it has no negative effect on the accuracy of peaks from the anharmonic system of interest. The method is successfully tested for a model Hamiltonian and then applied to the study of the frequency shift of iodine in a krypton matrix. Using a hierarchic model with up to 108 normal modes included in the calculation, we show how the dynamical interaction between iodine and krypton yields results for the lowest excited iodine peaks that reproduce experimental findings to a high degree of accuracy.
Physiological effects of diet mixing on consumer fitness: a meta-analysis.
Lefcheck, Jonathan S; Whalen, Matthew A; Davenport, Theresa M; Stone, Joshua P; Duffy, J Emmett
2013-03-01
The degree of dietary generalism among consumers has important consequences for population, community, and ecosystem processes, yet the effects on consumer fitness of mixing food types have not been examined comprehensively. We conducted a meta-analysis of 161 peer-reviewed studies reporting 493 experimental manipulations of prey diversity to test whether diet mixing enhances consumer fitness based on the intrinsic nutritional quality of foods and consumer physiology. Averaged across studies, mixed diets conferred significantly higher fitness than the average of single-species diets, but not the best single prey species. More than half of individual experiments, however, showed maximal growth and reproduction on mixed diets, consistent with the predicted benefits of a balanced diet. Mixed diets including chemically defended prey were no better than the average prey type, opposing the prediction that a diverse diet dilutes toxins. Finally, mixed-model analysis showed that the effect of diet mixing was stronger for herbivores than for higher trophic levels. The generally weak evidence for the nutritional benefits of diet mixing in these primarily laboratory experiments suggests that diet generalism is not strongly favored by the inherent physiological benefits of mixing food types, but is more likely driven by ecological and environmental influences on consumer foraging.
Dark gauge bosons: LHC signatures of non-abelian kinetic mixing
Argüelles, Carlos A.; He, Xiao-Gang; Ovanesyan, Grigory; ...
2017-04-20
We consider non-abelian kinetic mixing between the Standard Model and a dark sector gauge group associated with the presence of a scalar triplet. The magnitude of the resulting dark photon coupling ϵ is determined by the ratio of the triplet vacuum expectation value, constrained by electroweak precision tests, to the scale Λ of the effective theory. The corresponding effective operator Wilson coefficient can be large enough to allow for a distinctive LHC dark photon phenomenology while accommodating null results for dark photon searches. After outlining the possible LHC signatures, we illustrate by recasting current ATLAS dark photon results into the non-abelian mixing context.
Analysis of messy data with heteroscedastic in mean models
NASA Astrophysics Data System (ADS)
Trianasari, Nurvita; Sumarni, Cucu
2016-02-01
In data analysis, we are often faced with data that do not meet some of the standard assumptions; such data are often called messy data. This problem is a consequence of outliers in the data, which bias the estimates or inflate the estimation error. To analyze messy data, there are three approaches: standard analysis, data transformation, and non-standard methods of analysis. Simulations were conducted to determine the comparative performance of three test procedures for means when the model variance is not homogeneous. Each scenario was simulated 500 times. We then performed the comparison-of-means test using three methods: the Welch test, mixed models, and the Welch test on ranks (Welch-r). Data generation was done in R version 3.1.2. Based on the simulation results, the three methods can be used in both the normal and non-normal (homoscedastic) cases. All three methods work very well on balanced or unbalanced data when there is no violation of the homogeneity-of-variance assumption. For balanced data, the three methods still show excellent performance despite violation of the homogeneity-of-variance assumption, even when the degree of heterogeneity is high; the power of the tests stays above 90 percent and is best for the Welch method (98.4%) and the Welch-r method (97.8%). For unbalanced data, the Welch method is very good in the case of moderate, positively paired heterogeneity, with 98.2% power. The mixed models method is very good in the case of high, negatively paired heterogeneity. The Welch-r method works very well in both cases. However, if the degree of heterogeneity of variance is very high, the power of all methods decreases, especially for the mixed models method. The methods that still work well enough (power above 50%) are the Welch-r method (62.6%) and the Welch method (58.6%) in the case of balanced data. If the data are unbalanced, the Welch-r method works well enough in the cases of highly heterogeneous positive or negative pairings, with power of 68.8% and 51%, respectively. The Welch method performs well enough only in the case of highly heterogeneous positive pairings, with power of 64.8%, while the mixed models method is good in the case of highly heterogeneous negative pairings, with 54.6% power. In general, when variances are not homogeneous, the Welch method applied to ranked data (Welch-r) performs better than the other methods.
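The following sketch (not the authors' R simulation code) illustrates, in Python, the three styles of comparison discussed above for two heteroscedastic groups: the standard pooled-variance test, the Welch test, and a Welch test applied to ranks (Welch-r); group sizes, means, and variances are assumptions.

```python
# Minimal two-group illustration of standard vs. Welch vs. Welch-on-ranks tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# heteroscedastic, unbalanced groups (sizes and variances are invented)
a = rng.normal(loc=0.0, scale=1.0, size=20)
b = rng.normal(loc=0.8, scale=3.0, size=50)

t_pooled = stats.ttest_ind(a, b, equal_var=True)    # standard pooled-variance analysis
t_welch  = stats.ttest_ind(a, b, equal_var=False)   # Welch correction

# Welch-r: rank the pooled sample, then apply the Welch test to the ranks
ranks = stats.rankdata(np.concatenate([a, b]))
t_welch_r = stats.ttest_ind(ranks[:a.size], ranks[a.size:], equal_var=False)

print(t_pooled.pvalue, t_welch.pvalue, t_welch_r.pvalue)
```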
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batha, Steven H.; Fincke, James R.; Schmitt, Mark J.
2012-06-07
LANL has two projects in C10.2: the Defect-Induced Mix Experiment (DIME) (ongoing, several runs at Omega; NIF shots this summer) and Shock/Shear (tested at Omega for two years; NIF shots in the second half of FY13). Each project is jointly funded by C10.2, other C10 MTEs, and Science Campaigns. DIME is investigating 4π and feature-induced mix in spherically convergent ICF implosions by imaging the mix layer. DIME prepared for NIF by demonstrating its PDD mix platform on Omega, including imaging of mid-Z doped layers and defects. DIME in FY13 will focus on PDD symmetry-dependent mix and moving burn into the mix region for validation of mix/burn models. Re-Shock and Shear are two laser-driven experiments designed to study the turbulent mixing of materials. In FY-2012, 43 shear and re-shock experimental shots were executed on the OMEGA laser and a complete time history was obtained for both. The FY-2013 goal is to transition the experiment to NIF, where the larger scale will provide a longer time period for mix layer growth.
NASA Astrophysics Data System (ADS)
McPhee, J.; William, Y. W.
2005-12-01
This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. Reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of a genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.
Zhou, Lan; Yang, Jin-Bo; Liu, Dan; Liu, Zhan; Chen, Ying; Gao, Bo
2008-06-01
To analyze the possible damage to the remaining tooth and composite restorations when various mixing ratios of base cements were used, the elastic modulus and Poisson's ratio of glass-ionomer Vitrebond and self-cured calcium hydroxide Dycal were tested at mixing ratios of 1:1, 3:4, and 4:3. Micro-CT was used to scan the first mandibular molar, and a three-dimensional finite element model of the first permanent mandibular molar with a class I cavity was established. The stress in the tooth structure, composite, and base cement under physical load was analyzed for the different mixing ratios of base cement. The elastic modulus of the base cement differed significantly among mixing ratios. The magnitude and location of stress in the restored tooth showed no differences when the mixing ratios of Vitrebond and Dycal were changed. The peak stress and its spread area in the model with Dycal were greater than with Vitrebond. Changing the mixing ratio of the base cement can partially influence its mechanical character, but makes no difference to the magnitude and location of stress in the restored tooth. During the treatment of deep caries, a base cement whose elastic modulus is close to those of the dentin and the restoration should be chosen to avoid fracture of the tooth or restoration.
Williams, L. Keoki; Buu, Anne
2017-01-01
We propose a multivariate genome-wide association test for mixed continuous, binary, and ordinal phenotypes. A latent response model is used to estimate the correlation between phenotypes with different measurement scales so that the empirical distribution of the Fisher's combination statistic under the null hypothesis is estimated efficiently. The simulation study shows that our proposed correlation estimation methods have high levels of accuracy. More importantly, our approach conservatively estimates the variance of the test statistic so that the type I error rate is controlled. The simulation also shows that the proposed test maintains power at a level very close to that of the ideal analysis based on known latent phenotypes while controlling the type I error. In contrast, conventional approaches, such as dichotomizing all observed phenotypes or treating them as continuous variables, could either reduce the power or employ a linear regression model unfit for the data. Furthermore, the statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that conducting a multivariate test on multiple phenotypes can increase the power of identifying markers that may not otherwise be chosen using marginal tests. The proposed method also offers a new approach to analyzing the Fagerström Test for Nicotine Dependence as multivariate phenotypes in genome-wide association studies.
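As a point of reference for the method above, the sketch below computes a plain Fisher's combination statistic from per-phenotype p-values; under independence it follows a chi-squared law with 2k degrees of freedom, whereas the proposed test replaces that reference distribution with an empirically estimated null to handle correlated phenotypes (not reproduced here). The example p-values are invented for illustration.

```python
# Plain Fisher's combination of k marginal p-values.
import numpy as np
from scipy import stats

def fisher_combination(pvalues):
    pvalues = np.asarray(pvalues)
    statistic = -2.0 * np.log(pvalues).sum()
    df = 2 * pvalues.size            # chi-squared df under independence
    return statistic, stats.chi2.sf(statistic, df)

# e.g. marginal p-values for one SNP against a continuous, a binary,
# and an ordinal phenotype (made-up values)
print(fisher_combination([0.03, 0.20, 0.008]))
```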
Plasma-enhanced mixing and flameholding in supersonic flow
Firsov, Alexander; Savelkin, Konstantin V.; Yarantsev, Dmitry A.; Leonov, Sergey B.
2015-01-01
The results of experimental study of plasma-based mixing, ignition and flameholding in a supersonic model combustor are presented in the paper. The model combustor has a length of 600 mm and cross section of 72 mm width and 60 mm height. The fuel is directly injected into supersonic airflow (Mach number M=2, static pressure Pst=160–250 Torr) through wall orifices. Two series of tests are focused on flameholding and mixing correspondingly. In the first series, the near-surface quasi-DC electrical discharge is generated by flush-mounted electrodes at electrical power deposition of Wpl=3–24 kW. The scope includes parametric study of ignition and flame front dynamics, and comparison of three schemes of plasma generation: the first and the second layouts examine the location of plasma generators upstream and downstream from the fuel injectors. The third pattern follows a novel approach of combined mixing/ignition technique, where the electrical discharge distributes along the fuel jet. The last pattern demonstrates a significant advantage in terms of flameholding limit. In the second series of tests, a long discharge of submicrosecond duration is generated across the flow and along the fuel jet. A gasdynamic instability of thermal cavity developed after a deposition of high-power density in a thin plasma filament promotes the air–fuel mixing. The technique studied in this work has weighty potential for high-speed combustion applications, including cold start/restart of scramjet engines and support of transition regime in dual-mode scramjet and at off-design operation.
A Vignette (User's Guide) for “An R Package for Statistical ...
StatCharrms is a graphical user front-end for ease of use in analyzing data generated from OCSPP 890.2200, the Medaka Extended One Generation Reproduction Test (MEOGRT), and OCSPP 890.2300, the Larval Amphibian Gonad Development Assay (LAGDA). The analyses StatCharrms is capable of performing are: the Rao-Scott adjusted Cochran-Armitage test for trend By Slices (RSCABS), a standard Cochran-Armitage test for trend By Slices (SCABS), a mixed effects Cox proportional hazards model, the Jonckheere-Terpstra step-down trend test, the Dunn test, one-way ANOVA, weighted ANOVA, mixed effects ANOVA, repeated measures ANOVA, and the Dunnett test. This document provides a User's Manual (termed a Vignette by the Comprehensive R Archive Network (CRAN)) for the previously created R-code tool called StatCharrms (Statistical analysis of Chemistry, Histopathology, and Reproduction endpoints using Repeated measures and Multi-generation Studies). The StatCharrms R code has been publicly available directly from EPA staff since the approval of OCSPP 890.2200 and 890.2300, and is now publicly available on CRAN.
Rayleigh Light Scattering for Concentration Measurements in Turbulent Flows
NASA Technical Reports Server (NTRS)
Pitts, William M.
1996-01-01
Despite intensive research over a number of years, an understanding of scalar mixing in turbulent flows remains elusive. An understanding is required because turbulent mixing has a pivotal role in a wide variety of natural and technologically important processes. As an example, the mixing and transport of pollutants in the atmosphere and in bodies of water are often dependent on turbulent mixing processes. Turbulent mixing is also central to turbulent combustion, which underlies most hydrocarbon energy use in modern societies as well as unwanted fire behavior. Development of models for combusting flows is therefore crucial; however, an understanding of scalar mixing is required before useful models of turbulent mixing and, ultimately, turbulent combustion can be developed. An important subset of turbulent flows is axisymmetric turbulent jets and plumes because they are relatively simple to generate, and because they provide an appropriate test bed for the development of general theories of turbulent mixing which can be applied to more complex geometries and flows. This paper focuses on a number of experimental techniques which have been developed at the National Institute of Standards and Technology for measuring concentration in binary axisymmetric turbulent jets. In order to demonstrate the value of these diagnostics, some of the more important results from earlier and on-going investigations are summarized. Topics addressed include the similarity behavior of variable density axisymmetric jets, the behavior of absolutely unstable axisymmetric helium jets, and the role of large scale structures and scalar dissipation in these flows.
Calibrating and testing a gap model for simulating forest management in the Oregon Coast Range
Robert J. Pabst; Matthew N. Goslin; Steven L. Garman; Thomas A. Spies
2008-01-01
The complex mix of economic and ecological objectives facing today's forest managers necessitates the development of growth models with a capacity for simulating a wide range of forest conditions while producing outputs useful for economic analyses. We calibrated the gap model ZELIG to simulate stand level forest development in the Oregon Coast Range as part of a...
Esmedere Eren, Sevim; Karakukcu, Cigdem; Ciraci, Mehmet Z; Ustundag, Yasemin; Karakukcu, Musa
2018-06-01
The mixing test is used to evaluate whether a prolonged activated partial thromboplastin time (APTT) is due to an inhibitor or a factor deficiency. The coagulation reaction is demonstrated with APTT derivative curves on the ACL TOP series. We aimed to determine the utility of APTT derivative curves in the mixing test process. The plasma of a patient was mixed with normal plasma in a 1 : 1 ratio and an APTT assay was performed with SynthASil reagent. We observed roughness, biphasic and shoulder patterns in derivative curves during the mixing test. An extended laboratory investigation revealed a positive lupus anticoagulant and low factor XI and IX activities. Along with mixing test cut-off limits, we recommend analysing changes in APTT derivative curves to minimize erroneous interpretations of the mixing test. Derivative curves display either a normalizing pattern in factor deficiencies or an atypical pattern in the presence of lupus anticoagulant.
Nguyen, Huy Truong; Lee, Dong-Kyu; Choi, Young-Geun; Min, Jung-Eun; Yoon, Sang Jun; Yu, Yun-Hyun; Lim, Johan; Lee, Jeongmi; Kwon, Sung Won; Park, Jeong Hill
2016-05-30
Ginseng, the root of Panax ginseng, has long been the subject of adulteration, especially regarding its origins. Here, 60 ginseng samples from Korea and China initially displayed similar genetic makeup when investigated by a DNA-based technique with 23 chloroplast intergenic space regions. Hence, ¹H NMR-based metabolomics with orthogonal projections to latent structures discriminant analysis (OPLS-DA) was applied and successfully distinguished between samples from the two countries using seven primary metabolites as discrimination markers. Furthermore, to recreate adulteration in reality, 21 mixed samples of various Korea/China ratios were tested with the newly built OPLS-DA model. The results showed satisfactory separation according to the proportion of mixing. Finally, a procedure for assessing the mixing proportion of intentionally blended samples that achieved good predictability (adjusted R² = 0.8343) was constructed, thus verifying its promising application to quality control of herbal foods by pointing out the possible mixing ratio of falsified samples.
Turbulence Modeling Validation, Testing, and Development
NASA Technical Reports Server (NTRS)
Bardina, J. E.; Huang, P. G.; Coakley, T. J.
1997-01-01
The primary objective of this work is to provide accurate numerical solutions for selected flow fields and to compare and evaluate the performance of selected turbulence models with experimental results. Four popular turbulence models have been tested and validated against experimental data from ten turbulent flows. The models are: (1) the two-equation k-omega model of Wilcox, (2) the two-equation k-epsilon model of Launder and Sharma, (3) the two-equation k-omega/k-epsilon SST model of Menter, and (4) the one-equation model of Spalart and Allmaras. The flows investigated are five free shear flows consisting of a mixing layer, a round jet, a plane jet, a plane wake, and a compressible mixing layer; and five boundary layer flows consisting of an incompressible flat plate, a Mach 5 adiabatic flat plate, a separated boundary layer, an axisymmetric shock-wave/boundary layer interaction, and an RAE 2822 transonic airfoil. The experimental data for these flows are well established and have been extensively used in model development. The results are shown in the following four sections: Part A describes the equations of motion and boundary conditions; Part B describes the model equations, constants, parameters, boundary conditions, and numerical implementation; and Parts C and D describe the experimental data and the performance of the models in the free-shear flows and the boundary layer flows, respectively.
NASA Astrophysics Data System (ADS)
Yu, Xiao-Ying; Barnett, J. Matthew; Amidan, Brett G.; Recknagle, Kurtis P.; Flaherty, Julia E.; Antonio, Ernest J.; Glissmeyer, John A.
2018-03-01
The ANSI/HPS N13.1-2011 standard requires gaseous tracer uniformity testing for sampling associated with stacks used in radioactive air emissions. Sulfur hexafluoride (SF6), a greenhouse gas with a high global warming potential, has long been the tracer gas used in such testing. To reduce the impact of gas tracer tests on the environment, nitrous oxide (N2O) was evaluated as a potential replacement for SF6. The physical evaluation included the development of a test plan to record the percent coefficient of variation and the percent maximum deviation between the two gases while considering variables such as fan configuration, injection position, and flow rate. Statistical power was calculated to determine how many sample sets were needed, and computational fluid dynamics modeling was utilized to estimate overall mixing in stacks. Results show there are no significant differences between the behaviors of the two gases, and SF6 modeling corroborated the N2O test results. Although, in principle, all tracer gases should behave in an identical manner for measuring mixing within a stack, the series of physical tests guided by statistics was performed to demonstrate the equivalence of N2O testing to SF6 testing in the context of stack qualification tests. The results demonstrate that N2O is a viable choice, leading to a fourfold reduction in global warming impacts for future similar compliance-driven testing.
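The two uniformity metrics named above can be computed from tracer concentrations measured across the stack cross-section as in the following sketch; the sample values are invented, and the ANSI/HPS N13.1 acceptance criteria themselves are not encoded.

```python
# Percent coefficient of variation and percent maximum deviation over a set of
# tracer concentration measurements (values below are illustrative only).
import numpy as np

def uniformity_metrics(concentrations):
    c = np.asarray(concentrations, dtype=float)
    mean = c.mean()
    percent_cov = 100.0 * c.std(ddof=1) / mean               # % coefficient of variation
    percent_max_dev = 100.0 * np.abs(c - mean).max() / mean  # % maximum deviation
    return percent_cov, percent_max_dev

print(uniformity_metrics([98.2, 101.5, 99.7, 102.3, 97.9, 100.4]))
```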
Arctic Ocean Model Intercomparison Using Sound Speed
NASA Astrophysics Data System (ADS)
Dukhovskoy, D. S.; Johnson, M. A.
2002-05-01
The monthly and annual means from three Arctic ocean - sea ice climate model simulations are compared for the period 1979-1997. Sound speed is used to integrate model outputs of temperature and salinity along a section between Barrow and Franz Josef Land. A statistical approach is used to test for differences among the three models for two basic data subsets. We integrated and then analyzed an upper layer between 2 m and 50 m, and also a deep layer from 500 m to the bottom. The deep layer is characterized by low time variability. No high-frequency signals appear in the deep layer, having been filtered out in the upper layer. There is no seasonal signal in the deep layer, and the monthly means oscillate only insignificantly about the long-period mean. For the deep ocean, the long-period mean can be considered quasi-constant, at least within the 19-year period of our analysis. Thus we assumed that the deep ocean would be the best choice for comparing the means of the model outputs. The upper (mixed) layer was chosen to contrast the deep layer dynamics. There are distinct seasonal and interannual signals in the sound speed time series in this layer. The mixed layer is a major link in the ocean - air interaction mechanism. Thus, different mean states of the upper layer in the models might cause different responses in other components of the Arctic climate system. The upper layer also strongly reflects any differences in atmospheric forcing. To compare data from the three models we have used a one-sample t-test for the population mean, the Wilcoxon one-sample signed-rank test (when the requirement of normality of the tested data is violated), and the one-way ANOVA method and F-test to verify our hypothesis that the model outputs have the same mean sound speed. The different statistical approaches have shown that all models have different mean characteristics for the deep and upper layers of the Arctic Ocean.
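A minimal illustration of the statistical machinery mentioned above, applied to synthetic stand-ins for the integrated sound-speed series; the reference value, sample sizes, and model statistics are assumptions.

```python
# One-sample t test, Wilcoxon signed-rank test, and one-way ANOVA (F test)
# on invented monthly-mean sound-speed series for three models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
model_a = rng.normal(1462.0, 0.3, size=228)   # 19 years of monthly means
model_b = rng.normal(1462.2, 0.3, size=228)
model_c = rng.normal(1461.9, 0.4, size=228)

reference = 1462.0
print(stats.ttest_1samp(model_a, reference))      # one-sample t test on the mean
print(stats.wilcoxon(model_a - reference))        # signed-rank alternative (non-normal data)
print(stats.f_oneway(model_a, model_b, model_c))  # equal-means F test across models
```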
Kronholm, Scott C.; Capel, Paul D.
2016-01-01
Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (the two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that are reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
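For context, the sketch below shows the standard two-component tracer mass balance that ratio-based variants such as the TRaMM build on; the time-variable, two-tracer estimation of the fastflow end-member that defines the TRaMM itself is not reproduced, and the end-member values are assumptions.

```python
# Two-component hydrograph separation: Q = Qs + Qf and C*Q = Cs*Qs + Cf*Qf,
# solved for the fastflow fraction of total streamflow.
import numpy as np

def fastflow_fraction(c_stream, c_slow, c_fast):
    """Fraction of streamflow attributed to the fastflow end-member."""
    frac = (c_stream - c_slow) / (c_fast - c_slow)
    return np.clip(frac, 0.0, 1.0)

# e.g. specific conductance (uS/cm) time series with assumed end-members
c_stream = np.array([410.0, 380.0, 290.0, 240.0, 310.0, 395.0])
frac_fast = fastflow_fraction(c_stream, c_slow=420.0, c_fast=150.0)
print(frac_fast)   # multiply by measured discharge to get fastflow discharge
```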
Fermionic extensions of the Standard Model in light of the Higgs couplings
NASA Astrophysics Data System (ADS)
Bizot, Nicolas; Frigerio, Michele
2016-01-01
As the Higgs boson properties settle, the constraints on the Standard Model extensions tighten. We consider all possible new fermions that can couple to the Higgs, inspecting sets of up to four chiral multiplets. We confront them with direct collider searches, electroweak precision tests, and current knowledge of the Higgs couplings. The focus is on scenarios that may depart from the decoupling limit of very large masses and vanishing mixing, as they offer the best prospects for detection. We identify exotic chiral families that may receive a mass from the Higgs only, still in agreement with the hγγ signal strength. A mixing θ between the Standard Model and non-chiral fermions induces order θ² deviations in the Higgs couplings. The mixing can be as large as θ ∼ 0.5 in case of custodial protection of the Z couplings or accidental cancellation in the oblique parameters. We also notice some intriguing effects for much smaller values of θ, especially in the lepton sector. Our survey includes a number of unconventional pairs of vector-like and Majorana fermions coupled through the Higgs, that may induce order one corrections to the Higgs radiative couplings. We single out the regions of parameters where hγγ and hgg are unaffected, while the hγZ signal strength is significantly modified, turning a few times larger than in the Standard Model in two cases. The second run of the LHC will effectively test most of these scenarios.
Assessing and Upgrading Ocean Mixing for the Study of Climate Change
NASA Astrophysics Data System (ADS)
Howard, A. M.; Fells, J.; Lindo, F.; Tulsee, V.; Canuto, V.; Cheng, Y.; Dubovikov, M. S.; Leboissetier, A.
2016-12-01
Climate is critical. Climate variability affects us all; Climate Change is a burning issue. Droughts, floods, other extreme events, and Global Warming's effects on these and problems such as sea-level rise and ecosystem disruption threaten lives. Citizens must be informed to make decisions concerning climate such as "business as usual" vs. mitigating emissions to keep warming within bounds. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. To make useful predictions we must realistically model each component of the climate system, including the ocean, whose critical role includes transporting and storing heat and dissolved CO2. We need physically based parameterizations of key ocean processes that can't be put explicitly in a global climate model, e.g. vertical and lateral mixing. The NASA-GISS turbulence group uses theory to model mixing including: 1) a comprehensive scheme for small scale vertical mixing, including convection and shear, internal waves and double-diffusion, and bottom tides; 2) a new parameterization for the lateral and vertical mixing by mesoscale eddies. For better understanding we write our own programs. To assess the modelling, MATLAB programs visualize and calculate statistics, including means, standard deviations and correlations, on NASA-GISS OGCM output with different mixing schemes and help us study drift from observations. We also try to upgrade the schemes, e.g. the bottom tidal mixing parameterization's roughness, calculated from high resolution topographic data using Gaussian weighting functions with cut-offs. We study the effects of their parameters to improve them. A FORTRAN program extracts topography data subsets of manageable size for a MATLAB program, tested on idealized cases, to visualize and calculate roughness. Students are introduced to modeling a complex system, gain a deeper appreciation of climate science, programming skills and familiarity with MATLAB, while furthering climate science by improving our mixing schemes. We are incorporating climate research into our college curriculum. The PI is both a member of the turbulence group at NASA-GISS and an associate professor at Medgar Evers College of CUNY, an urban minority serving institution in central Brooklyn. Supported by NSF Award AGS-1359293.
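One plausible reading of the roughness calculation described above is a local, Gaussian-weighted standard deviation of high-resolution topography; the sketch below shows that interpretation with an assumed filter width and a toy terrain field (it is not the group's FORTRAN/MATLAB code).

```python
# Sub-grid "roughness" as the local standard deviation of height under a
# truncated Gaussian weighting window (filter width and cut-off are assumed).
import numpy as np
from scipy.ndimage import gaussian_filter

def subgrid_roughness(topo, sigma_cells=20.0, truncate=3.0):
    mean = gaussian_filter(topo, sigma_cells, truncate=truncate)
    mean_sq = gaussian_filter(topo**2, sigma_cells, truncate=truncate)
    return np.sqrt(np.maximum(mean_sq - mean**2, 0.0))  # local std of height

rng = np.random.default_rng(0)
toy_terrain = np.cumsum(np.cumsum(rng.normal(size=(200, 200)), axis=0), axis=1)
print(subgrid_roughness(toy_terrain).mean())
```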
Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ
Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published non-linear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that the pore-scale models were all able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
NASA Astrophysics Data System (ADS)
Yang, Zhengjun; Wang, Fujun; Zhou, Peijian
2012-09-01
Current research on large eddy simulation (LES) of turbulent flow in pumps mainly concentrates on applying conventional subgrid-scale (SGS) models to simulate the turbulent flow and obtain the flow field in the pump. The selection of the SGS model is usually not considered seriously, so the accuracy and efficiency of the simulation cannot be ensured. Three SGS models, the Smagorinsky-Lilly model, the dynamic Smagorinsky model, and the dynamic mixed model, are compared using the commercial CFD code Fluent combined with its user-defined functions. The simulations are performed for the turbulent flow in a centrifugal pump impeller. The simulation results indicate that the mean flows predicted by the three SGS models agree well with experimental data from detailed measurements of the flow inside the rotating passages of a six-bladed shrouded centrifugal pump impeller, obtained using particle image velocimetry (PIV) and laser Doppler velocimetry (LDV). The comparison shows that the dynamic mixed model gives the most accurate results for the mean flow in the centrifugal pump impeller. The SGS stress of the dynamic mixed model is decomposed into a scale-similar part and an eddy-viscosity part. The scale-similar part of the SGS stress plays a significant role in high-curvature regions, such as the leading edge and trailing edge of the pump blade. It is also found that the dynamic mixed model is more adaptive for computing turbulence in the pump impeller. The results presented are useful for improving the computational accuracy and efficiency of LES for centrifugal pumps, and provide an important reference for carrying out simulations in similar fluid machinery.
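As background for the SGS models compared above, the sketch below evaluates the basic Smagorinsky-Lilly eddy viscosity on a uniform 2-D grid; the dynamic and dynamic mixed models additionally determine the coefficient (and a scale-similar stress) from test-filtered fields, which is not shown, and the constant and grid handling are assumptions.

```python
# Smagorinsky-Lilly eddy viscosity: nu_t = (Cs * Delta)^2 * |S|,
# with |S| the resolved strain-rate magnitude on a uniform 2-D grid.
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, cs=0.17):
    # gradients along the two grid directions (u, v indexed as [x, y])
    dudx, dudy = np.gradient(u, dx, dy)
    dvdx, dvdy = np.gradient(v, dx, dy)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    delta = np.sqrt(dx * dy)            # grid filter width
    return (cs * delta) ** 2 * strain_mag
```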
Bhattacharyya, Onil; Schull, Michael; Shojania, Kaveh; Stergiopoulos, Vicky; Naglie, Gary; Webster, Fiona; Brandao, Ricardo; Mohammed, Tamara; Christian, Jennifer; Hawker, Gillian; Wilson, Lynn; Levinson, Wendy
2016-01-01
Integrating care for people with complex needs is challenging. Indeed, evidence of solutions is mixed, and therefore, well-designed, shared evaluation approaches are needed to create cumulative learning. The Toronto-based Building Bridges to Integrate Care (BRIDGES) collaborative provided resources to refine and test nine new models linking primary, hospital and community care. It used mixed methods, a cross-project meta-evaluation and shared outcome measures. Given the range of skills required to develop effective interventions, a novel incubator was used to test and spread opportunities for system integration that included operational expertise and support for evaluation and process improvement.
NASA Astrophysics Data System (ADS)
Nikurashin, Maxim; Gunn, Andrew
2017-04-01
The meridional overturning circulation (MOC) is a planetary-scale oceanic flow which is of direct importance to the climate system: it transports heat meridionally and regulates the exchange of CO2 with the atmosphere. The MOC is forced by wind and heat and freshwater fluxes at the surface and by turbulent mixing in the ocean interior. A number of conceptual theories for the sensitivity of the MOC to changes in forcing have recently been developed and tested with idealized numerical models. However, the skill of the simple conceptual theories to describe the MOC simulated with higher complexity global models remains largely unknown. In this study, we present a systematic comparison of theoretical and modelled sensitivity of the MOC and associated deep ocean stratification to vertical mixing and southern hemisphere westerlies. The results show that theories that simplify the ocean into a single-basin, zonally-symmetric box are generally in good agreement with a realistic, global ocean circulation model. Some disagreement occurs in the abyssal ocean, where complex bottom topography is not taken into account by simple theories. Distinct regimes, where the MOC has a different sensitivity to wind or mixing, as predicted by simple theories, are also clearly shown by the global ocean model. The sensitivity of the Indo-Pacific, Atlantic, and global basins is analysed separately to validate the conceptual understanding of the upper and lower overturning cells in the theory.
Numerical Analysis of Mixed-Phase Icing Cloud Simulations in the NASA Propulsion Systems Laboratory
NASA Technical Reports Server (NTRS)
Bartkus, Tadas; Tsao, Jen-Ching; Struk, Peter; Van Zante, Judith
2017-01-01
This presentation describes the development of a numerical model that couples the thermal interaction between ice particles, water droplets, and the flowing gas of an icing wind tunnel for simulation of NASA Glenn Research Center's Propulsion Systems Laboratory (PSL). The ultimate goal of the model is to better understand the complex interactions between the test parameters and have greater confidence in the conditions at the test section of the PSL tunnel. The model attempts to explain the observed changes in test conditions by coupling the conservation of mass and energy equations for both the cloud particles and the flowing gas mass. Model predictions were compared to measurements taken during May 2015 testing at PSL, where test conditions varied gas temperature, pressure, velocity and humidity levels, as well as the cloud total water content, particle initial temperature, and particle size distribution.
Numerical Analysis of Mixed-Phase Icing Cloud Simulations in the NASA Propulsion Systems Laboratory
NASA Technical Reports Server (NTRS)
Bartkus, Tadas P.; Tsao, Jen-Ching; Struk, Peter M.; Van Zante, Judith F.
2017-01-01
This paper describes the development of a numerical model that couples the thermal interaction between ice particles, water droplets, and the flowing gas of an icing wind tunnel for simulation of NASA Glenn Research Center's Propulsion Systems Laboratory (PSL). The ultimate goal of the model is to better understand the complex interactions between the test parameters and have greater confidence in the conditions at the test section of the PSL tunnel. The model attempts to explain the observed changes in test conditions by coupling the conservation of mass and energy equations for both the cloud particles and the flowing gas mass. Model predictions were compared to measurements taken during May 2015 testing at PSL, where test conditions varied gas temperature, pressure, velocity and humidity levels, as well as the cloud total water content, particle initial temperature, and particle size distribution.
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Axdahl, Erik L.; Cabell, Karen F.
2014-01-01
With the increasing costs of physics experiments and the simultaneous increase in availability and maturity of computational tools, it is not surprising that computational fluid dynamics (CFD) is playing an increasingly important role, not only in post-test investigations, but also in the early stages of experimental planning. This paper describes a CFD-based effort executed in close collaboration between computational fluid dynamicists and experimentalists to develop a virtual experiment during the early planning stages of the Enhanced Injection and Mixing project at NASA Langley Research Center. This project aims to investigate supersonic combustion ramjet (scramjet) fuel injection and mixing physics, improve the understanding of underlying physical processes, and develop enhancement strategies and functional relationships relevant to flight Mach numbers greater than 8. The purpose of the virtual experiment was to provide flow field data to aid in the design of the experimental apparatus and the in-stream rake probes, to verify the nonintrusive measurements based on NO-PLIF, and to perform pre-test analysis of quantities obtainable from the experiment and CFD. The approach also allowed the joint team to develop common data processing and analysis tools, and to test research ideas. The virtual experiment consisted of a series of Reynolds-averaged simulations (RAS). These simulations included the facility nozzle, the experimental apparatus with a baseline strut injector, and the test cabin. Pure helium and helium-air mixtures were used to determine the efficacy of different inert gases to model hydrogen injection. The results of the simulations were analyzed by computing mixing efficiency, total pressure recovery, and stream thrust potential. As the experimental effort progresses, the simulation results will be compared with the experimental data to calibrate the modeling constants present in the CFD and validate simulation fidelity. CFD will also be used to investigate different injector concepts, improve understanding of the flow structure and flow physics, and develop functional relationships. Both RAS and large eddy simulations (LES) are planned for post-test analysis of the experimental data.
Prewhitening of Colored Noise Fields for Detection of Threshold Sources
1993-11-07
determines the noise covariance matrix, prewhitening techniques allow detection of threshold sources. The multiple signal classification (MUSIC)… Subject terms: AR model, colored noise field, mixed spectra model, MUSIC, noise field, prewhitening, SNR, standardized test. Report contents include worked examples on complex AR coefficients and MUSIC in a colored background noise.
Maps and models of density and stiffness within individual Douglas-fir trees
Christine L. Todoroki; Eini C. Lowell; Dennis P. Dykstra; David G. Briggs
2012-01-01
Spatial maps of density and stiffness patterns within individual trees were developed using two methods: (1) measured wood properties of veneer sheets; and (2) mixed effects models, to test the hypothesis that within-tree patterns could be predicted from easily measurable tree variables (height, taper, breast-height diameter, and acoustic velocity). Sample trees...
ERIC Educational Resources Information Center
Herman, Melissa R.
This paper describes the achievement patterns of a sample of 1,492 multiracial high school students and examines how their achievement fits into existing theoretical models that explain monoracial differences in achievement. These theoretical models include status attainment, parenting style, oppositional culture, and educational attitudes. The…
A preliminary case-mix classification system for Medicare home health clients.
Branch, L G; Goldberg, H B
1993-04-01
In this study, a hierarchical case-mix model was developed for grouping Medicare home health beneficiaries homogeneously, based on the allowed charges for their home care. Based on information from a two-page form from 2,830 clients from ten states and using the classification and regression trees method, a four-component model was developed that yielded 11 case-mix groups and explained 22% of the variance for the test sample of 1,929 clients. The four components are rehabilitation, special care, skilled-nurse monitoring, and paralysis; each is categorized as present or absent. The range of mean allowed charges for the 11 groups in the total sample was $473 to $2,562, with a mean of $847. Of the six groups with mean charges above $1,000, none exceeded 5.2% of clients; thus, the high-cost groups are relatively rare.
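A hedged illustration of grouping clients with a regression tree on allowed charges, in the spirit of the classification-and-regression-trees approach described above; the indicator coding of the four components and the toy data are assumptions, not the study's dataset.

```python
# Regression tree on allowed charges; terminal nodes play the role of case-mix groups.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# X: one row per client with 0/1 indicators for rehabilitation, special care,
# skilled-nurse monitoring, and paralysis; y: allowed charges in dollars (invented).
X = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0],
              [1, 1, 0, 1], [0, 0, 0, 0], [1, 0, 1, 0]])
y = np.array([1200.0, 2100.0, 650.0, 2550.0, 480.0, 1500.0])

tree = DecisionTreeRegressor(max_leaf_nodes=11, min_samples_leaf=1).fit(X, y)
groups = tree.apply(X)            # terminal-node id = case-mix group per client
print(groups, tree.predict(X))    # group assignment and mean charge per group
```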
Exploiting Cross-sensitivity by Bayesian Decoding of Mixed Potential Sensor Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreller, Cortney
LANL mixed-potential electrochemical sensor (MPES) device arrays were coupled with advanced Bayesian inference treatment of the physical model of the relevant sensor-analyte interactions. We demonstrated that our approach could be used to uniquely discriminate the composition of ternary gas mixtures with three discrete MPES sensors, with an average error of less than 2%. We also observed that the MPES exhibited excellent stability over a year of operation at elevated temperatures in the presence of test gases.
Modeling of surface temperature effects on mixed material migration in NSTX-U
NASA Astrophysics Data System (ADS)
Nichols, J. H.; Jaworski, M. A.; Schmid, K.
2016-10-01
NSTX-U will initially operate with graphite walls, periodically coated with thin lithium films to improve plasma performance. However, the spatial and temporal evolution of these films during and after plasma exposure is poorly understood. The WallDYN global mixed-material surface evolution model has recently been applied to the NSTX-U geometry to simulate the evolution of poloidally inhomogeneous mixed C/Li/O plasma-facing surfaces. The WallDYN model couples local erosion and deposition processes with plasma impurity transport in a non-iterative, self-consistent manner that maintains overall material balance. Temperature-dependent sputtering of lithium has been added to WallDYN, utilizing an adatom sputtering model developed from test stand experimental data. Additionally, a simplified temperature-dependent diffusion model has been added to WallDYN so as to capture the intercalation of lithium into a graphite bulk matrix. The sensitivity of global lithium migration patterns to changes in surface temperature magnitude and distribution will be examined. The effect of intra-discharge increases in surface temperature due to plasma heating, such as those observed during NSTX Liquid Lithium Divertor experiments, will also be examined. Work supported by US DOE contract DE-AC02-09CH11466.
The Influence of Neck Muscle Activation on Head and Neck Injuries of Occupants in Frontal Impacts.
Li, Fan; Lu, Ronggui; Hu, Wei; Li, Honggeng; Hu, Shiping; Hu, Jiangzhong; Wang, Haibin; Xie, He
2018-01-01
The aim of the present paper was to study the influence of neck muscle activation on head and neck injuries of vehicle occupants in frontal impacts. A mixed dummy-human finite element model was developed to simulate a frontal impact. The head-neck part of a Hybrid III dummy model was replaced by a well-validated head-neck FE model with passive and active muscle characteristics. The mixed dummy-human FE model was validated by 15 G frontal volunteer tests conducted in the Naval Biodynamics Laboratory. The effects of neck muscle activation on the head dynamic responses and neck injuries of occupants in three frontal impact intensities, low speed (10 km/h), medium speed (30 km/h), and high speed (50 km/h), were studied. The results showed that the mixed dummy-human FE model has good biofidelity. The activation of neck muscles can not only lower the head resultant acceleration under different impact intensities and the head angular acceleration in medium- and high-speed impacts, thereby reducing the risks of head injury, but also protect the neck from injury in low-speed impacts.
On-road heavy-duty diesel particulate matter emissions modeled using chassis dynamometer data.
Kear, Tom; Niemeier, D A
2006-12-15
This study presents a model, derived from chassis dynamometer test data, for factors (operational correction factors, or OCFs) that correct heavy-duty diesel particle emission rates (g/mi) measured on standard test cycles to real-world conditions. Using a random effects mixed regression model with data from 531 tests of 34 heavy-duty vehicles from the Coordinating Research Council's E55/E59 research project, we specify a model with covariates that characterize high-power transient driving, time spent idling, and average speed. Gram-per-mile particle emission rates were negatively correlated with high-power transient driving, average speed, and time idling. The new model is capable of predicting relative changes in g/mi on-road heavy-duty diesel particle emission rates for real-world driving conditions that are not reflected in the driving cycles used to test heavy-duty vehicles.
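The sketch below shows a random-effects (mixed) regression of the kind described above, with a random intercept per vehicle; the column names, synthetic data, and coefficient values are assumptions chosen only to make the call runnable.

```python
# Mixed (random intercept per vehicle) regression of g/mi PM on driving covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_veh, n_tests = 10, 6
vehicle = np.repeat([f"v{i}" for i in range(n_veh)], n_tests)
high_power = rng.uniform(0.0, 0.3, n_veh * n_tests)
idle = rng.uniform(0.0, 0.5, n_veh * n_tests)
speed = rng.uniform(10.0, 60.0, n_veh * n_tests)
veh_effect = np.repeat(rng.normal(0.0, 0.2, n_veh), n_tests)
# negative dependence on all three covariates, as reported in the abstract
pm = (2.0 - 1.5 * high_power - 0.5 * idle - 0.01 * speed
      + veh_effect + rng.normal(0.0, 0.1, n_veh * n_tests))

df = pd.DataFrame({"pm_gpm": pm, "high_power": high_power,
                   "idle_fraction": idle, "avg_speed_mph": speed,
                   "vehicle_id": vehicle})
fit = smf.mixedlm("pm_gpm ~ high_power + idle_fraction + avg_speed_mph",
                  df, groups=df["vehicle_id"]).fit()
print(fit.summary())
```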
NASA Astrophysics Data System (ADS)
Parsakhoo, Zahra; Shao, Yaping
2017-04-01
Near-surface turbulent mixing has a considerable effect on surface fluxes, cloud formation and convection in the atmospheric boundary layer (ABL). Its quantification is, however, a modeling and computational challenge, since the small eddies are not directly resolved in Eulerian models. We have developed a Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land surface heterogeneity in the atmospheric boundary layer, based on the Ito stochastic differential equation (SDE) for air parcels (particles). Due to the complexity of the mixing in the ABL, we find that a linear Ito SDE cannot represent convection properly. Three strategies have been tested to solve the problem: 1) making the deterministic term in the Ito equation non-linear; 2) making the random term in the Ito equation fractional; and 3) modifying the Ito equation by including Levy flights. We focus on the third strategy and interpret mixing as an interaction between at least two stochastic processes with different Lagrangian time scales. Work on the model is in progress to include collisions among particles with different characteristics and to apply the 3D model to real cases. One application of the model is emphasized: land surface patterns are generated and then coupled with large eddy simulation (LES).
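A minimal Euler-Maruyama sketch of a Lagrangian stochastic (Ito SDE) particle model for vertical velocity and position in the boundary layer; the drift and diffusion closure, boundary treatment, and parameter values are assumptions, and the nonlinear, fractional, and Levy-flight variants discussed above are not implemented.

```python
# Euler-Maruyama integration of dw = -(w/tau) dt + sqrt(2 sigma_w^2/tau) dW, dz = w dt.
import numpy as np

def simulate_particles(n=1000, dt=0.5, n_steps=2000, zi=1000.0, sigma_w=0.6, tau=200.0):
    rng = np.random.default_rng(0)
    z = rng.uniform(0.0, zi, n)          # particle heights (m)
    w = rng.normal(0.0, sigma_w, n)      # particle vertical velocities (m/s)
    for _ in range(n_steps):
        w = w - (w / tau) * dt + np.sqrt(2.0 * sigma_w**2 / tau) * np.sqrt(dt) * rng.normal(size=n)
        z_new = z + w * dt
        # reflect particles at the surface and at the boundary-layer top
        below, above = z_new < 0.0, z_new > zi
        z_new[below] = -z_new[below]
        z_new[above] = 2.0 * zi - z_new[above]
        w[below | above] *= -1.0
        z = z_new
    return z, w

z, w = simulate_particles()
print(z.mean(), z.std())
```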
Testing constrained sequential dominance models of neutrinos
NASA Astrophysics Data System (ADS)
Björkeroth, Fredrik; King, Stephen F.
2015-12-01
Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We analyze a class of CSD(n) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the 'atmospheric' and 'solar' neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n-2), respectively, where n is a positive integer. These coupling patterns may arise in indirect family symmetry models based on A4. With two right-handed neutrinos, using a χ² test, we find a good agreement with data for CSD(3) and CSD(4) where the entire Pontecorvo-Maki-Nakagawa-Sakata mixing matrix is controlled by a single phase η, which takes simple values, leading to accurate predictions for mixing angles and the magnitude of the oscillation phase |δ_CP|. We carefully study the perturbing effect of a third 'decoupled' right-handed neutrino, leading to a bound on the lightest physical neutrino mass m₁ ≲ 1 meV for the viable cases, corresponding to a normal neutrino mass hierarchy. We also discuss a direct link between the oscillation phase δ_CP and leptogenesis in CSD(n) due to the same see-saw phase η appearing in both the neutrino mass matrix and leptogenesis.
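To make the CSD(n) structure concrete, the sketch below builds the two-right-handed-neutrino mass matrix from the (0,1,1) and (1,n,n-2) patterns and reads off the mixing-angle magnitudes numerically; the normalization and the parameter values (roughly CSD(3)-like) are assumptions, not the paper's fit.

```python
# m_nu ∝ m_a (a a^T) + m_b e^{i eta} (b b^T) with a=(0,1,1), b=(1,n,n-2);
# the angles are extracted from the eigenvectors of m_nu m_nu^dagger.
import numpy as np

def csd_angles(n=3, m_a=27.0e-3, m_b=2.7e-3, eta=2.0 * np.pi / 3.0):  # masses in eV (assumed)
    a = np.array([0.0, 1.0, 1.0])
    b = np.array([1.0, float(n), float(n - 2)])
    m_nu = m_a * np.outer(a, a) + m_b * np.exp(1j * eta) * np.outer(b, b)
    masses_sq, U = np.linalg.eigh(m_nu @ m_nu.conj().T)   # ascending eigenvalues
    s13 = abs(U[0, 2])
    s12 = abs(U[0, 1]) / np.sqrt(1.0 - s13**2)
    s23 = abs(U[1, 2]) / np.sqrt(1.0 - s13**2)
    return np.sqrt(np.abs(masses_sq)), np.degrees(np.arcsin([s12, s23, s13]))

masses, (theta12, theta23, theta13) = csd_angles()
print(masses, theta12, theta23, theta13)
```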
A study of reacting free and ducted hydrogen/air jets
NASA Technical Reports Server (NTRS)
Beach, H. L., Jr.
1975-01-01
The mixing and reaction of a supersonic jet of hydrogen in coaxial free and ducted high temperature test gases were investigated. The importance of chemical kinetics on computed results, and the utilization of free-jet theoretical approaches to compute enclosed flow fields were studied. Measured pitot pressure profiles were correlated by use of a parabolic mixing analysis employing an eddy viscosity model. All computations, including free, ducted, reacting, and nonreacting cases, use the same value of the empirical constant in the viscosity model. Equilibrium and finite rate chemistry models were utilized. The finite rate assumption allowed prediction of observed ignition delay, but the equilibrium model gave the best correlations downstream from the ignition location. Ducted calculations were made with finite rate chemistry; correlations were, in general, as good as the free-jet results until problems with the boundary conditions were encountered.
IBM-2 calculation with configuration mixing for Ge isotopes
NASA Astrophysics Data System (ADS)
Padilla-Rodal, Elizabeth; Galindo-Uribarri, Alfredo
2005-04-01
Recent results from Coulomb excitation experiments on radioactive neutron-rich Ge isotopes at the Holifield Radioactive Ion Beam Facility allow the study of the systematic trend of B(E2; 0⁺ → 2⁺) between the sub-shell closures at N=40 and N=50 [1]. The new information on the E2 transition strengths constitutes a stringent test for nuclear models and has motivated us to revisit the use of the Interacting Boson Model in this region. We show that the IBM-2 with configuration mixing is a successful model for describing the shape transition phenomena that take place around N=40 in stable germanium isotopes, as well as the predictions given by this model for the evolution of the structure of the radioactive ⁷⁸,⁸⁰,⁸²Ge nuclei. [1] E. Padilla-Rodal, Ph.D. Thesis, UNAM; submitted for publication.
An Analytical Thermal Model for Autonomous Soaring Research
NASA Technical Reports Server (NTRS)
Allen, Michael
2006-01-01
A viewgraph presentation describing an analytical thermal model used to enable research on autonomous soaring for a small UAV aircraft is given. The topics include: 1) Purpose; 2) Approach; 3) SURFRAD Data; 4) Convective Layer Thickness; 5) Surface Heat Budget; 6) Surface Virtual Potential Temperature Flux; 7) Convective Scaling Velocity; 8) Other Calculations; 9) Yearly trends; 10) Scale Factors; 11) Scale Factor Test Matrix; 12) Statistical Model; 13) Updraft Strength Calculation; 14) Updraft Diameter; 15) Updraft Shape; 16) Smoothed Updraft Shape; 17) Updraft Spacing; 18) Environment Sink; 19) Updraft Lifespan; 20) Autonomous Soaring Research; 21) Planned Flight Test; and 22) Mixing Ratio.
Chen, Yuqian; Ke, Yufeng; Meng, Guifang; Jiang, Jin; Qi, Hongzhi; Jiao, Xuejun; Xu, Minpeng; Zhou, Peng; He, Feng; Ming, Dong
2017-12-01
As one of the most important brain-computer interface (BCI) paradigms, P300-Speller was shown to be significantly impaired once applied in practical situations due to the effects of mental workload. This study aims to provide a new method of building training models to enhance the performance of P300-Speller under mental workload. Three experimental conditions based on the row-column P300-Speller paradigm were performed, including speller-only, 3-back-speller and mental-arithmetic-speller. Data from the dual-task conditions were added to the speller-only data to build new training models. The performance of classifiers built with the different models was then compared under the same testing condition. The results showed that when the tasks of the imported training data and the testing data were the same, character recognition accuracies and round accuracies of P300-Speller with mixed-data training models significantly improved (FDR, p < 0.005). When they were different, performance significantly improved when tested on mental-arithmetic-speller (FDR, p < 0.05), while the improvement was modest when tested on n-back-speller (FDR, p < 0.1). The analysis of ERPs revealed that the ERP difference between training data and testing data was significantly diminished when the dual-task data were introduced into the training data (FDR, p < 0.05). The new method of training the classifier on mixed data proved effective in enhancing the performance of P300-Speller under mental workload, confirming the feasibility of building a universal training model and overcoming the effects of mental workload in practical applications. Copyright © 2017 Elsevier B.V. All rights reserved.
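The mixed-data training step described above amounts to pooling dual-task epochs with the speller-only epochs before fitting a single classifier. A minimal sketch of that pooling, assuming pre-extracted ERP feature matrices and using scikit-learn's LDA as a stand-in for the study's actual classifier and feature pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

def train_mixed_model(X_speller, y_speller, X_dual, y_dual):
    """Pool speller-only and dual-task epochs into one training set.

    X_* are (n_epochs, n_features) ERP feature arrays; y_* are binary
    target/non-target labels. Names and shapes are illustrative only.
    """
    X_train = np.vstack([X_speller, X_dual])
    y_train = np.concatenate([y_speller, y_dual])
    clf = LinearDiscriminantAnalysis()
    clf.fit(X_train, y_train)
    return clf

# Hypothetical usage: evaluate on epochs recorded under mental workload.
# clf = train_mixed_model(X_speller, y_speller, X_nback, y_nback)
# print(accuracy_score(y_workload, clf.predict(X_workload)))
```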
Merino, M P; Andrews, B A; Parada, P; Asenjo, J A
2016-11-01
Biomining is defined as biotechnology for metal recovery from minerals, and is promoted by the concerted effort of a consortium of acidophile prokaryotes, comprised of members of the Bacteria and Archaea domains. Ferroplasma acidiphilum and Leptospirillum ferriphilum are the dominant species in extremely acid environments and have great use in bioleaching applications; however, the role of each species in this consortium is still a subject of research. The hypothesis of this work is that F. acidiphilum uses the organic matter secreted by L. ferriphilum for growth, maintaining low levels of organic compounds in the culture medium and preventing their toxic effects on L. ferriphilum. To test this hypothesis, a characterization of Ferroplasma acidiphilum strain BRL-115 was made with the objective of determining its optimal growth conditions. Subsequently, under the optimal conditions, L. ferriphilum and F. acidiphilum were tested growing in each other's supernatant, in order to determine whether there was an exchange of metabolites between the species. With these results, a mixed culture in batch cyclic operation was performed to obtain the main specific growth rates, which were used to evaluate a mixed metabolic model previously developed by our group. It was observed that F. acidiphilum strain BRL-115 is a chemomixotrophic organism, and its growth is maximized with yeast extract at a concentration of 0.04% wt/vol. From the experiments of L. ferriphilum growing on F. acidiphilum supernatant and vice versa, it was observed that in both cases cell growth is favorably affected by the presence of the filtered medium of the other microorganism, demonstrating a synergistic interaction between these species. Specific growth rates were obtained in cyclic batch operation of the mixed culture and were used as input data for a Flux Balance Analysis of the mixed metabolic model, yielding reasonable behavior of the metabolic fluxes and of the system as a whole, thereby consolidating the model previously developed. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1390-1396, 2016. © 2016 American Institute of Chemical Engineers.
NASA Astrophysics Data System (ADS)
Galloway, A. W. E.; Eisenlord, M. E.; Brett, M. T.
2016-02-01
Stable isotope (SI) based mixing models are the most common approach used to infer resource pathways in consumers. However, SI-based analyses are often underdetermined, and consumer SI fractionation is usually unknown. The use of fatty acid (FA) tracers in mixing models offers an alternative approach that can resolve the underdetermined constraint. A limitation of both methods is the considerable uncertainty about consumer 'trophic modification' (TM) of dietary FA or SI, which occurs as consumers transform dietary resources into tissues. We tested the utility of SI and FA approaches for inferring the diets of the marine benthic isopod (Idotea wosnesenskii) fed various marine macroalgae in controlled feeding trials. Our analyses quantified how the accuracy and precision of Bayesian mixing models were influenced by the choice of algorithm (SIAR vs MixSIR), fractionation (assumed or known), and whether the model was under- or overdetermined (seven sources and two vs 26 tracers) for cases where isopods were fed an exclusive diet of one of the seven different macroalgae. Using the conventional approach (i.e., 2 SI with assumed TM) resulted in average model outputs, i.e., the contribution from the exclusive resource = 0.20 ± 0.23 (0.00-0.79), mean ± SD (95% credible interval), that only differed slightly from the prior assumption. Using the FA-based approach with known TM greatly improved model performance, i.e., the contribution from the exclusive resource = 0.91 ± 0.10 (0.58-0.99). The choice of algorithm only made a difference when fractionation was known and the model was overdetermined (FA approach). In this case SIAR and MixSIR had outputs of 0.86 ± 0.11 (0.48-0.96) and 0.96 ± 0.05 (0.79-1.00), respectively. This analysis shows that the choice of dietary tracers and the assumption of consumer trophic modification greatly influence the performance of mixing model dietary reconstructions, and ultimately our understanding of what resources actually support aquatic consumers.
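A sampling-importance-resampling mixing model in the spirit of MixSIR can be sketched as follows; this is an illustrative toy, not the published SIAR/MixSIR code, and the uniform Dirichlet prior, Gaussian error structure, and array names are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixing_model_sir(consumer, src_mean, src_sd, tm_mean, tm_sd, n_draws=100_000):
    """Sampling-importance-resampling sketch of a tracer mixing model.

    consumer : (n_tracers,) observed consumer tracer values
    src_mean, src_sd : (n_sources, n_tracers) source means / SDs
    tm_mean, tm_sd   : (n_tracers,) trophic-modification offsets and SDs
    Returns resampled posterior draws of the diet proportions.
    """
    n_src, _ = src_mean.shape
    p = rng.dirichlet(np.ones(n_src), size=n_draws)        # uniform prior on proportions
    mix_mean = p @ src_mean + tm_mean                       # expected consumer tracers
    mix_var = (p ** 2) @ (src_sd ** 2) + tm_sd ** 2         # propagated variance
    loglik = -0.5 * np.sum((consumer - mix_mean) ** 2 / mix_var
                           + np.log(2 * np.pi * mix_var), axis=1)
    w = np.exp(loglik - loglik.max())
    idx = rng.choice(n_draws, size=n_draws, p=w / w.sum())  # importance resampling
    return p[idx]
```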
Evaluating the influences of mixing strategies on the Biochemical Methane Potential test.
Wang, Bing; Björn, Annika; Strömberg, Sten; Nges, Ivo Achu; Nistor, Mihaela; Liu, Jing
2017-01-01
Mixing plays an important role in the Biochemical Methane Potential (BMP) test, but only limited effort has been devoted to studying it. In this study, various mixing strategies were applied to evaluate their influence on the BMP test: no mixing, shaking in a water bath, manual shaking once per day (SKM), and automated unidirectional and bidirectional mixing. The results show that the effects of mixing are prominent for the most viscous substrate investigated, as both the highest methane production and the highest maximal daily methane production were obtained at the highest mixing intensity. However, the organic removal efficiencies were not affected, which might offer evidence that mixing helps the release of gases trapped in the digester liquid. Moreover, mixing is required for improved methane production when the digester content is viscous; conversely, mixing is unnecessary, or SKM might be sufficient, for the BMP test if the digester content is quite dilute or the substrate is easily degraded. Copyright © 2016 Elsevier Ltd. All rights reserved.
A normal stress subgrid-scale eddy viscosity model in large eddy simulation
NASA Technical Reports Server (NTRS)
Horiuti, K.; Mansour, N. N.; Kim, John J.
1993-01-01
The Smagorinsky subgrid-scale eddy viscosity model (SGS-EVM) is commonly used in large eddy simulations (LES) to represent the effects of the unresolved scales on the resolved scales. This model is known to be limited because its constant must be optimized in different flows, and it must be modified with a damping function to account for near-wall effects. The recent dynamic model is designed to overcome these limitations but is computationally intensive as compared to the traditional SGS-EVM. In a recent study using direct numerical simulation data, Horiuti has shown that these drawbacks are due mainly to the use of an improper velocity scale in the SGS-EVM. He also proposed the use of the subgrid-scale normal stress as a new velocity scale that was inspired by a high-order anisotropic representation model. The testing of Horiuti, however, was conducted using DNS data from a low Reynolds number channel flow simulation. Further testing at higher Reynolds numbers and with different flows (other than wall-bounded shear flows) was a necessary step to establish the validity of the new model. This is the primary motivation of the present study. The objective is to test the new model using DNS databases of high Reynolds number channel and fully developed turbulent mixing layer flows. The use of both channel (wall-bounded) and mixing layer flows is important for the development of accurate LES models because these two flows encompass many characteristic features of complex turbulent flows.
Implications of Upwells as Hydrodynamic Jets in a Pulse Jet Mixed System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pease, Leonard F.; Bamberger, Judith A.; Minette, Michael J.
2015-08-01
This report evaluates the physics of the upwell flow in pulse jet mixed systems in the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Although the initial downward flow and radial flow from pulse jet mixers (PJMs) have been analyzed in some detail, the upwells have received considerably less attention despite having significant implications for vessel mixing. Do the upwells behave like jets? How do the upwells scale? When will the central upwell break through? What proportion of the vessel is blended by the upwells themselves? Indeed, how the physics of the central upwell is affected by multiple PJMs (e.g., six in the proposed mixing vessels), non-Newtonian rheology, and significant multicomponent solids loadings remains unexplored. The central upwell must satisfy several criteria to be considered a free jet. First, it must travel for several diameters in a nearly constant direction. Second, its velocity must decay with the inverse of elevation. Third, it should have an approximately Gaussian profile. Fourth, the influence of surface or body forces must be negligible. A combination of historical data in a 12.75 ft test vessel, newly analyzed data from the 8 ft test vessel, and conservation of momentum arguments derived specifically for PJM operating conditions demonstrates that the central upwell satisfies these criteria where vigorous breakthrough is achieved. An essential feature of scaling from one vessel to the next is the requirement that the underlying physics does not change adversely. One may have confidence in scaling if (1) correlations and formulas capture the relevant physics; (2) the underlying physics does not change from the conditions under which it was developed to the conditions of interest; (3) all factors relevant to scaling have been incorporated, including flow, material, and geometric considerations; and (4) the uncertainty in the relationships is sufficiently narrow to meet required specifications. Although the central upwell satisfies these criteria when vigorous breakthrough is achieved, not all available data follow the free jet profile for the central upwell, particularly at lower nozzle velocities. Alternative flow regimes are considered and new models for cloud height, “cavern height,” and the rate of jet penetration (jet celerity) are benchmarked against data to anchor scaling analyses. This analytical modeling effort to provide a technical basis for scaling PJM mixed vessels has significant implications for vessel mixing, because jet physics underlies “cavern” height, cloud height, and the volume of mixing considerations. A new four-parameter cloud height model compares favorably to experimental results. This model is predictive of breakthrough in 8 ft vessel tests with the two-part simulant. Analysis of the upwell in the presence of yield stresses finds evidence of expanding turbulent jets, confined turbulent jets, and confined laminar flows. For each, the critical elevation at which jet momentum depletes is predicted; these predictions compare favorably with experimental cavern height data. Partially coupled momentum and energy balances suggest that these are limiting cases of a gradual transition from a turbulent expanding flow to a confined laminar flow. This analysis of the central upwell alone lays essential groundwork for complete analysis of mode three mixing (i.e., breakthrough with slow peripheral mixing). Consideration of jet celerity shows that the rate of jet penetration is a governing consideration in breakthrough to the surface.
Estimates of the volume of mixing are presented. This analysis shows that flow along the vessel wall is sluggish such that the central upwell governs the volume of mixing. This analysis of the central upwell alone lays essential groundwork for complete analysis of mode three mixing and estimates of hydrogen release rates from first principles.
Development of a finite element based thermal cracking performance prediction model.
DOT National Transportation Integrated Search
2009-09-15
Low-temperature cracking of hot-mix asphalt (HMA) pavements continues to be a leading cause of premature pavement deterioration in regions of cold climate and/or where significant thermal cycling occurs. Recent advances in fracture testing and mo...
Uncertainty quantification of measured quantities for a HCCI engine: composition or temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petitpas, Guillaume; Whitesides, Russell
UQHCCI_1 computes the measurement uncertainties of an HCCI engine test bench, taking the pressure trace and the estimated uncertainties of the measured quantities as inputs and propagating them through Bayesian inference and a mixing model.
DNS and LES of a Shear-Free Mixing Layer
NASA Technical Reports Server (NTRS)
Knaepen, B.; Debliquy, O.; Carati, D.
2003-01-01
The purpose of this work is twofold. First, given the computational resources available today, it is possible to reach, using DNS, higher Reynolds numbers than in Briggs et al.. In the present study, the microscale Reynolds numbers reached in the low- and high-energy homogeneous regions are, respectively, 32 and 69. The results reported earlier can thus be complemented and their robustness in the presence of increased turbulence studied. The second aim of this work is to perform a detailed and documented LES of the shear-free mixing layer. In that respect, the creation of a DNS database at higher Reynolds number is necessary in order to make meaningful LES assessments. From the point of view of LES, the shear-free mixing-layer is interesting since it allows one to test how traditional LES models perform in the presence of an inhomogeneity without having to deal with difficult numerical issues. Indeed, as argued in Briggs et al., it is possible to use a spectral code to study the shear-free mixing layer and one can thus focus on the accuracy of the modelling while avoiding contamination of the results by commutation errors etc. This paper is organized as follows. First we detail the initialization procedure used in the simulation. Since the flow is not statistically stationary, this initialization procedure has a fairly strong influence on the evolution. Although we will focus here on the shear-free mixing layer, the method proposed in the present work can easily be used for other flows with one inhomogeneous direction. The next section of the article is devoted to the description of the DNS. All the relevant parameters are listed and comparison with the Veeravalli & Warhaft experiment is performed. The section on the LES of the shear-free mixing layer follows. A detailed comparison between the filtered DNS data and the LES predictions is presented. It is shown that simple eddy viscosity models perform very well for the present test case, most probably because the flow seems to be almost isotropic in the small-scale range that is not resolved by the LES.
Modeling Cloud Phase Fraction Based on In-situ Observations in Stratiform Clouds
NASA Astrophysics Data System (ADS)
Boudala, F. S.; Isaac, G. A.
2005-12-01
Mixed-phase clouds influence weather and climate in several ways. Because they exhibit very different optical properties compared to ice-only or liquid-only clouds, they play an important role in the earth's radiation balance by modifying the optical properties of clouds. Precipitation development in clouds is also enhanced under mixed-phase conditions, and these clouds may contain large supercooled drops that freeze quickly on contact with aircraft surfaces, posing a hazard to aviation. The existence of ice and liquid phases together in the same environment is thermodynamically unstable, and thus such clouds are expected to disappear quickly. However, several observations show that mixed-phase clouds are relatively stable in the natural environment and last for several hours. Although some efforts have been made in the past to study the microphysical properties of mixed-phase clouds, there are still a number of uncertainties in modeling these clouds, particularly in large-scale numerical models. In most models, very simple temperature-dependent parameterizations of cloud phase fraction are used to estimate the fraction of ice or liquid phase in a given mixed-phase cloud. In this talk, two different parameterizations of ice fraction using in-situ aircraft measurements of cloud microphysical properties collected in extratropical stratiform clouds during several field programs will be presented. One of the parameterizations has been tested using a single prognostic equation developed by Tremblay et al. (1996) for application in the Canadian regional weather prediction model. The addition of small ice particles significantly increased the vapor deposition rate when the natural atmosphere is assumed to be water saturated, and thus this enhanced the glaciation of the simulated mixed-phase cloud via the Bergeron-Findeisen process without significantly affecting the other cloud microphysical processes such as riming and particle sedimentation rates. After the water vapor pressure in mixed-phase cloud was modified based on the Lord et al. (1984) scheme by weighting the saturation water vapor pressure with ice fraction, it was possible to simulate more stable mixed-phase cloud. It was also noted that the ice particle concentration (L>100 μm) in mixed-phase cloud is lower on average by a factor of 3, and as a result the parameterization should be corrected for this effect. After accounting for this effect, the parameterized ice fraction agreed well with the observed mean ice fraction.
Bjork, K E; Kopral, C A; Wagner, B A; Dargatz, D A
2015-12-01
Antimicrobial use in agriculture is considered a pathway for the selection and dissemination of resistance determinants among animal and human populations. From 1997 through 2003 the U.S. National Antimicrobial Resistance Monitoring System (NARMS) tested clinical Salmonella isolates from multiple animal and environmental sources throughout the United States for resistance to panels of 16-19 antimicrobials. In this study we applied two mixed effects models, the generalized linear mixed model (GLMM) and accelerated failure time frailty (AFT-frailty) model, to susceptible/resistant and interval-censored minimum inhibitory concentration (MIC) metrics, respectively, from Salmonella enterica subspecies enterica serovar Typhimurium isolates from livestock and poultry. Objectives were to compare characteristics of the two models and to examine the effects of time, species, and multidrug resistance (MDR) on the resistance of isolates to individual antimicrobials, as revealed by the models. Fixed effects were year of sample collection, isolate source species and MDR indicators; laboratory study site was included as a random effect. MDR indicators were significant for every antimicrobial and were dominant effects in multivariable models. Temporal trends and source species influences varied by antimicrobial. In GLMMs, the intra-class correlation coefficient ranged up to 0.8, indicating that the proportion of variance accounted for by laboratory study site could be high. AFT models tended to be more sensitive, detecting more curvilinear temporal trends and species differences; however, high levels of left- or right-censoring made some models unstable and results uninterpretable. Results from GLMMs may be biased by cutoff criteria used to collapse MIC data into binary categories, and may miss signaling important trends or shifts if the series of antibiotic dilutions tested does not span a resistance threshold. Our findings demonstrate the challenges of measuring the AMR ecosystem and the complexity of interacting factors, and have implications for future monitoring. We include suggestions for future data collection and analyses, including alternative modeling approaches. Published by Elsevier B.V.
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte-Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
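For reference, the Crawford and Howell modified t-test that the regression and LMM formulations generalize can be written in a few lines; the repeated-measures LMM extension itself would require a mixed-model fit and is not shown here:

```python
import numpy as np
from scipy import stats

def crawford_modified_t(case_score, control_scores):
    """Modified t-test for a single case against a small control sample.

    t = (x_case - mean(controls)) / (sd(controls) * sqrt((n + 1) / n)), df = n - 1.
    Returns the t statistic and the two-tailed p value.
    """
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    t = (case_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

# Illustrative usage with made-up scores:
# print(crawford_modified_t(12.0, [20, 22, 19, 21, 23, 18, 20, 22]))
```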
Ocean Turbulence, III: New GISS Vertical Mixing Scheme
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Howard, A. M.; Cheng, Y.; Muller, C. J.; Leboissetier, A.; Jayne, S. R.
2010-01-01
We have found a new way to express the solutions of the RSM (Reynolds Stress Model) equations that allows us to present the turbulent diffusivities for heat, salt and momentum in a way that is considerably simpler and thus easier to implement than in previous work. The RSM provides the dimensionless mixing efficiencies Gamma-alpha (alpha stands for heat, salt and momentum). However, to compute the diffusivities, one needs additional information, specifically, the dissipation epsilon. Since a dynamic equation for the latter that includes the physical processes relevant to the ocean is still not available, one must resort to different sources of information outside the RSM to obtain a complete Mixing Scheme usable in OGCMs. As for the RSM results, we show that the Gamma-alpha are functions of both Ri and Rq (Richardson number and density ratio representing double diffusion, DD); the Gamma-alpha are different for heat, salt and momentum; in the case of heat, the traditional value Gamma-h = 0.2 is valid only in the presence of strong shear (when DD is inoperative), while when shear subsides, NATRE data show that Gamma-h can be three times as large, a result that we reproduce. The salt Gamma-s is given in terms of Gamma-h. The momentum Gamma-m has thus far been guessed with different prescriptions, while the RSM provides a well defined expression for Gamma-m(Ri, R-rho). Having tested Gamma-h, we then test the momentum Gamma-m by showing that the turbulent Prandtl number Gamma-m/Gamma-h vs. Ri reproduces the available data quite well. As for the dissipation epsilon, we use different representations, one for the mixed layer (ML), one for the thermocline and one for the ocean's bottom. For the ML, we adopt a procedure analogous to the one successfully used in PBL (planetary boundary layer) studies; for the thermocline, we employ an expression for the variable epsilon/N^2 from studies of the internal gravity wave spectra which includes a latitude dependence; for the ocean bottom, we adopt the enhanced bottom diffusivity expression used by previous authors but with a state of the art internal tidal energy formulation and replace the fixed Gamma-alpha = 0.2 with the RSM result that brings into the problem the Ri, R-rho dependence of the Gamma-alpha; the unresolved bottom drag, which has thus far been either ignored or modeled with heuristic relations, is modeled using a formalism we previously developed and tested in PBL studies. We carried out several tests without an OGCM. Prandtl and flux Richardson numbers vs. Ri: the RSM model reproduces both types of data satisfactorily. DD and mixing efficiency Gamma-h(Ri, Rq): the RSM model reproduces well the NATRE data. Bimodal epsilon distribution: NATRE data show that epsilon(Ri < 1) approximately equals 10 epsilon(Ri > 1), which our model reproduces. Heat to salt flux ratio: in the Ri much greater than 1 regime, the RSM predictions reproduce the data satisfactorily. NATRE mass diffusivity: the z-profile of the mass diffusivity reproduces well the measurements at NATRE. The local form of the mixing scheme is algebraic with one cubic equation to solve.
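Once a mixing efficiency Gamma-alpha and a dissipation epsilon are specified, a diffusivity follows from an Osborn-type relation K_alpha = Gamma_alpha * epsilon / N². A toy sketch under that assumption (the RSM closure described above, with its Ri and R-rho dependence, is considerably richer):

```python
import numpy as np

def diffusivity(gamma_alpha, epsilon, n2, n2_floor=1e-10):
    """Osborn-type turbulent diffusivity K_alpha = Gamma_alpha * epsilon / N^2.

    gamma_alpha : dimensionless mixing efficiency (heat, salt or momentum)
    epsilon     : dissipation rate (W/kg)
    n2          : buoyancy frequency squared (1/s^2), floored to avoid division by ~0
    """
    return gamma_alpha * epsilon / np.maximum(n2, n2_floor)

# Illustrative numbers: Gamma_h = 0.2, eps = 1e-9 W/kg, N^2 = 1e-5 s^-2 -> K ~ 2e-5 m^2/s
print(diffusivity(0.2, 1e-9, 1e-5))
```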
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, VT.; Silva, L.; Digonnet, H.
2011-05-04
The objective of this work is to model the viscoelastic behaviour of a polymer from the solid state to the liquid state. With this objective, we perform experimental tensile tests and compare them with simulation results. The chosen polymer is a PMMA whose behaviour depends on its temperature. The numerical simulation is based on the Navier-Stokes equations, for which we propose a mixed finite element method with a P1+/P1 interpolation using displacement (or velocity) and pressure as principal variables. The implemented technique uses a mesh composed of triangles (2D) or tetrahedra (3D). The goal of this approach is to model the viscoelastic behaviour of polymers through a fluid-structure coupling technique with a multiphase approach.
Testing effects in mixed- versus pure-list designs.
Rowland, Christopher A; Littrell-Baez, Megan K; Sensenig, Amanda E; DeLosh, Edward L
2014-08-01
In the present study, we investigated the role of list composition in the testing effect. Across three experiments, participants learned items through study and initial testing or study and restudy. List composition was manipulated, such that tested and restudied items appeared either intermixed in the same lists (mixed lists) or in separate lists (pure lists). In Experiment 1, half of the participants received mixed lists and half received pure lists. In Experiment 2, all participants were given both mixed and pure lists. Experiment 3 followed Erlebacher's (Psychological Bulletin, 84, 212-219, 1977) method, such that mixed lists, pure tested lists, and pure restudied lists were given to independent groups. Across all three experiments, the final recall results revealed significant testing effects for both mixed and pure lists, with no reliable difference in the magnitude of the testing advantage across list designs. This finding suggests that the testing effect is not subject to a key boundary condition (list design) that impacts other memory phenomena, including the generation effect.
NASA Astrophysics Data System (ADS)
Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie
2018-05-01
Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and single-level modeling of soil units and sample points, respectively. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heterogeneity, and an additional three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity (via CPP) and spatiotemporal correlation (via ARMA(1,1)) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R² = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
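The nested two-level random-effects structure (sample points within soil units) can be sketched with statsmodels' MixedLM; the CPP variance function and ARMA(1,1) residual correlation used in the study are beyond what that routine supports, so this shows only the baseline structure, with hypothetical column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file with columns: ndvi, rainfall, soil_unit, point_id, year
df = pd.read_csv("ndvi_rainfall.csv")

# Random intercept per soil unit (groups) plus a variance component for the
# sample points nested within each soil unit. Heteroscedasticity and ARMA(1,1)
# residual correlation are not modelled in this sketch.
model = smf.mixedlm(
    "ndvi ~ rainfall",
    data=df,
    groups="soil_unit",
    re_formula="~1",
    vc_formula={"point": "0 + C(point_id)"},
)
result = model.fit()
print(result.summary())
```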
Differentiation of mixed biological traces in sexual assaults using DNA fragment analysis
Apostolov, Аleksandar
2014-01-01
During the investigation of sexual abuse, it is not rare that mixed genetic material from two or more persons is detected. In such cases, successful profiling can be achieved using DNA fragment analysis, resulting in individual genetic profiles of offenders and their victims. This has led to an increase in the percentage of identified perpetrators of sexual offenses. The classic and modified genetic models used, allowed us to refine and implement appropriate extraction, polymerase chain reaction and electrophoretic procedures with individual assessment and approach to conducting research. Testing mixed biological traces using DNA fragment analysis appears to be the only opportunity for identifying perpetrators in gang rapes. PMID:26019514
Absolute parameters for AI Phoenicis using WASP photometry
NASA Astrophysics Data System (ADS)
Kirkby-Kent, J. A.; Maxted, P. F. L.; Serenelli, A. M.; Turner, O. D.; Evans, D. F.; Anderson, D. R.; Hellier, C.; West, R. G.
2016-06-01
Context. AI Phe is a double-lined, detached eclipsing binary, in which a K-type sub-giant star totally eclipses its main-sequence companion every 24.6 days. This configuration makes AI Phe ideal for testing stellar evolutionary models. Difficulties in obtaining a complete lightcurve mean the precision of existing radii measurements could be improved. Aims: Our aim is to improve the precision of the radius measurements for the stars in AI Phe using high-precision photometry from the Wide Angle Search for Planets (WASP), and use these improved radius measurements together with estimates of the masses, temperatures and composition of the stars to place constraints on the mixing length, helium abundance and age of the system. Methods: A best-fit ebop model is used to obtain lightcurve parameters, with their standard errors calculated using a prayer-bead algorithm. These were combined with previously published spectroscopic orbit results, to obtain masses and radii. A Bayesian method is used to estimate the age of the system for model grids with different mixing lengths and helium abundances. Results: The radii are found to be R1 = 1.835 ± 0.014 R⊙, R2 = 2.912 ± 0.014 R⊙ and the masses M1 = 1.1973 ± 0.0037 M⊙, M2 = 1.2473 ± 0.0039 M⊙. From the best-fit stellar models we infer a mixing length of 1.78, a helium abundance of YAI = 0.26 +0.02-0.01 and an age of 4.39 ± 0.32 Gyr. Times of primary minimum show the period of AI Phe is not constant. Currently, there are insufficient data to determine the cause of this variation. Conclusions: Improved precision in the masses and radii have improved the age estimate, and allowed the mixing length and helium abundance to be constrained. The eccentricity is now the largest source of uncertainty in calculating the masses. Further work is needed to characterise the orbit of AI Phe. Obtaining more binaries with parameters measured to a similar level of precision would allow us to test for relationships between helium abundance and mixing length.
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
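The design-based idea is to regenerate the treatment assignment with the actual randomization procedure and recompute the test statistic each time. A minimal sketch for a permuted-block design with a difference-in-means statistic (the paper's statistics are built from model residuals, which would simply replace the statistic below):

```python
import numpy as np

rng = np.random.default_rng(42)

def permuted_block_assignment(n, block_size=4):
    """Two-arm assignment using permuted blocks (1 = treatment, 0 = control)."""
    blocks = []
    for _ in range(int(np.ceil(n / block_size))):
        block = np.array([0, 1] * (block_size // 2))
        rng.shuffle(block)
        blocks.append(block)
    return np.concatenate(blocks)[:n]

def randomization_test(y, treat, n_mc=10_000):
    """Monte Carlo randomization p-value for a difference-in-means statistic."""
    y = np.asarray(y, dtype=float)
    treat = np.asarray(treat)
    observed = y[treat == 1].mean() - y[treat == 0].mean()
    count = 0
    for _ in range(n_mc):
        t_star = permuted_block_assignment(len(y))
        stat = y[t_star == 1].mean() - y[t_star == 0].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return (count + 1) / (n_mc + 1)

# Hypothetical usage:
# p = randomization_test(outcomes, assignments)
```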
New generation HMA mix designs : accelerated pavement testing of a type C mix with the ALF machine.
DOT National Transportation Integrated Search
2012-09-01
Recent changes to the Texas hot-mix asphalt (HMA) mix-design procedures, such as the adaption of the higher-stiffer performance-grade asphalt-binder grades and the Hamburg test, have ensured that the mixes that are routinely used on Texas highways ar...
Stochastic models to study the impact of mixing on a fed-batch culture of Saccharomyces cerevisiae.
Delvigne, F; Lejeune, A; Destain, J; Thonart, P
2006-01-01
The mechanisms of interaction between microorganisms and their environment in a stirred bioreactor can be modeled by a stochastic approach. The procedure comprises two submodels: a classical stochastic model for the microbial cell circulation and a Markov chain model for the concentration gradient calculus. The advantage lies in the fact that the core of each submodel, i.e., the transition matrix (which contains the probabilities to shift from a perfectly mixed compartment to another in the bioreactor representation), is identical for the two cases. That means that both the particle circulation and fluid mixing process can be analyzed by use of the same modeling basis. This assumption has been validated by performing inert tracer (NaCl) and stained yeast cells dispersion experiments that have shown good agreement with simulation results. The stochastic model has been used to define a characteristic concentration profile experienced by the microorganisms during a fermentation test performed in a scale-down reactor. The concentration profiles obtained in this way can explain the scale-down effect in the case of a Saccharomyces cerevisiae fed-batch process. The simulation results are analyzed in order to give some explanations about the effect of the substrate fluctuation dynamics on S. cerevisiae.
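The shared core of both submodels is a transition matrix over perfectly mixed compartments, applied repeatedly to propagate either circulating particles or a tracer concentration field. A toy illustration with an assumed row-stochastic matrix:

```python
import numpy as np

def propagate_tracer(P, c0, n_steps):
    """Propagate a tracer through a compartment network (Markov chain step).

    P  : (n, n) row-stochastic transition matrix; P[i, j] is the probability of
         moving from compartment i to compartment j in one time step.
    c0 : (n,) initial amount of tracer in each compartment.
    Returns the (n_steps + 1, n) history of compartment contents.
    """
    history = [np.asarray(c0, dtype=float)]
    for _ in range(n_steps):
        history.append(history[-1] @ P)
    return np.array(history)

# Illustrative 3-compartment loop with some back-mixing (values are made up).
P = np.array([[0.6, 0.4, 0.0],
              [0.1, 0.6, 0.3],
              [0.3, 0.0, 0.7]])
print(propagate_tracer(P, [1.0, 0.0, 0.0], 5))
```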
Nassan, M A; Mohamed, E H
2014-01-01
Recent studies showed prominent antimicrobial activity of various plant extracts on certain pathogenic microorganisms, therefore we prepared crude aqueous extracts of black pepper, ginger and thyme and carried out an in vitro study by measuring antimicrobial activity of these extracts using the agar well diffusion method. An in vivo study was carried out on 50 adult healthy male albino rats which were divided into 5 groups, 10 rats each. Group 1: negative control group which received saline solution intragastrically daily; Group 2: Positive control group, injected with mixed bacterial suspension of S.aureus and E.coli as a model of pyelonephritis, then received saline solution intragastrically daily; Group 3: injected with the same dose of mixed bacterial suspension, then received 100 mg/kg/day black pepper extract intragastrically; Group 4: injected with mixed bacterial suspension then received 500 mg/kg/day ginger extract intragastrically. Group 5: injected with mixed bacterial suspension then received 500 mg/kg/day thyme extract intragastrically. All groups were sacrificed after either 1 or 4 weeks. Serum and blood samples were collected for lysozyme activity estimation using agarose lysoplate, measurement of nitric oxide production, and lymphocyte transformation test as well as for counting both total and differential leukocytes and erythrocytes. Kidney samples were tested histopathologically. Both in vivo and in vitro results confirm the efficacy of these extracts as natural antimicrobials and suggest the possibility of using them in treatment procedures.
LVC interaction within a mixed-reality training system
NASA Astrophysics Data System (ADS)
Pollock, Brice; Winer, Eliot; Gilbert, Stephen; de la Cruz, Julio
2012-03-01
The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainees to interact as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to empower LVC interaction in a reconfigurable, mixed reality environment. This system was developed and tested in an immersive, reconfigurable, mixed reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, to overcome LVC interaction challenges and to serve as a test bed for cutting-edge technology to meet future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and developed game engines. Evaluation involving military-trained personnel found this system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real time. This system achieves rare LVC interaction within multiple physical and virtual immersive environments for training in real time across many distributed systems.
Safiuddin, Md.; Raman, Sudharshan N.; Abdus Salam, Md.; Jumaat, Mohd. Zamin
2016-01-01
Modeling is a very useful method for the performance prediction of concrete. Most of the models available in literature are related to the compressive strength because it is a major mechanical property used in concrete design. Many attempts were taken to develop suitable mathematical models for the prediction of compressive strength of different concretes, but not for self-consolidating high-strength concrete (SCHSC) containing palm oil fuel ash (POFA). The present study has used artificial neural networks (ANN) to predict the compressive strength of SCHSC incorporating POFA. The ANN model has been developed and validated in this research using the mix proportioning and experimental strength data of 20 different SCHSC mixes. Seventy percent (70%) of the data were used to carry out the training of the ANN model. The remaining 30% of the data were used for testing the model. The training of the ANN model was stopped when the root mean square error (RMSE) and the percentage of good patterns was 0.001 and ≈100%, respectively. The predicted compressive strength values obtained from the trained ANN model were much closer to the experimental values of compressive strength. The coefficient of determination (R2) for the relationship between the predicted and experimental compressive strengths was 0.9486, which shows the higher degree of accuracy of the network pattern. Furthermore, the predicted compressive strength was found very close to the experimental compressive strength during the testing process of the ANN model. The absolute and percentage relative errors in the testing process were significantly low with a mean value of 1.74 MPa and 3.13%, respectively, which indicated that the compressive strength of SCHSC including POFA can be efficiently predicted by the ANN. PMID:28773520
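A minimal analogue of the workflow in the preceding abstract, i.e. a feed-forward network trained on 70% of the mixes and evaluated on the remaining 30% with RMSE and R², can be sketched with scikit-learn; the file name, column names, and network size are placeholders rather than the study's actual architecture:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, r2_score

df = pd.read_csv("schsc_mixes.csv")            # hypothetical mix-proportioning data
X = df.drop(columns="compressive_strength")    # e.g. cement, POFA, water, aggregates
y = df["compressive_strength"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=1))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"RMSE = {rmse:.2f} MPa, R^2 = {r2_score(y_te, pred):.3f}")
```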
Plasma-enhanced mixing and flameholding in supersonic flow.
Firsov, Alexander; Savelkin, Konstantin V; Yarantsev, Dmitry A; Leonov, Sergey B
2015-08-13
The results of experimental study of plasma-based mixing, ignition and flameholding in a supersonic model combustor are presented in the paper. The model combustor has a length of 600 mm and cross section of 72 mm width and 60 mm height. The fuel is directly injected into supersonic airflow (Mach number M=2, static pressure P(st)=160-250 Torr) through wall orifices. Two series of tests are focused on flameholding and mixing correspondingly. In the first series, the near-surface quasi-DC electrical discharge is generated by flush-mounted electrodes at electrical power deposition of W(pl)=3-24 kW. The scope includes parametric study of ignition and flame front dynamics, and comparison of three schemes of plasma generation: the first and the second layouts examine the location of plasma generators upstream and downstream from the fuel injectors. The third pattern follows a novel approach of combined mixing/ignition technique, where the electrical discharge distributes along the fuel jet. The last pattern demonstrates a significant advantage in terms of flameholding limit. In the second series of tests, a long discharge of submicrosecond duration is generated across the flow and along the fuel jet. A gasdynamic instability of thermal cavity developed after a deposition of high-power density in a thin plasma filament promotes the air-fuel mixing. The technique studied in this work has weighty potential for high-speed combustion applications, including cold start/restart of scramjet engines and support of transition regime in dual-mode scramjet and at off-design operation. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Temporal Variability and Statistics of the Strehl Ratio in Adaptive-Optics Images
2010-01-01
with the appropriate models and the residuals were extracted. This was done using ARIMA modelling (Box & Jenkins 1970). ARIMA stands for... It was used here for the opposite goal – to obtain the values of the i.i.d. “noise” and test its distribution. Mixed ARIMA models of order 2 were... often sufficient to ensure non-significant autocorrelation of the residuals. Table 2 lists the stationary sequences with their respective ARIMA models
An, Shengli; Zhang, Yanhong; Chen, Zheng
2012-12-01
To analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) in SPSS 19.0, GEE and GLMM models were tested on a sample of binary classification repeated measurement data. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.
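Outside SPSS, the same two analyses can be sketched in Python with statsmodels: a binomial GEE with an exchangeable working correlation and, as one GLMM option, a variational-Bayes binomial mixed model with a random intercept per subject. The column names are hypothetical:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("repeated_binary.csv")   # columns: outcome, time, group, subject

# GEE: population-averaged model with exchangeable within-subject correlation
gee = smf.gee("outcome ~ time + group", groups="subject", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())

# GLMM (one option): variational-Bayes binomial mixed model, random intercept per subject
glmm = BinomialBayesMixedGLM.from_formula(
    "outcome ~ time + group", {"subject": "0 + C(subject)"}, df).fit_vb()
print(glmm.summary())
```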
The Relationship Between Emotional Intelligence and Leader Performance
2002-03-01
independence, and self-actualization), (2) Interpersonal EQ (comprising empathy, social responsibility, and interpersonal relationships), (3) Stress ...management EQ (comprising stress tolerance and impulse control), (4) Adaptability EQ (comprising reality testing, flexibility, and problem solving), and (5...explain the significance of the model or its particular sub- scales or categories. Thus, mixed models, and the claims associated with them have been
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tayar, Jamie; Somers, Garrett; Pinsonneault, Marc H.
2017-05-01
In the updated APOGEE-Kepler catalog, we have asteroseismic and spectroscopic data for over 3000 first-ascent red giants. Given the size and accuracy of this sample, these data offer an unprecedented test of the accuracy of stellar models on the post-main-sequence. When we compare these data to theoretical predictions, we find a metallicity-dependent temperature offset with a slope of around 100 K per dex in metallicity. We find that this effect is present in all model grids tested, and that theoretical uncertainties in the models, correlated spectroscopic errors, and shifts in the asteroseismic mass scale are insufficient to explain this effect. Stellar models can be brought into agreement with the data if a metallicity-dependent convective mixing length is used, with Δα_ML,YREC ∼ 0.2 per dex in metallicity, a trend inconsistent with the predictions of three-dimensional stellar convection simulations. If this effect is not taken into account, isochrone ages for red giants from the Gaia data will be off by as much as a factor of two even at modest deviations from solar metallicity ([Fe/H] = −0.5).
Physical Controls on Biogeochemical Processes in Intertidal Zones of Beach Aquifers
NASA Astrophysics Data System (ADS)
Heiss, James W.; Post, Vincent E. A.; Laattoe, Tariq; Russoniello, Christopher J.; Michael, Holly A.
2017-11-01
Marine ecosystems are sensitive to inputs of chemicals from submarine groundwater discharge. Tidally influenced saltwater-freshwater mixing zones in beach aquifers can host biogeochemical transformations that modify chemical loads prior to discharge. A numerical variable-density groundwater flow and reactive transport model was used to evaluate the physical controls on reactivity for mixing-dependent and mixing-independent reactions in beach aquifers, represented as denitrification and sulfate reduction, respectively. A sensitivity analysis was performed across typical values of tidal amplitude, hydraulic conductivity, terrestrial freshwater flux, beach slope, dispersivity, and DOC reactivity. For the model setup and conditions tested, the simulations demonstrate that denitrification can remove up to 100% of terrestrially derived nitrate, and sulfate reduction can transform up to 8% of seawater-derived sulfate prior to discharge. Tidally driven mixing between saltwater and freshwater promotes denitrification along the boundary of the intertidal saltwater circulation cell in pore water between 1 and 10 ppt. The denitrification zone occupies on average 49% of the mixing zone. Denitrification rates are highest on the landward side of the circulation cell and decrease along circulating flow paths. Reactivity for mixing-dependent reactions increases with the size of the mixing zone and solute supply, while mixing-independent reactivity is controlled primarily by solute supply. The results provide insights into the types of beaches most efficient in altering fluxes of chemicals prior to discharge and could be built upon to help engineer beaches to enhance reactivity. The findings have implications for management to protect coastal ecosystems and the estimation of chemical fluxes to the ocean.
NASA Technical Reports Server (NTRS)
Chatfield, Robert B.; Sorek Hamer, Meytar; Esswein, Robert F.
2017-01-01
The Western US and many regions globally present daunting difficulties in understanding and mapping PM2.5 episodes. We evaluate extensions of a method that is independent of source description and transport/transformation. These regions suffer frequent few-day episodes due to shallow mixing; low satellite AOT and bright surfaces complicate the description. Nevertheless, we expect residual errors in our maps of less than 8 μg/m³ in episodes reaching 60-100 μg/m³, maps that detail pollution from Interstate 5. Our current success is due to the use of physically meaningful functions of MODIS-MAIAC-derived AOD, afternoon mixed-layer height, and relative humidity for a basin in which the latter are correlated. A mixed-effects model then describes a daily AOT-to-PM2.5 relationship. (Note: in other published mixed-effects models, AOT contributes minimally; we seek to build on these to develop useful estimation methods for similar situations.) We evaluate existing but more spotty information on size distribution (AERONET, MISR, MAIA, CALIPSO, other remote sensing). We also describe the usefulness of an equivalent mixing depth for water vapor vs. the meteorological boundary layer height. Each has virtues and limitations. Finally, we begin to evaluate methods for removing the complications due to detached but polluted layers (which do not mix to the surface) using geographical, meteorological, and remotely sensed data.
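The "daily AOT-to-PM2.5 relationship" in such mixed-effects approaches is typically a day-specific intercept and AOD slope; a sketch with statsmodels, using hypothetical predictor names for AOD, mixed-layer height, and relative humidity:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: pm25, aod, pblh, rh, day, site
df = pd.read_csv("pm25_aod.csv")

# Day-specific random intercepts and AOD slopes capture the day-to-day variability
# of the AOD-PM2.5 relationship; mixed-layer height and RH enter as fixed effects.
model = smf.mixedlm("pm25 ~ aod + pblh + rh", data=df,
                    groups="day", re_formula="~aod")
result = model.fit()
print(result.summary())
```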
On the Chemical Mixing Induced by Internal Gravity Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, T. M.; McElwaine, J. N.
Detailed modeling of stellar evolution requires a better understanding of the (magneto)hydrodynamic processes that mix chemical elements and transport angular momentum. Understanding these processes is crucial if we are to accurately interpret observations of chemical abundance anomalies, surface rotation measurements, and asteroseismic data. Here, we use two-dimensional hydrodynamic simulations of the generation and propagation of internal gravity waves in an intermediate-mass star to measure the chemical mixing induced by these waves. We show that such mixing can generally be treated as a diffusive process. We then show that the local diffusion coefficient does not depend on the local fluid velocity, but rather on the wave amplitude. We then use these findings to provide a simple parameterization for this diffusion, which can be incorporated into stellar evolution codes and tested against observations.
Mixed-Mode Decohesion Finite Elements for the Simulation of Delamination in Composite Materials
NASA Technical Reports Server (NTRS)
Camanho, Pedro P.; Davila, Carlos G.
2002-01-01
A new decohesion element with mixed-mode capability is proposed and demonstrated. The element is used at the interface between solid finite elements to model the initiation and non-self-similar growth of delaminations. A single relative displacement-based damage parameter is applied in a softening law to track the damage state of the interface and to prevent the restoration of the cohesive state during unloading. The softening law for mixed-mode delamination propagation can be applied to any mode interaction criterion such as the two-parameter power law or the three-parameter Benzeggagh-Kenane criterion. To demonstrate the accuracy of the predictions and the irreversibility capability of the constitutive law, steady-state delamination growth is simulated for quasistatic loading-unloading cycles of various single mode and mixed-mode delamination test specimens.
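The Benzeggagh-Kenane criterion referenced above interpolates the critical energy release rate between the pure-mode toughnesses, G_c = G_Ic + (G_IIc - G_Ic)(G_II/G_T)^η. A small sketch of that three-parameter form, with placeholder material values rather than the paper's data:

```python
def bk_toughness(g1c, g2c, mode_ratio, eta):
    """Benzeggagh-Kenane mixed-mode fracture toughness.

    G_c = G_Ic + (G_IIc - G_Ic) * (G_II / G_T) ** eta
    mode_ratio is G_II / G_T in [0, 1]; eta is the material exponent.
    """
    return g1c + (g2c - g1c) * mode_ratio ** eta

# Illustrative numbers only (kJ/m^2): pure mode I, pure mode II, and a 50/50 mix
print(bk_toughness(0.2, 1.0, 0.0, 2.0))   # -> 0.2
print(bk_toughness(0.2, 1.0, 1.0, 2.0))   # -> 1.0
print(bk_toughness(0.2, 1.0, 0.5, 2.0))   # -> 0.4
```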
Retrospective Binary-Trait Association Test Elucidates Genetic Architecture of Crohn Disease
Jiang, Duo; Zhong, Sheng; McPeek, Mary Sara
2016-01-01
In genetic association testing, failure to properly control for population structure can lead to severely inflated type 1 error and power loss. Meanwhile, adjustment for relevant covariates is often desirable and sometimes necessary to protect against spurious association and to improve power. Many recent methods to account for population structure and covariates are based on linear mixed models (LMMs), which are primarily designed for quantitative traits. For binary traits, however, LMM is a misspecified model and can lead to deteriorated performance. We propose CARAT, a binary-trait association testing approach based on a mixed-effects quasi-likelihood framework, which exploits the dichotomous nature of the trait and achieves computational efficiency through estimating equations. We show in simulation studies that CARAT consistently outperforms existing methods and maintains high power in a wide range of population structure settings and trait models. Furthermore, CARAT is based on a retrospective approach, which is robust to misspecification of the phenotype model. We apply our approach to a genome-wide analysis of Crohn disease, in which we replicate association with 17 previously identified regions. Moreover, our analysis on 5p13.1, an extensively reported region of association, shows evidence for the presence of multiple independent association signals in the region. This example shows how CARAT can leverage known disease risk factors to shed light on the genetic architecture of complex traits. PMID:26833331
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.; Johnston, William M., Jr.
2014-01-01
Mixed mode I-mode II interlaminar tests were conducted on IM7/8552 tape laminates using the mixed-mode bending test. Three mixed-mode ratios, G_II/G_T = 0.2, 0.5, and 0.8, were considered. Tests were performed at all three mixed-mode ratios under quasi-static and cyclic loading conditions, where the former static tests were used to determine initial loading levels for the latter fatigue tests. Fatigue tests at each mixed-mode ratio were performed at four loading levels, G_max, equal to 0.5 G_c, 0.4 G_c, 0.3 G_c, and 0.2 G_c, where G_c is the interlaminar fracture toughness at the corresponding mixed-mode ratio at which a test was performed. All fatigue tests were performed using constant-amplitude load control, and delamination growth was automatically documented using compliance solutions obtained from the corresponding quasi-static tests. Static fracture toughness data yielded a mixed-mode delamination criterion that exhibited a monotonic increase in G_c with mixed-mode ratio, G_II/G_T. Fatigue delamination onset parameters varied monotonically with G_II/G_T, which was expected based on the fracture toughness data. Analysis of non-normalized data yielded a monotonic change in Paris law exponent with mode ratio. This was not the case when normalized data were analyzed. Fatigue data normalized by the static R-curve were most affected in specimens tested at G_II/G_T = 0.2 (this process has little influence on the other data). In this case, the normalized data yielded a higher delamination growth rate compared to the raw data for a given loading level. Overall, fiber bridging appeared to be the dominant mechanism, affecting delamination growth rates in specimens tested at different load levels and differing mixed-mode ratios.
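The Paris-law exponents discussed above come from fitting log(da/dN) against log(G_max); a minimal least-squares sketch with synthetic numbers standing in for the measured growth-rate data:

```python
import numpy as np

def fit_paris_law(g_max, dadn):
    """Fit da/dN = C * (G_max)^m in log-log space; returns (C, m)."""
    m, log_c = np.polyfit(np.log(g_max), np.log(dadn), 1)
    return np.exp(log_c), m

# Synthetic illustration (units arbitrary): the recovered exponent should be ~6
g = np.array([100.0, 150.0, 200.0, 300.0])
rate = 1e-18 * g ** 6
c, m = fit_paris_law(g, rate)
print(f"C = {c:.3e}, m = {m:.2f}")
```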
Overflow Simulations using MPAS-Ocean in Idealized and Realistic Domains
NASA Astrophysics Data System (ADS)
Reckinger, S.; Petersen, M. R.; Reckinger, S. J.
2016-02-01
MPAS-Ocean is used to simulate an idealized, density-driven overflow using the dynamics of overflow mixing and entrainment (DOME) setup. Numerical simulations are benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model. A full parameter study is presented that looks at how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Horizontal resolutions with 50 km grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. Also, the study shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that travel further down slope. Sigma coordinates are less sensitive to changes in resolution, but are as sensitive to vertical viscosity as z-coordinates. Additionally, preliminary measurements of overflow diagnostics on global simulations using a realistic oceanic domain are presented.
MixSIAR: advanced stable isotope mixing models in R
Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...
Engineering characterisation of epoxidized natural rubber-modified hot-mix asphalt
Al-Mansob, Ramez A.; Ismail, Amiruddin; Yusoff, Nur Izzi Md.; Rahmat, Riza Atiq O. K.; Borhan, Muhamad Nazri; Albrka, Shaban Ismael; Azhari, Che Husna; Karim, Mohamed Rehan
2017-01-01
Road distress results in high maintenance costs. However, an increased understanding of asphalt behaviour and properties, coupled with technological developments, has allowed paving technologists to examine the benefits of introducing additives and modifiers. As a result, polymers have become extremely popular as modifiers to improve the performance of the asphalt mix. This study investigates the performance characteristics of epoxidized natural rubber (ENR)-modified hot-mix asphalt. Tests were conducted using ENR–asphalt mixes prepared using the wet process. Mechanical testing on the ENR–asphalt mixes showed that the resilient modulus of the mixes was greatly affected by testing temperature and frequency. On the other hand, although rutting performance decreased at high temperatures because of the increased elasticity of the ENR–asphalt mixes, fatigue performance improved at intermediate temperatures as compared to the base mix. However, durability tests indicated that the ENR–asphalt mixes were slightly susceptible to the presence of moisture. In conclusion, the performance of asphalt pavement can be enhanced by incorporating ENR as a modifier to counter major road distress. PMID:28182724
Three tests and three corrections: Comment on Koen and Yonelinas (2010)
Jang, Yoonhee; Mickes, Laura; Wixted, John T.
2012-01-01
The slope of the z-transformed receiver-operating characteristic (zROC) in recognition memory experiments is usually less than 1, which has long been interpreted to mean that the variance of the target distribution is greater than the variance of the lure distribution. The greater variance of the target distribution could arise because the different items on a list receive different increments in memory strength during study (the “encoding variability” hypothesis). In a test of that interpretation, J. Koen and A. Yonelinas (2010, K&Y) attempted to further increase encoding variability to see if it would further decrease the slope of the zROC. To do so, they presented items on a list for two different durations and then mixed the weak and strong targets together. After performing three tests on the mixed-strength data, K&Y concluded that encoding variability does not explain why the slope of the zROC is typically less than one. However, we show that their tests have no bearing on the encoding variability account. Instead, they bear on the mixture-UVSD model that corresponds to their experimental design. On the surface, the results reported by K&Y appear to be inconsistent with the predictions of the mixture-UVSD model (though they were taken to be inconsistent with the predictions of the encoding variability hypothesis). However, all three of the tests they performed contained errors. When those errors are corrected, the same three tests show that their data support, rather than contradict, the mixture-UVSD model (but they still have no bearing on the encoding variability hypothesis). PMID:22390323
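The zROC slope discussed above is obtained by z-transforming hit and false-alarm rates at successive confidence criteria and fitting a line; under the unequal-variance signal detection model the slope equals the ratio of lure to target standard deviations, so a slope below 1 implies greater target variance. A minimal sketch with hypothetical rates (not the K&Y data) follows.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical cumulative hit and false-alarm rates at five confidence criteria.
hit_rates = np.array([0.95, 0.88, 0.75, 0.60, 0.40])
fa_rates  = np.array([0.60, 0.40, 0.25, 0.12, 0.05])

z_hits = norm.ppf(hit_rates)   # z-transformed hit rates
z_fas  = norm.ppf(fa_rates)    # z-transformed false-alarm rates

# zROC slope: under the UVSD model, slope = sigma_lure / sigma_target.
slope, intercept = np.polyfit(z_fas, z_hits, 1)
print(f"zROC slope = {slope:.2f}")
```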
ERIC Educational Resources Information Center
Paquette, Kelli R.
2009-01-01
A mixed methodological approach was used to examine the effect of a cross-age tutoring writing program among second- and fourth-grade students in a rural elementary school in Delaware. Pre-test and post-test writing prompts were administered and evaluated using the 6+1 traits writing assessment rubric. Students were assessed qualitatively through…
2016-08-01
catastrophic effects on facilities, infrastructure, and military testing and training. Permafrost temperature, thickness, and geographic continuity... and fire severity (~0 to ~100% SOL consumption), they provide an excellent suite of sites to test and quantify the effects of fire severity on plant... Table 6.1, variables included in the explanatory matrix for black spruce dominance; Table 6.2, mixed effect model...
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-03-01
This volume presents the following appendices: ceramic test specimen drawings and schematics, mixed-mode and biaxial stress fracture of structural ceramics for advanced vehicular heat engines (U. Utah), mode I/mode II fracture toughness and tension/torsion fracture strength of NT154 Si nitride (Brown U.), summary of strength test results and fractography, fractography photographs, derivations of statistical models, Weibull strength plots for fast fracture test specimens, and size functions.
Hydrothermal contamination of public supply wells in Napa and Sonoma Valleys, California
Forrest, Matthew J.; Kulongoski, Justin T.; Edwards, Matthew S.; Farrar, Christopher D.; Belitz, Kenneth; Norris, Richard D.
2013-01-01
Groundwater chemistry and isotope data from 44 public supply wells in the Napa and Sonoma Valleys, California were determined to investigate mixing of relatively shallow groundwater with deeper hydrothermal fluids. Multivariate analyses including Cluster Analyses, Multidimensional Scaling (MDS), Principal Components Analyses (PCA), Analysis of Similarities (ANOSIM), and Similarity Percentage Analyses (SIMPER) were used to elucidate constituent distribution patterns, determine which constituents are significantly associated with these hydrothermal systems, and investigate hydrothermal contamination of local groundwater used for drinking water. Multivariate statistical analyses were essential to this study because traditional methods, such as mixing tests involving single species (e.g. Cl or SiO2) were incapable of quantifying component proportions due to mixing of multiple water types. Based on these analyses, water samples collected from the wells were broadly classified as fresh groundwater, saline waters, hydrothermal fluids, or mixed hydrothermal fluids/meteoric water wells. The Multivariate Mixing and Mass-balance (M3) model was applied in order to determine the proportion of hydrothermal fluids, saline water, and fresh groundwater in each sample. Major ions, isotopes, and physical parameters of the waters were used to characterize the hydrothermal fluids as Na–Cl type, with significant enrichment in the trace elements As, B, F and Li. Five of the wells from this study were classified as hydrothermal, 28 as fresh groundwater, two as saline water, and nine as mixed hydrothermal fluids/meteoric water wells. The M3 mixing-model results indicated that the nine mixed wells contained between 14% and 30% hydrothermal fluids. Further, the chemical analyses show that several of these mixed-water wells have concentrations of As, F and B that exceed drinking-water standards or notification levels due to contamination by hydrothermal fluids.
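The component proportions reported above come from a multivariate mixing and mass-balance calculation. The following is a minimal end-member mixing sketch in the same spirit (it is not the M3 code): non-negative fractions of three end members, constrained to sum to one, are fit to a sample's composition by least squares. All concentrations are hypothetical placeholders, not the Napa/Sonoma data.

```python
import numpy as np
from scipy.optimize import nnls

# Columns = end members (fresh groundwater, saline water, hydrothermal fluid);
# rows = constituents (Cl mg/L, B ug/L, SiO2 mg/L). Hypothetical values.
end_members = np.array([
    [  10.0, 8000.0, 1500.0],   # Cl
    [  50.0,  400.0, 4000.0],   # B
    [  30.0,   20.0,  150.0],   # SiO2
])
sample = np.array([1107.0, 875.0, 53.0])   # hypothetical mixed-water sample

# Append a heavily weighted row enforcing the sum-to-one constraint, then solve
# a non-negative least-squares problem for the mixing fractions.
w = 1e3
A = np.vstack([end_members, w * np.ones(3)])
b = np.concatenate([sample, [w]])
fractions, _ = nnls(A, b)
print("estimated fractions (fresh, saline, hydrothermal):", np.round(fractions, 3))
```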
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
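The BLUP/ridge equivalence cited above can be checked numerically. The sketch below does so in the simpler linear (Gaussian) case with fixed effects omitted, which is only an illustration of the general idea: treating SNP coefficients as random effects with variance sigma_g^2 makes their BLUP identical to a ridge estimate with penalty lambda = sigma_e^2 / sigma_g^2. The genotype matrix and phenotype are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
Z = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP dosages 0/1/2
y = Z @ rng.normal(0, 0.3, size=p) + rng.normal(0, 1.0, size=n)

sigma_g2, sigma_e2 = 0.3 ** 2, 1.0 ** 2
lam = sigma_e2 / sigma_g2

# Ridge estimator: (Z'Z + lambda*I)^{-1} Z'y
beta_ridge = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

# BLUP of random effects u in y = Z u + e, u ~ N(0, sigma_g^2 I):
# u_hat = sigma_g^2 * Z' (sigma_g^2 * Z Z' + sigma_e^2 * I)^{-1} y
V = sigma_g2 * Z @ Z.T + sigma_e2 * np.eye(n)
beta_blup = sigma_g2 * Z.T @ np.linalg.solve(V, y)

print(np.allclose(beta_ridge, beta_blup))   # True: the two estimators coincide
```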
Yu, Xiao-Ying; Barnett, J. Matthew; Amidan, Brett G.; ...
2017-12-12
The ANSI/HPS N13.1–2011 standard requires gaseous tracer uniformity testing for sampling associated with stacks used in radioactive air emissions. Sulfur hexafluoride (SF 6), a greenhouse gas with a high global warming potential, has long been the gas tracer used in such testing. To reduce the impact of gas tracer tests on the environment, nitrous oxide (N 2O) was evaluated as a potential replacement for SF 6. The physical evaluation included the development of a test plan to record the percent coefficient of variation and the percent maximum deviation between the two gases while considering variables such as fan configuration, injection position, and flow rate. Statistical power was calculated to determine how many sample sets were needed, and computational fluid dynamic modeling was utilized to estimate overall mixing in stacks. Results show there are no significant differences between the behaviors of the two gases, and SF 6 modeling corroborated N 2O test results. Although, in principle, all tracer gases should behave in an identical manner for measuring mixing within a stack, the series of physical tests guided by statistics was performed to demonstrate the equivalence of N 2O testing to SF 6 testing in the context of stack qualification tests. In conclusion, the results demonstrate that N 2O is a viable choice, leading to a fourfold reduction in global warming impacts for future similar compliance-driven testing.
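The two uniformity metrics named above are simple summary statistics over the tracer concentrations measured across the stack cross section. A minimal sketch is given below; the concentrations are hypothetical, and the exact formulas and acceptance criteria should be taken from the ANSI/HPS N13.1 standard itself.

```python
import numpy as np

# Hypothetical tracer-gas concentrations (ppm) at traverse points in the stack.
conc = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 9.7])

mean = conc.mean()
pct_cov = 100.0 * conc.std(ddof=1) / mean                 # percent coefficient of variation
pct_max_dev = 100.0 * np.abs(conc - mean).max() / mean    # percent maximum deviation

print(f"%COV = {pct_cov:.1f}%, %max deviation = {pct_max_dev:.1f}%")
```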
Janssens, K; Van Brecht, A; Zerihun Desta, T; Boonen, C; Berckmans, D
2004-06-01
The present paper outlines a modeling approach, which has been developed to model the internal dynamics of heat and moisture transfer in an imperfectly mixed ventilated airspace. The modeling approach, which combines the classical heat and moisture balance differential equations with the use of experimental time-series data, provides a physically meaningful description of the process and is very useful for model-based control purposes. The paper illustrates how the modeling approach has been applied to a ventilated laboratory test room with internal heat and moisture production. The results are evaluated and some valuable suggestions for future research are put forward. The modeling approach outlined in this study provides an ideal form for advanced model-based control system design. The relatively low number of parameters makes it well suited for model-based control purposes, as a limited number of identification experiments is sufficient to determine these parameters. The model concept provides information about the air quality and airflow pattern in an arbitrary building. By using this model as a simulation tool, the indoor air quality and airflow pattern can be optimized.
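The combination of balance equations with measured time series described above is, in essence, a low-order grey-box model whose few parameters are identified from data. The sketch below illustrates the idea with a hypothetical first-order discrete-time temperature model fitted by least squares; it is not the authors' model and all time series are simulated placeholders.

```python
import numpy as np

# Identify parameters of T[k+1] = a*T[k] + b*Q[k] + c*T_out[k] + d from data.
rng = np.random.default_rng(1)
n = 500
Q = rng.uniform(0, 1000, n)               # heat input (W), hypothetical
T_out = 15 + 2 * rng.standard_normal(n)   # outdoor temperature (deg C), hypothetical
T = np.empty(n); T[0] = 20.0
for k in range(n - 1):                    # "true" process used only to generate data
    T[k + 1] = 0.95 * T[k] + 0.002 * Q[k] + 0.04 * T_out[k] + 0.2 \
               + 0.05 * rng.standard_normal()

X = np.column_stack([T[:-1], Q[:-1], T_out[:-1], np.ones(n - 1)])
a, b, c, d = np.linalg.lstsq(X, T[1:], rcond=None)[0]
print(f"identified parameters: a={a:.3f}, b={b:.4f}, c={c:.3f}, d={d:.2f}")
```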
Depreter, Barbara; Devreese, Katrien M J
2016-09-01
Lupus anticoagulant (LAC) testing includes a screening, mixing and confirmation step. Although recently published guidelines on LAC testing are a useful step towards standardization, a lack of consensus remains whether to express mixing tests in clotting time (CT) or index of circulating anticoagulant (ICA). The influence of anticoagulant therapy, e.g. vitamin K antagonists (VKA) or direct oral anticoagulants (DOAC), on both methods of interpretation remains to be investigated. The objective of this study was to contribute to a simplification and standardization of the LAC three-step interpretation on the level of the mixing test. Samples from 148 consecutive patients with a LAC request and a prolonged screening step, and 77 samples from patients non-suspicious for LAC treated with VKA (n=37) or DOAC (n=30), were retrospectively evaluated. An activated partial thromboplastin time (aPTT) and dilute Russell's viper venom time (dRVVT) were used for routine LAC testing. The supplemental anticoagulant samples were tested with dRVVT only. We focused on the interpretation differences for mixing tests expressed as CT or ICA and compared the final LAC conclusion within each distinct group of concordant and discordant mixing test results. Mixing test interpretation by CT resulted in 10 (dRVVT) and 16 (aPTT) more LAC positive patients compared to interpretation with ICA. Isolated prolonged dRVVT screen mix ICA results were exclusively observed in samples from VKA-treated patients without suspicion for LAC. We recommend using CT with respect to the 99th percentile cut-off for interpretation of mixing steps in order to reach the highest sensitivity and specificity in LAC detection.
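The two ways of expressing a mixing test that are compared above differ only in arithmetic: the raw clotting time of the 1:1 mix versus an index that normalizes it to the patient sample. The sketch below uses the commonly cited Rosner formulation of the ICA; the clotting times are hypothetical, not study data, and local cut-offs would have to be established separately.

```python
# Hypothetical dRVVT clotting times (seconds).
ct_patient = 95.0   # patient plasma
ct_mix     = 78.0   # 1:1 mix of patient and normal pooled plasma
ct_normal  = 60.0   # normal pooled plasma

# Index of circulating anticoagulant (Rosner index).
ica = 100.0 * (ct_mix - ct_normal) / ct_patient
print(f"mixing-test CT = {ct_mix:.0f} s, ICA = {ica:.1f}")
# Interpretation compares ct_mix against a 99th-percentile CT cut-off, or ica
# against a locally established ICA cut-off.
```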
DOT National Transportation Integrated Search
2012-10-01
Recent changes to the Texas hot mix asphalt (HMA) mix-design procedures such as adaption of the higher-stiffer PG asphalt-binder grades and the Hamburg test have ensured that the mixes that are routinely used on the Texas highways are not prone to ru...
NASA Technical Reports Server (NTRS)
Harrington, Douglas E.
1998-01-01
The aerospace industry is currently investigating the effect of installing mixer/ejector nozzles on the core flow exhaust of high-bypass-ratio turbofan engines. This effort includes both full-scale engine tests at sea level conditions and subscale tests in static test facilities. Subscale model tests are to be conducted prior to full-scale testing. With this approach, model results can be analyzed and compared with analytical predictions. Problem areas can then be identified and design changes made and verified in subscale prior to committing to any final design configurations for engine ground tests. One of the subscale model test programs for the integrated mixer/ejector development was a joint test conducted by the NASA Lewis Research Center and Pratt & Whitney Aircraft. This test was conducted to study various mixer/ejector nozzle configurations installed on the core flow exhaust of advanced, high-bypass-ratio turbofan engines for subsonic, commercial applications. The mixer/ejector concept involves the introduction of large-scale, low-loss, streamwise vortices that entrain large amounts of secondary air and rapidly mix it with the primary stream. This results in increased ejector pumping relative to conventional ejectors and in more complete mixing within the ejector shroud. The latter improves thrust performance through the efficient energy exchange between the primary and secondary streams. This experimental program was completed in April 1997 in Lewis' CE-22 static test facility. Variables tested included the nozzle area ratio (A9/A8), which ranged from 1.6 to 3.0. This ratio was varied by increasing or decreasing the nozzle throat area, A8. Primary nozzles tested included both lobed mixers and conical primaries. These configurations were tested with and without an outer shroud, and the shroud position was varied by inserting spacers in it. In addition, data were acquired with and without secondary flow.
An analysis of the adoption of managerial innovation: cost accounting systems in hospitals.
Glandon, G L; Counte, M A
1995-11-01
The adoption of new medical technologies has received significant attention in the hospital industry, in part, because of its observed relation to hospital cost increases. However, few comprehensive studies exist regarding the adoption of non-medical technologies in the hospital setting. This paper develops and tests a model of the adoption of a managerial innovation, new to the hospital industry, that of cost accounting systems based upon standard costs. The conceptual model hypothesizes that four organizational context factors (size, complexity, ownership and slack resources) and two environmental factors (payor mix and interorganizational dependency) influence hospital adoption of cost accounting systems. Based on responses to a mail survey of hospitals in the Chicago area and AHA annual survey information for 1986, a sample of 92 hospitals was analyzed. Greater hospital size, complexity, slack resources, and interorganizational dependency all were associated with adoption. Payor mix had no significant influence and the hospital ownership variables had a mixed influence. The logistic regression model was significant overall and explained over 15% of the variance in the adoption decision.
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software
Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the ’omics’ context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
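A generic sketch of the computational shortcut that makes mixed-model GWAS tractable (this is not the OmicABEL implementation, simply the widely used idea): eigendecompose the relationship matrix once, rotate the phenotype and genotypes into the eigenbasis, and each SNP test then reduces to a cheap weighted least-squares fit. All inputs below are simulated placeholders, covariates are omitted, and the residual variance is assumed to be one after scaling.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 50
G = rng.integers(0, 3, size=(n, p)).astype(float)          # SNP dosages
K = np.corrcoef(G)                                          # crude relationship matrix
y = rng.multivariate_normal(np.zeros(n), 0.5 * K + 0.5 * np.eye(n))

h2 = 0.5                                                    # assumed/estimated variance ratio
vals, vecs = np.linalg.eigh(K)                              # decompose K once
d = h2 * vals + (1.0 - h2)                                  # eigenvalues of V = h2*K + (1-h2)*I
y_rot = vecs.T @ y                                          # rotate phenotype
G_rot = vecs.T @ G                                          # rotate genotypes

w = 1.0 / d                                                 # GLS weights in the eigenbasis
for j in range(3):                                          # first few SNPs only
    x = G_rot[:, j]
    beta = np.sum(w * x * y_rot) / np.sum(w * x * x)        # per-SNP GLS estimate
    se = np.sqrt(1.0 / np.sum(w * x * x))                   # assumes unit residual variance
    print(f"SNP {j}: beta = {beta:.3f}, z = {beta / se:.2f}")
```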
Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin
2017-02-04
The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplondinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, findings highlight the importance for future research to utilize accurate localization systems, different species, validated sound transmission distances, and to consider different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
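For context, the moment-based comparator that the article contrasts with likelihood-based mixed-effects models is the familiar inverse-variance random-effects (DerSimonian-Laird) procedure. A minimal sketch on log odds ratios follows; the event counts are hypothetical, and the likelihood-based alternatives would instead model the binomial counts directly.

```python
import numpy as np

# Hypothetical 2x2 counts from four studies.
events_t = np.array([3, 1, 7, 2]);  n_t = np.array([50, 40, 120, 60])
events_c = np.array([6, 4, 12, 5]); n_c = np.array([50, 45, 115, 55])

# Log odds ratios with a 0.5 continuity correction for sparse cells.
a, b = events_t + 0.5, n_t - events_t + 0.5
c, d = events_c + 0.5, n_c - events_c + 0.5
log_or = np.log(a * d / (b * c))
var = 1 / a + 1 / b + 1 / c + 1 / d

w = 1.0 / var
mu_fixed = np.sum(w * log_or) / np.sum(w)
Q = np.sum(w * (log_or - mu_fixed) ** 2)                     # heterogeneity statistic
k = len(log_or)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1.0 / (var + tau2)                                    # random-effects weights
mu_re = np.sum(w_re * log_or) / np.sum(w_re)
print(f"tau^2 = {tau2:.3f}, pooled log OR (random effects) = {mu_re:.3f}")
```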
Sun, WaiChing; Cai, Zhijun; Choo, Jinhyun
2016-11-18
An Arlequin poromechanics model is introduced to simulate the hydro-mechanical coupling effects of fluid-infiltrated porous media across different spatial scales within a concurrent computational framework. A two-field poromechanics problem is first recast as the twofold saddle point of an incremental energy functional. We then introduce Lagrange multipliers and compatibility energy functionals to enforce the weak compatibility of hydro-mechanical responses in the overlapped domain. Here, to examine the numerical stability of this hydro-mechanical Arlequin model, we derive a necessary condition for stability, the twofold inf–sup condition for multi-field problems, and establish a modified inf–sup test formulated in the product space of the solution field. We verify the implementation of the Arlequin poromechanics model through benchmark problems covering the entire range of drainage conditions. Finally, through these numerical examples, we demonstrate the performance, robustness, and numerical stability of the Arlequin poromechanics model.
Koller, Ingrid; Levenson, Michael R.; Glück, Judith
2017-01-01
The valid measurement of latent constructs is crucial for psychological research. Here, we present a mixed-methods procedure for improving the precision of construct definitions, determining the content validity of items, evaluating the representativeness of items for the target construct, generating test items, and analyzing items on a theoretical basis. To illustrate the mixed-methods content-scaling-structure (CSS) procedure, we analyze the Adult Self-Transcendence Inventory, a self-report measure of wisdom (ASTI, Levenson et al., 2005). A content-validity analysis of the ASTI items was used as the basis of psychometric analyses using multidimensional item response models (N = 1215). We found that the new procedure produced important suggestions concerning five subdimensions of the ASTI that were not identifiable using exploratory methods. The study shows that the application of the suggested procedure leads to a deeper understanding of latent constructs. It also demonstrates the advantages of theory-based item analysis. PMID:28270777
NASA Astrophysics Data System (ADS)
Dang, Haizheng; Zhao, Yibo
2016-09-01
This paper presents the CFD modeling and experimental verifications of a single-stage inertance tube coaxial Stirling-type pulse tube cryocooler operating at 30-35 K using mixed stainless steel mesh regenerator matrices without either double-inlet or multi-bypass. A two-dimensional axis-symmetric CFD model with the thermal non-equilibrium mode is developed to simulate the internal process, and the underlying mechanism of significantly reducing the regenerator losses with mixed matrices is discussed in detail based on the given six cases. The modeling also indicates that the combination of the given different mesh segments can be optimized to achieve the highest cooling efficiency or the largest exergy ratio, and then the verification experiments are conducted in which the satisfactory agreements between simulated and tested results are observed. The experiments achieve a no-load temperature of 27.2 K and the cooling power of 0.78 W at 35 K, or 0.29 W at 30 K, with an input electric power of 220 W and a reject temperature of 300 K.
NASA Technical Reports Server (NTRS)
Luneva, M. V.; Clayson, C. A.; Dubovikov, Mikhail
2015-01-01
In eddy resolving simulations, we test a mixed layer mesoscale parametrisation developed recently by Canuto and Dubovikov [Ocean Model., 2011, 39, 200-207]. With no adjustable parameters, the parametrisation yields the horizontal and vertical mesoscale fluxes in terms of coarse-resolution fields and eddy kinetic energy (EKE). We compare terms of the parametrisation diagnosed from coarse-grained fields with the eddy mesoscale fluxes diagnosed directly from the high resolution model. An expression for the EKE in terms of mean fields has also been found, to obtain a closed parametrisation in terms of the mean fields only. In 40 numerical experiments we simulated two types of flows: idealised flows driven by baroclinic instabilities only, and more realistic flows driven by wind and surface fluxes as well as by inflow-outflow. The diagnosed quasi-instantaneous horizontal and vertical mesoscale buoyancy fluxes (averaged over 1-2 degrees and 10 days) demonstrate a strong scatter typical for turbulent flows; however, the fluxes are positively correlated with the parametrisation, with higher correlations (0.5-0.74) in the experiments with larger baroclinic Rossby radius. After being averaged over 3-4 months, diffusivities diagnosed from the eddy resolving simulations are consistent with the parametrisation for a broad range of parameters. Diagnosed vertical mesoscale fluxes restratify the mixed layer and are in good agreement with the parametrisation unless vertical turbulent mixing in the upper layer becomes strong enough in comparison with mesoscale advection. In the latter case, numerical simulations demonstrate that the deviation of the fluxes from the parametrisation is controlled by a dimensionless parameter estimating the ratio of the vertical turbulent mixing term to mesoscale advection. An analysis using a modified omega-equation reveals that the effects of the vertical mixing of vorticity are responsible for the two- to three-fold amplification of the vertical mesoscale flux. Possible physical mechanisms responsible for the amplification of the vertical mesoscale flux are discussed.
Coherent Anti-Stokes Raman Scattering (CARS) as a Probe for Supersonic Hydrogen-Fuel/Air Mixing
NASA Technical Reports Server (NTRS)
Danehy, P. M.; O'Byrne, S.; Cutler, A. D.; Rodriguez, C. G.
2003-01-01
The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method was used to measure temperature and the absolute mole fractions of N2, O2 and H2 in a supersonic non-reacting fuel-air mixing experiment. Experiments were conducted in NASA Langley Research Center's Direct Connect Supersonic Combustion Test Facility. Under normal operation of this facility, hydrogen and air burn to increase the enthalpy of the test gas and O2 is added to simulate air. This gas is expanded through a Mach 2 nozzle and into a combustor model where fuel is then injected, mixes and burns. In the present experiment the O2 of the test gas is replaced by N2. The lack of oxidizer inhibited combustion of the injected H2 fuel jet allowing the fuel/air mixing process to be studied. CARS measurements were performed 427 mm downstream of the nozzle exit and 260 mm downstream of the fuel injector. Maps were obtained of the mean temperature, as well as the N2, O2 and H2 mean mole fraction fields. A map of mean H2O vapor mole fraction was also inferred from these measurements. Correlations between different measured parameters and their fluctuations are presented. The CARS measurements are compared with a preliminary computational prediction of the flow.
Zhao, Yue; Hambleton, Ronald K.
2017-01-01
In item response theory (IRT) models, assessing model-data fit is an essential step in IRT calibration. While no general agreement has ever been reached on the best methods or approaches to use for detecting misfit, perhaps the more important comment based upon the research findings is that rarely does the research evaluate IRT misfit by focusing on the practical consequences of misfit. The study investigated the practical consequences of IRT model misfit in examining the equating performance and the classification of examinees into performance categories in a simulation study that mimics a typical large-scale statewide assessment program with mixed-format test data. The simulation study was implemented by varying three factors, including choice of IRT model, amount of growth/change of examinees’ abilities between two adjacent administration years, and choice of IRT scaling methods. Findings indicated that the extent of significant consequences of model misfit varied over the choice of model and IRT scaling methods. In comparison with mean/sigma (MS) and Stocking and Lord characteristic curve (SL) methods, separate calibration with linking and fixed common item parameter (FCIP) procedure was more sensitive to model misfit and more robust against various amounts of ability shifts between two adjacent administrations regardless of model fit. SL was generally the least sensitive to model misfit in recovering equating conversion and MS was the least robust against ability shifts in recovering the equating conversion when a substantial degree of misfit was present. The key messages from the study are that practical ways are available to study model fit, and that model fit or misfit can have consequences that should be considered when choosing an IRT model. Not only does the study address the consequences of IRT model misfit, but also it is our hope to help researchers and practitioners find practical ways to study model fit and to investigate the validity of particular IRT models for achieving a specified purpose, to assure that the successful use of the IRT models is realized, and to improve the applications of IRT models with educational and psychological test data. PMID:28421011
NASA Astrophysics Data System (ADS)
Hernandez-Gonzalez, L. M.; Roche, K. R.; Xie, M.; Packman, A. I.
2014-12-01
Important biological, physical and chemical processes, such as fluxes of oxygen, nutrients and contaminants, occur across sediment-water interfaces. These processes are influenced by bioturbation activities of benthic animals. Bioturbation is thought to be significant in releasing metals to the water column from contaminated sediments, but metals contamination also affects organism activity. Consequently, the aim of this study was to consider the interactions of biological activity, sediment chemistry, pore water transport, and chemical reactions in sediment mixing and the flux and toxicity of metals in sediments. Prior studies have modeled bioturbation as a diffusive process. However, diffusion models often do not describe accurately sediment mixing due to bioturbation. To this end, we used the continuous time random walk (CTRW) model to assess sediment mixing caused by bioturbation activity of Lumbriculus variegatus worms. We performed experiments using fine-grained sediments with different levels of zinc contamination from Lake DePue, which is a Superfund Site in Illinois. The tests were conducted in an aerated fresh water chamber. Fluorescent particulate tracers were added to the sediment surface to quantify mixing processes and the influence of metals contaminants on L. variegatus bioturbation activity. We observed sediment mixing and organism activity by time-lapse photography over 14 days. Then, we analyzed the images to characterize the fluorescent particle concentration as a function of sediment depth and time. Results reveal that sediment mixing caused by L. variegatus is subdiffusive in time and superdiffusive in space. These results suggest that anomalous sediment mixing is probably a ubiquitous process, as this behavior has only been observed previously in marine sediments. Also, the experiments indicate that bioturbation and sediment mixing decreased in the presence of higher metals concentrations in sediments. This process is expected to decrease efflux of metals from highly contaminated sediments by reducing biological activity.
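Anomalous (non-Fickian) mixing of the kind described above is commonly diagnosed by how the mean squared displacement of tracer particles scales with time, MSD(t) ~ t^alpha, with alpha < 1 indicating subdiffusion and alpha > 1 superdiffusion. A minimal fitting sketch follows; the trajectory statistics are simulated placeholders, not the Lumbriculus variegatus data, and this is only one possible diagnostic, not the CTRW analysis itself.

```python
import numpy as np

rng = np.random.default_rng(3)
times = np.arange(1, 15)                        # observation times (days)
alpha_true = 0.6                                # "true" exponent used to generate data
msd = 0.5 * times ** alpha_true * (1 + 0.05 * rng.standard_normal(len(times)))

# Fit MSD ~ t^alpha by linear regression in log-log space.
alpha_fit, log_prefactor = np.polyfit(np.log(times), np.log(msd), 1)
print(f"fitted MSD exponent alpha = {alpha_fit:.2f} (subdiffusive if < 1)")
```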
Test Data Analysis of a Spray Bar Zero-Gravity Liquid Hydrogen Vent System for Upper Stages
NASA Technical Reports Server (NTRS)
Hedayat, A.; Bailey, J. W.; Hastings, L. J.; Flachbart, R. H.
2003-01-01
To support development of a zero-gravity pressure control capability for liquid hydrogen (LH2), a series of thermodynamic venting system (TVS) tests was conducted in 1996 and 1998 using the Marshall Space Flight Center (MSFC) multipurpose hydrogen test bed (MHTB). These tests were performed with ambient heat leaks of approximately 20 and 50 W for tank fill levels of 90%, 50%, and 25%. TVS performance testing revealed that the spray bar was highly effective in providing tank pressure control within a 7-kPa band (131-138 kPa), and complete destratification of the liquid and the ullage was achieved with all test conditions. Seven of the MHTB tests were correlated with the TVS performance analytical model. The tests were selected to encompass the range of tank fill levels, ambient heat leaks, operational modes, and ullage pressurants. The ullage pressure and temperature and the bulk liquid saturation pressure and temperature predicted by the TVS model were compared with the test data. During extended self-pressurization periods, following tank lockup, the model predicted faster pressure rise rates than were measured. However, once the system entered the cyclic mixing/venting operational mode, the modeled and measured data were quite similar.
Extracting a mix parameter from 2D radiography of variable density flow
NASA Astrophysics Data System (ADS)
Kurien, Susan; Doss, Forrest; Livescu, Daniel
2017-11-01
A methodology is presented for extracting quantities related to the statistical description of the mixing state from the 2D radiographic image of a flow. X-ray attenuation through a target flow is given by the Beer-Lambert law which exponentially damps the incident beam intensity by a factor proportional to the density, opacity and thickness of the target. By making reasonable assumptions for the mean density, opacity and effective thickness of the target flow, we estimate the contribution of density fluctuations to the attenuation. The fluctuations thus inferred may be used to form the correlation of density and specific-volume, averaged across the thickness of the flow in the direction of the beam. This correlation function, denoted by b in RANS modeling, quantifies turbulent mixing in variable density flows. The scheme is tested using DNS data computed for variable-density buoyancy-driven mixing. We quantify the deficits in the extracted value of b due to target thickness, Atwood number, and modeled noise in the incident beam. This analysis corroborates the proposed scheme to infer the mix parameter from thin targets at moderate to low Atwood numbers. The scheme is then applied to an image of counter-shear flow obtained from experiments at the National Ignition Facility. US Department of Energy.
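Under the assumptions stated above (known mean opacity and effective thickness, thin target), the Beer-Lambert law I = I0*exp(-kappa*rho*L) can be inverted pixel-by-pixel for a path-averaged density, from which the density/specific-volume correlation b = -<rho' (1/rho)'> is formed. The sketch below illustrates this inversion on a synthetic "radiograph"; it is an illustration of the idea, not the authors' processing pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)
kappa, L, I0 = 0.2, 1.0, 1.0                      # assumed opacity, thickness, incident intensity
rho_true = np.clip(1.0 + 0.2 * rng.standard_normal((64, 64)), 0.5, None)
image = I0 * np.exp(-kappa * rho_true * L)        # idealized transmission image (placeholder)

rho = -np.log(image / I0) / (kappa * L)           # inverted path-averaged density field
rho_fluct = rho - rho.mean()
v_fluct = 1.0 / rho - (1.0 / rho).mean()          # specific-volume fluctuations
b = -np.mean(rho_fluct * v_fluct)                 # mix parameter; positive in mixing regions
print(f"mix parameter b = {b:.4f}")
```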
NASA Astrophysics Data System (ADS)
Sotner, R.; Kartci, A.; Jerabek, J.; Herencsar, N.; Dostal, T.; Vrba, K.
2012-12-01
Several behavioral models of current active elements for experimental purposes are introduced in this paper. These models are based on commercially available devices. They are suitable for experimental tests of current- and mixed-mode filters, oscillators, and other circuits (employing current-mode active elements) frequently used in analog signal processing, without the necessity of on-chip fabrication of a proper active element. Several methods of electronic control of intrinsic resistance in the proposed behavioral models are discussed. All predictions and theoretical assumptions are supported by simulations and experiments. This contribution helps to provide a cheaper and more effective route to preliminary laboratory tests without expensive on-chip fabrication of special active elements.
NASA Astrophysics Data System (ADS)
Rhodes, J. M.; Weis, D.; Norman, M. D.; Garcia, M. O.
2007-12-01
The long held notion that basaltic magmas are produced by decompressional melting of peridotite is under challenge. Recent models for the Hawaiian and other plumes argue that they consist of a heterogeneous mix of peridotite and discrete eclogite blobs, the latter derived from recycled subducted crust. Eclogite melting produces relatively siliceous magmas (dacite to andesite) which either mix with picritic melts from the peridotite, or, more plausibly, react with the peridotite to produce pyroxenite. Melting of varying proportions of the peridotite/pyroxenite mix is thought to produce the correlated compositional and isotopic characteristics of Hawaiian volcanoes. Magmas from Mauna Loa and Koolau volcanoes are thought to contain more of the recycled component; those from Loihi and Kilauea volcanoes contain less. A simple test of these mixed source models examines whether isotopic changes within the long magmatic history of a single volcano are accompanied by corresponding changes in major and trace element characteristics. Mauna Loa, where we have sampled around 400 - 500 ka of the volcano's eruptive history, provides an excellent opportunity for such a test. During this time, Mauna Loa will have traversed almost half the Hawaiian plume. According to the models, it should have erupted magmas produced from a range of pyroxenite/peridotite mixes with corresponding differences in both isotopic ratios and major and trace elements. Our data show that there is only minor isotopic (Sr, Pb, Nd, Hf) diversity in young lavas (<100 ka), but older lavas are highly diverse, ranging from modern values to those that are close to, and overlap with, those of Loihi volcano. If this isotopic diversity is a consequence of different proportions of pyroxenite and peridotite in the plume source, as the new models predict, we should expect to see correlated changes in bulk composition, particularly in normalized SiO2, CaO/Al2O3, FeO/MgO and Ni - MgO relationships, as well as changes in Ni - Sc - V relationships. We do not. These parameters remain remarkably uniform over the 400 to 500 ka magmatic history of the volcano, with no correlated variation with isotopic ratios. We conclude that the isotopic heterogeneity within the Hawaiian plume is intrinsic to the peridotite plume source and not dependent on variable contributions from entrained, lithologically-discrete units.
Spatially-Resolved Analyses of Aerodynamic Fallout from a Uranium-Fueled Nuclear Test
Lewis, L. A.; Knight, K. B.; Matzel, J. E.; ...
2015-07-28
Five silicate fallout glass spherules produced in a uranium-fueled, near-surface nuclear test were characterized by secondary ion mass spectrometry, electron probe microanalysis, autoradiography, scanning electron microscopy, and energy-dispersive x-ray spectroscopy. Several samples display compositional heterogeneity suggestive of incomplete mixing between major elements and natural U (235U/238U = 0.00725) and enriched U. Samples exhibit extreme spatial heterogeneity in U isotopic composition, with 0.02 < 235U/238U < 11.84 among all five spherules and 0.02 < 235U/238U < 7.41 within a single spherule. Moreover, in two spherules, the 235U/238U ratio is correlated with changes in major element composition, suggesting the agglomeration of chemically and isotopically distinct molten precursors. Two samples are nearly homogenous with respect to major element and uranium isotopic composition, suggesting extensive mixing, possibly due to experiencing higher temperatures or residing longer in the fireball. Linear correlations between 234U/238U, 235U/238U, and 236U/238U ratios are consistent with a two-component mixing model, which is used to illustrate the extent of mixing between natural and enriched U end members.
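The two-component mixing relation referred to above is linear when ratios are combined with mixing fractions expressed relative to the denominator isotope: for R = 235U/238U, R_mix = f*R_enriched + (1 - f)*R_natural, where f is the share of 238U contributed by the enriched end member. A minimal sketch follows; the enriched end-member ratio is a hypothetical placeholder, not a value from the study.

```python
# Estimate the fraction of 238U contributed by the enriched end member
# from a measured 235U/238U ratio, assuming two-component mixing.
r_natural  = 0.00725   # natural uranium 235U/238U
r_enriched = 12.0      # hypothetical enriched end-member ratio
r_measured = 7.41      # e.g., the highest within-spherule ratio quoted above

f_enriched = (r_measured - r_natural) / (r_enriched - r_natural)
print(f"fraction of 238U from the enriched end member: {f_enriched:.3f}")
```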
NASA Technical Reports Server (NTRS)
Betancourt, R. Morales; Lee, D.; Oreopoulos, L.; Sud, Y. C.; Barahona, D.; Nenes, A.
2012-01-01
The salient features of mixed-phase and ice clouds in a GCM cloud scheme are examined using the ice formation parameterizations of Liu and Penner (LP) and Barahona and Nenes (BN). The performance of the LP and BN ice nucleation parameterizations was assessed in the GEOS-5 AGCM using the McRAS-AC cloud microphysics framework in single column mode. Four-dimensional assimilated data from the intensive observation period of the ARM TWP-ICE campaign was used to drive the fluxes and lateral forcing. Simulation experiments were established to test the impact of each parameterization on the resulting cloud fields. Three commonly used IN spectra were utilized in the BN parameterization to describe the availability of IN for heterogeneous ice nucleation. The results show large similarities in the cirrus cloud regime between all the schemes tested, in which ice crystal concentrations were within a factor of 10 regardless of the parameterization used. In mixed-phase clouds there are some persistent differences in cloud particle number concentration and size, as well as in cloud fraction, ice water mixing ratio, and ice water path. Contact freezing in the simulated mixed-phase clouds contributed to transfer liquid to ice efficiently, so that on average, the clouds were fully glaciated at T approximately 260K, irrespective of the ice nucleation parameterization used. Comparison of simulated ice water path to available satellite derived observations was also performed, finding that all the schemes tested with the BN parameterization predicted average values of IWP within plus or minus 15% of the observations.
Carter, Nathan T; Dalal, Dev K; Boyce, Anthony S; O'Connell, Matthew S; Kung, Mei-Chuan; Delgado, Kristin M
2014-07-01
The personality trait of conscientiousness has seen considerable attention from applied psychologists due to its efficacy for predicting job performance across performance dimensions and occupations. However, recent theoretical and empirical developments have questioned the assumption that more conscientiousness always results in better job performance, suggesting a curvilinear link between the 2. Despite these developments, the results of studies directly testing the idea have been mixed. Here, we propose this link has been obscured by another pervasive assumption known as the dominance model of measurement: that higher scores on traditional personality measures always indicate higher levels of conscientiousness. Recent research suggests dominance models show inferior fit to personality test scores as compared to ideal point models that allow for curvilinear relationships between traits and scores. Using data from 2 different samples of job incumbents, we show the rank-order changes that result from using an ideal point model expose a curvilinear link between conscientiousness and job performance 100% of the time, whereas results using dominance models show mixed results, similar to the current state of the literature. Finally, with an independent cross-validation sample, we show that selection based on predicted performance using ideal point scores results in more favorable objective hiring outcomes. Implications for practice and future research are discussed.
Low Emissions RQL Flametube Combustor Component Test Results
NASA Technical Reports Server (NTRS)
Holdeman, James D.; Chang, Clarence T.
2001-01-01
This report describes and summarizes elements of the High Speed Research (HSR) Low Emissions Rich burn/Quick mix/Lean burn (RQL) flame tube combustor test program. This test program was performed at NASA Glenn Research Center circa 1992. The overall objective of this test program was to demonstrate and evaluate the capability of the RQL combustor concept for High Speed Civil Transport (HSCT) applications with the goal of achieving NOx emission index levels of 5 g/kg-fuel at representative HSCT supersonic cruise conditions. The specific objectives of the tests reported herein were to investigate component performance of the RQL combustor concept for use in the evolution of ultra-low NOx combustor design tools. Test results indicated that the RQL combustor emissions and performance at simulated supersonic cruise conditions were predominantly sensitive to the quick mixer subcomponent performance and not sensitive to fuel injector performance. Test results also indicated the mixing section configuration employing a single row of circular holes was the lowest NOx mixer tested probably due to the initial fast mixing characteristics of this mixing section. However, other quick mix orifice configurations such as the slanted slot mixer produced substantially lower levels of carbon monoxide emissions most likely due to the enhanced circumferential dispersion of the air addition. Test results also suggested that an optimum momentum-flux ratio exists for a given quick mix configuration. This would cause undesirable jet under- or over-penetration for test conditions with momentum-flux ratios below or above the optimum value. Tests conducted to assess the effect of quick mix flow area indicated that reduction in the quick mix flow area produced lower NOx emissions at reduced residence time, but this had no effect on NOx emissions measured at similar residence time for the configurations tested.
Description and evaluation of an interference assessment for a slotted-wall wind tunnel
NASA Technical Reports Server (NTRS)
Kemp, William B., Jr.
1991-01-01
A wind-tunnel interference assessment method applicable to test sections with discrete finite-length wall slots is described. The method is based on high order panel method technology and uses mixed boundary conditions to satisfy both the tunnel geometry and wall pressure distributions measured in the slotted-wall region. Both the test model and its sting support system are represented by distributed singularities. The method yields interference corrections to the model test data as well as surveys through the interference field at arbitrary locations. These results include the equivalent of tunnel Mach calibration, longitudinal pressure gradient, tunnel flow angularity, wall interference, and an inviscid form of sting interference. Alternative results which omit the direct contribution of the sting are also produced. The method was applied to the National Transonic Facility at NASA Langley Research Center for both tunnel calibration tests and tests of two models of subsonic transport configurations.
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets.
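The two transformations named above are simple functions of a study-level proportion x/n (for example, sensitivity estimated from a single diagnostic-accuracy study). A minimal sketch with the commonly used forms and approximate variances follows; the counts are hypothetical, and the exact variants used in the article should be taken from the article itself.

```python
import numpy as np

x, n = 45, 50   # hypothetical: true positives out of diseased subjects

# Arcsine square-root transformation, approximate variance 1/(4n).
y_arcsine = np.arcsin(np.sqrt(x / n))
var_arcsine = 1.0 / (4 * n)

# Freeman-Tukey double arcsine transformation (sum form), approximate variance 1/(n + 0.5).
y_ft = np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))
var_ft = 1.0 / (n + 0.5)

print(f"arcsine: {y_arcsine:.3f} (var {var_arcsine:.4f}); "
      f"Freeman-Tukey: {y_ft:.3f} (var {var_ft:.4f})")
```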
The Study on Development of Light-Weight Foamed Mortar for Tunnel Backfill
NASA Astrophysics Data System (ADS)
Ma, Sang-Joon; Kang, Eun-Gu; Kim, Dong-Min
This study was intended to develop a Light-Weight Foamed Mortar for use as NATM composite lining backfill. As a result of the study, a mixing method was developed that satisfies the requirements for compressive strength, permeability coefficient, fluidity, specific gravity and settlement, and its field applicability was verified through a model test. The Light-Weight Foamed Mortar mix developed in this study is therefore expected to be applicable to NATM composite lining, thereby contributing to improved stability and drainage performance of the lining.
Hwang, Jaewon; Yoon, Taeshik; Jin, Sung Hwan; Lee, Jinsup; Kim, Taek-Soo; Hong, Soon Hyung; Jeon, Seokwoo
2013-12-10
RGO flakes are homogeneously dispersed in a Cu matrix through a molecular-level mixing process. This novel fabrication process prevents the agglomeration of the RGO and enhances adhesion between the RGO and the Cu. The yield strength of the 2.5 vol% RGO/Cu nanocomposite is 1.8 times higher than that of pure Cu. The strengthening mechanism of the RGO is investigated by a double cantilever beam test using the graphene/Cu model structure. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Laboratory study - laboratory testing of bridge deck mixes
DOT National Transportation Integrated Search
2003-03-01
The purpose of this investigation was to develop bridge deck mixes that will improve field performance and minimize cracking potential compared to MoDOT's current (B-2) bridge deck mix design. The mix designs developed in this study were tested and c...
NASA Astrophysics Data System (ADS)
Howells, A. E.; Oiler, J.; Fecteau, K.; Boyd, E. S.; Shock, E.
2014-12-01
The parameters influencing species diversity in natural ecosystems are difficult to assess due to the long and experimentally prohibitive timescales needed to develop causative relationships among measurements. Ecological diversity-disturbance models suggest that disturbance is a mechanism for increased species diversity, allowing for coexistence of species at an intermediate level of disturbance. Observing this mechanism often requires long timescales, such as the succession of a forest after a fire. In this study we evaluated the effect of mixing of two end-member hydrothermal fluids on the diversity and structure of a microbial community where disturbance occurs on small temporal and spatial scales. Outflow channels from two hot springs of differing geochemical composition in Yellowstone National Park, one at pH 3.3 and 36 °C and the other at pH 7.6 and 61 °C, flow together to create a mixing zone on the order of a few meters. Geochemical measurements were made at both incoming streams and at a site of complete mixing downstream of the mixing zone, at pH 6.5 and 46 °C. Compositions were estimated across the mixing zone at 1 cm intervals using microsensor temperature and conductivity measurements and a mixing model. Qualitatively, there are four distinct ecotones existing over ranges in temperature and pH across the mixing zone. Community analysis of the 16S rRNA genes of these ecotones shows a peak in diversity at maximal mixing. Principal component analysis of community 16S rRNA genes reflects coexistence of species, with communities at maximal mixing plotting intermediate to communities at the distal ends of the mixing zone. These spatial biological and geochemical observations suggest that the mixing zone is a dynamic ecosystem where geochemistry and biological diversity are governed by changes in the flow rate and geochemical composition of the two hot spring sources. In ecology, understanding how environmental disruption increases species diversity is a foundation for ecosystem conservation. By studying a hot spring environment where detailed measurements of geochemical variation and community diversity can be made at small spatial scales, the mechanisms by which maximal diversity is achieved can be tested and may assist in applications of diversity-disturbance models for larger ecosystems.
Transport relaxation processes in supercritical fluids
NASA Astrophysics Data System (ADS)
Jonas, J.
The technique for solubility measurements of solids in compressed supercritical fluids using NMR, and theoretical analysis of experimental data on collision-induced scattering, were examined. Initial tests for a determination of solid solubilities in supercritical fluids without mixing were previously described, and these preparations have continued. Supercritical carbon dioxide dissolving naphthalene, for which solubility data are already available (M. McHugh, M.E. Paulaitis, J. Chem. Eng. Data, Vol. 25 (4), 1980), is being studied. This initial testing of the NMR technique for measuring solubilities in a well-characterized system should prove very valuable for our later determinations with the proposed mixing probe. Systematic experimental studies of collision-induced spectra in several supercritical fluids using both Raman and Rayleigh scattering are continuing. The experimental work on SF6 and CH4 was finished, and the data are being analyzed to test the various theoretical models for collision-induced scattering.
A mathematical model for the transfer of soil solutes to runoff under water scouring.
Yang, Ting; Wang, Quanjiu; Wu, Laosheng; Zhang, Pengyu; Zhao, Guangxu; Liu, Yanli
2016-11-01
The transfer of nutrients from soil to runoff often causes unexpected pollution in water bodies. In this study, a mathematical model that relates the detachment of soil particles by water flow to the degree of mixing between overland flow and soil nutrients was proposed. The model assumes that the mixing depth is an integral of average water flow depth, and it was evaluated by experiments with three water inflow rates to bare soil surfaces and to surfaces with eight treatments of different stone coverages. The model-predicted outflow rates were compared with the experimentally observed data to test the accuracy of the infiltration parameters obtained by curve fitting the models to the data. Further analysis showed that the comprehensive mixing coefficient (ke) was linearly correlated with the Reynolds number Re (R^2 > 0.9), and this relationship was verified by comparing the simulated potassium concentration and cumulative mass with observed data, respectively. The bias error analysis (Nash-Sutcliffe coefficient of efficiency (NS), relative error (RE), and the coefficient of determination (R^2)) showed that the data predicted by the proposed model were in good agreement with the measured data. Thus, the model can be used to guide soil-water and fertilization management to minimize nutrient runoff from cropland. Copyright © 2016 Elsevier B.V. All rights reserved.
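The abstract evaluates model performance with the Nash-Sutcliffe efficiency (NS), relative error (RE), and coefficient of determination (R^2). A minimal Python sketch of these goodness-of-fit measures is given below; the RE shown is one common convention and may differ in detail from the paper's definition, and the observed/simulated values are invented for illustration:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def relative_error(obs, sim):
    """Mean relative error, as a fraction of the observed values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean((sim - obs) / obs)

def r_squared(obs, sim):
    """Coefficient of determination of the observed-simulated relationship."""
    r = np.corrcoef(obs, sim)[0, 1]
    return r ** 2

obs = [0.12, 0.35, 0.50, 0.62, 0.70]   # e.g. cumulative nutrient mass (illustrative)
sim = [0.10, 0.33, 0.52, 0.60, 0.73]
print(nash_sutcliffe(obs, sim), relative_error(obs, sim), r_squared(obs, sim))
```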
Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode
NASA Astrophysics Data System (ADS)
Seibert, P.; Frank, A.
2003-04-01
A method for the calculation of source-receptor (s-r) relationships (sensitivity of a trace substance concentration at some place and time to emission at some place and time) with Lagrangian particle models has been derived and presented previously (Air Pollution Modeling and its Application XIV, Proc. of ITM Boulder 2000). Now, the generalisation to any linear s-r relationship, including dry and wet deposition, decay etc., is presented. It was implemented in the model FLEXPART and tested extensively in idealised set-ups. These tests turned out to be very useful for finding minor model bugs and inaccuracies, and can be recommended generally for model testing. Recently, a convection scheme has been integrated in FLEXPART which was also tested. Both source and receptor can be specified in mass mixing ratio or mass units. Properly taking care of this is quite relevant for sources and receptors at different levels in the atmosphere. Furthermore, we present a test with the transport of aerosol-bound Caesium-137 from the areas contaminated by the Chernobyl disaster to Stockholm during one month.
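Once a backward run has produced a source-receptor matrix, applying it to an emission field is a simple matrix-vector product. The sketch below is only a schematic of that final step; the sensitivity values, shapes, and emissions are invented, and FLEXPART itself is not involved:

```python
import numpy as np

# Hypothetical sensitivities for 3 receptor sampling intervals x 4 source cells,
# as a backward (receptor-oriented) run would provide.  Whether these carry
# mixing-ratio or mass units depends on how source and receptor are specified.
M = np.array([[0.0,     1.2e-9, 3.4e-9, 0.0],
              [5.0e-10, 2.1e-9, 1.0e-9, 0.0],
              [0.0,     0.0,    4.0e-9, 2.2e-9]])

q = np.array([0.0, 5.0e3, 1.0e4, 2.0e3])   # emissions per source cell

c = M @ q   # predicted signal at the receptor for each sampling interval
print(c)
```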
NASA Astrophysics Data System (ADS)
Luan, Deyu; Zhang, Shengfeng; Wei, Xing; Duan, Zhenya
The aim of this work is to investigate the effect of shaft eccentricity on the flow field and mixing characteristics in a stirred tank with a novel stirrer composed of a perturbed six-bent-bladed turbine (6PBT). The difference between coaxial and eccentric agitation is studied using computational fluid dynamics (CFD) simulations combined with the standard k-ε turbulence model, which offer a complete image of the three-dimensional flow field. In order to determine the capability of CFD to forecast the mixing process, particle image velocimetry (PIV), which provides an accurate representation of the time-averaged velocity, was used to measure fluid velocity. The test liquid was a 1.25% (wt) xanthan gum solution, a pseudoplastic fluid with a yield stress. The comparison of the experimental and simulated mean flow fields demonstrated that calculations based on the Reynolds-averaged Navier-Stokes equations are suitable for obtaining accurate results. The effects of the shaft eccentricity and the stirrer off-bottom distance on the flow pattern, mixing time, and mixing efficiency were extensively analyzed. It is observed that the microstructure of the flow field has a significant effect on the tracer mixing process. Eccentric agitation changes the flow pattern and produces a non-symmetric flow structure, which gives it a clear advantage in mixing behavior. Moreover, the mixing rate and mixing efficiency are dependent on the shaft eccentricity and the stirrer off-bottom distance, with the effect of the eccentricity increasing with the off-bottom distance. An efficient mixing process of the pseudoplastic fluid stirred by the 6PBT impeller, with considerably low mixing energy per unit volume, is obtained when the stirrer off-bottom distance C is T/3 and the eccentricity e is 0.2. The research results provide valuable references for the improvement of pseudoplastic fluid agitation technology.
Numerical experiments with a wind- and buoyancy-driven two-and-a-half-layer upper ocean model
NASA Astrophysics Data System (ADS)
Cherniawsky, J. Y.; Yuen, C. W.; Lin, C. A.; Mysak, L. A.
1990-09-01
We describe numerical experiments with a limited-domain (15°-67°N, 65° west to east), coarse-resolution, two-and-a-half-layer upper ocean model. The model consists of two active variable-density layers: a Niiler and Kraus (1977) type mixed layer and a pycnocline layer, which overlie a semipassive deep ocean. The mixed layer is forced with a cosine wind stress and Haney-type heat and precipitation-evaporation fluxes, which were derived from zonally averaged climatological (Levitus, 1982) surface temperatures and salinities for the North Atlantic. The second layer is forced from below with (1) Newtonian cooling to climatological temperatures and salinities at the lower boundary, (2) convective adjustment, which occurs whenever the density of the second layer is unstable with respect to climatology, and (3) mass entrainment in areas of strong upwelling, when the deep ocean ventilates through the bottom surface. The sensitivity of this model to changes in its internal (mixed layer) and external (e.g., a Newtonian coupling coefficient) parameters is investigated and compared to the results from a control experiment. We find that the model is not overly sensitive to changes in most of the parameters that were tested, although these results may depend to some extent on the choice of the control experiment.
Thermal stratification potential in rocket engine coolant channels
NASA Technical Reports Server (NTRS)
Kacynski, Kenneth J.
1992-01-01
The potential for rocket engine coolant channel flow stratification was computationally studied. A conjugate, 3-D, conduction/advection analysis code (SINDA/FLUINT) was used. Core fluid temperatures were predicted to vary by over 360 K across the coolant channel, at the throat section, indicating that the conventional assumption of a fully mixed fluid may be extremely inaccurate. Because of the thermal stratification of the fluid, the walls exposed to the rocket engine exhaust gases will be hotter than an assumption of full mixing would imply. In this analysis, wall temperatures were 160 K hotter in the turbulent mixing case than in the full mixing case. The discrepancy between the full mixing and turbulent mixing analyses increased with increasing heat transfer. Both analysis methods predicted identical channel resistances at the coolant inlet, but in the stratified analysis the thermal resistance was negligible. The implications are significant. Neglect of thermal stratification could lead to underpredictions in nozzle wall temperatures. Even worse, testing at subscale conditions may be inadequate for modeling conditions that would exist in a full scale engine.
Chaotic Lagrangian models for turbulent relative dispersion.
Lacorata, Guglielmo; Vulpiani, Angelo
2017-04-01
A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.
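The finite-scale Lyapunov exponent mentioned above is commonly estimated as λ(δ) = ln(r) / ⟨τ(δ)⟩, where τ(δ) is the time a particle pair takes to grow from separation δ to rδ. A minimal Python sketch of that estimator from precomputed pair-separation time series follows; the array layout and parameter values are assumptions for illustration, not the authors' code:

```python
import numpy as np

def fsle(sep, dt, delta0, r=np.sqrt(2.0), n_scales=12):
    """Finite-scale Lyapunov exponent lambda(delta) = ln(r) / <tau(delta)>.

    sep    : array (n_pairs, n_times) of pair separations versus time
    dt     : sampling interval of the trajectories
    delta0 : smallest separation threshold
    """
    deltas = delta0 * r ** np.arange(n_scales)
    lam = np.full(n_scales, np.nan)
    for k, d in enumerate(deltas):
        taus = []
        for s in sep:
            i0 = np.argmax(s >= d)              # first time the pair exceeds delta
            if s[i0] < d:
                continue                        # this pair never reaches delta
            grown = np.nonzero(s[i0:] >= r * d)[0]
            if grown.size:                      # time to grow from delta to r*delta
                taus.append(grown[0] * dt)
        if taus:
            lam[k] = np.log(r) / np.mean(taus)
    return deltas, lam
```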
0-6744 : new HMA shear resistance and rutting test for Texas mixes.
DOT National Transportation Integrated Search
2014-08-01
Traditionally run at one test temperature (122F), the : Hamburg wheel tracking test (HWTT) (Figure 1) has a : proven history of successfully identifying and screening : hot-mix asphalt (HMA) mixes that are prone to rutting : and/or susceptible to m...
Investigation of fatigue failure in bituminous base mixes.
DOT National Transportation Integrated Search
1980-01-01
A correlation between the results obtained with the fatigue test and those from the indirect tensile test on two base mixes was attempted in anticipation of the possible use of the latter test to design base mixes for maximum fatigue life. Two base m...
An Efficient Alternative Mixed Randomized Response Procedure
ERIC Educational Resources Information Center
Singh, Housila P.; Tarray, Tanveer A.
2015-01-01
In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than the Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…
The Contribution of Emotional Intelligence to Decisional Styles among Italian High School Students
ERIC Educational Resources Information Center
Di Fabio, Annamaria; Kenny, Maureen E.
2012-01-01
This study examined the relationship between emotional intelligence (EI) and styles of decision making. Two hundred and six Italian high school students completed two measures of EI, the Bar-On EI Inventory, based on a mixed model of EI, and the Mayer Salovey Caruso EI Test, based on an ability-based model of EI, in addition to the General…
Optimum use of air tankers in initial attack: selection, basing, and transfer rules
Francis E. Greulich; William G. O' Regan
1982-01-01
Fire managers face two interrelated problems in deciding the most efficient use of air tankers: where best to base them, and how best to reallocate them each day in anticipation of fire occurrence. A computerized model based on a mixed integer linear program can help in assigning air tankers throughout the fire season. The model was tested using information from...
2005-10-01
[Garbled outline fragment; only topic keywords are recoverable: turbulence/flow chemistry and combustion interaction; transpiration cooling and ablation; ram/scramjet technology (ignition, mixing); turbulence models for separated regions of shock wave/turbulent boundary layer interaction; test-condition parameters such as flow duration, flow velocity, Reynolds number, Mach number, temperature, and vehicle length.]
Multivariate Longitudinal Analysis with Bivariate Correlation Test.
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions for the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, one of longitudinal multivariate type and one of multivariate multilevel type, the usefulness of the test is illustrated.
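The likelihood ratio test described above reduces to comparing the maximized log-likelihoods of the nested models. A minimal sketch follows; the log-likelihood values and degrees of freedom are placeholders, and note that tests on covariance parameters near a boundary can make the chi-square reference conservative:

```python
from scipy.stats import chi2

def lr_test(loglik_reduced, loglik_full, df):
    """Likelihood ratio test between two nested models."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2.sf(stat, df)

# Full model: cross-correlations between the random effects of the two
# outcomes are estimated; reduced model: those correlations are fixed at zero.
stat, p = lr_test(loglik_reduced=-1523.4, loglik_full=-1518.9, df=4)
print(stat, p)
```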
Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models
NASA Astrophysics Data System (ADS)
Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Kiyotaka, Shibata; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn
2018-05-01
The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
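As a simplified reading of the quantities discussed above (not the paper's exact formulation), aging by mixing can be taken as the difference between AoA and RCTT, and a mixing efficiency as its size relative to RCTT; the values below are invented for illustration:

```python
import numpy as np

aoa  = np.array([4.1, 4.3, 4.0])   # mean age of air (years), illustrative
rctt = np.array([2.6, 2.7, 2.5])   # residual circulation transit time (years), illustrative

aging_by_mixing   = aoa - rctt               # additional aging by mixing
mixing_efficiency = aging_by_mixing / rctt   # relative increase of AoA by mixing (assumed definition)
print(aging_by_mixing, mixing_efficiency)
```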
NASA Astrophysics Data System (ADS)
Gray, H. J.; Tucker, G. E.; Mahan, S.
2017-12-01
Luminescence is a property of matter that can be used to obtain depositional ages from fine sand. Luminescence accumulates through exposure to background ionizing radiation and is removed by sunlight exposure in a process known as bleaching. There is evidence to suggest that luminescence can also serve as a sediment tracer in fluvial and hillslope environments. For hillslope environments, it has been suggested that the magnitude of luminescence as a function of soil depth is related to the strength of soil mixing. Hillslope soils with a greater extent of mixing will have previously surficial sand grains moved to greater depths in the soil column. These previously surface-exposed grains will carry a lower luminescence than those which have never seen the surface. To connect luminescence profiles with the soil mixing rate, here defined as the soil vertical diffusivity, I conduct numerical modelling of particles in hillslope soils coupled with equations describing the physics of luminescence. I use recently published equations describing the trajectories of particles under both exponential and uniform soil velocity profiles and modify them to include soil diffusivity. Results from the model demonstrate a strong connection between soil diffusivity and luminescence. Both the depth profiles of luminescence and the total percentage of surface-exposed grains change drastically with the magnitude of the diffusivity. This suggests that luminescence could potentially be used to infer the magnitude of soil diffusivity. However, I test other variables, such as the soil production rate, e-folding length of soil velocity, background dose rate, and soil thickness, and find that these variables can also affect the relationship between luminescence and diffusivity. This suggests that these other variables may need to be constrained prior to any inference of soil diffusivity from luminescence measurements. Further field testing of the model in areas where the soil vertical diffusivity and other parameters are independently known will provide a test of this potential new method.
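The idea of tracking particle depths and luminescence together can be sketched with a much simpler model than the one in the abstract: a vertical random walk with uniform diffusivity, a constant dose rate, and bleaching of grains that reach a thin surface layer. All parameter values and simplifications below are assumptions, not the author's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def luminescence_profile(n=5000, depth=1.0, diffusivity=1e-4, dose_rate=1.0,
                         bleach_depth=0.02, dt=1.0, n_steps=10_000):
    """Random-walk sketch: grain depths diffuse; grains near the surface are bleached."""
    z = rng.uniform(0.0, depth, n)              # initial grain depths (m)
    lum = np.zeros(n)                           # accumulated luminescence (arbitrary units)
    step = np.sqrt(2.0 * diffusivity * dt)
    for _ in range(n_steps):
        z += step * rng.standard_normal(n)
        z = np.abs(z)                           # reflect at the soil surface
        z = np.where(z > depth, 2.0 * depth - z, z)   # reflect at the soil base
        lum += dose_rate * dt                   # dose accumulation at depth
        lum[z < bleach_depth] = 0.0             # sunlight resets surface-exposed grains
    return z, lum

z, lum = luminescence_profile()
# A larger diffusivity carries more bleached (low-luminescence) grains to depth,
# flattening the luminescence-depth profile.
```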
Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun
2013-09-01
Using branch analysis data from 955 standard branches on 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation at Mengjiagang Forest Farm in Heilongjiang Province, Northeast China, and based on linear mixed-effects model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering the tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structures. Correlation structures, including the compound symmetry structure (CS), the first-order autoregressive structure [AR(1)], and the first-order autoregressive and moving average structure [ARMA(1,1)], were then added to the optimal branch-size mixed-effects model. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effects models, but none of the three structures improved the precision of the branch angle mixed-effects model. In order to describe the heteroscedasticity when building the mixed-effects model, the CF1 and CF2 variance functions were added to the branch mixed-effects model. The CF1 function significantly improved the fit of the branch angle mixed model, whereas the CF2 function significantly improved the fit of the branch diameter and length mixed models. Model validation confirmed that the mixed-effects model could improve the precision of prediction, as compared to the traditional regression model, for branch size prediction in Pinus koraiensis plantations.
Antecedents and Consequences of Federal Bid Protests
2015-04-30
contractor performance. While these effects have been anecdotally espoused by practitioners, this research is the first to quantitatively test the... contracting personnel, this research tests a model of antecedents to and consequences of the fear of a protest. Survey data was obtained from a sample of 350
Zoanetti, Nathan; Beaves, Mark; Griffin, Patrick; Wallace, Euan M
2013-03-04
Despite the widespread use of multiple-choice assessments in medical education assessment, current practice and published advice concerning the number of response options remains equivocal. This article describes an empirical study contrasting the quality of three 60 item multiple-choice test forms within the Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) Fetal Surveillance Education Program (FSEP). The three forms are described below. The first form featured four response options per item. The second form featured three response options, having removed the least functioning option from each item in the four-option counterpart. The third test form was constructed by retaining the best performing version of each item from the first two test forms. It contained both three and four option items. Psychometric and educational factors were taken into account in formulating an approach to test construction for the FSEP. The four-option test performed better than the three-option test overall, but some items were improved by the removal of options. The mixed-option test demonstrated better measurement properties than the fixed-option tests, and has become the preferred test format in the FSEP program. The criteria used were reliability, errors of measurement and fit to the item response model. The position taken is that decisions about the number of response options be made at the item level, with plausible options being added to complete each item on both psychometric and educational grounds rather than complying with a uniform policy. The point is to construct the better performing item in providing the best psychometric and educational information.
HMA shear resistance, permanent deformation, and rutting tests for Texas mixes : year-1 report.
DOT National Transportation Integrated Search
2014-04-01
Traditionally run at one test temperature (122F), the Hamburg Wheel Tracking Test (HWTT) has a proven : history of identifying hot-mix asphalt (HMA) mixes that are moisture susceptible and/or prone to rutting. However, : with the record summer temp...
DOT National Transportation Integrated Search
2014-11-01
Traditionally run at one test temperature (122F), the Hamburg wheel tracking test (HWTT) has a proven : history of identifying hot mix asphalt (HMA) mixes that are moisture susceptible and/or prone to rutting. However, : with the record summer temp...
New generation mix-designs : laboratory testing and construction of the APT test sections.
DOT National Transportation Integrated Search
2010-03-01
Recent changes to the Texas HMA mix-design procedures, such as adoption of higher PG asphalt-binder grades and the Hamburg test, have ensured that the mixes routinely used on Texas highways are not prone to rutting. However, performance concern...
A mobile-mobile transport model for simulating reactive transport in connected heterogeneous fields
NASA Astrophysics Data System (ADS)
Lu, Chunhui; Wang, Zhiyuan; Zhao, Yue; Rathore, Saubhagya Singh; Huo, Jinge; Tang, Yuening; Liu, Ming; Gong, Rulan; Cirpka, Olaf A.; Luo, Jian
2018-05-01
Mobile-immobile transport models can be effective in reproducing heavy-tailed breakthrough curves of concentration. However, such models may not adequately describe transport along multiple flow paths with intermediate velocity contrasts in connected fields. We propose using the mobile-mobile model for simulating subsurface flow and associated mixing-controlled reactive transport in connected fields. This model includes two local concentrations, one in the fast- and the other in the slow-flow domain, which predict both the concentration mean and variance. The normalized total concentration variance within the flux is found to be a non-monotonic function of the discharge ratio, with a maximum concentration variance at intermediate values of the discharge ratio. We test the mobile-mobile model for mixing-controlled reactive transport with an instantaneous, irreversible bimolecular reaction in structured and connected random heterogeneous domains, and compare the performance of the mobile-mobile model to the mobile-immobile model. The results indicate that the mobile-mobile model generally predicts the concentration breakthrough curves (BTCs) of the reactive compound better. In particular, for cases of an elliptical inclusion with intermediate hydraulic-conductivity contrasts, where the travel-time distribution shows bimodal behavior, the prediction of both the BTCs and the maximum product concentration is significantly improved. Our results demonstrate that the conceptual model of two mobile domains with diffusive mass transfer in between is generally good for predicting mixing-controlled reactive transport, and particularly so in cases where transfer in the low-conductivity zones is by slow advection rather than diffusion.
A Monte-Carlo Analysis of Organic Volatility with Aerosol Microphysics
NASA Astrophysics Data System (ADS)
Gao, Chloe; Tsigaridis, Kostas; Bauer, Susanne E.
2017-04-01
A newly developed box model, MATRIX-VBS, includes the volatility-basis set (VBS) framework in the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations and aerosol mixing state. The new scheme advances the representation of organic aerosols in models by improving on the traditional, simplistic treatment of organic aerosols as non-volatile and with a fixed size distribution. Further development includes adding the condensation of organics on coarse-mode aerosols - dust and sea salt - thus making all organics in the system semi-volatile. To test and simplify the model, a Monte-Carlo analysis is performed to pinpoint which processes affect organics the most under varied chemical and meteorological conditions. Since the model's parameterizations can capture a very wide range of conditions, all possible scenarios on Earth across the whole parameter space, including temperature, humidity, location, emissions, and oxidant levels, are examined. The Monte-Carlo simulations provide quantitative information on the sensitivity of the newly developed model and help us understand how organics affect the size distribution, mixing state, and volatility distribution at varying levels of meteorological conditions and pollution. In addition, these simulations indicate which parameters play a critical role in the aerosol distribution and evolution in the atmosphere and which do not, which will facilitate the simplification of the box model, an important step toward its implementation in the global model GISS ModelE as a module.
DOT National Transportation Integrated Search
2009-07-01
"Considerable data exists for soils that were tested and documented, both for native properties and : properties with pozzolan stabilization. While the data exists there was no database for the Nebraska : Department of Roads to retrieve this data for...
Vertical eddy diffusivity as a control parameter in the tropical Pacific
NASA Astrophysics Data System (ADS)
Martinez Avellaneda, N.; Cornuelle, B.
2011-12-01
Ocean models suffer from errors in the treatment of turbulent sub-grid-scale motions responsible for mixing and energy dissipation. Unrealistic small-scale physics in models can have large-scale consequences, such as biases in the upper ocean temperature, a symptom of poorly simulated upwelling, currents, and air-sea interactions. This is of special importance in the tropical Pacific Ocean (TP), which is home to energetic air-sea interactions that affect global climate. A number of studies have shown that the simulated ENSO variability is highly dependent on the state of the ocean (e.g., background mixing). Moreover, the magnitude of the vertical numerical diffusion is of primary importance in properly reproducing the Pacific equatorial thermocline. This work is part of a NASA-funded project to estimate the space- and time-varying ocean mixing coefficients in an eddy-permitting (1/3°) model of the TP to obtain an improved estimate of its time-varying circulation and its underlying dynamics. While an estimation procedure for the TP (26°S-30°N) is underway using the MIT general circulation model, complementary adjoint-based sensitivity studies have been carried out for the starting ocean state from Forget (2010). This analysis aids the interpretation of the estimated mixing coefficients and possible error compensation. The focus of the sensitivity tests is the Equatorial Undercurrent and the sub-thermocline jets (i.e., the Tsuchiya Jets), which have been thought to depend strongly on vertical diffusivity and should provide checks on the estimated mixing parameters. In order to build intuition for the vertical diffusivity adjoint results in the TP, adjoint and forward perturbed simulations were carried out for an idealized sharp thermocline in a rectangular domain.
ERIC Educational Resources Information Center
Wang, Wei
2013-01-01
Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests often are considered to be superior to tests containing only MC items although the use of multiple item formats leads to measurement challenges in the context of equating conducted under…
Pore-scale and continuum simulations of solute transport micromodel benchmark experiments
Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...
2014-06-18
Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set contained three experiments and varied a single variable: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two were based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. Furthermore, the PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated based on published nonlinear relations between transverse dispersion coefficients and Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.
Quantifying Diapycnal Mixing in an Energetic Ocean
NASA Astrophysics Data System (ADS)
Ivey, Gregory N.; Bluteau, Cynthia E.; Jones, Nicole L.
2018-01-01
Turbulent diapycnal mixing controls global circulation and the distribution of tracers in the ocean. For turbulence in stratified shear flows, we introduce a new turbulent length scale Lρ dependent on χ. We show the flux Richardson number Rif is determined by the dimensionless ratio of three length scales: the Ozmidov scale LO, the Corrsin shear scale LS, and Lρ. This new model predicts that Rif varies from 0 to 0.5, which we test primarily against energetic field observations collected in 100 m of water on the Australian North West Shelf (NWS), in addition to laboratory observations. The field observations consisted of turbulence microstructure vertical profiles taken near moored temperature and velocity turbulence time series. Irrespective of the value of the gradient Richardson number Ri, both instruments yielded a median Rif = 0.17, while the observed Rif ranged from 0.01 to 0.50, in agreement with the predicted range of Rif. Using a Prandtl mixing length model, we show that diapycnal mixing Kρ can be predicted from Lρ and the background vertical shear S. Using field and laboratory observations, we show that Lρ = 0.3 LE, where LE is the Ellison length scale. The diapycnal diffusivity can thus be calculated from Kρ = 0.09 LE^2 S. This prediction agrees very well with the diapycnal mixing estimates obtained from our moored turbulence instruments for observed diffusivities as large as 10^-1 m^2 s^-1. Moorings with relatively low sampling rates can thus provide long time series estimates of diapycnal mixing rates, significantly increasing the number of diapycnal mixing estimates in the ocean.
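Read literally, the mixing-length scaling quoted above lets the diapycnal diffusivity be computed directly from the Ellison length scale and the background shear; a small numerical example follows, with purely illustrative values:

```python
# Kρ = (0.3 LE)^2 * S = 0.09 LE^2 S, with LE the Ellison length scale
L_E = 2.0        # Ellison length scale (m), illustrative
S   = 5.0e-3     # background vertical shear (1/s), illustrative

L_rho = 0.3 * L_E            # turbulent length scale
K_rho = 0.09 * L_E**2 * S    # diapycnal diffusivity
print(K_rho)                 # 1.8e-3 m^2/s for these values
```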
Coughtrie, A R; Borman, D J; Sleigh, P A
2013-06-01
Flow in a gas-lift digester with a central draft tube was investigated using computational fluid dynamics (CFD) and different turbulence closure models. The k-ω Shear-Stress-Transport (SST), Renormalization-Group (RNG) k-ε, Linear Reynolds-Stress-Model (RSM), and Transition-SST models were tested for a gas-lift loop reactor under Newtonian flow conditions, validated against published experimental work. The results identify that flow predictions within the reactor (where flow is transitional) are particularly sensitive to the turbulence model implemented; the Transition-SST model was found to be the most robust for capturing mixing behaviour and predicting separation reliably. Therefore, Transition-SST is recommended over k-ε models for use in comparable mixing problems. A comparison of results obtained using multiphase Euler-Lagrange and single-phase approaches is presented. The results support the validity of the single-phase modelling assumptions in obtaining reliable predictions of the reactor flow. Solver independence of results was verified by comparing two independent finite-volume solvers (Fluent-13.0sp2 and OpenFOAM-2.0.1). Copyright © 2013 Elsevier Ltd. All rights reserved.
Implication of correlations among some common stability statistics - a Monte Carlo simulations.
Piepho, H P
1995-03-01
Stability analysis of multilocation trials is often based on a mixed two-way model. Two stability measures in frequent use are the environmental variance (S_i^2) and the ecovalence (W_i). Under the two-way model the rank orders of the expected values of these two statistics are identical for a given set of genotypes. By contrast, empirical rank correlations among these measures are consistently low. This suggests that the two-way mixed model may not be appropriate for describing real data. To check this hypothesis, a Monte Carlo simulation was conducted. It revealed that the low empirical rank correlation among S_i^2 and W_i is most likely due to sampling errors. It is concluded that the observed low rank correlation does not invalidate the two-way model. The paper also discusses tests for homogeneity of S_i^2 as well as implications of the two-way model for the classification of stability statistics.
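For reference, the two stability statistics named above have standard definitions: the environmental variance S_i^2 is the variance of a genotype's performance across environments, and Wricke's ecovalence W_i is its contribution to the genotype-by-environment interaction sum of squares. A minimal sketch with invented data:

```python
import numpy as np

def stability_stats(x):
    """Environmental variance S_i^2 and Wricke's ecovalence W_i.

    x : genotype-by-environment matrix of trait means (g x e).
    """
    g, e = x.shape
    gen_mean = x.mean(axis=1, keepdims=True)    # genotype means
    env_mean = x.mean(axis=0, keepdims=True)    # environment means
    grand = x.mean()
    s2 = ((x - gen_mean) ** 2).sum(axis=1) / (e - 1)          # S_i^2
    w = ((x - gen_mean - env_mean + grand) ** 2).sum(axis=1)  # W_i
    return s2, w

x = np.array([[5.1, 6.0, 4.8, 5.5],
              [4.9, 6.4, 4.2, 5.9],
              [5.3, 5.8, 5.1, 5.4]])   # illustrative yields, 3 genotypes x 4 environments
s2, w = stability_stats(x)
print(s2, w)
```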
A continuous mixing model for pdf simulations and its applications to combusting shear flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Chen, J.-Y.
1991-01-01
The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to that of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.
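For context, a conventional (discrete-in-time) coalescence/dispersion mixing event of the kind whose jump behaviour motivates the continuous model can be sketched as follows; this is the classic Curl-type step, not the continuous model introduced in the abstract, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def cd_mixing_step(phi, n_pairs, extent=1.0):
    """One conventional coalescence/dispersion (Curl-type) mixing event.

    Randomly chosen particle pairs move toward their pair mean by `extent`
    (extent=1 is complete pair mixing).  The jump in particle values at each
    event is the time discontinuity the continuous model is designed to remove.
    """
    idx = rng.permutation(phi.size)[: 2 * n_pairs].reshape(n_pairs, 2)
    pair_mean = phi[idx].mean(axis=1, keepdims=True)
    phi[idx] += extent * (pair_mean - phi[idx])
    return phi

phi = rng.normal(0.0, 1.0, 10_000)        # scalar values on notional fluid particles
for _ in range(200):                       # repeated mixing events relax the pdf
    phi = cd_mixing_step(phi, n_pairs=500)
print(phi.var())                           # scalar variance decays as mixing proceeds
```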
Fent, Kenneth W.; Gaines, Linda G. Trelles; Thomasen, Jennifer M.; Flack, Sheila L.; Ding, Kai; Herring, Amy H.; Whittaker, Stephen G.; Nylander-French, Leena A.
2009-01-01
We conducted a repeated exposure-assessment survey for task-based breathing-zone concentrations (BZCs) of monomeric and polymeric 1,6-hexamethylene diisocyanate (HDI) during spray painting on 47 automotive spray painters from North Carolina and Washington State. We report here the use of linear mixed modeling to identify the primary determinants of the measured BZCs. Both one-stage (N = 98 paint tasks) and two-stage (N = 198 paint tasks) filter sampling was used to measure concentrations of HDI, uretidone, biuret, and isocyanurate. The geometric mean (GM) level of isocyanurate (1410 μg m−3) was higher than all other analytes (i.e. GM < 7.85 μg m−3). The mixed models were unique to each analyte and included factors such as analyte-specific paint concentration, airflow in the paint booth, and sampler type. The effect of sampler type was corroborated by side-by-side one- and two-stage personal air sampling (N = 16 paint tasks). According to paired t-tests, significantly higher concentrations of HDI (P = 0.0363) and isocyanurate (P = 0.0035) were measured using one-stage samplers. Marginal R2 statistics were calculated for each model; significant fixed effects were able to describe 25, 52, 54, and 20% of the variability in BZCs of HDI, uretidone, biuret, and isocyanurate, respectively. Mixed models developed in this study characterize the processes governing individual polyisocyanate BZCs. In addition, the mixed models identify ways to reduce polyisocyanate BZCs and, hence, protect painters from potential adverse health effects. PMID:19622637
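A linear mixed model of the kind described, with a random intercept per painter to absorb repeated tasks on the same worker, could be set up along these lines in Python; the data file, column names, and covariates below are hypothetical placeholders rather than the study's actual variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bzc_tasks.csv")            # hypothetical file: one row per paint task
df["log_bzc"] = np.log(df["bzc"])            # exposure data are typically log-normal

# Random intercept for each painter; fixed effects mirror the kinds of
# determinants named in the abstract (paint concentration, airflow, sampler).
model = smf.mixedlm("log_bzc ~ paint_conc + booth_airflow + sampler_type",
                    data=df, groups=df["painter_id"])
result = model.fit()
print(result.summary())
```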
Temporal dynamics of catchment transit times from stable isotope data
NASA Astrophysics Data System (ADS)
Klaus, Julian; Chun, Kwok P.; McGuire, Kevin J.; McDonnell, Jeffrey J.
2015-06-01
Time-variant catchment transit time distributions are fundamental descriptors of catchment function but are not yet fully understood, characterized, and modeled. Here we present a new approach for use with standard runoff and tracer data sets that is based on tracking of tracer and age information and time-variant catchment mixing. Our new approach is able to deal with nonstationarity of flow paths and catchment mixing, and with an irregular shape of the transit time distribution. The approach extracts information on catchment mixing from the stable isotope time series instead of relying on prior assumptions about mixing or the shape of the transit time distribution. We first demonstrate proof of concept of the approach with artificial data; the Nash-Sutcliffe efficiencies in tracer and instantaneous transit times were >0.9. The model provides very accurate estimates of time-variant transit times when the boundary conditions and fluxes are fully known. We then tested the model with real rainfall-runoff flow and isotope tracer time series from the H.J. Andrews Watershed 10 (WS10) in Oregon. Model efficiencies were 0.37 for the 18O modeling over a 2 year time series; the efficiency increased to 0.86 for the second year, underlining the need for long tracer time series with a long overlap of tracer input and output. The approach was able to determine the time-variant transit time of WS10 with field data and showed how it follows the storage dynamics and related changes in flow paths, with wet periods with high flows resulting in clearly shorter transit times compared to dry low-flow periods.
Formation of parametric images using mixed-effects models: a feasibility study.
Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh
2016-03-01
Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve the voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters including perfusion fraction, pseudo-diffusion coefficient and true diffusion coefficient were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and the parametric image quality can be accordingly improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve the parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
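The IVIM signal model referred to above is the standard bi-exponential form S(b)/S(0) = f·exp(-b·D*) + (1-f)·exp(-b·D). The sketch below shows only the conventional voxel-wise NLLS fit used as the comparison baseline (b-values, parameter values, and bounds are illustrative); the NLME estimation itself is more involved and is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Bi-exponential IVIM model for the normalized signal S(b)/S(0)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800], dtype=float)   # s/mm^2
signal = ivim(b, 0.12, 0.020, 0.0009)
signal += np.random.default_rng(2).normal(0.0, 0.01, b.size)         # synthetic noise

p0 = (0.10, 0.010, 0.0010)                   # initial guess: f, D*, D
bounds = ([0.0, 0.0, 0.0], [1.0, 1.0, 0.01])
(f, d_star, d), _ = curve_fit(ivim, b, signal, p0=p0, bounds=bounds)
print(f, d_star, d)
```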
Fragon: rapid high-resolution structure determination from ideal protein fragments.
Jenkins, Huw T
2018-03-01
Correctly positioning ideal protein fragments by molecular replacement presents an attractive method for obtaining preliminary phases when no template structure for molecular replacement is available. This has been exploited in several existing pipelines. This paper presents a new pipeline, named Fragon, in which fragments (ideal α-helices or β-strands) are placed using Phaser and the phases calculated from these coordinates are then improved by the density-modification methods provided by ACORN. The reliable scoring algorithm provided by ACORN identifies success. In these cases, the resulting phases are usually of sufficient quality to enable automated model building of the entire structure. Fragon was evaluated against two test sets comprising mixed α/β folds and all-β folds at resolutions between 1.0 and 1.7 Å. Success rates of 61% for the mixed α/β test set and 30% for the all-β test set were achieved. In almost 70% of successful runs, fragment placement and density modification took less than 30 min on relatively modest four-core desktop computers. In all successful runs the best set of phases enabled automated model building with ARP/wARP to complete the structure.
A model for prediction of STOVL ejector dynamics
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1989-01-01
A semi-empirical control-volume approach to ejector modeling for transient performance prediction is presented. This new approach is motivated by the need for a predictive real-time ejector sub-system simulation for Short Take-Off Vertical Landing (STOVL) integrated flight and propulsion controls design applications. Emphasis is placed on discussion of the approximate characterization of the mixing process central to thrust-augmenting ejector operation. The proposed ejector model suggests that transient flow predictions are possible with a model based on steady-flow data. A practical test case is presented to illustrate model calibration.
Trial type mixing substantially reduces the response set effect in the Stroop task.
Hasshim, Nabil; Parris, Benjamin A
2017-03-20
The response set effect refers to the finding that an irrelevant incongruent colour-word produces greater interference when it is one of the response options (referred to as a response set trial), compared to when it is not (a non-response set trial). Despite being a key effect for models of selective attention, the magnitude of the effect varies considerably across studies. We report two within-subjects experiments that tested the hypothesis that presentation format modulates the magnitude of the response set effect. Trial types (e.g. response set, non-response set, neutral) were either presented in separate blocks (pure) or in blocks containing trials from all conditions presented randomly (mixed). In the first experiment we show that the response set effect is substantially reduced in the mixed block context as a result of a decrease in RTs to response set trials. By demonstrating the modulation of the response set effect under conditions of trial type mixing we present evidence that is difficult for models of the effect based on strategic, top-down biasing of attention to explain. In a second experiment we tested a stimulus-driven account of the response set effect by manipulating the number of colour-words that make up the non-response set of distractors. The results show that the greater the number of non-response set colour concepts, the smaller the response set effect. Alternative accounts of the data and its implications for research debating the automaticity of reading are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated, as well.
Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
Hepatitis C virus (HCV) infection has become a major public health problem in recent years. Evaluation of risk factors is one way to help protect people from infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
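For reference, the zero-inflated Poisson likelihood underlying the models above mixes a point mass at zero with a Poisson count distribution; a mixed ZIP additionally places random effects on the Poisson mean and/or the zero-inflation probability (not shown). A minimal fixed-parameter sketch with invented counts:

```python
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, lam, pi):
    """Log-likelihood of a zero-inflated Poisson model.

    P(Y=0)   = pi + (1 - pi) * exp(-lam)
    P(Y=k>0) = (1 - pi) * exp(-lam) * lam**k / k!
    """
    y = np.asarray(y)
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1.0)
    return np.where(y == 0, ll_zero, ll_pos).sum()

counts = np.array([0, 0, 0, 1, 0, 2, 0, 0, 5, 0])   # illustrative counts
print(zip_loglik(counts, lam=1.3, pi=0.4))
```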
Effect of shock interactions on mixing layer between co-flowing supersonic flows in a confined duct
NASA Astrophysics Data System (ADS)
Rao, S. M. V.; Asano, S.; Imani, I.; Saito, T.
2018-03-01
Experiments are conducted to observe the effect of shock interactions on a mixing layer generated between two supersonic streams of Mach number M1 = 1.76 and M2 = 1.36 in a confined duct. The development of this mixing layer within the duct is observed using high-speed schlieren and static pressure measurements. Two-dimensional, compressible Reynolds-averaged Navier-Stokes equations are solved using the k-ω SST turbulence model in Fluent. Further, adverse pressure gradients are imposed by placing inserts of small (<7% of duct height) but finite (> boundary layer thickness) thickness on the walls of the test section. The unmatched pressures cause the mixing layer to bend and lead to the formation of shock structures that interact with the mixing layer. The mixing layer growth rate is found to increase after the shock interaction (nearly doubles). The strongest shock is observed when a wedge insert is placed in the M2 flow. This shock interacts with the mixing layer, exciting flow modes that produce sinusoidal flapping structures which enhance the mixing layer growth rate to the maximum (by 1.75 times). Shock fluctuations are characterized, and it is observed that the maximum amplitude occurs when a wedge insert is placed in the M2 flow.
Williams, Nathalie E.
2015-01-01
Historically, legal, policy, and academic communities largely subscribed to a dichotomy between forced and voluntary migration, creating a black-and-white vision that was convenient for legal and policy purposes. More recently, discussions have begun addressing the possibility of mixed migration, acknowledging that there is likely a wide continuum between forced and voluntary, and that most migrants likely move with some amount of compulsion and some volition, even during armed conflict. While the mixed migration hypothesis is well received, empirical evidence is disparate and somewhat blunt at this point. In this article, I contribute a direct theoretical and causal pathway discussion of mixed migration. I also propose the complex mixed migration hypothesis, which argues that not only do non-conflict-related factors influence migration during conflict, but they do so differently than during periods of relative peace. I empirically test both hypotheses in the context of the recent armed conflict in Nepal. Using detailed survey data and event history models, the results provide strong evidence for both the mixed migration and the complex mixed migration hypotheses during conflict. These hypotheses and evidence suggest that armed conflict might have substantial impacts on long-term population growth and change, with significant relevance in both academic and policy spheres. PMID:26366007
Experimental Supersonic Combustion Research at NASA Langley
NASA Technical Reports Server (NTRS)
Rogers, R. Clayton; Capriotti, Diego P.; Guy, R. Wayne
1998-01-01
Experimental supersonic combustion research related to hypersonic airbreathing propulsion has been actively underway at NASA Langley Research Center (LaRC) since the mid-1960's. This research involved experimental investigations of fuel injection, mixing, and combustion in supersonic flows and numerous tests of scramjet engine flowpaths in LaRC test facilities simulating flight from Mach 4 to 8. Out of this research effort has come scramjet combustor design methodologies, ground test techniques, and data analysis procedures. These technologies have progressed steadily in support of the National Aero-Space Plane (NASP) program and the current Hyper-X flight demonstration program. During NASP nearly 2500 tests of 15 scramjet engine models were conducted in LaRC facilities. In addition, research supporting the engine flowpath design investigated ways to enhance mixing, improve and apply nonintrusive diagnostics, and address facility operation. Tests of scramjet combustor operation at conditions simulating hypersonic flight at Mach numbers up to 17 also have been performed in an expansion tube pulse facility. This paper presents a review of the LaRC experimental supersonic combustion research efforts since the late 1980's, during the NASP program, and into the Hyper-X Program.
The fragrance mix and its constituents: a 14-year material.
Johansen, J D; Menné, T
1995-01-01
Results from 14 years of patch testing with the fragrance mix and its constituents are reviewed. From 1979-1992, 8215 consecutive patients were patch tested with the fragrance mix and 449 (5.5%) had a positive reaction. An increase in the frequency of reactions to fragrance mix was seen from the first 5-year period to the last. Only 54.4% of the patients tested in the last 5-year period with the individual constituents of the mix had at least 1 positive reaction. The results of testing with the constituents are the basis for a discussion of methodological problems. A significant decrease in the frequency of reaction to cinnamic aldehyde was registered, at the same time as the test concentration was reduced from 2% to 1% pet. However, no significant variations in the frequency of reactions to oak moss were seen, notwithstanding a similar reduction in test concentration.
Unifying error structures in commonly used biotracer mixing models.
Stock, Brian C; Semmens, Brice X
2016-10-01
Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
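The core of such biotracer mixing models can be illustrated with a deliberately simple two-source, one-tracer example: the consumer's tracer value is a proportion-weighted average of the source means plus error, and the source contribution is estimated from its posterior. The sketch below is a toy grid-based version under assumed values, not the MixSIR/SIAR machinery or the unified error structure introduced in the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-source, one-tracer setup: source isotope means and observed
# mixture (consumer) values. All numbers are illustrative.
mu_src = np.array([-25.0, -12.0])                 # source means (e.g. delta 13C)
obs = np.array([-18.5, -19.2, -17.8, -18.9])      # consumer observations
sigma = 1.0                                       # assumed residual SD

p_grid = np.linspace(0, 1, 1001)                  # proportion of source 1 in the diet
pred = p_grid[:, None] * mu_src[0] + (1 - p_grid[:, None]) * mu_src[1]

# Flat prior on p; posterior proportional to the product of normal likelihoods.
log_post = norm.logpdf(obs, loc=pred, scale=sigma).sum(axis=1)
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, p_grid)

p_mean = np.trapz(p_grid * post, p_grid)
print("posterior mean contribution of source 1:", round(p_mean, 3))
```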
Lagrangian mixed layer modeling of the western equatorial Pacific
NASA Technical Reports Server (NTRS)
Shinoda, Toshiaki; Lukas, Roger
1995-01-01
Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.
NASA Technical Reports Server (NTRS)
Rind, David H.; Lerner, Jean; Shah, Kathy; Suozzo, Robert
1999-01-01
A key component of climate/chemistry modeling is how to handle the influx of trace constituents into (and their egress from) the troposphere. This is especially important when considering tropospheric ozone and its precursors (e.g., NO(x) from aircraft). A study has been conducted with various GISS models to determine the minimum requirements necessary for producing realistic troposphere-stratosphere exchange. Four on-line tracers are employed: CFC-11 and SF6 for mixing from the troposphere into the stratosphere, Rn222 for vertical mixing within the troposphere, and 14C for mixing from the stratosphere into the troposphere. Four standard models are tested, with varying vertical resolution, gravity wave drag and location of the model top, and additional subsidiary models are employed to examine specific features. The results show that proper vertical transport between the troposphere and stratosphere in the GISS models requires lifting the top of the model considerably out of the stratosphere, and including gravity wave drag in the lower stratosphere. Increased vertical resolution without these aspects does not improve troposphere-stratosphere exchange. The transport appears to be driven largely by the residual circulation within the stratosphere; associated E-P flux convergences require both realistic upward propagating energy from the troposphere, and realistic pass-through possibilities. A 23-layer version with a top at the mesopause and incorporating gravity wave drag appears to have reasonable stratospheric-tropospheric exchange, in terms of both the resulting tracer distributions and atmospheric mass fluxes.
MHDL CAD tool with fault circuit handling
NASA Astrophysics Data System (ADS)
Espinosa Flores-Verdad, Guillermo; Altamirano Robles, Leopoldo; Osorio Roque, Leticia
2003-04-01
Behavioral modeling and simulation with analog and mixed-signal hardware description languages (MHDLs) have driven the development of diverse simulation tools that can handle the requirements of modern designs. These systems embed millions of transistors and are radically diverse from one another. This trend is exemplified by the development of languages for modeling and simulation whose applications include the re-use of complete systems, construction of virtual prototypes, testing, and synthesis. This paper presents the general architecture of a mixed hardware description language based on the IEEE 1076.1-1999 standard, VHDL Analog and Mixed-Signal Extensions, known as VHDL-AMS. The architecture is novel in that it considers the modeling and simulation of faults. The main modules of the CAD tool are briefly described in order to establish the information flow and its transformations, starting from the description of a circuit model, going through lexical analysis, mathematical model generation, and the simulation core, and ending with the collection of the circuit behavior as simulation data. In addition, the mechanisms incorporated into the simulation core to handle faults in the circuit models are explained. Currently, the CAD tool works with algebraic and differential descriptions of circuit models; nevertheless, the language design is open to handling different model types: fuzzy models, differential equations, transfer functions, and tables. This applies to fault models too; in this sense, the CAD tool considers the inclusion of mutants and saboteurs. To exemplify the results obtained so far, the simulated behavior of a circuit is shown when it is fault free and when it has been modified by the inclusion of a fault as a mutant or a saboteur. The obtained results allow virtual diagnosis of mixed circuits. The language runs on a UNIX system; it was developed with an object-oriented methodology and programmed in C++.
Simulations of NOx Emissions from Low Emissions Discrete Jet Injector Combustor Tests
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Breisacher, Kevin
2014-01-01
An experimental and computational study was conducted to evaluate the performance and emissions characteristics of a candidate Lean Direct Injection (LDI) combustor configuration with a mix of simplex and airblast injectors. The National Combustion Code (NCC) was used to predict the experimentally measured EINOx emissions for test conditions representing low power, medium power, and high-power engine cycle conditions. Of the six cases modeled with the NCC using a reduced-kinetics finite-rate mechanism and lagrangian spray modeling, reasonable predictions of combustor exit temperature and EINOx were obtained at two high-power cycle conditions.
An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1973-01-01
The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.
Performance Evaluation of Hot Mix Asphalt with Different Proportions of RAP Content
NASA Astrophysics Data System (ADS)
Kamil Arshad, Ahmad; Awang, Haryati; Shaffie, Ekarizan; Hashim, Wardati; Rahman, Zanariah Abd
2018-03-01
Reclaimed Asphalt Pavement (RAP) is old asphalt pavement that has been removed from a road by milling or full-depth removal. The use of RAP in hot mix asphalt (HMA) eliminates the need to dispose of old asphalt pavements and conserves asphalt binders and aggregates, resulting in significant cost savings and benefits to society. This paper presents a study on HMA with different RAP proportions, carried out to evaluate the volumetric properties and performance of asphalt mixes containing different proportions of RAP. The Marshall Mix Design Method was used to produce a control mix (0% RAP) and asphalt mixes containing 15% RAP, 25% RAP and 35% RAP in accordance with the Specifications for Road Works of the Public Works Department, Malaysia for AC14 dense-graded asphalt gradation. Volumetric analysis was performed to ensure that the results were in compliance with the specification requirements. The resilient modulus test was performed to measure the stiffness of the mixes, while the Modified Lottman test was conducted to evaluate the moisture susceptibility of these mixes. The Hamburg wheel tracking test was used to evaluate the rutting performance of these mixes. The results obtained showed that there were no substantial differences in Marshall properties, moisture susceptibility, resilient modulus and rutting resistance between the asphalt mixes with RAP and the control mix. The test results indicated that the recycled mixes performed as well as conventional HMA in terms of moisture susceptibility and resilient modulus. It is recommended that further research be carried out for asphalt mixes containing more than 35% RAP material.
USDA-ARS's Scientific Manuscript database
Colletotrichum gloeosporioides f. sp. salsolae (Penz.) Penz. & Sacc. in Penz. (CGS) is a facultative parasitic fungus being evaluated as a classical biological control agent of Russian thistle or tumbleweed (Salsola tragus L.). In initial host range determination tests, Henderson’s mixed model equat...
Using statistical equivalence testing logic and mixed model theory an approach has been developed, that extends the work of Stork et al (JABES,2008), to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals with different ratios ...
s-Processing from MHD-induced mixing and isotopic abundances in presolar SiC grains
NASA Astrophysics Data System (ADS)
Palmerini, S.; Trippella, O.; Busso, M.; Vescovi, D.; Petrelli, M.; Zucchini, A.; Frondini, F.
2018-01-01
In the past years the observational evidence that s-process elements from Sr to Pb are produced by stars ascending the so-called Asymptotic Giant Branch (or "AGB") could not be explained by self-consistent models, forcing researchers to extensive parameterizations. The crucial point is to understand how protons can be injected from the envelope into the He-rich layers, yielding the formation of 13C and then the activation of the 13C (α,n)16O reaction. Only recently, attempts to solve this problem started to consider quantitatively physically-based mixing mechanisms. Among them, MHD processes in the plasma were suggested to yield mass transport through magnetic buoyancy. In this framework, we compare results of nucleosynthesis models for Low Mass AGB Stars (M≲ 3M⊙), developed from the MHD scenario, with the record of isotopic abundance ratios of s-elements in presolar SiC grains, which were shown to offer precise constraints on the 13C reservoir. We find that n-captures driven by magnetically-induced mixing can indeed account for the SiC data quite well and that this is due to the fact that our 13C distribution fulfils the above constraints rather accurately. We suggest that similar tests should be now performed using different physical models for mixing. Such comparisons would indeed improve decisively our understanding of the formation of the neutron source.
POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models
Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.
2014-01-01
The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the “univariate” approach to repeated measures (UNIREP), the Hotelling-Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in the “multivariate” approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limit values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516
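For readers without SAS, the underlying calculation for the simplest case (a fixed-effects F test in a univariate Gaussian linear model) can be reproduced from the noncentral F distribution, as in the hedged sketch below; the degrees of freedom and noncentrality value are illustrative, and the sketch does not cover the UNIREP/MULTIREP repeated-measures tests or the confidence limits for estimated covariance.

```python
from scipy.stats import f as f_dist, ncf

def linear_model_power(ncp, df1, df2, alpha=0.05):
    """Power of an F test in a Gaussian linear model, given the noncentrality parameter."""
    f_crit = f_dist.ppf(1 - alpha, df1, df2)     # critical value under the null
    return 1 - ncf.cdf(f_crit, df1, df2, ncp)    # tail probability under the alternative

# Example: one-contrast test (df1 = 1), n = 40 subjects and 3 model parameters
# (df2 = 37), noncentrality 8.0. All numbers are illustrative.
print(round(linear_model_power(ncp=8.0, df1=1, df2=37), 3))
```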
Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model
NASA Astrophysics Data System (ADS)
Kuznetsov, A. V.; Makaryants, G. M.
2018-01-01
Many studies have addressed gas turbine engine identification using dynamic neural network models. The identification process should minimize the error between the model and the real object. However, questions about how the training data set is constructed are usually neglected. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, with fuel consumption as the input signal and engine rotor rotation frequency as the output signal. Four types of input signal were used to create the training and testing data sets for the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created, one for each type of training data set, and each network was tested on all four types of test data set. As a result, 16 transient responses from the four neural networks and four test data sets were compared against the corresponding solutions of the thermodynamic model. The errors of all neural networks were compared within each test data set, yielding the error range for each test data set. These error ranges are small; therefore, the influence of the data set type on identification accuracy is low.
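A minimal illustration of this kind of dynamic (NARX-style) identification is sketched below: lagged inputs and outputs of a surrogate first-order plant are used as regressors for a small feedforward network. The plant, lag depth, and network size are assumptions for illustration and do not correspond to the thermodynamic micro gas turbine model or the signal types of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Surrogate "plant": a first-order lag from fuel flow u to rotor speed y (illustrative only).
n = 2000
u = np.clip(np.cumsum(rng.normal(0, 0.05, n)), 0, 1)   # slowly varying input signal
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.9 * y[t - 1] + 0.1 * u[t - 1]

# NARX-style regressors: a few past inputs and outputs predict the next output.
lags = 3
X = np.column_stack([u[lags - k - 1: n - k - 1] for k in range(lags)] +
                    [y[lags - k - 1: n - k - 1] for k in range(lags)])
target = y[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], target[:1500])                      # train on the first part
rmse = np.sqrt(np.mean((model.predict(X[1500:]) - target[1500:]) ** 2))
print("test RMSE:", round(rmse, 5))
```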
Methods for Scaling Icing Test Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.
1995-01-01
This report presents the results of tests at NASA Lewis to evaluate several methods to establish suitable alternative test conditions when the test facility limits the model size or operating conditions. The first method was proposed by Olsen. It can be applied when full-size models are tested and all the desired test conditions except liquid-water content can be obtained in the facility. The other two methods discussed are: a modification of the French scaling law and the AEDC scaling method. Icing tests were made with cylinders at both reference and scaled conditions representing mixed and glaze ice in the NASA Lewis Icing Research Tunnel. Reference and scale ice shapes were compared to evaluate each method. The Olsen method was tested with liquid-water content varying from 1.3 to 0.8 g/m³. Over this range, ice shapes produced using the Olsen method were unchanged. The modified French and AEDC methods produced scaled ice shapes which approximated the reference shapes when model size was reduced to half the reference size for the glaze-ice cases tested.
Improved Robustness and Efficiency for Automatic Visual Site Monitoring
2009-09-01
the space of expected poses. To avoid having to compare each test window with the whole training corpus, he builds a template hierarchy by...directions of motion. In a second layer of clustering, it also learns how the low-level clusters co-occur with each other. An infinite mixture model is used...implementation. We demonstrate the utility of this detector by modeling scene-level activities with a Hierarchical
ASTEC—the Aarhus STellar Evolution Code
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, Jørgen
2008-08-01
The Aarhus code is the result of a long development, starting in 1974, and still ongoing. A novel feature is the integration of the computation of adiabatic oscillations for specified models as part of the code. It offers substantial flexibility in terms of microphysics and has been carefully tested for the computation of solar models. However, considerable development is still required in the treatment of nuclear reactions, diffusion and convective mixing.
Novel Model of Somatosensory Nerve Transfer in the Rat.
Paskal, Adriana M; Paskal, Wiktor; Pelka, Kacper; Podobinska, Martyna; Andrychowski, Jaroslaw; Wlodarski, Pawel K
2018-05-09
Nerve transfer (neurotization) is a reconstructive procedure in which the distal denervated nerve is joined with a proximal healthy nerve of a less significant function. Neurotization models described to date are limited to avulsed roots or pure motor nerve transfers, neglecting the clinically significant mixed nerve transfer. Our aim was to determine whether femoral-to-sciatic nerve transfer could be a feasible model of mixed nerve transfer. Three Sprague Dawley rats were subjected to unilateral femoral-to-sciatic nerve transfer. After 50 days, functional recovery was evaluated with a prick test. At the same time, axonal tracers were injected into each sciatic nerve distally to the lesion site, to determine nerve fibers' regeneration. In the prick test, the rats retracted their hind limbs after stimulation, although the reaction was moderately weaker on the operated side. Seven days after injection of axonal tracers, dyes were visualized by confocal microscopy in the spinal cord. Innervation of the recipient nerve originated from higher segments of the spinal cord than that on the untreated side. The results imply that the femoral nerve axons, ingrown into the damaged sciatic nerve, reinnervate distal targets with a functional outcome.
A Study on the Heat Flow Characteristics of IRSS
NASA Astrophysics Data System (ADS)
Cho, Yong-Jin; Ko, Dae-Eun
2017-11-01
The infrared signatures emitted from the hot waste gas generated by the combustion engine and generator of a naval ship, and from the metal surface around the funnel, are targets for enemy threat weapon systems and thereby reduce the survivability of the ship. Such infrared signatures are reduced by installing an infrared signature suppression system (IRSS) in the naval ship. An IRSS consists of three parts: an eductor that creates a turbulent flow in the waste gas, a mixing tube that mixes the waste gas with the ambient air, and a diffuser that forms an air film using the pressure difference between the waste gas and the outside air. This study analyzed the test model of the IRSS developed by an advanced company and, based on this, conducted heat flow analyses as a basic study to improve the performance of the IRSS. The results were compared and analyzed considering various turbulence models. As a result, the temperatures and velocities of the waste gas at the eductor inlet and the diffuser outlet, as well as the temperature of the diffuser metal surface, were obtained. It was confirmed that these results were in good agreement with the measurement results of the model test.
Test Problem: Tilted Rayleigh-Taylor for 2-D Mixing Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Malcolm J.; Livescu, Daniel; Youngs, David L.
2012-08-14
The 'tilted-rig' test problem originates from a series of experiments (Smeeton & Youngs, 1987; Youngs, 1989) performed at AWE in the late 1980s, which followed from the 'rocket-rig' experiments (Burrows et al., 1984; Read & Youngs, 1983) and exploratory experiments performed at Imperial College (Andrews, 1986; Andrews and Spalding, 1990). A schematic of the experiment is shown in Figure 1; it comprises a tank filled with light fluid above heavy, then 'tilted' on one side of the apparatus, thus causing an 'angled interface' with respect to the acceleration history due to the rockets. Details of the configuration, given in the next chapter, include the fluids, dimensions, and other details necessary to simulate the experiment. Figure 2 shows results from two experiments: Case 110 (the source for this test problem), which has an Atwood number of 0.5, and Case 115 (a secondary source described in Appendix B), with an Atwood number of 0.9. Inspection of the photograph in Figure 2 (the main experimental diagnostic) for Case 110 reveals two main areas of mix development: (1) a large-scale overturning motion that produces a rising plume (spike) on the left and a falling plume (bubble) on the right, which are almost symmetric; and (2) a Rayleigh-Taylor driven central mixing region that has a large-scale rotation associated with the rising and falling plumes, and also experiences lateral strain due to stretching of the interface by the plumes, and shear across the interface due to upper fluid moving downward and to the right and lower fluid moving upward and to the left. Case 115 is similar but differs in its much larger Atwood number of 0.9, which drives a strong asymmetry between a left-side heavy spike penetration and a right-side light bubble penetration. Case 110 is chosen as the source for the present test problem because the fluids have low surface tension (unlike Case 115) due to the addition of a surfactant, the asymmetry is small (no need to have fine grids for the spike), and there is extensive, reasonable-quality photographic data. The photographs in Figure 2 also reveal the appearance of a boundary layer at the left and right walls; this boundary layer has not been included in the test problem, as preliminary calculations suggested it had a negligible effect on plume penetration and RT mixing. The significance of this test problem is that, unlike planar RT experiments such as the Rocket-Rig (Youngs, 1984), the Linear Electric Motor - LEM (Dimonte, 1990), or the Water Tunnel (Andrews, 1992), the Tilted-Rig is a unique two-dimensional RT mixing experiment that has experimental data and now (in this TP) Direct Numerical Simulation data from Livescu and Wei. The availability of DNS data for the tilted-rig has made this TP viable, as it provides detailed results for comparison purposes. The purpose of the test problem is to provide 3D simulation results, validated by comparison with experiment, which can be used for the development and validation of 2D RANS models. When such models are applied to 2D flows, various physics issues are raised, such as double counting, combined buoyancy and shear, and 2-D strain, which have not yet been adequately addressed. The current objective of the test problem is to compare key results, which are needed for RANS model validation, obtained from high-Reynolds-number DNS and high-resolution ILES or LES with explicit sub-grid-scale models. The experiment is incompressible and so is directly suitable for algorithms that are designed for incompressible flows (e.g.
pressure correction algorithms with multi-grid); however, we have extended the TP so that compressible algorithms, run at low Mach number, may also be used if careful consideration is given to the initial pressure fields. Thus, this TP serves as a useful tool for incompressible and compressible simulation codes and mathematical models. In the remainder of this TP we provide a detailed specification; the next section provides the underlying assumptions for the TP, the fluids, geometry details, boundary conditions (and alternative set-ups), initial conditions, and acceleration history (and ways to treat the acceleration ramp at the start of the experiment). This is followed by a section that defines the data to be collected from the simulations, with results from the experiments, DNS from Livescu using the CFDNS code, and ILES simulations from Youngs using the compressible TURMOIL code and Andrews using the incompressible RTI3D code. We close the TP with concluding remarks and Appendices that include details of the sister Case 115 and the initial condition specifications for the density and pressure fields. The Tilted-Rig Test Problem is intended to serve as a validation problem for RANS models, and as such we have provided ILES and DNS simulations in support of the test problem definition. The generally good agreement between experiment, ILES and DNS supports our assertion that the Tilted-Rig is useful, and the only 2-D TP that can be used to validate RANS models.
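As a purely illustrative companion to the specification above, the sketch below builds a 2-D density field with a tilted planar interface at Atwood number 0.5; the domain size, tilt angle, and grid are placeholder values and do not reproduce the Case 110 geometry or the documented initial condition specification.

```python
import numpy as np

# Illustrative sketch (not the AWE Case 110 specification): a 2-D density field
# with a planar interface tilted by a small angle, at Atwood number 0.5.
nx, ny = 256, 512
Lx, Ly = 0.15, 0.30                       # domain size in metres (placeholders)
tilt_deg = 5.0                            # interface tilt angle (placeholder)

rho_light = 1.0
A = 0.5                                   # Atwood number from the test problem
rho_heavy = rho_light * (1 + A) / (1 - A)   # = 3.0 for A = 0.5

x = np.linspace(0, Lx, nx)
y = np.linspace(0, Ly, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

# Interface: y = Ly/2 + (x - Lx/2) * tan(tilt); heavy fluid below, light above.
y_interface = Ly / 2 + (X - Lx / 2) * np.tan(np.radians(tilt_deg))
rho = np.where(Y < y_interface, rho_heavy, rho_light)

print("Atwood check:", (rho.max() - rho.min()) / (rho.max() + rho.min()))
```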
Durability and Strength of Sustainable Self-Consolidating Concrete Containing Fly Ash
NASA Astrophysics Data System (ADS)
Mohamed, O.; Hawat, W. Al
2018-03-01
In this paper, the durability and strength of self-consolidating concrete (SCC) are assessed through the development and testing of six binary mixes at a fixed water-to-binder (w/b) ratio of 0.36. In each of the six SCC mixes, a different percentage of cement is replaced with fly ash. The development of compressive strength for each of the mixes is assessed by testing samples after 3, 7, and 28 days of curing. The durability of each of the six SCC mixes is assessed by measuring the charge passed in the Rapid Chloride Permeability (RCP) test. The charge passed was measured in samples cured for 1, 3, 7, 14, 28, and 40 days. All mixes outperformed the control mix in terms of resistance to chloride penetration. The binary mix in which 20% of the cement is replaced with fly ash exhibited a 28-day strength slightly surpassing that of the control mix.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
NASA Astrophysics Data System (ADS)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
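The spherical (Cholesky-based) construction referenced above can be sketched in a few lines: each row of the Cholesky factor is parameterized by angles so that it has unit norm, which guarantees a valid correlation matrix with unit diagonal. The example below is a generic illustration of the Pinheiro and Bates (1996) parameterization; the specific "cSigma" choice of angle values from the paper is not reproduced, and the species labels are assumptions.

```python
import numpy as np

def corr_from_angles(theta):
    """Build a valid correlation matrix from spherical angles (Pinheiro & Bates, 1996).

    theta[i] holds the i+1 angles (in (0, pi)) that parameterize row i+1 of the
    Cholesky factor, so every row has unit norm and R = L @ L.T is symmetric
    positive semi-definite with a unit diagonal.
    """
    k = len(theta) + 1
    L = np.zeros((k, k))
    L[0, 0] = 1.0
    for i in range(1, k):
        ang = np.asarray(theta[i - 1], dtype=float)
        sin_prod = 1.0
        for j in range(i):
            L[i, j] = np.cos(ang[j]) * sin_prod
            sin_prod *= np.sin(ang[j])
        L[i, i] = sin_prod
    return L @ L.T

# Three hydrometeor species (say cloud water, rain, snow; labels are illustrative):
# angles near 0 give correlations near +1, angles near pi/2 give correlations near 0.
R = corr_from_angles([[0.6], [1.2, 0.9]])
print(np.round(R, 3))
print("eigenvalues non-negative:", bool(np.all(np.linalg.eigvalsh(R) >= -1e-12)))
```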
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baird, Benjamin; Loebick, Codruta; Roychoudhury, Subir
During Phase I both experimental evaluation and computational validation of an advanced Spouted Bed Reactor (SBR) approach for biomass and coal combustion was completed. All Phase I objectives were met and some exceeded. Comprehensive insight on SBR operation was achieved via design, fabrication, and testing of a small demonstration unit with pulverized coal and biomass as feedstock at the University of Connecticut (UCONN). A scale-up and optimization tool for the next generation of coal and biomass co-firing for reducing GHG emissions was also developed. The predictive model was implemented with DOE’s MFIX computational model and was observed to accurately mimic even unsteady behavior. An updated Spouted Bed Reactor was fabricated, based on model feedback, and experimentally displayed near ideal behavior. This predictive capability, based upon first principles and experimental correlation, allows realistic simulation of mixed fuel combustion in these newly proposed power boiler designs. Compared to a conventional fluidized bed, the SBR facilitates good mixing of coal and biomass, with relative insensitivity to particle size and densities, resulting in improved combustion efficiency. Experimental data with mixed coal and biomass fuels demonstrated complete oxidation at temperatures as low as 500 °C. This avoids NOx formation and residual carbon in the waste ash. Operation at stoichiometric conditions without requiring cooling or sintering of the carrier was also observed. Oxygen-blown operation was tested and indicated good performance. This highlighted the possibility of operating the SBR at a wide range of conditions suitable for power generation and partial oxidation byproducts. It also supports the possibility of implementing chemical looping (for readily capturing CO2 and SOx).
Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images
NASA Astrophysics Data System (ADS)
Sohrabi, H.
2012-07-01
In mixed old growth broadleaves of Hyrcanian forests, it is difficult to estimate stand volume at plot level by remotely sensed data while LiDar data is absent. In this paper, a new approach has been proposed and tested for estimating stand forest volume. The approach is based on this idea that forest volume can be estimated by variation of trees height at plots. In the other word, the more the height variation in plot, the more the stand volume would be expected. For testing this idea, 120 circular 0.1 ha sample plots with systematic random design has been collected in Tonekaon forest located in Hyrcanian zone. Digital surface model (DSM) measure the height values of the first surface on the ground including terrain features, trees, building etc, which provides a topographic model of the earth's surface. The DSMs have been extracted automatically from aerial UltraCamD images so that ground pixel size for extracted DSM varied from 1 to 10 m size by 1m span. DSMs were checked manually for probable errors. Corresponded to ground samples, standard deviation and range of DSM pixels have been calculated. For modeling, non-linear regression method was used. The results showed that standard deviation of plot pixels with 5 m resolution was the most appropriate data for modeling. Relative bias and RMSE of estimation was 5.8 and 49.8 percent, respectively. Comparing to other approaches for estimating stand volume based on passive remote sensing data in mixed broadleaves forests, these results are more encouraging. One big problem in this method occurs when trees canopy cover is totally closed. In this situation, the standard deviation of height is low while stand volume is high. In future studies, applying forest stratification could be studied.
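The modeling step described above (nonlinear regression of plot-level volume on the variability of DSM heights) can be sketched as follows with synthetic data; the power-law form, coefficients, and error statistics are illustrative assumptions, not the fitted Hyrcanian-forest model.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Hypothetical plot-level data (illustrative, not the field measurements):
# standard deviation of DSM pixel heights within each 0.1 ha plot, and measured volume.
dsm_std = rng.uniform(2.0, 12.0, 120)                         # m
volume = 25.0 * dsm_std ** 1.3 * rng.lognormal(0, 0.2, 120)   # m3/ha, synthetic truth

def power_model(s, a, b):
    """Nonlinear regression model: volume = a * std(height)^b."""
    return a * s ** b

params, _ = curve_fit(power_model, dsm_std, volume, p0=[10.0, 1.0])
pred = power_model(dsm_std, *params)

rel_rmse = 100 * np.sqrt(np.mean((pred - volume) ** 2)) / volume.mean()
rel_bias = 100 * np.mean(pred - volume) / volume.mean()
print("a, b =", np.round(params, 2),
      "| relative RMSE %:", round(rel_rmse, 1),
      "| relative bias %:", round(rel_bias, 1))
```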
Polak, David; Naddaf, Raja; Shapira, Lior; Weiss, Ervin I; Houri-Haddad, Yael
2013-07-01
Periodontitis is a polymicrobial infectious disease. A novel potential chemical treatment modality may lie in bacterial anti-adhesive materials, such as cranberry juice fractions. The aim of this study is to explore the effect of high molecular weight cranberry constituent (non-dialyzable material [NDM]) on the virulence of a mixed infection with Porphyromonas gingivalis and Fusobacterium nucleatum in mice. In vitro, the anti-adhesive property of NDM was validated on epithelial cell culture, and inhibition of coaggregation was tested using a coaggregation assay. The in vivo effect was tested on the outcome of experimental periodontitis induced by a P. gingivalis and F. nucleatum mixed infection, and also on the local host response using the subcutaneous chamber model of infection. Phagocytosis was also tested on RAW macrophages by the use of fluorescent-labeled bacteria. NDM was found to inhibit the adhesion of both species of bacteria onto epithelial cells and to inhibit coaggregation in a dose-dependent manner. NDM consumption by mice attenuated the severity of experimental periodontitis compared with a mixed infection without NDM treatment. In infected subcutaneous chambers, NDM alone reduced tumor necrosis factor-α (TNF-α) levels induced by the mixed infection. In vitro, NDM eliminated TNF-α expression by macrophages that were exposed to P. gingivalis and F. nucleatum, without impairing their viability. Furthermore, NDM increased the phagocytosis of P. gingivalis. The results indicate that the use of NDM may hold potential protective and/or preventive modalities in periodontal disease. Underlying mechanisms for this trait may perhaps be the anti-adhesive properties of NDM or its potential effect on inflammation.
A detailed study of ice nucleation by feldspar minerals
NASA Astrophysics Data System (ADS)
Whale, T. F.; Murray, B. J.; Wilson, T. W.; Carpenter, M. A.; Harrison, A.; Holden, M. A.; Vergara Temprado, J.; Morris, J.; O'Sullivan, D.
2015-12-01
Immersion mode heterogeneous ice nucleation plays a crucial role in controlling the composition of mixed phase clouds, which contain both supercooled liquid water and ice particles. The amount of ice in mixed phase clouds can affect cloud particle size, lifetime and extent and so affects radiative properties and precipitation. Feldspar minerals are probably the most important minerals for ice nucleation in mixed phase clouds because they nucleate ice more efficiently than other components of atmospheric mineral dust (Atkinson et al. 2013). The feldspar class of minerals is complex, containing numerous chemical compositions, several crystal polymorphs and wide variations in microscopic structure. Here we present the results of a study into ice nucleation by a wide range of different feldspars. We found that, in general, alkali feldspars nucleate ice more efficiently than plagioclase feldspars. However, we also found that particular alkali feldspars nucleate ice relatively inefficiently, suggesting that chemical composition is not the only important factor that dictates the ice nucleation efficiency of feldspar minerals. Ice nucleation by feldspar is described well by the singular model and is probably site specific in nature. The alkali feldspars that do not nucleate ice efficiently possess relatively homogenous structure on the micrometre scale suggesting that the important sites for nucleation are related to surface topography. Ice nucleation active site densities for the majority of tested alkali feldspars are similar to those found by Atkinson et al (2013), meaning that the validity of global aerosol modelling conducted in that study is not affected. Additionally, we have found that ice nucleation by feldspars is strongly influenced, both positively and negatively, by the solute content of droplets. Most other nucleants we have tested are unaffected by solutes. This provides insight into the mechanism of ice nucleation by feldspars and could be of importance when modelling ice nucleation in mixed phase clouds. Atkinson, J. D., Murray, B. J., Woodhouse, M. T., Carslaw, K. S., Whale, T. F., Baustian, K. J., Dobbie, S., O'Sullivan, D., and Malkin, T. L.: The importance of feldspar for ice nucleation by mineral dust in mixed-phase clouds, Nature, 10.1038/nature12278, (2013).
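Within the singular description mentioned above, droplet-freezing data are commonly converted to an ice nucleation active site density via n_s(T) = -ln(1 - f_ice(T)) / A, where f_ice is the frozen fraction at temperature T and A is the nucleant surface area per droplet. The short sketch below applies this standard relation to made-up numbers; it is not the authors' dataset.

```python
import numpy as np

# Singular-model analysis of a droplet-freezing experiment (illustrative numbers).
T = np.array([-20.0, -22.0, -24.0, -26.0])     # temperature, deg C
n_frozen = np.array([5, 14, 28, 38])            # droplets frozen at or above each T
n_total = 40                                    # droplets in the experiment
area_per_droplet_cm2 = 2.0e-6                   # assumed feldspar surface area per droplet

f_ice = n_frozen / n_total
n_s = -np.log(1 - f_ice) / area_per_droplet_cm2   # active sites per cm^2

for t, ns in zip(T, n_s):
    print(f"T = {t:6.1f} C   n_s = {ns:9.3e} cm^-2")
```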
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fitting as the multiparameter finite state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise by neglecting the different bicarbonate contents in particular water components.
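The lumped-parameter idea can be illustrated with the simplest member of this family, the exponential model, where the output concentration is the convolution of the tracer input record with the transit-time distribution g(τ) = (1/T) exp(-τ/T) and, for a radioactive tracer, the decay factor. The sketch below uses a synthetic input record and an assumed turnover time, not data from the cited case studies.

```python
import numpy as np

# Minimal lumped-parameter (exponential model) sketch: output concentration is the
# convolution of the input with g(tau) = (1/T) exp(-tau/T) times radioactive decay.
dt = 1.0                                            # time step, yr
t = np.arange(0, 60, dt)
c_in = np.where((t > 10) & (t < 15), 100.0, 5.0)    # synthetic tritium-like input pulse (TU)

T_turnover = 8.0                                    # assumed mean transit (turnover) time, yr
lam = np.log(2) / 12.32                             # tritium decay constant, 1/yr

g = (1.0 / T_turnover) * np.exp(-t / T_turnover)    # exponential transit-time distribution
c_out = np.convolve(c_in, g * np.exp(-lam * t))[: len(t)] * dt

print("peak input :", c_in.max(), "TU")
print("peak output:", round(c_out.max(), 1), "TU (damped and delayed)")
```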
Case study of flexure and shear strengthening of RC beams by CFRP using FEA
NASA Astrophysics Data System (ADS)
Jankowiak, Iwona
2018-01-01
In this paper, preliminary results of a study on strengthening RC beams by means of CFRP materials under mixed shear-flexural working conditions are presented. Finite element analyses were performed using numerical models proposed earlier and verified against the results of laboratory tests [4, 5] for estimating the effectiveness of CFRP strengthening of RC beams under flexure. The currently conducted analyses deal with 3D models of RC beams under mixed shear-flexural loading conditions. The symmetry of the analyzed beams was taken into account (in both directions). The application of the Concrete Damage Plasticity (CDP) model of the RC beam allowed the layout and propagation of the cracks leading to failure to be predicted. Different cases of strengthening were analyzed: with the use of a CFRP strip or CFRP closed hoops, as well as with a combination of the above. The preliminary study was carried out and the first results are presented.
Functional Nonlinear Mixed Effects Models For Longitudinal Image Data
Luo, Xinchao; Zhu, Lixing; Kong, Linglong; Zhu, Hongtu
2015-01-01
Motivated by studying large-scale longitudinal image data, we propose a novel functional nonlinear mixed effects modeling (FNMEM) framework to model the nonlinear spatial-temporal growth patterns of brain structure and function and their association with covariates of interest (e.g., time or diagnostic status). Our FNMEM explicitly quantifies a random nonlinear association map of individual trajectories. We develop an efficient estimation method to estimate the nonlinear growth function and the covariance operator of the spatial-temporal process. We propose a global test and a simultaneous confidence band for some specific growth patterns. We conduct Monte Carlo simulation to examine the finite-sample performance of the proposed procedures. We apply FNMEM to investigate the spatial-temporal dynamics of white-matter fiber skeletons in a national database for autism research. Our FNMEM may provide a valuable tool for charting the developmental trajectories of various neuropsychiatric and neurodegenerative disorders. PMID:26213453
Developments in the Gung Ho dynamical core
NASA Astrophysics Data System (ADS)
Melvin, Thomas
2017-04-01
Gung Ho is the new dynamical core being developed for the next-generation Met Office weather and climate model, suitable for meeting the exascale challenge on emerging computer architectures. It builds upon the earlier collaborative project of the same name between the Met Office, NERC and STFC Daresbury to investigate suitable numerical methods for dynamical cores. A mixed finite-element approach is used, where different finite element spaces are used to represent the various fields. This method provides a number of beneficial improvements over the current model, such as compatibility and inherent conservation on quasi-uniform unstructured meshes, whilst maintaining the accuracy and good dispersion properties of the staggered grid currently used. Furthermore, the mixed finite-element approach allows a large degree of flexibility in the type of mesh, order of approximation and discretisation, providing a simple way to test alternative options to obtain the best model possible.
Toropov, Andrey A; Toropova, Alla P
2014-06-01
The experimental data on the bacterial reverse mutation test on C60 nanoparticles (TA100) is examined as an endpoint. By means of the optimal descriptors calculated with the Monte Carlo method a mathematical model of the endpoint has been built up. The model is the mathematical function of (i) dose (g/plate); (ii) metabolic activation (i.e. with S9 mix or without S9 mix); and (iii) illumination (i.e. dark or irradiation). The statistical quality of the model is the following: n=10, r(2)=0.7549, q(2)=0.5709, s=7.67, F=25 (Training set); n=5, r(2)=0.8987, s=18.4 (Calibration set); and n=5, r(2)=0.6968, s=10.9 (Validation set). Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Oikonomakis, Emmanouil; Aksoyoglu, Sebnem; Ciarelli, Giancarlo; Baltensperger, Urs; Prévôt, André Stephan Henry
2018-02-01
High surface ozone concentrations, which usually occur when photochemical ozone production takes place, pose a great risk to human health and vegetation. Air quality models are often used by policy makers as tools for the development of ozone mitigation strategies. However, the modeled ozone production is often not or not enough evaluated in many ozone modeling studies. The focus of this work is to evaluate the modeled ozone production in Europe indirectly, with the use of the ozone-temperature correlation for the summer of 2010 and to analyze its sensitivity to precursor emissions and meteorology by using the regional air quality model, the Comprehensive Air Quality Model with Extensions (CAMx). The results show that the model significantly underestimates the observed high afternoon surface ozone mixing ratios (≥ 60 ppb) by 10-20 ppb and overestimates the lower ones (< 40 ppb) by 5-15 ppb, resulting in a misleading good agreement with the observations for average ozone. The model also underestimates the ozone-temperature regression slope by about a factor of 2 for most of the measurement stations. To investigate the impact of emissions, four scenarios were tested: (i) increased volatile organic compound (VOC) emissions by a factor of 1.5 and 2 for the anthropogenic and biogenic VOC emissions, respectively, (ii) increased nitrogen oxide (NOx) emissions by a factor of 2, (iii) a combination of the first two scenarios and (iv) increased traffic-only NOx emissions by a factor of 4. For southern, eastern, and central (except the Benelux area) Europe, doubling NOx emissions seems to be the most efficient scenario to reduce the underestimation of the observed high ozone mixing ratios without significant degradation of the model performance for the lower ozone mixing ratios. The model performance for ozone-temperature correlation is also better when NOx emissions are doubled. In the Benelux area, however, the third scenario (where both NOx and VOC emissions are increased) leads to a better model performance. Although increasing only the traffic NOx emissions by a factor of 4 gave very similar results to the doubling of all NOx emissions, the first scenario is more consistent with the uncertainties reported by other studies than the latter, suggesting that high uncertainties in NOx emissions might originate mainly from the road-transport sector rather than from other sectors. The impact of meteorology was examined with three sensitivity tests: (i) increased surface temperature by 4 °C, (ii) reduced wind speed by 50 % and (iii) doubled wind speed. The first two scenarios led to a consistent increase in all surface ozone mixing ratios, thus improving the model performance for the high ozone values but significantly degrading it for the low ozone values, while the third scenario had exactly the opposite effects. Overall, the modeled ozone is predicted to be more sensitive to its precursor emissions (especially traffic NOx) and therefore their uncertainties, which seem to be responsible for the model underestimation of the observed high ozone mixing ratios and ozone production.
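The ozone-temperature diagnostic used in this evaluation amounts to an ordinary least-squares slope of afternoon ozone against temperature, computed separately for observations and model output and then compared. The sketch below shows the calculation on synthetic data; the numbers are placeholders, not CAMx output or station measurements.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(4)

# Illustrative summer afternoon data (synthetic): the diagnostic is the slope of
# ozone versus temperature, in ppb per degree.
temp = rng.uniform(18, 35, 90)                              # daily max temperature, deg C
o3_obs = 20 + 2.0 * (temp - 18) + rng.normal(0, 6, 90)      # "observed" ozone, ppb
o3_mod = 25 + 1.0 * (temp - 18) + rng.normal(0, 6, 90)      # "modeled" ozone, ppb

slope_obs = linregress(temp, o3_obs).slope
slope_mod = linregress(temp, o3_mod).slope
print(f"observed slope: {slope_obs:.2f} ppb/K | modeled slope: {slope_mod:.2f} ppb/K "
      f"| model/observed ratio: {slope_mod / slope_obs:.2f}")
```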
Rawlik, Mateusz; Kasprowicz, Marek; Jagodziński, Andrzej M; Kaźmierowski, Cezary; Łukowiak, Remigiusz; Grzebisz, Witold
2018-09-01
According to facilitative models of succession, trees are great forest ecosystem engineers. The strength of tree stand influences on habitat was tested under rather homogeneous conditions, where heterogeneity of site conditions was not an important influence. We hypothesized that canopy composition affects total aboveground vascular herb layer biomass (THB) and the species composition of herb layer plant biomass (SCHB) more significantly than primary soil fertility or slope exposure. The study was conducted in 227 randomly selected research plots in seven types of forest stands: pure stands of Alnus glutinosa, Betula pendula, Pinus sylvestris, Quercus petraea and Robinia pseudoacacia, and mixed stands dominated by Acer pseudoplatanus or Betula pendula, located on the hilltop and the northern, eastern, western, and southern slopes of a reclaimed, afforested post-mining spoil heap of the Bełchatów Brown Coal Mine (Poland). Generalized linear models (GLZ) showed that tree stand species were the best predictors of THB. Non-parametric variance tests showed significantly (nearly four times) higher THB under canopies of A. glutinosa, R. pseudoacacia, B. pendula and Q. petraea, compared to the lowest THB found under canopies of P. sylvestris and mixed stands with A. pseudoplatanus. Redundancy Analysis (RDA) showed that SCHB was significantly differentiated along gradients of the light and nutrient requirements of the herb layer species. RDA and non-parametric variance tests showed that SCHB under canopies of A. glutinosa, R. pseudoacacia and mixed stands with A. pseudoplatanus had large shares of nitrophilous ruderal species (32%, 31% and 11%, respectively), whereas SCHB under B. pendula, Q. petraea, mixed stands with B. pendula, and P. sylvestris were dominated by light-demanding meadow species (49%, 51%, 51% and 36%, respectively) and Poaceae species. The results indicated the dominant role of tree stand composition in habitat-forming processes; although primary site properties had minor importance, they too were modified by the tree stand species. Copyright © 2018. Published by Elsevier B.V.
Validating induced seismicity forecast models—Induced Seismicity Test Bench
NASA Astrophysics Data System (ADS)
Király-Proag, Eszter; Zechar, J. Douglas; Gischig, Valentin; Wiemer, Stefan; Karvounis, Dimitrios; Doetsch, Joseph
2016-08-01
Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. In this study, we propose an Induced Seismicity Test Bench to test and rank such models; this test bench can be used for model development, model selection, and ensemble model building. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models: Shapiro and Smoothed Seismicity (SaSS) and Hydraulics and Seismics (HySei). These models incorporate a different mix of physics-based elements and stochastic representation of the induced sequences. Our results show that neither model is fully superior to the other. Generally, HySei forecasts the seismicity rate better after shut-in but is only mediocre at forecasting the spatial distribution. On the other hand, SaSS forecasts the spatial distribution better and gives better seismicity rate estimates before shut-in. The shut-in phase is a difficult moment for both models in both reservoirs: the models tend to underpredict the seismicity rate around, and shortly after, shut-in.
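One common way such a test bench can rank probabilistic forecasts is by the joint Poisson log-likelihood of the observed event counts in space-time bins under each model's forecast rates. The sketch below illustrates that generic scoring idea with synthetic counts; it is an assumption for illustration and not necessarily the exact metric implemented in the Induced Seismicity Test Bench.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)

# Synthetic observed event counts per space-time bin, and two candidate forecasts
# expressed as expected counts per bin (all values illustrative).
observed = rng.poisson(3.0, size=50)
forecast_A = np.full(50, 3.2)
forecast_B = np.full(50, 1.5)

ll_A = poisson.logpmf(observed, forecast_A).sum()
ll_B = poisson.logpmf(observed, forecast_B).sum()
print(f"log-likelihood A: {ll_A:.1f}  B: {ll_B:.1f}  ->  "
      f"{'A' if ll_A > ll_B else 'B'} ranks higher")
```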
Sexual segregation in Roosevelt Elk: Cropping rates and aggression in mixed sex groups
Weckerly, Floyd F.; Ricca, Mark A.; Meyer, Katherin P.
2001-01-01
Few studies of sexual segregation in ruminants have tested widely invoked mechanisms of segregation in mixed-sex groups. In a sexually segregated population of Roosevelt elk (Cervus elaphus roosevelti), we examined if adult males had reduced intake of forage when in mixed-sex groups and if intersexual differences in aggression caused females to avoid males. Based on a mechanistic model of forage intake, animals with lower instantaneous feed intake should have higher cropping rates. Focal animal sampling indicated that adult males and females in summer and winter had similar cropping rates in mixed-sex groups, whereas males in male-only groups had lower rates of cropping than males in mixed-sex groups. Outside the mating season, males in male groups spent proportionally less time ≤1 body length of congenders than females in female groups, and the rate of aggression ≤1 body length was higher for males. Female–female aggression was higher in mixed-sex groups that contained more males than the median proportion of males in mixed-sex groups. Female and mixed-sex groups walked away when groups of males numbering >6 were ≤50 m but did not walk away when male groups ≤50 m had ≤5 individuals. Sexual segregation was associated with behaviors of sexes in mixed-sex groups: reduced intake of forage by males and increased female–female aggression with more males.
A flavor symmetry model for bilarge leptonic mixing and the lepton masses
NASA Astrophysics Data System (ADS)
Ohlsson, Tommy; Seidl, Gerhart
2002-11-01
We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data and the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and are consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 − θ13.
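As a hedged numerical illustration of the last relation (the paper predates precise reactor-angle measurements, so the θ13 value below is an assumption, not taken from the source): with θ13 ≈ 8.5°, the prediction θ12 ≃ π/4 − θ13 gives θ12 ≈ 45° − 8.5° = 36.5°, i.e., a solar angle that is large but well below maximal, in line with the qualitative statement in the abstract.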
NASA Technical Reports Server (NTRS)
Lin, P.; Pratt, D. T.
1987-01-01
A hybrid method has been developed for the numerical prediction of turbulent mixing in a spatially-developing, free shear layer. Most significantly, the computation incorporates the effects of large-scale structures, Schmidt number and Reynolds number on mixing, which have been overlooked in the past. In flow field prediction, large-eddy simulation was conducted by a modified 2-D vortex method with subgrid-scale modeling. The predicted mean velocities, shear layer growth rates, Reynolds stresses, and the RMS of longitudinal velocity fluctuations were found to be in good agreement with experiments, although the lateral velocity fluctuations were overpredicted. In scalar transport, the Monte Carlo method was extended to the simulation of the time-dependent pdf transport equation. For the first time, the mixing frequency in Curl's coalescence/dispersion model was estimated by using Broadwell and Breidenthal's theory of micromixing, which involves Schmidt number, Reynolds number and the local vorticity. Numerical tests were performed for a gaseous case and an aqueous case. Evidence that pure freestream fluids are entrained into the layer by large-scale motions was found in the predicted pdf. Mean concentration profiles were found to be insensitive to Schmidt number, while the unmixedness was higher for higher Schmidt number. Applications were made to mixing layers with isothermal, fast reactions. The predicted difference in product thickness of the two cases was in reasonable quantitative agreement with experimental measurements.
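A hedged sketch of one Curl coalescence/dispersion step as used in such Monte Carlo pdf methods: randomly paired notional particles jump to their pairwise mean concentration. The fraction mixed per step is a placeholder here, whereas the paper estimates the mixing frequency from Schmidt number, Reynolds number, and the local vorticity.

    # Sketch of Curl's coalescence/dispersion model for a Monte Carlo pdf method.
    # frac_to_mix per step is a placeholder for the physically estimated mixing frequency.
    import numpy as np

    def curl_mixing_step(phi, frac_to_mix, rng):
        """Mix a random fraction of particle pairs completely to their pairwise mean."""
        n = len(phi)
        n_pairs = int(frac_to_mix * n) // 2
        idx = rng.permutation(n)[: 2 * n_pairs].reshape(-1, 2)
        pair_mean = 0.5 * (phi[idx[:, 0]] + phi[idx[:, 1]])
        phi[idx[:, 0]] = pair_mean
        phi[idx[:, 1]] = pair_mean
        return phi

    rng = np.random.default_rng(1)
    phi = np.where(rng.random(10_000) < 0.5, 0.0, 1.0)   # two unmixed freestream fluids
    for _ in range(20):
        phi = curl_mixing_step(phi, frac_to_mix=0.3, rng=rng)
    print("mean =", phi.mean().round(3), "unmixedness (variance) =", phi.var().round(4))

The mean concentration is conserved while the scalar variance (the unmixedness discussed above) decays with repeated mixing steps.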
Guler, Umut; Budak, Yasemin; Ruh, Emrah; Ocal, Yesim; Canay, Senay; Akyon, Yakut
2013-01-01
Objective: The aim of this study was 2-fold. The first aim was to evaluate the effects of mixing technique (hand-mixing or auto-mixing) on bacterial attachment to polyether impression materials. The second aim was to determine whether bacterial attachment to these materials was affected by length of exposure to disinfection solutions. Materials and Methods: Polyether impression material samples (n = 144) were prepared by hand-mixing or auto-mixing. Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa were used in testing. After incubation, the bacterial colonies were counted and then disinfectant solution was applied. The effect of disinfection solution was evaluated just after the polymerization of impression material and 30 min after polymerization. Differences in adherence of bacteria to the samples prepared by hand-mixing and to those prepared by auto-mixing were assessed by Kruskal-Wallis and Mann-Whitney U-tests. For evaluating the efficiency of the disinfectant, Kruskal-Wallis multiple comparisons test was used. Results: E. coli counts were higher in hand-mixed materials (P < 0.05); no other statistically significant differences were found between hand- and auto-mixed materials. According to the Kruskal-Wallis test, significant differences were found between the disinfection procedures (Z > 2.394). Conclusion: The methods used for mixing polyether impression material did not affect bacterial attachment to impression surfaces. In contrast, the disinfection procedure greatly affects decontamination of the impression surface. PMID:24966729
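For readers unfamiliar with the named tests, a minimal sketch of the comparison in a scientific Python stack (the colony counts below are synthetic placeholders, not the study's data):

    # Illustrative only: form of the Kruskal-Wallis and Mann-Whitney U comparisons
    # reported in the abstract, applied to synthetic CFU counts.
    from scipy.stats import kruskal, mannwhitneyu

    hand_mixed = [120, 145, 98, 160, 133, 151]   # hypothetical E. coli CFU counts
    auto_mixed = [85, 92, 110, 76, 101, 88]

    h_stat, p_kw = kruskal(hand_mixed, auto_mixed)
    u_stat, p_mw = mannwhitneyu(hand_mixed, auto_mixed, alternative="two-sided")
    print(f"Kruskal-Wallis H={h_stat:.2f} p={p_kw:.3f}; "
          f"Mann-Whitney U={u_stat:.1f} p={p_mw:.3f}")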
Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage
ERIC Educational Resources Information Center
Galyardt, April
2012-01-01
This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…
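A minimal generative sketch of the mixed membership idea, in which each student carries an individual mixture over latent strategy profiles (dimensions and probabilities are illustrative assumptions, not drawn from the dissertation):

    # Minimal mixed-membership sketch: each student draws a personal mixture over
    # latent strategy profiles; each item response is generated from one profile
    # drawn from that mixture. All dimensions and probabilities are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    n_students, n_items, n_profiles = 5, 8, 2
    alpha = np.array([1.0, 1.0])                 # Dirichlet prior over profiles
    p_correct = np.array([0.9, 0.4])             # success probability under each profile

    membership = rng.dirichlet(alpha, size=n_students)   # per-student mixture weights
    profile = np.array([rng.choice(n_profiles, size=n_items, p=m) for m in membership])
    responses = rng.random((n_students, n_items)) < p_correct[profile]

    print("membership weights:\n", membership.round(2))
    print("responses:\n", responses.astype(int))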
Bruze, Magnus; Andersen, Klaus Ejner; Goossens, An
2008-03-01
The currently used fragrance mix in the European baseline patch test series (baseline series) fails to detect a substantial number of clinically relevant fragrance allergies. To investigate whether it is justified to include hydroxyisohexyl 3-cyclohexene carboxaldehyde (Lyral) and fragrance mix 2, containing hydroxyisohexyl 3-cyclohexene carboxaldehyde, citral, farnesol, coumarin, citronellol, and alpha-hexyl cinnamal, in the European baseline patch test series, the literature was surveyed for reported frequencies of contact allergy and allergic contact dermatitis from fragrance mix 2 and hydroxyisohexyl 3-cyclohexene carboxaldehyde (Lyral), as well as reported results of experimental provocation tests. Fragrance mix 2 has been demonstrated to be a useful additional marker of fragrance allergy, with contact allergy rates of up to 5% when included in various national baseline patch test series. Of the fragrance substances present in fragrance mix 2, hydroxyisohexyl 3-cyclohexene carboxaldehyde is the most common sensitizer. Contact allergy rates between 1.5% and 3% have been reported for hydroxyisohexyl 3-cyclohexene carboxaldehyde at 5% in petrolatum (pet.) from various European centres when tested in consecutive dermatitis patients. From 2008, pet. preparations of fragrance mix 2 at 14% w/w (5.6 mg/cm²) and hydroxyisohexyl 3-cyclohexene carboxaldehyde at 5% w/w (2.0 mg/cm²) are recommended for inclusion in the baseline series. With the Finn Chamber technique, a dose of 20 mg of pet. preparation is recommended. Whenever there is a positive reaction to fragrance mix 2, additional patch testing with its 6 ingredients (5 if there are simultaneous positive reactions to hydroxyisohexyl 3-cyclohexene carboxaldehyde and fragrance mix 2) is recommended.
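As a consistency check on these doses (assuming the standard 8 mm Finn Chamber with an inner area of roughly 0.5 cm², a value not stated in the abstract): 20 mg of pet. preparation at 14% w/w contains 20 mg × 0.14 = 2.8 mg of fragrance mix 2, and 2.8 mg / 0.5 cm² ≈ 5.6 mg/cm²; likewise 20 mg × 0.05 = 1.0 mg of hydroxyisohexyl 3-cyclohexene carboxaldehyde gives ≈ 2.0 mg/cm², matching the recommended application doses.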
1974-04-01
described in Section 2.3. 2.1 MODEL FABRICATION AND MOUNTING. Camphor and camphor with distributed glass particles were the materials for the low-temperature ablator shape-change models tested in Series I. The models were fabricated by molding the camphor at room temperature and high pressure (20,000 psi...distributed glass particles were produced by thoroughly mixing glass beads, having diameters of 7.5 ± 1.5 mils, with the camphor granules prior to
BLENDING ANALYSIS FOR RADIOACTIVE SALT WASTE PROCESSING FACILITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.
2012-05-10
Savannah River National Laboratory (SRNL) evaluated methods to mix and blend the contents of the blend tanks to ensure the contents are properly blended before they are transferred from blend tanks such as Tank 21 and Tank 24 to the Salt Waste Processing Facility (SWPF) feed tank. The tank contents consist of three forms: dissolved salt solution, other waste salt solutions, and sludge containing settled solids. This paper focuses on developing the computational model and estimating the submersible slurry pump operation time required for the tank contents to be adequately blended prior to their transfer to the SWPF facility. A three-dimensional computational fluid dynamics approach was taken, using the full-scale configuration of the SRS Type-IV tank, Tank 21H. Major solid obstructions such as the tank wall boundary, the transfer pump column, and three slurry pump housings, including one active and two inactive pumps, were included in the mixing performance model. Basic flow pattern results predicted by the computational model were benchmarked against the SRNL test results and literature data. Tank 21 is a waste tank that is used to prepare batches of salt feed for SWPF. The salt feed must be a homogeneous solution satisfying the acceptance criterion on solids entrainment during the transfer operation. The work scope described here consists of two modeling areas: steady-state flow pattern calculations before the addition of acid solution for the tank blending operation, and transient mixing analysis during the miscible liquid blending operation. The transient blending calculations were performed using a 95% homogeneity criterion for the entire liquid domain of the tank. The initial conditions for the entire modeling domain were based on the steady-state flow pattern results with zero second-phase concentration. The performance model was also benchmarked against the SRNL test results and literature data.
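A hedged sketch of how a 95% homogeneity criterion might be evaluated on a tracer field from such a simulation (the abstract does not define the exact metric; the cell-wise tolerance form and the synthetic field below are assumptions):

    # Sketch: one common form of a homogeneity criterion is to require every cell
    # concentration to lie within 5% of the volume-weighted tank average.
    # The concentration field below is synthetic, not CFD output.
    import numpy as np

    def is_blended(conc, cell_volume, tol=0.05):
        """Return True if all cell concentrations are within tol of the mean."""
        mean_c = np.average(conc, weights=cell_volume)
        return np.all(np.abs(conc - mean_c) <= tol * mean_c)

    rng = np.random.default_rng(3)
    conc = 1.0 + 0.01 * rng.standard_normal(10_000)   # tracer mass fraction per cell
    vol = np.full_like(conc, 1.0)                     # uniform cell volumes
    print("95% homogeneity satisfied:", is_blended(conc, vol))

Monitoring such a metric over the transient run gives the pump operation time at which the blending criterion is first met.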
2017-10-01
perturbations in the energetic material to study their effects on the blast wave formation. The last case also makes use of the same PBX; however, the...configuration, Case A: spore cloud located on top of the charge at a 45-degree angle, Case B: spore cloud located at a 45-degree angle from the charge...theoretical validation. The first is the Sedov case, where the pressure decay and blast wave front are validated against analytical solutions. In this test
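For context (a textbook relation, not taken from the fragment above): the Sedov-Taylor similarity solution used for such validation gives a blast-front radius growing as R(t) ∝ (E t² / ρ0)^(1/5), where E is the released energy and ρ0 the ambient density; the order-one proportionality constant depends on the adiabatic index, and the front pressure decays correspondingly as the shock expands.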
Application of thermal scanning to the study of transverse mixing in rivers
NASA Technical Reports Server (NTRS)
Eheart, J. W.
1975-01-01
Remote sensing has shown itself to be a valuable research tool in the study of transverse mixing in rivers. It is desirable, for a number of reasons, to study and predict the two-dimensional movement of pollutants in the region just downstream of a pollutant discharge point. While many of the more common pollutants do not exhibit a spectral signature, it was shown that the temperature difference between the pollutant and the receiving water could be successfully exploited by applying a mathematical model of mass transport processes to heat transport, and testing and calibrating it with thermal scanning data.
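A hedged sketch using the textbook steady-state transverse-mixing solution for a continuous point discharge in uniform flow (not necessarily the exact model of the study; all parameter values are illustrative):

    # Sketch of the depth-averaged transverse-mixing solution u dT/dx = Ey d2T/dy2
    # for a continuous heated discharge; this is the kind of two-dimensional field
    # a thermal scanner can map and a transport model can be calibrated against.
    import numpy as np

    def excess_temperature(x, y, load, depth, u, Ey):
        """Temperature rise (deg C) at (x, y); load is the heat input as degC*m3/s."""
        sigma2 = 2.0 * Ey * x / u          # transverse variance grows downstream
        return load / (depth * np.sqrt(2.0 * np.pi * sigma2) * u) * np.exp(-y**2 / (2.0 * sigma2))

    x = np.array([50.0, 200.0, 500.0])     # metres downstream of the outfall
    print(excess_temperature(x, y=0.0, load=5.0, depth=2.0, u=0.5, Ey=0.05).round(3))

Fitting the transverse mixing coefficient Ey so that the modeled plume width matches the thermal imagery is one simple form of the calibration described above.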
Model study of atmospheric transport using carbon 14 and strontium 90 as inert tracers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinnison, D.E.; Johnston, H.S.; Wuebbles, D.J.
1994-10-01
The observed excess carbon 14 in the atmosphere from 1963 to 1970 provides unique, but limited, data up to an altitude of about 35 km for testing the air motions calculated by 11 multidimensional atmospheric models. Strontium 90 measurements in the atmosphere from 1964 to mid-1967 provide data that have more latitude coverage than those of carbon 14 and are useful for testing combined models of air motions and aerosol settling. Model calculations for carbon 14 begin at October 1963, 9 months after the conclusion of the nuclear bomb tests; the initial conditions for the calculations are derived by three methods, each of which agrees fairly well with measured carbon 14 in October 1963 and each of which has widely different values in regions of the stratosphere where there were no carbon 14 measurements. The model results are compared to the stratospheric measurements, not as if the observed data were absolute standards, but in an effort to obtain new insight about the models and about the atmosphere. The measured carbon 14 vertical profiles at 31 deg N are qualitatively different from all of the models; the measured vertical profiles show a maximum mixing ratio in the altitude range of 20 to 25 km from October 1963 through July 1966, but all modeled profiles show mixing ratio maxima that increase in altitude from 20 km in October 1963 to greater than 40 km by April 1966. Both carbon 14 and strontium 90 data indicate that the models differ substantially among themselves with respect to stratosphere-troposphere exchange rate, but the modeled carbon 14 stratospheric residence times indicate that differences among the models are small with respect to transport rate between the middle stratosphere and the lower stratosphere. Strontium 90 data indicate that aerosol settling is important up to at least 35 km altitude. (Abstract Truncated)
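A deliberately minimal sketch of the residence-time idea mentioned above, treating the stratospheric excess burden as a single e-folding decay into the troposphere (the time constant and initial burden are illustrative assumptions, not values from the study):

    # Two-box residence-time sketch: stratospheric excess 14C decays into the
    # troposphere with e-folding time tau. tau and the burden are illustrative.
    import numpy as np

    tau = 4.0                                   # assumed stratospheric residence time, years
    t = np.arange(0.0, 7.0, 1.0)                # years after October 1963
    strat_burden = 100.0 * np.exp(-t / tau)     # % of initial stratospheric excess 14C
    for yr, b in zip(t, strat_burden):
        print(f"year {yr:.0f}: {b:5.1f} % remaining in the stratosphere")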
NASA Technical Reports Server (NTRS)
Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.
1999-01-01
A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective-scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing, in terms of vertical velocity derived from the TOGA COARE observations, during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models can be as large as 0.3 PSU and 0.4 °C, respectively. Without fresh water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates, so the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so the mixed-layer temperatures have large horizontal fluctuations. Furthermore, fresh water flux exhibits larger spatial fluctuations than surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model domain yield a high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.
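A hedged bulk-formula sketch of the mixed-layer heat and salt tendencies discussed above, dT/dt = Qnet/(ρ cp h) and dS/dt = −S (P − E)/h, illustrating why a shallow, rain-formed mixed layer responds much more strongly than a deep nocturnal one (all numbers are illustrative, not TOGA COARE values):

    # Bulk mixed-layer tendency sketch; h in the denominator is why shallow,
    # rain-formed layers over convective areas show larger T and S fluctuations.
    rho, cp = 1025.0, 3990.0          # sea water density (kg/m3) and heat capacity (J/kg/K)
    S = 34.5                          # mixed-layer salinity (PSU)

    def tendencies(q_net, precip_minus_evap, h):
        """Return (dT/dt in K/day, dS/dt in PSU/day) for mixed-layer depth h (m)."""
        seconds_per_day = 86400.0
        dT = q_net / (rho * cp * h) * seconds_per_day
        dS = -S * precip_minus_evap / h * seconds_per_day
        return dT, dS

    # shallow rain-formed layer over a convective area vs. deep nocturnal layer
    print("h =  5 m:", tendencies(q_net=-150.0, precip_minus_evap=5e-7, h=5.0))
    print("h = 30 m:", tendencies(q_net=-150.0, precip_minus_evap=0.0, h=30.0))

With these illustrative inputs the shallow layer freshens by roughly 0.3 PSU per day and cools several times faster than the deep layer, consistent in magnitude with the domain-mean differences quoted in the abstract.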