Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
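The representation behind this approach can be illustrated with a small Monte Carlo sketch (mine, not the authors'): if u ~ Gamma(3/2, scale 2) and x | u ~ Uniform(−√u, +√u), then marginally x ~ N(0, 1), so a normal variate can be generated through a uniform conditional.

```python
import random
import statistics

def smu_normal_sample(n, seed=0):
    """Draw n variates from N(0, 1) via its scale-mixture-of-uniforms form."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n):
        u = rng.gammavariate(1.5, 2.0)        # mixing variable, Gamma(3/2, scale 2)
        half_width = u ** 0.5
        xs.append(rng.uniform(-half_width, half_width))  # conditional uniform draw
    return xs

xs = smu_normal_sample(200_000)
print(round(statistics.mean(xs), 3), round(statistics.pvariance(xs), 3))
```

The sample mean and variance should be close to 0 and 1; conditioning on the latent u is what makes Gibbs sampling convenient in this family.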
ERIC Educational Resources Information Center
de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.
2010-01-01
We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…
M. M. Clark; T. H. Fletcher; R. R. Linn
2010-01-01
The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...
Multiscale Constitutive Modeling of Asphalt Concrete
NASA Astrophysics Data System (ADS)
Underwood, Benjamin Shane
Multiscale modeling of asphalt concrete has become a popular technique for gaining improved insight into the physical mechanisms that affect the material's behavior and ultimately its performance. This type of modeling considers asphalt concrete not as a homogeneous mass, but rather as an assemblage of materials at different characteristic length scales. For proper modeling, these characteristic scales should be functionally definable and should have known properties. Thus far, research in this area has not focused significant attention on functionally defining what the characteristic scales within asphalt concrete should be. Instead, many have made assumptions on the characteristic scales, and even the characteristic behaviors of these scales, with little to no support. This research addresses these shortcomings by directly evaluating the microstructure of the material and using these results to create materials of the different characteristic length scales as they exist within the asphalt concrete mixture. The objectives of this work are to: 1) develop mechanistic models for the linear viscoelastic (LVE) and damage behaviors of asphalt concrete at different length scales and 2) develop a mechanistic, mechanistic/empirical, or phenomenological formulation to link the different length scales into a model capable of predicting the effects of microstructural changes on the linear viscoelastic behaviors of the asphalt concrete mixture, i.e., a microstructure association model for asphalt concrete mixtures. Through the microstructural study it is found that asphalt concrete mixture can be considered as a build-up of three different phases: asphalt mastic, fine aggregate matrix (FAM), and the coarse aggregate particles. The asphalt mastic is found to exist as a homogeneous material throughout the mixture and FAM, and the filler content within this material is consistent with the volumetric averaged concentration, which can be calculated from the job mix formula.
It is also found that the maximum aggregate size of the FAM is mixture dependent, but consistent with a gradation parameter from the Bailey method of mixture design. Mechanistic modeling of these different length scales reveals that although many consider asphalt concrete to be an LVE material, it is in fact only quasi-LVE because it shows some tendencies that are inconsistent with LVE theory. Asphalt FAM and asphalt mastic show similar nonlinear tendencies, although the exact magnitude of the effect differs. These tendencies can be ignored for damage modeling at the mixture and FAM scales as long as the effects are consistently ignored, but it is found that they must be accounted for in mastic and binder damage modeling. The viscoelastic continuum damage (VECD) model is used for damage modeling in this research. To aid in characterization and application of the VECD model for cyclic testing, a simplified version (S-VECD) is rigorously derived and verified. Through the modeling efforts at each scale, various factors affecting the fundamental and engineering properties at each scale are observed and documented. A microstructure association model that accounts for particle interaction through physico-chemical processes and the effects of aggregate structuralization is developed to link the moduli at each scale. This model is shown to be capable of upscaling the mixture modulus from either the experimentally determined mastic modulus or the FAM modulus. Finally, an initial attempt at upscaling the damage and nonlinearity phenomena is shown.
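The pseudo-strain quantity at the heart of VECD-type models can be sketched as a discretized hereditary integral over a Prony-series relaxation modulus. This is a generic illustration, not the dissertation's characterization: the moduli, relaxation time, and loading below are assumed values.

```python
import math

def relaxation_modulus(t, e_inf, prony):
    # Prony-series relaxation modulus: E(t) = E_inf + sum_i E_i * exp(-t / rho_i)
    return e_inf + sum(e_i * math.exp(-t / rho_i) for e_i, rho_i in prony)

def pseudo_strain(times, strains, e_inf, prony, e_ref):
    # Discretized hereditary integral: eps_R(t) = (1/E_R) * int_0^t E(t - tau) d(eps)
    eps_r = []
    for k, t_k in enumerate(times):
        acc, prev = 0.0, 0.0
        for t_j, s_j in zip(times[:k + 1], strains[:k + 1]):
            acc += relaxation_modulus(t_k - t_j, e_inf, prony) * (s_j - prev)
            prev = s_j
        eps_r.append(acc / e_ref)
    return eps_r

# Assumed illustrative inputs: a 10 Hz sinusoidal strain history and one Prony term.
times = [0.001 * i for i in range(200)]
strains = [1e-4 * math.sin(2 * math.pi * 10.0 * t) for t in times]
eps_r = pseudo_strain(times, strains, 50.0, [(2000.0, 0.05)], 2050.0)
```

A useful sanity check of the implementation: for a purely elastic material (no Prony terms, reference modulus equal to E_inf), pseudo-strain reduces to the physical strain.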
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models, or the use of external information via informative priors or penalized likelihoods may help. © 2017 by the Ecological Society of America.
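The marginal likelihood at the core of the binomial N-mixture model fits in a few lines. This is a generic sketch (not Kéry's code): the latent true abundance N at each site is summed out up to a truncation bound n_max, which is an implementation choice.

```python
import math

def site_likelihood(counts, lam, p, n_max=100):
    # Marginal likelihood of repeated counts at one site, summing out the
    # latent abundance N:  L = sum_N Pois(N | lam) * prod_j Bin(y_j | N, p)
    total = 0.0
    for n in range(max(counts), n_max + 1):
        log_pois = n * math.log(lam) - lam - math.lgamma(n + 1)
        binom = 1.0
        for y in counts:
            binom *= math.comb(n, y) * p ** y * (1.0 - p) ** (n - y)
        total += math.exp(log_pois) * binom
    return total

def neg_log_likelihood(all_counts, lam, p):
    # all_counts: one vector of repeated visit counts per site
    return -sum(math.log(site_likelihood(c, lam, p)) for c in all_counts)
```

A handy correctness check: with a single visit, Poisson thinning implies the marginal count is Pois(lam * p), which the truncated sum reproduces almost exactly.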
Evaluating Mixture Modeling for Clustering: Recommendations and Cautions
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2011-01-01
This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…
An introduction to mixture item response theory models.
De Ayala, R J; Santiago, S Y
2017-02-01
Mixture item response theory (IRT) allows one to address situations that involve a mixture of latent subpopulations that are qualitatively different but within which a measurement model based on a continuous latent variable holds. In this modeling framework, one can characterize students both by their location on a continuous latent variable and by their latent class membership. For example, in a study of risky youth behavior this approach would make it possible to estimate an individual's propensity to engage in risky youth behavior (i.e., on a continuous scale) and to use these estimates to identify youth who might be at the greatest risk given their class membership. Mixture IRT can be used with binary response data (e.g., true/false, agree/disagree, endorsement/non-endorsement, correct/incorrect, presence/absence of a behavior), Likert response scales, partial credit scoring, nominal scales, or rating scales. In the following, we present mixture IRT modeling and two examples of its use. Data needed to reproduce analyses in this article are available as supplemental online materials at http://dx.doi.org/10.1016/j.jsp.2016.01.002. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
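The class-membership idea can be sketched in a deliberately simplified form (mine, not the authors'): hold the continuous latent trait fixed, let two hypothetical classes differ in their Rasch item difficulties, and compute the posterior class membership for one response pattern by Bayes' rule. Real mixture IRT estimates the trait and the memberships jointly.

```python
import math

def rasch_prob(theta, b):
    # Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def class_posterior(responses, priors, item_probs):
    # P(class | y) proportional to pi_c * prod_i p_ci^y_i * (1 - p_ci)^(1 - y_i)
    joint = []
    for pi_c, probs in zip(priors, item_probs):
        lik = pi_c
        for y, p in zip(responses, probs):
            lik *= p if y == 1 else 1.0 - p
        joint.append(lik)
    z = sum(joint)
    return [j / z for j in joint]

# Hypothetical two-class example: items are easy for class A, hard for class B.
difficulties = [[-2.0, -1.5, -1.0], [1.0, 1.5, 2.0]]
theta = 0.0
probs = [[rasch_prob(theta, b) for b in bs] for bs in difficulties]
post = class_posterior([1, 1, 1], [0.5, 0.5], probs)
```

An all-correct pattern should assign most posterior mass to the easy-item class, which is exactly the kind of classification the abstract describes.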
NASA Astrophysics Data System (ADS)
Arshadi, Amir
Image-based simulation of complex materials is an important tool for understanding their mechanical behavior and an effective aid in the successful design of composite materials. In this thesis an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach the "up-scaling" and homogenization of each scale to the next is critically designed to improve accuracy. In addition to this multi-scale efficiency, this study introduces an approach for consideration of particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses, which seriously affects pavement performance, is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, the binder fatigue behavior is assumed to be one of the main factors influencing overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of asphalt mixtures at all scales. In the present study the viscoelastic continuum damage (VECD) model is implemented in the well-known finite element software ABAQUS via the user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under repeated uniaxial sinusoidal loading. The inputs are based on experimentally derived measurements of the binder properties. For the mastic and mortar scales, artificial 2-dimensional images were generated and used to characterize the properties of those scales. Finally, 2D scanned images of asphalt mixtures are used to study asphalt mixture fatigue behavior under loading.
In order to validate the proposed model, the experimental test results and the simulation results were compared. Indirect tensile fatigue tests were conducted on asphalt mixture samples. A comparison between experimental results and the results from simulation shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on mechanical behavior of asphalt concrete under loading.
Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong
2018-01-10
This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.
Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.
2009-01-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash, and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction for the S&P500 index data over the usual normal model. PMID:20730043
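The Student-t member of the SMN class can be illustrated with a short Monte Carlo sketch (not the paper's MCMC): x = z * sqrt(nu / w) with w ~ chi-square(nu) is a t_nu draw, i.e. a normal whose scale is randomly mixed; small realized mixing scales correspond to the outlier-flagging idea mentioned in the abstract.

```python
import random
import statistics

def student_t_smn(n, nu, seed=1):
    # t_nu as a scale mixture of normals: x = z * sqrt(nu / w), w ~ chi-square(nu).
    # Equivalently x | lam ~ N(0, lam) with lam ~ inverse-gamma(nu/2, nu/2).
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        w = rng.gammavariate(nu / 2.0, 2.0)   # chi-square(nu) draw
        out.append(rng.gauss(0.0, 1.0) * (nu / w) ** 0.5)
    return out

xs = student_t_smn(200_000, 5.0)
print(round(statistics.pvariance(xs), 2))   # theory: nu / (nu - 2) = 5/3
```

The heavier-than-normal tails come entirely from the mixing distribution; conditioning on the latent scales is what makes the Gibbs steps in SMN-based SV models tractable.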
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, i.e., changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with higher structural similarity index.
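The scan-and-select procedure and the modified Bayes factor are the paper's contributions; the sketch below shows only the underlying building block, a Gaussian mixture fitted by expectation-maximization, reduced to 1-D and two components for brevity. The cluster locations are assumed demo values.

```python
import math
import random

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def em_gmm_1d(xs, iters=100):
    # Two-component 1-D Gaussian mixture fitted by expectation-maximization.
    w, mu, var = [0.5, 0.5], [min(xs), max(xs)], [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[k] * norm_pdf(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return w, mu, var

rng = random.Random(2)
xs = ([rng.gauss(0.0, 1.0) for _ in range(300)]
      + [rng.gauss(5.0, 1.0) for _ in range(300)])
w, mu, var = em_gmm_1d(xs)
```

With two well-separated clusters the recovered means land near the true values; changing the number of components, as the paper's domain-adaptation step does, only changes the lengths of the parameter lists.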
Mixedness determination of rare earth-doped ceramics
NASA Astrophysics Data System (ADS)
Czerepinski, Jennifer H.
The lack of chemical uniformity in a powder mixture, such as clustering of a minor component, can lead to deterioration of materials properties. A method to determine powder mixture quality is to correlate the chemical homogeneity of a multi-component mixture with its particle size distribution and mixing method. This is applicable to rare earth-doped ceramics, which require at least 1-2 nm dopant ion spacing to optimize optical properties. Mixedness simulations were conducted for random heterogeneous mixtures of Nd-doped LaF3 mixtures using the Concentric Shell Model of Mixedness (CSMM). Results indicate that when the host to dopant particle size ratio is 100, multi-scale concentration variance is optimized. In order to verify results from the model, experimental methods that probe a mixture at the micro, meso, and macro scales are needed. To directly compare CSMM results experimentally, an image processing method was developed to calculate variance profiles from electron images. An in-lens (IL) secondary electron image is subtracted from the corresponding Everhart-Thornley (ET) secondary electron image in a Field-Emission Scanning Electron Microscope (FESEM) to produce two phases and pores that can be quantified with 50 nm spatial resolution. A macro was developed to quickly analyze multi-scale compositional variance from these images. Results for a 50:50 mixture of NdF3 and LaF3 agree with the computational model. The method has proven to be applicable only for mixtures with major components and specific particle morphologies, but the macro is useful for any type of imaging that produces excellent phase contrast, such as confocal microscopy. Fluorescence spectroscopy was used as an indirect method to confirm computational results for Nd-doped LaF3 mixtures. Fluorescence lifetime can be used as a quantitative method to indirectly measure chemical homogeneity when the limits of electron microscopy have been reached. 
Fluorescence lifetime represents the compositional fluctuations of a dopant on the nanoscale while accounting for billions of particles in a fast, non-destructive manner. This study shows how small-scale fluctuations in homogeneity limit the optimization of optical properties, and how they can be improved by proper selection of particle size and mixing method.
Ab Initio Studies of Shock-Induced Chemical Reactions of Inter-Metallics
NASA Astrophysics Data System (ADS)
Zaharieva, Roussislava; Hanagud, Sathya
2009-06-01
Shock-induced and shock-assisted chemical reactions of intermetallic mixtures are studied by many researchers, using both experimental and theoretical techniques. The theoretical studies are primarily at continuum scales. The model frameworks include mixture theories and meso-scale models of grains of porous mixtures. The reaction models vary from equilibrium thermodynamic models to several non-equilibrium thermodynamic models. The shock effects are primarily studied using appropriate conservation equations and numerical techniques to integrate the equations. All these models require material constants from experiments and estimates of transition states. Thus, the objective of this paper is to present studies based on ab initio techniques. The ab initio studies, to date, use ab initio molecular dynamics. This paper presents a study that uses shock pressures and associated temperatures as starting variables. The intermetallic mixtures are then modeled as slabs. The required shock stresses are created by straining the lattice. Then, ab initio binding-energy calculations are used to examine the stability of the reactions. Binding energies are obtained for different strain components superimposed on uniform compression and finite temperatures. Then, vibrational frequencies and nudged elastic band techniques are used to study reactivity and transition states. Examples include Ni and Al.
Lyons, James E.; Royle, J. Andrew; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.
2012-01-01
Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
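The multiscale representation underlying this model can be sketched concretely (a toy version, not the authors' implementation): aggregate a 1-D count vector up a binary tree, Haar-style. For independent Poisson counts, a child count given its parent total is Binomial(parent, rho), where rho is the ratio of child to parent intensity; the paper places mixtures of conjugate priors on these rate ratios, while the sketch below only computes their naive per-node estimates.

```python
def multiscale_counts(counts):
    # Haar-like binary-tree aggregation of a 1-D vector of Poisson counts.
    # The vector length is assumed to be a power of two.
    scales = [list(counts)]
    while len(scales[-1]) > 1:
        prev = scales[-1]
        scales.append([prev[i] + prev[i + 1] for i in range(0, len(prev), 2)])
    return scales

def rate_ratio_estimates(scales):
    # Naive per-node estimate of rho = lambda_left_child / lambda_parent.
    ratios = []
    for fine, coarse in zip(scales[:-1], scales[1:]):
        ratios.append([fine[2 * i] / coarse[i] if coarse[i] else 0.5
                       for i in range(len(coarse))])
    return ratios

scales = multiscale_counts([3, 1, 4, 0])
ratios = rate_ratio_estimates(scales)
```

For the toy vector the tree is [[3, 1, 4, 0], [4, 4], [8]]; smoothing the noisy per-node ratios under a mixture prior, and coupling their labels across scales with a hidden Markov tree, is exactly where the paper's contributions enter.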
Detecting Social Desirability Bias Using Factor Mixture Models
ERIC Educational Resources Information Center
Leite, Walter L.; Cooper, Lou Ann
2010-01-01
Based on the conceptualization that social desirable bias (SDB) is a discrete event resulting from an interaction between a scale's items, the testing situation, and the respondent's latent trait on a social desirability factor, we present a method that makes use of factor mixture models to identify which examinees are most likely to provide…
Iverson, R.M.; Denlinger, R.P.
2001-01-01
Rock avalanches, debris flows, and related phenomena consist of grain-fluid mixtures that move across three-dimensional terrain. In all these phenomena the same basic forces govern motion, but differing mixture compositions, initial conditions, and boundary conditions yield varied dynamics and deposits. To predict motion of diverse grain-fluid masses from initiation to deposition, we develop a depth-averaged, three-dimensional mathematical model that accounts explicitly for solid- and fluid-phase forces and interactions. Model input consists of initial conditions, path topography, basal and internal friction angles of solid grains, viscosity of pore fluid, mixture density, and a mixture diffusivity that controls pore pressure dissipation. Because these properties are constrained by independent measurements, the model requires little or no calibration and yields readily testable predictions. In the limit of vanishing Coulomb friction due to persistent high fluid pressure the model equations describe motion of viscous floods, and in the limit of vanishing fluid stress they describe one-phase granular avalanches. Analysis of intermediate phenomena such as debris flows and pyroclastic flows requires use of the full mixture equations, which can simulate interaction of high-friction surge fronts with more-fluid debris that follows. Special numerical methods (described in the companion paper) are necessary to solve the full equations, but exact analytical solutions of simplified equations provide critical insight. An analytical solution for translational motion of a Coulomb mixture accelerating from rest and descending a uniform slope demonstrates that steady flow can occur only asymptotically. A solution for the asymptotic limit of steady flow in a rectangular channel explains why shear may be concentrated in narrow marginal bands that border a plug of translating debris.
Solutions for static equilibrium of source areas describe conditions of incipient slope instability, and other static solutions show that nonuniform distributions of pore fluid pressure produce bluntly tapered vertical profiles at the margins of deposits. Simplified equations and solutions may apply in additional situations identified by a scaling analysis. Assessment of dimensionless scaling parameters also reveals that miniature laboratory experiments poorly simulate the dynamics of full-scale flows in which fluid effects are significant. Therefore large geophysical flows can exhibit dynamics not evident at laboratory scales.
Efficient implicit LES method for the simulation of turbulent cavitating flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan
2016-07-01
We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.
Charge frustration in complex fluids and in electronic systems
NASA Astrophysics Data System (ADS)
Carraro, Carlo
1997-02-01
The idea of charge frustration is applied to describe the properties of such diverse physical systems as oil-water-surfactant mixtures and metal-ammonia solutions. The minimalist charge-frustrated model possesses one energy scale and two length scales. For oil-water-surfactant mixtures, these parameters have been determined starting from the microscopic properties of the physical systems under study. Thus, microscopic properties are successfully related to the observed mesoscopic structure.
Boussinesq approximation of the Cahn-Hilliard-Navier-Stokes equations.
Vorobev, Anatoliy
2010-11-01
We use the Cahn-Hilliard approach to model the slow dissolution dynamics of binary mixtures. An important peculiarity of the Cahn-Hilliard-Navier-Stokes equations is the necessity to use the full continuity equation even for a binary mixture of two incompressible liquids due to dependence of mixture density on concentration. The quasicompressibility of the governing equations brings a short time-scale (quasiacoustic) process that may not affect the slow dynamics but may significantly complicate the numerical treatment. Using the multiple-scale method we separate the physical processes occurring on different time scales and, ultimately, derive the equations with the filtered-out quasiacoustics. The derived equations represent the Boussinesq approximation of the Cahn-Hilliard-Navier-Stokes equations. This approximation can be further employed as a universal theoretical model for an analysis of slow thermodynamic and hydrodynamic evolution of the multiphase systems with strongly evolving and diffusing interfacial boundaries, i.e., for the processes involving dissolution/nucleation, evaporation/condensation, solidification/melting, polymerization, etc.
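For orientation, the standard starting point being approximated can be written down; this is the generic (unfiltered) Cahn-Hilliard-Navier-Stokes system for a binary mixture, a sketch of the usual form rather than the paper's final Boussinesq equations, with C the concentration, mu the chemical potential, M the mobility, and epsilon the interface-thickness parameter. The last equation is the full continuity equation whose quasi-compressibility (density depending on C) the multiple-scale analysis filters out.

```latex
% Generic Cahn-Hilliard-Navier-Stokes system (sketch; notation assumed):
\begin{align}
  \partial_t C + \mathbf{v}\cdot\nabla C &= \nabla\cdot\bigl(M\,\nabla\mu\bigr), \\
  \mu &= f'(C) - \epsilon^{2}\,\nabla^{2} C, \\
  \rho(C)\bigl(\partial_t \mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v}\bigr)
    &= -\nabla p + \nabla\cdot\boldsymbol{\tau} + \mu\,\nabla C + \rho(C)\,\mathbf{g}, \\
  \partial_t \rho(C) + \nabla\cdot\bigl(\rho(C)\,\mathbf{v}\bigr) &= 0 .
\end{align}
```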
Multi-scale Rule-of-Mixtures Model of Carbon Nanotube/Carbon Fiber/Epoxy Lamina
NASA Technical Reports Server (NTRS)
Frankland, Sarah-Jane V.; Roddick, Jaret C.; Gates, Thomas S.
2005-01-01
A unidirectional carbon fiber/epoxy lamina in which the carbon fibers are coated with single-walled carbon nanotubes is modeled with a multi-scale method, the atomistically informed rule-of-mixtures. This multi-scale model is designed to include the effect of the carbon nanotubes on the constitutive properties of the lamina. It includes concepts from molecular dynamics/equivalent continuum methods, micromechanics, and the strength of materials. Within the model, both the nanotube volume fraction and nanotube distribution were varied. It was found that for a lamina with 60% carbon fiber volume fraction, the Young's modulus in the fiber direction varied with changes in the nanotube distribution from 138.8 to 140 GPa for nanotube volume fractions ranging from 0.0001 to 0.0125. The presence of nanotubes near the surface of the carbon fiber is therefore expected to have a small, but positive, effect on the constitutive properties of the lamina.
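The top-level averaging step of a rule-of-mixtures model is simple to state. The sketch below uses the plain Voigt (iso-strain) form with assumed constituent moduli (typical carbon fiber / epoxy values, not numbers from the paper); the atomistically informed version replaces the fiber term with an effective fiber-plus-nanotube layer whose properties come from molecular dynamics.

```python
def rule_of_mixtures_e11(v_fiber, e_fiber, e_matrix):
    # Voigt (iso-strain) rule of mixtures for the modulus along the fibers:
    #   E_11 = Vf * Ef + (1 - Vf) * Em
    return v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix

# Assumed constituent moduli in GPa (illustrative only); the 0.60 fiber
# volume fraction matches the lamina described in the abstract.
e11 = rule_of_mixtures_e11(0.60, 230.0, 3.5)
print(round(e11, 1))
```

With these assumed inputs the longitudinal modulus comes out close to the ~139 GPa range reported in the abstract, which shows why the nanotube coating, occupying a tiny volume fraction, can only nudge E_11 slightly.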
NASA Astrophysics Data System (ADS)
Mirzaev, Sirojiddin Z.; Kaatze, Udo
2016-09-01
Ultrasonic spectra of mixtures of nitrobenzene with n-alkanes, from n-hexane to n-nonane, are analyzed. They feature up to two Debye-type relaxation terms with discrete relaxation times and, near the critical point, an additional relaxation term due to the fluctuations in the local concentration. The latter can be well represented by the dynamic scaling theory. Its amplitude parameter reveals the adiabatic coupling constant of the mixtures of critical composition. The dependence of this thermodynamic parameter upon the length of the n-alkanes corresponds to that of the slope in the pressure dependence of the critical temperature and is thus taken as another confirmation of the dynamic scaling model. The change in the variation of the coupling constant and of several other mixture parameters with alkane length probably reflects a structural change in the nitrobenzene-n-alkane mixtures when the number of carbon atoms per alkane exceeds eight.
Predicting the shock compression response of heterogeneous powder mixtures
NASA Astrophysics Data System (ADS)
Fredenburg, D. A.; Thadhani, N. N.
2013-06-01
A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is implemented into a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low- and high-stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between the model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate-dependencies of the constituents, and specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete compaction regime.
A modified procedure for mixture-model clustering of regional geochemical data
Ellefsen, Karl J.; Smith, David B.; Horton, John D.
2014-01-01
A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
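The pipeline described above (log-ratio transform, then principal component reduction, then mixture-model clustering) can be sketched as follows; ordinary PCA stands in for the robust principal component transformation used in the paper, and the compositions are synthetic rather than Colorado soil data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def ilr(x):
    """Isometric log-ratio transform of compositions (rows are positive, sum to 1)."""
    z = np.log(x)
    d = x.shape[1]
    out = np.empty((x.shape[0], d - 1))
    for j in range(1, d):
        g = z[:, :j].mean(axis=1)                  # mean log of the first j parts
        out[:, j - 1] = np.sqrt(j / (j + 1)) * (g - z[:, j])
    return out

rng = np.random.default_rng(0)
comp = rng.dirichlet([5.0, 2.0, 1.0, 1.0], size=300)   # synthetic 4-part compositions

# Dimension reduction before clustering, as in the modified procedure
scores = PCA(n_components=2).fit_transform(ilr(comp))
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(scores)
```

The number of clusters (3) and of principal components (2) are exactly the subjective choices the abstract flags as the procedure's main disadvantage.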
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided higher signal-to-noise ratio (SNR) and those reconstructed from the estimated log-spectra produced lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
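The generative model above can be sampled in a few lines: draw a log-spectrum from a GMM, then draw the frequency coefficient from a zero-mean Gaussian whose variance is the exponential of that log-spectrum. All component parameters below are illustrative, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

# GMM prior over log-spectra (two components; numbers are illustrative)
means = np.array([-1.0, 2.0])
stds = np.array([0.7, 0.7])
weights = np.array([0.6, 0.4])

k = rng.choice(2, size=10_000, p=weights)       # mixture component per frame/bin
log_spec = rng.normal(means[k], stds[k])        # log-spectrum drawn from the GMM

# Frequency coefficient: zero-mean Gaussian whose variance is exp(log-spectrum),
# i.e. the log-spectrum acts as a random scale -> a Gaussian scale mixture
coeff = rng.normal(0.0, np.sqrt(np.exp(log_spec)))
```

Because the variance itself is random, the marginal distribution of `coeff` is heavy-tailed (excess kurtosis), which is exactly what makes the GSMM a better fit for speech coefficients than a single Gaussian.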
Archambeau, Cédric; Verleysen, Michel
2007-01-01
A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Chong
We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.
Pore-scale modeling of phase change in porous media
NASA Astrophysics Data System (ADS)
Juanes, Ruben; Cueto-Felgueroso, Luis; Fu, Xiaojing
2017-11-01
One of the main open challenges in pore-scale modeling is the direct simulation of flows involving multicomponent mixtures with complex phase behavior. Reservoir fluid mixtures are often described through cubic equations of state, which makes diffuse interface, or phase field theories, particularly appealing as a modeling framework. What is still unclear is whether equation-of-state-driven diffuse-interface models can adequately describe processes where surface tension and wetting phenomena play an important role. Here we present a diffuse interface model of single-component, two-phase flow (a van der Waals fluid) in a porous medium under different wetting conditions. We propose a simplified Darcy-Korteweg model that is appropriate to describe flow in a Hele-Shaw cell or a micromodel, with a gap-averaged velocity. We study the ability of the diffuse-interface model to capture capillary pressure and the dynamics of vaporization/condensation fronts, and show that the model reproduces pressure fluctuations that emerge from abrupt interface displacements (Haines jumps) and from the break-up of wetting films.
Development of a Scale-up Tool for Pervaporation Processes
Thiess, Holger; Strube, Jochen
2018-01-01
In this study, an engineering tool for the design and optimization of pervaporation processes is developed based on physico-chemical modelling coupled with laboratory/mini-plant experiments. The model incorporates the solution-diffusion-mechanism, polarization effects (concentration and temperature), axial dispersion, pressure drop and the temperature drop in the feed channel due to vaporization of the permeating components. The permeance, being the key model parameter, was determined via dehydration experiments on a mini-plant scale for the binary mixtures ethanol/water and ethyl acetate/water. A second set of experimental data was utilized for the validation of the model for two chemical systems. The industrially relevant ternary mixture, ethanol/ethyl acetate/water, was investigated close to its azeotropic point and compared to a simulation conducted with the determined binary permeance data. Experimental and simulation data proved to agree very well for the investigated process conditions. In order to test the scalability of the developed engineering tool, large-scale data from an industrial pervaporation plant used for the dehydration of ethanol was compared to a process simulation conducted with the validated physico-chemical model. Since the membranes employed in both mini-plant and industrial scale were of the same type, the permeance data could be transferred. The comparison of the measured and simulated data proved the scalability of the derived model. PMID:29342956
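In the solution-diffusion picture underlying the model, each component's partial flux is its permeance times the trans-membrane partial-pressure (fugacity) difference; a schematic sketch with made-up numbers, not the fitted permeance data from the study:

```python
def pervap_flux(Q, x, gamma, p_sat, y_perm, p_perm):
    """Solution-diffusion partial flux through a pervaporation membrane.

    Q: permeance; x: feed-side mole fraction; gamma: activity coefficient;
    p_sat: pure-component saturation pressure; y_perm: permeate mole fraction;
    p_perm: total permeate pressure. Units are schematic (pressures in mbar).
    """
    return Q * (x * gamma * p_sat - y_perm * p_perm)

# Water flux in an ethanol-dehydration step (all values illustrative):
J_w = pervap_flux(Q=2.0e-3, x=0.10, gamma=2.5, p_sat=70.0,
                  y_perm=0.95, p_perm=10.0)   # driving force 17.5 - 9.5 = 8 mbar
```

Lowering the permeate pressure `p_perm` widens the driving force, which is why vacuum on the permeate side is the usual operating mode; the full model in the paper additionally couples this flux law to polarization, dispersion, and the temperature drop from vaporization.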
A two-fluid model for avalanche and debris flows.
Pitman, E Bruce; Le, Long
2005-07-15
Geophysical mass flows--debris flows, avalanches, landslides--can contain O(10^6-10^10) m^3 or more of material, often a mixture of soil and rocks with a significant quantity of interstitial fluid. These flows can be tens of meters in depth and hundreds of meters in length. The range of scales and the rheology of this mixture present significant modelling and computational challenges. This paper describes a depth-averaged 'thin layer' model of geophysical mass flows containing a mixture of solid material and fluid. The model is derived from a 'two-phase' or 'two-fluid' system of equations commonly used in engineering research. Phenomenological modelling and depth averaging combine to yield a tractable set of equations, a hyperbolic system that describes the motion of the two constituent phases. If the fluid inertia is small, a reduced model system that is easier to solve may be derived.
Kinetic model of water disinfection using peracetic acid including synergistic effects.
Flores, Marina J; Brandi, Rodolfo J; Cassano, Alberto E; Labas, Marisol D
2016-01-01
The disinfection efficiencies of a commercial mixture of peracetic acid against Escherichia coli were studied in laboratory scale experiments. The joint and separate action of two disinfectant agents, hydrogen peroxide and peracetic acid, were evaluated in order to observe synergistic effects. A kinetic model for each component of the mixture and for the commercial mixture was proposed. Through simple mathematical equations, the model describes different stages of attack by disinfectants during the inactivation process. Based on the experiments and the kinetic parameters obtained, it could be established that the efficiency of hydrogen peroxide was much lower than that of peracetic acid alone. However, the contribution of hydrogen peroxide was very important in the commercial mixture. It should be noted that this improvement occurred only after peracetic acid had initiated the attack on the cell. This synergistic effect was successfully explained by the proposed scheme and was verified by experimental results. Besides providing a clearer mechanistic understanding of water disinfection, such models may improve our ability to design reactors.
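As a baseline for such kinetic models, the classical Chick-Watson law gives log-linear inactivation in disinfectant concentration and contact time; the paper's multi-stage scheme goes beyond this, and the rate constant and concentrations below are illustrative:

```python
import numpy as np

def chick_watson(N0, C, k, t, n=1.0):
    """Chick-Watson survival model: N(t) = N0 * exp(-k * C**n * t).

    N0: initial count (CFU/mL); C: disinfectant concentration (mg/L);
    k: lethality coefficient; n: dilution exponent; t: contact time (min).
    """
    return N0 * np.exp(-k * C**n * t)

t = np.linspace(0.0, 30.0, 7)                      # contact times, min
N = chick_watson(N0=1e6, C=2.0, k=0.15, t=t)       # survivors at 2 mg/L (illustrative)
```

Synergy of the kind reported above shows up as a departure from this baseline: the mixture inactivates faster than the sum of the single-disinfectant predictions once peracetic acid has initiated the attack.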
Vaj, Claudia; Barmaz, Stefania; Sørensen, Peter Borgen; Spurgeon, David; Vighi, Marco
2011-11-01
Mixture toxicity is a real-world problem and as such requires risk assessment solutions that can be applied within different geographic regions, across different spatial scales and in situations where the quantity of data available for the assessment varies. Moreover, the need for site-specific procedures for assessing ecotoxicological risk for non-target species in non-target ecosystems also has to be recognised. The work presented in the paper addresses the real-world effects of pesticide mixtures on natural communities. Initially, the location of risk hotspots is theoretically estimated through exposure modelling and the use of available toxicity data to predict potential community effects. The concept of Concentration Addition (CA) is applied to describe responses resulting from exposure to multiple pesticides. The developed and refined exposure models are georeferenced (GIS-based) and include environmental and physico-chemical parameters, and site-specific information on pesticide usage and land use. As a test of the risk assessment framework, the procedures have been applied to a suitable study area, the River Meolo basin (Northern Italy), a catchment characterised by intensive agriculture, as well as to a comparative area for some assessments. Within the studied areas, the risks for individual chemicals and complex mixtures have been assessed on aquatic and terrestrial aboveground and belowground communities. Results from ecological surveys have been used to validate these risk assessment model predictions. The value and limitations of the approaches are described and the possibilities for larger-scale applications in risk assessment are also discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
Prospective aquatic risk assessment for chemical mixtures in agricultural landscapes
Holmes, Christopher M; Brown, Colin D; Hamer, Mick; Jones, Russell; Maltby, Lorraine; Posthuma, Leo; Silberhorn, Eric; Teeter, Jerold Scott; Warne, Michael St J; Weltje, Lennart
2018-01-01
Environmental risk assessment of chemical mixtures is challenging because of the multitude of possible combinations that may occur. Aquatic risk from chemical mixtures in an agricultural landscape was evaluated prospectively in 2 exposure scenario case studies: at field scale for a program of 13 plant‐protection products applied annually for 20 yr and at a watershed scale for a mixed land‐use scenario over 30 yr with 12 plant‐protection products and 2 veterinary pharmaceuticals used for beef cattle. Risk quotients were calculated from regulatory exposure models with typical real‐world use patterns and regulatory acceptable concentrations for individual chemicals. The results could differentiate situations when there was concern associated with single chemicals from those when concern was associated with a mixture (based on concentration addition) with no single chemical triggering concern. Potential mixture risk was identified on 0.02 to 7.07% of the total days modeled, depending on the scenario, the taxa, and whether considering acute or chronic risk. Taxa at risk were influenced by receiving water body characteristics along with chemical use profiles and associated properties. The present study demonstrates that a scenario‐based approach can be used to determine whether mixtures of chemicals pose risks over and above any identified using existing approaches for single chemicals, how often and to what magnitude, and ultimately which mixtures (and dominant chemicals) cause greatest concern. Environ Toxicol Chem 2018;37:674–689. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC. PMID:29193235
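Under concentration addition, the mixture risk quotient is simply the sum of the per-chemical risk quotients (predicted concentration over regulatory acceptable concentration); a minimal sketch with made-up concentrations:

```python
def mixture_risk_quotient(concentrations, acceptable_concentrations):
    """Concentration-addition mixture risk quotient: sum of per-chemical
    quotients (exposure concentration / regulatory acceptable concentration)."""
    return sum(c / rac for c, rac in zip(concentrations, acceptable_concentrations))

# Three chemicals, none individually above its acceptable concentration
# (illustrative values, not modeled exposures from the study):
rq = mixture_risk_quotient([0.2, 0.05, 0.1], [1.0, 0.5, 0.4])
# per-chemical quotients: 0.2, 0.1, 0.25 -> mixture quotient 0.55
```

This is how a mixture can trigger concern (quotient above 1) even when no single chemical does: three chemicals each at a quotient of 0.4 give a mixture quotient of 1.2.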
Structure of turbulent non-premixed flames modeled with two-step chemistry
NASA Technical Reports Server (NTRS)
Chen, J. H.; Mahalingam, S.; Puri, I. K.; Vervisch, L.
1992-01-01
Direct numerical simulations of turbulent diffusion flames modeled with finite-rate, two-step chemistry, A + B yields I, A + I yields P, were carried out. A detailed analysis of the turbulent flame structure reveals the complex nature of the penetration of various reactive species across two reaction zones in mixture fraction space. Due to this two zone structure, these flames were found to be robust, resisting extinction over the parameter ranges investigated. As in single-step computations, mixture fraction dissipation rate and the mixture fraction were found to be statistically correlated. Simulations involving unequal molecular diffusivities suggest that the small scale mixing process and, hence, the turbulent flame structure is sensitive to the Schmidt number.
Spatio-temporal Bayesian model selection for disease mapping
Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K
2016-01-01
Spatio-temporal analysis of small area health data often involves choosing a fixed set of predictors prior to the final model fit. In this paper, we propose a spatio-temporal approach of Bayesian model selection to implement model selection for certain areas of the study region as well as certain years in the study time line. Here, we examine the usefulness of this approach by way of a large-scale simulation study accompanied by a case study. Our results suggest that a special case of the model selection methods, a mixture model allowing a weight parameter to indicate if the appropriate linear predictor is spatial, spatio-temporal, or a mixture of the two, offers the best option to fitting these spatio-temporal models. In addition, the case study illustrates the effectiveness of this mixture model within the model selection setting by easily accommodating lifestyle, socio-economic, and physical environmental variables to select a predominantly spatio-temporal linear predictor. PMID:28070156
Scale Reliability Evaluation with Heterogeneous Populations
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…
An Overview of Mesoscale Modeling Software for Energetic Materials Research
2010-03-01
Topics include multi-phase mixtures and the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS); extensive reviews, lectures and workshops are available on multiscale modeling of materials applications (76-78).
Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teng, S.; Tebby, C.
Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.
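Independent action, the second concept mentioned above, combines the fractional effects of dissimilarly acting compounds multiplicatively (the combined survival is the product of the individual survivals); a minimal sketch:

```python
import numpy as np

def effect_independent_action(effects):
    """Independent action: combined fractional effect E = 1 - prod(1 - E_i)
    for compounds acting by dissimilar mechanisms."""
    effects = np.asarray(effects, dtype=float)
    return 1.0 - np.prod(1.0 - effects)

# Two compounds producing 20% and 30% effect alone (illustrative values):
e = effect_independent_action([0.2, 0.3])   # 1 - 0.8 * 0.7 = 0.44
```

Concentration addition instead sums dose-scaled contributions before applying a shared dose-response curve; comparing the two predictions against measured mixture responses is how interactions such as the isoeugenol/benzophenone-2 effect above are detected.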
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
Connolly, John; Sebastià, Maria-Teresa; Kirwan, Laura; Finn, John Anthony; Llurba, Rosa; Suter, Matthias; Collins, Rosemary P; Porqueddu, Claudio; Helgadóttir, Áslaug; Baadshaug, Ole H; Bélanger, Gilles; Black, Alistair; Brophy, Caroline; Čop, Jure; Dalmannsdóttir, Sigridur; Delgado, Ignacio; Elgersma, Anjo; Fothergill, Michael; Frankow-Lindberg, Bodil E; Ghesquiere, An; Golinski, Piotr; Grieu, Philippe; Gustavsson, Anne-Maj; Höglind, Mats; Huguenin-Elie, Olivier; Jørgensen, Marit; Kadziuliene, Zydre; Lunnan, Tor; Nykanen-Kurki, Paivi; Ribas, Angela; Taube, Friedhelm; Thumm, Ulrich; De Vliegher, Alex; Lüscher, Andreas
2018-03-01
Grassland diversity can support sustainable intensification of grassland production through increased yields, reduced inputs and limited weed invasion. We report the effects of diversity on weed suppression from 3 years of a 31-site continental-scale field experiment.
At each site, 15 grassland communities comprising four monocultures and 11 four-species mixtures based on a wide range of species' proportions were sown at two densities and managed by cutting. Forage species were selected according to two crossed functional traits, "method of nitrogen acquisition" and "pattern of temporal development".
Across sites, years and sown densities, annual weed biomass in mixtures and monocultures was 0.5 and 2.0 t DM ha^-1 (7% and 33% of total biomass respectively). Over 95% of mixtures had weed biomass lower than the average of monocultures, and in two-thirds of cases, lower than in the most suppressive monoculture (transgressive suppression). Suppression was significantly transgressive for 58% of site-years. Transgressive suppression by mixtures was maintained across years, independent of site productivity.
Based on models, average weed biomass in mixture over the whole experiment was 52% less (95% confidence interval: 30%-75%) than in the most suppressive monoculture. Transgressive suppression of weed biomass was significant in each year across all mixtures and for each mixture.
Weed biomass was consistently low across all mixtures and years and was in some cases significantly but not largely different from that in the equiproportional mixture. The average variability (standard deviation) of annual weed biomass within a site was much lower for mixtures (0.42) than for monocultures (1.77).
Synthesis and applications. Weed invasion can be diminished through a combination of forage species selected for complementarity and persistence traits in systems designed to reduce reliance on fertiliser nitrogen.
In this study, effects of diversity on weed suppression were consistently strong across mixtures varying widely in species' proportions and over time. The level of weed biomass did not vary greatly across mixtures varying widely in proportions of sown species. These diversity benefits in intensively managed grasslands are relevant for the sustainable intensification of agriculture and, importantly, are achievable through practical farm-scale actions.
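Transgressive suppression, as used above, means the mixture suppresses weeds better than even the best-performing monoculture; a toy check with illustrative biomass values echoing the reported 0.5 vs 2.0 t DM ha^-1 averages:

```python
# Annual weed biomass (t DM/ha) in four monocultures vs. a 4-species mixture.
# Values are illustrative, chosen to echo the averages reported above.
monocultures = {"grass_fast": 2.4, "grass_persistent": 1.1,
                "legume_fast": 2.9, "legume_persistent": 1.6}
mixture_weed = 0.5

# Ordinary suppression: mixture beats the monoculture average.
beats_average = mixture_weed < sum(monocultures.values()) / len(monocultures)

# Transgressive suppression: mixture beats the single best monoculture.
transgressive = mixture_weed < min(monocultures.values())
```

In the experiment, the second, stricter criterion held in two-thirds of cases, which is the headline diversity benefit.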
Numerical investigation of a helicopter combustion chamber using LES and tabulated chemistry
NASA Astrophysics Data System (ADS)
Auzillon, Pierre; Riber, Eléonore; Gicquel, Laurent Y. M.; Gicquel, Olivier; Darabiha, Nasser; Veynante, Denis; Fiorina, Benoît
2013-01-01
This article presents Large Eddy Simulations (LES) of a realistic aeronautical combustor device: the chamber CTA1 designed by TURBOMECA. Under nominal operating conditions, experiments show hot spots observed on the combustor walls, in the vicinity of the injectors. These high temperature regions disappear when modifying the fuel stream equivalence ratio. In order to account for detailed chemistry effects within LES, the numerical simulation uses the recently developed turbulent combustion model F-TACLES (Filtered TAbulated Chemistry for LES). The principle of this model is first to generate a lookup table where thermochemical variables are computed from a set of filtered laminar unstrained premixed flamelets. To model the interactions between the flame and the turbulence at the subgrid scale, a flame wrinkling analytical model is introduced and the Filtered Density Function (FDF) of the mixture fraction is modeled by a β function. Filtered thermochemical quantities are stored as a function of three coordinates: the filtered progress variable, the filtered mixture fraction and the mixture fraction subgrid scale variance. The chemical lookup table is then coupled with the LES using a mathematical formalism that ensures an accurate prediction of the flame dynamics. The numerical simulation of the CTA1 chamber with the F-TACLES turbulent combustion model reproduces the temperature fields observed in experiments fairly well. In particular, the influence of the fuel stream equivalence ratio on the flame position is well captured.
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M. O.; Johnson, P. E.
1986-01-01
A Viking Lander 1 image was modeled as mixtures of reflectance spectra of palagonite dust, gray andesitelike rock, and a coarse rocklike soil. The rocks are covered to varying degrees by dust but otherwise appear unweathered. Rocklike soil occurs as lag deposits in deflation zones around stones and on top of a drift and as a layer in a trench dug by the lander. This soil probably is derived from the rocks by wind abrasion and/or spallation. Dust is the major component of the soil and covers most of the surface. The dust is unrelated spectrally to the rock but is equivalent to the global-scale dust observed telescopically. A new method was developed to model a multispectral image as mixtures of end-member spectra and to compare image spectra directly with laboratory reference spectra. The method for the first time uses shade and secondary illumination effects as spectral end-members; thus the effects of topography and illumination on all scales can be isolated or removed. The image was calibrated absolutely from the laboratory spectra, in close agreement with direct calibrations. The method has broad applications to interpreting multispectral images, including satellite images.
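The unmixing method described above can be sketched as a non-negative linear inversion against a small endmember matrix that includes a shade endmember. The spectra below are synthetic stand-ins, not the actual Viking Lander or laboratory spectra.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic endmember spectra (columns: dust, rock, soil, shade); the
# near-zero shade endmember absorbs illumination/topography effects.
bands = np.linspace(0.4, 1.1, 50)                 # wavelength, micrometres
x = bands - 0.4
dust  = 0.15 + 0.35 * (1.0 - np.exp(-x / 0.3))    # bright, red-sloped
rock  = 0.10 + 0.15 * x                           # dark, gently sloped
soil  = 0.08 + 0.20 * x - 0.04 * np.exp(-((bands - 0.95) / 0.05) ** 2)
shade = np.full_like(bands, 0.01)
E = np.column_stack([dust, rock, soil, shade])

# Synthetic mixed pixel: 50% dust, 20% rock, 10% soil, 20% shade
truth = np.array([0.5, 0.2, 0.1, 0.2])
pixel = E @ truth

fractions, _ = nnls(E, pixel)                     # non-negative least squares
fractions /= fractions.sum()                      # normalize to unit sum
```

On noise-free data the recovered fractions match the true abundances; with real imagery, residual error and endmember correlation (as the abstracts note) limit this accuracy.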
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper discusses Weibull mixtures for characterizing the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
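The one-Weibull versus multi-Weibull comparison can be sketched by fitting both by maximum likelihood and comparing AIC/BIC, as the paper does. The bimodal sample below is synthetic, not wind power data, and the optimizer setup is illustrative.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = np.concatenate([
    weibull_min.rvs(1.8, scale=4.0, size=600, random_state=rng),
    weibull_min.rvs(3.0, scale=9.0, size=400, random_state=rng),
])

def nll_two(p):
    # p: [logit weight, log shape1, log scale1, log shape2, log scale2]
    w = 1.0 / (1.0 + np.exp(-p[0]))
    pdf = (w * weibull_min.pdf(data, np.exp(p[1]), scale=np.exp(p[2]))
           + (1.0 - w) * weibull_min.pdf(data, np.exp(p[3]), scale=np.exp(p[4])))
    return -np.sum(np.log(pdf + 1e-300))

def nll_one(p):
    return -np.sum(np.log(weibull_min.pdf(data, np.exp(p[0]), scale=np.exp(p[1])) + 1e-300))

fit2 = minimize(nll_two, x0=[0.0, np.log(2.0), np.log(3.0), np.log(3.0), np.log(8.0)],
                method="Nelder-Mead", options={"maxiter": 5000, "fatol": 1e-9})
fit1 = minimize(nll_one, x0=[np.log(2.0), np.log(5.0)], method="Nelder-Mead")

n = data.size
aic2, aic1 = 2 * 5 + 2 * fit2.fun, 2 * 2 + 2 * fit1.fun
bic2, bic1 = 5 * np.log(n) + 2 * fit2.fun, 2 * np.log(n) + 2 * fit1.fun
```

For a genuinely bimodal sample like this one, both criteria favor the two-component mixture despite its extra parameters.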
Motility versus fluctuations in mixtures of self-motile and passive agents.
Hinz, Denis F; Panchenko, Alexander; Kim, Tae-Yeon; Fried, Eliot
2014-12-07
Many biological systems consist of self-motile and passive agents both of which contribute to overall functionality. However, little is known about the properties of such mixtures. Here we formulate a model for mixtures of self-motile and passive agents and show that the model gives rise to three different dynamical phases: a disordered mesoturbulent phase, a polar flocking phase, and a vortical phase characterized by large-scale counter rotating vortices. We use numerical simulations to construct a phase diagram and compare the statistical properties of the different phases with observed features of self-motile bacterial suspensions. Our findings afford specific insights regarding the interaction of microorganisms and passive particles and provide novel strategic guidance for efficient technological realizations of artificial active matter.
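A crude two-species sketch of such a mixture can be written with a Vicsek-style alignment rule for the motile agents and passive agents advected by their motile neighbors. This is not the authors' model, and every parameter value below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_act, n_pas = 10.0, 200, 100
v0, radius, noise, dt = 0.05, 1.0, 0.3, 1.0

pos = rng.uniform(0.0, L, size=(n_act + n_pas, 2))
theta = rng.uniform(-np.pi, np.pi, n_act)          # headings of motile agents

for _ in range(200):
    # Minimum-image displacements between motile agents (periodic box)
    dx = pos[:n_act, None, :] - pos[None, :n_act, :]
    dx -= L * np.round(dx / L)
    near = (dx ** 2).sum(-1) < radius ** 2          # includes self
    # Align with the mean heading of neighbours, plus angular noise
    s = (near * np.sin(theta)).sum(1) / near.sum(1)
    c = (near * np.cos(theta)).sum(1) / near.sum(1)
    theta = np.arctan2(s, c) + noise * rng.uniform(-np.pi, np.pi, n_act)
    v_act = v0 * np.column_stack([np.cos(theta), np.sin(theta)])
    # Passive agents take a distance-weighted average of motile velocities
    d = pos[n_act:, None, :] - pos[None, :n_act, :]
    d -= L * np.round(d / L)
    w = np.exp(-(d ** 2).sum(-1) / radius ** 2)
    v_pas = (w[:, :, None] * v_act[None]).sum(1) / (w.sum(1)[:, None] + 1e-12)
    pos[:n_act] = (pos[:n_act] + dt * v_act) % L
    pos[n_act:] = (pos[n_act:] + dt * v_pas) % L

# Polar order parameter of the motile fraction (1 = perfect flocking)
order = float(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```

Sweeping noise and the motile/passive ratio in a sketch like this is one way to map out the kind of phase diagram the abstract describes.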
Chemical mixtures in potable water in the U.S.
Ryker, Sarah J.
2014-01-01
In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. 
Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.
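The interaction-based approach can be sketched as a hazard index whose per-chemical hazard quotients are scaled by pairwise interaction factors, in the spirit of the EPA/ATSDR proposals described above. All concentrations, reference values, interaction factors, and the weighting scheme itself are hypothetical illustration choices, not regulatory numbers or the agencies' exact formulas.

```python
conc = {"arsenic": 8.0, "cadmium": 3.0, "manganese": 150.0}   # ug/L, hypothetical
ref  = {"arsenic": 10.0, "cadmium": 5.0, "manganese": 300.0}  # ug/L, hypothetical

# Hazard quotient per chemical and the plain additive hazard index
hq = {c: conc[c] / ref[c] for c in conc}
hi_additive = sum(hq.values())

# Pairwise interaction factors (>1 synergy, <1 antagonism), hypothetical
interaction = {("arsenic", "cadmium"): 1.5,
               ("arsenic", "manganese"): 1.0,
               ("cadmium", "manganese"): 0.8}

def pair_factor(a, b):
    # Interaction magnitude for an unordered pair, defaulting to additivity
    return interaction.get((a, b), interaction.get((b, a), 1.0))

# Scale each hazard quotient by the exposure-weighted interaction with the
# other mixture components, then sum
hi_interaction = 0.0
for a in hq:
    others = [b for b in hq if b != a]
    weight = (sum(hq[b] * pair_factor(a, b) for b in others)
              / sum(hq[b] for b in others))
    hi_interaction += hq[a] * weight
```

With a net-synergistic set of factors, the interaction-weighted index exceeds the additive one; mapping such indices per water source at county scale is the kind of analysis the chapter describes.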
Pressure and Chemical Potential: Effects Hydrophilic Soils Have on Adsorption and Transport
NASA Astrophysics Data System (ADS)
Bennethum, L. S.; Weinstein, T.
2003-12-01
Using the assumption that the thermodynamic properties of a fluid are affected by its proximity to the solid phase, a theoretical model has been developed based on upscaling and fundamental thermodynamic principles (termed Hybrid Mixture Theory). The theory indicates that Darcy's law and the Darcy-scale chemical potential (which determines the rate of adsorption and diffusion) need to be modified in order to apply to hydrophilic soils. In this talk we examine the Darcy-scale definition of pressure and chemical potential, especially as it applies to hydrophilic soils. To arrive at our model, we used hybrid mixture theory, first pioneered by Hassanizadeh and Gray in 1979. The technique involves averaging the field equations (i.e., conservation of mass, momentum balance, energy balance, etc.) to obtain macroscopic field equations, where each field variable is defined precisely in terms of its microscale counterpart. To close the system consistently with classical thermodynamics, the entropy inequality is exploited in the sense of Coleman and Noll. With the exceptions that the macroscale field variables are defined precisely in terms of their microscale counterparts and that microscopic interfacial equations can also be treated in a similar manner, the resulting system of equations is consistent with those derived using classical mixture theory; hence the terminology, Hybrid Mixture Theory.
Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model
ERIC Educational Resources Information Center
Von Davier, Matthias; Yamamoto, Kentaro
2004-01-01
The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…
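The GPCM category probabilities the abstract refers to can be written as a cumulative-logit softmax over step parameters; the item parameters below are hypothetical.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities of the generalized partial-credit model for
    ability theta, discrimination a, and step parameters b (m steps give
    m + 1 score categories)."""
    steps = np.concatenate([[0.0], a * (theta - np.asarray(b, float))])
    logits = np.cumsum(steps)
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()

# Hypothetical 4-category item; with a = 1 the GPCM reduces to the
# partial-credit (Rasch) special case mentioned in the abstract.
p = gpcm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5])
```

Fixing a = 1 recovers the partial-credit model, and two categories recover the 2PL, which is why the GPCM nests the special cases listed above.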
Kelley, Mary E.; Anderson, Stewart J.
2008-01-01
The aim of the paper is to produce a methodology that allows users of ordinal scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as “none”, “never” or “normal”, and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be non-informative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale, while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved and that a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model, however, does have minor restrictions on the nature of the covariates that must be satisfied in order for the model to be identifiable. The method is particularly relevant for public health research such as large epidemiological surveys, where more careful documentation of the reasons for response may be difficult. PMID:18351711
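The zero-inflated mixture described above can be sketched as a two-part probability model: a susceptibility probability mixed with proportional-odds category probabilities, with the non-susceptible mass added to the anchor category. All parameter values are hypothetical.

```python
import numpy as np

def zipo_probs(eta, p_susceptible, cutpoints):
    """Category probabilities of a zero-inflated proportional-odds mixture:
    with probability (1 - p_susceptible) the subject responds in the zero
    ('none') category; otherwise the response follows a proportional-odds
    model with linear predictor eta."""
    cdf = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints, float) - eta)))
    ordinal = np.diff(np.concatenate([[0.0], cdf, [1.0]]))
    probs = p_susceptible * ordinal
    probs[0] += 1.0 - p_susceptible       # zero inflation of the anchor category
    return probs

# Hypothetical 4-category symptom scale anchored at "none"
p = zipo_probs(eta=0.3, p_susceptible=0.6, cutpoints=[-0.5, 0.8, 2.0])
```

A likelihood built from these probabilities, compared against the plain proportional-odds likelihood, is exactly the kind of contrast the paper's likelihood ratio test makes.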
A Raman chemical imaging system for detection of contaminants in food
NASA Astrophysics Data System (ADS)
Chao, Kaunglin; Qin, Jianwei; Kim, Moon S.; Mo, Chang Yeon
2011-06-01
This study presented a preliminary investigation into the use of macro-scale Raman chemical imaging for screening dry milk powder for the presence of chemical contaminants. Melamine was mixed into dry milk at concentrations (w/w) of 0.2%, 0.5%, 1.0%, 2.0%, 5.0%, and 10.0%, and images of the mixtures were analyzed by a spectral information divergence algorithm. Ammonium sulfate, dicyandiamide, and urea were each separately mixed into dry milk at concentrations (w/w) of 0.5%, 1.0%, and 5.0%, and an algorithm based on self-modeling mixture analysis was applied to these sample images. The contaminants were successfully detected, and the spatial distribution of the contaminants within the sample mixtures was visualized using these algorithms. Although further studies are necessary, macro-scale Raman chemical imaging shows promise for detecting contaminants in food ingredients and may also be useful for authentication of food ingredients.
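The spectral information divergence used above is the symmetric sum of relative entropies between band-normalized spectra; a small divergence against a reference flags a likely contaminant pixel. The spectra below are made-up five-band vectors, not Raman data.

```python
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral information divergence between two spectra: the symmetric
    sum of relative entropies of their band-normalized distributions."""
    p = np.asarray(x, float) + eps; p /= p.sum()
    q = np.asarray(y, float) + eps; q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Hypothetical reference vs. pixel spectra; thresholding SID against the
# reference separates contaminant-like pixels from the milk background.
melamine_ref = np.array([0.10, 0.80, 0.30, 0.05, 0.60])
pixel_like   = np.array([0.12, 0.75, 0.33, 0.07, 0.55])
pixel_unlike = np.array([0.50, 0.10, 0.40, 0.60, 0.20])
s_like = sid(melamine_ref, pixel_like)
s_unlike = sid(melamine_ref, pixel_unlike)
```

Applied per pixel of a hyperspectral image, this yields the contaminant distribution maps the abstract describes.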
A robust quantitative near infrared modeling approach for blend monitoring.
Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A
2018-01-30
This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials.
Zadpoor, Amir A
2015-03-01
Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing such types of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. As for cartilage experiments, FMMs indicate that at least three mixture components are needed for describing the measured histogram. While the mechanical properties of the softer mixture components, often assumed to be associated with Glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. collagen network) considerably changed depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use. 
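The finite Gaussian mixture procedure described above can be sketched by fitting 1-5 components to a synthetic "nanoindentation modulus" histogram and picking the component count by an objective criterion such as BIC. The three phases below are illustrative, not cartilage data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Synthetic moduli (arbitrary units): two soft phases and one stiff phase
# mixed together in a single measured histogram
moduli = np.concatenate([rng.normal(0.5, 0.1, 300),
                         rng.normal(1.5, 0.2, 200),
                         rng.normal(4.0, 0.5, 100)]).reshape(-1, 1)

# Fit candidate mixture sizes and select the one minimizing BIC
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(moduli)
        for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(moduli))
weights = fits[best_k].weights_       # mixture proportions of the phases
```

The selected means, standard deviations, and weights then describe the constituent distributions hidden in the histogram, as the paper does for cartilage constituents and co-polymer components.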
Evaluating Vegetation Type Effects on Land Surface Temperature at the City Scale
NASA Astrophysics Data System (ADS)
Wetherley, E. B.; McFadden, J. P.; Roberts, D. A.
2017-12-01
Understanding the effects of different plant functional types and urban materials on surface temperatures has significant consequences for climate modeling, water management, and human health in cities. To date, doing so at the urban scale has been complicated by small-scale surface heterogeneity and limited data. In this study we examined gradients of land surface temperature (LST) across sub-pixel mixtures of different vegetation types and urban materials across the entire Los Angeles, CA, metropolitan area (4,283 km²). We used AVIRIS airborne hyperspectral imagery (36 m resolution, 224 bands, 0.35-2.5 μm) to estimate sub-pixel fractions of impervious, pervious, tree, and turfgrass surfaces, validating them with simulated mixtures constructed from image spectra. We then used simultaneously imaged LST retrievals collected at multiple times of day to examine how temperature changed along gradients of the sub-pixel mixtures. Diurnal in situ LST measurements were used to confirm image values. Sub-pixel fractions were well correlated with simulated validation data for turfgrass (r² = 0.71), tree (r² = 0.77), impervious (r² = 0.77), and pervious (r² = 0.83) surfaces. The LST of pure pixels showed the effects of both the diurnal cycle and the surface type, with vegetated classes having a smaller diurnal temperature range of 11.6°C whereas non-vegetated classes had a diurnal range of 16.2°C (similar to in situ measurements collected simultaneously with the imagery). Observed LST across fractional gradients of turf/impervious and tree/impervious sub-pixel mixtures decreased linearly with increasing vegetation fraction. The slopes of decreasing LST were significantly different between tree and turf mixtures, with steeper slopes observed for turf (p < 0.05).
These results suggest that different physiological characteristics and different access to irrigation water of urban trees and turfgrass results in significantly different LST effects, which can be detected at large scales in fractional mixture analysis.
Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun
2017-01-01
With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex, gray information and texture features between docked ships and their connected dock regions are indistinguishable, most of the popular detection methods are limited by their calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236
Scaling studies of solar pumped lasers
NASA Astrophysics Data System (ADS)
Christiansen, W. H.; Chang, J.
1985-08-01
A progress report on scaling studies of solar pumped lasers is presented. Conversion of blackbody radiation into laser light has been demonstrated in this study. Parametric studies of the variation of laser mixture composition and laser gas temperature were carried out for CO2 and N2O gases. Theoretical analysis and modeling of the system have been performed, and reasonable agreement between the predicted parameter variations and the experimental results has been obtained. Almost 200 mW of laser output at 10.6 μm was achieved by placing a small sapphire laser tube inside an oven at 1500 K; the tube was filled with a CO2 laser gas mixture and cooled by a longitudinal nitrogen gas flow.
Spectral mixture analyses of hyperspectral data acquired using a tethered balloon
Chen, Xuexia; Vierling, Lee
2006-01-01
Tethered balloon remote sensing platforms can be used to study radiometric issues in terrestrial ecosystems by effectively bridging the spatial gap between measurements made on the ground and those acquired via airplane or satellite. In this study, the Short Wave Aerostat-Mounted Imager (SWAMI) tethered balloon-mounted platform was utilized to evaluate linear and nonlinear spectral mixture analysis (SMA) for a grassland-conifer forest ecotone during the summer of 2003. Hyperspectral measurement of a 74-m diameter ground instantaneous field of view (GIFOV) attained by the SWAMI was studied. Hyperspectral spectra of four common endmembers, bare soil, grass, tree, and shadow, were collected in situ, and images captured via video camera were interpreted into accurate areal ground cover fractions for evaluating the mixture models. The comparison between the SWAMI spectrum and the spectrum derived by combining in situ spectral data with video-derived areal fractions indicated that nonlinear effects occurred in the near infrared (NIR) region, while nonlinear influences were minimal in the visible region. The evaluation of hyperspectral and multispectral mixture models indicated that nonlinear mixture model-derived areal fractions were sensitive to the model input data, while the linear mixture model performed more stably. Areal fractions of bare soil were overestimated in all models due to the increased radiance of bare soil resulting from side scattering of NIR radiation by adjacent grass and trees. Unmixing errors occurred mainly due to multiple scattering as well as close endmember spectral correlation. In addition, though an apparent endmember assemblage could be derived using linear approaches to yield low residual error, the tree and shade endmember fractions calculated using this technique were erroneous and therefore separate treatment of endmembers subject to high amounts of multiple scattering (i.e. shadows and trees) must be done with caution. 
Including the short wave infrared (SWIR) region in the hyperspectral and multispectral endmember data significantly reduced the Pearson correlation coefficient values among endmember spectra. Therefore, combining visible, NIR, and SWIR information is likely to further improve the utility of SMA in understanding ecosystem structure and function and may help narrow uncertainties when utilizing remotely sensed data to extrapolate trace gas flux measurements from the canopy scale to the landscape scale.
NASA Astrophysics Data System (ADS)
Zohdi, T. I.
2017-07-01
A key part of emerging advanced additive manufacturing methods is the deposition of specialized particulate mixtures of materials on substrates. For example, in many cases these materials are polydisperse powder mixtures whereby one set of particles is chosen with the objective to electrically, thermally or mechanically functionalize the overall mixture material and another set of finer-scale particles serves as an interstitial filler/binder. Often, achieving controllable, precise, deposition is difficult or impossible using mechanical means alone. It is for this reason that electromagnetically-driven methods are being pursued in industry, whereby the particles are ionized and an electromagnetic field is used to guide them into place. The goal of this work is to develop a model and simulation framework to investigate the behavior of a deposition as a function of an applied electric field. The approach develops a modular discrete-element type method for the simulation of the particle dynamics, which provides researchers with a framework to construct computational tools for this growing industry.
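A minimal discrete-element-style sketch of such electromagnetically assisted deposition follows: ionized particles settle onto a substrate under gravity, Stokes-like drag, and a vertical electric field, integrated with an explicit Euler update. All parameter values are hypothetical illustration values, not from the paper's framework.

```python
import numpy as np

n, dt, steps = 500, 1e-4, 2000
m, q = 1e-9, 1e-12                       # particle mass (kg) and charge (C)
g = np.array([0.0, -9.81])               # gravity, m/s^2
E = np.array([0.0, -2.0e4])              # applied field, V/m (pulls downward)
c_drag = 5e-7                            # Stokes-like drag coefficient, kg/s

rng = np.random.default_rng(5)
pos = np.column_stack([rng.uniform(-1e-3, 1e-3, n),    # x, m
                       rng.uniform(2e-3, 4e-3, n)])    # y above substrate, m
vel = np.zeros((n, 2))

for _ in range(steps):
    force = m * g + q * E - c_drag * vel   # gravity + Coulomb + drag
    vel += dt * force / m                  # explicit Euler update
    pos += dt * vel
    landed = pos[:, 1] <= 0.0              # particles stick on contact
    pos[landed, 1] = 0.0
    vel[landed] = 0.0

deposited_fraction = float(np.mean(pos[:, 1] == 0.0))
```

Varying the field strength E and re-running gives the deposition-versus-field behavior the work sets out to study; a full framework would add particle-particle contact and near-field interactions.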
Physiologically based pharmacokinetic modeling of tea catechin mixture in rats and humans.
Law, Francis C P; Yao, Meicun; Bi, Hui-Chang; Lam, Stephen
2017-06-01
Although green tea ( Camellia sinensis) (GT) contains a large number of polyphenolic compounds with anti-oxidative and anti-proliferative activities, little is known of the pharmacokinetics and tissue dose of tea catechins (TCs) as a chemical mixture in humans. The objectives of this study were to develop and validate a physiologically based pharmacokinetic (PBPK) model of tea catechin mixture (TCM) in rats and humans, and to predict an integrated or total concentration of TCM in the plasma of humans after consuming GT or Polyphenon E (PE). To this end, a PBPK model of epigallocatechin gallate (EGCg) consisting of 13 first-order, blood flow-limited tissue compartments was first developed in rats. The rat model was scaled up to humans by replacing its physiological parameters, pharmacokinetic parameters and tissue/blood partition coefficients (PCs) with human-specific values. Both rat and human EGCg models were then extrapolated to other TCs by substituting its physicochemical parameters, pharmacokinetic parameters, and PCs with catechin-specific values. Finally, a PBPK model of TCM was constructed by linking three rat (or human) tea catechin models together without including a description for pharmacokinetic interaction between the TCs. The mixture PBPK model accurately predicted the pharmacokinetic behaviors of three individual TCs in the plasma of rats and humans after GT or PE consumption. Model-predicted total TCM concentration in the plasma was linearly related to the dose consumed by humans. The mixture PBPK model is able to translate an external dose of TCM into internal target tissue doses for future safety assessment and dose-response analysis studies in humans. The modeling framework as described in this paper is also applicable to the bioactive chemical in other plant-based health products.
Sherzer, Gili; Gao, Peng; Schlangen, Erik; Ye, Guang; Gal, Erez
2017-02-28
Modeling the complex behavior of concrete for a specific mixture is a challenging task, as it requires bridging the cement scale and the concrete scale. We describe a multiscale analysis procedure for the modeling of concrete structures, in which material properties at the macro scale are evaluated based on lower scales. Concrete may be viewed over a range of scale sizes, from the atomic scale (10⁻¹⁰ m), which is characterized by the behavior of crystalline particles of hydrated Portland cement, to the macroscopic scale (10 m). The proposed multiscale framework is based on several models, including chemical analysis at the cement paste scale, a mechanical lattice model at the cement and mortar scales, geometrical aggregate distribution models at the mortar scale, and the Lattice Discrete Particle Model (LDPM) at the concrete scale. The analysis procedure starts from a known chemical and mechanical set of parameters of the cement paste, which are then used to evaluate the mechanical properties of the LDPM concrete parameters for the fracture, shear, and elastic responses of the concrete. Although a macroscopic validation study of this procedure is presented, future research should include a comparison to additional experiments in each scale.
Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an
2013-01-01
Soil water retention parameters are critical to quantifying flow and solute transport in the vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore, a novel method for determining the water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based first on determining the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on integrating this relationship with existing analytical equations for water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine the WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. The data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravel within one size group had a linear relation with gravel content, and a power relation with the bulk density of samples, at any pressure head. Revised formulas for the water retention properties of the soil-gravel mixtures are proposed to establish water retention curved-surface models of power-linear functions and power functions. Analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable using either the measured data of a separate gravel size group or those of all three gravel size groups spanning a large size range. Furthermore, the regression parameters of the curved surfaces for soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of mixtures with two representative gravel contents or bulk densities.
Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present.
PMID:23555040
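The power-linear behavior reported in the Wang et al. abstract (effective saturation linear in gravel content, a power law in bulk density) can be sketched as a simple fitting exercise. This is a minimal illustration with synthetic data; the function names, coefficients, and data below are assumptions, not the paper's actual regression parameters.

```python
import math

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def fit_power(xs, ys):
    """Fit y = c * x**p by linear regression in log-log space."""
    p, logc = fit_linear([math.log(x) for x in xs],
                         [math.log(y) for y in ys])
    return math.exp(logc), p

# Synthetic effective-saturation data at one fixed pressure head:
# Se falls linearly with gravel content g (one gravel size group) ...
gravel = [0.0, 0.1, 0.2, 0.3, 0.4]
se_g = [0.80 - 0.5 * g for g in gravel]
slope, intercept = fit_linear(gravel, se_g)

# ... and follows a power law of bulk density rho_b.
rho = [1.2, 1.4, 1.6, 1.8]
se_rho = [0.5 * r ** -0.8 for r in rho]
c, p = fit_power(rho, se_rho)
```

With measurements at two representative gravel contents or bulk densities, the same two fits would pin down the curved-surface parameters, as the abstract suggests.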
Understanding the ignition mechanism of high-pressure spray flames
Dahms, Rainer N.; Paczko, Günter A.; Skeen, Scott A.; ...
2016-10-25
A conceptual model for turbulent ignition in high-pressure spray flames is presented. The model is motivated by first-principles simulations and optical diagnostics applied to the Sandia n-dodecane experiment. The Lagrangian flamelet equations are combined with full LLNL kinetics (2755 species; 11,173 reactions) to resolve all time and length scales and chemical pathways of the ignition process at engine-relevant pressures and turbulence intensities unattainable using classic DNS. The first-principles value of the flamelet equations is established by a novel chemical explosive mode-diffusion time scale analysis of the fully coupled chemical and turbulent time scales. Contrary to conventional wisdom, this analysis reveals that the high Damköhler number limit, a key requirement for the validity of the flamelet derivation from the reactive Navier–Stokes equations, applies during the entire ignition process. Corroborating Rayleigh-scattering and formaldehyde PLIF measurements with simultaneous schlieren imaging of mixing and combustion are presented. Our combined analysis establishes a characteristic temporal evolution of the ignition process. First, a localized first-stage ignition event consistently occurs in the highest-temperature mixture regions. Owing to the intense scalar dissipation, this initiates a turbulent cool flame wave propagating from the ignition spot through the entire flow field. This wave significantly decreases the ignition delay of lower-temperature mixture regions in comparison to their homogeneous reference. This explains the experimentally observed formaldehyde formation across the entire spray head prior to high-temperature ignition, which consistently occurs first in a broad range of rich mixture regions. There, the combination of the first-stage ignition delay, shortened by the cool flame wave, and the subsequent delay until second-stage ignition becomes minimal. 
A turbulent flame subsequently propagates rapidly through the entire mixture over time scales consistent with experimental observations. As a result, we demonstrate that neglecting turbulence-chemistry interactions fundamentally fails to capture the key features of this ignition process.
Scalable clustering algorithms for continuous environmental flow cytometry.
Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill
2016-02-01
Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code is available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. hyrkas@cs.washington.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
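The core of the Gaussian mixture approach above is the EM algorithm. The actual SeaFlow pipeline is a Java/Hadoop implementation over multidimensional cytometry data; the following is only a minimal pure-Python sketch of EM for a two-component, one-dimensional mixture, with all data synthetic.

```python
import math
import random

def gaussian_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (minimal sketch)."""
    mu = [min(data), max(data)]   # crude initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * gaussian_pdf(x, mu[k], var[k]) for k in (0, 1)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-9
    return w, mu, var

# Two synthetic "populations", loosely analogous to two phytoplankton
# clusters in one scatter channel.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(10.0, 1.0) for _ in range(300)]
w, mu, var = fit_gmm_1d(data)
```

The partitioning step described in the abstract would simply run a fit like this independently on each homogeneous section of the data stream.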
Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution
ERIC Educational Resources Information Center
Verkuilen, Jay; Smithson, Michael
2012-01-01
Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…
Kamarei, Fahimeh; Vajda, Péter; Guiochon, Georges
2013-09-20
This paper compares two methods used for the preparative purification of a mixture of (S)-, and (R)-naproxen on a Whelk-O1 column, using either high performance liquid chromatography or supercritical fluid chromatography. The adsorption properties of both enantiomers were measured by frontal analysis, using methanol-water and methanol-supercritical carbon dioxide mixtures as the mobile phases. The measured adsorption data were modeled, providing the adsorption isotherms and their parameters, which were derived from the nonlinear fit of the isotherm models to the experimental data points. The model used was a Bi-Langmuir isotherm, similar to the model used in many enantiomeric separations. These isotherms were used to calculate the elution profiles of overloaded elution bands, assuming competitive Bi-Langmuir behavior of the two enantiomers. The analysis of these profiles provides the basis for a comparison between supercritical fluid chromatographic and high performance liquid chromatographic preparative scale separations. It permits an illustration of the advantages and disadvantages of these methods and a discussion of their potential performance. Copyright © 2013 Elsevier B.V. All rights reserved.
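A competitive Bi-Langmuir isotherm of the kind fitted above superimposes nonselective and enantioselective adsorption sites, with both enantiomers competing for each site type. The following sketch uses hypothetical parameter values, not the measured ones for naproxen on Whelk-O1.

```python
def bi_langmuir_competitive(c, qns, bns, qs, bs):
    """Competitive Bi-Langmuir loadings q1, q2 for a binary mixture.

    c   : concentrations (c1, c2) of the two enantiomers
    qns : saturation capacity of the nonselective sites
    bns : nonselective equilibrium constants (b1, b2)
    qs  : saturation capacity of the enantioselective sites
    bs  : enantioselective equilibrium constants (b1, b2)
    """
    dns = 1.0 + bns[0] * c[0] + bns[1] * c[1]
    ds = 1.0 + bs[0] * c[0] + bs[1] * c[1]
    return tuple(qns * bns[i] * c[i] / dns + qs * bs[i] * c[i] / ds
                 for i in (0, 1))

# Hypothetical parameters: nonselective sites identical for both
# enantiomers, selective sites favoring the second (more retained) one.
q = bi_langmuir_competitive((1.0, 1.0), qns=10.0, bns=(0.05, 0.05),
                            qs=5.0, bs=(0.02, 0.08))
```

Feeding such loadings into a column mass-balance model is what produces the calculated overloaded elution profiles the abstract compares between HPLC and SFC.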
Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.
Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K
2017-12-01
It is often the case that researchers wish to simultaneously explore the behavior of, and estimate overall risk for, multiple related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large-scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results, which focus on four model variants, suggest that all models possess the ability to recover the simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.
Scaling effect of fraction of vegetation cover retrieved by algorithms based on linear mixture model
NASA Astrophysics Data System (ADS)
Obata, Kenta; Miura, Munenori; Yoshioka, Hiroki
2010-08-01
Differences in spatial resolution among sensors have been a source of error in satellite data products, known as a scaling effect. This study investigates the mechanism of the scaling effect on the fraction of vegetation cover (FVC) retrieved by a linear mixture model which employs NDVI as one of the constraints. The scaling effect is induced by differences in texture and by differences between the true endmember spectra and the endmember spectra assumed during retrievals. The mechanism of the scaling effect was analyzed by focusing on the monotonic behavior of spatially averaged FVC as a function of spatial resolution. The number of endmembers is limited to two so that the investigation can proceed analytically. Although the spatially averaged NDVI varies monotonically with spatial resolution, the corresponding FVC values do not always vary monotonically. The conditions under which the averaged FVC varies monotonically for a certain sequence of spatial resolutions were derived analytically. The increasing or decreasing trend of the monotonic behavior can be predicted from the true and assumed endmember spectra of the vegetation and non-vegetation classes, regardless of the distributions of the vegetation class within a fixed area. The results imply that the scaling effect on FVC is more complicated than that on NDVI, since, unlike NDVI, FVC becomes non-monotonic under a certain condition determined by the true and assumed endmember spectra.
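With two endmembers, the linear mixture retrieval reduces to FVC = (NDVI − NDVI_soil) / (NDVI_veg − NDVI_soil). The sketch below shows how a mismatch between true and assumed endmember spectra biases the retrieved FVC, the mechanism behind the scaling effect discussed above; all endmember values are hypothetical.

```python
def fvc_from_ndvi(ndvi, ndvi_soil, ndvi_veg):
    """Two-endmember linear mixture retrieval of fraction of
    vegetation cover (FVC), clipped to [0, 1]."""
    f = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return max(0.0, min(1.0, f))

# True endmembers of a hypothetical scene vs the (biased) endmembers
# assumed during retrieval -- the mismatch drives the scaling effect.
true_soil, true_veg = 0.10, 0.85
assumed_soil, assumed_veg = 0.15, 0.80

true_fvc = 0.6
# NDVI of a mixed pixel under the linear mixture assumption:
pixel_ndvi = true_soil + true_fvc * (true_veg - true_soil)
retrieved = fvc_from_ndvi(pixel_ndvi, assumed_soil, assumed_veg)
```

With the true endmembers the retrieval is exact; with the assumed ones it is biased, and because the clipping makes the mapping nonlinear, spatial averaging and retrieval no longer commute across resolutions.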
Collective effects in models for interacting molecular motors and motor-microtubule mixtures
NASA Astrophysics Data System (ADS)
Menon, Gautam I.
2006-12-01
Three problems in the statistical mechanics of models for an assembly of molecular motors interacting with cytoskeletal filaments are reviewed. First, a description of the hydrodynamical behaviour of density-density correlations in fluctuating ratchet models for interacting molecular motors is outlined. Numerical evidence indicates that the scaling properties of dynamical behaviour in such models belong to the KPZ universality class. Second, the generalization of such models to include boundary injection and removal of motors is provided. In common with known results for the asymmetric exclusion processes, simulations indicate that such models exhibit sharp boundary driven phase transitions in the thermodynamic limit. In the third part of this paper, recent progress towards a continuum description of pattern formation in mixtures of motors and microtubules is described, and a non-equilibrium “phase-diagram” for such systems discussed.
Model of Fluidized Bed Containing Reacting Solids and Gases
NASA Technical Reports Server (NTRS)
Bellan, Josette; Lathouwers, Danny
2003-01-01
A mathematical model has been developed for describing the thermofluid dynamics of a dense, chemically reacting mixture of solid particles and gases. As used here, "dense" signifies having a large volume fraction of particles, as for example in a bubbling fluidized bed. The model is intended especially for application to fluidized beds that contain mixtures of carrier gases, biomass undergoing pyrolysis, and sand. So far, the design of fluidized beds and other gas/solid industrial processing equipment has been based on empirical correlations derived from laboratory- and pilot-scale units. The present mathematical model is a product of continuing efforts to develop a computational capability for optimizing the designs of fluidized beds and related equipment on the basis of first principles. Such a capability could eliminate the need for expensive, time-consuming predesign testing.
Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model
Ellefsen, Karl J.; Smith, David
2016-01-01
Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
The three-point function as a probe of models for large-scale structure
NASA Astrophysics Data System (ADS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-04-01
We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp is approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r is greater than or approximately Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
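The three-point amplitude S3 referred to above is commonly defined as the hierarchical skewness of the smoothed density contrast field, S3 = ⟨δ³⟩ / ⟨δ²⟩². A minimal sketch with toy density-contrast samples (the data are illustrative, not measured values):

```python
def s3_amplitude(delta):
    """Hierarchical skewness S3 = <d^3> / <d^2>^2 for a list of
    density contrasts delta (assumed to have zero mean)."""
    n = len(delta)
    m2 = sum(d ** 2 for d in delta) / n
    m3 = sum(d ** 3 for d in delta) / n
    return m3 / m2 ** 2

# Symmetric (Gaussian-like) fluctuations have S3 ~ 0; gravitational
# evolution skews the field toward positive S3.
symmetric = [-0.2, -0.1, 0.0, 0.1, 0.2]
skewed = [-0.2, -0.2, -0.1, 0.0, 0.5]
```

In practice S3 is estimated from counts in cells at a given smoothing scale, and its scale dependence is what discriminates true large-scale power from scale-dependent bias.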
DOT National Transportation Integrated Search
2012-07-01
With the use of supplementary cementing materials (SCMs) in concrete mixtures, salt scaling tests such as ASTM C672 have been found to be overly aggressive and do not correlate well with field scaling performance. The reasons for this are thought to be b...
ERIC Educational Resources Information Center
Duarte, B. P. M.; Coelho Pinheiro, M. N.; Silva, D. C. M.; Moura, M. J.
2006-01-01
The experiment described is an excellent opportunity to apply theoretical concepts of distillation, thermodynamics of mixtures and process simulation at laboratory scale, and simultaneously enhance the ability of students to operate, control and monitor complex units.
ASHEE: a compressible, Equilibrium-Eulerian model for volcanic ash plumes
NASA Astrophysics Data System (ADS)
Cerminara, M.; Esposti Ongaro, T.; Berselli, L. C.
2015-10-01
A new fluid-dynamic model is developed to numerically simulate the non-equilibrium dynamics of polydisperse gas-particle mixtures forming volcanic plumes. Starting from the three-dimensional N-phase Eulerian transport equations (Neri et al., 2003) for a mixture of gases and solid dispersed particles, we adopt an asymptotic expansion strategy to derive a compressible version of the first-order non-equilibrium model (Ferry and Balachandar, 2001), valid for low-concentration regimes (particle volume fraction less than 10-3) and particle Stokes numbers (St, i.e., the ratio between their relaxation time and the flow characteristic time) not exceeding about 0.2. The new model, which is called ASHEE (ASH Equilibrium Eulerian), is significantly faster than the N-phase Eulerian model while retaining the capability to describe gas-particle non-equilibrium effects. Direct numerical simulations accurately reproduce the dynamics of isotropic, compressible turbulence in the subsonic regime. For gas-particle mixtures, the model describes the main features of density fluctuations and the preferential concentration and clustering of particles by turbulence, thus verifying its reliability and suitability for the numerical simulation of high-Reynolds-number and high-temperature regimes in the presence of a dispersed phase. On the other hand, Large-Eddy Simulations of forced plumes are able to reproduce their observed averaged and instantaneous flow properties. In particular, the self-similar Gaussian radial profile and the development of large-scale coherent structures are reproduced, including the rate of turbulent mixing and entrainment of atmospheric air. Application to the Large-Eddy Simulation of the injection of the eruptive mixture in a stratified atmosphere describes some of the important features of turbulent volcanic plumes, including air entrainment, buoyancy reversal, and maximum plume height. 
For very fine particles (St → 0, when non-equilibrium effects are negligible) the model reduces to the so-called dusty-gas model. However, coarse particles partially decouple from the gas phase within eddies (thus modifying the turbulent structure) and preferentially concentrate at the eddy periphery, eventually being lost from the plume margins due to the concurrent effect of gravity. By these mechanisms, gas-particle non-equilibrium processes are able to influence the large-scale behavior of volcanic plumes.
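The validity regime quoted above (St ≲ 0.2, particle volume fraction below 10⁻³) can be checked directly. The sketch below uses the standard Stokes-drag relaxation time τ_p = ρ_p d² / (18 μ), which is an assumption of this illustration, and hypothetical ash properties.

```python
def stokes_number(rho_p, d, mu, tau_flow):
    """Particle Stokes number St = tau_p / tau_flow, with the
    Stokes-drag relaxation time tau_p = rho_p * d**2 / (18 * mu)."""
    tau_p = rho_p * d ** 2 / (18.0 * mu)
    return tau_p / tau_flow

def equilibrium_eulerian_valid(st, volume_fraction):
    """Validity regime quoted for the ASHEE-type model."""
    return st <= 0.2 and volume_fraction < 1e-3

# Hypothetical fine ash: density 2500 kg/m^3, 30 micron diameter,
# hot-gas viscosity ~4e-5 Pa s, eddy turnover time ~0.1 s.
st = stokes_number(2500.0, 30e-6, 4e-5, 0.1)
ok = equilibrium_eulerian_valid(st, 1e-4)
```

Since τ_p grows with the square of the diameter, a tenfold coarser particle pushes St past the limit, consistent with the partial decoupling of coarse particles described above.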
Automated deconvolution of structured mixtures from heterogeneous tumor genomic data
Roman, Theodore; Xie, Lu
2017-01-01
With increasing appreciation for the extent and importance of intratumor heterogeneity, much attention in cancer research has focused on profiling heterogeneity on a single patient level. Although true single-cell genomic technologies are rapidly improving, they remain too noisy and costly at present for population-level studies. Bulk sequencing remains the standard for population-scale tumor genomics, creating a need for computational tools to separate contributions of multiple tumor clones and assorted stromal and infiltrating cell populations to pooled genomic data. All such methods are limited to coarse approximations of only a few cell subpopulations, however. In prior work, we demonstrated the feasibility of improving cell type deconvolution by taking advantage of substructure in genomic mixtures via a strategy called simplicial complex unmixing. We improve on past work by introducing enhancements to automate learning of substructured genomic mixtures, with specific emphasis on genome-wide copy number variation (CNV) data, as well as the ability to process quantitative RNA expression data, and heterogeneous combinations of RNA and CNV data. We introduce methods for dimensionality estimation to better decompose mixture model substructure; fuzzy clustering to better identify substructure in sparse, noisy data; and automated model inference methods for other key model parameters. We further demonstrate their effectiveness in identifying mixture substructure in true breast cancer CNV data from the Cancer Genome Atlas (TCGA). Source code is available at https://github.com/tedroman/WSCUnmix PMID:29059177
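In the simplest two-population case, the deconvolution problem above reduces to estimating a single mixing fraction by least squares. This is a drastic simplification of simplicial complex unmixing, shown only to convey the idea; the signatures below are invented.

```python
def unmix_two(mixed, comp_a, comp_b):
    """Estimate the fraction f of component A in a mixed profile
    m ~ f*A + (1-f)*B by least squares, clipped to [0, 1].
    Closed form: f = <m - B, A - B> / ||A - B||^2."""
    num = sum((m - b) * (a - b)
              for m, a, b in zip(mixed, comp_a, comp_b))
    den = sum((a - b) ** 2 for a, b in zip(comp_a, comp_b))
    return max(0.0, min(1.0, num / den))

# Hypothetical copy-number signatures of two cell populations and a
# 70/30 bulk mixture of them.
a = [2.0, 3.0, 1.0, 2.0]
b = [2.0, 2.0, 2.0, 4.0]
bulk = [0.7 * x + 0.3 * y for x, y in zip(a, b)]
f = unmix_two(bulk, a, b)
```

Real tumor data involve many unknown components, noise, and unknown signatures, which is what the substructure, fuzzy clustering, and dimensionality-estimation machinery of the paper addresses.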
Prediction of Agglomeration, Fouling, and Corrosion Tendency of Fuels in CFB Co-Combustion
NASA Astrophysics Data System (ADS)
Barišić, Vesna; Zabetta, Edgardo Coda; Sarkki, Juha
Prediction of the agglomeration, fouling, and corrosion tendency of fuels is essential to the design of any CFB boiler. Over the years, tools have been successfully developed at Foster Wheeler to help with such predictions for most commercial fuels. However, changes in the fuel market and the ever-growing demand for co-combustion capabilities pose a continuous need for development. This paper presents results from recently upgraded models used at Foster Wheeler to predict the agglomeration, fouling, and corrosion tendency of a variety of fuels and mixtures. The models, the subject of this paper, are semi-empirical computer tools that combine the theoretical basics of agglomeration/fouling/corrosion phenomena with empirical correlations. Correlations are derived from Foster Wheeler's experience in fluidized beds, including nearly 10,000 fuel samples and over 1,000 tests in about 150 CFB units. In these models, fuels are evaluated based on their classification and their chemical and physical properties from standard analyses (proximate, ultimate, fuel ash composition, etc.) alongside Foster Wheeler's own characterization methods. Mixtures are then evaluated taking into account the component fuels. This paper presents the predictive capabilities of the agglomeration/fouling/corrosion probability models for selected fuels and mixtures fired at full scale. The selected fuels include coals and different types of biomass. The models are capable of predicting the behavior of most fuels and mixtures, and also offer possibilities for further improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Dhongik S; Jo, HangJin; Corradini, Michael L
2017-04-01
Condensation of steam vapor is an important mode of energy removal from the reactor containment. The presence of noncondensable gas complicates the process and makes it difficult to model. MELCOR, one of the more widely used system codes for containment analyses, uses the heat and mass transfer analogy to model condensation heat transfer. To investigate the previously reported nodalization dependence in the natural convection flow regime, the MELCOR condensation model, as well as other models, is studied. The nodalization-dependence issue is resolved by using the physical length from the actual geometry, rather than the node size of each control volume, as the characteristic length scale for MELCOR containment analyses. At the transition to the turbulent natural convection regime, the McAdams correlation for convective heat transfer produces a better prediction than the original MELCOR model. The McAdams correlation is implemented in MELCOR and the prediction is validated against a set of experiments on a scaled AP600 containment. MELCOR with our implemented model produces improved predictions. For steam molar fractions in the gas mixture greater than about 0.58, the predictions are within the uncertainty margin of the measurements. The simulation results still underestimate the heat transfer from the gas-steam mixture, implying that the predictions are conservative.
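The McAdams correlation mentioned above is commonly written Nu = 0.13 Ra^(1/3) for turbulent natural convection; the 0.13 coefficient and the property values below are assumptions of this sketch, not values from the study.

```python
def mcadams_turbulent_nu(ra):
    """McAdams correlation for turbulent natural convection,
    Nu = 0.13 * Ra**(1/3). The coefficient varies slightly between
    sources; 0.13 is assumed here."""
    return 0.13 * ra ** (1.0 / 3.0)

def heat_transfer_coeff(ra, k, length):
    """h = Nu * k / L, with L taken as the physical geometry length
    (not the control-volume node size, per the nodalization fix
    described above)."""
    return mcadams_turbulent_nu(ra) * k / length

# Hypothetical containment-wall conditions: Ra ~ 1e10, humid-air
# conductivity ~0.026 W/(m K), wall height 3 m.
h = heat_transfer_coeff(ra=1e10, k=0.026, length=3.0)
```

A useful property of the one-third power law is that, since Ra scales as L³, the resulting h is insensitive to the choice of characteristic length, which fits the turbulent regime where the correlation applies.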
Petit, Pascal; Maître, Anne; Persoons, Renaud; Bicout, Dominique J
2017-04-15
The health risk assessment associated with polycyclic aromatic hydrocarbon (PAH) mixtures faces three main issues: the lack of knowledge regarding occupational exposure mixtures, accurate chemical characterization, and the estimation of cancer risks. The objectives were to describe industries in which PAH exposures are encountered and to construct working context-exposure function matrices, enabling the estimation of both the expected PAH exposure level and the chemical characteristic profile of workers based on their occupational sector and activity. Overall, 1729 PAH samplings from the Exporisq-HAP database (E-HAP) were used. An approach was developed to (i) organize E-HAP in terms of the most detailed unit of description of a job and (ii) structure and subdivide the organized E-HAP into groups of detailed industry units, with each group described by the distribution of concentrations of gaseous and particulate PAHs, resulting in working context-exposure function matrices. PAH exposures were described using two scales: phase (total particulate and gaseous PAH distribution concentrations) and congener (16 congener PAH distribution concentrations). Nine industrial sectors were organized, according to the exposure durations (short-term, mid-term and long-term), into 5, 36 and 47 detailed industry units, which were structured, respectively, into 2, 4, and 7 groups for the phase scale and 2, 3, and 6 groups for the congener scale, corresponding to as many distinct distributions of concentrations of several PAHs. For the congener scale, in groups that used products derived from coal, the correlations between the PAHs were strong; in groups that used products derived from petroleum, all PAHs in the mixtures were poorly correlated with each other. 
The current findings provide insights into both the PAH emissions generated by various industrial processes and their associated occupational exposures and may be further used to develop risk assessment analyses of cancers associated with PAH mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
Bertelkamp, C; Verliefde, A R D; Reynisson, J; Singhal, N; Cabo, A J; de Jonge, M; van der Hoek, J P
2016-03-05
This study investigated relationships between OMP biodegradation rates and the functional groups present in the chemical structures of a mixture of 31 OMPs. OMP biodegradation rates were determined from lab-scale columns filled with soil from the RBF site Engelse Werk of the drinking water company Vitens in The Netherlands. A statistically significant relationship was found between OMP biodegradation rates and the functional groups of the molecular structures of the OMPs in the mixture. The OMP biodegradation rate increased in the presence of carboxylic acids, hydroxyl groups, and carbonyl groups, but decreased in the presence of ethers, halogens, aliphatic ethers, methyl groups and ring structures in the chemical structure of the OMPs. The predictive model obtained from the lab-scale soil column experiment gave an accurate qualitative prediction of biodegradability for approximately 70% of the OMPs monitored in the field (80% excluding the glymes). The model was found to be less reliable for the more persistent OMPs (OMPs with predicted biodegradation rates lower than or around the standard error = 0.77 d(-1)) and OMPs containing amide or amine groups. These OMPs should be carefully monitored in the field to determine their removal during RBF. Copyright © 2015 Elsevier B.V. All rights reserved.
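A model of this kind amounts to a group-contribution linear regression: a baseline rate plus a signed coefficient per functional group. The sketch below follows the directions of effect reported above (acids, hydroxyls and carbonyls increase the rate; ethers, halogens, methyls and rings decrease it), but the baseline and coefficient magnitudes are invented for illustration.

```python
# Hypothetical group-contribution coefficients (d^-1 per group).
# Signs follow the reported directions; magnitudes are assumed.
COEFFS = {
    "carboxylic_acid": +0.9,
    "hydroxyl": +0.5,
    "carbonyl": +0.4,
    "ether": -0.3,
    "halogen": -0.6,
    "methyl": -0.2,
    "ring": -0.4,
}
BASELINE = 1.0  # hypothetical intercept, d^-1

def predicted_rate(groups):
    """Predicted first-order biodegradation rate (d^-1) from counts
    of functional groups, floored at zero."""
    rate = BASELINE + sum(COEFFS[g] * n for g, n in groups.items())
    return max(0.0, rate)

# An OMP rich in acid/hydroxyl groups vs a halogenated ring compound.
fast = predicted_rate({"carboxylic_acid": 1, "hydroxyl": 2})
slow = predicted_rate({"halogen": 2, "ring": 2})
```

The flooring at zero mirrors the paper's caveat that predictions near or below the standard error are unreliable for persistent compounds.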
NASA Astrophysics Data System (ADS)
Qiu, Lei; Yuan, Shenfang; Bao, Qiao; Mei, Hanfei; Ren, Yuanqiang
2016-05-01
For aerospace application of structural health monitoring (SHM) technology, the problem of reliable damage monitoring under time-varying conditions must be addressed, and the SHM technology has to be fully validated on real aircraft structures under realistic load conditions on the ground before it can reach the status of flight test. In this paper, the guided wave (GW) based SHM method is applied to a full-scale aircraft fatigue test, one of the test conditions most similar to flight testing. To deal with the time-varying problem, a GW-Gaussian mixture model (GW-GMM) is proposed. The probability characteristics of GW features introduced by time-varying conditions are modeled by the GW-GMM. The weak cumulative variation trend of crack propagation, which is mixed with the time-varying influence, can be tracked by the GW-GMM migration during the on-line damage monitoring process. A best-match-based Kullback-Leibler divergence is proposed to measure the GW-GMM migration degree and thus reveal crack propagation. The method is validated in the full-scale aircraft fatigue test. The validation results indicate that reliable crack propagation monitoring of the left landing gear spar and the right wing panel under realistic load conditions is achieved.
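Tracking GMM migration requires a divergence between mixtures. The KL divergence between two Gaussians has a closed form, and a best-match aggregation over components gives a cheap mixture-level measure. The sketch below is a simplified one-dimensional stand-in for the paper's measure; the mixtures shown are invented.

```python
import math

def kl_gauss(m0, s0, m1, s1):
    """Closed-form KL( N(m0, s0^2) || N(m1, s1^2) )."""
    return (math.log(s1 / s0)
            + (s0 ** 2 + (m0 - m1) ** 2) / (2 * s1 ** 2) - 0.5)

def best_match_kl(gmm_a, gmm_b):
    """Weighted best-match divergence between two 1-D GMMs, each a
    list of (weight, mean, std): every component of A is matched to
    its closest component of B. A simplified stand-in for the
    best-match measure proposed above."""
    return sum(w * min(kl_gauss(m, s, m2, s2) for _, m2, s2 in gmm_b)
               for w, m, s in gmm_a)

# Baseline feature distribution vs a "migrated" one in which one
# component has drifted, mimicking cumulative crack growth.
baseline = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]
migrated = [(0.5, 0.6, 1.0), (0.5, 5.0, 1.0)]
```

A growing best-match divergence against the baseline GMM is the kind of cumulative trend the method uses to separate crack growth from benign time-varying scatter.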
Modeling quiescent phase transport of air bubbles induced by breaking waves
NASA Astrophysics Data System (ADS)
Shi, Fengyan; Kirby, James T.; Ma, Gangfeng
Simultaneous modeling of both the acoustic phase and quiescent phase of breaking wave-induced air bubbles involves a large range of length scales, from microns to meters, and time scales, from milliseconds to seconds, and is thus computationally unaffordable in a surfzone-scale computational domain. In this study, we use an air bubble entrainment formula in a two-fluid model to predict air bubble evolution in the quiescent phase of a breaking wave event. The breaking wave-induced air bubble entrainment is formulated by connecting the shear production at the air-water interface and the bubble number intensity with a certain bubble size spectrum observed in laboratory experiments. A two-fluid model is developed based on the partial differential equations of the gas-liquid mixture phase and the continuum bubble phase, which comprises multiple bubble size groups representing a polydisperse bubble population. An enhanced 2-DV VOF (Volume of Fluid) model with a k-ε turbulence closure is used to model the mixture phase. The bubble phase is governed by the advection-diffusion equations of the gas molar concentration and bubble intensity for groups of bubbles with different sizes. The model is used to simulate air bubble plumes measured in laboratory experiments. Numerical results indicate that, with an appropriate parameter in the air entrainment formula, the model is able to predict the main features of bubbly flows, as evidenced by reasonable agreement with measured void fraction. Bubbles larger than an intermediate radius of O(1 mm) make a major contribution to the void fraction in the near-crest region. Smaller bubbles tend to penetrate deeper and stay longer in the water column, contributing significantly to the cross-sectional area of the bubble cloud. An underprediction of void fraction is found at the beginning of wave breaking, when large air pockets occur. 
The core region of high void fraction predicted by the model is mislocated because shear production is used in the algorithm for initial bubble entrainment. The study demonstrates a potential use of an entrainment formula in simulations of the air bubble population in a surfzone-scale domain. It also reveals difficulties in using the two-fluid model to predict large air pockets induced by wave breaking, and suggests that it may be necessary to use a gas-liquid two-phase model as the basic framework for the mixture phase and to develop an algorithm allowing transfer of discrete air pockets to the continuum bubble phase. A more theoretically justifiable air entrainment formulation should also be developed.
Scaling relation for high-temperature biodiesel surrogate ignition delay times
Campbell, Matthew F.; Davidson, David F.; Hanson, Ronald K.
2015-10-11
High-temperature Arrhenius ignition delay time correlations are useful for revealing the underlying parameter dependencies of combustion models, for simplifying and optimizing combustion mechanisms for use in engine simulations, for scaling experimental data to new conditions for comparison purposes, and for guiding experimental design. Here, we have developed a scaling relationship for Fatty Acid Methyl Ester (FAME) ignition delay time data taken at high temperatures in 4% O2/Ar mixtures behind reflected shocks using an aerosol shock tube: $$\tau_{ign}\,[\mathrm{ms}] = 2.24 \times 10^{-6}\,[\mathrm{ms}]\,(P\,[\mathrm{atm}])^{-0.41}\,\phi^{0.30}\,(C_n)^{-0.61} \exp\left(\frac{37.1\,[\mathrm{kcal/mol}]}{\hat{R}_u\,[\mathrm{kcal/(mol\,K)}]\,T\,[\mathrm{K}]}\right)$$ In addition, we have combined our ignition delay time data for methyl decanoate, methyl palmitate, methyl oleate, and methyl linoleate with other experimental results in the literature in order to derive fuel-specific oxygen-mole-fraction scaling parameters for these surrogates. In conclusion, we discuss the significance of the parameter values, compare our correlation to others found in the literature for different classes of fuels, and contrast the above expression's performance with correlations obtained using leading FAME kinetic models in 4% O2/Ar mixtures.
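The correlation above can be evaluated directly. A minimal sketch: the numerical constants are transcribed from the abstract, while the example conditions (pressure, equivalence ratio, carbon number, temperature) are illustrative choices of ours, not values from the paper.

```python
import math

def tau_ign_ms(P_atm, phi, n_carbon, T_K):
    """Ignition delay time [ms] from the FAME correlation quoted above.

    Constants are transcribed from the abstract; R_u is the universal
    gas constant in kcal/(mol K).
    """
    R_u = 1.987e-3  # kcal/(mol K)
    return (2.24e-6 * P_atm**-0.41 * phi**0.30 * n_carbon**-0.61
            * math.exp(37.1 / (R_u * T_K)))

# Illustrative conditions: a C11 methyl ester at 7 atm, phi = 1, 1300 K
print(tau_ign_ms(7.0, 1.0, 11, 1300.0))
```

As expected for an Arrhenius form with a positive activation energy, the predicted delay shortens with increasing temperature, pressure, and carbon number.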
Saleem, Muhammad; Sharif, Kashif; Fahmi, Aliya
2018-04-27
Applications of the Pareto distribution are common in reliability, survival, and financial studies. In this paper, a Pareto mixture distribution is considered to model a heterogeneous population comprising two subgroups. Each subgroup is characterized by the same functional form with distinct unknown shape and scale parameters. Bayes estimators have been derived under flat and conjugate priors using a squared-error loss function. Standard errors have also been derived for the Bayes estimators. An interesting feature of this study is the derivation of the components of the Fisher information matrix.
Numerical analysis of similarity of barrier discharges in the 0.95 Ne/0.05 Xe mixture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avtaeva, S. V.; Kulumbaev, E. B.
2009-04-15
Established dynamic regimes of similar (with a scale factor of 10) barrier discharges in the 0.95 Ne/0.05 Xe mixture are simulated with a one-dimensional drift-diffusion model. Similarity is examined for barrier discharges excited in gaps of 0.4 and 4 mm length at gas pressures of 350 and 35 Torr and dielectric layer thicknesses of 0.2 and 2 mm, the frequencies of the 400-V ac voltage applied to the discharge electrodes being 100 and 10 kHz, respectively.
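Assuming the standard pd-similarity laws for gas discharges (pressure and frequency scale inversely with the gap, so the products p·d and f·d are invariant), the two quoted parameter sets can be checked for consistency. The parameter names below are ours; this is a sketch of the scaling check, not the authors' simulation.

```python
# The two "similar" discharges quoted above (scale factor 10).
small = {"gap_mm": 0.4, "p_torr": 350.0, "dielectric_mm": 0.2, "f_khz": 100.0}
large = {"gap_mm": 4.0,  "p_torr": 35.0,  "dielectric_mm": 2.0,  "f_khz": 10.0}

pd_small = small["p_torr"] * small["gap_mm"]   # Torr mm
pd_large = large["p_torr"] * large["gap_mm"]
fd_small = small["f_khz"] * small["gap_mm"]    # kHz mm
fd_large = large["f_khz"] * large["gap_mm"]
print(pd_small, pd_large, fd_small, fd_large)
```

Both invariants agree (140 Torr·mm and 40 kHz·mm), while the applied voltage amplitude (400 V) is kept the same, as similarity requires.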
Personna, Yves Robert; Slater, Lee; Ntarlagiannis, Dimitrios; Werkema, Dale D.; Szabo, Zoltan
2013-01-01
Ethanol (EtOH), an emerging contaminant with potential direct and indirect environmental effects, poses threats to water supplies when spilled in large volumes. A series of experiments was directed at understanding the electrical geophysical signatures arising from groundwater contamination by ethanol. Conductivity measurements were performed at the laboratory scale on EtOH–water mixtures (0 to 0.97 v/v EtOH) and EtOH–salt solution mixtures (0 to 0.99 v/v EtOH), with and without a sand matrix, using a conductivity probe and a four-electrode electrical measurement over the low-frequency range (1–1000 Hz). A Lichtenecker–Rother (L–R) type mixing model was used to simulate electrical conductivity as a function of EtOH concentration in the mixture. For all three experimental treatments, increasing EtOH concentration resulted in a decrease in measured conductivity magnitude (|σ|). The applied L–R model fitted the experimental data at concentrations ≤ 0.4 v/v EtOH, presumably due to predominant and symmetric intermolecular (EtOH–water) interactions in the mixture. The deviation of the experimental |σ| data from the model prediction at higher EtOH concentrations may be associated with hydrophobic effects of EtOH–EtOH interactions in the mixture. The |σ| data presumably reflected changes in the relative strength of the three types of interactions (water–water, EtOH–water, and EtOH–EtOH) occurring simultaneously in EtOH–water mixtures as the ratio of EtOH to water changed. No evidence of measurable polarization effects at the EtOH–water and EtOH–water–mineral interfaces was found over the investigated frequency range. Our results indicate the potential for using electrical measurements to characterize and monitor EtOH spills in the subsurface.
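A Lichtenecker–Rother-type mixing law is commonly written as a power-law average, σ_eff = (Σᵢ φᵢ σᵢ^α)^(1/α). The sketch below assumes that form; the exponent α and the phase conductivities are illustrative placeholders, not the values fitted in the study.

```python
import numpy as np

def lr_conductivity(phi, sigma, alpha=0.5):
    """Lichtenecker-Rother-type effective conductivity of a mixture.

    phi:   volume fractions (should sum to 1)
    sigma: phase conductivities [S/m]
    alpha: empirical exponent in (-1, 1]; 0.5 here is illustrative.
    """
    phi = np.asarray(phi, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return float(np.sum(phi * sigma**alpha) ** (1.0 / alpha))

# Assumed phase conductivities [S/m] for a salt solution and (nearly
# non-conducting) ethanol; values are illustrative only.
sigma_water, sigma_etoh = 0.1, 1e-6
for v_etoh in (0.0, 0.2, 0.4):
    print(v_etoh, lr_conductivity([v_etoh, 1 - v_etoh],
                                  [sigma_etoh, sigma_water]))
```

Consistent with the abstract, the effective conductivity decreases monotonically as the ethanol fraction rises.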
Interfacial tension and vapor-liquid equilibria in the critical region of mixtures
NASA Technical Reports Server (NTRS)
Moldover, Michael R.; Rainwater, James C.
1988-01-01
In the critical region, the concept of two-scale-factor universality can be used to accurately predict the surface tension between near-critical vapor and liquid phases from the singularity in the thermodynamic properties of the bulk fluid. In the present work, this idea is generalized to binary mixtures and is illustrated using the data of Hsu et al. (1985) for CO2 + n-butane. The pressure-temperature-composition-density data for coexisting, near-critical phases of the mixtures are fitted with a thermodynamic potential composed of the sum of a singular term and nonsingular terms. The nonuniversal amplitudes characterizing the singular term for the mixtures are obtained from the amplitudes for the pure components by interpolation in a space of thermodynamic 'field' variables. The interfacial tensions predicted for the mixtures from the singular term are within 10 percent of the data on three isotherms in the pressure range (Pc - P)/Pc of less than 0.5. This difference is comparable to the combined experimental and model errors.
Simulations of Flame Acceleration and Deflagration-to-Detonation Transitions in Methane-Air Systems
2010-03-17
are neglected. 3. Model parameter calibration. The one-step Arrhenius kinetics used in this model cannot exactly reproduce all properties of laminar...with obstacles are compared to previously reported experimental data. The results obtained using the simple reaction model qualitatively, and in...have taken in developing a multidimensional numerical model to study explosions in large-scale systems containing mixtures of natural gas and air
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time consuming and can be error-prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa in which multiple age cohorts coexist sympatrically. Where applicable, this method of aging individual fish is relatively quick to implement and avoids the ager interpretation bias common in scale-based aging.
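The threshold idea can be sketched with a two-component one-dimensional Gaussian mixture fit by EM on synthetic fork lengths. The cohort means, spreads, and sample sizes below are invented for illustration; the study's thresholds came from field data, and its models were fit with standard mixture software.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic fork lengths [mm] for two age cohorts (illustrative values only)
lengths = np.concatenate([rng.normal(55, 5, 300),   # "age-0" cohort
                          rng.normal(85, 7, 200)])  # "age-1" cohort

# EM for a two-component 1-D Gaussian mixture
w = np.array([0.5, 0.5]); mu = np.array([50.0, 90.0]); sd = np.array([10.0, 10.0])
for _ in range(200):
    # E-step: per-point responsibilities under each component
    pdf = (w / (sd * np.sqrt(2 * np.pi))
           * np.exp(-0.5 * ((lengths[:, None] - mu) / sd) ** 2))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights, means, standard deviations
    n = r.sum(axis=0)
    w = n / len(lengths)
    mu = (r * lengths[:, None]).sum(axis=0) / n
    sd = np.sqrt((r * (lengths[:, None] - mu) ** 2).sum(axis=0) / n)

# Age-discriminating threshold: length where the two weighted densities cross
grid = np.linspace(mu[0], mu[1], 10001)
dens = (w / (sd * np.sqrt(2 * np.pi))
        * np.exp(-0.5 * ((grid[:, None] - mu) / sd) ** 2))
threshold = grid[np.argmin(np.abs(dens[:, 0] - dens[:, 1]))]
print(round(float(threshold), 1))
```

Fish shorter than the crossing point are assigned to the younger cohort, longer fish to the older one, which is exactly the kind of length rule the abstract validates against scale-based ages.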
Irreversible opinion spreading on scale-free networks
NASA Astrophysics Data System (ADS)
Candia, Julián
2007-02-01
We study the dynamical and critical behavior of a model for irreversible opinion spreading on Barabási-Albert (BA) scale-free networks by performing extensive Monte Carlo simulations. The opinion spreading within an inhomogeneous society is investigated by means of the magnetic Eden model, a nonequilibrium kinetic model for the growth of binary mixtures in contact with a thermal bath. The deposition dynamics, which is studied as a function of the degree of the occupied sites, shows evidence for the leading role played by hubs in the growth process. Systems of finite size grow either ordered or disordered, depending on the temperature. By means of standard finite-size scaling procedures, the effective order-disorder phase transitions are found to persist in the thermodynamic limit. This critical behavior, however, is absent in related equilibrium spin systems such as the Ising model on BA scale-free networks, which in the thermodynamic limit only displays a ferromagnetic phase. The dependence of these results on the degree exponent is also discussed for the case of uncorrelated scale-free networks.
NASA Astrophysics Data System (ADS)
Gulliver, Eric A.
The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed-particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations.
Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent-quality, high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1993-01-01
The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
NASA Astrophysics Data System (ADS)
Hung, Yichen; Winters, Caroline; Jans, Elijah R.; Frederickson, Kraig; Adamovich, Igor V.
2017-06-01
This work presents time-resolved measurements of nitrogen vibrational temperature, translational-rotational temperature, and absolute OH number density in lean hydrogen-air mixtures excited in a diffuse filament nanosecond pulse discharge, at a pressure of 100 Torr and high specific energy loading. The main objective of these measurements is to study a possible effect of nitrogen vibrational excitation on low-temperature kinetics of HO2 and OH radicals. N2 vibrational temperature and gas temperature in the discharge and the afterglow are measured by ns broadband Coherent Anti-Stokes Raman Scattering (CARS). Hydroxyl radical number density is measured by Laser Induced Fluorescence (LIF) calibrated by Rayleigh scattering. The results show that the discharge generates strong vibrational nonequilibrium in air and H2-air mixtures for delay times after the discharge pulse of up to 1 ms, with peak vibrational temperature of Tv ≈ 2000 K at T ≈ 500 K. Nitrogen vibrational temperature peaks ≈ 200 μs after the discharge pulse, before decreasing due to vibrational-translational relaxation by O atoms (on the time scale of a few hundred μs) and diffusion (on the ms time scale). OH number density increases gradually after the discharge pulse, peaking at t ≈ 100-300 μs and decaying on a longer time scale, until t ≈ 1 ms. Both the OH rise time and decay time decrease as the H2 fraction in the mixture is increased from 1% to 5%. OH number density in a 1% H2-air mixture peaks at approximately the same time as vibrational temperature in air, suggesting that OH kinetics may be affected by N2 vibrational excitation. However, preliminary kinetic modeling calculations demonstrate that the OH number density overshoot is controlled by known reactions of H and O radicals generated in the plasma, rather than by dissociation of the HO2 radical in collisions with vibrationally excited N2 molecules, as has been suggested earlier.
Additional measurements at higher specific energy loadings and kinetic modeling calculations are underway.
A Volume-Fraction Based Two-Phase Constitutive Model for Blood
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Rui; Massoudi, Mehrdad; Hund, S.J.
2008-06-01
Mechanically-induced blood trauma such as hemolysis and thrombosis often occurs at microscopic channels, steps, and crevices within cardiovascular devices. A predictive mathematical model based on a broad understanding of hemodynamics at the micro scale is needed to mitigate these effects, and is the motivation of this research project. Platelet transport and surface deposition are important in thrombosis. Microfluidic experiments have previously revealed a significant impact of red blood cell (RBC)-plasma phase separation on platelet transport [5], whereby localized platelet concentration can be enhanced by a non-uniform distribution of RBCs in blood flow through a capillary tube and sudden expansion. However, current platelet deposition models have either ignored RBCs in the fluid entirely, by assuming a zero sample hematocrit, or treated them as evenly distributed. As a result, those models often underestimated platelet advection and deposition to certain areas [2]. The current study aims to develop a two-phase blood constitutive model that can predict phase separation in an RBC-plasma mixture at the micro scale. The model is based on the theory of interacting continua, i.e., mixture theory. The volume fraction is treated as a field variable in this model, which allows the prediction of concentration as well as velocity profiles of both the RBC and plasma phases. The results will be used as input to successive platelet deposition models.
Wang, Yunpeng; Thompson, Wesley K.; Schork, Andrew J.; Holland, Dominic; Chen, Chi-Hua; Bettella, Francesco; Desikan, Rahul S.; Li, Wen; Witoelar, Aree; Zuber, Verena; Devor, Anna; Nöthen, Markus M.; Rietschel, Marcella; Chen, Qiang; Werge, Thomas; Cichon, Sven; Weinberger, Daniel R.; Djurovic, Srdjan; O’Donovan, Michael; Visscher, Peter M.; Andreassen, Ole A.; Dale, Anders M.
2016-01-01
Most of the genetic architecture of schizophrenia (SCZ) has not yet been identified. Here, we apply a novel statistical algorithm called Covariate-Modulated Mixture Modeling (CM3), which incorporates auxiliary information (heterozygosity, total linkage disequilibrium, genomic annotations, pleiotropy) for each single nucleotide polymorphism (SNP) to enable more accurate estimation of replication probabilities, conditional on the observed test statistic (“z-score”) of the SNP. We use a multiple logistic regression on z-scores to combine the auxiliary information into a “relative enrichment score” for each SNP. For each stratum of these relative enrichment scores, we obtain nonparametric estimates of posterior expected test statistics and replication probabilities as a function of discovery z-scores, using a resampling-based approach that repeatedly and randomly partitions meta-analysis sub-studies into training and replication samples. We fit a scale mixture of two Gaussians model to each stratum, obtaining parameter estimates that minimize the sum of squared differences between the scale-mixture model and the stratified nonparametric estimates. We apply this approach to the recent genome-wide association study (GWAS) of SCZ (n = 82,315), obtaining a good fit between the model-based and observed effect sizes and replication probabilities. We observed that SNPs with low enrichment scores replicate with a lower probability than SNPs with high enrichment scores, even when both are genome-wide significant (p < 5x10-8). There were 693 and 219 independent loci with model-based replication rates ≥80% and ≥90%, respectively. Compared to analyses not incorporating relative enrichment scores, CM3 increased the out-of-sample yield for SNPs that replicate at a given rate. This demonstrates that replication probabilities can be estimated more accurately using prior enrichment information with CM3. PMID:26808560
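The "scale mixture of two Gaussians" ingredient can be sketched with an EM fit of two zero-mean Gaussians with different scales to synthetic z-scores. Note this is a toy: CM3 instead matches the mixture to stratified nonparametric estimates by least squares, and the sample sizes, weights, and scales below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic z-scores: a large null component (unit scale) plus a small
# non-null component with inflated scale (illustrative proportions)
z = np.concatenate([rng.normal(0, 1.0, 9000), rng.normal(0, 3.0, 1000)])

# EM for a scale mixture of two zero-mean Gaussians
w, s = np.array([0.5, 0.5]), np.array([1.5, 2.5])
for _ in range(300):
    # E-step: responsibilities under each scale component
    pdf = w / (s * np.sqrt(2 * np.pi)) * np.exp(-0.5 * (z[:, None] / s) ** 2)
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights and scales (means are fixed at zero)
    n = r.sum(axis=0)
    w = n / len(z)
    s = np.sqrt((r * z[:, None] ** 2).sum(axis=0) / n)

print(w.round(2), s.round(2))
```

The fitted narrow component approximates the null distribution of test statistics, while the wide component captures the enriched, potentially replicating SNPs.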
NASA Astrophysics Data System (ADS)
Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.
2018-06-01
We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ˜10^4-10^5 Mpc^3, corresponding to ˜10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ∝ ϱ_net^-6. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
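The Dirichlet process prior underlying such a mixture can be illustrated with a truncated stick-breaking draw of mixture weights. This is a generic sketch, not the authors' inference code; the concentration parameter alpha and the truncation level are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def stick_breaking_weights(alpha, n_max, rng):
    """Truncated stick-breaking draw of Dirichlet-process mixture weights.

    Each beta_k ~ Beta(1, alpha) takes a fraction of the remaining stick;
    the weights decay stochastically, so only a few components matter.
    """
    betas = rng.beta(1.0, alpha, size=n_max)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

w = stick_breaking_weights(alpha=1.0, n_max=50, rng=rng)
print(len(w), float(w.sum()))
```

In a full Dirichlet process Gaussian-mixture model, each weight is paired with Gaussian parameters drawn from a base distribution, and posterior inference effectively learns how many components the data support.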
The Manhattan Frame Model-Manhattan World Inference in the Space of Surface Normals.
Straub, Julian; Freifeld, Oren; Rosman, Guy; Leonard, John J; Fisher, John W
2018-01-01
Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches utilize these regularities via the restrictive, and rather local, Manhattan World (MW) assumption which posits that every plane is perpendicular to one of the axes of a single coordinate system. The aforementioned regularities are especially evident in the surface normal distribution of a scene where they manifest as orthogonally-coupled clusters. This motivates the introduction of the Manhattan-Frame (MF) model which captures the notion of an MW in the surface normals space, the unit sphere, and two probabilistic MF models over this space. First, for a single MF we propose novel real-time MAP inference algorithms, evaluate their performance and their use in drift-free rotation estimation. Second, to capture the complexity of real-world scenes at a global scale, we extend the MF model to a probabilistic mixture of Manhattan Frames (MMF). For MMF inference we propose a simple MAP inference algorithm and an adaptive Markov-Chain Monte-Carlo sampling algorithm with Metropolis-Hastings split/merge moves that let us infer the unknown number of mixture components. We demonstrate the versatility of the MMF model and inference algorithm across several scales of man-made environments.
High-Throughput Analysis of Ovarian Cycle Disruption by Mixtures of Aromatase Inhibitors
Golbamaki-Bakhtyari, Nazanin; Kovarich, Simona; Tebby, Cleo; Gabb, Henry A.; Lemazurier, Emmanuel
2017-01-01
Background: Combining computational toxicology with ExpoCast exposure estimates and ToxCast™ assay data gives us access to predictions of human health risks stemming from exposures to chemical mixtures. Objectives: We explored, through mathematical modeling and simulations, the size of potential effects of random mixtures of aromatase inhibitors on the dynamics of women's menstrual cycles. Methods: We simulated random exposures to millions of potential mixtures of 86 aromatase inhibitors. A pharmacokinetic model of intake and disposition of the chemicals predicted their internal concentration as a function of time (up to 2 y). A ToxCast™ aromatase assay provided concentration–inhibition relationships for each chemical. The resulting total aromatase inhibition was input to a mathematical model of the hormonal hypothalamus–pituitary–ovarian control of ovulation in women. Results: Above 10% inhibition of estradiol synthesis by aromatase inhibitors, noticeable (eventually reversible) effects on ovulation were predicted. Exposures to individual chemicals never led to such effects. In our best estimate, ∼10% of the combined exposures simulated had mild to catastrophic impacts on ovulation. A lower bound on that figure, obtained using an optimistic exposure scenario, was 0.3%. Conclusions: These results demonstrate the possibility to predict large-scale mixture effects for endocrine disrupters with a predictive toxicology approach that is suitable for high-throughput ranking and risk assessment. The size of the effects predicted is consistent with an increased risk of infertility in women from everyday exposures to our chemical environment. https://doi.org/10.1289/EHP742 PMID:28886606
NASA Astrophysics Data System (ADS)
Agaoglu, B.; Scheytt, T. J.; Copty, N. K.
2011-12-01
This study examines the mechanistic processes governing multiphase flow of a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures, as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of the multiphase flow simulator UTCHEM was used to compare multiphase model simulations with the column experiment results. The effects of employing different grid geometries (1D, 2D, 3D), heterogeneity, and different initial NAPL saturation configurations were also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with slow flow rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. The results were less consistent for fast non-equilibrium flow conditions. The dissolution process from the NAPL mixture into the water-ethanol flushing solutions was found to be more complex than the dissolution expressions incorporated in the numerical model. The dissolution rate of individual organic compounds (namely toluene and benzene) from a NAPL mixture into the ethanol-water flushing solution was found not to correlate with their equilibrium solubility values. The implications of this controlled experimental and modeling study for field cosolvent remediation applications are discussed.
Barillot, Romain; Escobar-Gutiérrez, Abraham J.; Fournier, Christian; Huynh, Pierre; Combes, Didier
2014-01-01
Background and Aims Predicting light partitioning in crop mixtures is a critical step in improving the productivity of such complex systems, and light interception has been shown to be closely linked to plant architecture. The aim of the present work was to analyse the relationships between plant architecture and light partitioning within wheat–pea (Triticum aestivum–Pisum sativum) mixtures. An existing model for wheat was utilized and a new model for pea morphogenesis was developed. Both models were then used to assess the effects of architectural variations in light partitioning. Methods First, a deterministic model (L-Pea) was developed in order to obtain dynamic reconstructions of pea architecture. The L-Pea model is based on L-systems formalism and consists of modules for ‘vegetative development’ and ‘organ extension’. A tripartite simulator was then built up from pea and wheat models interfaced with a radiative transfer model. Architectural parameters from both plant models, selected on the basis of their contribution to leaf area index (LAI), height and leaf geometry, were then modified in order to generate contrasting architectures of wheat and pea. Key results By scaling down the analysis to the organ level, it could be shown that the number of branches/tillers and length of internodes significantly determined the partitioning of light within mixtures. Temporal relationships between light partitioning and the LAI and height of the different species showed that light capture was mainly related to the architectural traits involved in plant LAI during the early stages of development, and in plant height during the onset of interspecific competition. Conclusions In silico experiments enabled the study of the intrinsic effects of architectural parameters on the partitioning of light in crop mixtures of wheat and pea. 
The findings show that plant architecture is an important criterion for the identification/breeding of plant ideotypes, particularly with respect to light partitioning. PMID:24907314
Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.
Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M
2017-02-02
Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas-chromatography/mass spectrometry (GC/MS) non-targeted assays. When run over weeks or months, technical noise due to batch and run-order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, failure to address batch-specific truncation of low abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm) that accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to explicit accommodation of batch effects, run order and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.
Skelsey, P; Rossing, W A H; Kessel, G J T; Powell, J; van der Werf, W
2005-04-01
A spatiotemporal integro-difference equation model was developed and utilized to study the progress of epidemics in spatially heterogeneous mixtures of susceptible and resistant host plants. The effects of different scales and patterns of host genotypes on the development of focal and general epidemics were investigated using potato late blight as a case study. Two different radial Laplace kernels and a two-dimensional Gaussian kernel were used for modeling the dispersal of spores. An analytical expression for the apparent infection rate, r, in general epidemics was tested by comparison with dynamic simulations. A genotype connectivity parameter, q, was introduced into the formula for r. This parameter quantifies the probability of pathogen inoculum produced on a given host genotype unit reaching the same or another unit of the same genotype. The analytical expression for the apparent infection rate provided accurate predictions of realized r in the simulations of general epidemics. The relationship between r and the radial velocity of focus expansion, c, in focal epidemics was linear, in accordance with theory for homogeneous genotype mixtures. The findings suggest that genotype mixtures that are effective in reducing general epidemics of Phytophthora infestans will likewise curtail focal epidemics and vice versa.
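A one-dimensional toy version of an integro-difference epidemic step, dispersal by a discretized Gaussian kernel followed by logistic-type growth restricted to susceptible genotype units, can be sketched as follows. The grid size, kernel width, infection rate, and genotype pattern are illustrative, not the study's calibrated values.

```python
import numpy as np

# 1-D integro-difference sketch of an epidemic in a genotype mixture:
# each generation, disease disperses via a Gaussian kernel, then grows
# logistically, but only on susceptible host units.
n, sigma, rho, steps = 200, 3.0, 0.8, 25
x = np.arange(-15, 16)                     # odd-length kernel support
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()

# Regular mixture pattern: every unit with index = 2 (mod 4) is resistant
susceptible = np.tile([1.0, 1.0, 0.0, 1.0], n // 4)
u = np.zeros(n)
u[n // 2] = 0.01                           # focal inoculation
for _ in range(steps):
    spores = np.convolve(u, kernel, mode="same")          # dispersal stage
    u = np.clip(u + rho * spores * susceptible * (1.0 - u), 0.0, 1.0)

print(round(float(u.max()), 3), round(float(u.mean()), 4))
```

The focus expands outward while resistant units stay uninfected, which is the mechanism behind the genotype connectivity parameter q: inoculum landing on resistant units is lost to the epidemic.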
Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth
2017-12-01
The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
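The covariance structure the method exploits can be written down directly. The sketch below builds the naive O(N²) covariance matrix for a kernel that is a mixture of damped exponential-cosine terms, the real-valued form of the complex-exponential mixture described above; the term parameters are illustrative, and the paper's actual contribution — the linear-scaling solver — is not reproduced here.

```python
import numpy as np

def mixture_kernel(tau, terms):
    """k(tau) = sum_j a_j * exp(-c_j |tau|) * cos(d_j tau).

    Each (a, c, d) triple is one damped-exponential term; d = 0 gives a
    pure Ornstein-Uhlenbeck-like term.  Values are illustrative only.
    """
    k = np.zeros_like(tau, dtype=float)
    for a, c, d in terms:
        k += a * np.exp(-c * np.abs(tau)) * np.cos(d * tau)
    return k

def covariance(t, terms, yerr):
    """Dense covariance matrix for observation times t with per-point
    measurement uncertainties yerr (no even spacing required)."""
    tau = t[:, None] - t[None, :]
    return mixture_kernel(tau, terms) + np.diag(yerr**2)
```

For a GP likelihood one would factorize this matrix; the point of the paper is that its semiseparable structure lets that cost scale as O(N) rather than O(N³).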
A dark energy model alternative to generalized Chaplygin gas
NASA Astrophysics Data System (ADS)
Hova, Hoavo; Yang, Huanxiong
By proposing a new cosmic fluid model with equation-of-state parameter −1 ≤ ω ≤ 0 as an alternative to the generalized Chaplygin gas, we reexamine the role of Chaplygin gas-like fluid models in understanding dark energy and dark matter. Instead of acting as a unified dark matter, the fluid is suggested to be a mixture of unclustered dark energy and pressureless dark matter. Within such a scenario, the sub-horizon fluctuations of matter are stable and scale invariant, similar to those in the standard ΛCDM model.
Low Mach number fluctuating hydrodynamics for electrolytes
NASA Astrophysics Data System (ADS)
Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.
2016-11-01
We formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids [A. Donev et al., Phys. Fluids 27, 037103 (2015), 10.1063/1.4913571], we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. We demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second order in the deterministic setting and for length scales much greater than the Debye length gives results consistent with an electroneutral approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.
Modeling avian abundance from replicated counts using binomial mixture models
Kery, Marc; Royle, J. Andrew; Schmid, Hans
2005-01-01
Abundance estimation in ecology is usually accomplished by capture–recapture, removal, or distance sampling methods. These may be hard to implement at large spatial scales. In contrast, binomial mixture models enable abundance estimation without individual identification, based simply on temporally and spatially replicated counts. Here, we evaluate mixture models using data from the national breeding bird monitoring program in Switzerland, where some 250 1-km2 quadrats are surveyed using the territory mapping method three times during each breeding season. We chose eight species with contrasting distribution (wide–narrow), abundance (high–low), and detectability (easy–difficult). Abundance was modeled as a random effect with a Poisson or negative binomial distribution, with mean affected by forest cover, elevation, and route length. Detectability was a logit-linear function of survey date, survey date-by-elevation, and sampling effort (time per transect unit). Resulting covariate effects and parameter estimates were consistent with expectations. Detectability per territory (for three surveys) ranged from 0.66 to 0.94 (mean 0.84) for easy species, and from 0.16 to 0.83 (mean 0.53) for difficult species, depended on survey effort for two easy and all four difficult species, and changed seasonally for three easy and three difficult species. Abundance was positively related to route length in three high-abundance and one low-abundance (one easy and three difficult) species, and increased with forest cover in five forest species, decreased for two nonforest species, and was unaffected for a generalist species. Abundance estimates under the most parsimonious mixture models were between 1.1 and 8.9 (median 1.8) times greater than estimates based on territory mapping; hence, three surveys were insufficient to detect all territories for each species. 
We conclude that binomial mixture models are an important new approach for estimating abundance corrected for detectability when only repeated-count data are available. Future developments envisioned include estimation of trend, occupancy, and total regional abundance.
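The likelihood underlying the binomial (N-mixture) model above can be sketched for a single site, assuming a Poisson abundance prior as in the paper; the brute-force summation bound `n_max` and the omission of covariates on abundance and detectability are our simplifications.

```python
import math

def nmix_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of repeated counts at one site under the binomial
    mixture model: N ~ Poisson(lam), counts y_t | N ~ Binomial(N, p).

    The latent abundance N is never observed; it is marginalized out by
    summing over plausible values up to n_max.
    """
    total = 0.0
    for n in range(max(counts), n_max + 1):
        # Poisson pmf in log space to stay numerically safe for large n
        pois = math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))
        like = pois
        for y in counts:
            like *= math.comb(n, y) * p**y * (1 - p)**(n - y)
        total += like
    return math.log(total)
```

In practice `lam` and `p` would be tied to covariates (forest cover, survey date, effort) via log and logit links and maximized over all quadrats jointly.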
Predicting mixed-gas adsorption equilibria on activated carbon for precombustion CO2 capture.
García, S; Pis, J J; Rubiera, F; Pevida, C
2013-05-21
We present experimentally measured adsorption isotherms of CO2, H2, and N2 on a phenol-formaldehyde resin-based activated carbon, which had been previously synthesized for the separation of CO2 in a precombustion capture process. The single component adsorption isotherms were measured in a magnetic suspension balance at three different temperatures (298, 318, and 338 K) and over a large range of pressures (from 0 to 3000-4000 kPa). These values cover the temperature and pressure conditions likely to be found in a precombustion capture scenario, where CO2 needs to be separated from a CO2/H2/N2 gas stream at high pressure (~1000-1500 kPa) and with a high CO2 concentration (~20-40 vol %). Data on the pure component isotherms were correlated using the Langmuir, Sips, and dual-site Langmuir (DSL) models, i.e., a two-, three-, and four-parameter model, respectively. By using the pure component isotherm fitting parameters, adsorption equilibrium was then predicted for multicomponent gas mixtures by the extended models. The DSL model was formulated considering the energetic site-matching concept, recently addressed in the literature. Experimental gas-mixture adsorption equilibrium data were calculated from breakthrough experiments conducted in a lab-scale fixed-bed reactor and compared with the predictions from the models. Breakthrough experiments were carried out at a temperature of 318 K and five different pressures (300, 500, 1000, 1500, and 2000 kPa) where two different CO2/H2/N2 gas mixtures were used as the feed gas in the adsorption step. The DSL model was found to be the one that most accurately predicted the CO2 adsorption equilibrium in the multicomponent mixture. The results presented in this work highlight the importance of performing experimental measurements of mixture adsorption equilibria, as they are of utmost importance to discriminate between models and to correctly select the one that most closely reflects the actual process.
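The extended dual-site Langmuir prediction from pure-component fits can be sketched as follows. This is the standard extended-DSL form with a shared denominator per site; it does not implement the energetic site-matching variants the abstract discusses, and all parameter values in the test are invented for illustration.

```python
def extended_dsl(P, y, qs1, b1, qs2, b2):
    """Extended dual-site Langmuir loadings for a gas mixture.

    P   : total pressure (kPa)
    y   : mole fractions of each component
    qs1, b1, qs2, b2 : per-component saturation capacities and affinity
        constants for the two adsorption sites (these would come from
        pure-component isotherm fits in practice).

    Returns the predicted loading of each component.
    """
    # components sharing a site compete through a common denominator
    d1 = 1.0 + sum(bj * P * yj for bj, yj in zip(b1, y))
    d2 = 1.0 + sum(bj * P * yj for bj, yj in zip(b2, y))
    return [q1 * bi1 * P * yi / d1 + q2 * bi2 * P * yi / d2
            for q1, bi1, q2, bi2, yi in zip(qs1, b1, qs2, b2, y)]
```

For a single component (y = [1.0]) the expression reduces to the pure four-parameter DSL isotherm, which is how the fitted parameters are obtained in the first place.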
NASA Astrophysics Data System (ADS)
Licquia, Timothy C.; Newman, Jeffrey A.
2016-11-01
The exponential scale length (L_d) of the Milky Way's (MW's) disk is a critical parameter for describing the global physical size of our Galaxy, important both for interpreting other Galactic measurements and for helping us understand how our Galaxy fits into extragalactic contexts. Unfortunately, current estimates span a wide range of values and are often statistically incompatible with one another. Here, we perform a Bayesian meta-analysis to determine an improved, aggregate estimate for L_d, utilizing a mixture-model approach to account for the possibility that any one measurement has not properly accounted for all statistical or systematic errors. Within this machinery, we explore a variety of ways of modeling the nature of problematic measurements, and then employ a Bayesian model averaging technique to derive net posterior distributions that incorporate any model-selection uncertainty. Our meta-analysis combines 29 different (15 visible and 14 infrared) photometric measurements of L_d available in the literature; these involve a broad assortment of observational data sets, MW models and assumptions, and methodologies, all tabulated herein. Analyzing the visible and infrared measurements separately yields estimates for L_d of 2.71 (+0.22/−0.20) kpc and 2.51 (+0.15/−0.13) kpc, respectively, whereas considering them all combined yields 2.64 ± 0.13 kpc. The ratio between the visible and infrared scale lengths determined here is very similar to that measured in external spiral galaxies. We use these results to update the model of the Galactic disk from our previous work, constraining its stellar mass to be 4.8 (+1.5/−1.1) × 10^10 M⊙ and the MW's total stellar mass to be 5.7 (+1.5/−1.1) × 10^10 M⊙.
Structure, thermodynamics, and solubility in tetromino fluids.
Barnes, Brian C; Siderius, Daniel W; Gelb, Lev D
2009-06-16
To better understand the self-assembly of small molecules and nanoparticles adsorbed at interfaces, we have performed extensive Monte Carlo simulations of a simple lattice model based on the seven hard "tetrominoes", connected shapes that occupy four lattice sites. The equations of state of the pure fluids and all of the binary mixtures are determined over a wide range of density, and a large selection of multicomponent mixtures are also studied at selected conditions. Calculations are performed in the grand canonical ensemble and are analogous to real systems in which molecules or nanoparticles reversibly adsorb to a surface or interface from a bulk reservoir. The model studied is athermal; objects in these simulations avoid overlap but otherwise do not interact. As a result, all of the behavior observed is entropically driven. The one-component fluids all exhibit marked self-ordering tendencies at higher densities, with quite complex structures formed in some cases. Significant clustering of objects with the same rotational state (orientation) is also observed in some of the pure fluids. In all of the binary mixtures, the two species are fully miscible at large scales, but exhibit strong species-specific clustering (segregation) at small scales. This behavior persists in multicomponent mixtures; even in seven-component mixtures of all the shapes there is significant association between objects of the same shape. To better understand these phenomena, we calculate the second virial coefficients of the tetrominoes and related quantities, extract thermodynamic volume of mixing data from the simulations of binary mixtures, and determine Henry's law solubilities for each shape in a variety of solvents. 
The overall picture obtained is one in which complementarity of both the shapes of individual objects and the characteristic structures of different fluids are important in determining the overall behavior of a fluid of a given composition, with sometimes counterintuitive results. Finally, we note that no sharp phase transitions are observed but that this appears to be due to the small size of the objects considered. It is likely that complex phase behavior may be found in systems of larger polyominoes.
NASA Astrophysics Data System (ADS)
Errington, Jeffrey Richard
This work focuses on the development of intermolecular potential models for real fluids. United-atom models have been developed for both non-polar and polar fluids. The models have been optimized to the vapor-liquid coexistence properties. Histogram reweighting techniques were used to calculate phase behavior. The Hamiltonian scaling grand canonical Monte Carlo method was developed to enable the determination of thermodynamic properties of several related Hamiltonians from a single simulation. With this method, the phase behavior of variations of the Buckingham exponential-6 potential was determined. Reservoir grand canonical Monte Carlo simulations were developed to simulate molecules with complex architectures and/or stiff intramolecular constraints. The scheme is based on the creation of a reservoir of ideal chains from which structures are selected for insertion during a simulation. New intermolecular potential models have been developed for water, the n-alkane homologous series, benzene, cyclohexane, carbon dioxide, ammonia and methanol. The models utilize the Buckingham exponential-6 potential to model non-polar interactions and point charges to describe polar interactions. With the exception of water, the new models reproduce experimental saturated densities, vapor pressures and critical parameters to within a few percent. In the case of water, we found a set of parameters that describes the phase behavior better than other available point charge models while giving a reasonable description of the liquid structure. The mixture behavior of water-hydrocarbon mixtures has also been examined. The Henry's law constants of methane, ethane, benzene and cyclohexane in water were determined using Widom insertion and expanded ensemble techniques. In addition the high-pressure phase behavior of water-methane and water-ethane systems was studied using the Gibbs ensemble method. 
The results from this study indicate that it is possible to obtain a good description of the phase behavior of pure components using united-atom models. The mixture behavior of non-polar systems, including highly asymmetric components, was in good agreement with experiment. The calculations for the highly non-ideal water-hydrocarbon mixtures reproduced experimental behavior with varying degrees of success. The results indicate that multibody effects, such as polarizability, must be taken into account when modeling mixtures of polar and non-polar components.
Turbulent flame spreading mechanisms after spark ignition
NASA Astrophysics Data System (ADS)
Subramanian, V.; Domingo, Pascale; Vervisch, Luc
2009-12-01
Numerical simulation of forced ignition is performed in the framework of Large-Eddy Simulation (LES) combined with a tabulated detailed chemistry approach. The objective is to reproduce the flame properties observed in a recent experimental work reporting probability of ignition in a laboratory-scale burner operating with a methane/air non-premixed mixture [1]. The smallest scales of chemical phenomena, which are unresolved by the LES grid, are approximated with a flamelet model combined with presumed probability density functions, to account for the unresolved part of turbulent fluctuations of species and temperature. One-dimensional flamelets are simulated using GRI-3.0 [2] and tabulated under a set of parameters describing the local mixing and progress of reaction. A non-reacting case was simulated first, to study the unsteady velocity and mixture fields. The time-averaged velocity and mixture fraction, and their respective turbulent fluctuations, are compared against the experimental measurements in order to estimate the prediction capabilities of LES. The time history of the axial and radial components of velocity and of the mixture fraction is accumulated and analyzed for different burner regimes. Based on this information, spark ignition is mimicked at selected ignition spots, and the dynamics of kernel development is analyzed and compared against the experimental observations. The possible link between the success or failure of ignition and the flow conditions (in terms of velocity and composition) at the sparking time is then explored.
Demuth, Dominik; Haase, Nils; Malzacher, Daniel; Vogel, Michael
2015-08-01
We use ¹³C CP MAS NMR to investigate the dependence of elastin dynamics on the concentration and composition of the solvent at various temperatures. For elastin in pure glycerol, line-shape analysis shows that larger-scale fluctuations of the protein backbone require a minimum glycerol concentration of ~0.6 g/g at ambient temperature, while smaller-scale fluctuations are activated at lower solvation levels of ~0.2 g/g. Immersing elastin in various glycerol-water mixtures, we observe at room temperature that the protein mobility is higher for lower glycerol fractions in the solvent and, thus, lower solvent viscosity. When decreasing the temperature, the elastin spectra approach the line shape for the rigid protein at 245 K for all studied samples, indicating that the protein ceases to be mobile on the experimental time scale of ~10⁻⁵ s. Our findings yield evidence for a strong coupling between elastin fluctuations and solvent dynamics; hence, such interaction is not restricted to the case of protein-water mixtures. Spectral resolution of different carbon species reveals that the protein-solvent couplings can, however, be different for side-chain and backbone units. We discuss these results against the background of the slaving model for protein dynamics.
NASA Astrophysics Data System (ADS)
Fort, Charles; Fu, Christopher D.; Weichselbaum, Noah A.; Bardet, Philippe M.
2015-12-01
To deploy optical diagnostics such as particle image velocimetry or planar laser-induced fluorescence (PLIF) in complex geometries, it is beneficial to use index-matched facilities. A binary mixture of para-cymene and cinnamaldehyde provides a viable option for matching the refractive index of acrylic, a common material for scaled models and test sections. This fluid is particularly appropriate for large-scale facilities and when a low-density and low-viscosity fluid is sought, such as in fluid-structure interaction studies. This binary solution has relatively low kinematic viscosity and density; its use enables the experimentalist to select operating temperature and to increase fluorescence signal in PLIF experiments. Measurements of spectral and temperature dependence of refractive index, density, and kinematic viscosity are reported. The effect of the binary mixture on solubility control of Rhodamine 6G is also characterized.
Safety Testing of Ammonium Nitrate Based Mixtures
NASA Astrophysics Data System (ADS)
Phillips, Jason; Lappo, Karmen; Phelan, James; Peterson, Nathan; Gilbert, Don
2013-06-01
Ammonium nitrate (AN)/ammonium nitrate based explosives have a lengthy documented history of use by adversaries in acts of terror. While historical research has been conducted on AN-based explosive mixtures, it has primarily focused on detonation performance while varying the oxygen balance between the oxidizer and fuel components. Similarly, historical safety data on these materials is often lacking in pertinent details such as specific fuel type, particle size parameters, oxidizer form, etc. A variety of AN-based fuel-oxidizer mixtures were tested for small-scale sensitivity in preparation for large-scale testing. Current efforts focus on maintaining a zero oxygen-balance (a stoichiometric ratio for active chemical participants) while varying factors such as charge geometry, oxidizer form, particle size, and inert diluent ratios. Small-scale safety testing was conducted on various mixtures and fuels. It was found that ESD sensitivity is significantly affected by particle size, while this is less so for impact and friction. Thermal testing is in progress to evaluate hazards that may be experienced during large-scale testing.
Biclustering Models for Two-Mode Ordinal Data.
Matechou, Eleni; Liu, Ivy; Fernández, Daniel; Farias, Miguel; Gjelsvik, Bergljot
2016-09-01
The work in this paper introduces finite mixture models that can be used to simultaneously cluster the rows and columns of two-mode ordinal categorical response data, such as those resulting from Likert scale responses. We use the popular proportional odds parameterisation and propose models which provide insights into major patterns in the data. Model-fitting is performed using the EM algorithm, and a fuzzy allocation of rows and columns to corresponding clusters is obtained. The clustering ability of the models is evaluated in a simulation study and demonstrated using two real data sets.
Patterning ecological risk of pesticide contamination at the river basin scale.
Faggiano, Leslie; de Zwart, Dick; García-Berthou, Emili; Lek, Sovan; Gevrey, Muriel
2010-05-01
Ecological risk assessment was conducted to determine the risk posed by pesticide mixtures to the Adour-Garonne river basin (south-western France). The objectives of this study were to assess the general state of this basin with regard to pesticide contamination using a risk assessment procedure and to detect patterns in toxic mixture assemblages through a self-organizing map (SOM) methodology in order to identify the locations at risk. Exposure assessment, risk assessment with species sensitivity distributions, and mixture toxicity rules were used to compute six relative risk predictors for different toxic modes of action: the multi-substance potentially affected fraction of species, depending on the toxic mode of action of compounds found in the mixture (msPAF_CA(TMoA) values). The predictors computed for the 131 sampling sites assessed in this study were then patterned through the SOM learning process. Four clusters of sampling sites exhibiting similar toxic assemblages were identified. In the first cluster, which comprised 83% of the sampling sites, the risk posed by pesticide mixtures to aquatic species was weak (mean msPAF value for those sites < 0.0036%), while in another cluster the risk was significant (mean msPAF < 1.09%). GIS mapping highlighted an interesting spatial pattern in the distribution of sampling sites for each cluster, with a significant and highly localized risk in the French department of Lot-et-Garonne. The combined use of the SOM methodology, mixture toxicity modelling, and a clear geo-referenced representation of results not only revealed the general state of the Adour-Garonne basin with regard to contamination by pesticides but also enabled us to analyze the spatial pattern of toxic mixture assemblages in order to prioritize the locations at risk and to detect the group of compounds causing the greatest risk at the basin scale.
Baldovin-Stella stochastic volatility process and Wiener process mixtures
NASA Astrophysics Data System (ADS)
Peirano, P. P.; Challet, D.
2012-08-01
Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make the model fully explicit by using Student distributions instead of power-law-truncated Lévy distributions, show that its analytic tractability extends to the larger class of symmetric generalized hyperbolic distributions, and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time-reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.
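The "mixtures of Wiener processes" representation has a familiar one-step analogue: a Student distribution is a scale mixture of normals. A small sketch using an inverse-gamma mixing variance — our illustration of the representation, not the authors' full process:

```python
import numpy as np

rng = np.random.default_rng(0)

def student_returns_via_mixture(nu, size):
    """Draw Student-t distributed returns as a scale mixture of normals:
    with s^2 ~ InverseGamma(nu/2, nu/2) and x | s ~ N(0, s^2), the
    marginal of x is Student-t with nu degrees of freedom.  This is the
    one-increment analogue of representing the process as a mixture of
    Wiener processes with random variance.
    """
    # 1/Gamma(shape=nu/2, scale=2/nu) is InverseGamma(nu/2, nu/2)
    s2 = 1.0 / rng.gamma(nu / 2.0, 2.0 / nu, size=size)
    return rng.normal(0.0, np.sqrt(s2))
```

For nu degrees of freedom the marginal variance is nu/(nu − 2), which the sample variance of a large draw should approach.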
A two-phase micromorphic model for compressible granular materials
NASA Astrophysics Data System (ADS)
Paolucci, Samuel; Li, Weiming; Powers, Joseph
2009-11-01
We introduce a new two-phase continuum model for compressible granular material based on micromorphic theory and treat it as a two-phase mixture with inner structure. By taking an appropriate number of moments of the local micro-scale balance equations, the averaged phase balance equations result from a systematic averaging procedure. In addition to equations for mass, momentum and energy, the balance equations also include evolution equations for the microinertia and microspin tensors. The latter equations combine to yield a general form of a compaction equation when the material is assumed to be isotropic. When nonlinear and inertial effects are neglected, the generalized compaction equation reduces to that originally proposed by Baer and Nunziato. We use the generalized compaction equation to numerically model a mixture of granular high explosive and interstitial gas. One-dimensional shock tube and piston-driven solutions are presented and compared with experimental results and other known solutions.
Luo, Zhi; Marson, Domenico; Ong, Quy K; Loiudice, Anna; Kohlbrecher, Joachim; Radulescu, Aurel; Krause-Heuer, Anwen; Darwish, Tamim; Balog, Sandor; Buonsanti, Raffaella; Svergun, Dmitri I; Posocco, Paola; Stellacci, Francesco
2018-04-09
The ligand shell (LS) determines a number of nanoparticles' properties. Nanoparticles' cores can be characterized accurately; yet the structure of the LS, when composed of a mixture of molecules, can be described only qualitatively (e.g., patchy, Janus, and random). Here we show that a quantitative description of the LS morphology of monodisperse nanoparticles can be obtained using small-angle neutron scattering (SANS), measured at multiple contrasts achieved by either ligand or solvent deuteration. Three-dimensional models of the nanoparticles' core and LS are generated using an ab initio reconstruction method. Characteristic length scales extracted from the models are compared with simulations. We also characterize the evolution of the LS upon thermal annealing, and investigate the LS morphology of mixed-ligand copper and silver nanoparticles as well as gold nanoparticles coated with ternary mixtures. Our results suggest that SANS combined with multiphase modeling is a versatile approach for the characterization of nanoparticles' LS.
A flamelet model for transcritical LOx/GCH4 flames
NASA Astrophysics Data System (ADS)
Müller, Hagen; Pfitzner, Michael
2017-03-01
This work presents a numerical framework to efficiently simulate methane combustion at supercritical pressures. A LES flamelet approach is adapted to account for real-gas thermodynamics effects which are a prominent feature of flames at near-critical injection conditions. The thermodynamics model is based on the Peng-Robinson equation of state (PR-EoS) in conjunction with a novel volume-translation method to correct deficiencies in the transcritical regime. The resulting formulation is more accurate than standard cubic EoSs without deteriorating their good computational performance. To consistently account for pressure and strain fluctuations in the flamelet model, an additional enthalpy equation is solved along with the transport equations for mixture fraction and mixture fraction variance. The method is validated against available experimental data for a laboratory scale LOx/GCH4 flame at conditions that resemble those in liquid-propellant rocket engines. The LES result is in good agreement with the measured OH* radiation.
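The base thermodynamics model above is standard enough to sketch. The function below evaluates the Peng-Robinson pressure from critical constants and the acentric factor; the paper's volume-translation correction for the transcritical regime is not included, and the methane constants used in the check are textbook values.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def pr_pressure(T, v, Tc, Pc, omega):
    """Peng-Robinson EoS pressure (Pa) at temperature T (K) and molar
    volume v (m^3/mol), given critical temperature Tc, critical
    pressure Pc, and acentric factor omega."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    # repulsive term minus temperature-dependent attractive term
    return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)
```

At low density the attractive and covolume corrections become negligible and the expression approaches the ideal gas law, which provides a quick sanity check of an implementation.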
Numerical simulation of field scale cosolvent flooding for LNAPL remediation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roeder, E.; Brame, S.E.; Falta, R.W.
1995-12-31
This paper describes a modeling study that supports remediation of contaminated soils at Hill Air Force Base in Utah. The site is contaminated with a mixture of solvents, jet fuel, and other organic substances that form a separate low-density phase on top of the water table. A test cell within the contaminant zone will be flooded with a cosolvent/water mixture to drive the nonaqueous phase liquids (NAPLs) out. The modeling study is designed to determine whether buoyancy of the flooding solution will cause it to float on top, whether heterogeneity of the ground will channel the cosolvent around pockets of NAPL, and the sensitivity of the predicted remediation effectiveness to the uncertainty in ternary information. The modeling effort will use UTCHEM, a 3-dimensional finite-difference flooding simulator which solves mass balance equations for up to 21 components in up to 4 phases.
Computational Thermomechanical Modelling of Early-Age Silicate Composites
NASA Astrophysics Data System (ADS)
Vala, J.; Št'astník, S.; Kozák, V.
2009-09-01
Strains and stresses in early-age silicate composites, widely used in civil engineering, especially in fresh concrete mixtures, in addition to those caused by exterior mechanical loads, are results of complicated non-deterministic physical and chemical processes. Their numerical prediction at the macro-scale level requires the non-trivial physical analysis based on the thermodynamic principles, making use of micro-structural information from both theoretical and experimental research. The paper introduces a computational model, based on a nonlinear system of macroscopic equations of evolution, supplied with certain effective material characteristics, coming from the micro-scale analysis, and sketches the algorithm for its numerical analysis.
Bayesian nonparametric regression with varying residual density
Pati, Debdeep; Dunson, David B.
2013-01-01
We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
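The probit stick-breaking construction above maps latent Gaussian terms to mixture weights. A minimal sketch, with scalar stick parameters standing in for the paper's predictor-dependent Gaussian-process terms:

```python
import math

def psb_weights(alphas):
    """Probit stick-breaking mixture weights.

    Each stick fraction is Phi(alpha_h), the standard normal CDF of a
    latent term; in the paper these terms vary with predictors via
    Gaussian processes, while here they are fixed scalars.
    """
    def Phi(a):
        return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

    weights, remaining = [], 1.0
    for a in alphas:
        frac = Phi(a)
        weights.append(remaining * frac)  # break off a piece of the stick
        remaining *= 1.0 - frac
    weights.append(remaining)  # leftover mass for the final component
    return weights
```

Making the `alphas` functions of predictors is exactly what lets the residual mixture, and hence its density, change with the covariates.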
A New LES/PDF Method for Computational Modeling of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Muradoglu, Metin; Pope, Stephen B.
2013-11-01
A new LES/PDF method is developed for computational modeling of turbulent reacting flows. The open-source package OpenFOAM is adopted as the LES solver and combined with a particle-based Monte Carlo method to solve the LES/PDF model equations. The dynamic Smagorinsky model is employed to account for the subgrid-scale motions. The LES solver is first validated for the Sandia Flame D using a steady flamelet method in which the chemical compositions, density and temperature fields are parameterized by the mean mixture fraction and its variance. In this approach, the modeled transport equations for the mean mixture fraction and the mean square of the mixture fraction are solved, and the variance is then computed from its definition. The results are found to be in good agreement with the experimental data. The LES solver is then combined with the particle-based Monte Carlo algorithm to form a complete solver for the LES/PDF model equations. The in situ adaptive tabulation (ISAT) algorithm is incorporated into the LES/PDF method for efficient implementation of detailed chemical kinetics. The LES/PDF method is also applied to the Sandia Flame D using the GRI-Mech 3.0 chemical mechanism, and the results are compared with the experimental data and earlier PDF simulations. This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK), Grant No. 111M067.
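Computing the variance "from its definition", as described above, is a pointwise operation on the two transported fields: var = mean-of-square minus square-of-mean. A minimal sketch with made-up cell values (clipping guards against round-off producing a slightly negative variance):

```python
import numpy as np

# Hypothetical cell values of the two transported fields:
# the mean mixture fraction <xi> and the mean of its square <xi^2>
xi_mean = np.array([0.10, 0.35, 0.60, 0.85])
xi_sq   = np.array([0.012, 0.140, 0.370, 0.730])

# Variance from its definition: var(xi) = <xi^2> - <xi>^2,
# clipped at zero to guard against round-off
xi_var = np.clip(xi_sq - xi_mean ** 2, 0.0, None)
```

The mean and variance pair then indexes the flamelet table for composition, density and temperature.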
Cancer-preventive activities of tocopherols and tocotrienols.
Ju, Jihyeung; Picinich, Sonia C; Yang, Zhihong; Zhao, Yang; Suh, Nanjoo; Kong, Ah-Ng; Yang, Chung S
2010-04-01
The cancer-preventive activity of vitamin E has been studied. Whereas some epidemiological studies have suggested a protective effect of vitamin E against cancer formation, many large-scale intervention studies with alpha-tocopherol (usually large doses) have not demonstrated a cancer-preventive effect. Studies on alpha-tocopherol in animal models also have not demonstrated robust cancer prevention effects. One possible explanation for the lack of demonstrable cancer-preventive effects is that high doses of alpha-tocopherol decrease the blood and tissue levels of delta-tocopherols. It has been suggested that gamma-tocopherol, due to its strong anti-inflammatory and other activities, may be the more effective form of vitamin E in cancer prevention. Our recent results have demonstrated that a gamma-tocopherol-rich mixture of tocopherols inhibits colon, prostate, mammary and lung tumorigenesis in animal models, suggesting that this mixture may have a high potential for applications in the prevention of human cancer. In this review, we discuss biochemical properties of tocopherols, results of possible cancer-preventive effects in humans and animal models and possible mechanisms involved in the inhibition of carcinogenesis. Based on this information, we propose that a gamma-tocopherol-rich mixture of tocopherols is a very promising cancer-preventive agent and warrants extensive future research.
Asphalt mixture performance characterization using small-scale cylindrical specimens.
DOT National Transportation Integrated Search
2015-06-01
The results of dynamic modulus testing have become one of the primary performance criteria used to evaluate the laboratory properties of asphalt mixtures. This test is commonly conducted to characterize asphalt mixtures mechanistically using an...
A multiscale model for predicting the viscoelastic properties of asphalt concrete
NASA Astrophysics Data System (ADS)
Garcia Cucalon, Lorena; Rahmani, Eisa; Little, Dallas N.; Allen, David H.
2016-08-01
It is well known that accurate prediction of the long-term performance of asphalt concrete pavement requires modeling that accounts for viscoelasticity within the mastic. However, accounting for viscoelasticity can be costly when the material properties are measured at the scale of asphalt concrete, because the material testing protocols must be repeated for each mixture considered for use in the final design.
Xia, Pu; Zhang, Xiaowei; Zhang, Hanxin; Wang, Pingping; Tian, Mingming; Yu, Hongxia
2017-08-15
One of the major challenges in environmental science is monitoring and assessing the risk of complex environmental mixtures. In vitro bioassays with limited key toxicological end points have been shown to be suitable for evaluating mixtures of organic pollutants in wastewater and recycled water. Omics approaches such as transcriptomics can monitor biological effects at the genome scale. However, few studies have applied omics approaches to the assessment of mixtures of organic micropollutants. Here, an omics approach was developed for profiling the bioactivity of 10 water samples, ranging from wastewater to drinking water, in human cells by a reduced human transcriptome (RHT) approach and dose-response modeling. Transcriptional expression of 1200 selected genes was measured by Ampliseq technology in two cell lines, HepG2 and MCF7, that were exposed to eight serial dilutions of each sample. Concentration-effect models were used to identify differentially expressed genes (DEGs) and to calculate effect concentrations (ECs) of DEGs, which could be ranked to investigate low-dose response. Furthermore, molecular pathways disrupted by different samples were evaluated by Gene Ontology (GO) enrichment analysis. The ability of RHT to represent bioactivity in both HepG2 and MCF7 cells was shown to be comparable to that of previous in vitro bioassays. Finally, the relative potencies of the mixtures indicated by RHT analysis were consistent with the chemical profiles of the samples. RHT analysis with human cells provides an efficient and cost-effective approach to benchmarking mixtures of micropollutants and may offer novel insight into the assessment of mixture toxicity in water.
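Concentration-effect modeling of the kind described above can be sketched with a simple log-logistic fit. The snippet below generates synthetic effect data over eight serial dilutions and recovers the EC50 by a plain grid search; the model form, parameter values and fitting procedure are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def loglogistic(c, ec50, h):
    # Two-parameter log-logistic concentration-effect model, effect in [0, 1]
    return 1.0 / (1.0 + (ec50 / c) ** h)

rng = np.random.default_rng(1)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])  # eight serial dilutions
true_ec50, true_h = 10.0, 1.5
effect = loglogistic(conc, true_ec50, true_h) + rng.normal(0.0, 0.02, conc.size)

# Least-squares grid search over (EC50, hill slope)
ec_grid = np.geomspace(0.1, 300.0, 400)
h_grid = np.linspace(0.5, 3.0, 100)
sse, ec50_hat, h_hat = min(
    (float(np.sum((effect - loglogistic(conc, e, h)) ** 2)), e, h)
    for e in ec_grid for h in h_grid
)
```

Ranking genes by their fitted ECs is then what allows the low-dose responses mentioned in the abstract to be investigated.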
NASA Astrophysics Data System (ADS)
Yozgatligil, Ahmet; Shafee, Sina
2016-11-01
Fire accidents in recent decades have drawn attention to safety issues associated with the design, construction and maintenance of tunnels. A reduced-scale tunnel model constructed using the Froude scaling technique is employed in the current work. Mixtures of n-heptane and ethanol are burned with ethanol volume fractions up to 30 percent and longitudinal ventilation velocities varying from 0.5 to 2.5 m/s. The burning rates of the pool fires are measured using a precision load cell. The heat release rates of the fires are calculated using the oxygen calorimetry method, and the temperature distributions inside the tunnel are also measured. The experiments show that variation of the ventilation velocity has a significant effect on the pool fire burning rate, smoke temperature and the critical ventilation velocity. As oxygen depletion increases with the ethanol content of the blended pool fires, the quasi-steady heat release rate and the ceiling temperatures tend to increase while the combustion duration decreases.
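Froude scaling relations of the kind used to design such reduced-scale tunnels can be sketched as follows. The geometric scale ratio and the full-scale values are illustrative assumptions, not the paper's numbers; the relations themselves (velocity scaling with the square root of the length ratio, heat release rate with the 5/2 power) are the standard Froude-modeling rules.

```python
import math

scale = 1.0 / 13.0  # assumed geometric scale ratio L_model / L_full (illustrative)

def froude_velocity(v_full_scale):
    # Velocities scale with the square root of the length ratio
    return v_full_scale * math.sqrt(scale)

def froude_hrr(q_full_scale):
    # Heat release rate scales with the length ratio to the 5/2 power
    return q_full_scale * scale ** 2.5

v_model = froude_velocity(3.0)   # model velocity for a 3 m/s full-scale wind
q_model = froude_hrr(5.0e6)      # model HRR for a 5 MW full-scale fire, in W
```

These rules preserve the Froude number, so buoyancy-driven smoke behavior in the model tunnel mimics the full-scale case.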
Atomistic and coarse-grained computer simulations of raft-like lipid mixtures.
Pandit, Sagar A; Scott, H Larry
2007-01-01
Computer modeling can provide insights into the existence, structure, size, and thermodynamic stability of localized raft-like regions in membranes. However, the challenges in the construction and simulation of accurate models of heterogeneous membranes are great. The primary obstacle in modeling the lateral organization within a membrane is the relatively slow lateral diffusion rate of lipid molecules. Microsecond or longer time-scales are needed to fully model the formation and stability of a raft in a membrane. Atomistic simulations currently are not able to reach this scale, but they do provide quantitative information on the intermolecular forces and correlations that are involved in lateral organization. In this chapter, the steps needed to carry out and analyze atomistic simulations of hydrated lipid bilayers having heterogeneous composition are outlined. It is then shown how the data from a molecular dynamics simulation can be used to construct a coarse-grained model for the heterogeneous bilayer that can predict the lateral organization and stability of rafts at up to millisecond time-scales.
NASA Astrophysics Data System (ADS)
Jesenska, Sona; Liess, Mathias; Schäfer, Ralf; Beketov, Mikhail; Blaha, Ludek
2013-04-01
Species sensitivity distribution (SSD) is a statistical method broadly used in the ecotoxicological risk assessment of chemicals. Originally it was used for prospective risk assessment of single substances, but nowadays it is becoming more important also in the retrospective risk assessment of mixtures, including at the catchment scale. In the present work, SSD predictions (impacts of mixtures consisting of 25 pesticides; data from several catchments in Germany, France and Finland) were compared with SPEAR-pesticides, which is a bioindicator index based on biological traits responsive to the effects of pesticides and post-contamination recovery. The results showed statistically significant correlations (Pearson's R, p<0.01) between SSD (predicted msPAF values) and values of SPEAR-pesticides (based on field biomonitoring observations). Comparisons of the thresholds established for the SSD and SPEAR approaches (SPEAR-pesticides = 45%, i.e. the LOEC level, and msPAF = 0.05 for SSD, i.e. HC5) showed that the use of chronic toxicity data significantly improved the agreement between the two methods, but the SPEAR-pesticides index was still more sensitive. Taken together, the validation study shows the good potential of SSD models for predicting the real impacts of micropollutant mixtures on natural communities of aquatic biota.
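A minimal sketch of how an msPAF value is obtained from log-normal SSDs by response addition. The SSD parameters and exposure concentrations below are hypothetical placeholders (the 25-pesticide data set is not reproduced here), and response addition assumes independent modes of action:

```python
import math

def paf(log10_conc, mu, sigma):
    # Potentially affected fraction: log-normal SSD (normal in log10 space)
    # evaluated at the measured exposure concentration
    z = (log10_conc - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical SSD parameters (mean, sd of log10 toxicity) for three pesticides
ssd = {"A": (1.2, 0.7), "B": (0.8, 0.6), "C": (1.5, 0.8)}
exposure = {"A": 0.1, "B": -0.2, "C": 0.3}  # log10 measured concentrations

pafs = [paf(exposure[s], *ssd[s]) for s in ssd]
# Response addition across toxicants: msPAF = 1 - prod(1 - PAF_i)
ms_paf = 1.0 - math.prod(1.0 - p for p in pafs)
```

By construction msPAF lies between the largest single-substance PAF and the sum of all PAFs, which is why it is a convenient mixture-level summary to compare against a threshold such as 0.05 (HC5).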
Shock-induced mechanochemistry in heterogeneous reactive powder mixtures
NASA Astrophysics Data System (ADS)
Gonzales, Manny; Gurumurthy, Ashok; Kennedy, Gregory; Neel, Christopher; Gokhale, Arun; Thadhani, Naresh
The bulk response of compacted powder mixtures subjected to high-strain-rate loading conditions in various configurations emerges from behavior at the meso-scale. Simulations at the meso-scale can provide additional confirmation of the possible origins of the observed response. This work investigates the bulk dynamic response of Ti+B+Al reactive powder mixtures under two extreme loading configurations, uniaxial stress and uniaxial strain loading, leveraging highly resolved in-situ measurements and meso-scale simulations. Modified rod-on-anvil impact tests on a reactive pellet demonstrate an optimized stoichiometry promoting reaction in Ti+B+Al. Encapsulated powders subjected to shock compression via flyer-plate tests provide possible evidence of a shock-induced reaction at high pressures. Meso-scale simulations of the direct experimental configurations, employing highly resolved microstructural features of the Ti+B compacted mixture, show complex inhomogeneous deformation responses and reveal the importance of meso-scale features such as particle size and morphology and their effects on the measured response. Funding is generously provided by DTRA through Grant No. HDTRA1-10-1-0038 (Dr. Su Peiris, Program Manager) and by the SMART (AFRL Wright Patterson AFB) and NDSEG fellowships (High Performance Computing and Modernization Office).
NASA Astrophysics Data System (ADS)
Ou, Yihong; Du, Yang; Jiang, Xingsheng; Wang, Dong; Liang, Jianjun
2010-04-01
The study of the special phenomena, occurrence process and control mechanism of gasoline-air mixture thermal ignition in underground oil depots is of important academic and applied value for enriching scientific theories of explosion safety, developing protective technology against fire and decreasing the number of fire accidents. In this paper, the thermal ignition process of a gasoline-air mixture in a model underground oil depot tunnel has been investigated using both experiments and numerical simulation. The calculated results have been validated against the experimental data. The five stages of the thermal ignition course, namely the slow oxidation, rapid oxidation, fire, flameout and quench stages, are defined and accurately described for the first time. According to their order of magnitude in concentration, the species have been divided into six categories, which lays the foundation for explosion-proof design based on the role of different species. The influence of spatial scale on thermal ignition in small-scale spaces has been identified: ignition is hindered because wall reflections cause recirculation of the fluid and change the distribution of heat and mass, so that the progress of chemical reactions throughout the space is also changed. The novel mathematical model established in this paper, based on the unification of chemical kinetics and thermodynamics, provides a supplementary means for analyzing the process and mechanism of thermal ignition.
3D PIC-MCC simulations of discharge inception around a sharp anode in nitrogen/oxygen mixtures
NASA Astrophysics Data System (ADS)
Teunissen, Jannis; Ebert, Ute
2016-08-01
We investigate how photoionization, electron avalanches and space charge affect the inception of nanosecond pulsed discharges. Simulations are performed with a 3D PIC-MCC (particle-in-cell, Monte Carlo collision) model with adaptive mesh refinement for the field solver. This model, whose source code is available online, is described in the first part of the paper. Then we present simulation results in a needle-to-plane geometry, using different nitrogen/oxygen mixtures at atmospheric pressure. In these mixtures non-local photoionization is important for the discharge growth. The typical length scale for this process depends on the oxygen concentration. With 0.2% oxygen the discharges grow quite irregularly, due to the limited supply of free electrons around them. With 2% or more oxygen the development is much smoother. An almost spherical ionized region can form around the electrode tip, which increases in size with the electrode voltage. Eventually this inception cloud destabilizes into streamer channels. In our simulations, discharge velocities are almost independent of the oxygen concentration. We discuss the physical mechanisms behind these phenomena and compare our simulations with experimental observations.
Ha, Dong -Gwang; Kim, Jang -Joo; Baldo, Marc A.
2016-04-29
Mixed host compositions that combine charge transport materials with luminescent dyes offer superior control over exciton formation and charge transport in organic light emitting devices (OLEDs). Two approaches are typically used to optimize the fraction of charge transport materials in a mixed host composition: either an empirical percolative model, or a hopping transport model. We show that these two commonly-employed models are linked by an analytic expression which relates the localization length to the percolation threshold and critical exponent. The relation is confirmed both numerically and experimentally through measurements of the relative conductivity of Tris(4-carbazoyl-9-ylphenyl)amine (TCTA):1,3-bis(3,5-dipyrid-3-yl-phenyl)benzene (BmPyPb) mixtures with different concentrations, where the TCTA acts as a hole conductor and the BmPyPb as a hole insulator. Furthermore, the analytic relation may allow the rational design of mixed layers of small molecules for high-performance OLEDs.
Numerical investigation of spray ignition of a multi-component fuel surrogate
NASA Astrophysics Data System (ADS)
Backer, Lara; Narayanaswamy, Krithika; Pepiot, Perrine
2014-11-01
Simulating turbulent spray ignition, an important process in engine combustion, is challenging, since it combines the complexity of multi-scale, multiphase turbulent flow modeling with the need for an accurate description of chemical kinetics. In this work, we use direct numerical simulation to investigate the role of the evaporation model on the ignition characteristics of a multi-component fuel surrogate, injected as droplets in a turbulent environment. The fuel is represented as a mixture of several components, each one being representative of a different chemical class. A reduced kinetic scheme for the mixture is extracted from a well-validated detailed chemical mechanism, and integrated into the multiphase turbulent reactive flow solver NGA. Comparisons are made between a single-component evaporation model, in which the evaporating gas has the same composition as the liquid droplet, and a multi-component model, where component segregation does occur. In particular, the corresponding production of radical species, which are characteristic of the ignition of individual fuel components, is thoroughly analyzed.
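Component segregation during multi-component evaporation can be illustrated with Raoult's law: the vapor is enriched in the most volatile component, so its composition differs from the liquid's, which is precisely the effect the single-component model neglects. The component properties below are assumed values for illustration, not the surrogate used in the study:

```python
import numpy as np

# Hypothetical three-component surrogate: liquid mole fractions and
# pure-component vapor pressures (Pa) at the droplet temperature (assumed)
x_liq = np.array([0.4, 0.4, 0.2])       # e.g. n-alkane, aromatic, naphthene classes
p_sat = np.array([8.0e3, 2.0e3, 0.5e3])

# Raoult's law: partial pressures, then vapor mole fractions at the surface
p_i = x_liq * p_sat
y_vap = p_i / p_i.sum()
```

Here the most volatile component is strongly enriched in the vapor, so the gas-phase mixture that ignites has a different composition, and hence different radical production, than the liquid droplet.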
Reduction Kinetics of Wüstite Scale on Pure Iron and Steel Sheets in Ar and H2 Gas Mixture
NASA Astrophysics Data System (ADS)
Mao, Weichen; Sloof, Willem G.
2017-10-01
A dense and closed Wüstite scale is formed on pure iron and Mn-alloyed steel after oxidation in an Ar + 33 vol pct CO2 + 17 vol pct CO gas mixture. Reducing the Wüstite scale in an Ar + H2 gas mixture forms a dense and uniform iron layer on top of the remaining Wüstite scale, which separates the unreduced scale from the gas mixture. The reduction of Wüstite is controlled by the bulk diffusion of dissolved oxygen in the formed iron layer and follows a parabolic growth rate law. The reduction kinetics of Wüstite formed on pure iron and on Mn-alloyed steel are the same. The parabolic rate constant of Wüstite reduction obeys an Arrhenius relation with an activation energy of 104 kJ/mol if the formed iron layer is in the ferrite phase. However, at 1223 K (950 °C) the parabolic rate constant of Wüstite reduction drops due to the phase transformation of the iron layer from ferrite to austenite. The effect of oxygen partial pressure on the parabolic rate constant of Wüstite reduction is negligible when reducing in a gas mixture with a dew point below 283 K (10 °C). During oxidation of the Mn-alloyed steel, Mn is dissolved in the Wüstite scale. Subsequently, during reduction of the Wüstite layer, Mn diffuses into the unreduced Wüstite. Ultimately, an oxide-free iron layer is obtained at the surface of the Mn-alloyed steel, which is beneficial for coating applications.
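Because the pre-exponential factor cancels in a ratio, the reported 104 kJ/mol activation energy alone fixes how the parabolic rate constant changes between two temperatures within the ferrite regime. A small sketch (the temperatures chosen are illustrative; the relation would not apply across the ferrite-to-austenite transformation at 1223 K):

```python
import math

R = 8.314    # gas constant, J/(mol K)
EA = 104e3   # activation energy reported for Wustite reduction, J/mol

def rate_ratio(t1_k, t2_k):
    # k(T2)/k(T1) from the Arrhenius relation k = k0 * exp(-EA / (R T));
    # the pre-exponential factor k0 cancels, so its value is not needed
    return math.exp(-EA / R * (1.0 / t2_k - 1.0 / t1_k))

ratio = rate_ratio(973.0, 1173.0)  # 700 C -> 900 C, both in the ferrite regime
```

Raising the temperature from 700 °C to 900 °C speeds up the parabolic rate constant by roughly an order of magnitude under this activation energy.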
Modelling stock order flows with non-homogeneous intensities from high-frequency data
NASA Astrophysics Data System (ADS)
Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.
2013-10-01
A micro-scale model is proposed for the evolution of an information system such as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes, taking account of the stochastic character of the intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model also allows one to link the micro-scale (high-frequency) dynamics of the limit order book with macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and, hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
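A doubly stochastic (Cox) Poisson flow with a multiplicative random intensity can be sketched as follows. The mean-reverting intensity factor used here is an illustrative choice, not the paper's exact specification; the structural point is that, conditional on the intensity path, order counts in each interval are Poisson.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, t_end = 0.01, 100.0
n_steps = int(t_end / dt)

# Multiplicative stochastic intensity: a constant baseline modulated by a
# positive mean-reverting random factor (illustrative Ornstein-Uhlenbeck-like path)
base = 5.0
x = np.ones(n_steps)
for i in range(1, n_steps):
    x[i] = max(1e-6, x[i-1] + 0.5 * (1.0 - x[i-1]) * dt
               + 0.3 * np.sqrt(dt) * rng.normal())
lam = base * x

# Doubly stochastic Poisson flow: given the intensity path, the number of
# orders arriving in each small interval is Poisson with mean lam * dt
counts = rng.poisson(lam * dt)
total_orders = counts.sum()
```

Running two such flows (buy and sell) with correlated intensity factors would give the imbalance process mentioned in the abstract as the difference of the two counting processes.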
Low Mach number fluctuating hydrodynamics for electrolytes
Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; ...
2016-11-18
Here, we formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are also interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids (A. Donev, et al., Physics of Fluids, 27, 3, 2015), we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. Furthermore, we demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second-order in the deterministic setting, and for length scales much greater than the Debye length gives results consistent with an electroneutral/ambipolar approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.
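The Debye length that sets the mesoscale cutoff discussed above follows from the standard electrostatic screening formula; for 0.1 M NaCl at room temperature it comes out near 1 nm. A self-contained sketch (the concentration and relative permittivity are illustrative inputs):

```python
import math

# Physical constants (SI)
EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB = 1.381e-23     # Boltzmann constant, J/K
E = 1.602e-19      # elementary charge, C
NA = 6.022e23      # Avogadro's number, 1/mol

def debye_length(molarity, eps_r=78.4, temp=298.15):
    # lambda_D = sqrt(eps kB T / (sum_i n_i z_i^2 e^2)); for a 1:1
    # electrolyte such as NaCl the sum is 2 n e^2 with n the number
    # density of each ionic species
    n = molarity * 1000.0 * NA  # ions of each species per m^3
    return math.sqrt(EPS0 * eps_r * KB * temp / (2.0 * n * E ** 2))

lam_d = debye_length(0.1)  # 0.1 M NaCl, roughly 1 nm
```

Below this length electroneutrality breaks down and charge fluctuations matter, which is why the electroneutral/ambipolar approximation only holds at scales well above lambda_D.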
Yuan, Dawei; Rao, Kripa; Varanasi, Sasidhar; Relue, Patricia
2012-08-01
A system that incorporates a packed bed reactor for isomerization of xylose and a hollow fiber membrane fermentor (HFMF) for sugar fermentation by yeast was developed for facile recovery of the xylose isomerase enzyme pellets and reuse of the cartridge loaded with yeast. Fermentation in the HFMF of pre-isomerized poplar hydrolysate, produced using ionic liquid pretreatment, resulted in ethanol yields equivalent to those of model sugar mixtures of xylose and glucose. By recirculating model sugar mixtures containing partially isomerized xylose through the packed bed and the HFMF connected in series, 39 g/l ethanol was produced within 10 h with 86.4% xylose utilization. The modular nature of this configuration has the potential for easy scale-up of the simultaneous isomerization and fermentation process without significant capital costs. Copyright © 2012 Elsevier Ltd. All rights reserved.
Respirometric screening of several types of manure and mixtures intended for composting.
Barrena, Raquel; Turet, Josep; Busquets, Anna; Farrés, Moisès; Font, Xavier; Sánchez, Antoni
2011-01-01
The viability of mixtures of manure and agricultural wastes as composting sources was systematically studied using physicochemical and biological characterization. The combination of different parameters such as the C:N ratio, free air space (FAS) and moisture content can help in the formulation of the mixtures. Nevertheless, the composting process may be challenging, particularly at industrial scales. The results of this study suggest that if the respirometric potential is known, it is possible to predict the behaviour of a full-scale composting process. Respiration indices can be used as a tool for determining the suitability of composting as applied to manure and complementary wastes. Accordingly, manure and agricultural wastes with a high potential for composting, and some proposed mixtures, have been characterized in terms of respiration activity. Specifically, the potential of samples to be composted has been determined by means of the oxygen uptake rate (OUR) and the dynamic respirometric index (DRI). During this study, four of these mixtures were composted at full scale in a system consisting of a confined pile with forced aeration. The biological activity was monitored by means of the oxygen uptake rate inside the material (OUR in situ). This new parameter represents the real activity of the process. The comparison of the potential respirometric activities measured at laboratory scale with the in situ respirometric activity observed at full scale may be a useful tool in the design and optimization of composting systems for manure and other organic agricultural wastes. Copyright © 2010 Elsevier Ltd. All rights reserved.
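Computing an OUR-style index from an oxygen time series reduces to estimating depletion slopes. The sketch below uses synthetic O2 readings and a sliding-window linear fit; it is illustrative only, and the calibration from %-per-minute slopes to mg O2 per kg of material per hour (which needs airflow and sample mass) is omitted.

```python
import numpy as np

# Hypothetical O2 concentration readings (% v/v) at 1-minute intervals
t_min = np.arange(0.0, 60.0, 1.0)
o2 = 20.9 - 0.05 * t_min + 0.002 * np.sin(t_min)  # assumed decay for illustration

# OUR proxy: magnitude of the O2 depletion slope over a sliding 10-minute window
window = 10
slopes = [np.polyfit(t_min[i:i + window], o2[i:i + window], 1)[0]
          for i in range(len(t_min) - window)]
our = [-s for s in slopes]   # positive uptake rates
dri = max(our)               # dynamic index: maximum observed uptake rate
```

Taking the maximum over windows mirrors the idea of the DRI as the peak respiration activity, while any single window gives a momentary OUR.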
A Two-length Scale Turbulence Model for Single-phase Multi-fluid Mixing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwarzkopf, J. D.; Livescu, D.; Baltzer, J. R.
2015-09-08
A two-length scale, second moment turbulence model (Reynolds averaged Navier-Stokes, RANS) is proposed to capture a wide variety of single-phase flows, spanning from incompressible flows with single fluids and mixtures of different density fluids (variable density flows) to flows over shock waves. The two-length scale model was developed to address an inconsistency present in single-length scale models, e.g. the inability to match both variable density homogeneous Rayleigh-Taylor turbulence and Rayleigh-Taylor induced turbulence, as well as the inability to match both homogeneous shear and free shear flows. The two-length scale model focuses on separating the decay and transport length scales, as the two physical processes are generally different in inhomogeneous turbulence. This allows reasonable comparisons with statistics and spreading rates over such a wide range of turbulent flows using a common set of model coefficients. The specific canonical flows considered for calibrating the model include homogeneous shear, single-phase incompressible shear driven turbulence, variable density homogeneous Rayleigh-Taylor turbulence, Rayleigh-Taylor induced turbulence, and shocked isotropic turbulence. The second moment model is shown to compare reasonably well with direct numerical simulations (DNS), experiments, and theory in most cases. The model was then applied to variable density shear layer and shock tube data and is shown to be in reasonable agreement with DNS and experiments. Additionally, the importance of using DNS to calibrate and assess RANS-type turbulence models is highlighted.
Virtual Patterson Experiment - A Way to Access the Rheology of Aggregates and Melanges
NASA Astrophysics Data System (ADS)
Delannoy, Thomas; Burov, Evgueni; Wolf, Sylvie
2014-05-01
Understanding the mechanisms of lithospheric deformation requires bridging the gap between human-scale laboratory experiments and the huge geological objects they represent. Those experiments are limited in spatial and time scale as well as in choice of materials (e.g., mono-phase minerals, exaggerated temperatures and strain rates), which means that the resulting constitutive laws may not fully represent real rocks at geological spatial and temporal scales. We use the thermo-mechanical numerical modelling approach as a tool to link both experiments and nature and hence better understand the rheology of the lithosphere, by enabling us to study the behavior of polymineralic aggregates and their impact on the localization of the deformation. We have adapted the large strain visco-elasto-plastic Flamar code to allow it to operate at all spatial and temporal scales, from sub-grain to geodynamic scale, and from seismic time scales to millions of years. Our first goal was to reproduce real rock mechanics experiments on deformation of mono and polymineralic aggregates in Patterson's load machine in order to deepen our understanding of the rheology of polymineralic rocks. In particular, we studied in detail the deformation of a 15x15 mm mica-quartz sample at 750 °C and 300 MPa. This mixture includes a molten phase and a solid phase in which shear bands develop as a result of interactions between ductile and brittle deformation and stress concentration at the boundaries between weak and strong phases. We used digitized x-ray scans of real samples as initial configuration for the numerical models so the model-predicted deformation and stress-strain behavior can match those observed in the laboratory experiment. Analyzing the numerical experiments providing the best match with the press experiments and making other complementary models by changing different parameters in the initial state (strength contrast between the phases, proportions, microstructure, etc.) 
provides a number of new elements of understanding of the mechanisms governing the localization of the deformation across the aggregates. We next used stress-strain curves derived from the numerical experiments to study in detail the evolution of the rheological behavior of each mineral phase as well as that of the mixtures in order to formulate constitutive relations for mélanges and polymineralic aggregates. The next step of our approach would be to link the constitutive laws obtained at small scale (laws that govern the rheology of a polymineralic aggregate, the effect of the presence of a molten phase, etc.) to the large-scale behavior of the Earth by implementing them in lithosphere-scale models.
Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?
Keck, Christian; Savin, Cristina; Lücke, Jörg
2012-01-01
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610
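The interplay described above can be sketched as a normalized mixture model with Poisson noise: softmax competition plays the role of lateral inhibition, a Hebbian-style update accumulates responsibility-weighted inputs, and renormalizing each unit's weight vector acts as synaptic scaling. Together these steps coincide with EM for the mixture, which is the sense in which the learning rules approximate maximum likelihood. This is an illustrative reconstruction on toy data, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(3)
D, K = 8, 2  # input dimensions, competing hidden units

# Two normalized ground-truth patterns; observations are Poisson draws from them
patterns = np.array([[4, 4, 4, 4, 0, 0, 0, 0],
                     [0, 0, 0, 0, 4, 4, 4, 4]], float)
X = rng.poisson(patterns[rng.integers(0, K, 500)])

# Synaptic scaling: keep each unit's total weight equal to the input norm C
C = patterns[0].sum()
W = rng.random((K, D))
W *= C / W.sum(axis=1, keepdims=True)

for _ in range(30):
    # Lateral inhibition as softmax competition over Poisson log-likelihoods
    logp = X @ np.log(W.T + 1e-9) - W.sum(axis=1)
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # Hebbian accumulation of responsibility-weighted inputs,
    # followed by synaptic scaling (together: the EM M-step)
    W = r.T @ X
    W *= C / W.sum(axis=1, keepdims=True)

halves = W[:, :4].sum(axis=1)  # how much mass each unit puts on the first half
```

With well-separated patterns each unit should specialize on one of them, so one row of W concentrates its mass on the first four inputs and the other on the last four.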
Using a Mixture IRT Model to Understand English Learner Performance on Large-Scale Assessments
ERIC Educational Resources Information Center
Shea, Christine A.
2013-01-01
The purpose of this study was to determine whether an eighth grade state-level math assessment contained items that function differentially (DIF) for English Learner students (EL) as compared to English Only students (EO) and if so, what factors might have caused DIF. To determine this, Differential Item Functioning (DIF) analysis was employed.…
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611
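The model comparison described above can be sketched on synthetic counts. The following illustrative example (not the paper's heart-disease data or covariate structure) fits a two-component Poisson mixture by EM and checks that its BIC beats the single-rate Poisson model:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)

# Hypothetical counts drawn from two latent risk groups (rates 1 and 6)
y = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 200)]).astype(float)
n = len(y)
lgy = np.array([lgamma(v + 1.0) for v in y])      # log(y!) terms

def log_pois(y, lam):
    return y * np.log(lam) - lam - lgy

# Ordinary Poisson model (single rate) and its BIC
ll1 = log_pois(y, y.mean()).sum()
bic1 = -2.0 * ll1 + 1 * np.log(n)

# Two-component Poisson mixture fitted by EM
lam = np.array([0.5, 5.0])
pi = np.array([0.5, 0.5])
for _ in range(200):
    logw = np.log(pi) + np.stack([log_pois(y, l) for l in lam], axis=1)
    m = logw.max(axis=1, keepdims=True)
    R = np.exp(logw - m)
    R /= R.sum(axis=1, keepdims=True)              # responsibilities
    lam = (R * y[:, None]).sum(axis=0) / R.sum(axis=0)
    pi = R.mean(axis=0)

logw = np.log(pi) + np.stack([log_pois(y, l) for l in lam], axis=1)
m = logw.max(axis=1, keepdims=True)
ll2 = (m[:, 0] + np.log(np.exp(logw - m).sum(axis=1))).sum()
bic2 = -2.0 * ll2 + 3 * np.log(n)                  # two rates + one weight

print(sorted(np.round(lam, 1)), bic2 < bic1)
```

Clustering into low- and high-rate components while comparing BIC values mirrors the selection logic the abstract describes.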
Confinement-Driven Phase Separation of Quantum Liquid Mixtures
NASA Astrophysics Data System (ADS)
Prisk, T. R.; Pantalei, C.; Kaiser, H.; Sokol, P. E.
2012-08-01
We report small-angle neutron scattering studies of liquid helium mixtures confined in Mobil Crystalline Material-41 (MCM-41), a porous silica glass with narrow cylindrical nanopores (d = 3.4 nm). MCM-41 is an ideal model adsorbent for fundamental studies of gas sorption in porous media because its monodisperse pores are arranged in a 2D triangular lattice. The small-angle scattering consists of a series of diffraction peaks whose intensities are determined by how the imbibed liquid fills the pores. Pure He4 adsorbed in the pores shows classic, layer-by-layer film growth as a function of pore filling, leaving the long-range symmetry of the system intact. In contrast, the adsorption of He3-He4 mixtures produces a structure incommensurate with the pore lattice. Neither capillary condensation nor preferential adsorption of one helium isotope to the pore walls can provide the symmetry-breaking mechanism. The scattering is consistent with the formation of randomly distributed liquid-liquid microdomains ~2.3 nm in size, providing evidence that confinement in a nanometer-scale capillary can drive local phase separation in quantum liquid mixtures.
NASA Astrophysics Data System (ADS)
Komarov, P.; Markina, A.; Ivanov, V.
2016-06-01
The problem of constructing a meso-scale model of composites based on polymers and aluminosilicate nanotubes, for predicting the filler's spatial distribution at early stages of material formation, is considered. As a test system for the polymer matrix, a mixture of 3,4-epoxycyclohexylmethyl-3,4-epoxycyclohexanecarboxylate as epoxy resin monomers and 4-methylhexahydrophthalic anhydride as curing agent has been used. It is shown that the structure of a mixture of uncured epoxy resin and nanotubes is (mainly) determined by the surface functionalization of the nanotubes. The results indicate that only nanotubes with maximum functionalization can preserve a uniform spatial distribution.
NASA Astrophysics Data System (ADS)
Mann, B. F.; Small, C.
2014-12-01
Weather-based index insurance projects are rapidly expanding across the developing world. Many of these projects use satellite-based observations to detect extreme weather events, which inform and trigger payouts to smallholder farmers. While most index insurance programs use precipitation measurements to determine payouts, the use of remotely sensed observations of vegetation is currently being explored. In order to use vegetation indices as a basis for payouts, it is necessary to establish a consistent relationship between the vegetation index and the health and abundance of agriculture on the ground. The accuracy with which remotely sensed vegetation indices can detect changes in agriculture depends on both the spatial scale of the agriculture and the spatial resolution of the sensor. This study analyzes the relationship between meter- and decameter-scale vegetation fraction estimates derived from linear spectral mixture models and more commonly used vegetation indices (NDVI, EVI) at hectometer spatial scales. In addition, the analysis incorporates land cover/land use field observations collected in Tigray, Ethiopia, in July 2013. It also tests the flexibility and utility of a standardized spectral mixture model in which land cover is represented as continuous fields of rock and soil substrate (S), vegetation (V), and dark surfaces (D; water, shadow). This analysis found strong linear relationships among vegetation metrics at 1.6-meter, 30-meter, and 250-meter resolutions across spectrally diverse subsets of Tigray, Ethiopia, and significantly correlated relationships using Spearman's rho statistic. The observed linear scaling has positive implications for future use of moderate-resolution vegetation indices in similar landscapes, especially for index insurance projects that are scaling up across the developing world using remotely sensed environmental information.
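A linear spectral mixture model of the substrate/vegetation/dark (S, V, D) kind described above can be sketched with non-negative least squares. The endmember spectra and pixel below are hypothetical illustrations, not AVIRIS or satellite data:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember reflectances (rows: bands) for Substrate, Vegetation, Dark
E = np.array([[0.30, 0.05, 0.02],   # blue
              [0.35, 0.08, 0.02],   # green
              [0.40, 0.06, 0.02],   # red
              [0.45, 0.50, 0.03]])  # near-infrared
true_f = np.array([0.25, 0.65, 0.10])   # S, V, D fractions in a mixed pixel
pixel = E @ true_f                       # noise-free linear mixture

f, residual = nnls(E, pixel)             # non-negative least-squares unmixing
f /= f.sum()                             # sum-to-one renormalization
print(np.round(f, 2))                    # recovers [0.25, 0.65, 0.10]
```

The second entry of `f` is the vegetation fraction estimate that such studies compare against NDVI or EVI at coarser resolutions.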
Modeling field-scale cosolvent flooding for DNAPL source zone remediation
NASA Astrophysics Data System (ADS)
Liang, Hailian; Falta, Ronald W.
2008-02-01
A three-dimensional, compositional, multiphase flow simulator was used to model a field-scale test of DNAPL removal by cosolvent flooding. The DNAPL at this site was tetrachloroethylene (PCE), and the flooding solution was an ethanol/water mixture, with up to 95% ethanol. The numerical model, UTCHEM, accounts for the equilibrium phase behavior and multiphase flow of a ternary ethanol-PCE-water system. Simulations of enhanced cosolvent flooding using a kinetic interphase mass transfer approach show that when a very high concentration of alcohol is injected, the DNAPL/water/alcohol mixture forms a single phase and local mass transfer limitations become irrelevant. The field simulations were carried out in three steps. In the first step, a simple uncalibrated layered model is developed. This model is capable of roughly reproducing the production well concentrations of alcohol, but not of PCE. A more refined (but uncalibrated) permeability model is able to accurately simulate the breakthrough concentrations of injected alcohol from the production wells, but is unable to accurately predict the PCE removal. The final model uses a calibration of the initial PCE distribution to get good matches with the PCE effluent curves from the extraction wells. It is evident that the effectiveness of DNAPL source zone remediation is mainly affected by characteristics of the spatial heterogeneity of porous media and the variable (and unknown) DNAPL distribution. The inherent uncertainty in the DNAPL distribution at real field sites means that some form of calibration of the initial contaminant distribution will almost always be required to match contaminant effluent breakthrough curves.
A New Self-Consistent Field Model of Polymer/Nanoparticle Mixture
NASA Astrophysics Data System (ADS)
Chen, Kang; Li, Hui-Shu; Zhang, Bo-Kai; Li, Jian; Tian, Wen-De
2016-02-01
The field-theoretical method is efficient in predicting assembled structures of polymeric systems. However, it is challenging to generalize this method to polymer/nanoparticle mixtures due to their multi-scale nature. Here, we develop a new field-based model which unifies the nanoparticle description with the polymer field within self-consistent field theory. Instead of being an “ensemble-averaged” continuous distribution, the particle density in the final morphology can represent individual particles located at preferred positions. The discreteness of the particle density allows our model to properly address the polymer-particle interface and the excluded-volume interaction. We use this model to study the simplest system: nanoparticles immersed in a dense homopolymer solution. The flexibility of tuning the interfacial details allows our model to capture rich phenomena such as bridging aggregation and depletion attraction. Insights are obtained into the enthalpic and/or entropic origin of the structural variation due to the competition between depletion and interfacial interaction. This approach is readily extendable to the study of more complex polymer-based nanocomposites or biology-related systems, such as dendrimer/drug encapsulation and membrane/particle assembly.
Nagai, Takashi; De Schamphelaere, Karel A C
2016-11-01
The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture, with a 3-d fluorescence microplate toxicity assay for each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
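The two reference models named above can be sketched directly. Concentration addition (CA) solves for the effect level at which the toxic units sum to one, while independent action (IA) multiplies response probabilities; the Hill-type concentration-response parameters below are hypothetical, not the paper's fitted values for N. pelliculosa:

```python
from scipy.optimize import brentq

# Hypothetical single-metal concentration-response: fractional growth inhibition
# E(c) = c^h / (EC50^h + c^h); all parameter values are illustrative only
def effect(c, ec50, h=2.0):
    return c**h / (ec50**h + c**h)

def inv_effect(E, ec50, h=2.0):
    """Concentration producing effect E (the ECx needed for toxic units)."""
    return ec50 * (E / (1.0 - E))**(1.0 / h)

ec50 = {"Zn": 10.0, "Cu": 2.0}    # hypothetical free-ion EC50s
c = {"Zn": 5.0, "Cu": 1.0}        # tested mixture concentrations

# Independent action: combine probabilities of response
E_ia = 1.0 - (1.0 - effect(c["Zn"], ec50["Zn"])) * (1.0 - effect(c["Cu"], ec50["Cu"]))

# Concentration addition: solve sum_i c_i / EC_i(E) = 1 for the effect E
E_ca = brentq(lambda E: sum(ci / inv_effect(E, ec50[m]) for m, ci in c.items()) - 1.0,
              1e-9, 1.0 - 1e-9)

print(round(E_ia, 3), round(E_ca, 3))   # → 0.36 0.5
```

With these toy numbers CA predicts a larger effect than IA, illustrating why CA tends to be the conservative first-tier choice the abstract recommends.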
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
NASA Astrophysics Data System (ADS)
Verginelli, Iason; Capobianco, Oriana; Hartog, Niels; Baciocchi, Renato
2017-02-01
In this work we introduce a 1-D analytical solution that can be used for the design of horizontal permeable reactive barriers (HPRBs) as a vapor mitigation system at sites contaminated by chlorinated solvents. The developed model incorporates transient diffusion-dominated transport with a second-order reaction rate constant. Furthermore, the model accounts for the HPRB lifetime as a function of the oxidant consumption by reaction with upward vapors and its progressive dissolution and leaching by infiltrating water. Simulation results from this new model closely replicate previous lab-scale tests carried out on trichloroethylene (TCE) using a HPRB containing a mixture of potassium permanganate, water, and sand. In view of field applications, design criteria, in terms of the minimum HPRB thickness required to attenuate vapors to acceptable risk-based levels and the expected HPRB lifetime, are determined from site-specific conditions such as vapor source concentration, water infiltration rate, and HPRB mixture. The results clearly show the field-scale feasibility of this alternative vapor mitigation system for the treatment of chlorinated solvents. Depending on the oxidation kinetics of the target contaminant, a 1 m thick HPRB can attenuate vapor concentrations by orders of magnitude for up to 20 years, even for vapor source concentrations up to 10 g/m3. A demonstrative application for representative contaminated-site conditions also shows the feasibility of this mitigation system from an economic point of view, with capital costs potentially somewhat lower than those of other remediation options, such as soil vapor extraction systems. Overall, based on the experimental and theoretical evaluation thus far, field-scale tests are warranted to verify the potential and cost-effectiveness of HPRBs for vapor mitigation control under various conditions of application.
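As a heavily simplified steady-state sketch of the design logic (the paper's model is transient and tracks oxidant consumption, which this ignores), diffusion through a reactive layer with pseudo-first-order decay attenuates vapors roughly as exp(-L*sqrt(k_eff/D)); every parameter value below is assumed, not taken from the paper:

```python
import math

# Illustrative parameters (assumed)
D_eff = 1.0e-6     # effective vapor diffusivity in the barrier, m^2/s
k2    = 2.0e-3     # second-order rate constant, m^3/(mol*s)
C_ox  = 50.0       # available oxidant concentration, mol/m^3
k_eff = k2 * C_ox  # pseudo-first-order rate constant, 1/s

def attenuation(L):
    """Steady diffusion-reaction attenuation across a layer of thickness L (m)."""
    return math.exp(-L * math.sqrt(k_eff / D_eff))

def thickness_for(target):
    """Minimum thickness achieving a target attenuation factor (e.g. 1e-3)."""
    return -math.log(target) / math.sqrt(k_eff / D_eff)

L = thickness_for(1e-3)
print(round(L, 3))   # required thickness in meters for 1000x attenuation
```

The design question in the abstract, minimum thickness for a risk-based attenuation, reduces in this simplified picture to inverting the exponential.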
Nano-particle dynamics during capillary suction.
Kuijpers, C J; Huinink, H P; Tomozeiu, N; Erich, S J F; Adan, O C G
2018-07-01
Due to the increased use of nanoparticles in everyday applications, there is a need for theoretical descriptions of particle transport and attachment in porous media. It should be possible to develop a one-dimensional model to describe nanoparticle retention during capillary transport of liquid mixtures in porous media. Water-glycerol-nanoparticle mixtures were prepared, and the penetration process in porous Al2O3 samples of varying pore size was measured using NMR imaging. The liquid and particle fronts can be measured by utilizing T2 relaxation effects from the paramagnetic nanoparticles. Good agreement between the experimental data and the particle retention predicted by the developed theory is found. Using the model, the binding constant for Fe2O3 nanoparticles on sintered Al2O3 samples and the maximum surface coverage are determined. Furthermore, we show that the penetrating liquid front follows square-root-of-time behavior as predicted by Darcy's law. However, scaling with the liquid parameters is no longer sufficient to map different liquid mixtures onto a single master curve. The Darcy model should be extended to address the two formed domains (with and without particles) and their interaction, to give an accurate prediction of the penetrating liquid front. Copyright © 2018 Elsevier Inc. All rights reserved.
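The square-root-of-time front noted above is the classical Lucas-Washburn/Darcy result; a minimal sketch with assumed liquid and pore parameters (not the paper's measured values):

```python
import math

# Lucas-Washburn capillary penetration (illustrative parameters)
r     = 50e-9      # effective pore radius, m
gamma = 0.060      # surface tension of a water-glycerol mixture, N/m
theta = 0.0        # contact angle, rad (perfect wetting assumed)
mu    = 0.005      # dynamic viscosity, Pa*s

def front(t):
    """Penetration depth l(t) = sqrt(r*gamma*cos(theta)*t / (2*mu)), in m."""
    return math.sqrt(r * gamma * math.cos(theta) * t / (2.0 * mu))

# Square-root-of-time behavior: quadrupling t doubles the depth
l1, l4 = front(10.0), front(40.0)
print(round(l4 / l1, 6))   # → 2.0
```

Mapping different mixtures onto one master curve amounts to rescaling time by the prefactor r*gamma*cos(theta)/(2*mu), which is exactly the scaling the abstract reports as insufficient once particle retention creates two domains.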
A Diffuse Interface Model with Immiscibility Preservation
Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos
2013-01-01
A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent to a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion, leading to qualitatively incorrect results. PMID:24058207
Modeling and simulation of large scale stirred tank
NASA Astrophysics Data System (ADS)
Neuville, John R.
The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process through the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington; West Valley in New York; and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top and bottom regions of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion of internal vessel components at higher impeller speeds.
The aim for this mixing process is to ensure that agitation of the vessel is adequate to produce a homogeneous mixture, but not so high that it produces excessive erosion of internal components. The main findings reported by this study were: (1) Careful consideration of the fluid yield-stress characteristic is required to make predictions of fluid flow behavior. Laminar models can predict flow patterns and stagnant regions in the tank until full movement of the flow field occurs. Power curves and flow patterns were developed for the full-scale mixing model to show the differences in expected performance of the mixing process for a broad range of fluids that exhibit Herschel--Bulkley and Bingham-plastic flow behavior. (2) The impeller power demand is independent of the flow model selection for turbulent flow fields in the region of the impeller. The laminar models slightly over-predicted the agitator impeller power demand produced by turbulent models. (3) The CFD results show that the power number produced by the mixing system is independent of size. The 40 gallon model produced the same power number results as the 9300 gallon model for the same process conditions. (4) CFD results show that the scale-up of fluid motion in a 40 gallon tank should compare with fluid motion at full scale (9300 gallons) by maintaining constant impeller tip speed.
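Findings (3) and (4) can be sketched with the standard turbulent power-draw relation P = Np * rho * N^3 * D^5 together with constant-tip-speed scale-up; the numbers below are illustrative placeholders, not DWPF values:

```python
import math

# Illustrative agitator scale-up at constant impeller tip speed (assumed values)
rho = 1200.0          # slurry density, kg/m^3
Np  = 5.0             # power number (size-independent in the turbulent regime)

def power(N, D):
    """Agitator power draw P = Np * rho * N^3 * D^5 (turbulent regime), in W."""
    return Np * rho * N**3 * D**5

# Pilot vs full scale: keep the tip speed pi*N*D constant
D1, N1 = 0.3, 5.0             # pilot impeller: 0.3 m diameter at 5 rev/s
D2 = 1.8                      # full-scale impeller diameter, m
N2 = N1 * D1 / D2             # constant tip speed => speed scales as 1/D
tip1, tip2 = math.pi * N1 * D1, math.pi * N2 * D2
P1, P2 = power(N1, D1), power(N2, D2)

# Under this rule power grows as D^2 while volume grows as D^3,
# so power per unit volume falls at the larger scale
print(round(P2 / P1, 1))   # → 36.0 for a 6x diameter ratio
```

A size-independent power number, as in finding (3), is exactly what makes this single correlation usable at both the 40 gallon and 9300 gallon scales.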
MixGF: spectral probabilities for mixture spectra from more than one peptide.
Wang, Jian; Bourne, Philip E; Bandeira, Nuno
2014-12-01
In large-scale proteomic experiments, multiple peptide precursors are often cofragmented simultaneously in the same mixture tandem mass (MS/MS) spectrum. These spectra tend to elude current computational tools because of the ubiquitous assumption that each spectrum is generated from only one peptide. Therefore, tools that consider multiple peptide matches to each MS/MS spectrum can potentially improve the relatively low spectrum identification rate often observed in proteomics experiments. More importantly, data independent acquisition protocols promoting the cofragmentation of multiple precursors are emerging as alternative methods that can greatly improve the throughput of peptide identifications but their success also depends on the availability of algorithms to identify multiple peptides from each MS/MS spectrum. Here we address a fundamental question in the identification of mixture MS/MS spectra: determining the statistical significance of multiple peptides matched to a given MS/MS spectrum. We propose the MixGF generating function model to rigorously compute the statistical significance of peptide identifications for mixture spectra and show that this approach improves the sensitivity of current mixture spectra database search tools by ≈30-390%. Analysis of multiple data sets with MixGF reveals that in complex biological samples the number of identified mixture spectra can be as high as 20% of all the identified spectra and the number of unique peptides identified only in mixture spectra can be up to 35.4% of those identified in single-peptide spectra. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc. PMID:25225354
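The generating-function idea can be sketched on a toy alphabet: a dynamic program accumulates, over all peptide sequences of a given integer mass, the probability of reaching each score. The masses, peaks, and +1-per-matched-prefix scoring below are drastically simplified stand-ins, not MixGF's actual scoring model:

```python
import numpy as np

# Toy generating-function computation of a spectral probability: the
# probability that a random peptide of a given total (integer) mass
# scores at least S against a spectrum.
masses = {"G": 57, "A": 71, "S": 87}   # integer residue masses
prior = 1.0 / len(masses)              # uniform residue prior
peaks = {57, 128, 215}                 # observed prefix-mass peaks
M, Smax = 215, 10                      # precursor mass and score cap

# T[m, s] = probability that a random peptide of mass m attains score s
T = np.zeros((M + 1, Smax + 1))
T[0, 0] = 1.0
for m in range(1, M + 1):
    hit = 1 if m in peaks else 0       # score +1 when a prefix mass matches
    for mass in masses.values():
        if m - mass < 0:
            continue
        for s in range(hit, Smax + 1):
            T[m, s] += prior * T[m - mass, s - hit]

def spectral_probability(score):
    """Probability mass of peptides of mass M scoring >= score."""
    return T[M, score:].sum()

print(spectral_probability(3))
```

Extending such a table to joint scores of two peptides against one spectrum is, in spirit, the step MixGF takes for mixture spectra.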
Rodea-Palomares, Ismael; Gonzalez-Pleiter, Miguel; Gonzalo, Soledad; Rosal, Roberto; Leganes, Francisco; Sabater, Sergi; Casellas, Maria; Muñoz-Carpena, Rafael; Fernández-Piñas, Francisca
2016-01-01
The ecological impacts of emerging pollutants such as pharmaceuticals are not well understood. The lack of experimental approaches for the identification of pollutant effects in realistic settings (that is, low doses, complex mixtures, and variable environmental conditions) supports the widespread perception that these effects are often unpredictable. To address this, we developed a novel screening method (GSA-QHTS) that couples the computational power of global sensitivity analysis (GSA) with the experimental efficiency of quantitative high-throughput screening (QHTS). We present a case study where GSA-QHTS allowed for the identification of the main pharmaceutical pollutants (and their interactions), driving biological effects of low-dose complex mixtures at the microbial population level. The QHTS experiments involved the integrated analysis of nearly 2700 observations from an array of 180 unique low-dose mixtures, representing the most complex and data-rich experimental mixture effect assessment of main pharmaceutical pollutants to date. An ecological scaling-up experiment confirmed that this subset of pollutants also affects typical freshwater microbial community assemblages. Contrary to our expectations and challenging established scientific opinion, the bioactivity of the mixtures was not predicted by the null mixture models, and the main drivers that were identified by GSA-QHTS were overlooked by the current effect assessment scheme. Our results suggest that current chemical effect assessment methods overlook a substantial number of ecologically dangerous chemical pollutants and introduce a new operational framework for their systematic identification. PMID:27617294
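Global sensitivity analysis for screening mixture drivers can be sketched with Morris elementary effects on a toy dose-response surface containing an interaction term; the model and factor count are invented for illustration, and the study's GSA-QHTS design is far richer:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy mixture-effect model: bioactivity driven by doses x0, x1 and their
# interaction, while x2 is inert (coefficients are arbitrary illustrations)
def bioactivity(x):
    return 2.0 * x[0] + 1.0 * x[1] + 3.0 * x[0] * x[1] + 0.0 * x[2]

def morris_mu_star(f, d=3, r=200, delta=0.2):
    """mu*: mean absolute one-at-a-time elementary effect per factor."""
    effects = np.zeros((r, d))
    for i in range(r):
        x = rng.random(d) * (1.0 - delta)       # start inside [0, 1-delta]^d
        for j in rng.permutation(d):            # random trajectory order
            x2 = x.copy()
            x2[j] += delta
            effects[i, j] = (f(x2) - f(x)) / delta
            x = x2
    return np.abs(effects).mean(axis=0)

mu_star = morris_mu_star(bioactivity)
print(np.argsort(mu_star)[::-1])                # driver ranking, strongest first
```

Ranking factors by mu* is the screening step; the inert third factor lands last, mirroring how GSA identifies which pollutants (and interactions) actually drive the mixture's bioactivity.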
Kinetics of anaerobic treatment of landfill leachates combined with urban wastewaters.
Fueyo, Gema; Gutiérrez, Antonio; Berrueta, José
2003-04-01
The anaerobic degradation of landfill leachates mixed with domestic wastewater has been studied in a pilot-scale Upflow Anaerobic Sludge Blanket (UASB) reactor. Previous laboratory-scale work had shown that a fraction (5%) of the refractory organic matter could be additionally degraded when these two substrates were treated in conjunction, but this synergistic effect on Chemical Oxygen Demand (COD) removal was not reproduced. However, the mass loading rate at which maximum degradation was obtained was higher for the mixtures (0.5 kg COD/(kg SSV·d)) than for the separate components (0.18 and 0.19), allowing an increase in the treatment capacity for the leachates. The methane productivity (304 L/kg COD) was close to the theoretical maximum and independent of the proportion of the mixture components. The experimental data were fitted to a modification of Haldane's kinetic model, in which the parameters depend on the hydraulic residence time and the biomass concentration.
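The Haldane (substrate-inhibition) form referred to above makes an optimum loading rate natural: the rate mu(S) = mu_max*S/(Ks + S + S^2/Ki) peaks at S* = sqrt(Ks*Ki) and declines at higher loading. A sketch with assumed parameter values (not the paper's fitted constants):

```python
import math

# Haldane substrate-inhibition kinetics (illustrative parameter values)
mu_max = 0.4   # maximum specific rate, 1/d
Ks     = 0.2   # half-saturation constant, kg COD/m^3
Ki     = 2.0   # inhibition constant, kg COD/m^3

def mu(S):
    """Specific degradation rate with substrate inhibition."""
    return mu_max * S / (Ks + S + S * S / Ki)

# The rate is maximal at S* = sqrt(Ks * Ki) and falls off on either side,
# consistent with a well-defined optimum loading rate for the reactor
S_opt = math.sqrt(Ks * Ki)
print(round(S_opt, 3), mu(S_opt) > mu(5.0 * S_opt))
```

The modification described in the abstract would make mu_max, Ks, and Ki functions of hydraulic residence time and biomass concentration rather than constants.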
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction, and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted in several papers presenting preliminary results, the mixture modeling approach has so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the fragment mixture models are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
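The partition-then-decompose idea can be sketched as follows; the synthetic signal, the gap-based splitting rule, and the use of scikit-learn's GaussianMixture are all simplified stand-ins for the paper's algorithm:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic "spectrum" samples: four Gaussian peaks in two well-separated m/z
# regions (arbitrary illustrative values, not real proteomic data)
mz = np.concatenate([
    rng.normal(1000, 2, 400), rng.normal(1010, 2, 400),   # fragment 1
    rng.normal(2000, 2, 400), rng.normal(2012, 2, 400),   # fragment 2
])

# Step 1: partition the signal wherever consecutive sorted values leave a
# large gap (a crude stand-in for the paper's automated partitioning)
order = np.sort(mz)
split_idx = np.where(np.diff(order) > 50.0)[0] + 1
fragments = np.split(order, split_idx)

# Step 2: decompose each fragment into a small Gaussian mixture independently
means = []
for frag in fragments:
    gm = GaussianMixture(n_components=2, random_state=0).fit(frag[:, None])
    means.extend(gm.means_.ravel())

# Step 3: aggregate fragment parameters into a model of the whole spectrum
means = np.sort(means)
print(len(fragments), np.round(means, 0))
```

Fitting small mixtures per fragment keeps each EM problem cheap, which is what makes whole-spectrum modeling tractable in the paper's scheme.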
AVIRIS Land-Surface Mapping in Support of the Boreal Ecosystem-Atmosphere Study (BOREAS)
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Gamon, John; Keightley, Keir; Prentiss, Dylan; Reith, Ernest; Green, Robert
2001-01-01
A key scientific objective of the original Boreal Ecosystem-Atmospheric Study (BOREAS) field campaign (1993-1996) was to obtain the baseline data required for modeling and predicting fluxes of energy, mass, and trace gases in the boreal forest biome. These data sets are necessary to determine the sensitivity of the boreal forest biome to potential climatic changes and potential biophysical feedbacks on climate. A considerable volume of remotely sensed and supporting field data were acquired by numerous researchers to meet this objective. By design, remote sensing and modeling were considered critical components for scaling efforts, extending point measurements from flux towers and field sites over larger spatial and longer temporal scales. A major focus of the BOREAS follow-on program is concerned with integrating the diverse remotely sensed and ground-based data sets to address specific questions such as carbon dynamics at local to regional scales. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has the potential of contributing to BOREAS through: (1) accurately retrieved apparent surface reflectance; (2) improved land-cover classification; and (3) direct assessment of biochemical/biophysical information such as canopy liquid water and chlorophyll concentration through pigment fits. In this paper, we present initial products for major flux tower sites including: (1) surface reflectance of dominant cover types; (2) a land-cover classification developed using spectral mixture analysis (SMA) and Multiple Endmember Spectral Mixture Analysis (MESMA); and (3) liquid water maps. Our goal is to compare these land-cover maps to existing maps and to incorporate AVIRIS image products into models of photosynthetic flux.
A computational investigation of the thermodynamics and structure in colloid and polymer mixtures
NASA Astrophysics Data System (ADS)
Mahynski, Nathan Alexander
In this dissertation I use computational tools to study the structure and thermodynamics of colloid-polymer mixtures. I show that fluid-fluid phase separation in mixtures of colloids and linear polymers cannot be universally reduced using polymer-based scaling principles since these assume the binodals exist in a single scaling regime, whereas accurate simulations clearly demonstrate otherwise. I show that rethinking these solutions in terms of multiple length scales is necessary to properly explain the thermodynamic stability and structure of these fluid phases, and produce phase diagrams in nearly quantitative agreement with experimental results. I then extend this work to encompass more geometrically complex "star" polymers revealing how the phase behavior for many of these binary mixtures may be mapped onto that of mixtures containing only linear polymers. I further consider the depletion-driven crystallization of athermal colloidal hard spheres induced by polymers. I demonstrate how the partitioning of a finite amount of polymer into the colloidal crystal phase implies that the polymer's architecture can be tailored to interact with the internal void structure of different crystal polymorphs uniquely, thus providing a direct route to thermodynamically stabilizing one arbitrarily chosen structure over another, e.g., the hexagonal close-packed crystal over the face-centered cubic. I then begin to generalize this result by considering the consequences of thermal interactions and complex polymer architectures. These principles lay the groundwork for intelligently engineering co-solute additives in crystallizing colloidal suspensions that can be used to thermodynamically isolate single crystal morphologies. 
Finally, I examine the competition between self-assembly and phase separation in polymer-grafted nanoparticle systems by comparing and contrasting the validity of two different models for grafted nanoparticles: "nanoparticle amphiphiles" versus "patchy particles." The latter suggests these systems have some utility in forming novel "equilibrium gel" phases, however, I find that considering grafted nanoparticles as amphiphiles provides a qualitatively accurate description of their thermodynamics revealing either first-order phase separation into two isotropic phases or continuous self-assembly. I find no signs of empty liquid formation, suggesting that these nanoparticles do not provide a route to such phases.
NASA Astrophysics Data System (ADS)
Magyar, Rudolph
2013-06-01
We report a computational and validation study of equation of state (EOS) properties of liquid / dense plasma mixtures of xenon and ethane to explore and to illustrate the physics of molecular-scale mixing of light elements with heavy elements. Accurate EOS models are crucial to achieve high-fidelity hydrodynamics simulations of many high-energy-density phenomena such as inertial confinement fusion and strong shock waves. While the EOS is often tabulated for separate species, the equation of state for arbitrary mixtures is generally not available, requiring properties of the mixture to be approximated by combining physical properties of the pure systems. The main goal of this study is to assess how accurate this approximation is under shock conditions. Density functional theory molecular dynamics (DFT-MD) at elevated temperature and pressure is used to assess the thermodynamics of the xenon-ethane mixture. The simulations are unbiased as to elemental species and therefore provide comparable accuracy when describing total energies, pressures, and other physical properties of mixtures as they do for pure systems. In addition, we have performed shock compression experiments using the Sandia Z-accelerator on pure xenon, ethane, and various mixture ratios thereof. The Hugoniot results are compared to the DFT-MD results and the predictions of different rules for combining EOS tables. The DFT-based simulation results compare well with the experimental points, and it is found that a mixing rule based on pressure equilibration performs reliably well for the mixtures considered. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Meng, Jian; Zheng, Liangyuan
2007-09-01
Self-microemulsifying drug delivery systems (SMEDDS) are useful for improving the bioavailability of poorly water-soluble drugs by increasing their apparent solubility through solubilization. However, very few studies to date have systematically examined the level of drug apparent solubility in o/w microemulsions formed by self-microemulsifying. In this study, a mixture experimental design was used to model the influence of the compositions on simvastatin apparent solubility quantitatively through an empirical model. The reduced cubic polynomial equation successfully modeled the evolution of simvastatin apparent solubility. The results were presented as a response surface showing possible simvastatin apparent solubilities ranging from 0.0024 to 29.0 mg/mL. Moreover, this technique showed that simvastatin apparent solubility was mainly influenced by microemulsion concentration, and suggested that the drug would precipitate in the gastrointestinal tract upon dilution by gastrointestinal fluids. Furthermore, the model can help design formulations that maximize drug apparent solubility and avoid precipitation of the drug.
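A reduced (special) cubic Scheffé polynomial of the kind fitted in such mixture designs can be sketched as follows. The three components, simplex-centroid design points, and solubility responses below are hypothetical, not the paper's data:

```python
import numpy as np

def special_cubic_design(X):
    """Design matrix for a three-component special (reduced) cubic Scheffe model."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3, x1 * x2 * x3])

# Hypothetical simplex-centroid design (component fractions sum to 1) and responses
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
              [1/3, 1/3, 1/3]], dtype=float)
y = np.array([2.0, 5.0, 1.0, 8.0, 3.0, 6.0, 7.0])  # assumed solubility responses, mg/mL

beta, *_ = np.linalg.lstsq(special_cubic_design(X), y, rcond=None)

def predict(point):
    """Predicted response at any mixture composition on the simplex."""
    return float(special_cubic_design(np.atleast_2d(point)) @ beta)
```

The simplex-centroid design has exactly as many runs as the special cubic model has terms, so the fit interpolates the design points; the fitted surface can then be searched for the composition maximizing the response.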
NASA Technical Reports Server (NTRS)
Sabol, Donald E., Jr.; Adams, John B.; Smith, Milton O.
1992-01-01
The conditions that affect the spectral detection of target materials at the subpixel scale are examined. Two levels of spectral mixture analysis for determining threshold detection limits of target materials in a spectral mixture are presented, the cases where the target is detected as: (1) a component of a spectral mixture (continuum threshold analysis) and (2) residuals (residual threshold analysis). The results of these two analyses are compared under various measurement conditions. The examples illustrate the general approach that can be used for evaluating the spectral detectability of terrestrial and planetary targets at the subpixel scale.
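Linear spectral mixture analysis of the kind underlying these threshold analyses can be sketched as least-squares unmixing. The endmember spectra and fractions below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical endmember reflectance spectra (4 bands x 2 endmembers: soil, vegetation)
E = np.array([[0.10, 0.40],
              [0.20, 0.45],
              [0.55, 0.30],
              [0.60, 0.25]])
pixel = 0.3 * E[:, 0] + 0.7 * E[:, 1]   # synthetic mixed pixel: 30% soil, 70% vegetation

# Least-squares unmixing; the residual RMS is the quantity examined in residual analysis
fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
rmse = np.sqrt(np.mean((pixel - E @ fractions) ** 2))
```

In the framework of the abstract, continuum threshold analysis asks how large a target's fraction must be to be resolvable from the other endmember fractions, while residual threshold analysis asks when the target's spectral signature stands out above the residual noise when it is omitted from the endmember set.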
Experimental and computational fluid dynamics studies of mixing of complex oral health products
NASA Astrophysics Data System (ADS)
Cortada-Garcia, Marti; Migliozzi, Simona; Weheliye, Weheliye Hashi; Dore, Valentina; Mazzei, Luca; Angeli, Panagiota; ThAMes Multiphase Team
2017-11-01
Highly viscous non-Newtonian fluids are largely used in the manufacturing of specialized oral care products. Mixing often takes place in mechanically stirred vessels where the flow fields and mixing times depend on the geometric configuration and the fluid physical properties. In this research, we study the mixing performance of complex non-Newtonian fluids using Computational Fluid Dynamics models and validate them against experimental laser-based optical techniques. To this aim, we developed a scaled-down version of an industrial mixer. As test fluids, we used mixtures of glycerol and a Carbomer gel. The viscosities of the mixtures against shear rate at different temperatures and phase ratios were measured and found to be well described by the Carreau model. The numerical results were compared against experimental measurements of velocity fields from Particle Image Velocimetry (PIV) and concentration profiles from Planar Laser Induced Fluorescence (PLIF).
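The Carreau model mentioned above relates apparent viscosity to shear rate. A minimal implementation follows, with illustrative parameters rather than the measured values for the glycerol/Carbomer mixtures:

```python
import numpy as np

def carreau_viscosity(gamma_dot, eta0, eta_inf, lam, n):
    """Carreau model: eta(gd) = eta_inf + (eta0 - eta_inf) * (1 + (lam*gd)^2)^((n-1)/2)."""
    gd = np.asarray(gamma_dot, dtype=float)
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gd) ** 2) ** ((n - 1.0) / 2.0)

# Assumed parameters for a shear-thinning fluid: zero-shear viscosity 10 Pa s,
# infinite-shear viscosity 0.1 Pa s, relaxation time 1 s, power index 0.5
eta_low = carreau_viscosity(0.0, eta0=10.0, eta_inf=0.1, lam=1.0, n=0.5)
eta_high = carreau_viscosity(100.0, eta0=10.0, eta_inf=0.1, lam=1.0, n=0.5)
```

For n < 1 the model is shear-thinning: the viscosity plateaus at eta0 at low shear rates and decays toward eta_inf as the shear rate grows, which is the behaviour fitted to the rheometry data in the study.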
Modular analysis of the probabilistic genetic interaction network.
Hou, Lin; Wang, Lin; Qian, Minping; Li, Dong; Tang, Chao; Zhu, Yunping; Deng, Minghua; Li, Fangting
2011-03-15
Epistatic Miniarray Profiles (EMAP) has enabled the mapping of large-scale genetic interaction networks; however, the quantitative information gained from EMAP cannot be fully exploited since the data are usually interpreted as a discrete network based on an arbitrary hard threshold. To address such limitations, we adopted a mixture modeling procedure to construct a probabilistic genetic interaction network and then implemented a Bayesian approach to identify densely interacting modules in the probabilistic network. Mixture modeling has been demonstrated as an effective soft-threshold technique of EMAP measures. The Bayesian approach was applied to an EMAP dataset studying the early secretory pathway in Saccharomyces cerevisiae. Twenty-seven modules were identified, and 14 of those were enriched by gold standard functional gene sets. We also conducted a detailed comparison with state-of-the-art algorithms, hierarchical cluster and Markov clustering. The experimental results show that the Bayesian approach outperforms others in efficiently recovering biologically significant modules.
Partial molar enthalpies and reaction enthalpies from equilibrium molecular dynamics simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnell, Sondre K.; Department of Chemical and Biomolecular Engineering, University of California, Berkeley, California 94720; Department of Chemistry, Faculty of Natural Science and Technology, Norwegian University of Science and Technology, 4791 Trondheim
2014-10-14
We present a new molecular simulation technique for determining partial molar enthalpies in mixtures of gases and liquids from single simulations, without relying on particle insertions, deletions, or identity changes. The method can also be applied to systems with chemical reactions. We demonstrate our method for binary mixtures of Weeks-Chandler-Andersen particles by comparing with conventional simulation techniques, as well as for a simple model that mimics a chemical reaction. The method considers small subsystems inside a large reservoir (i.e., the simulation box), and uses the construction of Hill to compute properties in the thermodynamic limit from small-scale fluctuations. Results obtained with the new method are in excellent agreement with those from previous methods. Especially for modeling chemical reactions, our method can be a valuable tool for determining reaction enthalpies directly from a single MD simulation.
Mixture Model and MDSDCA for Textual Data
NASA Astrophysics Data System (ADS)
Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît
E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue, but surprisingly few have tackled the problem of the files attached to e-mails, which in many cases contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the MDSDCA algorithm, based on the Difference of Convex functions, optimizes the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments are carried out on both simulated and real textual data.
Biotic and abiotic variables influencing plant litter breakdown in streams: a global study.
Boyero, Luz; Pearson, Richard G; Hui, Cang; Gessner, Mark O; Pérez, Javier; Alexandrou, Markos A; Graça, Manuel A S; Cardinale, Bradley J; Albariño, Ricardo J; Arunachalam, Muthukumarasamy; Barmuta, Leon A; Boulton, Andrew J; Bruder, Andreas; Callisto, Marcos; Chauvet, Eric; Death, Russell G; Dudgeon, David; Encalada, Andrea C; Ferreira, Verónica; Figueroa, Ricardo; Flecker, Alexander S; Gonçalves, José F; Helson, Julie; Iwata, Tomoya; Jinggut, Tajang; Mathooko, Jude; Mathuriau, Catherine; M'Erimba, Charles; Moretti, Marcelo S; Pringle, Catherine M; Ramírez, Alonso; Ratnarajah, Lavenia; Rincon, José; Yule, Catherine M
2016-04-27
Plant litter breakdown is a key ecological process in terrestrial and freshwater ecosystems. Streams and rivers, in particular, contribute substantially to global carbon fluxes. However, there is little information available on the relative roles of different drivers of plant litter breakdown in fresh waters, particularly at large scales. We present a global-scale study of litter breakdown in streams to compare the roles of biotic, climatic and other environmental factors on breakdown rates. We conducted an experiment in 24 streams encompassing latitudes from 47.8° N to 42.8° S, using litter mixtures of local species differing in quality and phylogenetic diversity (PD), and alder (Alnus glutinosa) to control for variation in litter traits. Our models revealed that breakdown of alder was driven by climate, with some influence of pH, whereas variation in breakdown of litter mixtures was explained mainly by litter quality and PD. Effects of litter quality and PD and stream pH were more positive at higher temperatures, indicating that different mechanisms may operate at different latitudes. These results reflect global variability caused by multiple factors, but unexplained variance points to the need for expanded global-scale comparisons. © 2016 The Author(s).
A review of toxicity and mechanisms of individual and mixtures of heavy metals in the environment.
Wu, Xiangyang; Cobbina, Samuel J; Mao, Guanghua; Xu, Hai; Zhang, Zhen; Yang, Liuqing
2016-05-01
The rationale for the study was to review the literature on the toxicity and corresponding mechanisms associated with lead (Pb), mercury (Hg), cadmium (Cd), and arsenic (As), individually and as mixtures, in the environment. Heavy metals are ubiquitous and generally persist in the environment, enabling them to biomagnify in the food chain. Living systems most often interact with a cocktail of heavy metals in the environment. Heavy metal exposure to biological systems may lead to oxidative stress, which may induce DNA damage, protein modification, lipid peroxidation, and other effects. In this review, the major mechanism associated with toxicities of individual metals was the generation of reactive oxygen species (ROS). Additionally, toxicities were expressed through depletion of glutathione and bonding to sulfhydryl groups of proteins. Interestingly, a metal like Pb becomes toxic to organisms through the depletion of antioxidants, while Cd indirectly generates ROS by its ability to replace iron and copper. ROS generated through exposure to arsenic were associated with many modes of action, and heavy metal mixtures were found to have varied effects on organisms. Many models based on concentration addition (CA) and independent action (IA) have been introduced to help predict toxicities and mechanisms associated with metal mixtures. An integrated model which combines CA and IA was further proposed for evaluating toxicities of non-interactive mixtures. In cases where there are molecular interactions, the toxicogenomic approach was used to predict toxicities. High-throughput toxicogenomics combines studies in genetics, genome-scale expression, cell and tissue expression, metabolite profiling, and bioinformatics.
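The CA and IA reference models recur throughout this literature and have simple closed forms. A minimal sketch follows; the mixture fractions, EC50s, and individual effects are illustrative numbers, not values from any cited study:

```python
import numpy as np

def ca_ec50(fractions, ec50s):
    """Concentration addition (CA): EC50 of a mixture with fixed component fractions."""
    fractions, ec50s = np.asarray(fractions), np.asarray(ec50s)
    return 1.0 / np.sum(fractions / ec50s)

def ia_effect(effects):
    """Independent action (IA): combined fractional effect from individual effects."""
    return 1.0 - np.prod(1.0 - np.asarray(effects))

# Illustrative binary metal mixture: equal fractions, hypothetical single-metal EC50s
mix_ec50 = ca_ec50([0.5, 0.5], [12.0, 4.0])
combined = ia_effect([0.2, 0.5])
```

Observed mixture toxicity above these reference predictions is read as synergism and below them as antagonism, which is how the integrated and interaction analyses discussed above are framed.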
Fourier transform infrared spectroscopy for Kona coffee authentication.
Wang, Jun; Jun, Soojin; Bittenbender, H C; Gautz, Loren; Li, Qing X
2009-06-01
Kona coffee, the variety "Kona typica" grown in the north and south districts of Kona on the Big Island of Hawaii, U.S.A., carries a unique stamp of the region. The excellent quality of Kona coffee makes it among the best coffee products in the world. Fourier transform infrared (FTIR) spectroscopy integrated with an attenuated total reflectance (ATR) accessory and multivariate analysis was used for qualitative and quantitative analysis of ground and brewed Kona coffee and blends made with Kona coffee. The calibration set of Kona coffee consisted of 10 different blends of Kona-grown original coffee mixture from 14 different farms in Hawaii and a non-Kona-grown original coffee mixture from 3 different sampling sites in Hawaii. Derivative transformations (1st and 2nd), mathematical enhancements such as mean centering and variance scaling, and multivariate regressions by partial least squares (PLS) and principal components regression (PCR) were implemented to develop and enhance the calibration model. The calibration model was successfully validated using 9 synthetic blend sets of 100% Kona coffee mixture and its adulterant, 100% non-Kona coffee mixture. There were distinct peak variations of ground and brewed coffee blends in the spectral "fingerprint" region between 800 and 1900 cm(-1). The PLS-2nd derivative calibration model based on brewed Kona coffee with mean centering data processing showed the highest degree of accuracy, with the lowest standard error of calibration value of 0.81 and the highest R(2) value of 0.999. The model was further validated by quantitative analysis of commercial Kona coffee blends. Results demonstrate that FTIR can be a rapid alternative for authenticating Kona coffee, requiring only quick and simple sample preparation.
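Principal components regression on mean-centered spectra, one of the calibration approaches mentioned, can be sketched as follows. The synthetic "spectra" and coefficients are assumptions for illustration, not the study's calibration data:

```python
import numpy as np

def pcr_fit(X, y, n_comp):
    """Principal components regression on mean-centered data (illustrative)."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc = X - Xm
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_comp].T                      # leading loading vectors
    scores = Xc @ V                        # projections onto principal components
    gamma, *_ = np.linalg.lstsq(scores, y - ym, rcond=None)
    return V @ gamma, Xm, ym               # regression vector in original band space

def pcr_predict(X, b, Xm, ym):
    return (X - Xm) @ b + ym

# Hypothetical data: 20 samples x 5 spectral bands, response linear in composition
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
w_true = np.array([0.5, -0.2, 0.1, 0.0, 0.3])   # assumed band coefficients
y = X @ w_true + 3.0                            # e.g., Kona fraction on some scale
b, Xm, ym = pcr_fit(X, y, n_comp=5)
```

In a real calibration, n_comp would be chosen by cross-validation to balance fit against overfitting; mean centering, as in the abstract, removes the baseline offset before regression.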
Viewing inside Pyroclastic Flows - Large-scale Experiments on hot pyroclast-gas mixture flows
NASA Astrophysics Data System (ADS)
Breard, E. C.; Lube, G.; Cronin, S. J.; Jones, J.
2014-12-01
Pyroclastic density currents (PDCs) are the largest threat from volcanoes. Direct observations of natural flows are persistently prevented by their violence and remain limited to broad estimates of bulk flow behaviour. The Pyroclastic Flow Generator - a large-scale experimental facility to synthesize hot gas-particle mixture flows scaled to pyroclastic flows and surges - allows the physical processes behind PDC behaviour to be investigated in safety. The ability to simulate natural eruption conditions and to view and measure inside the hot flows allows deriving validation and calibration data sets for existing numerical models, and improving the constitutive relationships necessary for their effective use as powerful tools in hazard assessment. We here report on a systematic series of large-scale experiments on flows of natural pyroclastic material and gas up to 30 m s-1 fast, 2-4.5 m thick, and 20-35 m long. We will show high-speed movies and non-invasive sensor data that detail the internal structure of the analogue pyroclastic flows. The experimental PDCs are synthesized by the controlled 'eruption column collapse' of variably diluted suspensions into an instrumented channel. Experiments show four flow phases: mixture acceleration and dilution during free fall; impact and lateral blasting; PDC runout; and co-ignimbrite cloud formation. The fully turbulent flows reach Reynolds numbers up to 10^7 and depositional facies similar to natural deposits. In the PDC runout phase, the shear flows develop a four-partite structure from top to base: a fully turbulent, strongly density-stratified ash cloud with average particle concentrations << 1 vol%; a transient, turbulent dense suspension region with particle concentrations between 1 and 10 vol%; a non-turbulent, aerated and highly mobile dense underflow with particle concentrations between 40 and 50 vol%; and a vertically aggrading bed of static material.
We characterise these regions and the exchanges of energy and momentum through their interfaces via vertical time-series profiles of velocity, particle concentration, gas and particle transport directionality and turbulent eddy characteristics. We highlight the importance of each region for the PDC runout dynamics and introduce a new transport and sedimentation model for downslope evolving pyroclastic flows.
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
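For removal sampling, one of the protocols mentioned above, the multinomial cell probabilities have a simple closed form. A minimal sketch follows; the detection probability p, number of passes J, and local abundance are assumed values for illustration:

```python
import numpy as np

def removal_cell_probs(p, J):
    """Multinomial cell probabilities pi_j = p * (1 - p)^(j - 1) for J removal passes."""
    j = np.arange(1, J + 1)
    return p * (1.0 - p) ** (j - 1)

pi = removal_cell_probs(0.3, 3)     # probability of first capture on pass 1, 2, 3
p_missed = (1.0 - 0.3) ** 3         # probability an individual is never captured
expected_counts = 50 * pi           # expected removals if local abundance N = 50
```

Because the pass-specific counts carry direct information about p, the multinomial likelihood separates detection from abundance more sharply than a simple binomial count model, which is the gain in precision the chapter describes.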
Friction in debris flows: inferences from large-scale flume experiments
Iverson, Richard M.; LaHusen, Richard G.; ,
1993-01-01
A recently constructed flume, 95 m long and 2 m wide, permits systematic experimentation with unsteady, nonuniform flows of poorly sorted geological debris. Preliminary experiments with water-saturated mixtures of sand and gravel show that they flow in a manner consistent with Coulomb frictional behavior. The Coulomb flow model of Savage and Hutter (1989, 1991), modified to include quasi-static pore-pressure effects, predicts flow-front velocities and flow depths reasonably well. Moreover, simple scaling analyses show that grain friction, rather than liquid viscosity or grain collisions, probably dominates shear resistance and momentum transport in the experimental flows. The same scaling indicates that grain friction is also important in many natural debris flows.
Lu, Pei; Xia, Jun; Li, Zhicheng; Xiong, Jing; Yang, Jian; Zhou, Shoujun; Wang, Lei; Chen, Mingyang; Wang, Cheng
2016-11-08
Accurate segmentation of blood vessels plays an important role in the computer-aided diagnosis and interventional treatment of vascular diseases. The statistical method is an important component of effective vessel segmentation; however, several limitations degrade segmentation performance, i.e., dependence on the image modality, uneven contrast media, bias field, and overlapping intensity distributions of object and background. In addition, the mixture models of the statistical methods are constructed relying on the characteristics of the image histograms. Thus, it is challenging for traditional methods to be applied to vessel segmentation across multi-modality angiographic images. To overcome these limitations, a flexible segmentation method with a fixed mixture model has been proposed for various angiography modalities. Our method mainly consists of three parts. Firstly, a multi-scale filtering algorithm was applied to the original images to enhance vessels and suppress noise; as a result, the filtered data acquired a new statistical characteristic. Secondly, a mixture model formed by three probability distributions (two Exponential distributions and one Gaussian distribution) was built to fit the histogram curve of the filtered data, where the expectation maximization (EM) algorithm was used for parameter estimation. Finally, three-dimensional (3D) Markov random fields (MRF) were employed to improve the accuracy of pixel-wise classification and posterior probability estimation. To quantitatively evaluate the performance of the proposed method, two phantoms simulating blood vessels with different tubular structures and noise levels were devised. Meanwhile, four clinical angiographic data sets from different human organs were used to qualitatively validate the method.
To further test the performance, comparison tests between the proposed method and traditional ones were conducted on two different brain magnetic resonance angiography (MRA) data sets. The results on the phantoms were satisfying: the noise was greatly suppressed, the percentages of misclassified voxels (the segmentation error ratios) were no more than 0.3%, and the Dice similarity coefficients (DSCs) were above 94%. According to the opinions of clinical vascular specialists, the vessels in the various data sets were extracted with high accuracy, since complete vessel trees were extracted while fewer non-vessel and background voxels were falsely classified as vessels. In the comparison experiments, the proposed method showed superior accuracy and robustness for extracting vascular structures from multi-modality angiographic images with complicated background noise. The experimental results demonstrated that our proposed method is applicable to various angiographic data. The main reason is that the constructed mixture probability model can uniformly classify vessel objects from the multi-scale filtered data of various angiography images. The advantages of the proposed method lie in the following aspects: firstly, it can extract vessels from poor-quality angiography, since the multi-scale filtering algorithm improves vessel intensity under conditions such as uneven contrast media and bias field; secondly, it performs well for extracting vessels in multi-modality angiographic images despite various noise levels; and thirdly, it achieves better accuracy and robustness than traditional methods. Generally, these traits indicate that the proposed method would have significant clinical application.
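The two-Exponential-plus-Gaussian mixture fitted by EM, as described above, can be sketched on synthetic filtered-intensity data. This is an illustration, not the authors' implementation; the component parameters, sample sizes, and initialization are all assumptions:

```python
import numpy as np

def em_exp_exp_gauss(x, iters=300):
    """EM for a mixture of two Exponentials (background) and one Gaussian (vessel)."""
    w = np.array([1 / 3, 1 / 3, 1 / 3])
    rate = np.array([2.0 / np.mean(x), 0.5 / np.mean(x)])   # initial exponential rates
    mu, var = np.quantile(x, 0.95), np.var(x)               # Gaussian starts in the bright tail
    for _ in range(iters):
        pdf = np.column_stack([
            rate[0] * np.exp(-rate[0] * x),
            rate[1] * np.exp(-rate[1] * x),
            np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var),
        ])
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)   # E-step: posterior class probabilities
        n = r.sum(axis=0)                   # M-step: re-estimate all parameters
        w = n / len(x)
        rate = n[:2] / (r[:, :2] * x[:, None]).sum(axis=0)
        mu = (r[:, 2] * x).sum() / n[2]
        var = (r[:, 2] * (x - mu) ** 2).sum() / n[2]
    return w, rate, mu, var

# Synthetic filtered intensities: two exponential background classes plus a bright
# Gaussian "vessel" class near intensity 200 (assumed values)
rng = np.random.default_rng(2)
x = np.concatenate([rng.exponential(10.0, 2000),
                    rng.exponential(40.0, 1000),
                    rng.normal(200.0, 10.0, 500)])
w, rate, mu, var = em_exp_exp_gauss(x)
```

In the full method, the per-voxel posterior of the Gaussian component would then feed the 3D MRF stage for spatially regularized classification.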
Campone, Mario; Beck, J Thaddeus; Gnant, Michael; Neven, Patrick; Pritchard, Kathleen I; Bachelot, Thomas; Provencher, Louise; Rugo, Hope S; Piccart, Martine; Hortobagyi, Gabriel N; Nunzi, Martina; Heng, Daniel Y C; Baselga, José; Komorowski, Anna; Noguchi, Shinzaburo; Horiguchi, Jun; Bennett, Lee; Ziemiecki, Ryan; Zhang, Jie; Cahana, Ayelet; Taran, Tetiana; Sahmoud, Tarek; Burris, Howard A
2013-11-01
Everolimus (EVE) + exemestane (EXE; n = 485) more than doubled median progression-free survival versus placebo (PBO) + EXE (n = 239), with a manageable safety profile and no deterioration in health-related quality of life (HRQOL), in patients with hormone-receptor-positive (HR(+)) advanced breast cancer (ABC) who recurred or progressed on/after nonsteroidal aromatase inhibitor (NSAI) therapy. To further evaluate the impact of EVE + EXE on disease burden, we conducted additional post-hoc analyses of patient-reported HRQOL. HRQOL was assessed using the EORTC QLQ-C30 and QLQ-BR23 questionnaires at baseline and every 6 weeks thereafter until treatment discontinuation because of disease progression, toxicity, or consent withdrawal. Endpoints included the QLQ-C30 Global Health Status (QL2) scale, the QLQ-BR23 breast symptom (BRBS) scale, and the arm symptom (BRAS) scale. Between-group differences in change from baseline were assessed using linear mixed models with selected covariates. Sensitivity analysis using pattern-mixture models determined the effect of study discontinuation on/before week 24. Treatment arms were compared using differences of least squares mean (LSM) changes from baseline and 95% confidence intervals (CIs) at each timepoint and overall. Clinicaltrials.gov: NCT00863655. Outcome measures included progression-free survival, survival, response rate, safety, and HRQOL. Linear mixed models (primary model) demonstrated no statistically significant overall difference between EVE + EXE and PBO + EXE for QL2 (LSM difference = -1.91; 95% CI = -4.61, 0.78), BRBS (LSM difference = -0.18; 95% CI = -1.98, 1.62), or BRAS (LSM difference = -0.42; 95% CI = -2.94, 2.10). Based on pattern-mixture models, patients who dropped out early had worse QL2 decline on both treatments. In the expanded pattern-mixture model, EVE + EXE-treated patients who did not drop out early had stable BRBS and BRAS relative to PBO + EXE. HRQOL data were not collected after disease progression.
These analyses confirm that EVE + EXE provides clinical benefit without adversely impacting HRQOL in patients with HR(+) ABC who recurred/progressed on prior NSAIs versus endocrine therapy alone.
NASA Astrophysics Data System (ADS)
Walko, R. L.; Ashby, T.; Cotton, W. R.
2017-12-01
The fundamental role of atmospheric aerosols in the process of cloud droplet nucleation is well known, and there is ample evidence that the concentration, size, and chemistry of aerosols can strongly influence microphysical, thermodynamic, and ultimately dynamic properties and evolution of clouds and convective systems. With the increasing availability of observation- and model-based environmental representations of different types of anthropogenic and natural aerosols, there is increasing need for models to be able to represent which aerosols nucleate and which do not in supersaturated conditions. However, this is a very complex process that involves competition for water vapor between multiple aerosol species (chemistries) and different aerosol sizes within each species. Attempts have been made to parameterize the nucleation properties of mixtures of different aerosol species, but it is very difficult or impossible to represent all possible mixtures that may occur in practice. As part of a modeling study of the impact of anthropogenic and natural aerosols on hurricanes, we developed an ultra-efficient aerosol bin model to represent nucleation in a high-resolution atmospheric model that explicitly represents cloud- and subcloud-scale vertical motion. The bin model is activated at any time and location in a simulation where supersaturation occurs and is potentially capable of activating new cloud droplets. The bins are populated from the aerosol species that are present at the given time and location and by multiple sizes from each aerosol species according to a characteristic size distribution, and the chemistry of each species is represented by its absorption or adsorption characteristics. The bin model is integrated in time increments that are smaller than that of the atmospheric model in order to temporally resolve the peak supersaturation, which determines the total nucleated number. 
Even though on the order of 100 bins are typically utilized, this leads only to a 10 or 20% increase in overall computational cost due to the efficiency of the bin model. This method is highly versatile in that it automatically accommodates any possible number and mixture of different aerosol species. Applications of this model to simulations of Typhoon Nuri will be presented.
Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin
2018-01-01
The joint toxicity of chemical mixtures has emerged as a popular topic, particularly regarding the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models had varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the mode of toxic action may depend on the combinations and concentrations of the tested metal mixtures. Statistical analysis of antagonistic and synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions were observed for the Cu-Zn mixtures. These results illustrate that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
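The two reference models compared above can be sketched for single chemicals following a two-parameter log-logistic dose-response curve (an illustrative assumption; the paper fits its own toxicodynamic models, and all function names here are hypothetical). CA solves the toxic-unit equation sum_i c_i / EC_i(x) = 1 for the mixture effect x, while IA combines single-chemical effects multiplicatively:

```python
def loglogistic_effect(c, ec50, h):
    """Fraction affected at concentration c (two-parameter log-logistic)."""
    return 1.0 / (1.0 + (ec50 / c) ** h)

def ec_at_effect(x, ec50, h):
    """Inverse curve: concentration producing effect fraction x."""
    return ec50 * (x / (1.0 - x)) ** (1.0 / h)

def ia_mixture_effect(concs, ec50s, slopes):
    """Independent action: E_mix = 1 - prod_i (1 - E_i)."""
    unaffected = 1.0
    for c, e, h in zip(concs, ec50s, slopes):
        unaffected *= 1.0 - loglogistic_effect(c, e, h)
    return 1.0 - unaffected

def ca_mixture_effect(concs, ec50s, slopes):
    """Concentration addition: solve sum_i c_i / EC_i(x) = 1 for x by bisection."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        toxic_units = sum(c / ec_at_effect(mid, e, h)
                          for c, e, h in zip(concs, ec50s, slopes))
        if toxic_units > 1.0:   # sum of toxic units > 1: effect exceeds mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a sham mixture of two identical chemicals, each at half its EC50, CA returns exactly the 50% effect level, while IA generally predicts a different value; this divergence is the basic discriminating behaviour exploited in comparisons of the two models.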
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie
2013-01-01
Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment, the mathematical prediction of mixture effects using knowledge on single chemicals is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First, we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single-chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency-adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose-response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified.
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
Thermodynamic scaling of the shear viscosity of Mie n-6 fluids and their binary mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delage-Santacreu, Stephanie; Galliero, Guillaume, E-mail: guillaume.galliero@univ-pau.fr; Hoang, Hai
2015-05-07
In this work, we have evaluated the applicability of the so-called thermodynamic scaling and the isomorph frame to describe the shear viscosity of Mie n-6 fluids of varying repulsive exponents (n = 8, 12, 18, 24, and 36). Furthermore, the effectiveness of the thermodynamic scaling to deal with binary mixtures of Mie n-6 fluids has been explored as well. To generate the viscosity database of these fluids, extensive non-equilibrium molecular dynamics simulations have been performed for various thermodynamic conditions. Then, a systematic approach has been used to determine the gamma exponent value (γ) characteristic of the thermodynamic scaling approach for each system. In addition, the applicability of the isomorph theory with a density dependent gamma has been confirmed in pure fluids. In both pure fluids and mixtures, it has been found that the thermodynamic scaling with a constant gamma is sufficient to correlate the viscosity data on a large range of thermodynamic conditions covering liquid and supercritical states as long as the density is not too high. Interestingly, it has been obtained that, in pure fluids, the value of γ is directly proportional to the repulsive exponent of the Mie potential. Finally, it has been found that the value of γ in mixtures can be deduced from those of the pure components using a simple logarithmic mixing rule.
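In code, the thermodynamic-scaling ansatz says that the reduced viscosity of a given fluid is a single-variable function of rho**gamma / T. A minimal sketch, with the logarithmic mixing rule written in geometric-mean form (the exact form used by the authors is an assumption here, and both function names are illustrative):

```python
import math

def scaling_variable(rho, T, gamma):
    """Thermodynamic-scaling variable: viscosity data for one fluid are
    expected to collapse onto a single curve when plotted against rho**gamma / T."""
    return rho ** gamma / T

def mixture_gamma(mole_fractions, gammas):
    """Logarithmic mixing rule for the mixture exponent, written here as
    ln(gamma_mix) = sum_i x_i * ln(gamma_i) (an assumed form)."""
    return math.exp(sum(x * math.log(g)
                        for x, g in zip(mole_fractions, gammas)))
```

An equimolar mixture of two fluids with pure-component exponents 4 and 9 would then have gamma_mix = 6, the geometric mean.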
Generation of skeletal mechanism by means of projected entropy participation indices
NASA Astrophysics Data System (ADS)
Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica
2017-11-01
When the dynamics of reactive systems develop very-slow and very-fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, then it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant pressure, adiabatic, batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.
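The entropy participation index described above can be illustrated in its generic thermodynamic form, where each reaction k contributes r_k * A_k / T to the entropy production (a textbook sketch; the paper's projected indices additionally use the G-Scheme subspace decomposition, which is not reproduced here, and the function names are illustrative):

```python
def entropy_production_terms(rates, affinities, T):
    """Per-reaction contributions r_k * A_k / T to the total entropy
    production (generic thermodynamic form, not the paper's projected one)."""
    return [r * a / T for r, a in zip(rates, affinities)]

def participation_indices(rates, affinities, T):
    """Normalised entropy participation index of each reaction: the
    fraction of total (absolute) entropy production it accounts for."""
    contrib = entropy_production_terms(rates, affinities, T)
    total = sum(abs(c) for c in contrib)
    return [abs(c) / total for c in contrib]
```

Reactions with a negligible index are candidates for removal from the skeletal mechanism, which is the reduction step the abstract refers to.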
Simple and Multiple Endmember Mixture Analysis in the Boreal Forest
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Gamon, John A.; Qiu, Hong-Lie
2000-01-01
A key scientific objective of the original Boreal Ecosystem-Atmospheric Study (BOREAS) field campaign (1993-1996) was to obtain the baseline data required for modeling and predicting fluxes of energy, mass, and trace gases in the boreal forest biome. These data sets are necessary to determine the sensitivity of the boreal forest biome to potential climatic changes and potential biophysical feedbacks on climate. A considerable volume of remotely sensed and supporting field data were acquired by numerous researchers to meet this objective. By design, remote sensing and modeling were considered critical components for scaling efforts, extending point measurements from flux towers and field sites over larger spatial and longer temporal scales. A major focus of the BOREAS Follow-on program was concerned with integrating the diverse remotely sensed and ground-based data sets to address specific questions such as carbon dynamics at local to regional scales.
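Spectral mixture analysis of the kind named in the title models each pixel spectrum as a linear combination of endmember spectra. A minimal two-endmember sketch with a sum-to-one constraint (illustrative only; multiple-endmember SMA as applied in BOREAS work uses larger endmember libraries and shade terms):

```python
def unmix_two_endmembers(spectrum, em1, em2):
    """Linear spectral mixture analysis with two endmembers and a
    sum-to-one constraint: spectrum ~= f*em1 + (1-f)*em2.
    Returns the least-squares fraction f of endmember 1."""
    d = [a - b for a, b in zip(em1, em2)]
    r = [a - b for a, b in zip(spectrum, em2)]
    num = sum(x * y for x, y in zip(r, d))
    den = sum(x * x for x in d)
    return num / den

def rmse(spectrum, em1, em2, f):
    """Model RMSE; multiple-endmember SMA selects, per pixel, the
    endmember combination with the lowest RMSE."""
    n = len(spectrum)
    err = sum((s - (f * a + (1 - f) * b)) ** 2
              for s, a, b in zip(spectrum, em1, em2))
    return (err / n) ** 0.5
```

Running several candidate endmember pairs per pixel and keeping the lowest-RMSE model is the essential difference between simple and multiple endmember mixture analysis.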
ERIC Educational Resources Information Center
Henson, James M.; Reise, Steven P.; Kim, Kevin H.
2007-01-01
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several…
Catalytic ignition model in a monolithic reactor with in-depth reaction
NASA Technical Reports Server (NTRS)
Tien, Ta-Ching; Tien, James S.
1990-01-01
Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can catch all the transient phenomena while minimizing the computational cost.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
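A two-component normal mixture of the kind fitted in the paper is usually estimated with the EM algorithm. A self-contained sketch (initialisation, iteration count and the variance floor are illustrative choices, not the authors'):

```python
import math

def normal_pdf(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_normals(data, iters=200):
    """EM for a two-component univariate normal mixture.
    Returns (weights, means, standard deviations)."""
    data = sorted(data)
    n = len(data)
    half = n // 2
    # simple initialisation: split the sorted sample in half
    mu = [sum(data[:half]) / half, sum(data[half:]) / (n - half)]
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-6)  # floor to avoid degeneracy
    return w, mu, sigma
```

On two well-separated clusters of equal size, the fitted weights approach 0.5 each and the fitted means approach the cluster centres.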
Models of chromatin spatial organisation in the cell nucleus
NASA Astrophysics Data System (ADS)
Nicodemi, Mario
2014-03-01
In the cell nucleus chromosomes have a complex architecture serving vital functional purposes. Recent experiments have started unveiling the interaction map of DNA sites genome-wide, revealing different levels of organisation at different scales. The principles, though, which orchestrate such a complex 3D structure still remain mysterious. I will give an overview of the scenario emerging from some classical polymer physics models of the general aspects of chromatin spatial organisation. The available experimental data, which can be rationalised in a single framework, support a picture where chromatin is a complex mixture of differently folded regions, self-organised across spatial scales according to basic physical mechanisms. I will also discuss applications to specific DNA loci, e.g. the HoxB locus, where models informed with biological details, and tested against targeted experiments, can help identify the determinants of folding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Jason Joe
Based upon the presented sensitivity data for the examined calcium nitrate mixtures using sugar and sawdust, contact handling/mixing of these materials does not present hazards greater than those occurring during handling of dry PETN powder. The aluminized calcium nitrate mixtures present a known ESD fire hazard due to the fine aluminum powder fuel. These mixtures may yet present an ESD explosion hazard, though this has not been investigated at this time. The detonability of these mixtures will be investigated during Phase III testing.
OH radical kinetics in hydrogen-air mixtures at the conditions of strong vibrational nonequilibrium
NASA Astrophysics Data System (ADS)
Winters, Caroline; Hung, Yi-Chen; Jans, Elijah; Eckert, Zak; Frederickson, Kraig; Adamovich, Igor V.; Popov, Nikolay
2017-12-01
This work presents results of time-resolved, absolute measurements of OH number density, nitrogen vibrational temperature, and translational-rotational temperature in air and lean hydrogen-air mixtures excited by a diffuse filament nanosecond pulse discharge, at a pressure of 100 Torr and high specific energy loading. The main objective of these measurements is to study kinetics of OH radicals at the conditions of strong vibrational excitation of nitrogen, below autoignition temperature. N2 vibrational temperature and gas temperature in the discharge and the afterglow are measured by ns broadband coherent anti-Stokes Raman scattering. Hydroxyl radical number density is measured by laser induced fluorescence, calibrated by Rayleigh scattering. The results show that the discharge generates strong vibrational nonequilibrium in air and H2-air mixtures for delay times after the discharge pulse of up to ~1 ms, with a peak vibrational temperature of Tv ≈ 1900 K at T ≈ 500 K. Nitrogen vibrational temperature peaks at 100-200 µs after the discharge pulse, before decreasing due to vibrational-translational relaxation by O atoms (on the time scale of several hundred µs) and diffusion (on ms time scale). OH number density increases gradually after the discharge pulse, peaking at t ~ 100-300 µs and decaying on a longer time scale, until t ~ 1 ms. Both OH rise time and decay time decrease as H2 fraction in the mixture is increased from 1% to 5%. Comparison of the experimental data with kinetic modeling predictions shows that OH kinetics is controlled primarily by reactions of H2 and O2 with O and H atoms generated during the discharge. At the present conditions, OH number density is not affected by N2 vibrational excitation directly, i.e. via vibrational energy transfer to HO2. The effect of a reaction between vibrationally excited H2 and O atoms on OH kinetics is also shown to be insignificant. 
As the discharge pulse coupled energy is increased, the model predicts transient OH number density overshoot due to the temperature rise caused by N2 vibrational relaxation by O atoms, which may well be a dominant effect in discharges with specific energy loading.
Apparatus and methodology for fire gas characterization by means of animal exposure
NASA Technical Reports Server (NTRS)
Marcussen, W. H.; Hilado, C. J.; Furst, A.; Leon, H. A.; Kourtides, D. A.; Parker, J. A.; Butte, J. C.; Cummins, J. M.
1976-01-01
While there is a great deal of information available from small-scale laboratory experiments and for relatively simple mixtures of gases, considerable uncertainty exists regarding appropriate bioassay techniques for the complex mixture of gases generated in full-scale fires. Apparatus and methodology have been developed based on current state of the art for determining the effects of fire gases in the critical first 10 minutes of a full-scale fire on laboratory animals. This information is presented for its potential value and use while further improvements are being made.
Richter, Markus; McLinden, Mark O
2017-07-21
Phase equilibria of fluid mixtures are important in numerous industrial applications and are, thus, a major focus of thermophysical property research. Improved data, particularly along the dew line, are needed to improve model predictions. Here we present experimental results utilizing highly accurate densimetry to quantify the effects of sorption and capillary condensation, which exert a distorting influence on measured properties near the dew line. We investigate the (pressure, density, temperature, composition) behaviour of binary (CH4 + C3H8) and (Ar + CO2) mixtures over the temperature range from (248.15 to 273.15) K starting at low pressures and increasing in pressure towards the dew point along isotherms. Three distinct regions are observed: (1) minor sorption effects in micropores at low pressures; (2) capillary condensation followed by wetting in macro-scale surface scratches beginning approximately 2% below the dew-point pressure; (3) bulk condensation. We hypothesize that the true dew point lies within the second region.
Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.
Nagai, Takashi
2017-10-01
The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.
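The multisubstance potentially affected fraction (msPAF) referred to above combines a log-normal SSD with the CA or IA rule. A common sketch following the standard toxic-unit formulation (parameter values and function names are purely illustrative; the paper's experimental SSDs are not reproduced):

```python
import math

def paf(conc, mu_log10, sigma_log10):
    """Potentially affected fraction of species from a log-normal SSD:
    Phi((log10(conc) - mu) / sigma)."""
    z = (math.log10(conc) - mu_log10) / sigma_log10
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mspaf_ca(concs, mus_log10, sigma_log10):
    """CA-type msPAF for similar-acting chemicals: sum hazard units
    (conc / HC50) and evaluate one SSD (a common sigma is assumed)."""
    hu = sum(c / 10 ** m for c, m in zip(concs, mus_log10))
    z = math.log10(hu) / sigma_log10
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mspaf_ia(concs, mus_log10, sigmas_log10):
    """IA-type (response addition) msPAF: 1 - prod_i (1 - PAF_i)."""
    prod = 1.0
    for c, m, s in zip(concs, mus_log10, sigmas_log10):
        prod *= 1.0 - paf(c, m, s)
    return 1.0 - prod
```

At a single chemical's HC50 both rules give an affected fraction of 0.5; for two dissimilar-acting chemicals each at its HC50, the IA rule gives 1 - 0.5² = 0.75.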
Bleka, Øyvind; Storvik, Geir; Gill, Peter
2016-03-01
We have released a software named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian based, which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model; however, we find that there is a practical computing time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely available, open-source continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation) to be reported in the literature. It therefore serves an important purpose as an unrestricted platform to compare the different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature in EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
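For orientation, the weight-of-evidence quantity that EuroForMix computes is a likelihood ratio. The minimal textbook baseline, for a matching single-source profile with no mixture, peak heights, or drop-out and Hardy-Weinberg equilibrium assumed, looks like this (all names illustrative; the continuous mixture model in the paper is far richer):

```python
def genotype_freq(p, q=None):
    """Hardy-Weinberg genotype frequency: p**2 for a homozygote,
    2*p*q for a heterozygote."""
    return p * p if q is None else 2.0 * p * q

def single_source_lr(locus_genotype_freqs):
    """Likelihood ratio for a matching single-source profile:
    LR = 1 / (product of genotype frequencies across independent loci).
    This is only the single-contributor baseline, not EuroForMix's model."""
    prod = 1.0
    for f in locus_genotype_freqs:
        prod *= f
    return 1.0 / prod
```

With a homozygote of allele frequency 0.1 at one locus and a 0.2/0.3 heterozygote at another, the LR is 1 / (0.01 × 0.12) ≈ 833.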
NASA Astrophysics Data System (ADS)
von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian; Rickenmann, Dieter
2016-04-01
Debris flows are frequent natural hazards that cause massive damage. A wide range of debris flow models try to cover the complex flow behavior that arises from the inhomogeneous material mixture of water with clay, silt, sand, and gravel. The energy dissipation between moving grains depends on grain collisions and tangential friction, and the viscosity of the interstitial fine material suspension depends on the shear gradient. Thus a rheology description needs to be sensitive to the local pressure and shear rate, making the three-dimensional flow structure a key issue for flows in complex terrain. Furthermore, the momentum exchange between the granular and fluid phases should account for the presence of larger particles. We model the fine material suspension with a Herschel-Bulkley rheology law, and represent the gravel with the Coulomb-viscoplastic rheology of Domnik & Pudasaini (Domnik et al. 2013). Both composites are described by two phases that can mix; a third phase, representing the air, is kept separate to capture the free surface. The fluid dynamics are solved in three dimensions using the finite volume open-source code OpenFOAM. Computational costs are kept reasonable by using the Volume of Fluid method to solve only one phase-averaged system of Navier-Stokes equations. The Herschel-Bulkley parameters are modeled as a function of water content, volumetric solid concentration of the mixture, clay content and its mineral composition (Coussot et al. 1989, Yu et al. 2013). The gravel phase properties needed for the Coulomb-viscoplastic rheology are defined by the angle of repose of the gravel. In addition to this basic setup, larger grains and the corresponding grain collisions can be introduced by a coupled Lagrangian particle simulation. Based on the local Savage number a diffusive term in the gravel phase can activate phase separation. 
The resulting model can reproduce the sensitivity of the debris flow to water content and channel bed roughness, as illustrated with lab-scale and large-scale experiments. A large-scale natural landslide event down a curved channel is presented to show the model performance at such a scale, calibrated based on the observed surface super-elevation.
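The Herschel-Bulkley law used above for the fine-material suspension is tau = tau0 + K * gamma_dot**n. A small sketch including the low-shear regularisation commonly used in CFD implementations (the cut-off value and function names are illustrative assumptions, not taken from the paper):

```python
def herschel_bulkley_stress(gamma_dot, tau0, K, n):
    """Herschel-Bulkley shear stress for shear rate gamma_dot > 0:
    tau = tau0 + K * gamma_dot**n (tau0 = yield stress,
    K = consistency, n = flow index; n < 1 means shear-thinning)."""
    return tau0 + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, tau0, K, n, gdot_min=1e-6):
    """Apparent viscosity tau / gamma_dot, clamped at a minimum shear
    rate to avoid the singularity at rest (a common regularisation)."""
    g = max(gamma_dot, gdot_min)
    return herschel_bulkley_stress(g, tau0, K, n) / g
```

The clamp makes the unyielded (near-zero shear) material behave as a very viscous fluid instead of producing an infinite viscosity, which keeps the finite-volume solver stable.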
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarvis, Ian W.H., E-mail: Ian.Jarvis@ki.se; Bergvall, Christoffer, E-mail: Christoffer.Bergvall@anchem.su.se; Bottai, Matteo, E-mail: Matteo.Bottai@ki.se
2013-02-01
Complex mixtures of polycyclic aromatic hydrocarbons (PAHs) are present in air particulate matter (PM) and have been associated with many adverse human health effects including cancer and respiratory disease. However, due to their complexity, the risk of exposure to mixtures is difficult to estimate. In the present study the effects of binary mixtures of benzo[a]pyrene (BP) and dibenzo[a,l]pyrene (DBP) and complex mixtures of PAHs in urban air PM extracts on DNA damage signaling were investigated. Applying a statistical model to the data we observed a more than additive response for binary mixtures of BP and DBP on activation of DNA damage signaling. Persistent activation of checkpoint kinase 1 (Chk1) was observed at significantly lower BP equivalent concentrations in air PM extracts than for BP alone. Activation of DNA damage signaling was also more persistent in air PM fractions containing PAHs with more than four aromatic rings, suggesting larger PAHs contribute a greater risk to human health. Altogether our data suggest that human health risk assessment based on additivity, such as toxicity equivalency factor scales, may significantly underestimate the risk of exposure to complex mixtures of PAHs. The data confirm our previous findings with PAH-contaminated soil (Niziolek-Kierecka et al., 2012) and suggest a possible role for Chk1 Ser317 phosphorylation as a biological marker for future analyses of complex mixtures of PAHs.
PLUME-MoM 1.0: a new 1-D model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-05-01
In this paper a new mathematical model for volcanic plumes, named PlumeMoM, is presented. The model describes the steady-state 1-D dynamics of the plume in a 3-D coordinate system, accounting for continuous variability in particle distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. Proper description of such a multiparticle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of properties of the continuous size-distribution of the particles. This is achieved by formulation of fundamental transport equations for the multiparticle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables investigation of the response of four key output variables (mean and standard deviation (SD) of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and SD) characterizing the pyroclastic mixture at the base of the plume. 
Results show that, for the range of parameters investigated, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution.
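The method of moments replaces N discrete particle classes with the transport of a few raw moments of the grain-size distribution (expressed in the Krumbein phi scale). A sketch comparing the continuous moments of a normal phi-distribution with those of a binned, N-phase discretisation (all values and function names illustrative):

```python
def gaussian_moments(mu, sd):
    """Raw moments of orders 0, 1, 2 of a normal grain-size distribution
    in the Krumbein phi scale: M0 = 1, M1 = mu, M2 = mu**2 + sd**2.
    PLUME-MoM transports such moments instead of N discrete classes."""
    return [1.0, mu, mu ** 2 + sd ** 2]

def moments_from_bins(centers, weights):
    """Raw moments of orders 0, 1, 2 of a binned (N-phase) distribution,
    for comparison with the classical discrete approach."""
    total = sum(weights)
    return [sum(w * c ** k for c, w in zip(centers, weights)) / total
            for k in (0, 1, 2)]
```

Mean and standard deviation of the distribution, two of the key output variables in the sensitivity study, are recovered from the moments as mu = M1 and sd = sqrt(M2 - M1**2).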
Liquid membrane purification of biogas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majumdar, S.; Guha, A.K.; Lee, Y.T.
1991-03-01
Conventional gas purification technologies are highly energy intensive. They are not suitable for economic removal of CO{sub 2} from methane obtained in biogas due to the small scale of gas production. Membrane separation techniques on the other hand are ideally suited for low gas production rate applications due to their modular nature. Although liquid membranes possess a high species permeability and selectivity, they have not been used for industrial applications due to the problems of membrane stability, membrane flooding and poor operational flexibility, etc. A new hollow-fiber-contained liquid membrane (HFCLM) technique has been developed recently. This technique overcomes the shortcomings of the traditional immobilized liquid membrane technology. The new technique uses two sets of hydrophobic, microporous hollow fine fibers, packed tightly in a permeator shell. The inter-fiber space is filled with an aqueous liquid acting as the membrane. The feed gas mixture is separated by selective permeation of a species through the liquid from one fiber set to the other. The second fiber set carries a sweep stream, gas or liquid, or simply the permeated gas stream. The objectives (which were met) of the present investigation were as follows. To study the selective removal of CO{sub 2} from a model biogas mixture containing 40% CO{sub 2} (the rest being N{sub 2} or CH{sub 4}) using a HFCLM permeator under various operating modes that include sweep gas, sweep liquid, vacuum and conventional permeation; to develop a mathematical model for each mode of operation; to build a large-scale purification loop and large-scale permeators for model biogas separation and to show stable performance over a period of one month.
Thresholding functional connectomes by means of mixture modeling.
Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F
2018-05-01
Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations, however, remains an open research problem. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-false-discovery rates derived from the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. 
The sparse connectomes obtained from mixture modeling are further discussed in the light of previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that our method extracts similar information at the group level as can be achieved with permutation testing, even though the two methods are not equivalent. With both methods, we obtain functional decoupling between the two hemispheres in the higher-order areas of the visual cortex during visual stimulation as compared to the resting state, in line with previous studies suggesting lateralization in visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject.
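The thresholding idea above can be illustrated with a toy two-component Gaussian mixture fitted by EM: one component models the null bulk of weak partial correlations, the other the sparse reliable edges, and edges are kept by their posterior probability of belonging to the non-null component. This is a minimal sketch, not the authors' implementation; the component count, initialization heuristics, and 0.5 posterior cutoff are our illustrative assumptions.

```python
import numpy as np

def fit_gmm2(x, n_iter=300):
    """Fit a two-component 1-D Gaussian mixture by EM (toy version)."""
    mu = np.array([np.median(x), np.percentile(x, 95)])  # heuristic init
    sd = np.array([x.std(), x.std()])
    w = np.array([0.9, 0.1])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sd, r

# synthetic "partial correlations": a null bulk plus a few reliable edges
rng = np.random.default_rng(0)
pcorr = np.concatenate([rng.normal(0.0, 0.05, 1000),   # weak/unreliable edges
                        rng.normal(0.5, 0.10, 100)])   # true connections
w, mu, sd, resp = fit_gmm2(pcorr)
signal = int(np.argmax(mu))
keep = resp[:, signal] > 0.5   # edges assigned to the non-null component
```

In the paper's spirit, the fitted null component would instead feed a pseudo-FDR criterion; the hard 0.5 cutoff here is only the simplest stand-in.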
Yu, Pengzhan; Li, Xingqi; Li, Xiunan; Lu, Xiuling; Ma, Guanghui; Su, Zhiguo
2007-10-15
A simple and effective chromatographic approach for purifying polyethylene glycol (PEG) derivatives at preparative scale is reported, based on polystyrene-divinylbenzene beads with ethanol/water as the eluent. The validity of the method was verified using a reaction mixture of mPEG-Glu and mPEG propionaldehyde diethylacetal (ALD-PEG) as the model. The target products were obtained in a single step at >99% purity on the polymer resin column at gram scale. Unlike conventional approaches, the method avoids toxic solvents and is not restricted to a narrow application scope, making it an attractive alternative for the purification of PEG derivatives at preparative scale.
Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC
ERIC Educational Resources Information Center
Depaoli, Sarah
2012-01-01
Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess high sensitivity.
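The core structure of an ODE constrained mixture model, i.e. mixture components whose means are pinned to the solution of a mechanistic ODE, can be sketched with a toy decay model. The rate constants, subpopulation fractions, and noise level below are invented for illustration and have nothing to do with the Erk1/2 pathway model of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
k = np.array([0.5, 2.0])     # hypothetical per-subpopulation rate constants
frac = np.array([0.6, 0.4])  # hypothetical subpopulation fractions
x0, sigma, t = 10.0, 0.2, 1.0

def ode_mean(k, t):
    # analytic solution of the mechanistic ODE dx/dt = -k*x, x(0) = x0
    return x0 * np.exp(-k * t)

# a population snapshot at time t: each cell belongs to one subpopulation,
# and its measured signal scatters around the ODE-predicted mean
labels = rng.choice(2, size=5000, p=frac)
y = ode_mean(k[labels], t) + rng.normal(0.0, sigma, size=5000)
```

Fitting a mixture to snapshots like `y` across several time points, with the component means constrained to `ode_mean(k, t)`, is the essence of the approach; here we only generate data consistent with that structure.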
Pascacio-Villafán, Carlos; Birke, Andrea; Williams, Trevor; Aluja, Martín
2017-01-01
We modeled the cost-effectiveness of rearing Anastrepha ludens, a major fruit fly pest currently mass reared for sterilization and release in pest control programs implementing the sterile insect technique (SIT). An optimization model was generated by combining response surface models of artificial diet cost savings with models of A. ludens pupation, pupal weight, larval development time and adult emergence as a function of mixtures of yeast, a costly ingredient, with corn flour and corncob fractions in the diet. Our model revealed several yeast-reduced mixtures that could be used to prepare diets that were considerably cheaper than a standard diet used for mass rearing. Models predicted a similar production of insects (pupation and adult emergence), with statistically similar pupal weights and larval development times between yeast-reduced diets and the standard mass rearing diet formulation. Annual savings from using the modified diets could be up to 5.9% of the annual cost of yeast, corn flour and corncob fractions used in the standard diet, representing a potential saving of US $27.45 per ton of diet (US $47,496 in the case of the mean annual production of 1,730.29 tons of artificial diet in the Moscafrut mass rearing facility at Metapa, Chiapas, Mexico). Implementation of the yeast-reduced diet on an experimental scale at mass rearing facilities is still required to confirm the suitability of new mixtures of artificial diet for rearing A. ludens for use in SIT. This should include the examination of critical quality control parameters of flies such as adult flight ability, starvation resistance and male sexual competitiveness across various generations. The method used here could be useful for improving the cost-effectiveness of invertebrate or vertebrate mass rearing diets worldwide.
Stress-induced modification of the boson peak scaling behavior.
Corezzi, Silvia; Caponi, Silvia; Rossi, Flavio; Fioretto, Daniele
2013-11-21
The scaling behavior of the so-called boson peak in glass-formers and its relation to the elastic properties of the system remain a source of controversy. Here the boson peak in a binary reactive mixture is measured by Raman scattering (i) on cooling the unreacted mixture well below its glass-transition temperature and (ii) after quenching the mixture to very low temperature at different times during isothermal polymerization. We find that the scaling behavior of the boson peak with the properties of the elastic medium - as measured by the Debye frequency - holds for states in which the elastic moduli follow a generalized Cauchy-like relationship, and breaks down precisely where the system departs from this relation. A possible explanation is given in terms of the development of long-range stresses in glasses. The present study provides new insight into the boson peak behavior and reconciles the apparently conflicting results presented in the literature.
NASA Astrophysics Data System (ADS)
Chen, Song; Ikoma, Toshiyuki; Ogawa, Nobuhiro; Migita, Satoshi; Kobayashi, Hisatoshi; Hanagata, Nobutaka
2010-06-01
Novel type I collagen hybrid fibrils were fabricated by neutralizing a mixture of type I fish scale collagen solution and type I porcine collagen solution with phosphate-buffered saline at 28 °C. Their structure was discussed in terms of the volume ratio of fish/porcine collagen solution. Scanning electron and atomic force micrographs showed that the diameter of collagen fibrils derived from the collagen mixture was larger than that of fibrils derived from either collagen alone, and all resultant fibrils exhibited a typical D-periodic unit of ~67 nm, irrespective of the volume ratio of the two collagens. Differential scanning calorimetry revealed only one endothermic peak for the fibrils derived from the collagen mixture or from either collagen solution, indicating that the resultant collagen fibrils were hybrids of type I fish scale collagen and type I porcine collagen.
Hostetter, Nathan J.; Gardner, Beth; Schweitzer, Sara H.; Boettcher, Ruth; Wilke, Alexandra L.; Addison, Lindsay; Swilling, William R.; Pollock, Kenneth H.; Simons, Theodore R.
2015-01-01
The extensive breeding range of many shorebird species can make integration of survey data problematic at regional spatial scales. We evaluated the effectiveness of standardized repeated count surveys coordinated across 8 agencies to estimate the abundance of American Oystercatcher (Haematopus palliatus) breeding pairs in the southeastern United States. Breeding season surveys were conducted across coastal North Carolina (90 plots) and the Eastern Shore of Virginia (3 plots). Plots were visited on 1–5 occasions during April–June 2013. N-mixture models were used to estimate abundance and detection probability in relation to survey date, tide stage, plot size, and plot location (coastal bay vs. barrier island). The estimated abundance of oystercatchers in the surveyed area was 1,048 individuals (95% credible interval: 851–1,408) and 470 pairs (384–637), substantially higher than estimates that did not account for detection probability (maximum counts of 674 individuals and 316 pairs). Detection probability was influenced by a quadratic function of survey date, and increased from mid-April (~0.60) to mid-May (~0.80), then remained relatively constant through June. Detection probability was also higher during high tide than during low, rising, or falling tides. Abundance estimates from N-mixture models were validated at 13 plots by exhaustive productivity studies (2–5 surveys wk⁻¹). Intensive productivity studies identified 78 breeding pairs across 13 productivity plots while the N-mixture model abundance estimate was 74 pairs (62–119) using only 1–5 replicated surveys season⁻¹. Our results indicate that standardized replicated count surveys coordinated across multiple agencies and conducted during a relatively short time window (closure assumption) provide tremendous potential to meet both agency-level (e.g., state) and regional-level (e.g., flyway) objectives in large-scale shorebird monitoring programs.
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Statisticians have recently emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, and the Bayesian method is used to fit such a model. Bayesian methods are widely used because of their asymptotic properties, which provide remarkable results, and because of their consistency, meaning that parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion; identifying the correct number of components is important, since a misspecified number may lead to invalid results. The Bayesian method is then used to fit a k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
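The BIC-based choice of the number of components can be sketched numerically. The following is a deliberately minimal example with synthetic bimodal data and a crude split-based two-component fit standing in for a full EM or Bayesian fit; the data, split point, and parameter counts are our illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
n = len(x)

def bic(loglik, n_params):
    # BIC = p*ln(n) - 2*ln(L); lower values indicate a better model
    return n_params * np.log(n) - 2.0 * loglik

# one-component model: a single normal (2 parameters)
ll1 = norm.logpdf(x, x.mean(), x.std()).sum()
bic1 = bic(ll1, 2)

# two-component model, crudely fitted by splitting at zero (5 parameters)
lo, hi = x[x < 0], x[x >= 0]
w = len(lo) / n
mix_pdf = w * norm.pdf(x, lo.mean(), lo.std()) + (1 - w) * norm.pdf(x, hi.mean(), hi.std())
bic2 = bic(np.log(mix_pdf).sum(), 5)
```

For clearly bimodal data, the two-component model wins decisively despite its extra three parameters, which is the mechanism by which BIC selects k in the paper's setting.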
Comparison of up-scaling methods in poroelasticity and its generalizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, J G
2003-12-13
Four methods of up-scaling coupled equations at the microscale to equations valid at the mesoscale and/or macroscale for fluid-saturated and partially saturated porous media will be discussed, compared, and contrasted. The four methods are: (1) effective medium theory, (2) mixture theory, (3) two-scale and multiscale homogenization, and (4) volume averaging. All these methods have advantages for some applications and disadvantages for others. For example, effective medium theory, mixture theory, and homogenization methods can all give formulas for coefficients in the up-scaled equations, whereas volume averaging methods give the form of the up-scaled equations but generally must be supplemented with physical arguments and/or data in order to determine the coefficients. Homogenization theory requires a great deal of mathematical insight from the user in order to choose appropriate scalings for use in the resulting power-law expansions, while volume averaging requires more physical insight to motivate the steps needed to find coefficients. Homogenization often is performed on periodic models, while volume averaging does not require any assumption of periodicity and can therefore be related very directly to laboratory and/or field measurements. Validity of the homogenization process is often limited to specific ranges of frequency - in order to justify the scaling hypotheses that must be made - and therefore cannot be used easily over wide ranges of frequency. However, volume averaging methods can quite easily be used for wide band data analysis. So, we learn from these comparisons that a researcher in the theory of poroelasticity and its generalizations needs to be conversant with two or more of these methods to solve problems generally.
Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth
2011-01-01
Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, which leads to non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models.
Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
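The key point, that correlated detections make repeated counts overdispersed relative to the binomial model, is easy to reproduce in simulation. The abundance, beta parameters, and replicate count below are illustrative choices, not values from the manatee study.

```python
import numpy as np

rng = np.random.default_rng(7)
N, p_mean, n_rep = 50, 0.5, 20000

# binomial counts: every individual detected independently with fixed p
binom_counts = rng.binomial(N, p_mean, size=n_rep)

# beta-binomial counts: detection probability is shared within a survey
# (correlated detections), drawn from Beta(2, 2) with the same mean 0.5
p_t = rng.beta(2.0, 2.0, size=n_rep)
bb_counts = rng.binomial(N, p_t)

# theoretical variances: N*p*q vs N*p*q*(1 + (N-1)*rho), rho = 1/(a+b+1)
var_binom = N * p_mean * (1 - p_mean)
var_bb = var_binom * (1 + (N - 1) / (2 + 2 + 1))
```

Both series have the same mean count, so a binomial mixture model fitted to the beta-binomial data must inflate its abundance estimate to explain the excess variance, which is exactly the bias the paper documents.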
NASA Astrophysics Data System (ADS)
Thompson, Aidan P.; Shan, Tzu-Ray
2014-05-01
Ammonium nitrate mixed with fuel oil (ANFO) is a commonly used blasting agent. In this paper we investigated the shock properties of pure ammonium nitrate (AN) and two different mixtures of ammonium nitrate and n-dodecane by characterizing their Hugoniot states. We simulated shock compression of pure AN and ANFO mixtures using the Multi-scale Shock Technique, and observed differences in chemical reaction. We also performed a large-scale explicit sub-threshold shock of AN crystal with a 10 nm void filled with 4.4 wt% of n-dodecane. We observed the formation of hotspots and enhanced reactivity at the interface region between AN and n-dodecane molecules.
A competitive binding model predicts the response of mammalian olfactory receptors to mixtures
NASA Astrophysics Data System (ADS)
Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay
Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system we need methods to predict responses to complex mixtures from single odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions in which only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
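The competitive-binding idea can be sketched as follows. The function names, the Hill-coefficient-of-1 simplification, and the numbers are our illustrative assumptions; the paper's actual dose-response parameterization may differ.

```python
import numpy as np

def cb_response(conc, ec50, efficacy):
    """Mixture response when only one odorant can occupy the receptor at a time.

    conc: odorant concentrations; ec50: per-odorant half-maximal concentrations
    (from single-odorant dose-response fits); efficacy: per-odorant maximal
    responses. Hill coefficient fixed at 1 for simplicity.
    """
    conc, ec50, efficacy = map(np.asarray, (conc, ec50, efficacy))
    occ = conc / ec50                     # relative occupancy pressure
    return float((efficacy * occ).sum() / (1.0 + occ.sum()))

# single odorant: reduces to the usual hyperbolic dose-response curve, 2/(1+2)
r_single = cb_response([2.0], [1.0], [1.0])

# suppression: a strong binder with low efficacy crowds out the first odorant
r_mix = cb_response([2.0, 5.0], [1.0, 1.0], [1.0, 0.1])
```

Because every odorant's occupancy term appears in the shared denominator, adding a weakly activating but strongly binding odorant lowers the response to the mixture, reproducing suppression without any interaction parameters.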
NASA Astrophysics Data System (ADS)
Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi
We investigate cross-correlations between typical Japanese stocks collected through the Yahoo!Japan website ( http://finance.yahoo.co.jp/ ). Applying multi-dimensional scaling (MDS) to the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit the data set with a mixture of Gaussian densities. By minimizing the so-called Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. One might naturally assume that the two-dimensional data points of all stocks shrink into a single small region when an economic crisis takes place. We check this assumption numerically on the empirical Japanese stock data, for instance around 11 March 2011.
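The MDS step can be sketched with classical (Torgerson) scaling on a correlation-derived distance. The block-structured correlation matrix below is synthetic, and the distance convention d² = 2(1 − ρ) is one common choice, not necessarily the authors'.

```python
import numpy as np

def classical_mds(corr, dim=2):
    """Embed items in low dimension from a correlation matrix via classical MDS."""
    d2 = 2.0 * (1.0 - corr)               # squared distances from correlations
    n = corr.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ d2 @ J                 # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]  # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))

# synthetic cross-correlations: two groups of strongly co-moving stocks
n = 10
corr = np.full((n, n), 0.1)
corr[:5, :5] = corr[5:, 5:] = 0.8
np.fill_diagonal(corr, 1.0)
xy = classical_mds(corr)
```

In the resulting 2-D plot the two co-moving groups form well-separated clusters, which is the structure the Gaussian-mixture/AIC step would then formalize.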
Active heat exchange system development for latent heat thermal energy storage
NASA Technical Reports Server (NTRS)
Lefrois, R. T.; Knowles, G. R.; Mathur, A. K.; Budimir, J.
1979-01-01
Active heat exchange concepts for use with thermal energy storage systems in the temperature range of 250 C to 350 C, using the heat of fusion of molten salts for storing thermal energy are described. Salt mixtures that freeze and melt in appropriate ranges are identified and are evaluated for physico-chemical, economic, corrosive and safety characteristics. Eight active heat exchange concepts for heat transfer during solidification are conceived and conceptually designed for use with selected storage media. The concepts are analyzed for their scalability, maintenance, safety, technological development and costs. A model for estimating and scaling storage system costs is developed and is used for economic evaluation of salt mixtures and heat exchange concepts for a large scale application. The importance of comparing salts and heat exchange concepts on a total system cost basis, rather than the component cost basis alone, is pointed out. The heat exchange concepts were sized and compared for 6.5 MPa/281 C steam conditions and a 1000 MW(t) heat rate for six hours. A cost sensitivity analysis for other design conditions is also carried out.
Separation of components from a scale mixture of Gaussian white noises
NASA Astrophysics Data System (ADS)
Vamoş, Călin; Crăciun, Maria
2010-05-01
The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method that imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events, and the estimated white noise becomes almost Gaussian only as a result of the uncorrelation condition.
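A minimal numerical sketch of the decomposition Xt = VtZt: a slowly varying volatility makes |X| autocorrelated, and dividing out a volatility estimate removes most of that correlation. The sinusoidal volatility, window length, and moving-average estimator are our illustrative choices; the paper instead tunes the estimator so that |Z| is exactly uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
# slowly varying volatility modulating a Gaussian white noise
v = np.exp(0.5 * np.sin(2 * np.pi * np.arange(T) / 500.0))
z = rng.standard_normal(T)
x = v * z                       # observed series: scale mixture of Gaussians

def acf1(a):
    """Lag-1 autocorrelation."""
    a = a - a.mean()
    return float(np.corrcoef(a[:-1], a[1:])[0, 1])

# moving-average volatility estimate; E|Z| = sqrt(2/pi) for standard normal Z
win = 101
vhat = np.convolve(np.abs(x), np.ones(win) / win, mode="same") / np.sqrt(2.0 / np.pi)
zhat = x / vhat                 # estimated white-noise component
```

The raw series has clearly correlated absolute values, while the normalized residual is close to white, mirroring the uncorrelation criterion used to separate the two components.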
Dynamic foraging of a top predator in a seasonal polar marine environment.
Weinstein, Ben G; Friedlaender, Ari S
2017-11-01
The seasonal movement of animals at broad spatial scales provides insight into life-history, ecology and conservation. By combining high-resolution satellite-tagged data with hierarchical Bayesian movement models, we can associate spatial patterns of movement with marine animal behavior. We used a multi-state mixture model to describe humpback whale traveling and area-restricted search states as they forage along the West Antarctic Peninsula. We estimated the change in the geography, composition and characteristics of these behavioral states through time. We show that whales later in the austral fall spent more time in movements associated with foraging, traveled at lower speeds between foraging areas, and shifted their distribution northward and inshore. Seasonal changes in movement are likely due to a combination of sea ice advance and regional shifts in the primary prey source. Our study is a step towards dynamic movement models in the marine environment at broad scales.
Bifurcations in models of a society of reasonable contrarians and conformists
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Rechtman, Raúl
2015-10-01
We study models of a society composed of a mixture of conformist and reasonable contrarian agents that at any instant hold one of two opinions. Conformists tend to agree with the average opinion of their neighbors and reasonable contrarians tend to disagree, but revert to a conformist behavior in the presence of an overwhelming majority, in line with psychological experiments. The model is studied in the mean-field approximation and on small-world and scale-free networks. In the mean-field approximation, a large fraction of conformists triggers a polarization of the opinions, a pitchfork bifurcation, while a majority of reasonable contrarians leads to coherent oscillations, with an alternation of period-doubling and pitchfork bifurcations up to chaos. Similar scenarios are obtained by changing the fraction of long-range rewiring and the parameter of scale-free networks related to the average connectivity.
Photogrammetric Measurements of CEV Airbag Landing Attenuation Systems
NASA Technical Reports Server (NTRS)
Barrows, Danny A.; Burner, Alpheus W.; Berry, Felecia C.; Dismond, Harriett R.; Cate, Kenneth H.
2008-01-01
High-speed photogrammetric measurements are being used to assess the impact dynamics of the Orion Crew Exploration Vehicle (CEV) for ground landing contingency upon return to Earth. Test articles representative of the Orion capsule are dropped at the NASA Langley Landing and Impact Research (LandIR) Facility onto a sand/clay mixture representative of a dry lakebed from elevations as high as 62 feet (18.9 meters). Two different types of test articles have been evaluated: (1) half-scale metal shell models utilized to establish baseline impact dynamics and soil characterization, and (2) geometric full-scale drop models with shock-absorbing airbags which are being evaluated for their ability to cushion the impact of the Orion CEV with the Earth's surface. This paper describes the application of the photogrammetric measurement technique and provides drop model trajectory and impact data that indicate the performance of the photogrammetric measurement system.
Modeling of two-phase porous flow with damage
NASA Astrophysics Data System (ADS)
Cai, Z.; Bercovici, D.
2009-12-01
Two-phase dynamics in convective systems has been studied broadly in Earth science. We investigate the basic physics of compaction with damage theory and present preliminary results for both steady-state and time-dependent transport as melt migrates through a porous medium. In our simple 1-D model, damage plays an important role when we consider the ascent of a melt-rich mixture at constant velocity: melt segregation becomes more difficult, so the porosity in the steady-state compaction profile is larger than in simple compaction. A scaling analysis of the compaction equation is performed to predict the behavior of melt segregation with damage. The time-dependent behavior of the compacting system is investigated through solitary wave solutions of the two-phase model. We assume that additional melt is injected into the fractured material through a single pulse of prescribed shape and velocity. Damage allows the pulse to travel farther than in simple compaction, so more melt can be injected into the two-phase mixture; future applications such as carbon dioxide injection are proposed.
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distribution model. First, we fit the model to the empirical return data. Second, we apply the fitted model in risk analysis to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the return distribution.
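Computing VaR and CVaR from a fitted two-component normal mixture can be sketched as below. The weights, means, and standard deviations are invented placeholders, not the FBMKLCI fit from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# hypothetical fitted parameters: a calm regime and a volatile regime
w  = np.array([0.8, 0.2])
mu = np.array([0.01, -0.02])
sd = np.array([0.03, 0.10])

def mix_cdf(r):
    return float((w * norm.cdf(r, mu, sd)).sum())

def var_cvar(alpha=0.05):
    # VaR: the alpha-quantile of returns, found by root-finding on the CDF
    q = brentq(lambda r: mix_cdf(r) - alpha, -5.0, 5.0)
    # CVaR: mean return below q, from the closed-form normal partial moment
    z = (q - mu) / sd
    tail_mean = float((w * (mu * norm.cdf(z) - sd * norm.pdf(z))).sum()) / alpha
    return -q, -tail_mean      # report both as positive loss fractions

var5, cvar5 = var_cvar(0.05)
```

Because the mixture CDF has no closed-form inverse, the quantile is obtained numerically; the per-component partial expectation then gives CVaR exactly, with CVaR always at least as large as VaR.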
Mean-Field Models of Structure and Dispersion of Polymer-nanoparticle Mixtures
2010-07-29
…out of the seminal descriptions of the wetting and dewetting of polymer melts on polymer brushes advanced by Leibler and coworkers. Using scaling ideas and strong segregation theory calculations, they delineated the regions where the matrix polymer wets or dewets the brush; dewetting of the melt chains is expected in the dry-brush case, i.e., for long matrix polymers and/or densely grafted brushes.
Chemical Reactions in Turbulent Mixing Flows.
1986-06-15
When the same ideas are used in a model of fuel jets burning in air, it explains (Broadwell 1982): 1. the independence of flame length from Reynolds and Schmidt numbers at high Reynolds number, 2. the linear dependence of flame length on the stoichiometric mixture ratio, and 3. … The processes are unsteady, and the observed large-scale flame length fluctuations are the best evidence of the individual cascade. A more detailed examination … Damköhler number.
Coarse-Grained Molecular Monte Carlo Simulations of Liquid Crystal-Nanoparticle Mixtures
NASA Astrophysics Data System (ADS)
Neufeld, Ryan; Kimaev, Grigoriy; Fu, Fred; Abukhdeir, Nasser M.
Coarse-grained intermolecular potentials have proven capable of capturing essential details of interactions between complex molecules, while substantially reducing the number of degrees of freedom of the system under study. In the domain of liquid crystals, the Gay-Berne (GB) potential has been successfully used to model the behavior of rod-like and disk-like mesogens. However, only ellipsoid-like interaction potentials can be described with GB, making it a poor fit for many real-world mesogens. In this work, the results of Monte Carlo simulations of liquid crystal domains using the Zewdie-Corner (ZC) potential are presented. The ZC potential is constructed from an orthogonal series of basis functions, allowing for potentials of essentially arbitrary shapes to be modeled. We also present simulations of mixtures of liquid crystalline mesogens with nanoparticles. Experimentally these mixtures have been observed to exhibit microphase separation and formation of long-range networks under some conditions. This highlights the need for a coarse-grained approach which can capture salient details on the molecular scale while simulating sufficiently large domains to observe these phenomena. We compare the phase behavior of our simulations with that of a recently presented continuum theory. This work was made possible by the Natural Sciences and Engineering Research Council of Canada and Compute Ontario.
NASA Astrophysics Data System (ADS)
Fomin, P. A.
2018-03-01
Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, or cyclohexane) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, or benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures in which the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The models can also be used to calculate the thermodynamic parameters of a mixture in a state of chemical equilibrium.
Volumetric Properties and Fluid Phase Equilibria of CO2 + H2O
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capobianco, Ryan; Gruszkiewicz, Miroslaw; Wesolowski, David J
2013-01-01
The need for accurate modeling of fluid-mineral processes over wide ranges of temperature, pressure and composition has highlighted considerable uncertainties in available property data and equations of state, even for the CO2 + H2O binary system. In particular, the solubility, activity, and ionic dissociation equilibrium data for the CO2-rich phase, which are essential for understanding dissolution/precipitation, fluid-matrix reactions, and solute transport, are uncertain or missing. In this paper we report the results of a new experimental study of volumetric and phase equilibrium properties of CO2 + H2O, to be followed by measurements for bulk and confined multicomponent fluid mixtures. Mixture densities were measured by vibrating tube densimetry (VTD) over the entire composition range at T = 200 and 250 °C and P = 20, 40, 60, and 80 MPa. Initial analysis of the mutual solubilities, determined from volumetric data, shows good agreement with earlier results for the aqueous phase, but finds that the data of Takenouchi and Kennedy (1964) significantly overestimated the solubility of water in supercritical CO2 (by a factor of more than two at 200 °C). Resolving this well-known discrepancy will have a direct impact on the accuracy of predictive modeling of CO2 injection in geothermal reservoirs and geological carbon sequestration through improved equations of state, needed for calibration of predictive molecular-scale models and large-scale reactive transport simulations.
NASA Astrophysics Data System (ADS)
Wyss, Simon A.; Guillevic, Myriam; Vicar, Martin; Nieuwenkamp, Gerard; Vollmer, Martin K.; Pascale, Céline; Reimann, Stefan; Niederhauser, Bernhard; Emmenegger, Lukas
2017-04-01
We developed two SI-traceable methods, using both static and dynamic preparation steps, to produce reference gas mixtures for sulfur hexafluoride (SF6) in gas cylinders at pmol/mol level. This research activity is conducted under the framework of the European EMRP HIGHGAS project, in support of the high quality measurements of this important greenhouse gas in the earth's atmosphere. In the method used by the Czech Metrology Institute (CMI) a parent mixture of SF6 in synthetic air was produced in an aluminium cylinder at VSL as a first step. This mixture was produced gravimetrically according to ISO 6142 at an amount fraction of 1 μmol/mol. In the second step this primary standard was further diluted to near-ambient amount fraction, with the use of a three-step dilution system and directly pressurised into aluminium cylinders to a pressure of 10 bars. The second method used by the Federal Institute of Metrology (METAS) has already been applied to other fluorinated gases such as HFC-125 and HFC-1234yf. In this method a highly concentrated mixture is produced by spiking a purified synthetic air (matrix gas) with SF6 from a permeation device. The mass loss of SF6 in the permeation device is observed by a magnetic suspension balance. In a second step this mixture is diluted with matrix gas to the desired concentrations. All flows are controlled with mass flow controllers. The diluted gas was transferred into Silconert2000-coated stainless steel cylinders by cryo-filling. The final gas mixtures at near-ambient amount fraction were measured on a Medusa gas chromatography-mass spectrometry system (Medusa-GC/MS) against working standards calibrated on existing scales of the Scripps Institution of Oceanography (SIO) and compared to other scales [1]. The agreement of the assigned values by the CMI and METAS, with the measured values referenced on the SIO scale was excellent. 
These results show that with these methods we can produce accurate SI-traceable gas mixtures of SF6 at near-ambient amount fraction without extensive static dilutions. [1] Benjamin R. Miller, Ray F. Weiss, Peter K. Salameh, Toste Tanhua, Brian R. Greally, Jens Mühle, Peter G. Simmonds, Anal. Chem., 2008, 80, 1536.
ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.
2014-01-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
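The core idea, a mixture likelihood whose component means are ODE solutions, can be sketched as follows. This is an illustrative toy (a hypothetical first-order activation model with assumed rates, weights, and noise level), not the paper's NGF-induced Erk1/2 pathway model:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

# Toy kinetic model: dx/dt = k*(1 - x), x(0) = 0.
# Each subpopulation has its own rate k; the mixture captures
# cell-to-cell variability across subpopulations.
def ode_response(k, t_eval):
    sol = solve_ivp(lambda t, x: k * (1.0 - x), (0.0, t_eval[-1]),
                    [0.0], t_eval=t_eval)
    return sol.y[0]

def mixture_loglik(data, t_eval, weights, rates, sigma):
    """Log-likelihood of single-cell trajectories under an
    ODE-constrained Gaussian mixture (one component per subpopulation)."""
    ll = 0.0
    for y_cell in data:  # y_cell: measured trajectory of one cell
        comp = [w * np.prod(norm.pdf(y_cell, loc=ode_response(k, t_eval),
                                     scale=sigma))
                for w, k in zip(weights, rates)]
        ll += np.log(sum(comp))
    return ll

t = np.linspace(0.0, 5.0, 6)
rng = np.random.default_rng(0)
# Synthetic population: 60% fast responders (k=1.5), 40% slow (k=0.3)
cells = [ode_response(k, t) + rng.normal(0, 0.05, t.size)
         for k in rng.choice([1.5, 0.3], size=50, p=[0.6, 0.4])]
print(mixture_loglik(cells, t, [0.6, 0.4], [1.5, 0.3], 0.05))
```

In a full inference scheme the weights, rates, and noise level would be estimated (e.g., by maximizing this likelihood), which is where the subpopulation structure is unraveled.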
Babusa, Bernadett; Czeglédi, Edit; Túry, Ferenc; Mayville, Stephen B; Urbán, Róbert
2015-01-01
Muscle dysmorphia (MD) is a body image disturbance characterized by a pathological preoccupation with muscularity. The study aimed to differentiate the levels of risk for MD among weightlifters and to define a tentative cut-off score for the Muscle Appearance Satisfaction Scale (MASS) for the identification of high risk MD cases. Hungarian male weightlifters (n=304) completed the MASS, the Exercise Addiction Inventory, and specific exercise and body image related questions. For the differentiation of MD, factor mixture modeling was performed, resulting in three independent groups: low-, moderate-, and high risk MD groups. The estimated prevalence of high risk MD in this sample of weightlifters was 15.1%. To determine a cut-off score for the MASS, sensitivity and specificity analyses were performed and a cut-off point of 63 was suggested. The proposed cut-off score for the MASS can be useful for the early detection of high risk MD. Copyright © 2014 Elsevier Ltd. All rights reserved.
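The cut-off determination step can be illustrated with a standard sensitivity/specificity analysis. The scores below are synthetic stand-ins (assumed group means and sizes), not the Hungarian weightlifter data, and Youden's J is used as one common selection criterion; the paper does not state which criterion was applied:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical MASS total scores: high-risk cases score higher on average.
scores_low  = rng.normal(45, 8, 300)   # low/moderate-risk group (assumed)
scores_high = rng.normal(70, 8, 50)    # high-risk group (assumed)
y = np.r_[np.zeros(300), np.ones(50)]
s = np.r_[scores_low, scores_high]

def youden_cutoff(scores, labels):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    best_c, best_j = None, -1.0
    for c in np.unique(scores):
        pred = scores >= c
        sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
        spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

c, j = youden_cutoff(s, y)
print(f"suggested cut-off ~ {c:.0f} (Youden J = {j:.2f})")
```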
An Infinite Mixture Model for Coreference Resolution in Clinical Notes
Liu, Sijia; Liu, Hongfang; Chaudhary, Vipin; Li, Dingcheng
2016-01-01
It is widely acknowledged that natural language processing is indispensable for processing electronic health records (EHRs). However, poor performance in relation detection tasks, such as coreference (linguistic expressions pertaining to the same entity/event), may affect the quality of EHR processing. Hence, there is a critical need to advance research on relation detection from EHRs. Most clinical coreference resolution systems are based on either supervised machine learning or rule-based methods. The need for a manually annotated corpus hampers the use of such systems at large scale. In this paper, we present an infinite mixture model method using definite sampling to resolve coreferent relations among mentions in clinical notes. A similarity measure function is proposed to determine the coreferent relations. Our system achieved a 0.847 F-measure on the i2b2 2011 coreference corpus. These promising results and the unsupervised nature of the method make it possible to apply the system in big-data clinical settings. PMID:27595047
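The "infinite mixture" prior behind such models can be sketched with the Chinese restaurant process (CRP), which lets the number of coreference clusters grow with the data rather than being fixed in advance. This is a generic illustration of the prior, not the paper's sampler:

```python
import numpy as np

def crp_assignments(n, alpha, rng):
    """Sample cluster assignments from a Chinese restaurant process,
    the prior underlying Dirichlet-process (infinite) mixture models.
    Mention i joins an existing cluster with probability proportional
    to its size, or opens a new one with probability prop. to alpha."""
    assign = [0]
    counts = [1]
    for i in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):       # open a new cluster ("table")
            counts.append(1)
        else:
            counts[k] += 1
        assign.append(k)
    return assign, counts

rng = np.random.default_rng(42)
assign, counts = crp_assignments(100, alpha=2.0, rng=rng)
print(len(counts), "clusters for 100 mentions")
```

In a coreference sampler, the cluster-membership probabilities would additionally be weighted by the similarity measure between mentions.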
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chremos, Alexandros, E-mail: achremos@imperial.ac.uk; Nikoubashman, Arash, E-mail: arashn@princeton.edu; Panagiotopoulos, Athanassios Z.
In this contribution, we develop a coarse-graining methodology for mapping specific block copolymer systems to bead-spring particle-based models. We map the constituent Kuhn segments to Lennard-Jones particles, and establish a semi-empirical correlation between the experimentally determined Flory-Huggins parameter χ and the interaction of the model potential. For these purposes, we have performed an extensive set of isobaric–isothermal Monte Carlo simulations of binary mixtures of Lennard-Jones particles with the same size but with asymmetric energetic parameters. The phase behavior of these monomeric mixtures is then extended to chains with finite sizes through theoretical considerations. Such a top-down coarse-graining approach is important from a computational point of view, since many characteristic features of block copolymer systems are on time and length scales which are still inaccessible through fully atomistic simulations. We demonstrate the applicability of our method for generating parameters by reproducing the morphology diagram of a specific diblock copolymer, namely, poly(styrene-b-methyl methacrylate), which has been extensively studied in experiments.
Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images
NASA Astrophysics Data System (ADS)
Yao, Shoukui; Qin, Xiaojuan
2018-02-01
Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with fuzzy features is an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMMs-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large simulated image set show that our approach is effective in distinguishing different ship types and achieves satisfactory ship recognition performance.
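The two-step pipeline can be sketched with scikit-learn: one GMM is trained per ship class on moment features, and a new image is assigned to the class whose GMM gives the highest likelihood. The feature vectors below are synthetic stand-ins for (log-scaled) Hu moments, and the two ship classes and their means are hypothetical:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in "Hu moment" feature vectors for two ship classes (synthetic;
# in the paper these would be the moment features of real images).
X_a = rng.normal([0.0, 1.0, -2.0], 0.3, size=(200, 3))
X_b = rng.normal([1.5, -0.5, -1.0], 0.3, size=(200, 3))

# Step 1: train one GMM per ship type on its moment features.
gmm_a = GaussianMixture(n_components=2, random_state=0).fit(X_a)
gmm_b = GaussianMixture(n_components=2, random_state=0).fit(X_b)

# Step 2: assign a new image's feature vector to the GMM with the
# highest log-likelihood.
def classify(x):
    return "A" if gmm_a.score_samples([x])[0] > gmm_b.score_samples([x])[0] else "B"

print(classify([0.1, 0.9, -2.1]), classify([1.4, -0.6, -1.1]))
```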
Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien
2012-01-01
Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder over the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
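Two of the classical rules named above have simple closed forms. The sketch below implements the Maxwell Garnett rule (spherical inclusions in a host matrix) and the Lichtenecker logarithmic rule for complex permittivity; the matrix/inclusion values and volume fraction are illustrative, not the paper's measurements:

```python
import numpy as np

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett effective permittivity for spherical inclusions
    of volume fraction f in a host matrix; works for complex eps."""
    num = eps_i + 2*eps_m + 2*f*(eps_i - eps_m)
    den = eps_i + 2*eps_m - f*(eps_i - eps_m)
    return eps_m * num / den

def lichtenecker(eps_m, eps_i, f):
    """Lichtenecker logarithmic mixing rule:
    ln(eps_eff) = f*ln(eps_i) + (1-f)*ln(eps_m)."""
    return np.exp(f*np.log(eps_i) + (1 - f)*np.log(eps_m))

# Illustrative values (not the paper's data): lossy inclusions
# in a low-loss dielectric matrix at 10% volume fraction.
eps_matrix, eps_incl, f = 2.5 - 0.01j, 12.0 - 3.0j, 0.10
print(maxwell_garnett(eps_matrix, eps_incl, f))
print(lichtenecker(eps_matrix, eps_incl, f))
```

Both rules reduce to the pure matrix at f = 0 and the pure inclusion at f = 1, a useful sanity check when comparing models.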
Modeling and analysis of personal exposures to VOC mixtures using copulas
Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart
2014-01-01
Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often, a single compound dominated the mixture, however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. 
Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10⁻³ for about 10% of RIOPA participants. Factors affecting the likelihood of high-concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that fit lognormal distributions poorly and that represent the greatest risks. PMID:24333991
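The copula construction can be sketched in a few lines: dependence is sampled on the Gaussian-copula scale and then pushed through lognormal marginals, so the dependence structure and the marginal distributions are specified separately. All parameter values here are assumed for illustration, not fitted RIOPA values:

```python
import numpy as np
from scipy.stats import norm, lognorm

# Sample two correlated VOC concentrations through a Gaussian copula
# with lognormal marginals (parameters are illustrative).
rng = np.random.default_rng(0)
rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=10000)
u = norm.cdf(z)                                     # uniforms carrying the dependence
benzene = lognorm.ppf(u[:, 0], s=1.0, scale=1.5)    # assumed marginal
toluene = lognorm.ppf(u[:, 1], s=0.8, scale=4.0)    # assumed marginal
print(np.corrcoef(np.log(benzene), np.log(toluene))[0, 1])  # close to rho
```

A Gumbel or t copula would replace the Gaussian step to capture the tail dependence emphasized in the study.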
Isotope and mixture effects on neoclassical transport in the pedestal
NASA Astrophysics Data System (ADS)
Pusztai, Istvan; Buller, Stefan; Omotani, John T.; Newton, Sarah L.
2017-10-01
The isotope mass scaling of the energy confinement time in tokamak plasmas differs from gyro-Bohm estimates, with implications for the extrapolation from current experiments to D-T reactors. Differences in mass scaling in L-mode and various H-mode regimes suggest that the isotope effect may originate from the pedestal. In the pedestal, sharp gradients render local diffusive estimates invalid, and global effects due to orbit-width scale profile variations have to be taken into account. We calculate neoclassical cross-field fluxes from a radially global drift-kinetic equation using the PERFECT code, to study isotope composition effects in density pedestals. The relative reduction to the peak heat flux due to global effects as a function of the density scale length is found to saturate at an isotope-dependent value that is larger for heavier ions. We also consider D-T and H-D mixtures with a focus on isotope separation. The ability to reproduce the mixture results via single-species simulations with artificial ``DT'' and ``HD'' species has been considered. These computationally convenient single ion simulations give a good estimate of the total ion heat flux in corresponding mixtures. Funding received from the International Career Grant of Vetenskapsradet (VR) (330-2014-6313) with Marie Sklodowska Curie Actions, Cofund, Project INCA 600398, and Framework Grant for Strategic Energy Research of VR (2014-5392).
Hot-corrosion of AISI 1020 steel in a molten NaCl/Na2SO4 eutectic at 700°C
NASA Astrophysics Data System (ADS)
Badaruddin, Mohammad; Risano, Ahmad Yudi Eka; Wardono, Herry; Asmi, Dwi
2017-01-01
Hot-corrosion behavior and morphological development of AISI 1020 steel with 2 mg cm⁻² deposits of various NaCl/Na2SO4 mixtures at 700°C were investigated by means of weight gain measurements, optical microscopy (OM), X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy dispersive X-ray spectroscopy (EDS). The weight gain kinetics of the steel with salt deposits display rapid growth rates compared with those of AISI 1020 steel without a salt deposit in dry air oxidation, and follow a steady-state parabolic law for 49 h. Chloridation and sulfidation produced by molten NaCl/Na2SO4 on the steel induced a hot-corrosion attack mechanism and are responsible for the formation of a thicker scale. The most severe corrosion takes place with 70 wt.% NaCl in Na2SO4. The typical Fe2O3 whisker growth in the outer part of the scale is attributed to FeCl3 volatilization. The formation of FeS in the innermost scale becomes more pronounced as the Na2SO4 content of the mixture is increased.
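The steady-state parabolic law mentioned above, (ΔW/A)² = kp·t, makes the rate constant kp the slope of a straight-line fit of squared weight gain against time. A minimal sketch with synthetic weight-gain data (the kp value and noise level are assumed, not the measured ones):

```python
import numpy as np

# Steady-state parabolic oxidation law: (dW/A)^2 = kp * t.
t = np.arange(1.0, 50.0)                      # exposure time, h
kp_true = 0.04                                # mg^2 cm^-4 h^-1 (assumed)
rng = np.random.default_rng(3)
w = np.sqrt(kp_true * t) + rng.normal(0, 0.01, t.size)  # dW/A, mg cm^-2

kp_fit = np.polyfit(t, w**2, 1)[0]            # slope of (dW/A)^2 vs t
print(f"fitted kp ~ {kp_fit:.3f} mg^2 cm^-4 h^-1")
```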
Scale-Up of Lubricant Mixing Process by Using V-Type Blender Based on Discrete Element Method.
Horibe, Masashi; Sonoda, Ryoichi; Watano, Satoru
2018-01-01
A method for scale-up of a lubricant mixing process in a V-type blender was proposed. Magnesium stearate was used as the lubricant, and the lubricant mixing experiment was conducted using three scales of V-type blenders (1.45, 21 and 130 L) at the same fill level and Froude (Fr) number. However, the properties of the lubricated mixtures and tablets could not be matched across scales by either the mixing time or the total revolution number. To find the optimum scale-up factor, discrete element method (DEM) simulations of the three scales of V-type blender mixing were conducted, and the total travel distance of particles at the different scales was calculated. The properties of the lubricated mixture and tablets obtained from the scale-up experiment were well correlated with the mixing time determined by the total travel distance. It was found that a scale-up simulation based on the travel distance of particles is valid for lubricant mixing scale-up processes.
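Keeping the Froude number Fr = ω²R/g constant across scales fixes the rotation-speed ratio, since ω ∝ 1/√R. A minimal sketch (the vessel diameters and base speed are assumed for illustration, not reported values):

```python
import math

def scaled_rpm(rpm_small, d_small, d_large):
    """Rotation speed that keeps the Froude number Fr = w^2 * R / g
    constant when scaling a blender from diameter d_small to d_large."""
    return rpm_small * math.sqrt(d_small / d_large)

# Hypothetical characteristic diameters for the smallest and largest blenders.
print(f"{scaled_rpm(30.0, 0.15, 0.65):.1f} rpm")
```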
Estimation and Model Selection for Finite Mixtures of Latent Interaction Models
ERIC Educational Resources Information Center
Hsu, Jui-Chen
2011-01-01
Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to handle if unobserved population heterogeneity exists in the endogenous latent variables of the nonlinear structural equation models. The current study estimates a mixture of latent interaction…
Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J
2010-09-17
In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models for various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, the excess molar volume (V^E). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V^E) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V^E), in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
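A simple illustration of building mixture descriptors from single-component descriptors via mixing rules. The linear and cross-term rules below are generic examples of the idea, not the specific descriptors designed in part 1 of the series:

```python
import numpy as np

def mixture_descriptors(d1, d2, x1):
    """Build mixture descriptors from single-component descriptor vectors
    d1, d2 with two common mixing rules: a mole-fraction-weighted mean and
    an absolute-difference cross term (a stand-in for interaction terms)."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    x2 = 1.0 - x1
    linear = x1 * d1 + x2 * d2               # linear combination rule
    interaction = x1 * x2 * np.abs(d1 - d2)  # crude interaction descriptor
    return np.r_[linear, interaction]

# Hypothetical descriptors (e.g., polarizability, H-bond counts) for a
# binary mixture with mole fractions 0.3/0.7.
print(mixture_descriptors([1.2, 0.0, 3.5], [0.8, 2.0, 1.0], 0.3))
```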
NASA Astrophysics Data System (ADS)
Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri
2017-12-01
Bayesian mixture modelling requires identifying the most appropriate number of mixture components, so that the resulting mixture model fits the data through a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC) and has been used by several researchers to identify the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the case under observation. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the number of mixture components, when that number is not known with certainty, in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model, where the number of components for the Indonesian microarray data is not known in advance.
Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.
Li, Haoxiang; Hua, Gang
2018-04-01
Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Faces in the Wild (LFW) dataset, the YouTube video face database, and the CMU MultiPIE dataset.
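The part-learning and representation-building steps can be sketched with scikit-learn's spherical GMM on location-augmented descriptors. The descriptors below are random stand-ins for LBP/SIFT features, and the maximum-likelihood descriptor selection is approximated here with component posteriors:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-ins for local descriptors (8-D here) augmented with their (x, y)
# patch location, as in the PEP model.
desc = rng.normal(size=(500, 8))
loc = rng.uniform(0, 1, size=(500, 2))
X = np.hstack([desc, loc])

# Spherical components balance appearance vs. location terms; each
# component then acts as one "part".
gmm = GaussianMixture(n_components=16, covariance_type="spherical",
                      random_state=0).fit(X)

# For a new face image, each part keeps its best-matching descriptor.
X_new = np.hstack([rng.normal(size=(120, 8)), rng.uniform(0, 1, (120, 2))])
resp = gmm.predict_proba(X_new)            # (n_patches, n_parts)
best = resp.argmax(axis=0)                 # patch index chosen per part
pep = X_new[best].ravel()                  # concatenated PEP representation
print(pep.shape)
```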
Verginelli, Iason; Capobianco, Oriana; Hartog, Niels; Baciocchi, Renato
2017-02-01
In this work we introduce a 1-D analytical solution that can be used for the design of horizontal permeable reactive barriers (HPRBs) as a vapor mitigation system at sites contaminated by chlorinated solvents. The developed model incorporates transient diffusion-dominated transport with a second-order reaction rate constant. Furthermore, the model accounts for the HPRB lifetime as a function of the oxidant consumption by reaction with upward vapors and its progressive dissolution and leaching by infiltrating water. Simulation results from this new model closely replicate previous lab-scale tests carried out on trichloroethylene (TCE) using a HPRB containing a mixture of potassium permanganate, water and sand. In view of field applications, design criteria, in terms of the minimum HPRB thickness required to attenuate vapors to acceptable risk-based levels and the expected HPRB lifetime, are determined from site-specific conditions such as vapor source concentration, water infiltration rate and HPRB mixture. The results clearly show the field-scale feasibility of this alternative vapor mitigation system for the treatment of chlorinated solvents. Depending on the oxidation kinetics of the target contaminant, a 1 m thick HPRB can attenuate vapor concentrations by orders of magnitude for up to 20 years, even for vapor source concentrations up to 10 g/m³. A demonstrative application for representative contaminated site conditions also shows the feasibility of this mitigation system from an economic point of view, with capital costs potentially somewhat lower than those of other remediation options, such as soil vapor extraction systems. Overall, based on the experimental and theoretical evaluation thus far, field-scale tests are warranted to verify the potential and cost-effectiveness of HPRBs for vapor mitigation control under various conditions of application. Copyright © 2017 Elsevier B.V. All rights reserved.
QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.
Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng
2018-05-01
Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model (coefficient of determination 0.9366, root mean square error 0.1345) predicted the toxicities of the 45 mixtures, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting the non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
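The robustness argument, that the Student-t's heavier tails down-weight outliers, can be demonstrated with a simple location fit. This is a one-dimensional illustration with synthetic data, not the mixture-of-experts model itself:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Clean data around 0 plus a cluster of gross outliers.
x = np.r_[rng.normal(0.0, 1.0, 200), np.full(10, 25.0)]

mean_gauss = x.mean()                 # Gaussian MLE of location: pulled by outliers
df, loc_t, scale_t = stats.t.fit(x)   # Student-t MLE: heavier tails resist them
print(f"Gaussian location: {mean_gauss:.2f}, Student-t location: {loc_t:.2f}")
```

The Gaussian estimate is dragged toward the outliers while the Student-t location stays near the bulk of the data, which is the same mechanism that protects the robust mixture of experts.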
Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C
2017-01-01
Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, we developed in the present study a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; Dissolved Organic Carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e. realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least equally accurately as the toxicity of the individual metal treatments (RMSE Mix = 16; RMSE Zn only = 18; RMSE Ni only = 17; RMSE Pb only = 23). 
Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
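The two reference models compared above have simple forms: independent action (IA) combines single-metal effects multiplicatively, while concentration addition (CA) sums toxic units. A sketch with an assumed log-logistic concentration-response and illustrative EC50 values (not the study's fitted parameters):

```python
import numpy as np

def effect_single(c, ec50, slope=1.0):
    """Single-metal concentration-response (log-logistic, assumed slope)."""
    return 1.0 / (1.0 + (ec50 / c) ** slope)

def ia_effect(conc, ec50):
    """Independent action: E_mix = 1 - prod(1 - E_i)."""
    e = [effect_single(c, k) for c, k in zip(conc, ec50)]
    return 1.0 - np.prod([1.0 - ei for ei in e])

def ca_toxic_units(conc, ec50):
    """Concentration addition: sum of toxic units c_i / EC50_i;
    TU = 1 corresponds to a 50% mixture effect."""
    return sum(c / k for c, k in zip(conc, ec50))

# Illustrative Ni-Zn-Pb exposure and EC50s (not the study's values).
conc = [20.0, 100.0, 10.0]
ec50 = [60.0, 300.0, 30.0]
print(f"IA effect: {ia_effect(conc, ec50):.2f}, "
      f"CA toxic units: {ca_toxic_units(conc, ec50):.2f}")
```

In the MMBM, the EC50-equivalents would additionally be corrected for bioavailability (water chemistry) before entering the IA combination.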
Filtering analysis of a direct numerical simulation of the turbulent Rayleigh-Benard problem
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Hussaini, M. Y.; Zang, T. A.
1990-01-01
A filtering analysis of a turbulent flow was developed which provides details of the path of the kinetic energy of the flow from its creation via thermal production to its dissipation. A low-pass spatial filter is used to split the velocity and the temperature field into a filtered component (composed mainly of scales larger than a specific size, nominally the filter width) and a fluctuation component (scales smaller than a specific size). Variables derived from these fields can fall into one of the above two ranges or be composed of a mixture of scales dominated by scales near the specific size. The filter is used to split the kinetic energy equation into three equations corresponding to the three scale ranges described above. The data from a direct simulation of the Rayleigh-Benard problem for conditions where the flow is turbulent are used to calculate the individual terms in the three kinetic energy equations. This is done for a range of filter widths. These results are used to study the spatial location and the scale range of the thermal energy production, the cascading of kinetic energy, the diffusion of kinetic energy, and the energy dissipation. These results are used to evaluate two subgrid models typically used in large-eddy simulations of turbulence. Subgrid models attempt to model the energy below the filter width that is removed by a low-pass filter.
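The scale split at the heart of the analysis can be sketched in one dimension: a sharp spectral low-pass filter separates a signal into a filtered (large-scale) part and a fluctuation (small-scale) part, and because the two parts occupy disjoint Fourier modes, the kinetic energy partitions exactly between them:

```python
import numpy as np

n = 256
x = np.linspace(0, 2*np.pi, n, endpoint=False)
u = np.sin(3*x) + 0.3*np.sin(40*x)          # large- plus small-scale content

k = np.fft.fftfreq(n, d=1.0/n)              # integer wavenumbers
u_hat = np.fft.fft(u)
cutoff = 10                                 # filter width in wavenumbers
u_large = np.fft.ifft(np.where(np.abs(k) <= cutoff, u_hat, 0)).real
u_small = u - u_large                       # fluctuation component

e_total = np.mean(u**2) / 2
e_split = np.mean(u_large**2)/2 + np.mean(u_small**2)/2
print(f"{e_total:.4f} ~ {e_split:.4f}")     # energies sum for a sharp filter
```

Repeating this for a range of cutoffs, as in the paper for a range of filter widths, shows how energy production, cascade, and dissipation distribute across scales.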
Development of scale inhibitors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gill, J.S.
1996-12-01
During the last fifty years, scale inhibition has gone from an art to a science. Scale inhibition has changed from simple pH adjustment to the use of optimized doses of designer polymers built from multiple monomers. The water-treatment industry faces many challenges due to the need to conserve water, the availability of only low-quality water, increasing environmental regulation of water discharge, and concern for human safety when using acid. Natural materials such as starch, lignin, tannin, etc., have been replaced with hydrolytically stable organic phosphates and synthetic polymers. Most progress in scale inhibition has come from the use of synergistic mixtures and from copolymerizing different functionalities to achieve specific goals. Development of scale inhibitors requires an understanding of the mechanism of crystal growth and its inhibition. This paper discusses the historical perspective of scale inhibition and the development of new inhibitors based on the understanding of the mechanism of crystal growth and the use of powerful tools like molecular modeling to visualize crystal-inhibitor interactions.
NASA Astrophysics Data System (ADS)
Abdelmalak, M. M.; Bulois, C.; Mourgues, R.; Galland, O.; Legland, J.-B.; Gruber, C.
2016-08-01
Cohesion and friction coefficient are fundamental parameters for scaling brittle deformation in laboratory models of geological processes. However, they are commonly not experimental variables, whereas (1) rocks range from cohesion-less to strongly cohesive and from low to high friction and (2) strata exhibit substantial cohesion and friction contrasts. This brittle paradox implies that the effects of brittle properties on processes involving brittle deformation cannot be tested in laboratory models. Solving this paradox requires dry granular materials with tunable and controllable brittle properties. In this paper, we describe dry mixtures of fine-grained cohesive, high-friction silica powder (SP) and low-cohesion, low-friction glass microspheres (GM) that fulfill this requirement. We systematically estimated the cohesions and friction coefficients of mixtures of variable proportions using two independent methods: (1) a classic Hubbert-type shear box to determine the extrapolated cohesion (C) and friction coefficient (μ), and (2) direct measurements of the tensile strength (T0) and the height (H) of open fractures to calculate the true cohesion (C0). The measured values of cohesion increase from 100 Pa for pure GM to 600 Pa for pure SP, with a sub-linear trend of cohesion with the GM content of the mixture. The two independent cohesion measurement methods, from shear tests and tension/extension tests, yield very similar values of extrapolated cohesion (C) and show that both are robust and can be used independently. The measured friction coefficients increase from 0.5 for pure GM to 1.05 for pure SP. The use of these granular material mixtures now allows testing (1) the effects of cohesion and friction coefficient in homogeneous laboratory models and (2) the effect of brittle layering on brittle deformation, as demonstrated by preliminary experiments. The brittle properties thus become, at last, experimental variables.
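The Hubbert-type shear-box measurements reduce to fitting the Coulomb failure criterion τ = C + μσ: the intercept is the extrapolated cohesion C and the slope the friction coefficient μ. A sketch with synthetic data generated to reproduce the pure-SP end-member values quoted above (C ≈ 600 Pa, μ ≈ 1.05); real shear-box data would of course be noisy:

```python
import numpy as np

# Synthetic peak shear stress tau at four normal stresses sigma (Pa),
# generated from tau = 600 + 1.05 * sigma to mimic pure silica powder
sigma = np.array([200.0, 400.0, 600.0, 800.0])
tau = 600.0 + 1.05 * sigma

# Linear fit of the Coulomb criterion tau = C + mu * sigma
# (np.polyfit returns coefficients from highest degree down: [mu, C])
mu, C = np.polyfit(sigma, tau, 1)
print(f"friction coefficient mu = {mu:.2f}, extrapolated cohesion C = {C:.0f} Pa")
```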
Rasch Mixture Models for DIF Detection
Strobl, Carolin; Zeileis, Achim
2014-01-01
Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819
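At its core, a Rasch mixture replaces the single item-difficulty vector with class-specific difficulties and mixes the resulting likelihoods over latent classes. A bare-bones sketch of that structure (two latent classes; the difficulties and weights are hypothetical, and real estimation would use conditional maximum likelihood rather than a known ability):

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response for ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mixture_likelihood(responses, theta, difficulties_by_class, weights):
    """Likelihood of a response vector under a Rasch mixture: latent classes
    differ in their item difficulties (DIF between classes)."""
    total = 0.0
    for pi, bs in zip(weights, difficulties_by_class):
        lik = 1.0
        for x, b in zip(responses, bs):
            p = rasch_p(theta, b)
            lik *= p if x == 1 else (1.0 - p)
        total += pi * lik
    return total

# Two hypothetical classes: the second finds item 2 much harder (DIF)
L = mixture_likelihood([1, 0], theta=0.0,
                       difficulties_by_class=[[0.0, 0.0], [0.0, 2.0]],
                       weights=[0.5, 0.5])
```

If the classes share identical difficulties, the mixture collapses to the ordinary single-class Rasch likelihood, which is the degenerate case the DIF test compares against.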
Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data
ERIC Educational Resources Information Center
Kim, Su-Young; Kim, Jee-Seon
2012-01-01
This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…
Mixture Modeling: Applications in Educational Psychology
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Hodis, Flaviu A.
2016-01-01
Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…
Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie
2017-11-10
A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejection of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na+ and Cl-) and several trace ions (Ca2+, Mg2+, K+ and SO42-). A universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model accounting for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
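The core of an SDFM-style calculation combines the solution-diffusion rejection with a film-theory concentration-polarization factor: the intrinsic rejection is Jv/(Jv + B), and polarization degrades it by exp(Jv/K). A sketch with illustrative coefficient values, not the fitted ones from the study:

```python
import math

def observed_rejection(Jv, B, K):
    """Solution-diffusion + film theory sketch.
    Jv: volumetric water flux (m/s), B: ion permeability coefficient (m/s),
    K: mass transfer coefficient of the boundary-layer film (m/s)."""
    R_real = Jv / (Jv + B)          # intrinsic (membrane-surface) rejection
    beta = math.exp(Jv / K)         # concentration-polarization factor
    return R_real / (R_real + (1.0 - R_real) * beta)

# Illustrative values: polarization makes the observed rejection lower
# than the intrinsic rejection Jv / (Jv + B)
print(observed_rejection(Jv=1e-5, B=1e-7, K=1e-4))
```

In the high-mass-transfer limit (K large, i.e. strong crossflow mixing) beta tends to 1 and the observed rejection recovers the intrinsic value, which is the usual consistency check for this kind of model.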
Multi-scale analysis of compressible fluctuations in the solar wind
NASA Astrophysics Data System (ADS)
Roberts, Owen W.; Narita, Yasuhito; Escoubet, C.-Philippe
2018-01-01
Compressible plasma turbulence is investigated in the fast solar wind at proton kinetic scales by the combined use of electron density and magnetic field measurements. Both the scale-dependent cross-correlation (CC) and the reduced magnetic helicity (σm) are used in tandem to determine the properties of the compressible fluctuations at proton kinetic scales. At inertial scales the turbulence is hypothesised to contain a mixture of Alfvénic and slow waves, characterised by weak magnetic helicity and anti-correlation between magnetic field strength B and electron density ne. At proton kinetic scales the observations suggest that the fluctuations have stronger positive magnetic helicities as well as strong anti-correlations within the frequency range studied. These results are interpreted as being characteristic of either counter-propagating kinetic Alfvén wave packets or a mixture of anti-sunward kinetic Alfvén waves along with a component of kinetic slow waves.
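At any single scale, the cross-correlation used above reduces to the zero-lag normalized correlation between band-passed magnetic field strength and electron density fluctuations. A toy sketch with synthetic antiphase signals (mimicking the slow-mode-like anti-correlation described, not real solar wind data):

```python
import numpy as np

def cross_correlation(a, b):
    """Zero-lag normalized cross-correlation of two fluctuation series;
    -1 is perfect anti-correlation, +1 perfect correlation."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

t = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
dB = np.sin(3 * t)           # synthetic |B| fluctuation at one scale
dn = -0.5 * np.sin(3 * t)    # density fluctuation in antiphase

print(cross_correlation(dB, dn))   # close to -1 (anti-correlated)
```

The scale dependence in the study comes from computing this quantity after band-passing both series around each frequency of interest.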
NASA Astrophysics Data System (ADS)
Khayyat, Abdulkareem Hawta Abdullah Kak Ahmed
Scope and Method of Study: Most developing countries, including Iraq, have very poor wind data. Existing wind speed measurements of poor quality may therefore be a poor guide to where to look for the best wind resources. The main focus of this study is to examine how effectively a GIS spatial model estimates wind power potential in regions where high-quality wind data are very scarce, such as Iraq. The research used a mixture of monthly and hourly wind data from 39 meteorological stations. The study applied spatial analysis statistics and GIS techniques in modeling wind power potential. The model weighted important human, environmental and geographic factors that affect wind turbine siting, such as roughness length, land use/land cover type, airport locations, road access, transmission lines, slope and aspect. Findings and Conclusions: The GIS model provided estimates of wind speed and wind power density and identified suitable areas for wind power projects. Using a high-resolution (30 × 30 m) digital elevation model (DEM) improved the GIS wind suitability model. The model identified areas suitable for wind farm development at different scales. The model showed that there are many locations available for large-scale wind turbines in the southern part of Iraq. Additionally, there are many places in the central and northern parts (Kurdistan Region) for smaller-scale wind turbine placement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Y.; Kawase, Y.
2006-07-01
In order to examine the optimal design and operating parameters, the kinetics of the microbiological reaction and oxygen consumption in composting of waste activated sludge were quantitatively examined. A series of experiments was conducted to determine the optimal operating parameters for aerobic composting of waste activated sludge obtained from the Kawagoe City Wastewater Treatment Plant (Saitama, Japan) using 4 and 20 L laboratory-scale bioreactors. Aeration rate, composition of the compost mixture and height of the compost pile were investigated as the main design and operating parameters. The optimal aerobic composting of waste activated sludge was found at an aeration rate of 2.0 L/min/kg (initial composting mixture dry weight). A compost pile of up to 0.5 m could be operated effectively. A simple model for composting of waste activated sludge in a composting reactor was developed by assuming that the solid phase of the compost mixture is well mixed and that the kinetics of the microbiological reaction are represented by a Monod-type equation. The model predictions fit the experimental data for decomposition of waste activated sludge with an average deviation of 2.14%. Oxygen consumption during composting was also examined using a simplified model in which the oxygen consumption was represented by a Monod-type equation and the axial distribution of oxygen concentration in the composting pile was described by a plug-flow model. The predictions satisfactorily simulated the experimental results for the average maximum oxygen consumption rate during aerobic composting with an average deviation of 7.4%.
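A Monod-type decomposition model of the kind described can be sketched by lumping the biomass into an effective rate constant and integrating the substrate balance. This is a generic illustration with made-up constants, not the fitted kinetics of the study:

```python
def monod_decomposition(S0, k_max, Ks, t_end, dt=0.01):
    """Integrate dS/dt = -k_max * S / (Ks + S) by forward Euler.
    S: degradable substrate; k_max, Ks: Monod constants (illustrative)."""
    S, t = S0, 0.0
    while t < t_end:
        S += dt * (-k_max * S / (Ks + S))
        t += dt
    return S

# With Ks << S the kinetics are nearly zero order: S drops almost linearly
# (from 10 to about 5 over 5 time units at k_max = 1)
print(monod_decomposition(S0=10.0, k_max=1.0, Ks=1e-9, t_end=5.0))
```

As S approaches Ks the rate becomes first order in S, which is what lets a single Monod expression capture both the fast initial phase and the slow tail of composting.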
Reduced and Validated Kinetic Mechanisms for Hydrogen-CO-Air Combustion in Gas Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yiguang Ju; Frederick Dryer
2009-02-07
Rigorous experimental, theoretical, and numerical investigation of various issues relevant to the development of reduced, validated kinetic mechanisms for synthetic gas combustion in gas turbines was carried out, including the construction of new radiation models for combusting flows, improvement of flame speed measurement techniques, measurements and chemical kinetic analysis of H2/CO/CO2/O2/diluent mixtures, revision of the H2/O2 kinetic model to improve flame speed prediction capabilities, and development of a multi-time-scale algorithm to improve computational efficiency in reacting flow simulations.
Plume from the Compressor Research Facility at Wright-Patterson Air Force Base, Ohio
Ludwig, Gary R.
1977-10-01
[OCR-damaged record: only fragments survive, including nomenclature for the mass flux of stack exhaust gas and of the ambient air and stack exhaust gas mixture at a plume cross-section, and the scaling requirement that the horizontal momentum flux in the ambient wind be the same in the model as at full scale.]
Local Solutions in the Estimation of Growth Mixture Models
ERIC Educational Resources Information Center
Hipp, John R.; Bauer, Daniel J.
2006-01-01
Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karmis, Michael; Luttrell, Gerald; Ripepi, Nino
The research activities presented in this report are intended to address the most critical technical challenges pertaining to coal-biomass briquette feedstocks. Several detailed investigations were conducted using a variety of coal and biomass feedstocks on the topics of (1) coal-biomass briquette production and characterization, (2) gasification of coal-biomass mixtures and briquettes, (3) combustion of coal-biomass mixtures and briquettes, and (4) conceptual engineering design and economic feasibility of briquette production. The briquette production studies indicate that strong and durable co-firing feedstocks can be produced by co-briquetting coal and biomass resources commonly available in the United States. It is demonstrated that binderless coal-biomass briquettes produced at optimized conditions exhibit very high strength and durability, which indicates that such briquettes would remain competent under the forces encountered in handling, storage and transportation. The gasification studies demonstrate that coal-biomass mixtures and briquettes are exceptional gasification feedstocks, particularly with regard to the synergistic effects realized during devolatilization of the blended materials. The mixture combustion studies indicate that coal-biomass mixtures are exceptional combustion feedstocks, while the briquette combustion study indicates that the use of blended briquettes reduces NOx, CO2, and CO emissions and requires the fewest changes to the operating conditions of an existing coal-fired power plant. Similar physical durability results were obtained for the pilot-scale briquettes compared with the bench-scale tests. Finally, the conceptual engineering and feasibility analysis for a commercial-scale briquette production facility provides preliminary flowsheet and cost simulations to evaluate the various feedstocks, equipment selection and operating parameters.
Ghorab, Mohamed K; Adeyeye, Moji Christianah
2007-10-19
The aims of the study were to evaluate the effects of high-shear mixer (HSM) granulation process parameters and scale-up on wet mass consistency and granulation characteristics. A mixer torque rheometer (MTR) was employed to evaluate the granulating solvents used (water, isopropanol, and a 1:1 vol/vol mixture of both) based on wet mass consistency. A Gral 25 and a mini-HSM were used for the granulation. The MTR study showed that water significantly enhanced the beta-cyclodextrin (beta CD) binding tendency and the strength of the liquid bridges formed between particles, whereas the isopropanol/water mixture yielded more suitable agglomerates. Mini-HSM granulation with the isopropanol/water mixture (1:1 vol/vol) showed a reduction in the extent of the torque rise with increasing impeller speed, as a result of more breakdown of agglomerates than coalescence. In contrast, increasing the impeller speed of the Gral 25 resulted in higher torque readings, larger granule size, and consequently slower dissolution. This was due to a marked rise in temperature during Gral granulation that reduced the isopropanol/water ratio in the granulating solvent through evaporation and consequently increased the beta CD binding strength. In general, HSM granulation retarded ibuprofen dissolution compared with the physical mixture because of densification and agglomeration. However, a successful HSM granulation scale-up was not achieved due to the difference in the solvent mixture's effect from one scale to the other.
Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.
Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten
2017-10-01
Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
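For intuition about the directional density at the heart of the model: on the unit sphere in R^3 the vMF normalizer has the closed form κ/(4π sinh κ), which makes a minimal sketch easy. The parameters below are arbitrary, and standardized fMRI time series actually live on a much higher-dimensional hypersphere where the normalizer involves Bessel functions:

```python
import numpy as np

def vmf_logpdf3(x, mu, kappa):
    """Log-density of a von Mises-Fisher distribution on the unit sphere
    in R^3; normalizer C(kappa) = kappa / (4 * pi * sinh(kappa))."""
    x = np.asarray(x, float) / np.linalg.norm(x)     # project onto the sphere
    mu = np.asarray(mu, float) / np.linalg.norm(mu)
    log_c = np.log(kappa) - np.log(4.0 * np.pi * np.sinh(kappa))
    return log_c + kappa * float(np.dot(mu, x))

def vmf_mixture_pdf(x, mus, kappas, weights):
    """Density at x under a mixture of vMF components (weights sum to 1)."""
    return sum(w * np.exp(vmf_logpdf3(x, m, k))
               for w, m, k in zip(weights, mus, kappas))

# Density peaks at the mean direction and is lowest at its antipode
mu = [0.0, 0.0, 1.0]
print(vmf_logpdf3([0, 0, 1], mu, kappa=2.0) >
      vmf_logpdf3([0, 0, -1], mu, kappa=2.0))   # True
```

Unlike a Gaussian, the vMF density depends on the data only through the dot product with the mean direction, which is why it is the natural choice once time series are standardized onto the hypersphere.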
Global Gridded Emission Inventories of Pentabrominated Diphenyl Ether (PeBDE)
NASA Astrophysics Data System (ADS)
Li, Yi-Fan; Tian, Chongguo; Yang, Meng; Jia, Hongliang; Ma, Jianmin; Li, Dacheng
2010-05-01
Polybrominated diphenyl ethers (PBDEs) are flame retardants widely used in many everyday products such as cars, furniture, textiles, and other electronic equipment. Commercial PBDEs have three major technical mixtures: penta- (PeBDE), octa- (OBDE) and decabromodiphenyl ethers (DeBDE). PeBDE is a mixture of several BDE congeners, such as BDE-47, -99, and -100, and has been included as a new member of the persistent organic pollutants (POPs) under the 2009 Stockholm Convention. In order to produce gridded emission inventories of PeBDE on a global scale, information on the production, consumption, emission, and physicochemical properties of PeBDE was gathered from published papers, government reports, and internet publications. A methodology to estimate the emissions of PeBDE has been developed, and global gridded emission inventories of two major congeners in the PeBDE mixture, BDE-47 and -99, at a 1° × 1° latitude/longitude resolution for 2005 have been compiled. Using these emission inventories as input data, the Canadian Model for Environmental Transport of Organochlorine Pesticides (CanMETOP) was used to simulate the transport of these chemicals, and their concentrations in air were calculated for 2005. The modeled air concentrations of BDE-47 and -99 were compared with monitoring air concentrations of these two congeners in the same year obtained from renowned international/national monitoring programs, such as the Global Atmospheric Passive Sampling (GAPS) network, the Integrated Atmospheric Deposition Network (IADN), and the Chinese POPs Soil and Air Monitoring Program (SAMP), and significant correlations between the modeled results and the monitoring data were found, indicating the high quality of the produced emission inventories of BDE-47 and -99. Keywords: Pentabrominated Diphenyl Ether (PeBDE), Emission Inventories, Global, Model
Degueurce, Axelle; Clément, Rémi; Moreau, Sylvain; Peu, Pascal
2016-10-01
Agricultural waste is a valuable resource for solid state anaerobic digestion (SSAD) thanks to its high solids content (>15%). Batch-mode SSAD with leachate recirculation is particularly appropriate for such substrates. However, for successful degradation, the leachate must be evenly distributed through the substrate to improve its moisture content. To study the distribution of leachate in agricultural waste, electrical resistivity tomography (ERT) was performed. First, laboratory-scale experiments were conducted to check the reliability of this method for monitoring infiltration of the leachate throughout the solid. Two representative mixtures of agricultural wastes were prepared: a "winter" mixture, with cattle manure, and a "summer" mixture, with cattle manure, wheat straw and hay. The influence of density and water content on electrical resistivity variations was assessed in the two mixtures. An increase in density was found to lead to a decrease in electrical resistivity: at the initial water content, resistivity decreased from 109.7 to 19.5 Ω·m in the "summer" mixture and from 9.8 to 2.7 Ω·m in the "winter" mixture as density increased from 0.134 to 0.269 and from 0.311 to 0.577, respectively. Similarly, resistivity decreased with an increase in water content: at low densities, resistivity dropped from 109.7 to 7.1 Ω·m and from 9.8 to 4.0 Ω·m as water content increased from 64 to 90 w% and from 74 to 93 w% for the "summer" and "winter" mixtures, respectively. Second, time-lapse ERT was performed in a farm-scale SSAD plant to monitor leachate infiltration. Results revealed a very heterogeneous distribution of the leachate in the waste, with two particularly moist areas around the leachate injection holes. Nevertheless, ERT was successfully applied in the SSAD plant and produced a reliable 3D map of leachate infiltration. Copyright © 2016 Elsevier Ltd. All rights reserved.
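The reported trends (resistivity falling as density and water content rise) follow the standard petrophysical intuition captured by Archie's law. The sketch below uses that textbook relation as an illustration only; it is not the empirical fit of the study, and the parameter values are arbitrary:

```python
def archie_resistivity(rho_w, phi, s_w, a=1.0, m=2.0, n=2.0):
    """Archie's law: bulk resistivity from pore-water resistivity rho_w,
    porosity phi, and water saturation s_w (a, m, n are empirical constants)."""
    return a * rho_w * phi ** (-m) * s_w ** (-n)

# Fully saturated, 50% porosity: bulk resistivity is 4x the pore-water value
print(archie_resistivity(rho_w=1.0, phi=0.5, s_w=1.0))   # 4.0
# Wetter material conducts better (lower resistivity), as observed with ERT
print(archie_resistivity(1.0, 0.5, 0.9) < archie_resistivity(1.0, 0.5, 0.6))
```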
PLUME-MoM 1.0: A new integral model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-08-01
In this paper a new integral mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state dynamics of a plume in a 3-D coordinate system, accounting for continuous variability in particle size distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. A proper description of such a multi-particle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows for a description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of parameters of the continuous size distribution of the particles. This is achieved by formulation of fundamental transport equations for the multi-particle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows for the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). 
Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables the investigation of the response of four key output variables (mean and standard deviation of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and standard deviation) characterizing the pyroclastic mixture at the base of the plume. Results show that, for the range of parameters investigated and without considering interparticle processes such as aggregation or comminution, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution. The adopted approach can be potentially extended to the consideration of key particle-particle effects occurring in the plume including particle aggregation and fragmentation.
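The method of moments transports integer moments M_k = ∫ φ^k n(φ) dφ of the grain-size distribution instead of N discrete particle phases; the mean and standard deviation are then recovered from the first three moments. A sketch on the Krumbein φ scale with an illustrative Gaussian distribution (the parameters are arbitrary, not a real eruption's grain-size data):

```python
import numpy as np

# Illustrative Gaussian grain-size distribution on the Krumbein phi scale
phi = np.linspace(-4.0, 10.0, 2001)
d_phi = phi[1] - phi[0]
mu_true, sigma_true = 2.0, 1.5
n = np.exp(-0.5 * ((phi - mu_true) / sigma_true) ** 2)
n /= np.sum(n) * d_phi                      # normalize to unit area

def moment(k):
    """k-th raw moment M_k of the grain-size distribution."""
    return np.sum(phi ** k * n) * d_phi

M0, M1, M2 = moment(0), moment(1), moment(2)
mean = M1 / M0                              # recovered mean grain size (phi)
std = np.sqrt(M2 / M0 - mean ** 2)          # recovered standard deviation
print(mean, std)                            # close to 2.0 and 1.5
```

In PLUME-MoM it is transport equations for these moments, rather than for many discrete size classes, that are integrated along the plume, which is where the computational saving over the N-phase discretization comes from.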
Cluster kinetics model for mixtures of glassformers
NASA Astrophysics Data System (ADS)
Brenskelle, Lisa A.; McCoy, Benjamin J.
2007-10-01
For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.
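The abstract does not spell out the paper's binary mixture relation, so the sketch below uses a generic linear (mole-fraction-weighted) mixing rule for the activation energy inside an Arrhenius-type relaxation time, purely as an illustrative placeholder for how composition-dependent parameters feed the cluster kinetics model:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def relaxation_time(tau0, Ea, T):
    """Arrhenius-type relaxation time: tau = tau0 * exp(Ea / (R * T))."""
    return tau0 * math.exp(Ea / (R * T))

def mixture_Ea(x1, Ea1, Ea2):
    """Linear mole-fraction-weighted mixing of activation energies.
    An illustrative assumption, not the paper's binary mixture relation."""
    return x1 * Ea1 + (1.0 - x1) * Ea2

# Hypothetical pure-component activation energies (J/mol) and a 50:50 blend
Ea_mix = mixture_Ea(0.5, 100e3, 200e3)
print(relaxation_time(1e-12, Ea_mix, 300.0))
```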
The pitcher plant flesh fly exhibits a mixture of patchy and metapopulation attributes.
Rasic, Gordana; Keyghobadi, Nusha
2012-01-01
We investigated the pattern of spatial genetic structure and the extent of gene flow in the pitcher plant flesh fly Fletcherimyia fletcheri, the largest member of the inquiline community of the purple pitcher plant Sarracenia purpurea. Using microsatellite loci, we tested the theoretical predictions of different hypothesized population models (patchy population, metapopulation, or isolated populations) among 11 bogs in Algonquin Provincial Park (Canada). Our results revealed that the pitcher plant flesh fly exhibits a mixture of patchy and metapopulation characteristics. There is significant differentiation among bogs and limited gene flow at larger spatial scales, but local populations do not experience frequent local extinctions/recolonizations. Our findings suggest a strong dispersal ability and stable population sizes in F. fletcheri, providing novel insights into the ecology of this member of a unique ecological microcosm.
Electrostatic shock structures in dissipative multi-ion dusty plasmas
NASA Astrophysics Data System (ADS)
Elkamash, I. S.; Kourakis, I.
2018-06-01
A comprehensive analytical model is introduced for shock excitations in dusty bi-ion plasma mixtures, taking into account collisionality and kinematic (fluid) viscosity. A multicomponent plasma configuration is considered, consisting of positive ions, negative ions, electrons, and a massive charged component in the background (dust). The ionic dynamical scale is focused upon; thus, electrons are assumed to be thermalized, while the dust is stationary. A dissipative hybrid Korteweg-de Vries/Burgers equation is derived. An analytical solution is obtained, in the form of a shock structure (a step-shaped function for the electrostatic potential, or an electric field pulse) whose maximum amplitude in the far downstream region decays in time. The effect of relevant plasma configuration parameters, in addition to dissipation, is investigated. Our work extends earlier studies of ion-acoustic type shock waves in pure (two-component) bi-ion plasma mixtures.
Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies
NASA Astrophysics Data System (ADS)
Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu
2015-09-01
Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves solving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially constrained and partially latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database of remotely sensed observations collected from Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model.
A phase field approach for multicellular aggregate fusion in biofabrication.
Yang, Xiaofeng; Sun, Yi; Wang, Qi
2013-07-01
We present a modeling and computational approach to study the fusion of multicellular aggregates during tissue and organ fabrication, which forms the foundation for the scaffold-less biofabrication of tissues and organs known as bioprinting. In this phase field method, multicellular aggregates are modeled as mixtures of multiphase complex fluids whose phase mixing or separation is governed by interphase force interactions, mimicking cell-cell interaction within the aggregates, and by intermediate-range interactions mediated by the surrounding hydrogel. Material transport in the mixture is dictated by hydrodynamics as well as by forces due to the interphase interactions. In a multicellular aggregate system with a fixed number of cells and a fixed amount of hydrogel medium, the effects of cell differentiation, proliferation, and death are neglected in the current model (though they can readily be included), and the interaction between different components is dictated by the interaction energy between cell and cell as well as between cell and medium particles, respectively. The modeling approach is applicable to transient simulations of fusion of cellular aggregate systems at the time and length scales appropriate to biofabrication. Numerical experiments are presented to demonstrate fusion and cell sorting during tissue and organ maturation processes in biofabrication.
Green hypergolic combination: Diethylenetriamine-based fuel and hydrogen peroxide
NASA Astrophysics Data System (ADS)
Kang, Hongjae; Kwon, Sejin
2017-08-01
The present research addresses a green hypergolic combination to replace toxic hypergolic propellant combinations. Hydrogen peroxide was selected as the green oxidizer. A novel recipe for a non-toxic hypergolic fuel (Stock 3) is suggested. Sodium borohydride was blended into a mixture of energetic hydrocarbon solvents as an ignition source for hypergolic ignition. The main ingredient of the mixture was diethylenetriamine. By mixing some tetrahydrofuran into the diethylenetriamine, the mixture became more flammable and volatile. The Stock 3 fuel mixture remained stable for four months in a lab-scale storability test. The hypergolicity of the green combination was verified through a simple drop test. With the toxic hypergolic combination MMH/NTO as the reference, the green combination achieves about 96.7% of the equilibrium specific impulse and about 105.7% of the density specific impulse in theoretical performance. The applicability of the green hypergolic combination was successfully confirmed through static hot-fire tests using a 500 N scale hypergolic thruster.
Thermal transitions, pseudogap behavior, and BCS-BEC crossover in Fermi-Fermi mixtures
NASA Astrophysics Data System (ADS)
Karmakar, Madhuparna
2018-03-01
We study the mass-imbalanced Fermi-Fermi mixture within the framework of a two-dimensional lattice fermion model. Based on the thermodynamic and species-dependent quasiparticle behavior, we map out the finite-temperature phase diagram of this system and show that, unlike in the balanced Fermi superfluid, there are now two different pseudogap regimes, PG-I and PG-II. While within the PG-I regime both fermionic species are pseudogapped, PG-II corresponds to the regime where the pseudogap feature survives only in the light species. We believe that the single-particle spectral features that we discuss in this paper are observable through species-resolved radio-frequency spectroscopy and momentum-resolved photoemission spectroscopy measurements on systems such as the 6Li-40K mixture. We further investigate the interplay between population and mass imbalance and report that, at a fixed population imbalance, the BCS-BEC crossover in a Fermi-Fermi mixture requires a critical interaction (Uc) for the realization of the uniform superfluid state. The effect of mass imbalance on the exotic Fulde-Ferrell-Larkin-Ovchinnikov superfluid phase is probed in detail in terms of the thermodynamic and quasiparticle behavior of this phase. It is observed that, in spite of the s-wave symmetry of the pairing field, a nodal superfluid gap is realized in the Larkin-Ovchinnikov regime. Our results on the various thermal scales and regimes are expected to serve as benchmarks for experimental observations on the 6Li-40K mixture.
Gauthier, Patrick T; Norwood, Warren P; Prepas, Ellie E; Pyle, Greg G
2014-09-01
Mixtures of metals and polycyclic aromatic hydrocarbons (PAHs) occur ubiquitously in aquatic environments, yet relatively little is known regarding their combined toxicities. Emerging reports investigating additive mortality in metal-PAH mixtures have indicated that more-than-additive effects are as common as strictly additive effects, raising concern for ecological risk assessment typically based on the summation of individual toxicities. Moreover, the current separation of focus between in vivo and in vitro studies, and between fine- and coarse-scale endpoints, creates uncertainty regarding the mechanisms of co-toxicity involved in more-than-additive effects on whole organisms. Drawing from literature on metal and PAH toxicity in bacteria, protozoa, invertebrates, fish, and mammalian models, this review outlines several key mechanistic interactions likely to promote more-than-additive toxicity in metal-PAH mixtures: the deleterious effects of PAHs on membrane integrity and permeability to metals, the potential for metal-PAH complexation, the inhibitory nature of metals towards the detoxification of PAHs via the cytochrome P450 pathway, the inhibitory nature of PAHs towards the detoxification of metals via metallothionein, and the potentiated production of reactive oxygen species (ROS) in certain metal (e.g., Cu) and PAH (e.g., phenanthrenequinone) mixtures. Moreover, the mutual inhibition of detoxification suggests the possibility of positive feedback among these mechanisms. The individual toxicities and interactive aspects of contaminant transport, detoxification, and the production of ROS are discussed herein.
Generation of a mixture model ground-motion prediction equation for Northern Chile
NASA Astrophysics Data System (ADS)
Haendel, A.; Kuehn, N. M.; Scherbaum, F.
2012-12-01
In probabilistic seismic hazard analysis (PSHA), empirically derived ground motion prediction equations (GMPEs) are usually applied to estimate the ground motion at a site of interest as a function of source-, path- and site-related predictor variables. Because GMPEs are derived from limited datasets, they are not expected to give entirely accurate estimates or to reflect the whole range of possible future ground motion, thus giving rise to epistemic uncertainty in the hazard estimates. This is especially true for regions without an indigenous GMPE, where foreign models have to be applied. The choice of appropriate GMPEs can then dominate the overall uncertainty in hazard assessments. In order to quantify this uncertainty, the set of ground motion models used in a modern PSHA has to capture (in SSHAC language) the center, body, and range of the possible ground motion at the site of interest. This was traditionally done within a logic tree framework in which existing (or only slightly modified) GMPEs occupy the branches of the tree and the branch weights describe the degree of belief of the analyst in their applicability. This approach invites the problem of combining GMPEs of very different quality and hence of potentially overestimating epistemic uncertainty. Some recent hazard analyses have therefore resorted to using a small number of high-quality GMPEs as backbone models, from which the full distribution of GMPEs for the logic tree (to capture the full range of possible ground motion uncertainty) was subsequently generated by scaling (in a general sense). In the present study, a new approach is proposed to determine an optimized backbone model as weighted components of a mixture model. In doing so, each GMPE is assumed to reflect the generation mechanism (e.g., in terms of stress drop, propagation properties, etc.) for at least a fraction of possible ground motions in the area of interest.
The combination of different models into a mixture model (which is learned from observed ground motion data in the region of interest) then transfers information from other regions to the region where the observations were made, in a data-driven way. The backbone model is learned by comparing the model predictions to observations of the target region: for each observation and each model, the likelihood of the observation given a certain GMPE is calculated. Mixture weights can then be assigned using the expectation-maximization (EM) algorithm or Bayesian inference. The new method is used to generate a backbone reference model for Northern Chile, an area for which no dedicated GMPE exists. Strong motion recordings from the target area are used to learn the backbone model from a set of 10 GMPEs developed for different subduction zones of the world. Mixture models are formed individually for interface and intraslab type events. The ability of the resulting backbone models to describe ground motions in Northern Chile is then compared to the predictive performance of their constituent models.
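The EM weight assignment described above can be sketched in a few lines: with the component GMPEs held fixed, only the mixture weights are updated from a matrix of per-observation component likelihoods. This is an illustrative sketch on synthetic likelihoods, not the study's code; `em_mixture_weights` and the toy data are assumptions.

```python
import numpy as np

def em_mixture_weights(lik, n_iter=200):
    """EM updates for mixture weights, given a fixed (N x K) matrix of
    per-observation component likelihoods p(x_i | model_k)."""
    n, k = lik.shape
    w = np.full(k, 1.0 / k)          # start from uniform weights
    for _ in range(n_iter):
        resp = lik * w               # E-step: unnormalised responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)        # M-step: average responsibility per model
    return w

# Toy example: 3 "GMPEs" scored on 100 observations; model 0 fits best.
rng = np.random.default_rng(0)
lik = rng.uniform(0.1, 1.0, size=(100, 3))
lik[:, 0] *= 3.0                     # model 0 is systematically more likely
w = em_mixture_weights(lik)
print(w)                             # weights sum to 1; model 0 dominates
```

With real data, `lik[i, k]` would be the GMPE-k likelihood of observed ground motion i, and the converged weights define the backbone mixture.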
Discrete element modelling of bedload transport
NASA Astrophysics Data System (ADS)
Loyer, A.; Frey, P.
2011-12-01
Discrete element modelling (DEM) has been widely used in solid mechanics and in granular physics. In this type of modelling, each individual particle is taken into account and intergranular interactions are modelled with simple laws (e.g. Coulomb friction); gravity and contact forces then determine the dynamical behaviour of the system. DEM makes it possible to model configurations and to access parameters not directly available in laboratory experimentation, hence the term "numerical experimentation" sometimes used to describe it. DEM was used to model bedload transport experiments performed at the particle scale with spherical glass beads in a steep and narrow flume. Bedload is the coarser material transported on the bed of stream channels, and it has a great geomorphic impact. The physical processes ruling bedload transport, and more generally coarse-particle/fluid systems, are poorly known, arguably because granular interactions have been somewhat neglected. An existing DEM code (PFC3D) already computing granular interactions was used, and we implemented basic hydrodynamic forces to model the fluid interactions (buoyancy, drag, lift). The idea was to use the minimum number of ingredients needed to match the experimental results. Experiments were performed with one-size and two-size mixtures of coarse spherical glass beads entrained by a shallow, turbulent and supercritical water flow down a steep channel with a mobile bed. The particle diameters were 4 and 6 mm, the channel width was 6.5 mm (about the same as the coarser particles) and the channel inclination was typically 10%. The water flow rate and the particle rate were kept constant at the upstream entrance and adjusted to obtain bedload transport equilibrium. Flows were filmed from the side by a high-speed camera, and image processing algorithms made it possible to determine the position, velocity and trajectory of both smaller and coarser particles.
Modelled and experimental particle velocity and concentration depth profiles were compared in the case of the one-size mixture. The turbulent fluid velocity profile was prescribed and attached to the variable upper bedline. Provided the upper bedline was calculated with a refined space and time resolution, a fair agreement between DEM and experiments was reached. Experiments with two-size mixtures were designed to study vertical grain size sorting or segregation patterns. Sorting is arguably the reason why the predictive capacity of bedload formulations remains so poor. Modelling of the two-size mixture was also performed and gave promising qualitative results.
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, while regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
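For contrast with the regression-mixture approach, the interaction model the paper compares against can be sketched directly; here class membership is treated as observed, which is exactly what regression mixtures avoid assuming. The data, coefficients, and seed are illustrative inventions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.standard_normal(n)
cls = rng.integers(0, 2, n)                     # class indicator: slope 0.2 vs 1.0
y = np.where(cls == 0, 0.2, 1.0) * x + 0.3 * rng.standard_normal(n)

# If class membership were observed, the interaction model
#   y = b0 + b1*x + b2*c + b3*(x*c)
# recovers the slope difference between classes directly via b3.
X = np.column_stack([np.ones(n), x, cls, x * cls])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # b1 near 0.2 (class-0 slope), b3 near 0.8 (slope difference)
```

A regression mixture would instead estimate the class memberships as latent, which is what makes it useful when no observed moderator defines the classes.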
Nonlinear Structured Growth Mixture Models in Mplus and OpenMx
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne
2010-01-01
Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) combine latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…
The Potential of Growth Mixture Modelling
ERIC Educational Resources Information Center
Muthen, Bengt
2006-01-01
The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
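The equivalence stated above can be checked numerically for a finite mixing distribution over Poisson rates: remapping the weights by q*_j ∝ q_j(1 − e^(−λ_j)) turns the zero-truncated mixture into an equal mixture of zero-truncated densities. This is a minimal sketch; the specific rates and weights are arbitrary.

```python
import math

def pois(x, lam):
    """Poisson pmf."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

lams = [0.5, 2.0, 5.0]      # component rates (arbitrary)
q = [0.2, 0.5, 0.3]         # mixing weights of the untruncated mixture

# Model A: truncate the whole mixture at zero.
p0 = sum(qj * pois(0, lj) for qj, lj in zip(q, lams))
def trunc_mixture(x):
    return sum(qj * pois(x, lj) for qj, lj in zip(q, lams)) / (1 - p0)

# Model B: mixture of zero-truncated Poissons with remapped weights.
qs = [qj * (1 - math.exp(-lj)) for qj, lj in zip(q, lams)]
total = sum(qs)
qs = [w / total for w in qs]
def mixture_trunc(x):
    return sum(wj * pois(x, lj) / (1 - math.exp(-lj)) for wj, lj in zip(qs, lams))

# The two densities agree pointwise for x = 1, 2, 3, ...
print([round(trunc_mixture(x) - mixture_trunc(x), 15) for x in range(1, 4)])
```

The normalising constant of the remapped weights equals 1 − p0, which is why the two forms coincide term by term.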
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moges, Edom; Demissie, Yonas; Li, Hong-Yi
2016-04-01
In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied for two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach outperforms the single model for the Guadalupe catchment, where multiple dominant processes are evident in the diagnostic measures, whereas the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture it.
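The gating idea behind an HME, an indicator variable softly switching between expert structures, can be sketched in a few lines. The experts, gate shape, and indicator below are invented for illustration and bear no relation to the HBV structures used in the study.

```python
import numpy as np

# Minimal mixture-of-experts sketch: two hypothetical "expert" responses
# blended by a soft gate driven by an indicator variable z
# (e.g. antecedent wetness); all functional forms are assumptions.
def expert_dry(x):
    return 0.1 * x            # hypothetical dry-state response

def expert_wet(x):
    return 0.8 * x + 2.0      # hypothetical wet-state response

def gate(z):
    """Soft gate in (0, 1): weight given to the wet expert."""
    return 1.0 / (1.0 + np.exp(-(z - 0.5) * 10.0))

x = np.linspace(0.0, 10.0, 5)
for z in (0.1, 0.9):
    g = gate(z)
    y = (1 - g) * expert_dry(x) + g * expert_wet(x)
    print(z, np.round(y, 2))   # prediction shifts between expert regimes
```

In an HME proper, the gate parameters and the experts are fitted jointly (e.g. by EM), with the gate deciding which expert dominates in each regime.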
Development of PBPK Models for Gasoline in Adult and ...
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
The thinning of viscous liquid threads.
NASA Astrophysics Data System (ADS)
Castrejon-Pita, J. Rafael; Castrejon-Pita, Alfonso A.; Hutchings, Ian M.
2012-11-01
The thinning neck of dripping droplets is studied experimentally for viscous Newtonian fluids. High-speed imaging is used to measure the minimum neck diameter in terms of the time τ to breakup. Mixtures of water and glycerol with viscosities ranging from 20 to 363 mPa s are used to model the Newtonian behavior. The results show that the transition from the potential-flow to the inertial-viscous regime occurs at the predicted value of ~Oh². Before this transition the neck contraction rate follows the inviscid scaling law ~τ^(2/3); after the transition, the neck thinning tends towards the linear viscous scaling law ~τ. Project supported by the EPSRC-UK (EP/G029458/1) and Cambridge-KACST.
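As a rough illustration of the Ohnesorge scaling invoked above, Oh = μ/√(ρσR) compares viscous forces with inertia and surface tension. The property values and length scale below are assumptions chosen for illustration, not the parameters of the cited experiments.

```python
import math

def ohnesorge(mu, rho, sigma, R):
    """Ohnesorge number Oh = mu / sqrt(rho * sigma * R)."""
    return mu / math.sqrt(rho * sigma * R)

# e.g. a water-glycerol mixture with assumed properties:
# mu = 100 mPa s, rho = 1200 kg/m^3, sigma = 65 mN/m, and a
# characteristic neck scale R = 1 mm (all illustrative values).
Oh = ohnesorge(100e-3, 1200.0, 65e-3, 1e-3)
print(Oh)
```

Larger viscosities push Oh up, which (per the scaling above) moves the potential-flow to inertial-viscous transition closer to breakup.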
Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S
2007-07-09
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures
NASA Astrophysics Data System (ADS)
Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.
2017-10-01
Two-step approximate models of the chemical kinetics of detonation combustion are developed for (i) a one-fuel mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air); the models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and in a stoichiometric methane-oxygen mixture. The dominant size of the detonation cell, determined in the calculations, is in good agreement with all known experimental data.
ERIC Educational Resources Information Center
Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk
2008-01-01
Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…
RELATIONSHIPS BETWEEN LABORATORY AND PILOT-SCALE COMBUSTION OF SOME CHLORINATED HYDROCARBONS
Factors governing the occurrence of trace amounts of residual organic substance emissions (ROSEs) in full-scale incinerators are not fully understood. Pilot-scale spray combustion experiments involving some liquid chlorinated hydrocarbons (CHCs) and their dilute mixtures with hy...
NASA Astrophysics Data System (ADS)
Guillevic, Myriam; Vollmer, Martin K.; Wyss, Simon A.; Leuenberger, Daiana; Ackermann, Andreas; Pascale, Céline; Niederhauser, Bernhard; Reimann, Stefan
2018-06-01
For many years, the comparability of measurements obtained with various instruments within a global-scale air quality monitoring network has been ensured by anchoring all results to a unique suite of reference gas mixtures, also called a primary calibration scale. Such suites of reference gas mixtures are usually prepared and then stored over decades in pressurised cylinders by a designated laboratory. For the halogenated gases which have been measured over the last 40 years, this anchoring method is highly relevant as measurement reproducibility is currently much better (< 1 %, k = 2 or 95 % confidence interval) than the expanded uncertainty of a reference gas mixture (usually > 2 %). Meanwhile, newly emitted halogenated gases are already measured in the atmosphere at pmol mol-1 levels, while still lacking an established reference standard. For compounds prone to adsorption on material surfaces, it is difficult to evaluate mixture stability and thus variations in the molar fractions over time in cylinders at pmol mol-1 levels. To support atmospheric monitoring of halogenated gases, we create new primary calibration scales for SF6 (sulfur hexafluoride), HFC-125 (pentafluoroethane), HFO-1234yf (or HFC-1234yf, 2,3,3,3-tetrafluoroprop-1-ene), HCFC-132b (1,2-dichloro-1,1-difluoroethane) and CFC-13 (chlorotrifluoromethane). The preparation method, newly applied to halocarbons, is dynamic and gravimetric: it is based on the permeation principle followed by dynamic dilution and cryo-filling of the mixture in cylinders. The obtained METAS-2017 primary calibration scales are made of 11 cylinders containing these five substances at near-ambient and slightly varying molar fractions. Each prepared molar fraction is traceable to the realisation of SI units (International System of Units) and is assigned an uncertainty estimate following international guidelines (JCGM, 2008), ranging from 0.6 % for SF6 to 1.3 % (k = 2) for all other substances. The smallest uncertainty, obtained for SF6, is mostly explained by the high substance purity level in the permeator and the low SF6 contamination of the matrix gas. The measured internal consistency of the suite ranges from 0.23 % for SF6 to 1.1 % for HFO-1234yf (k = 1).
The expanded uncertainty after verification (i.e. measurement of the cylinders against each other) ranges from 1 to 2 % (k = 2). This work combines the advantages of SI-traceable reference gas mixture preparation with a calibration scale system for its use as an anchor by a monitoring network. Such a combined system supports maximising compatibility within the network while linking all reference values to the SI and assigning carefully estimated uncertainties. For SF6, comparison of the METAS-2017 calibration scale with the scale prepared by SIO (Scripps Institution of Oceanography, SIO-05) shows excellent concordance, the ratio METAS-2017 / SIO-05 being 1.002. For HFC-125, the METAS-2017 calibration scale is measured as 7 % lower than SIO-14; for HFO-1234yf, it is 9 % lower than Empa-2013. No other scale for HCFC-132b was available for comparison. Finally, for CFC-13 the METAS-2017 primary calibration scale is 5 % higher than the interim calibration scale (Interim-98) that was in use within the Advanced Global Atmospheric Gases Experiment (AGAGE) network before adoption of the scale established in the present work.
AMELIORATION OF ACID MINE DRAINAGE USING REACTIVE MIXTURES IN PERMEABLE REACTIVE BARRIERS
The generation and release of acidic drainage from mine wastes is an environmental problem of international scale. The use of zero-valent iron and/or iron mixtures in subsurface Permeable Reactive Barriers (PRB) presents a possible passive alternative for remediating acidic grou...
NASA Astrophysics Data System (ADS)
Leys, Jan; Losada-Pérez, Patricia; Cordoyiannis, George; Cerdeiriña, Claudio A.; Glorieux, Christ; Thoen, Jan
2010-03-01
Detailed results are reported for the dielectric constant ɛ as a function of temperature, concentration, and frequency near the upper critical point of the binary liquid mixture nitrobenzene-tetradecane. The data have been analyzed in the context of the recently developed concept of complete scaling. It is shown that the amplitude of the low frequency critical Maxwell-Wagner relaxation (with a relaxation frequency around 10 kHz) along the critical isopleth is consistent with the predictions of a droplet model for the critical fluctuations. The temperature dependence of ɛ in the homogeneous phase can be well described with a combination of a (1-α) power law term (with α the heat capacity critical exponent) and a linear term in reduced temperature with the Ising value for α. For the proper description of the temperature dependence of the difference Δɛ between the two coexisting phases below the critical temperature, it turned out that good fits with the Ising value for the order parameter exponent β required the addition of a corrections-to-scaling contribution or a linear term in reduced temperature. Good fits to the dielectric diameter ɛd require a (1-α) power law term, a 2β power law term (in the past considered as spurious), and a linear term in reduced temperature, consistent with complete scaling.
Geometrical Description in Binary Composites and Spectral Density Representation
Tuncer, Enis
2010-01-01
In this review, the dielectric permittivity of dielectric mixtures is discussed in view of the spectral density representation method. A distinct representation is derived for predicting the dielectric properties, permittivities ε, of mixtures. The presentation of the dielectric properties is based on a scaled permittivity approach, ξ = (εe - εm)(εi - εm)^(-1), where the subscripts e, m and i denote the dielectric permittivities of the effective, matrix and inclusion media, respectively [Tuncer, E. J. Phys.: Condens. Matter 2005, 17, L125]. This novel representation transforms the spectral density formalism to a form similar to the distribution of relaxation times method of dielectric relaxation. Consequently, I propose that any dielectric relaxation formula, i.e., the Havriliak-Negami empirical dielectric relaxation expression, can be adopted as a scaled permittivity. The presented scaled permittivity representation has potential to be improved and implemented into the existing data analyzing routines for dielectric relaxation; however, the information to extract would be the topological/morphological description in mixtures. To arrive at the description, one needs to know the dielectric properties of the constituents and the composite prior to the spectral analysis. To illustrate the strength of the representation and confirm the proposed hypothesis, the Landau-Lifshitz/Looyenga (LLL) [Looyenga, H. Physica 1965, 31, 401] expression is selected. The structural information of a mixture obeying LLL is extracted for different volume fractions of phases. Both an in-house computational tool based on the Monte Carlo method to solve inverse integral transforms and the proposed empirical scaled permittivity expression are employed to estimate the spectral density function of the LLL expression.
The estimated spectral functions for mixtures with different inclusion concentration compositions show similarities; they are composed of a couple of bell-shaped distributions, with coinciding peak locations but different heights. It is speculated that the coincidence in the peak locations is an absolute illustration of the self-similar fractal nature of the mixture topology (structure) created with the LLL expression. Consequently, the spectra are not altered significantly with increased filler concentration level—they exhibit a self-similar spectral density function for different concentration levels. Last but not least, the estimated percolation strengths also confirm the fractal nature of the systems characterized by the LLL mixture expression. It is concluded that the LLL expression is suitable for complex composite systems that have hierarchical order in their structure. These observations confirm the finding in the literature.
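The scaled-permittivity bookkeeping used in the review can be illustrated with the LLL mixture rule itself: this minimal sketch (function name assumed) evaluates the effective permittivity from the Looyenga expression and the corresponding ξ, which runs from 0 (pure matrix) to 1 (pure inclusion).

```python
# Landau-Lifshitz/Looyenga effective permittivity and the scaled
# permittivity xi = (eps_e - eps_m) / (eps_i - eps_m) from the review.
def looyenga(eps_m, eps_i, phi):
    """LLL rule: eps_e^(1/3) = (1-phi)*eps_m^(1/3) + phi*eps_i^(1/3)."""
    return ((1 - phi) * eps_m ** (1 / 3) + phi * eps_i ** (1 / 3)) ** 3

eps_m, eps_i = 2.0, 10.0      # illustrative matrix/inclusion permittivities
for phi in (0.0, 0.2, 0.5, 1.0):
    eps_e = looyenga(eps_m, eps_i, phi)
    xi = (eps_e - eps_m) / (eps_i - eps_m)
    print(phi, round(eps_e, 3), round(xi, 3))
```

In the review's workflow, ξ measured (or modelled) as a function of frequency is what feeds the spectral density analysis.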
Johnson, B. Thomas
1989-01-01
Traditional single species toxicity tests and multiple component laboratory-scaled microcosm assays were combined to assess the toxicological hazard of diesel oil, a model complex mixture, to a model aquatic environment. The immediate impact of diesel oil dosed on a freshwater community was studied in a model pond microcosm over 14 days: a 7-day dosage and a 7-day recovery period. A multicomponent laboratory microcosm was designed to monitor the biological effects of diesel oil (1·0 mg litre−1) on four components: water, sediment (soil + microbiota), plants (aquatic macrophytes and algae), and animals (zooplanktonic and zoobenthic invertebrates). To determine the sensitivity of each part of the community to diesel oil contamination and how this model community recovered when the oil dissipated, limnological, toxicological, and microbiological variables were considered. Our model revealed these significant occurrences during the spill period: first, a community production and respiration perturbation, characterized in the water column by a decrease in dissolved oxygen and redox potential and a concomitant increase in alkalinity and conductivity; second, marked changes in microbiota of sediments that included bacterial heterotrophic dominance and a high heterotrophic index (0·6), increased bacterial productivity, and marked increases in numbers of saprophytic bacteria (10 x) and bacterial oil degraders (1000 x); and third, column water that was acutely toxic (100% mortality) to two model taxa: Selenastrum capricornutum and Daphnia magna. Following the simulated clean-up procedure to remove the oil slick, the recovery period of this freshwater microcosm was characterized by a return to control values. This experimental design emphasized monitoring toxicological responses in the aquatic microcosm; hence, we proposed the term ‘toxicosm’ to describe this approach to aquatic toxicological hazard evaluation.
The toxicosm as a valuable toxicological tool for screening aquatic contaminants was demonstrated using diesel oil as a model complex mixture.
Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis
2005-07-25
analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery… spatial resolution permits different materials to be present in the area covered by a single pixel. The linear mixture model says that a pixel reflectance… in r. In the linear mixture model, r is considered as the linear mixture of m1, m2, …, mP as

r = Mα + n (1)

where n is included to account for…
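The linear mixture model of Eq. (1) can be sketched numerically. The band count, endmember signatures, and abundances below are illustrative values, and unconstrained least-squares inversion stands in for the constrained factorization the paper actually investigates:

```python
import numpy as np

# Linear mixture model: a pixel reflectance r is modeled as r = M @ alpha + n,
# where the columns of M are endmember signatures m1..mP, alpha holds the
# abundances, and n is additive noise. All numbers here are illustrative.
rng = np.random.default_rng(0)

M = rng.random((50, 3))                 # 50 spectral bands, 3 endmembers
alpha_true = np.array([0.5, 0.3, 0.2])  # hypothetical abundances
r = M @ alpha_true + 0.001 * rng.standard_normal(50)

# Unconstrained least-squares unmixing: alpha_hat = argmin ||r - M alpha||^2
alpha_hat, *_ = np.linalg.lstsq(M, r, rcond=None)
print(alpha_hat)  # close to [0.5, 0.3, 0.2]
```

A constrained variant would additionally enforce nonnegativity and a sum-to-one condition on the recovered abundances, which is the focus of the work summarized above.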
Microstructure and hydrogen bonding in water-acetonitrile mixtures.
Mountain, Raymond D
2010-12-16
The connection of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures for six, rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.
Observation of quantum criticality with ultracold atoms in optical lattices
NASA Astrophysics Data System (ADS)
Zhang, Xibo
As biological problems become more complex and data grow at a rate much faster than that of computer hardware, new and faster algorithms are required. This dissertation investigates computational problems arising in two such fields, comparative genomics and epigenomics, and employs a variety of computational techniques to address the problems. One fundamental question in the study of chromosome evolution is whether rearrangement breakpoints happen at random positions or along certain hotspots. We investigate the breakpoint reuse phenomenon, and show analyses that support the more recently proposed fragile breakage model as opposed to the conventional random breakage models for chromosome evolution. The identification of syntenic regions between chromosomes forms the basis for studies of genome architectures, comparative genomics, and evolutionary genomics. Previous synteny block reconstruction algorithms could not be scaled to the large number of mammalian genomes being sequenced; neither did they address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and the evolutionary history of large-scale duplications prevalent in plant genomes. We present a new unified synteny block generation algorithm based on the A-Bruijn graph framework that overcomes these shortcomings. In epigenome sequencing, a sample may contain a mixture of epigenomes, and there is a need to resolve the distinct methylation patterns from the mixture. Many sequencing applications, such as haplotype inference for diploid or polyploid genomes and metagenomic sequencing, share a similar objective: to infer a set of distinct assemblies from reads that are sequenced from a heterogeneous sample and subsequently aligned to a reference genome. We model the problem from both combinatorial and statistical angles. First, we describe a theoretical framework.
A linear-time algorithm is then given to resolve a minimum number of assemblies that are consistent with all reads, substantially improving on previous algorithms. An efficient algorithm is also described to determine a set of assemblies that is consistent with a maximum subset of the reads, a previously untreated problem. We then prove that allowing nested reads or permitting mismatches between reads and their assemblies renders these problems NP-hard. Second, we describe a mixture model-based approach and apply it to the detection of allele-specific methylation.
A numerical model to simulate foams during devolatilization of polymers
NASA Astrophysics Data System (ADS)
Khan, Irfan; Dixit, Ravindra
2014-11-01
Customers often demand that the polymers sold in the market have low levels of volatile organic compounds (VOC). Some of the processes for making polymers involve the removal of volatiles to levels of parts per million (devolatilization). During this step the volatiles are phase separated out of the polymer through a combination of heating and applying lower pressure, creating a foam with the pure polymer in the liquid phase and the volatiles in the gas phase. The efficiency of the devolatilization process depends on accurately predicting the onset of solvent phase change in the polymer-volatiles mixture based on the processing conditions. However, due to the complex relationship between the polymer properties and the processing conditions, this is not trivial. In this work, a bubble scale model is coupled with a bulk scale transport model to simulate the processing conditions of polymer devolatilization. The bubble scale model simulates nucleation and bubble growth based on classical nucleation theory and the popular ``influence volume approach.'' As such, it provides the bubble size distribution and number density inside the polymer at any given time and position. This information is used to predict the bulk properties of the polymer and its behavior under the applied processing conditions. Initial results of this modeling approach will be presented.
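The classical nucleation theory step mentioned above can be illustrated with the standard energy-barrier expression for homogeneous bubble nucleation; the surface tension, supersaturation pressure, and kinetic prefactor below are assumed, illustrative values, not the paper's:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_rate(sigma, delta_p, temp, j0=1e30):
    """Classical nucleation theory estimate of the bubble nucleation rate.

    sigma   : surface tension of the polymer/volatile mixture (N/m)
    delta_p : supersaturation pressure difference (Pa)
    temp    : temperature (K)
    j0      : kinetic prefactor (1/m^3/s); an assumed, illustrative value
    """
    # Energy barrier for forming a critical bubble nucleus
    dg_star = 16.0 * math.pi * sigma**3 / (3.0 * delta_p**2)
    return j0 * math.exp(-dg_star / (KB * temp))

# Lowering the pressure (larger delta_p) sharply increases nucleation:
low = nucleation_rate(sigma=0.01, delta_p=5e6, temp=500.0)
high = nucleation_rate(sigma=0.01, delta_p=1e7, temp=500.0)
print(low < high)  # True
```

The extreme sensitivity of the rate to the pressure difference is what makes the onset prediction discussed above so dependent on the processing conditions.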
NASA Astrophysics Data System (ADS)
Dehghan Banadaki, Arash
Predicting the ultimate performance of asphalt concrete under realistic loading conditions is the main key to developing better-performing materials, designing long-lasting pavements, and performing reliable lifecycle analysis for pavements. The fatigue performance of asphalt concrete depends on the mechanical properties of the constituent materials, namely asphalt binder and aggregate. This dependent link between performance and mechanical properties is extremely complex, and experimental techniques often are used to try to characterize the performance of hot mix asphalt. However, given the seemingly uncountable number of mixture designs and loading conditions, it is simply not economical to try to understand and characterize the material behavior solely by experimentation. It is well known that analytical and computational modeling methods can be combined with experimental techniques to reduce the costs associated with understanding and characterizing the mechanical behavior of the constituent materials. This study aims to develop a multiscale micromechanical lattice-based model to predict cracking in asphalt concrete using component material properties. The proposed algorithm, while capturing different phenomena for different scales, also minimizes the need for laboratory experiments. The developed methodology builds on a previously developed lattice model and the viscoelastic continuum damage model to link the component material properties to the mixture fatigue performance. The resulting lattice model is applied to predict the dynamic modulus mastercurves for different scales. A framework for capturing the so-called structuralization effects is introduced that significantly improves the accuracy of the modulus prediction. Furthermore, air voids are added to the model to help capture this important micromechanical feature that affects the fatigue performance of asphalt concrete as well as the modulus value. 
The effects of rate dependency are captured by implementing the viscoelastic fracture criterion. In the end, an efficient cyclic loading framework is developed to evaluate the damage accumulation in the material that is caused by long-sustained cyclic loads.
NASA Astrophysics Data System (ADS)
Angst, Sebastian; Engelke, Lukas; Winterer, Markus; Wolf, Dietrich E.
2017-06-01
Densification of (semi-)conducting particle agglomerates with the help of an electrical current is much faster and more energy efficient than traditional thermal sintering or powder compression. Therefore, this method is becoming increasingly common among experimentalists, engineers, and industry. The mechanisms at work at the particle scale are highly complex because of the mutual feedback between current and pore structure. This paper extends previous modelling approaches in order to study mixtures of particles of two different materials. In addition to the delivery of Joule heat throughout the sample, especially in current bottlenecks, thermoelectric effects must be taken into account. They lead to segregation or spatial correlations in the particle arrangement. Various model extensions are possible and will be discussed.
The propulsive capability of explosives heavily loaded with inert materials
NASA Astrophysics Data System (ADS)
Loiseau, J.; Georges, W.; Frost, D. L.; Higgins, A. J.
2018-01-01
The effect of inert dilution on the accelerating ability of high explosives for both grazing and normal detonations was studied. The explosives considered were: (1) neat, amine-sensitized nitromethane (NM), (2) packed beds of glass, steel, or tungsten particles saturated with amine-sensitized NM, (3) NM gelled with PMMA containing dispersed glass microballoons, (4) NM gelled with PMMA containing glass microballoons and steel particles, and (5) C-4 containing varying mass fractions of glass or steel particles. Flyer velocity was measured via photonic Doppler velocimetry, and the results were analysed using a Gurney model augmented to include the influence of the diluent. Reduction in accelerating ability with increasing dilution for the amine-sensitized NM, gelled NM, and C-4 was measured experimentally. Variation of flyer terminal velocity with the ratio of flyer mass to charge mass (M/C) was measured for both grazing and normally incident detonations in gelled NM containing 10% microballoons by mass and for steel beads saturated with amine-sensitized NM. Finally, flyer velocity was measured in grazing versus normal loading for a number of explosive admixtures. The augmented Gurney model predicted the effect of dilution on accelerating ability and the scaling of flyer velocity with M/C for mixtures containing low-density diluents. The augmented Gurney model failed to predict the scaling of flyer velocity with M/C for mixtures heavily loaded with dense diluents. In all cases, normally incident detonations propelled flyers to higher velocity than the equivalent grazing detonations because of material velocity imparted by the incident shock wave and momentum/energy transfer from the slapper used to uniformly initiate the charge.
NASA Astrophysics Data System (ADS)
Akasaka, Ryo
This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures for which experimental data are limited. Vapor-liquid equilibria (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2-tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations in calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for the design and simulation of heat pumps and refrigeration systems using these mixtures as working fluids.
Turbulent Flame Propagation Characteristics of High Hydrogen Content Fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitzman, Jerry; Lieuwen, Timothy
2014-09-30
This final report describes the results of an effort to better understand turbulent flame propagation, especially at conditions relevant to gas turbines employing fuels with syngas or hydrogen mixtures. Turbulent flame speeds were measured for a variety of hydrogen/carbon monoxide (H2/CO) and hydrogen/methane (H2/CH4) fuel mixtures with air as the oxidizer. The measurements include global consumption speeds (ST,GC) acquired in a turbulent jet flame at pressures of 1-10 atm and local displacement speeds (ST,LD) acquired in a low-swirl burner at atmospheric pressure. The results verify the importance of fuel composition in determining turbulent flame speeds. For example, different fuel-air mixtures having the same unstretched laminar flame speed (SL,0) but different fuel compositions resulted in significantly different ST,GC for the same turbulence levels (u'). This demonstrates the weakness of turbulent flame speed correlations based simply on u'/SL,0. The results were analyzed using a quasi-steady leading points concept to explain the sensitivity of turbulent burning rates to fuel (and oxidizer) composition. Leading point theories suggest that the premixed turbulent flame speed is controlled by the flame front characteristics at the flame brush leading edge, or, in other words, by the flamelets that advance farthest into the unburned mixture (the so-called leading points). For negative Markstein length mixtures, this is assumed to be close to the maximum stretched laminar flame speed (SL,max) for the given fuel-oxidizer mixture. For the ST,GC measurements, the data at a given pressure were well-correlated with an SL,max scaling. However, the variation with pressure was not captured, which may be due to non-quasi-steady effects that are not included in the current model. For the ST,LD data, the leading points model again faithfully captured the variation of turbulent flame speed over a wide range of fuel compositions and turbulence intensities.
These results provide evidence that the leading points model can provide useful predictions of turbulent flame speed over a wide range of operating conditions and flow geometries.
NASA Astrophysics Data System (ADS)
De Lucia, Marco; Pilz, Peter
2015-04-01
Underground gas storage is increasingly regarded as a technically viable option for meeting the energy demand and environmental targets of many industrialized countries. Besides long-term CO2 sequestration, energy can be chemically stored in the form of CO2/CH4/H2 mixtures, for example resulting from excess wind energy. Gas storage in salt caverns is nowadays a mature technology; in regions where favorable geologic structures such as salt diapirs are not available, however, gas storage can only be implemented in porous media such as depleted gas and oil reservoirs or suitable saline aquifers. In such settings, a significant amount of in-situ gas components such as CO2, CH4 (and N2) will always be present, making the CO2/CH4/H2 system of particular interest. A precise estimation of the impact of such gas mixtures on the mineralogical, geochemical and petrophysical properties of specific reservoirs and caprocks is therefore crucial for site selection and optimization of storage depth. In the framework of the collaborative research project H2STORE, the feasibility of industrial-scale gas storage in porous media is being investigated by means of experiments and modelling on actual core materials from several potential siliciclastic depleted gas and oil reservoirs and suitable saline aquifers. Among them are the Altmark depleted gas reservoir in Saxony-Anhalt and the Ketzin pilot site for CO2 storage in Brandenburg (Germany). Further sites are located in the Molasse basin in South Germany and Austria.
In particular, two work packages hosted at the German Research Centre for Geosciences (GFZ) focus on the fluid-fluid and fluid-rock interactions triggered by CO2, H2 and their mixtures. Laboratory experiments expose core samples to hydrogen and CO2/hydrogen mixtures under site-specific conditions (temperatures up to 200 °C and pressures up to 300 bar). The resulting qualitative and, where possible, quantitative data are expected to improve the precision of predictive geochemical and reactive transport modelling, which is also performed within the project. The combination of experiments, chemical and mineralogical analyses and models is needed to improve the knowledge about: (1) solubility models and mixing rules for multicomponent gas mixtures in highly saline formation fluids: no data are available in the literature for H2-charged gas mixtures under the conditions expected at the potential sites; (2) the chemical reactivity of different mineral assemblages and formation fluids in a broad spectrum of P-T conditions and compositions of the stored gas mixtures; (3) the thermodynamics and kinetics of relevant reactions involving mineral dissolution or precipitation. The resulting improvements in site characterization and in understanding the potential processes will benefit the operational reliability, the ecological tolerance, and the economic efficiency of future energy storage plants, crucial aspects for public acceptance and for industrial investors.
Two-Phase Solid/Fluid Simulation of Dense Granular Flows With Dilatancy Effects
NASA Astrophysics Data System (ADS)
Mangeney, A.; Bouchut, F.; Fernández-Nieto, E. D.; Kone, E. H.; Narbona-Reina, G.
2016-12-01
Describing grain/fluid interaction in debris flow models is still an open and challenging issue with key impact on hazard assessment [1]. We present here a two-phase two-thin-layer model for fluidized debris flows that takes into account dilatancy effects. It describes the velocity of both the solid and the fluid phases, the compression/dilatation of the granular media and its interaction with the pore fluid pressure [2]. The model is derived from a 3D two-phase model proposed by Jackson [3] and the mixture equations are closed by a weak compressibility relation. This relation implies that the occurrence of dilation or contraction of the granular material in the model depends on whether the solid volume fraction is respectively higher or lower than a critical value. When dilation occurs, the fluid is sucked into the granular material, the pore pressure decreases and the friction force on the granular phase increases. On the contrary, in the case of contraction, the fluid is expelled from the mixture, the pore pressure increases and the friction force diminishes. To account for this transfer of fluid into and out of the mixture, a two-layer model is proposed with a fluid or a solid layer on top of the two-phase mixture layer. Mass and momentum conservation are satisfied for the two phases, and mass and momentum are transferred between the two layers. A thin-layer approximation is used to derive averaged equations. Special attention is paid to the drag friction terms that are responsible for the transfer of momentum between the two phases and for the appearance of an excess pore pressure with respect to the hydrostatic pressure. By quantitatively comparing simulation results with laboratory experiments on submerged granular flows, we show that our model contains the basic ingredients making it possible to reproduce the interaction between the granular and fluid phases through the change in pore fluid pressure.
In particular, we analyse the different time scales in the model and their role in granular/fluid flow dynamics. References: [1] R. Delannay, A. Valance, A. Mangeney, O. Roche, P. Richard, J. Phys. D: Appl. Phys., in press (2016). [2] F. Bouchut, E. D. Fernández-Nieto, A. Mangeney, G. Narbona-Reina, J. Fluid Mech., 801, 166-221 (2016). [3] R. Jackson, Cambridge Monographs on Mechanics (2000).
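The dilation/contraction rule described above reduces to comparing the solid volume fraction with its critical value. A qualitative sketch (the function name, labels, and sample fractions are hypothetical) summarizes the couplings:

```python
def pore_fluid_response(phi_solid, phi_crit):
    """Qualitative dilatancy rule from the two-phase model described above.

    If the solid volume fraction exceeds the critical value, the granular
    skeleton dilates: fluid is sucked into the mixture, pore pressure drops,
    and friction on the solid phase rises. Below the critical value it
    contracts, with the opposite effects.
    """
    if phi_solid > phi_crit:
        return {"regime": "dilation", "fluid_flow": "into mixture",
                "pore_pressure": "decreases", "friction": "increases"}
    if phi_solid < phi_crit:
        return {"regime": "contraction", "fluid_flow": "out of mixture",
                "pore_pressure": "increases", "friction": "decreases"}
    return {"regime": "critical", "fluid_flow": "none",
            "pore_pressure": "hydrostatic", "friction": "steady"}

print(pore_fluid_response(0.62, 0.58)["regime"])  # dilation
```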
Carbon deposition model for oxygen-hydrocarbon combustion
NASA Technical Reports Server (NTRS)
Bossard, John A.
1988-01-01
The objectives are to use existing hardware to verify and extend the database generated on the original test programs. The data to be obtained are the carbon deposition characteristics when methane is used at injection densities comparable to full scale values. The database will be extended to include liquid natural gas (LNG) testing at low injection densities for gas generator/preburner conditions. The testing will be performed at mixture ratios between 0.25 and 0.60, and at chamber pressures between 750 and 1500 psi.
Development of a PBPK Model for JP-8
2006-11-15
risks from exposures to chemicals. JP-8 is a challenging material to work with because JP-8 is a mixture of hundreds of hydrocarbons, significantly… (…et al., 1999)

CONSTANT VLC = 0.04    !Liver tissue, Schoeffner et al., 1999
CONSTANT VBC = 0.0076  !Brain tissue, Schoeffner et al., 1999
CONSTANT VFC = 0.07    !…
… = 0.78*QC-QL-QB
QS = 0.22*QC-QF
!Scaled Tissue Volumes
VL = VLC*BW
VF = VFC*BW
VB = VBC*BW
VS = 0.82*BW-VF
VR = 0.09*BW-VL-VB
!Metabolic…
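The scaled-tissue-volume lines of the ACSL fragment can be mirrored in a short function. Only the fractional coefficients actually visible in the fragment are used; the elided constants are left out, and the tissue labels follow common PBPK naming conventions rather than the report itself:

```python
def scaled_tissue_volumes(bw, vlc=0.04, vbc=0.0076, vfc=0.07):
    """Scale fractional tissue volumes to body weight, mirroring the
    ACSL-style PBPK fragment above (fractions per Schoeffner et al., 1999).

    bw : body weight (kg); returned volumes are in litres assuming
         unit tissue density.
    """
    vl = vlc * bw             # liver
    vf = vfc * bw             # fat
    vb = vbc * bw             # brain
    vs = 0.82 * bw - vf       # slowly perfused tissue (by difference)
    vr = 0.09 * bw - vl - vb  # rapidly perfused tissue (by difference)
    return {"liver": vl, "fat": vf, "brain": vb, "slow": vs, "rapid": vr}

vols = scaled_tissue_volumes(bw=70.0)  # 70 kg body weight
print(round(vols["liver"], 2))         # 2.8
```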
Volatile Reaction Products From Silicon-Based Ceramics in Combustion Environments Identified
NASA Technical Reports Server (NTRS)
Opila, Elizabeth J.
1997-01-01
Silicon-based ceramics and composites are prime candidates for use as components in the hot sections of advanced aircraft engines. These materials must have long-term durability in the combustion environment. Because water vapor is always present as a major product of combustion in the engine environment, its effect on the durability of silicon-based ceramics must be understood. In combustion environments, silicon-based ceramics react with water vapor to form a surface silica (SiO2) scale. This SiO2 scale, in turn, has been found to react with water vapor to form volatile hydroxides. Studies to date have focused on how water vapor reacts with high-purity silicon carbide (SiC) and SiO2 in model combustion environments. Because the combustion environment in advanced aircraft engines is expected to contain about 10-percent water vapor at 10-atm total pressure, the durability of SiC and SiO2 in gas mixtures containing 0.1- to 1-atm water vapor is of interest. The reactions of SiC and SiO2 with water vapor were monitored by measuring weight changes of sample coupons in a 0.5-atm water vapor/0.5-atm oxygen gas mixture with thermogravimetric analysis.
NASA Astrophysics Data System (ADS)
Koger, B.; Kirkby, C.
2016-03-01
Gold nanoparticles (GNPs) have shown potential in recent years as a means of therapeutic dose enhancement in radiation therapy. However, a major challenge in moving towards clinical implementation is the exact characterisation of the dose enhancement they provide. Monte Carlo studies attempt to explore this property, but they often face computational limitations when examining macroscopic scenarios. In this study, a method of converting dose from macroscopic simulations, where the medium is defined as a mixture containing both gold and tissue components, to a mean dose-to-tissue on a microscopic scale was established. Monte Carlo simulations were run for both explicitly-modeled GNPs in tissue and a homogeneous mixture of tissue and gold. A dose ratio was obtained for the conversion of dose scored in a mixture medium to dose-to-tissue in each case. Dose ratios varied from 0.69 to 1.04 for photon sources and 0.97 to 1.03 for electron sources. The dose ratio is highly dependent on the source energy as well as GNP diameter and concentration, though this effect is less pronounced for electron sources. By appropriately weighting the monoenergetic dose ratios obtained, the dose ratio for any arbitrary spectrum can be determined. This allows complex scenarios to be modeled accurately without explicitly simulating each individual GNP.
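The spectrum-weighting step described in the closing sentences can be sketched as a fluence-weighted average of monoenergetic dose ratios. The weights and ratio values below are illustrative placeholders, not the study's published results:

```python
import numpy as np

def spectrum_dose_ratio(fluence_weights, mono_ratios):
    """Weight monoenergetic mixture-to-tissue dose ratios by a spectrum.

    fluence_weights : relative weight of each energy bin (need not sum to 1)
    mono_ratios     : dose ratio at each energy (illustrative values)
    """
    w = np.asarray(fluence_weights, dtype=float)
    r = np.asarray(mono_ratios, dtype=float)
    return float(np.sum(w * r) / np.sum(w))

# Hypothetical three-bin photon spectrum:
ratio = spectrum_dose_ratio([0.2, 0.5, 0.3], [0.70, 0.85, 1.02])
print(round(ratio, 3))  # 0.871
```

This is the sense in which complex scenarios can be modeled without explicitly simulating each GNP: the macroscopic simulation scores dose in the mixture medium, and a single spectrum-weighted ratio converts it to mean dose-to-tissue.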
Favard, Cyril; Wenger, Jérôme; Lenne, Pierre-François; Rigneault, Hervé
2011-03-02
Many efforts have been undertaken over the last few decades to characterize the diffusion process in model and cellular lipid membranes. One of the techniques developed for this purpose, fluorescence correlation spectroscopy (FCS), has proved to be a very efficient approach, especially if the analysis is extended to measurements on different spatial scales (referred to as FCS diffusion laws). In this work, we examine the relevance of FCS diffusion laws for probing the behavior of a pure lipid and a lipid mixture at temperatures below, within and above the phase transitions, both experimentally and numerically. The accuracy of the microscopic description of the lipid mixtures found here extends previous work to a more complex model in which the geometry is unknown and the molecular motion is driven only by the thermodynamic parameters of the system itself. For multilamellar vesicles of both pure lipid and lipid mixtures, the FCS diffusion laws recorded at different temperatures exhibit large deviations from pure Brownian motion and reveal the existence of nanodomains. The variation of the mean size of these domains with temperature is in perfect correlation with the enthalpy fluctuation. This study highlights the advantages of using FCS diffusion laws in complex lipid systems to describe their temporal and spatial structure. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Vakalis, S; Malamis, D; Moustakas, K
2018-06-15
Small scale biomass gasifiers have the advantage of higher electrical efficiency in comparison to other conventional small scale energy systems. Nonetheless, a major drawback of small scale biomass gasifiers is the relatively poor quality of the producer gas. In addition, several EU Member States are seeking ways to store the excess energy that is produced from renewables like wind power and hydropower. A recent development is the storage of energy by electrolysis of water and the production of hydrogen in a process that is commonly known as "power-to-gas". The present manuscript proposes an onsite secondary reactor for upgrading producer gas by mixing it with hydrogen in order to initiate methanation reactions. A thermodynamic model has been developed for assessing the potential of the proposed methanation process. The model utilized input parameters from a representative small scale biomass gasifier and molar ratios of hydrogen from 1:0 to 1:4.1. The Villar-Cruise-Smith algorithm was used for minimizing the Gibbs free energy. The model returned the molar fractions of the permanent gases, the heating values and the Wobbe index. For hydrogen-producer gas mixtures at a 1:0.9 ratio, the increase in heating value is greatest, at 78%. For ratios higher than 1:3, the Wobbe index increases significantly and surpasses 30 MJ/Nm³. Copyright © 2017 Elsevier Ltd. All rights reserved.
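The Wobbe index figure quoted above follows from the standard definition: volumetric heating value divided by the square root of the gas's specific gravity relative to air. The densities in this sketch are illustrative round numbers, not the model's outputs:

```python
import math

def wobbe_index(heating_value, gas_density, air_density=1.293):
    """Wobbe index (MJ/Nm^3) = heating value / sqrt(specific gravity).

    heating_value : volumetric heating value, MJ/Nm^3
    gas_density   : gas density at normal conditions, kg/Nm^3
    """
    specific_gravity = gas_density / air_density
    return heating_value / math.sqrt(specific_gravity)

# Adding hydrogen lowers the mixture density, which raises the Wobbe index
# even at a similar heating value (densities here are illustrative):
lean = wobbe_index(heating_value=15.0, gas_density=1.10)
rich = wobbe_index(heating_value=15.0, gas_density=0.60)
print(lean < rich)  # True
```

This is why high hydrogen ratios push the Wobbe index past 30 MJ/Nm³ even though hydrogen's volumetric heating value is modest: the density term in the denominator drops sharply.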
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
ERIC Educational Resources Information Center
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J
2017-10-05
A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.
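The volumetric-occupancy idea can be sketched as an occupancy-weighted average. This minimal version (all names and numbers are hypothetical) is not the authors' exact formulation, which works with pressure-dependent compressibility and compactability curves for each powder:

```python
def mixture_property(mass_fractions, apparent_densities, props):
    """Occupancy-weighted estimate of a mixture property at one pressure.

    mass_fractions     : mass fraction of each powder in the blend
    apparent_densities : each powder's in-die density at that pressure
    props              : the property value each pure powder shows there
    A minimal sketch of volume-based mixing, not the paper's model.
    """
    volumes = [m / rho for m, rho in zip(mass_fractions, apparent_densities)]
    total = sum(volumes)
    return sum((v / total) * p for v, p in zip(volumes, props))

# Hypothetical 50:50 binary blend (e.g., MCC and mannitol):
estimate = mixture_property([0.5, 0.5], [1.2, 1.4], [4.0, 2.0])
print(round(estimate, 2))  # 3.08
```

Because the occupancy fractions shift with pressure as each powder densifies at its own rate, a model of this form can in principle produce the non-linear (negative or positive) deviations from simple mass-weighted mixing that the abstract describes.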
Modeling of the Inter-phase Mass Transfer during Cosolvent-Enhanced NAPL Remediation
NASA Astrophysics Data System (ADS)
Agaoglu, B.; Scheytt, T. J.; Copty, N. K.
2012-12-01
This study investigates the factors influencing inter-phase mass transfer during cosolvent-enhanced NAPL remediation and the ability of the REV (Representative Elementary Volume) modeling approach to simulate these processes. The NAPLs considered in this study consist of pure toluene, pure benzene and known mixtures of these two compounds, while ethanol-water mixtures were selected as the remedial flushing solutions. Batch tests were performed to identify both the equilibrium and non-equilibrium properties of the multiphase system. A series of column flushing experiments involving different NAPLs were conducted for different ethanol contents in the flushing solution and for different operational parameters. Experimental results were compared to numerical simulations obtained with the UTCHEM multiphase flow simulator (Delshad et al., 1996). Results indicate that the velocity of the flushing solution is a major parameter influencing the inter-phase mass transport processes at the pore scale. Depending on the NAPL composition and porous medium properties, the remedial solution may follow preferential flow paths and be subject to reduced contact with the NAPL. This leads to a steep decrease in the apparent mass transfer coefficient. Correlations of the apparent time-dependent mass transfer coefficient as a function of flushing velocity are developed for various porous media. Experimental results also show that the NAPL mass transfer coefficient into the cosolvent solution increases when the NAPL phase becomes mobile. This is attributed to the increase in pore scale contact area between NAPL and the remedial solution when NAPL mobilization occurs. These results suggest the need to define a temporal and spatially variable mass transfer coefficient of the NAPL into the cosolvent solution to reflect the occurrence of subscale preferential flow paths and the transient bypassing of the NAPL mass. 
The implications of these findings on field scale NAPL remediation with cosolvents are discussed.
Response properties in the adsorption-desorption model on a triangular lattice
NASA Astrophysics Data System (ADS)
Šćepanović, J. R.; Stojiljković, D.; Jakšić, Z. M.; Budinski-Petković, Lj.; Vrhovac, S. B.
2016-06-01
The out-of-equilibrium dynamical processes during the reversible random sequential adsorption (RSA) of objects of various shapes on a two-dimensional triangular lattice are studied numerically by means of Monte Carlo simulations. We focused on the influence of the order of the symmetry axis of the shape on the response of the reversible RSA model to sudden perturbations of the desorption probability Pd. We provide a detailed discussion of the significance of collective events for governing the time coverage behavior of shapes with different rotational symmetries. We calculate the two-time density-density correlation function C(t, tw) for various waiting times tw and show that longer memory of the initial state persists for the more symmetrical shapes. Our model displays nonequilibrium dynamical effects such as aging. We find that the correlation function C(t, tw) for all objects scales as a function of the single variable ln(tw)/ln(t). We also study the short-term memory effects in two-component mixtures of extended objects and give a detailed analysis of the contribution to the densification kinetics coming from each mixture component. We observe the weakening of correlation features for the deposition processes in multicomponent systems.
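The two-time correlation measurement above has a simple numerical form. The sketch below estimates C(t, tw) over an ensemble of runs of a toy correlated process and checks that memory of the state at tw fades with the separation from t; the process, ensemble size, and times are illustrative assumptions, not the lattice RSA dynamics of the paper:

```python
import numpy as np

def two_time_correlation(rho, tw, t):
    """Estimate C(t, tw) = <rho(t) rho(tw)> - <rho(t)><rho(tw)> over an
    ensemble of independent runs (axis 0 of rho)."""
    a, b = rho[:, t], rho[:, tw]
    return (a * b).mean() - a.mean() * b.mean()

# Toy aging-like ensemble: a normalised random walk, standing in for the
# coverage fluctuations of the lattice model (illustrative only).
rng = np.random.default_rng(0)
runs, T = 2000, 500
rho = np.cumsum(rng.normal(size=(runs, T)), axis=1) / np.sqrt(np.arange(1, T + 1))

c_near = two_time_correlation(rho, tw=400, t=450)  # small separation from tw
c_far = two_time_correlation(rho, tw=50, t=450)    # large separation from tw
```

Plotting such estimates against ln(tw)/ln(t) for several waiting times is how a data collapse onto a single scaling curve, as reported in the abstract, would be checked.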
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.
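A minimal numerical sketch can make the mixture IRT setup concrete. The code below evaluates the marginal log-likelihood of a two-class mixture Rasch (1PL) model, in which item difficulties may differ across latent classes; the item count, difficulty values, and abilities here are illustrative assumptions, not the TIMSS calibration or the authors' estimation procedure:

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def mixture_rasch_loglik(resp, theta, b_by_class, weights):
    """Marginal log-likelihood of a mixture Rasch model: each examinee belongs
    to latent class g with probability weights[g], and the item difficulties
    b_by_class[g] may differ by class (class-specific parameters are one way
    item parameter 'drift' can mask latent subgroups). Toy sketch only."""
    ll = 0.0
    for r, th in zip(resp, theta):
        lik = 0.0
        for wgt, b in zip(weights, b_by_class):
            p = rasch_p(th, b)
            lik += wgt * np.prod(np.where(r == 1, p, 1.0 - p))
        ll += np.log(lik)
    return ll

# Simulate 200 examinees answering 3 items with hypothetical difficulties
rng = np.random.default_rng(5)
theta = rng.normal(0.0, 1.0, 200)
b = np.array([-1.0, 0.0, 1.0])
resp = (rng.random((200, 3)) < rasch_p(theta[:, None], b)).astype(int)

# The generating difficulties should fit better than difficulties shifted by 2
ll_true = mixture_rasch_loglik(resp, theta, [b, b], [0.5, 0.5])
ll_shift = mixture_rasch_loglik(resp, theta, [b + 2.0, b + 2.0], [0.5, 0.5])
```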
Solubility modeling of refrigerant/lubricant mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels, H.H.; Sienel, T.H.
1996-12-31
A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.
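The bubble-point prediction performed by the NISC code can be illustrated with a far simpler excess-Gibbs model. The sketch below uses a two-suffix Margules activity-coefficient model with modified Raoult's law in place of the Wohl-suffix equations; the pure-component vapour pressures and the Margules constant are hypothetical numbers, not refrigerant/POE data:

```python
import numpy as np

def bubble_pressure(x1, A, p1sat, p2sat):
    """Bubble-point pressure of a binary liquid from modified Raoult's law
    with a two-suffix Margules excess-Gibbs model (a much simpler stand-in
    for the Wohl-suffix equations): ln gamma_1 = A x2^2, ln gamma_2 = A x1^2."""
    x2 = 1.0 - x1
    g1 = np.exp(A * x2 ** 2)   # activity coefficient of component 1
    g2 = np.exp(A * x1 ** 2)   # activity coefficient of component 2
    return x1 * g1 * p1sat + x2 * g2 * p2sat

# Hypothetical pure-component pressures in kPa; A > 0 models positive
# deviation from ideality, which raises the bubble pressure above Raoult's line
p_ideal = bubble_pressure(0.4, 0.0, 120.0, 80.0)
p_real = bubble_pressure(0.4, 1.2, 120.0, 80.0)
```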
Confined wetting of FoCa clay powder/pellet mixtures: Experimentation and numerical modeling
NASA Astrophysics Data System (ADS)
Maugis, Pascal; Imbert, Christophe
Potential geological nuclear waste disposals must be properly sealed to prevent contamination of the biosphere by radionuclides. In the framework of the RESEAL project, the performance of a bentonite shaft seal is currently studied at Mol (Belgium). This paper focuses on the hydro-mechanical behavior of centimetric, unsaturated samples of the backfilling material - a mixture of FoCa-clay powder and pellets - during oedometer tests. The hydro-mechanical response of the samples is observed experimentally and then compared to numerical simulations performed with our Cast3M finite element code. The generalized Darcy’s law and the Barcelona Basic Model, both widely used in engineered-barrier modeling, form the physical basis of the numerical model and its interpretation. Vertical swelling pressure and water intake were measured throughout the test. Although water intake increases monotonically, the swelling pressure evolution is marked by a peak, then a local minimum, before increasing again to an asymptotic value. This unexpected behavior is explained by yielding rather than by heterogeneity, and it is satisfactorily reproduced by the model after parameter calibration. Samples with heights ranging from 5 to 12 cm show the same hydro-mechanical response, apart from a dilatation of the time scale. The value of characterizing centimetric samples for predicting the efficiency of a metre-scale seal is discussed.
Kirkwood–Buff integrals for ideal solutions
Ploetz, Elizabeth A.; Bentenitis, Nikolaos; Smith, Paul E.
2010-01-01
The Kirkwood–Buff (KB) theory of solutions is a rigorous theory of solution mixtures which relates the molecular distributions between the solution components to the thermodynamic properties of the mixture. Ideal solutions represent a useful reference for understanding the properties of real solutions. Here, we derive expressions for the KB integrals, the central components of KB theory, in ideal solutions of any number of components corresponding to the three main concentration scales. The results are illustrated by use of molecular dynamics simulations for two binary solutions mixtures, benzene with toluene, and methanethiol with dimethylsulfide, which closely approach ideal behavior, and a binary mixture of benzene and methanol which is nonideal. Simulations of a quaternary mixture containing benzene, toluene, methanethiol, and dimethylsulfide suggest this system displays ideal behavior and that ideal behavior is not limited to mixtures containing a small number of components. PMID:20441282
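The central quantity of KB theory has a compact numerical form. Assuming a radial distribution function g_ij(r) is available (here a toy exponential model with made-up parameters, chosen because its integral is known analytically), the KB integral G_ij = 4π ∫ (g_ij(r) − 1) r² dr can be evaluated by simple quadrature:

```python
import numpy as np

def kb_integral(r, g):
    """Kirkwood-Buff integral G_ij = 4*pi * int (g_ij(r) - 1) r^2 dr,
    evaluated with the trapezoid rule on a tabulated RDF."""
    f = (g - 1.0) * r ** 2
    return 4.0 * np.pi * float(np.sum((f[1:] + f[:-1]) * np.diff(r)) / 2.0)

# Toy RDF g(r) = 1 + A*exp(-r/xi), for which G = 8*pi*A*xi^3 exactly
# (A and xi are made-up parameters, not simulation output)
r = np.linspace(0.0, 50.0, 20001)
A, xi = 0.3, 1.2
G = kb_integral(r, 1.0 + A * np.exp(-r / xi))
exact = 8.0 * np.pi * A * xi ** 3
```

For an ideal-gas reference, g(r) = 1 everywhere and the integral vanishes, which is the kind of limiting behavior the ideal-solution expressions in the paper generalize.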
Experimental evaluation of drying characteristics of sewage sludge and hazelnut shell mixtures
NASA Astrophysics Data System (ADS)
Pehlivan, Hüseyin; Ateş, Asude; Özdemir, Mustafa
2016-11-01
In this study the drying behavior of organic and agricultural waste mixtures has been experimentally investigated. The usability of sewage sludge as an organic waste and hazelnut shell as an agricultural waste was assessed over a range of mixture ratios. The paper discusses the applicability of these mixtures as a recoverable energy source. The moisture content of the mixtures was determined under both laboratory and plant conditions. Indoor and outdoor solar sludge-drying plants were constructed at pilot scale for experimental purposes. Dry solids and climatic conditions were measured continuously. In total, more than 140 drying samples were processed to establish the results. Indoor and outdoor weather conditions were considered in both winter and summer. The most effective drying is obtained with a mixture of 20% hazelnut shell and 80% sewage sludge.
Mapping behavioral landscapes for animal movement: a finite mixture modeling approach
Tracey, Jeff A.; Zhu, Jun; Boydston, Erin E.; Lyren, Lisa M.; Fisher, Robert N.; Crooks, Kevin R.
2013-01-01
Because of its role in many ecological processes, movement of animals in response to landscape features is an important subject in ecology and conservation biology. In this paper, we develop models of animal movement in relation to objects or fields in a landscape. We take a finite mixture modeling approach in which the component densities are conceptually related to different choices for movement in response to a landscape feature, and the mixing proportions are related to the probability of selecting each response as a function of one or more covariates. We combine particle swarm optimization and an Expectation-Maximization (EM) algorithm to obtain maximum likelihood estimates of the model parameters. We use this approach to analyze data for movement of three bobcats in relation to urban areas in southern California, USA. A behavioral interpretation of the models revealed similarities and differences in bobcat movement response to urbanization. All three bobcats avoided urbanization by moving either parallel to urban boundaries or toward less urban areas as the proportion of urban land cover in the surrounding area increased. However, one bobcat, a male with a dispersal-like large-scale movement pattern, avoided urbanization at lower densities and responded strictly by moving parallel to the urban edge. The other two bobcats, which were both residents and occupied similar geographic areas, avoided urban areas using a combination of movements parallel to the urban edge and movement toward areas of less urbanization. However, the resident female appeared to exhibit greater repulsion at lower levels of urbanization than the resident male, consistent with empirical observations of bobcats in southern California. Using the parameterized finite mixture models, we mapped behavioral states to geographic space, creating a representation of a behavioral landscape. 
This approach can provide guidance for conservation planning based on analysis of animal movement data using statistical models, thereby linking connectivity evaluations to empirical data.
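The estimation machinery described above can be sketched in its simplest form. The code below runs EM for a two-component one-dimensional Gaussian mixture; it is a deliberately stripped-down stand-in for the movement model, which additionally makes the mixing proportions covariate-dependent and initialises EM with particle swarm optimization (neither is reproduced here):

```python
import numpy as np

def em_two_gaussian(x, iters=200):
    """EM for a two-component 1-D Gaussian mixture. The component densities
    play the role of alternative movement responses; the paper's
    covariate-dependent mixing weights are replaced by constant weights."""
    mu = np.array([x.min(), x.max()], float)   # well-separated initial means
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each observation
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sig

# Synthetic data from two known components (parameters are illustrative)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.5, 600), rng.normal(3.0, 0.8, 400)])
w, mu, sig = em_two_gaussian(x)
```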
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
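The likelihood at the heart of the N-mixture model can be written down directly. The sketch below marginalises the latent site abundances by truncated summation, following the structure of Royle's 2004 model (the count data and parameter values are toy numbers, and this is the plain likelihood, not the Bayesian hierarchical fit studied in the paper):

```python
import numpy as np
from scipy.stats import binom, poisson

def n_mixture_loglik(lam, p, counts, n_max=100):
    """Log-likelihood of the N-mixture model: replicate counts y_it at site i
    are Binomial(N_i, p) given latent abundance N_i ~ Poisson(lam); N_i is
    marginalised out by summing over 0..n_max."""
    n = np.arange(n_max + 1)
    prior = poisson.pmf(n, lam)
    ll = 0.0
    for y in counts:  # y = vector of replicate counts from one site
        lik_n = prior * np.prod(binom.pmf(np.asarray(y)[:, None], n, p), axis=0)
        ll += np.log(lik_n.sum())
    return ll

counts = [[3, 4, 2], [0, 1, 0], [5, 5, 6]]  # 3 sites x 3 visits (toy data)
ll_good = n_mixture_loglik(lam=4.0, p=0.7, counts=counts)
ll_bad = n_mixture_loglik(lam=0.5, p=0.1, counts=counts)
```

Note that the closure assumption discussed in the abstract enters through the single N_i shared by all replicates at a site; pseudo-replication breaks exactly that sharing.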
Ye, Meixia; Wang, Zhong; Wang, Yaqun; Wu, Rongling
2015-03-01
Dynamic changes of gene expression reflect an intrinsic mechanism of how an organism responds to developmental and environmental signals. With the increasing availability of expression data across a time-space scale by RNA-seq, the classification of genes as per their biological function using RNA-seq data has become one of the most significant challenges in contemporary biology. Here we develop a clustering mixture model to discover distinct groups of genes expressed during a period of organ development. By integrating the density function of multivariate Poisson distribution, the model accommodates the discrete property of read counts characteristic of RNA-seq data. The temporal dependence of gene expression is modeled by the first-order autoregressive process. The model is implemented with the Expectation-Maximization algorithm and model selection to determine the optimal number of gene clusters and obtain the estimates of Poisson parameters that describe the pattern of time-dependent expression of genes from each cluster. The model has been demonstrated by analyzing a real data from an experiment aimed to link the pattern of gene expression to catkin development in white poplar. The usefulness of the model has been validated through computer simulation. The model provides a valuable tool for clustering RNA-seq data, facilitating our global view of expression dynamics and understanding of gene regulation mechanisms. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
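The clustering idea can be illustrated with a pared-down version of the model: a univariate Poisson mixture fit by EM, dropping the multivariate structure and the AR(1) temporal dependence that the paper adds for time-course RNA-seq data. The counts and rates below are simulated, not real expression data:

```python
import numpy as np
from math import lgamma

def em_poisson_mixture(y, k=2, iters=200):
    """EM for a k-component Poisson mixture on read counts (a simplified
    stand-in for the paper's multivariate, AR(1)-correlated model)."""
    lam = np.quantile(y, np.linspace(0.2, 0.8, k))      # spread-out initial rates
    w = np.full(k, 1.0 / k)
    logfact = np.array([lgamma(v + 1.0) for v in y])    # log(y!)
    for _ in range(iters):
        # E-step: responsibilities, computed in log space for stability
        logp = y[:, None] * np.log(lam) - lam - logfact[:, None] + np.log(w)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and Poisson rates
        n_k = r.sum(axis=0)
        w = n_k / len(y)
        lam = (r * y[:, None]).sum(axis=0) / n_k
    idx = np.argsort(lam)
    return w[idx], lam[idx]

# Two simulated clusters of genes with low and high expression rates
rng = np.random.default_rng(4)
y = np.concatenate([rng.poisson(3, 500), rng.poisson(30, 500)]).astype(float)
w, lam = em_poisson_mixture(y)
```

Model selection over k (e.g. by BIC), as the paper describes, would wrap this fit in a loop over candidate cluster counts.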
Canopy reflectance modelling of semiarid vegetation
NASA Technical Reports Server (NTRS)
Franklin, Janet
1994-01-01
Three different types of remote sensing algorithms for estimating vegetation amount and other land-surface biophysical parameters were tested for semiarid environments: statistical linear models, the Li-Strahler geometric-optical canopy model, and linear spectral mixture analysis. The two study areas were the National Science Foundation's Jornada Long Term Ecological Research site near Las Cruces, NM, in the northern Chihuahuan desert, and the HAPEX-Sahel site near Niamey, Niger, in West Africa, comprising semiarid rangeland and subtropical crop land. The statistical approach (simple and multiple regression) resulted in high correlations between SPOT satellite spectral reflectance and shrub and grass cover, although these correlations varied with the spatial scale of aggregation of the measurements. The Li-Strahler model produced estimates of shrub size and density for both study sites, with large standard errors. In the Jornada, the estimates were accurate enough to be useful for characterizing structural differences among three shrub strata. In Niger, the range of shrub cover and size in short-fallow shrublands is so low that the necessity of spatially distributed estimation of shrub size and density is questionable. Spectral mixture analysis of multiscale, multitemporal, multispectral radiometer data and imagery for Niger showed a positive relationship between fractions of spectral endmembers and surface parameters of interest, including soil cover, vegetation cover, and leaf area index.
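Linear spectral mixture analysis models each pixel's reflectance as a weighted sum of endmember spectra, with the weights interpreted as sub-pixel cover fractions. A minimal sketch, using hypothetical 4-band endmember spectra and an unconstrained least-squares solve (practical unmixing usually adds non-negativity and sum-to-one constraints):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral mixture analysis: solve pixel ~ endmembers @ f for the
    endmember fraction vector f by least squares. Constraints such as
    f >= 0 and sum(f) == 1 are omitted in this sketch."""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return f

# Hypothetical endmember spectra: columns = soil, vegetation, shade;
# rows = 4 spectral bands (values are illustrative reflectances)
E = np.array([[0.30, 0.05, 0.02],
              [0.35, 0.08, 0.02],
              [0.40, 0.45, 0.03],
              [0.45, 0.30, 0.03]])
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f            # a synthetic mixed pixel
f = unmix(pixel, E)           # recovers the generating fractions
```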
Process Dissociation and Mixture Signal Detection Theory
ERIC Educational Resources Information Center
DeCarlo, Lawrence T.
2008-01-01
The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…
ERIC Educational Resources Information Center
Li, Ming; Harring, Jeffrey R.
2017-01-01
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…
Geophysics and Nanosciences: Nano to Micro to Meso to Macro Scale Swelling Soils
NASA Astrophysics Data System (ADS)
Cushman, J.
2003-04-01
We use statistical mechanical simulations of nanoporous materials to motivate a choice of independent constitutive variables for a multiscale mixture theory of swelling soils. A video will illustrate the structural behavior of fluids in nanopores when they are adsorbed from a bulk phase vapor to form capillaries on the nanoscale. These simulations suggest that when a swelling soil is very dry, the full strain tensor for the liquid phase should be included in the list of independent variables in any mixture theory. We use this information to develop a three-scale (micro, meso, macro) mixture theory for swelling soils. For a simplified case, we present the underlying multiscale field equations and constitutive theory, solve the resultant well posed system numerically, and present some graphical results for a drying and shrinking body.
Method of synthesizing silica nanofibers using sound waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Jaswinder K.; Datskos, Panos G.
2015-09-15
A method for synthesizing silica nanofibers using sound waves is provided. The method includes providing a solution of polyvinyl pyrrolidone, adding sodium citrate and ammonium hydroxide to form a first mixture, adding a silica-based compound to the solution to form a second mixture, and sonicating the second mixture to synthesize a plurality of silica nanofibers having an average cross-sectional diameter of less than 70 nm and having a length on the order of at least several hundred microns. The method can be performed without heating or electrospinning, and instead includes less energy-intensive strategies that can be scaled up to an industrial scale. The resulting nanofibers can achieve a decreased mean diameter over conventional fibers. The decreased diameter generally increases the tensile strength of the silica nanofibers, as defects and contaminations decrease with the decreasing diameter.
Hybrid and Nonhybrid Lipids Exert Common Effects on Membrane Raft Size and Morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heberle, Frederick A; Doktorova, Milka; Goh, Shih Lin
2013-01-01
Nanometer-scale domains in cholesterol-rich model membranes emulate lipid rafts in cell plasma membranes (PMs). The physicochemical mechanisms that maintain a finite, small domain size are, however, not well understood. A special role has been postulated for chain-asymmetric or hybrid lipids having a saturated sn-1 chain and an unsaturated sn-2 chain. Hybrid lipids generate nanodomains in some model membranes and are also abundant in the PM. It was proposed that they align in a preferred orientation at the boundary of ordered and disordered phases, lowering the interfacial energy and thus reducing domain size. We used small-angle neutron scattering and fluorescence techniques to detect nanoscopic and modulated liquid phase domains in a mixture composed entirely of nonhybrid lipids and cholesterol. Our results are indistinguishable from those obtained previously for mixtures containing hybrid lipids, conclusively showing that hybrid lipids are not required for the formation of nanoscopic liquid domains and strongly implying a common mechanism for the overall control of raft size and morphology. We discuss implications of these findings for theoretical descriptions of nanodomains.
CFD Modelling of Particle Mixtures in a 2D CFB
NASA Astrophysics Data System (ADS)
Seppälä, M.; Kallio, S.
The capability of Fluent 6.2.16 to simulate particle mixtures in a laboratory-scale 2D circulating fluidized bed (CFB) unit has been tested. In the simulations, the solids were described as one or two particle phases. The loading ratio of small to large particles, the particle diameters, and the gas inflow velocity were varied. The 40 cm wide and 3 m high 2D CFB was modeled using a grid with 31080 cells. The outflow of particles at the top of the CFB was monitored, and the escaping particles were fed back to the riser through a return duct. The paper presents the segregation patterns of the particle phases obtained from the simulations. When the fraction of large particles was 50% or larger, large particles segregated, as expected, to the wall regions and to the bottom part of the riser. However, when the fraction of large particles was 10%, an excess of large particles was found in the upper half of the riser. The explanation for this unexpected phenomenon was found in the distribution of the large particles between the slow clusters and the faster-moving lean suspension.
Mobility of maerl-siliciclastic mixtures: Impact of waves, currents and storm events
NASA Astrophysics Data System (ADS)
Joshi, Siddhi; Duffy, Garret Patrick; Brown, Colin
2017-04-01
Maerl beds are free-living, non-geniculate coralline algae habitats which form biogenic reefs with high micro-scale complexity supporting a diversity and abundance of rare epifauna and epiflora. These habitats are highly mobile in shallow marine environments where substantial maerl beds co-exist with siliciclastic sediment, exemplified by our study site of Galway Bay. Coupled hydrodynamic-wave-sediment transport models have been used to explore the transport patterns of maerl-siliciclastic sediment during calm summer conditions and severe winter storms. The sediment distribution is strongly influenced by storm waves even in water depths greater than 100 m. Maerl is present at the periphery of wave-induced residual current gyres during storm conditions. A combined wave-current Sediment Mobility Index during storm conditions shows correlation with multibeam backscatter and surficial sediment distribution. A combined wave-current Mobilization Frequency Index during storm conditions acts as a physical surrogate for the presence of maerl-siliciclastic mixtures in Galway Bay. Both indices can provide useful integrated oceanographic and sediment information to complement coupled numerical hydrodynamic, sediment transport and erosion-deposition models.
Rafal Podlaski; Francis A. Roesch
2013-01-01
This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions, and analysed a variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
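The Gibbs sampling scheme described above can be sketched in its most basic form. The code below alternates between sampling component labels and component means for a two-component normal mixture with known, equal variances and a fixed 0.5 mixing weight; the random effects, heteroscedastic residuals, and priors of the paper's model are all omitted, and the data are simulated, not somatic cell scores:

```python
import numpy as np

def gibbs_two_normal(y, iters=2000, sigma=1.0, rng=None):
    """Gibbs sampler for a two-component normal mixture with known, equal
    variances: alternately sample the labels z_i given the means, then each
    mean given its members (conjugate normal update under a flat prior).
    Returns the posterior-mean estimate of the two component means."""
    rng = rng or np.random.default_rng(0)
    mu = np.array([y.min(), y.max()], float)   # separated starting values
    keep = []
    for it in range(iters):
        # sample labels from the conditional membership probabilities
        d = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2)
        pz = d / d.sum(axis=1, keepdims=True)
        z = (rng.random(len(y)) < pz[:, 1]).astype(int)
        # sample each component mean given its current members
        for k in (0, 1):
            yk = y[z == k]
            if len(yk):
                mu[k] = rng.normal(yk.mean(), sigma / np.sqrt(len(yk)))
        if it >= iters // 2:                   # discard first half as burn-in
            keep.append(mu.copy())
    return np.mean(keep, axis=0)

# Simulated "healthy" and "diseased" groups with illustrative means 0 and 5
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])
mu_hat = gibbs_two_normal(y, rng=rng)
```

Posterior membership probabilities (pz above) are the quantities from which the paper's sensitivity, specificity, and misclassification measures would be computed.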
NASA Technical Reports Server (NTRS)
Dalling, D. K.; Bailey, B. K.; Pugmire, R. J.
1984-01-01
A proton and carbon-13 nuclear magnetic resonance (NMR) study was conducted on Ashland shale oil refinery products, experimental referee broadened-specification jet fuels, and related isoprenoid model compounds. Supercritical fluid chromatography techniques using carbon dioxide were developed on a preparative scale so that samples could be quantitatively separated into saturate and aromatic fractions for study by NMR. An optimized average-parameter treatment was developed, and the NMR results were analyzed in terms of the resulting average parameters; formulation of model mixtures was demonstrated. Application of novel spectroscopic techniques to fuel samples was investigated.
Cosmic microwave background radiation anisotropies in brane worlds.
Koyama, Kazuya
2003-11-28
We propose a new formulation to calculate the cosmic microwave background (CMB) spectrum in the Randall-Sundrum two-brane model, based on recent progress in solving the bulk geometry using a low-energy approximation. The evolution of the anisotropic stress imprinted on the brane by the 5D Weyl tensor is calculated. The impact of the dark radiation perturbation on the CMB spectrum is investigated in a simple model assuming an initially scale-invariant adiabatic perturbation. The dark radiation perturbation induces isocurvature perturbations, but the resultant spectrum can be quite different from the prediction of simple mixtures of adiabatic and isocurvature perturbations due to the Weyl anisotropic stress.
Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel
2017-05-01
Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
NASA Astrophysics Data System (ADS)
Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.
2017-08-01
We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and a quantitative measurement of the permeability of a body-centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.
Gaussian mixture clustering and imputation of microarray data.
Ouyang, Ming; Welsh, William J; Georgopoulos, Panos
2004-04-12
In microarray experiments, missing entries arise from blemishes on the chips. In large-scale studies, virtually every chip contains some missing entries and more than 90% of the genes are affected. Many analysis methods require a full set of data: either the genes with missing entries are excluded, or the missing entries are filled with estimates prior to the analyses. This study compares methods of missing-value estimation. Two evaluation metrics of imputation accuracy are employed. First, the root mean squared error measures the difference between the true values and the imputed values. Second, the number of mis-clustered genes measures the difference between clustering with true values and clustering with imputed values; it examines the bias introduced by imputation into clustering. Gaussian mixture clustering with model-averaging imputation is superior to all other imputation methods, according to both evaluation metrics, on both time-series (correlated) and non-time-series (uncorrelated) data sets.
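As a toy illustration of the underlying idea (a one-component simplification of the paper's Gaussian mixture, with synthetic data and made-up dimensions), a missing entry can be imputed by its conditional mean under a Gaussian fitted to the complete rows:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic gene-by-array matrix with correlated arrays
n_genes, n_arrays = 2000, 6
A = rng.normal(size=(n_arrays, n_arrays))
cov = A @ A.T + np.eye(n_arrays)                 # random SPD covariance
X = rng.multivariate_normal(np.zeros(n_arrays), cov, size=n_genes)

# Pretend X[0, 2] is a blemish; impute it from the remaining entries of row 0
miss, obs = 2, [0, 1, 3, 4, 5]
mu = X[1:].mean(axis=0)                          # fit on the complete rows
S = np.cov(X[1:], rowvar=False)

# Conditional mean: E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o)
S_mo, S_oo = S[miss, obs], S[np.ix_(obs, obs)]
imputed = mu[miss] + S_mo @ np.linalg.solve(S_oo, X[0, obs] - mu[obs])

# The conditional variance never exceeds the marginal variance, which is
# why borrowing information across correlated arrays helps
cond_var = S[miss, miss] - S_mo @ np.linalg.solve(S_oo, S_mo)
print(cond_var < S[miss, miss])   # True
```

The paper's method additionally averages such conditional estimates over mixture components and models, which this sketch omits.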
Bourasseau, Emeric; Maillet, Jean-Bernard
2011-04-21
This paper presents a new method to obtain the chemical equilibrium properties of detonation-product mixtures including a solid carbon phase. In this work, the solid phase is modelled through a mesoparticle immersed in the fluid, such that the heterogeneous character of the mixture is explicitly taken into account. The inner properties of the clusters are taken from an equation of state obtained in a previous work, and the interaction potential between the nanocluster and the fluid particles is derived from all-atom simulations using the LCBOPII potential (Long range Carbon Bond Order Potential II). It appears that the differences between the chemical equilibrium results obtained with this method and with the "composite ensemble method" (A. Hervouet et al., J. Phys. Chem. B, 2008, 112.), where the fluid and solid phases are considered as non-interacting, are not significant, suggesting that explicitly accounting for the inhomogeneity of such systems is not crucial for these properties.
Park, Jung-Sun; Kim, Hye-Sung; Park, Hye-Mi; Kim, Chang-Hyun; Kim, Tai-Gyu
2011-11-03
Protein vaccines may be a useful strategy for cancer immunotherapy because recombinant tumor antigen proteins can be produced on a large scale at relatively low cost and have been shown to be safe for clinical application. However, protein vaccines have historically exhibited poor immunogenicity; thus, an improved strategy is needed for successful induction of immune responses. The TAT peptide is a protein transduction domain composed of an 11-amino-acid peptide (TAT(47-57): YGRKKRRQRRR). The positive charge of this peptide allows a protein antigen fused to it to penetrate cells more efficiently. Poly(I:C) is a synthetic double-stranded RNA that is negatively charged and therefore interacts favorably with the cationic TAT peptide. Poly(I:C) has been reported to act as an adjuvant in tumor vaccines by promoting immune responses. Therefore, we demonstrated that vaccination with a mixture of TAT-CEA fusion protein and poly(I:C) can induce anti-tumor immunity in a murine colorectal tumor model. Splenocytes from mice vaccinated with a mixture of TAT-CEA fusion protein and poly(I:C) effectively induced CEA-specific IFN-γ-producing T cells and showed cytotoxic activity specific for MC-38-cea2 tumor cells expressing CEA. Vaccination with a mixture of TAT-CEA fusion protein and poly(I:C) delayed tumor growth in MC-38-cea2 tumor-bearing mice. Depletion of CD8(+) T cells and NK cells reversed the inhibition of tumor growth in MC-38-cea2-bearing mice, indicating that CD8(+) T cells and NK cells are responsible for the anti-tumor immunity induced by vaccination with a mixture of TAT-CEA fusion protein and poly(I:C). Taken together, these results suggest that poly(I:C) could be used as a potent adjuvant to induce the anti-tumor immunity of a TAT-CEA fusion protein vaccine in a murine colorectal tumor model. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Skakun, Sergii; Franch, Belen; Vermote, Eric; Roger, Jean-Claude; Becker-Reshef, Inbal; Justice, Christopher; Kussul, Nataliia
2017-01-01
Knowledge of the geographical location and distribution of crops at global, national and regional scales is an extremely valuable source of information for many applications. Traditional approaches to crop mapping using remote sensing data rely heavily on reference or ground truth data to train/calibrate classification models. As a rule, such models are only applicable to a single vegetation season and must be recalibrated to be applicable to other seasons. This paper addresses the problem of early-season large-area winter crop mapping using Moderate Resolution Imaging Spectroradiometer (MODIS) derived Normalized Difference Vegetation Index (NDVI) time series and growing degree day (GDD) information derived from the Modern-Era Retrospective analysis for Research and Applications (MERRA-2) product. The model is based on the assumption that winter crops have developed biomass during early spring while other crops (spring and summer) have not. As winter crop development is temporally and spatially non-uniform owing to the presence of different agro-climatic zones, we use GDD to account for such discrepancies. A Gaussian mixture model (GMM) is applied to discriminate winter crops from other crops (spring and summer). The proposed method has the following advantages: low input data requirements, robustness, applicability at global scale, and the ability to provide winter crop maps 1.5-2 months before harvest. The model is applied to two study regions, the State of Kansas in the US and Ukraine, and for multiple seasons (2001-2014). Validation against the US Department of Agriculture (USDA) Cropland Data Layer (CDL) for Kansas and ground measurements for Ukraine shows that accuracies greater than 90% can be achieved in mapping winter crops 1.5-2 months before harvest. Results also show good correspondence to official statistics, with average coefficients of determination R(exp 2) greater than 0.85.
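The classification step can be sketched as follows (the component parameters below are hypothetical stand-ins; in the paper they are fitted to real early-spring NDVI data):

```python
import numpy as np

# Hypothetical early-spring NDVI components: "winter crop" pixels have
# developed biomass (higher NDVI) than "other crop" pixels
mu = np.array([0.25, 0.55])      # means: other, winter
sig = np.array([0.06, 0.08])     # standard deviations
w = np.array([0.7, 0.3])         # assumed prior mixing proportions

def posterior_winter(ndvi):
    """P(winter crop | NDVI) under the two-component Gaussian mixture."""
    # The 1/sqrt(2*pi) factor cancels in the posterior ratio
    dens = w * np.exp(-0.5 * ((ndvi[:, None] - mu) / sig) ** 2) / sig
    return dens[:, 1] / dens.sum(axis=1)

pixels = np.array([0.20, 0.30, 0.50, 0.65])
print((posterior_winter(pixels) > 0.5).tolist())   # [False, False, True, True]
```

Classifying each pixel by its posterior probability under the fitted mixture is what lets the method run without season-specific ground truth.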
Ehrlich, Matthias; Schüffny, René
2013-01-01
One of the major outcomes of neuroscientific research is models of Neural Network Structures (NNSs). Descriptions of these models usually consist of a non-standardized mixture of text, figures, and other means of visual information communication in print media. However, as neuroscience is an interdisciplinary domain by nature, a standardized way of consistently representing models of NNSs is required. While generic descriptions of such models in textual form have recently been developed, a formalized way of schematically expressing them does not exist to date. Hence, in this paper we present Neural Schematics as a concept, inspired by similar approaches from other disciplines, for a generic two-dimensional representation of said structures. After introducing NNSs in general, a set of current visualizations of models of NNSs is reviewed and analyzed for what information they convey and how their elements are rendered. This analysis then allows for the definition of general items and symbols to consistently represent these models as Neural Schematics on a two-dimensional plane. We illustrate the possibilities an agreed-upon standard can yield with sample diagrams transformed into Neural Schematics and an example application for the design and modeling of large-scale NNSs.
The architecture of dynamic reservoir in the echo state network
NASA Astrophysics Data System (ADS)
Cui, Hongyan; Liu, Xiang; Li, Lixiang
2012-09-01
Echo state network (ESN) has recently attracted increasing interest because of its superior capability in modeling nonlinear dynamic systems. In the conventional echo state network model, the dynamic reservoir (DR) has a random and sparse topology, which is far from real biological neural networks in both structure and function. We hereby propose three novel types of echo state networks with new dynamic reservoir topologies based on complex network theory, i.e., with a small-world topology, a scale-free topology, and a mixture of small-world and scale-free topologies, respectively. We then analyze the relationship between the dynamic reservoir structure and its prediction capability. We utilize two commonly used time series to evaluate the prediction performance of the three proposed echo state networks and compare them to the conventional model. We also use independent and identically distributed time series to analyze the short-term memory and prediction precision of these echo state networks. Furthermore, we study the ratio of scale-free topology to small-world topology in the mixed-topology network, and examine its influence on the performance of the echo state networks. Our simulation results show that the proposed echo state network models have better prediction capabilities and a wider spectral radius, while retaining almost the same short-term memory capacity as the conventional echo state network model. We also find that the smaller the ratio of scale-free to small-world topology, the better the memory capacity.
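A minimal ESN sketch helps fix ideas (conventional random topology only; the reservoir size, spectral radius, and sine task are arbitrary choices, not the paper's settings, and the paper's small-world/scale-free reservoirs would replace the random matrix below):

```python
import numpy as np

rng = np.random.default_rng(2)
n_res = 100

# Sparse random reservoir rescaled to spectral radius 0.9 (the usual
# echo-state heuristic); the paper swaps this topology for small-world
# and scale-free variants
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
W_in = rng.uniform(-0.5, 0.5, size=n_res)

def run_reservoir(u):
    """Drive the reservoir with a scalar input sequence; return all states."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave with a ridge-regression readout
u = np.sin(0.2 * np.arange(600))
S = run_reservoir(u[:-1])[100:]          # discard a 100-step washout
y = u[101:]                              # targets: the next input value
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
rmse = np.sqrt(np.mean((S @ W_out - y) ** 2))
print(rmse)   # small in-sample error
```

Only the linear readout `W_out` is trained; the reservoir weights stay fixed, which is what makes reservoir topology (the paper's subject) the key design variable.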
Abeish, Abdulbasit M; Ang, Ha Ming; Znad, Hussein
2015-01-01
The solar-photocatalytic degradation mechanisms and kinetics of 4-chlorophenol (4-CP) and 2,4-dichlorophenol (2,4-DCP) over TiO2 have been investigated both individually and in combination. The individual solar-photocatalytic degradation of both phenolic compounds showed that the reaction rates follow pseudo-first-order kinetics. During the individual photocatalytic degradation of 4-CP and 2,4-DCP under the same conditions of TiO2 loading (0.5 g L(-1)) and light intensity (1000 mW cm(-2)), different intermediates were detected: three compounds associated with 4-CP (hydroquinone (HQ), phenol (Ph) and 4-chlorocatechol (4-cCat)) and two compounds associated with 2,4-DCP (4-CP and Ph). The photocatalytic degradation of the combined mixture (4-CP and 2,4-DCP) was also investigated under the same conditions and at different initial 2,4-DCP concentrations. The results showed that the degradation rate of 4-CP decreases as the 2,4-DCP concentration increases. Furthermore, the intermediates detected were similar to those found in the individual degradations but with a higher Ph concentration. A possible reaction mechanism for the degradation of this combined mixture was therefore proposed. Moreover, a modified Langmuir-Hinshelwood (L-H) kinetic model considering all detected intermediates was developed. Good agreement between experimental and estimated results was achieved. This model can be useful for more accurate scale-up, as it considers the intermediates formed, which have a significant effect on the degradation of the main pollutants (4-CP and 2,4-DCP).
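The pseudo-first-order analysis mentioned above amounts to a linear fit of ln(C0/C) against time; a small sketch with invented concentration-time data (the rate constant and sampling times are illustrative, not the paper's measurements):

```python
import numpy as np

# Hypothetical concentration-time data following C(t) = C0 * exp(-k t)
t = np.array([0.0, 15, 30, 45, 60, 90, 120])   # min
C0, k_true = 50.0, 0.02                        # mg/L, 1/min (assumed)
C = C0 * np.exp(-k_true * t)

# Pseudo-first-order kinetics: ln(C0/C) = k t, so k is the slope
k_fit = np.polyfit(t, np.log(C0 / C), 1)[0]
print(k_fit)   # ≈ 0.02
```

With real data the linearity of ln(C0/C) versus t is itself the check that the pseudo-first-order assumption holds.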
Systematic description of the effect of particle shape on the strength properties of granular media
NASA Astrophysics Data System (ADS)
Azéma, Emilien; Estrada, Nicolas; Preechawuttipong, Itthichai; Delenne, Jean-Yves; Radjai, Farhang
2017-06-01
In this paper, we explore numerically the effect of particle shape on the mechanical behavior of sheared granular packings. In the framework of the Contact Dynamic (CD)Method, we model angular shape as irregular polyhedral particles, non-convex shape as regular aggregates of four overlapping spheres, elongated shape as rounded cap rectangles and platy shape as square-plates. Binary granular mixture consisting of disks and elongated particles are also considered. For each above situations, the number of face of polyhedral particles, the overlap of spheres, the aspect ratio of elongated and platy particles, are systematically varied from spheres to very angular, non-convex, elongated and platy shapes. The level of homogeneity of binary mixture varies from homogenous packing to fully segregated packings. Our numerical results suggest that the effects of shape parameters are nonlinear and counterintuitive. We show that the shear strength increases as shape deviate from spherical shape. But, for angular shapes it first increases up to a maximum value and then saturates to a constant value as the particles become more angular. For mixture of two shapes, the strength increases with respect of the increase of the proportion of elongated particles, but surprisingly it is independent with the level of homogeneity of the mixture. A detailed analysis of the contact network topology, evidence that various contact types contribute differently to stress transmission at the micro-scale.
Rafal Podlaski; Francis Roesch
2014-01-01
In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
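A two-component Weibull mixture density of the kind used for DBH distributions can be sketched as follows (cohort proportions, shapes, and scales are invented for illustration, not fitted values from the study):

```python
import numpy as np

# Hypothetical parameters standing in for a younger and an older cohort
w = (0.6, 0.4)      # cohort proportions
c = (2.0, 4.0)      # Weibull shapes
b = (12.0, 35.0)    # Weibull scales, cm

def weibull_pdf(x, shape, scale):
    return (shape / scale) * (x / scale) ** (shape - 1) * np.exp(-((x / scale) ** shape))

def mixture_pdf(x):
    return sum(wi * weibull_pdf(x, ci, bi) for wi, ci, bi in zip(w, c, b))

dbh = np.linspace(1.0, 60.0, 1000)
pdf = mixture_pdf(dbh)
area = (pdf * (dbh[1] - dbh[0])).sum()   # crude Riemann sum over the DBH range
print(area)   # ≈ 1: a proper (here bimodal) density
```

In practice the component parameters would be estimated from the empirical DBH data, e.g. by maximum likelihood, with each component interpreted as one cohort.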
A general mixture model and its application to coastal sandbar migration simulation
NASA Astrophysics Data System (ADS)
Liang, Lixin; Yu, Xiping
2017-04-01
A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. Firstly, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for the numerical solution. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for describing the water-air free surface and a model for the topographic response are coupled into the framework. The bed-load transport rate and the suspended-load entrainment rate are both determined by the seabed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurs on the sandbar slope facing away from the direction of wave propagation, while deposition dominates on the slope facing the incoming waves, indicating an onshore migration tendency.
The computed results also show that the suspended load makes a significant contribution to topography change in the surf zone, a contribution that has often been neglected in previous research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es
2013-06-01
Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are exposed daily to mixtures of chemicals disrupting thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised serious concern about potential additive or synergistic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture of MMI (TPO inhibitor) and KClO{sub 4} (NIS inhibitor) dosed at a fixed ratio of the EC{sub 10}, which yielded similar CA and RA predictions, so no conclusive result could be drawn. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergistic or additive effects of mixtures of chemicals on thyroid function. • Zebrafish as an alternative model for testing the effect of mixtures of goitrogens.
• Concentration addition seems to better predict the effect of mixtures of goitrogens.
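For readers unfamiliar with the two additivity concepts compared above, a minimal numeric sketch (hypothetical EC50s and doses, with simple Hill curves of slope 1 assumed for both chemicals):

```python
# Hypothetical EC50s (arbitrary units) for two thyroid-disrupting chemicals,
# assuming each follows E(c) = c / (c + EC50)
ec50_a, ec50_b = 2.0, 8.0
c_a, c_b = 1.0, 4.0          # doses of each chemical in the mixture

# Concentration addition (CA): doses add on a toxic-unit scale
tu = c_a / ec50_a + c_b / ec50_b
e_ca = tu / (1.0 + tu)

# Response addition / independent action (RA): effects combine probabilistically
e_a = c_a / (c_a + ec50_a)
e_b = c_b / (c_b + ec50_b)
e_ra = 1.0 - (1.0 - e_a) * (1.0 - e_b)

print(round(e_ca, 3), round(e_ra, 3))   # 0.5 0.556
```

The two models diverge most when the components act on different targets, which is exactly the contrast the study exploits between TPO and NIS inhibitors.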
Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.
Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong
2018-03-01
Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity with different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction. The mixture component accommodates the global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation, and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was more complex owing to the information borrowed from neighboring years; this addition of parameters yielded a significant advantage in posterior deviance, which in turn benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to their much lower deviance.
For cross-validation comparison of predictive accuracy, linear time trend model was adjudged the best as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated ones. The linear model again performed the best in most scenarios except one case of using model replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of linear trend when random effects were excluded for evaluation. This might be due to the flexible mixture space-time interaction which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models as the mixture ones generated more precise estimated crash counts across all four models, suggesting that the advantages associated with mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of random effect models which validates the importance of incorporating the correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Molenaar, Dylan; de Boeck, Paul
2018-06-01
In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
Accuracy metrics for judging time scale algorithms
NASA Technical Reports Server (NTRS)
Douglas, R. J.; Boulanger, J.-S.; Jacques, C.
1994-01-01
Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10(exp -15) for periods of 30-100 days.
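As a small illustration of the frequency-stability statistics underlying such noise models (not the authors' algorithm), the Allan variance can be computed from fractional-frequency data and sanity-checked against the closed form for pure frequency drift; all numbers below are invented:

```python
import numpy as np

def allan_var(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (tau = m * tau0)."""
    n = len(y) // m
    yb = y[: n * m].reshape(n, m).mean(axis=1)   # block (tau-average) values
    return 0.5 * np.mean(np.diff(yb) ** 2)

tau0 = 1.0
t = np.arange(10_000) * tau0

# Closed-form check: pure drift y = d*t gives sigma_y(tau) = d * tau / sqrt(2)
d = 1e-6
sigma_100 = np.sqrt(allan_var(d * t, 100))
print(sigma_100, d * 100 * tau0 / np.sqrt(2))    # the two agree

# A mixture of white FM and random-walk FM, two of the noise types above
rng = np.random.default_rng(3)
y = rng.normal(0.0, 1e-12, t.size) + np.cumsum(rng.normal(0.0, 1e-14, t.size))
print(np.sqrt(allan_var(y, 10)))   # short-tau value, dominated by white FM
```

Evaluating such statistics across averaging times is how the different power-law noise components in the full noise model are separated in practice.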
Dielectric relaxation in ionic liquid/dipolar solvent binary mixtures: A semi-molecular theory.
Daschakraborty, Snehasis; Biswas, Ranjit
2016-03-14
A semi-molecular theory is developed here for studying dielectric relaxation (DR) in binary mixtures of ionic liquids (ILs) with common dipolar solvents. The effects of ion translation on the DR time scale, and those of ion rotation on the conductivity relaxation time scale, are explored. Two different models are considered for the theoretical calculations: (i) a separate-medium approach, where the molecularities of both the IL and the dipolar solvent molecules are retained, and (ii) an effective-medium approach, where the added dipolar solvent molecules are assumed to combine with the dipolar ions of the IL, producing a fictitious effective medium characterized by an effective dipole moment, density, and diameter. Semi-molecular expressions for the diffusive DR times have been derived which incorporate the effects of wavenumber-dependent orientational static correlations, ion dynamic structure factors, and ion translation. Subsequently, the theory has been applied to binary mixtures of 1-butyl-3-methylimidazolium tetrafluoroborate ([Bmim][BF4]) with water (H2O) and acetonitrile (CH3CN), for which experimental DR data are available. On comparison, the predicted DR time scales show close agreement with the measured DR times at low IL mole fractions (x(IL)). At higher IL concentrations (x(IL) > 0.05), the theory over-estimates the relaxation times and increasingly deviates from the measurements with x(IL), the deviation reaching its maximum, almost two orders of magnitude, for the neat IL. The theory predicts negligible contributions to this deviation from the x(IL)-dependent collective orientational static correlations. The drastic difference between theoretical and experimental DR time scales for IL/solvent mixtures arises primarily from the use of the actual molecular volume (V(mol)(dip)) for the rotating dipolar moiety in the present theory, and suggests that only a fraction of V(mol)(dip) is involved at high x(IL).
As expected, good agreement between theory and experiments appears when experimental estimates for the effective rotational volume (V(eff)(dip)) are used as inputs. The fraction V(eff)(dip)/V(mol)(dip) sharply decreases from ∼1 in the pure dipolar solvent to ∼0.01 in the neat IL, reflecting a dramatic crossover from viscosity-coupled hydrodynamic angular diffusion at low IL mole fractions to orientational relaxation predominantly via large-angle jumps at high x(IL). Similar results are obtained on applying the present theory to an aqueous solution of the electrolyte guanidinium chloride (GdmCl), which has a permanent dipole moment associated with the cation, Gdm(+).
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
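The resulting mixture-of-exponentials fit can be sketched with a short EM routine on synthetic lifetimes (all numbers are invented; the paper's urn process is not reproduced here, only the form of the survival model it generates):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic "lifetimes" drawn from two exponential regimes,
# e.g. short-lived versus long-lived search queries
t = np.concatenate([rng.exponential(1.0, 4000), rng.exponential(10.0, 4000)])

# EM for a two-component exponential mixture (closed-form M-step)
lam = np.array([2.0, 0.05])          # initial rates (arbitrary)
w = np.array([0.5, 0.5])
for _ in range(300):
    dens = w * lam * np.exp(-np.outer(t, lam))
    r = dens / dens.sum(axis=1, keepdims=True)      # responsibilities
    w = r.mean(axis=0)
    lam = r.sum(axis=0) / (r * t[:, None]).sum(axis=0)

# The fitted survival function is S(t) = sum_k w_k exp(-lam_k t)
means = np.sort(1.0 / lam)
print(means)   # close to the generating means (1, 10)
```

The heterogeneity argument in the abstract corresponds exactly to the two recovered rate components: a single exponential cannot match both regimes at once.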
Structure-reactivity modeling using mixture-based representation of chemical reactions.
Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre
2017-09-01
We describe a novel approach of reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating the product and reactant descriptors or taking the difference between the descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain-applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
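The two feature-vector variants can be illustrated with toy fragment counts (hypothetical fragments invented for this sketch; real SiRMS descriptors are far higher-dimensional):

```python
import numpy as np

# Toy fragment-count vectors standing in for the simplex descriptors
fragments = ["C-C", "C=O", "C-O", "O-H"]
reactants = np.array([3, 1, 0, 1])   # summed over the reactant mixture
products = np.array([3, 0, 1, 0])    # summed over the product mixture

# The two feature-vector variants from the abstract
concat = np.concatenate([reactants, products])     # "concatenated" variant
diff = products - reactants                        # "difference" variant

# The difference vector flags which fragments are consumed or formed,
# with no explicit reaction-centre labelling needed
print(diff.tolist())   # [0, -1, 1, -1]
```

The difference variant is the more compact of the two; the concatenated variant preserves the absolute composition of each side of the reaction.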
Metastable liquid lamellar structures in binary and ternary mixtures of Lennard-Jones fluids
NASA Astrophysics Data System (ADS)
Díaz-Herrera, Enrique; Ramírez-Santiago, Guillermo; Moreno Razo, José A.
2004-03-01
We have carried out extensive equilibrium MD simulations to investigate liquid-vapor coexistence in partially miscible binary and ternary mixtures of LJ fluids. We have studied in detail the time evolution of the density profiles and the interfacial properties in a temperature region of the phase diagram where the condensed phase is demixed. The compositions of the mixtures are fixed: 50% for the binary mixture and 33.33% for the ternary mixture. The results of the simulations clearly indicate that in the range of temperatures 78 K < T < 102 K (in the scale of argon), the system evolves towards a metastable alternating liquid-liquid lamellar state in coexistence with its vapor phase. These states can be reached if the initial configuration is fully disordered, that is, when the particles of the fluids are randomly placed on the sites of an FCC crystal or the system is completely mixed. As the temperature decreases these states become very well defined and more stable in time. We find that below 90 K, the alternating liquid-liquid lamellar state remains alive for 80 ns (in the scale of argon), the longest simulation we have carried out. Nonetheless, we believe that in this temperature region these states will survive for much longer times.
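For reference, pair interactions in such binary LJ mixtures are commonly built from the 12-6 potential with Lorentz-Berthelot combining rules; a small sketch in reduced units (the parameters are illustrative, not those of the study, and partial miscibility is typically induced by weakening the cross interaction below the Berthelot value):

```python
import numpy as np

def lj(r, eps, sigma):
    """12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Lorentz-Berthelot combining rules for the A-B cross interaction
eps_aa, sig_aa = 1.0, 1.0
eps_bb, sig_bb = 0.8, 1.1
eps_ab = np.sqrt(eps_aa * eps_bb)        # Berthelot rule
sig_ab = 0.5 * (sig_aa + sig_bb)         # Lorentz rule

# The potential minimum sits at r = 2**(1/6) * sigma with depth -eps
r_min = 2.0 ** (1.0 / 6.0) * sig_ab
print(lj(r_min, eps_ab, sig_ab))   # equals -eps_ab
```

Scaling `eps_ab` down relative to the Berthelot value penalizes A-B contacts, which is what drives the demixing behind the lamellar states described above.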
An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models
ERIC Educational Resources Information Center
Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol
2016-01-01
The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…
ERIC Educational Resources Information Center
Liu, Junhui
2012-01-01
The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…
Effects of three veterinary antibiotics and their binary mixtures on two green alga species.
Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A
2018-03-01
The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) have been examined in two green algae representative of the freshwater environment, the international standard strain Pseudokirchneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic in A. fusiformis than in the standard strain. The toxicological interactions of binary mixtures were predicted using the two classical models of additivity, Concentration Addition (CA) and Independent Action (IA), and compared to the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L-1. The CA model predicted the inhibition of algal growth in the three mixtures in P. subcapitata, and in the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained in the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of the toxic units (TU) for the mixtures was calculated. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect, and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas the three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed a sensitivity similar to that of P. subcapitata when exposed to binary mixtures of antibiotics.
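The two additivity models and the toxic-unit bookkeeping used above can be sketched as follows; all EC50 and concentration values passed in would be hypothetical placeholders, not the study's measurements.

```python
def ca_ec50(fractions, ec50s):
    """Concentration Addition: EC50 of the mixture from component EC50s.
    fractions: mixture proportions p_i (summing to 1); ec50s: component EC50s."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_effect(effects):
    """Independent Action: mixture effect from individual effects E_i in [0, 1],
    E_mix = 1 - prod(1 - E_i)."""
    prod = 1.0
    for e in effects:
        prod *= (1.0 - e)
    return 1.0 - prod

def toxic_units(concentrations, ec50s):
    """Sum of toxic units c_i / EC50_i; a sum near 1 at an equi-effective
    mixture concentration suggests additivity, >1 synergism, <1 antagonism."""
    return sum(c / e for c, e in zip(concentrations, ec50s))
```

For example, a 50/50 mixture of two components with equal (hypothetical) EC50s of 2 mg/L has a CA-predicted mixture EC50 of 2 mg/L, and two independent 50% effects combine under IA to a 75% effect.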
Ostoja, Steven M.; Schupp, Eugene W.; Klinger, Rob
2013-01-01
Granivore foraging decisions affect consumer success and determine the quantity and spatial pattern of seed survival. These decisions are influenced by environmental variation at spatial scales ranging from landscapes to local foraging patches. In a field experiment, the effects of seed patch variation across three spatial scales on seed removal by western harvester ants Pogonomyrmex occidentalis were evaluated. At the largest scale we assessed harvesting in different plant communities, at the intermediate scale we assessed harvesting at different distances from ant mounds, and at the smallest scale we assessed the effects of interactions among seed species in local seed neighborhoods on seed harvesting (i.e. resource–consumer interface). Selected seed species were presented alone (monospecific treatment) and in mixture with Bromus tectorum (cheatgrass; mixture treatment) at four distances from P. occidentalis mounds in adjacent intact sagebrush and non-native cheatgrass-dominated communities in the Great Basin, Utah, USA. Seed species differed in harvest, with B. tectorum being least preferred. Large and intermediate scale variation influenced harvest. More seeds were harvested in sagebrush than in cheatgrass-dominated communities (largest scale), and the quantity of seed harvested varied with distance from mounds (intermediate-scale), although the form of the distance effect differed between plant communities. At the smallest scale, seed neighborhood affected harvest, but the patterns differed among seed species considered. Ants harvested fewer seeds from mixed-seed neighborhoods than from monospecific neighborhoods, suggesting context dependence and potential associational resistance. Further, the effects of plant community and distance from mound on seed harvest in mixtures differed from their effects in monospecific treatments. 
Beyond the local seed neighborhood, selection of seed resources is better understood by simultaneously evaluating removal at multiple scales. Associational effects provide a useful theoretical basis for better understanding harvester ant foraging decisions. These results demonstrate the importance of ecological context for seed removal, which has implications for seed pools, plant populations and communities.
Two-Phase Solid/Fluid Simulation of Dense Granular Flows With Dilatancy Effects
NASA Astrophysics Data System (ADS)
Mangeney, Anne; Bouchut, Francois; Fernandez-Nieto, Enrique; Narbona-Reina, Gladys; Kone, El Hadj
2017-04-01
Describing grain/fluid interaction in debris flow models is still an open and challenging issue with key impact on hazard assessment [1]. We present here a two-phase two-thin-layer model for fluidized debris flows that takes into account dilatancy effects. It describes the velocity of both the solid and the fluid phases, the compression/dilatation of the granular media and its interaction with the pore fluid pressure [2]. The model is derived from a 3D two-phase model proposed by Jackson [3] and the mixture equations are closed by a weak compressibility relation. This relation implies that the occurrence of dilation or contraction of the granular material in the model depends on whether the solid volume fraction is respectively higher or lower than a critical value. When dilation occurs, the fluid is sucked into the granular material, the pore pressure decreases and the friction force on the granular phase increases. On the contrary, in the case of contraction, the fluid is expelled from the mixture, the pore pressure increases and the friction force diminishes. To account for this transfer of fluid into and out of the mixture, a two-layer model is proposed with a fluid or a solid layer on top of the two-phase mixture layer. Mass and momentum conservation are satisfied for the two phases, and mass and momentum are transferred between the two layers. A thin-layer approximation is used to derive average equations. Special attention is paid to the drag friction terms that are responsible for the transfer of momentum between the two phases and for the appearance of an excess pore pressure with respect to the hydrostatic pressure. Interestingly, when removing the role of water, our model reduces to a dry granular flow model including dilatancy. We first compare experimental and numerical results of dilatant dry granular flows.
Then, by quantitatively comparing the results of simulation and laboratory experiments on submerged granular flows, we show that our model contains the basic ingredients making it possible to reproduce the interaction between the granular and fluid phases through the change in pore fluid pressure. In particular, we analyse the different time scales in the model and their role in granular/fluid flow dynamics. References [1] R. Delannay, A. Valance, A. Mangeney, O. Roche, P. Richard, J. Phys. D: Appl. Phys., in press (2016). [2] F. Bouchut, E. D. Fernández-Nieto, A. Mangeney, G. Narbona-Reina, J. Fluid Mech., 801, 166-221 (2016). [3] R. Jackson, Cambridge Monographs on Mechanics (2000).
Cappello, Carmelina; Tremolaterra, Fabrizio; Pascariello, Annalisa; Ciacci, Carolina; Iovino, Paola
2013-03-01
The aim of this study is to test in a double-blinded, randomised placebo-controlled study the effects of a commercially available multi-strain symbiotic mixture on symptoms, colonic transit and quality of life in irritable bowel syndrome (IBS) patients who meet Rome III criteria. There is only one other double-blinded RCT on a single-strain symbiotic mixture in IBS. This is a double-blinded, randomised placebo-controlled study of a symbiotic mixture (Probinul, 5 g bid) over 4 weeks after 2 weeks of run-in. The primary endpoints were global satisfactory relief of abdominal flatulence and bloating. Responders were patients who reported at least 50 % of the weeks of treatment with global satisfactory relief. The secondary endpoints were change in abdominal bloating, flatulence, pain and urgency by a 100-mm visual analog scale, stool frequency and bowel functions on validated adjectival scales (Bristol Scale and sense of incomplete evacuation). Pre- and post-treatment colonic transit time (Metcalf) and quality of life (SF-36) were assessed. Sixty-four IBS patients (symbiotic n = 32, 64 % females, mean age 38.7 ± 12.6 years) were studied. This symbiotic mixture reduced flatulence over a 4-week period of treatment (repeated-measures analysis of covariance, p < 0.05). Proportions of responders were not significantly different between groups. At the end of the treatment, a longer rectosigmoid transit time and a significant improvement in most SF-36 scores were observed in the symbiotic group. This symbiotic mixture has shown a beneficial effect in decreasing the severity of flatulence in IBS patients, a lack of adverse events and a good side-effect profile; however, it failed to achieve an improvement in global satisfactory relief of abdominal flatulence and bloating. Further studies are warranted.
General Blending Models for Data From Mixture Experiments
Brown, L.; Donev, A. N.; Bissett, A. C.
2015-01-01
We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
Probing the early stages of shock-induced chondritic meteorite formation at the mesoscale
Rutherford, Michael E.; Chapman, David J.; Derrick, James G.; Patten, Jack R. W.; Bland, Philip A.; Rack, Alexander; Collins, Gareth S.; Eakins, Daniel E.
2017-01-01
Chondritic meteorites are fragments of asteroids, the building blocks of planets, that retain a record of primordial processes. Important in their early evolution was impact-driven lithification, where a porous mixture of millimetre-scale chondrule inclusions and sub-micrometre dust was compacted into rock. In this Article, the shock compression of analogue precursor chondrite material was probed using state-of-the-art dynamic X-ray radiography. Spatially resolved shock and particle velocities, and shock front thicknesses, were extracted directly from the radiographs, a far greater scope of data than could be measured in surface-based studies. A statistical interpretation of the measured velocities showed that mean values were in good agreement with those predicted using continuum-level modelling and mixture theory. However, the distribution and evolution of wave velocities and wavefront thicknesses were observed to be intimately linked to the mesoscopic structure of the sample. This Article provides the first detailed experimental insight into the distribution of extreme states within a shocked powder mixture, and represents the first mesoscopic validation of leading theories concerning the variation in extreme pressure-temperature states during the formation of primordial planetary bodies. PMID:28555619
NASA Astrophysics Data System (ADS)
Lempert, Walter; Uddi, Mruthunjaya; Mintusov, Eugene; Jiang, Naibo; Adamovich, Igor
2007-10-01
Two-Photon Laser-Induced Fluorescence (TALIF) is used to measure time-dependent absolute oxygen atom concentrations in O2/He, O2/N2, and CH4/air plasmas produced with a 20 nanosecond duration, 20 kV pulsed discharge at a 10 Hz repetition rate. Xenon-calibrated spectra show that a single discharge pulse creates an initial oxygen dissociation fraction of ˜0.0005 for air-like mixtures at 40-60 torr total pressure. The peak O atom concentration is a factor of approximately two lower in fuel-lean (φ=0.5) methane/air mixtures. In helium buffer, the initially formed atomic oxygen decays monotonically, with a decay time consistent with the formation of ozone. In all nitrogen-containing mixtures, atomic oxygen concentrations are found to initially increase, on time scales of the order of 10-100 microseconds, presumably due to additional O2 dissociation caused by collisions with electronically excited nitrogen. Further evidence of the role of metastable N2 comes from time-dependent N2 second positive and NO gamma band emission spectroscopy. Comparisons with modeling predictions show qualitative, but not quantitative, agreement with the experimental data.
Experimental determination of useful resistance value during pasta dough kneading
NASA Astrophysics Data System (ADS)
Podgornyj, Yu I.; Martynova, T. G.; Skeeba, V. Yu; Kosilov, A. S.; Chernysheva, A. A.; Skeeba, P. Yu
2017-10-01
There is a large quantity of materials produced in the form of dry powder or low-humidity granulated masses in the modern market, and there is a need to develop new manufacturing machinery and to renew the existing facilities involved in the production of various loose mixtures. One of the machinery upgrading tasks is enhancing its performance. Because experimental research is not feasible on full-scale samples, an experimental installation was constructed; the article contains its kinematic scheme and 3D model. The angle of the kneading blade, the volume of the loose mixture, the rotating frequency and the number of double passes of the work member were chosen as the experimental variables. A two-stage experimental technique, covering the rotary and reciprocating movement of the work member, was proposed. Processing of the experimental data yields correlations between the load characteristics of the mixer work member and the blade angle, the mixture volume and the rotating frequency of the work member, allowing loads to be recalculated for machines of this type.
Coupland, N; Zedkova, L; Sanghera, G; Leyton, M; Le Mellédo, J M
2001-01-01
OBJECTIVE: To assess the effects of the acute depletion of the catecholamine precursors phenylalanine and tyrosine on mood and pentagastrin-induced anxiety. DESIGN: Randomized, double-blind controlled multiple crossover study. SETTING: University department of psychiatry. PARTICIPANTS: 6 healthy male volunteers. INTERVENTIONS: 3 treatments were compared: pretreatment with a nutritionally balanced amino acid mixture, followed 5 hours later by a bolus injection of normal saline placebo; pretreatment with a balanced amino acid mixture, followed by a bolus injection of pentagastrin (0.6 microgram/kg); and pretreatment with an amino acid mixture without the catecholamine precursors phenylalanine or tyrosine, followed by pentagastrin (0.6 microgram/kg). OUTCOME MEASURES: Scores on the panic symptom scale, a visual analogue scale for anxiety, the Borg scale of respiratory exertion and the Profile of Mood States Elation-Depression Scale. RESULTS: Pentagastrin produced the expected increases in anxiety symptoms, but there was no significant or discernible influence of acute phenylalanine and tyrosine depletion on anxiety or mood. CONCLUSIONS: These pilot data do not support further study using the same design in healthy men. Under these study conditions, phenylalanine and tyrosine depletion may have larger effects on dopamine than noradrenaline. Alternative protocols to assess the role of catecholamines in mood and anxiety are proposed. PMID:11394194
Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
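A minimal one-dimensional sketch of the transform-then-cluster idea follows. This is an illustration only: the proposed method uses multivariate t components with the Box-Cox transformation selected within EM, which is not reproduced here; the sketch applies a fixed log transform (the Box-Cox case lambda = 0) to skewed positive data and then fits a plain two-component normal mixture by EM.

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform of positive data (lam = 0 gives the log transform)."""
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def em_two_normals(y, n_iter=200):
    """Minimal EM for a 1D two-component normal mixture (illustration only;
    the cited work uses multivariate t components, not reproduced here)."""
    mu = np.array([y.min(), y.max()])          # crude initialisation
    sd = np.array([y.std(), y.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: component responsibilities for each observation
        dens = np.array([w[k] / (sd[k] * np.sqrt(2 * np.pi))
                         * np.exp(-0.5 * ((y - mu[k]) / sd[k]) ** 2)
                         for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: weighted updates of weights, means, standard deviations
        nk = resp.sum(axis=1)
        w = nk / len(y)
        mu = (resp * y).sum(axis=1) / nk
        sd = np.sqrt((resp * (y - mu[:, None]) ** 2).sum(axis=1) / nk)
    return w, mu, sd

rng = np.random.default_rng(0)
# Skewed positive data: two lognormal-like clusters
x = np.concatenate([np.exp(rng.normal(0.0, 0.3, 300)),
                    np.exp(rng.normal(2.0, 0.3, 300))])
y = box_cox(x, 0.0)      # the log transform removes the skew
w, mu, sd = em_two_normals(y)
```

On the transformed scale the two cluster means are recovered near 0 and 2; fitting the same mixture to the untransformed skewed data would give a much poorer representation, which is the motivation for coupling the transformation to the mixture fit.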
The Penetration of Solar Radiation Into Carbon Dioxide Ice
NASA Astrophysics Data System (ADS)
Chinnery, H. E.; Hagermann, A.; Kaufmann, E.; Lewis, S. R.
2018-04-01
Icy surfaces behave differently to rocky or regolith-covered surfaces in response to irradiation. A key factor is the ability of visible light to penetrate partially into the subsurface. This results in the solid-state greenhouse effect, as ices can be transparent or translucent to visible and shorter wavelengths, while opaque in the infrared. This can lead to significant differences in shallow subsurface temperature profiles when compared to rocky surfaces. Of particular significance for modeling the solid-state greenhouse effect is the e-folding scale, otherwise known as the absorption scale length, or penetration depth, of the ice. While there have been measurements for water ice and snow, pure and with mixtures, to date, there have been no such measurements published for carbon dioxide ice. After an extensive series of measurements we are able to constrain the e-folding scale of CO2 ice for the cumulative wavelength range 300 to 1,100 nm, which is a vital parameter in heat transfer models for the Martian surface, enabling us to better understand surface-atmosphere interactions at Mars' polar caps.
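The e-folding scale enters heat-transfer models through simple exponential attenuation of the penetrating flux with depth. A minimal sketch follows; the e-folding length used below is a hypothetical placeholder, not a measured value from the study.

```python
import math

def transmitted_fraction(depth_mm, e_folding_mm):
    """Fraction of incident flux surviving to a given depth, assuming simple
    exponential (Beer-Lambert-like) attenuation: I/I0 = exp(-z/L)."""
    return math.exp(-depth_mm / e_folding_mm)

L = 10.0                                  # hypothetical e-folding scale in mm
frac_at_L = transmitted_fraction(L, L)    # by definition, 1/e of the flux
```

By construction, at one e-folding length the flux has dropped to 1/e (about 37%) of its surface value; this is the depth scale that controls where solid-state greenhouse heating is deposited in the subsurface.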
Substructure of fuzzy dark matter haloes
NASA Astrophysics Data System (ADS)
Du, Xiaolong; Behrens, Christoph; Niemeyer, Jens C.
2017-02-01
We derive the halo mass function (HMF) for fuzzy dark matter (FDM) by solving the excursion set problem explicitly with a mass-dependent barrier function, which has not been done before. We find that compared to the naive approach of the Sheth-Tormen HMF for FDM, our approach has a higher cutoff mass and the cutoff mass changes less strongly with redshifts. Using merger trees constructed with a modified version of the Lacey & Cole formalism that accounts for suppressed small-scale power and the scale-dependent growth of FDM haloes and the semi-analytic GALACTICUS code, we study the statistics of halo substructure including the effects from dynamical friction and tidal stripping. We find that if the dark matter is a mixture of cold dark matter (CDM) and FDM, there will be a suppression on the halo substructure on small scales which may be able to solve the missing satellites problem faced by the pure CDM model. The suppression becomes stronger with increasing FDM fraction or decreasing FDM mass. Thus, it may be used to constrain the FDM model.
Overview of human health and chemical mixtures: problems facing developing countries.
Yáñez, Leticia; Ortiz, Deogracias; Calderón, Jaqueline; Batres, Lilia; Carrizales, Leticia; Mejía, Jesús; Martínez, Lourdes; García-Nieto, Edelmira; Díaz-Barriga, Fernando
2002-01-01
In developing countries, chemical mixtures within the vicinity of small-scale enterprises, smelters, mines, agricultural areas, toxic waste disposal sites, etc., often present a health hazard to the populations within those vicinities. Therefore, in these countries, there is a need to study the toxicological effects of mixtures of metals, pesticides, and organic compounds. However, the study of mixtures containing substances such as DDT (dichlorodiphenyltrichloroethane, an insecticide banned in developed nations), and mixtures containing contaminants such as fluoride (of concern only in developing countries) merit special attention. Although the studies may have to take into account simultaneous exposures to metals and organic compounds, there is also a need to consider the interaction between chemicals and other specific factors such as nutritional conditions, alcoholism, smoking, infectious diseases, and ethnicity. PMID:12634117
Mixed-up trees: the structure of phylogenetic mixtures.
Matsen, Frederick A; Mossel, Elchanan; Steel, Mike
2008-05-01
In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.
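On a single edge, the CFN (two-state symmetric) model and the mimicry of a branch-length mixture by one intermediate length can be sketched as follows. This is a one-dimensional analogue of the mimicry results above; the quartet-tree case, where mixtures on a star tree mimic a resolved tree, is richer and is what the polytope criterion addresses.

```python
import math

def cfn_diff_prob(t):
    """CFN probability that the two endpoints of a branch of length t are in
    different states: p(t) = (1 - exp(-2t)) / 2."""
    return 0.5 * (1.0 - math.exp(-2.0 * t))

def mimicking_length(p):
    """Invert p(t): the single branch length with difference probability p."""
    return -0.5 * math.log(1.0 - 2.0 * p)

# A 50/50 mixture of two branch lengths on one edge...
p_mix = 0.5 * cfn_diff_prob(0.1) + 0.5 * cfn_diff_prob(0.5)
# ...is mimicked exactly by a single intermediate branch length:
t_star = mimicking_length(p_mix)
```

Because the single-edge pattern distribution is one-dimensional, any mixture of branch lengths is indistinguishable from one suitably chosen length; identifiability questions only become nontrivial on larger trees.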
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space, enabling prediction of the entire response surface. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
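The effect of augmenting a minimally supported design with an interior point can be sketched via the D-criterion det(X'X) for the three-component Scheffé quadratic model. This is a simplified illustration under assumed design points (the {3,2} simplex-lattice plus the overall centroid); Cornell's specific ten-point designs are not reproduced.

```python
import numpy as np

def scheffe_quadratic_matrix(points):
    """Model matrix for the 3-component Scheffe quadratic mixture model:
    terms x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept)."""
    return np.array([[x1, x2, x3, x1 * x2, x1 * x3, x2 * x3]
                     for (x1, x2, x3) in points])

# Minimally supported {3,2} simplex-lattice: 3 vertices + 3 edge midpoints,
# exactly as many points as model parameters (all on the simplex boundary).
minimal = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
           (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
# Augmented design: add the overall centroid as an interior point.
augmented = minimal + [(1 / 3, 1 / 3, 1 / 3)]

X_min = scheffe_quadratic_matrix(minimal)
X_aug = scheffe_quadratic_matrix(augmented)
d_min = np.linalg.det(X_min.T @ X_min)
d_aug = np.linalg.det(X_aug.T @ X_aug)
```

Adding any nonzero design point adds a positive semidefinite rank-one term to X'X, so the D-criterion can only increase; the interior point also buys a degree of freedom usable for a Lack of Fit test, which the saturated minimal design lacks.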
NASA Astrophysics Data System (ADS)
Co, Raymond T.; Harigaya, Keisuke; Nomura, Yasunori
2017-03-01
We present a simple and natural dark sector model in which dark matter particles arise as composite states of hidden strong dynamics and their stability is ensured by accidental symmetries. The model has only a few free parameters. In particular, the gauge symmetry of the model forbids the masses of dark quarks, and the confinement scale of the dynamics provides the unique mass scale of the model. The gauge group contains an Abelian symmetry U(1)_D, which couples the dark and standard model sectors through kinetic mixing. This model, despite its simple structure, has rich and distinctive phenomenology. In the case where the dark pion becomes massive due to U(1)_D quantum corrections, direct and indirect detection experiments can probe thermal relic dark matter which is generically a mixture of the dark pion and the dark baryon, and the Large Hadron Collider can discover the U(1)_D gauge boson. Alternatively, if the dark pion stays light due to a specific U(1)_D charge assignment of the dark quarks, then the dark pion constitutes dark radiation. The signal of this radiation is highly correlated with that of dark baryons in dark matter direct detection.
New approach in direct-simulation of gas mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren
1991-01-01
Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.
Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation
NASA Astrophysics Data System (ADS)
Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter
2016-11-01
Two common models describing gas mixtures are Dalton's law and Amagat's law (also known as the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models for predicting the effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate experimental data, possible sources of uncertainty associated with the experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of disparate gases: helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium and a virial EOS for SF6. The values for the properties provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
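With ideal-gas components the two laws coincide, which makes a useful sanity check; the distinction only appears once a real-gas EOS (such as the virial form used for SF6) replaces the ideal one. A minimal ideal-gas sketch, with made-up mixture conditions:

```python
R = 8.314462618  # J / (mol K)

def dalton_pressure(moles, volume_m3, temp_k):
    """Dalton's law with ideal-gas components: total pressure is the sum of
    partial pressures, each species filling the whole mixture volume."""
    return sum(n * R * temp_k / volume_m3 for n in moles)

def amagat_volume(moles, pressure_pa, temp_k):
    """Amagat's law with ideal-gas components: total volume is the sum of
    partial volumes, each species held at the full mixture pressure."""
    return sum(n * R * temp_k / pressure_pa for n in moles)

# Hypothetical He / SF6 mixture: 0.5 mol of each in 24 L at 300 K.
n = [0.5, 0.5]
p = dalton_pressure(n, 0.024, 300.0)   # Dalton: pressure at fixed V, T
v = amagat_volume(n, p, 300.0)         # Amagat: volume at that p, T
```

For ideal gases Amagat's construction returns the original volume exactly; with a virial EOS for SF6 the partial-pressure and partial-volume routes give different mixture states, which is precisely the discrepancy the shock experiments are designed to discriminate.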
Jammed Limit of Bijel Structure Formation
Welch, P. M.; Lee, M. N.; Parra-Vasquez, A. N. G.; ...
2017-11-02
Over the past decade, methods to control microstructure in heterogeneous mixtures by arresting spinodal decomposition via the addition of colloidal particles have led to an entirely new class of bicontinuous materials known as bijels. We present a new model for the development of these materials that yields to both numerical and analytical evaluation. This model reveals that a single dimensionless parameter that captures both chemical and environmental variables dictates the dynamics and ultimate structure formed in bijels. We also demonstrate that this parameter must fall within a fixed range in order for jamming to occur during spinodal decomposition, as well as show that known experimental trends for the characteristic domain sizes and time scales for formation are recovered by this model.
Generalized Pseudo-Reaction Zone Model for Non-Ideal Explosives
NASA Astrophysics Data System (ADS)
Wescott, B. L.
2007-12-01
The pseudo-reaction zone model was proposed to improve engineering scale simulations with high explosives that have a slow reaction component. In this work an extension of the pseudo-reaction zone model is developed for non-ideal explosives that propagate well below the steady-planar Chapman-Jouguet velocity. A programmed burn method utilizing Detonation Shock Dynamics (DSD) and a detonation velocity dependent pseudo-reaction rate has been developed for non-ideal explosives and applied to the explosive mixture of ammonium nitrate and fuel oil (ANFO). The pseudo-reaction rate is calibrated to the experimentally obtained normal detonation velocity—shock curvature relation. Cylinder test simulations predict the proper expansion to within 1% even though significant reaction occurs as the cylinder expands.
A multi agent model for the limit order book dynamics
NASA Astrophysics Data System (ADS)
Bartolozzi, M.
2010-11-01
In the present work we introduce a novel multi-agent model that aims to reproduce the dynamics of a double auction market at microscopic time scales through a faithful simulation of the matching mechanics in the limit order book. The agents follow a noisy decision-making process in which their actions are related to a stochastic variable, the market sentiment, which we define as a mixture of public and private information. The model, despite making just a few basic assumptions about the trading strategies of the agents, is able to reproduce several empirical features of the high-frequency dynamics of the market microstructure, related not only to the price movements but also to the deposition of orders in the book.
Stability of the accelerated expansion in nonlinear electrodynamics
NASA Astrophysics Data System (ADS)
Sharif, M.; Mumtaz, Saadia
2017-02-01
This paper is devoted to the phase space analysis of an isotropic and homogeneous model of the universe by taking a noninteracting mixture of the electromagnetic and viscous radiating fluids whose viscous pressure satisfies a nonlinear version of the Israel-Stewart transport equation. We establish an autonomous system of equations by introducing normalized dimensionless variables. In order to analyze the stability of the system, we find corresponding critical points for different values of the parameters. We also evaluate the power-law scale factor whose behavior indicates different phases of the universe in this model. It is concluded that the bulk viscosity as well as electromagnetic field enhances the stability of the accelerated expansion of the isotropic and homogeneous model of the universe.
Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong
2012-09-01
We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The proposed two-stage mixture model allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of exposure and health data. A simulation study is conducted to assess the performance of the two-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.
Generalized Pseudo-Reaction Zone Model for Non-Ideal Explosives
NASA Astrophysics Data System (ADS)
Wescott, Bradley
2007-06-01
The pseudo-reaction zone model was proposed to improve engineering scale simulations when using Detonation Shock Dynamics with high explosives that have a slow reaction component. In this work an extension of the pseudo-reaction zone model is developed for non-ideal explosives that propagate well below their steady-planar Chapman-Jouguet velocity. A programmed burn method utilizing Detonation Shock Dynamics and a detonation velocity dependent pseudo-reaction rate has been developed for non-ideal explosives and applied to the explosive mixture of ammonium nitrate and fuel oil (ANFO). The pseudo-reaction rate is calibrated to the experimentally obtained normal detonation velocity-shock curvature relation. The generalized pseudo-reaction zone model proposed here predicts the cylinder expansion to within 1% by accounting for the slow reaction in ANFO.
Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun
2015-03-13
In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
Some comments on thermodynamic consistency for equilibrium mixture equations of state
Grove, John W.
2018-03-28
We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.
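A minimal numerical sketch of the pressure/temperature equilibrium closure (with our own assumptions: two ideal-gas components with invented specific gas constants, not the paper's general EOS): the mixture specific volume is the mass-fraction average of the component specific volumes, all evaluated at the common (p, T), and the closure is solved for p.

```python
# Each component is evaluated at the common (p, T); the mixture specific
# volume is the mass-fraction average of the component specific volumes.
# Both components here are ideal gases with specific gas constants R_s.

def v_component(R_s, p, T):
    return R_s * T / p  # ideal-gas specific volume, m^3/kg

def mixture_pressure(y, R_specific, v_mix, T, lo=1.0, hi=1e9):
    """Solve v_mix = sum_i y_i v_i(p, T) for the common pressure p
    by bisection (v is monotone decreasing in p)."""
    for _ in range(200):
        p = 0.5 * (lo + hi)
        v = sum(yi * v_component(Ri, p, T) for yi, Ri in zip(y, R_specific))
        if v > v_mix:
            lo = p
        else:
            hi = p
    return 0.5 * (lo + hi)

y = [0.4, 0.6]          # mass fractions
R_s = [2077.0, 56.9]    # J/(kg K): roughly He and SF6
T, v_mix = 300.0, 0.5   # K, m^3/kg
p = mixture_pressure(y, R_s, v_mix, T)
```

For ideal gases the analytic answer is p = T * sum(y_i R_i) / v_mix, which the bisection reproduces; with real-gas components the same closure requires this kind of iterative solve.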
NASA Astrophysics Data System (ADS)
Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin
2016-03-01
A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemistry states are tabulated by the combination of an auto-ignition and an extended auto-ignition model. To evaluate the ability of the proposed tabulation method to represent the thermo-chemistry states under different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to account for the interaction of turbulence and flame, with a beta PDF modelling the distribution of the reaction progress variable. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, are applied to represent the interaction between the two mixture fractions associated with the three inlet streams. Comparisons of statistical results show that both presumed PDF models for the two mixture fractions are capable of predicting temperature and major species profiles; however, they have a significant effect on the predictions of intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and trend of intermediate species. Aspects regarding model extensions to adequately predict the peak location of intermediate species are discussed.
Hansen-Goos, Hendrik; Mortazavifar, Mostafa; Oettel, Martin; Roth, Roland
2015-05-01
Based on Santos' general solution for the scaled-particle differential equation [Phys. Rev. E 86, 040102(R) (2012)], we construct a free-energy functional for the hard-sphere system. The functional is obtained by a suitable generalization and extension of the set of scaled-particle variables using the weighted densities from Rosenfeld's fundamental measure theory for the hard-sphere mixture [Phys. Rev. Lett. 63, 980 (1989)]. While our general result applies to the hard-sphere mixture, we specify remaining degrees of freedom by requiring the functional to comply with known properties of the pure hard-sphere system. Both for mixtures and pure systems, the functional can be systematically extended following the lines of our derivation. We test the resulting functionals regarding their behavior upon dimensional reduction of the fluid as well as their ability to accurately describe the hard-sphere crystal and the liquid-solid transition.
Multiscale Modeling of Mesoscale and Interfacial Phenomena
NASA Astrophysics Data System (ADS)
Petsev, Nikolai Dimitrov
With rapidly emerging technologies that feature interfaces modified at the nanoscale, traditional macroscopic models are pushed to their limits to explain phenomena where molecular processes can play a key role. Often, such problems appear to defy explanation when treated with coarse-grained continuum models alone, yet remain prohibitively expensive from a molecular simulation perspective. A prominent example is surface nanobubbles: nanoscopic gaseous domains typically found on hydrophobic surfaces that have puzzled researchers for over two decades due to their unusually long lifetimes. We show how an entirely macroscopic, non-equilibrium model explains many of their anomalous properties, including their stability and abnormally small gas-side contact angles. From this purely transport perspective, we investigate how factors such as temperature and saturation affect nanobubbles, providing numerous experimentally testable predictions. However, recent work also emphasizes the relevance of molecular-scale phenomena that cannot be described in terms of bulk phases or pristine interfaces. This is true for nanobubbles as well, whose nanoscale heights may require molecular detail to capture the relevant physics, in particular near the bubble three-phase contact line. Therefore, there is a clear need for general ways to link molecular granularity and behavior with large-scale continuum models in the treatment of many interfacial problems. In light of this, we have developed a general set of simulation strategies that couple mesoscale particle-based continuum models to molecular regions simulated through conventional molecular dynamics (MD). In addition, we derived a transport model for binary mixtures that opens the possibility for a wide range of applications in biological and drug delivery problems, and is readily reconciled with our hybrid MD-continuum techniques. 
Approaches that couple multiple length scales for fluid mixtures are largely absent in the literature, and we provide a novel and general framework for multiscale modeling of systems featuring one or more dissolved species. This makes it possible to retain molecular detail for parts of the problem that require it while using a simple, continuum description for parts where high detail is unnecessary, reducing the number of degrees of freedom (i.e. number of particles) dramatically. This opens the possibility for modeling ion transport in biological processes and biomolecule assembly in ionic solution, as well as electrokinetic phenomena at interfaces such as corrosion. The number of particles in the system is further reduced through an integrated boundary approach, which we apply to colloidal suspensions. In this thesis, we describe this general framework for multiscale modeling single- and multicomponent systems, provide several simple equilibrium and non-equilibrium case studies, and discuss future applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teodori, Sven-Peter; Ruedi, Jorg; Reinhold, Matthias
2013-07-01
The main aim of a gas-permeable seal is to increase the gas transport capacity of the backfilled underground structures without compromising the radionuclide retention capacity of the engineered barrier system or the host rock. Such a seal, proposed by NAGRA as part of the 'Engineered Gas Transport System' in a L/ILW repository, considers specially designed backfill and sealing materials such as sand/bentonite (S/B) mixtures with a bentonite content of 20-30%. NAGRA's RD&D plan foresees demonstrating the construction and performance of repository seals and improving the understanding and the database for reliably predicting water and gas transport through these systems. The fluid flow and gas transport properties of these backfills have been determined at the laboratory scale, and the maximum gas pressures and gas flow rates in the near field of a repository system have been evaluated through modelling. Within this context, the Gas-permeable Seal Test (GAST) was constructed at the Grimsel Test Site (GTS) to validate the effective functioning of gas-permeable seals at realistic scale. The intrinsic permeability of such seals should be on the order of 10^-18 m^2. Because the construction of S/B seals is not common practice for construction companies, a stepwise approach was followed to evaluate different construction and quality assurance methods. As a first step, an investigation campaign with simple tests in the laboratory and in the field, followed by 1:1 scale pre-tests at GTS, was performed. Through this gradual increase in the degree of complexity, practical experience was gained and confidence in the methods and procedures to be used was built, which allowed reliably producing and working with S/B mixtures at a realistic scale. During the whole pre-testing phase, a quality assurance (QA) programme for S/B mixtures was developed and different methods were assessed.
They helped to evaluate and choose appropriate emplacement techniques and methodologies to achieve the target S/B dry density of 1.70 g/cm^3, which results in the desired intrinsic permeability throughout the experiment. The final QA methodology was targeted at engineering measures to decide if the work can proceed, and at producing a high-resolution material-properties database for future water and gas transport modelling activities. The different applied QA techniques included standard core cutter tests, the application of neutron-gamma (Troxler) probes, and two mass balance methods (2D and 3D). The methods, looking at different representative scales, provided only slightly different results and showed that the average density of the emplaced S/B plug was between 1.65 and 1.73 g/cm^3. Spatial variability of dry densities was observed at the decimeter scale. Overall, the pre-testing and QA programme performed for the GAST project demonstrated how the given design criteria and requirements can be met by appropriately planning and designing the material emplacement. (authors)
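The dry-density QA check described above reduces to standard soil-mechanics arithmetic; a sketch with invented numbers (the formula is generic, not taken from the report):

```python
# rho_d = (M_wet / V) / (1 + w): dry density from the bulk density and
# the gravimetric water content w.  Values below are illustrative only.

def dry_density(wet_mass_g, volume_cm3, water_content):
    return (wet_mass_g / volume_cm3) / (1.0 + water_content)

rho_d = dry_density(wet_mass_g=1890.0, volume_cm3=1000.0, water_content=0.11)
in_spec = 1.65 <= rho_d <= 1.73   # density band reported for the emplaced plug
```

The core cutter, Troxler-probe, and mass-balance methods all feed different mass/volume estimates into this same relation at different representative scales.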
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
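The LOD computation described above can be sketched as follows (a toy illustration with fixed, hypothetical mixture parameters; the actual interval-mapping likelihood weights the components by genotype probabilities, which is omitted here):

```python
# LOD score as the log10 likelihood ratio of a two-component normal
# mixture against a single-normal fit.  When the mixture collapses to
# one component the two models coincide and the LOD is zero; a freely
# fitted mixture can only do better, the source of the spurious peaks.
import math

def loglik_normal(y, mu, sd):
    return sum(-0.5 * math.log(2 * math.pi * sd**2)
               - (x - mu)**2 / (2 * sd**2) for x in y)

def loglik_mixture(y, w, mu1, mu2, sd):
    return sum(math.log(w * math.exp(-(x - mu1)**2 / (2 * sd**2))
                        + (1 - w) * math.exp(-(x - mu2)**2 / (2 * sd**2)))
               - 0.5 * math.log(2 * math.pi * sd**2) for x in y)

def lod(y, w, mu1, mu2, sd):
    mu = sum(y) / len(y)                                  # null: one normal, MLE fit
    s = math.sqrt(sum((x - mu)**2 for x in y) / len(y))
    return (loglik_mixture(y, w, mu1, mu2, sd)
            - loglik_normal(y, mu, s)) / math.log(10)

y = [1.1, 0.9, 1.0, 3.1, 2.9, 3.0]   # phenotypes from two genotype groups
score = lod(y, w=0.5, mu1=1.0, mu2=3.0, sd=0.1)
```

Because the single normal is nested in the mixture, a fitted mixture never scores below it, which is exactly why low-information regions can show high LOD values without any QTL.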
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.
Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters; thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. Here, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in Electroencephalogram (EEG) studies, as they offer better resolution and smoother spectra and are applicable to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders amplify noise. Conventional methods for estimating modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more quickly and correctly. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR-mixtures is assessed against several conventional methods utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC, and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
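As a baseline for what the mixture-of-orders approach competes against, here is a sketch of single-order AR fitting with AIC selection (our own least-squares implementation on synthetic data, not the article's EEG pipeline):

```python
# Fit AR(p) by least squares for several orders and pick the order with
# the lowest AIC.  Synthetic AR(2) data stand in for an EEG segment.
import math
import random

def solve(A, b):
    """Gauss-Jordan solve of A x = b with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_ar(x, p):
    """Return (AR coefficients, residual variance) for order p."""
    lags = [[x[t - k] for k in range(1, p + 1)] for t in range(p, len(x))]
    y = x[p:]
    A = [[sum(r[i] * r[j] for r in lags) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yt for r, yt in zip(lags, y)) for i in range(p)]
    phi = solve(A, b)
    res = [yt - sum(c * v for c, v in zip(phi, r)) for r, yt in zip(lags, y)]
    return phi, sum(e * e for e in res) / len(res)

def aic(x, p):
    _, s2 = fit_ar(x, p)
    return (len(x) - p) * math.log(s2) + 2 * p

random.seed(1)
x = [0.0, 0.0]
for _ in range(500):   # true model: x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + noise
    x.append(0.5 * x[-1] - 0.3 * x[-2] + random.gauss(0.0, 1.0))
best_order = min(range(1, 8), key=lambda p: aic(x, p))
```

The mixture approaches in the article combine features from several such fits rather than committing to `best_order` alone.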
A Quadriparametric Model to Describe the Diversity of Waves Applied to Hormonal Data.
Abdullah, Saman; Bouchard, Thomas; Klich, Amna; Leiva, Rene; Pyper, Cecilia; Genolini, Christophe; Subtil, Fabien; Iwaz, Jean; Ecochard, René
2018-05-01
Even in normally cycling women, hormone level shapes may widely vary between cycles and between women. Over decades, finding ways to characterize and compare cycle hormone waves was difficult, and most solutions, in particular polynomials or splines, do not correspond to physiologically meaningful parameters. We present an original concept to characterize most hormone waves with only two parameters. The modelling attempt considered pregnanediol-3-alpha-glucuronide (PDG) and luteinising hormone (LH) levels in 266 cycles (with ultrasound-identified ovulation day) in 99 normally fertile women aged 18 to 45. The study searched for a convenient wave description process and carried out an extended search for the best fitting density distribution. The highly flexible beta-binomial distribution offered the best fit of most hormone waves and required only two readily available and understandable wave parameters: location and scale. In bell-shaped waves (e.g., PDG curves), early peaks may be fitted with a low location parameter and a low scale parameter; plateau shapes are obtained with higher scale parameters. I-shaped, J-shaped, and U-shaped waves (sometimes the shapes of LH curves) may be fitted with a high scale parameter and, respectively, a low, high, or medium location parameter. These location and scale parameters will later be correlated with feminine physiological events. Our results demonstrate that, with unimodal waves, complex methods (e.g., functional mixed effects models using smoothing splines, second-order growth mixture models, or functional principal-component-based methods) may be avoided. The use, application, and, especially, result interpretation of four-parameter analyses might be advantageous within the context of feminine physiological events.
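To make the location/scale idea concrete, here is a sketch of beta-binomial waves over a 28-day cycle (the `alpha = loc/scale` parametrisation is our assumption for illustration; the paper's exact parametrisation may differ):

```python
# Beta-binomial probabilities over the n days of a cycle, driven by a
# location parameter (where the mass sits) and a scale parameter (how
# spread out it is).  Log-gamma keeps the beta-function ratios stable.
import math

def beta_binomial(k, n, loc, scale):
    """P(k) with alpha = loc/scale, beta = (1-loc)/scale: small scale
    gives a sharp peak near loc*n, large scale a plateau or U shape."""
    a, b = loc / scale, (1.0 - loc) / scale
    logB = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1)
                    + logB(k + a, n - k + b) - logB(a, b))

n = 28  # days
early_peak = [beta_binomial(k, n, loc=0.25, scale=0.02) for k in range(n + 1)]
u_shape    = [beta_binomial(k, n, loc=0.5,  scale=2.0)  for k in range(n + 1)]
```

Small `scale` with `loc = 0.25` gives the early sharp peak typical of a bell-shaped PDG-like curve; `scale` well above 1 pushes the mass to the end-points, the U shape sometimes seen for LH.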
In pervaporation, a liquid mixture contacts a membrane surface that preferentially permeates one of the liquid components as a vapor. Our approach to improving pervaporation performance is to replace the one-stage condenser traditionally used to condense the permeate with a frac...
Human Factors Engineering Bibliographic Series. Volume 2: 1960-1964 Literature
1966-10-01
flutter discrimination, melodic and temporal) binaural vs. monaural equipment and methods (e.g., anechoic chambers, audiometric devices, communication...brightness, duration, timbre, vocality) stimulus mixtures (e.g., harmonics, beats, combination tones, modulations) thresholds training, nonverbal--see Training...scales and aids) Beats--see Audition (stimulus mixtures) Bells--see Auditory (displays, nonverbal) Belts, Harnesses, and other Restraining Devices--see
NASA Astrophysics Data System (ADS)
Kim, Juntae; Helgeson, Matthew E.
2016-08-01
We investigate shear-induced clustering and its impact on fluid rheology in polymer-colloid mixtures at moderate colloid volume fraction. By employing a thermoresponsive system that forms associative polymer-colloid networks, we present experiments of rheology and flow-induced microstructure on colloid-polymer mixtures in which the relative magnitudes of the time scales associated with relaxation of viscoelasticity and suspension microstructure are widely and controllably varied. In doing so, we explore several limits of relative magnitude of the relevant dimensionless shear rates, the Weissenberg number Wi and the Péclet number Pe. In all of these limits, we find that the fluid exhibits two distinct regimes of shear thinning at relatively low and high shear rates, in which the rheology collapses by scaling with Wi and Pe, respectively. Using three-dimensionally-resolved flow small-angle neutron scattering measurements, we observe clustering of the suspension above a critical shear rate corresponding to Pe ≈ 0.1 over a wide range of fluid conditions, having anisotropy with projected orientation along both the vorticity and compressional axes of shear. The degree of anisotropy is shown to scale with Pe. From this we formulate an empirical model for the shear stress and viscosity, in which the viscoelastic network stress is augmented by an asymptotic shear thickening contribution due to hydrodynamic clustering. Overall, our results elucidate the significant role of hydrodynamic interactions in contributing to shear-induced clustering of Brownian suspensions in viscoelastic liquids.
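For orientation, the two dimensionless shear rates named above can be computed as follows (illustrative numbers of our choosing, with the standard Stokes-Einstein diffusivity; not the paper's data):

```python
# Wi compares the shear rate to the viscoelastic relaxation time;
# Pe compares it to Brownian diffusion of a colloid of radius a,
# using D0 = kT / (6 pi eta a).
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def weissenberg(shear_rate, relax_time):
    return shear_rate * relax_time

def peclet(shear_rate, radius, eta, T):
    d0 = KB * T / (6.0 * math.pi * eta * radius)  # Stokes-Einstein
    return shear_rate * radius ** 2 / d0

gdot = 10.0   # shear rate, 1/s (illustrative)
wi = weissenberg(gdot, relax_time=0.05)            # 50 ms network relaxation
pe = peclet(gdot, radius=100e-9, eta=1e-3, T=298.0)  # 100 nm colloid in water
```

Varying the network relaxation time while holding the particle size fixed is what lets the experiments separate the Wi-collapsed and Pe-collapsed shear-thinning regimes.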
Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun
2017-03-01
In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The values of effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; concentration-combinations of the three different UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. The interaction between the UV-filters was also assessed by the model deviation ratio (MDR) using observed and predicted toxicity values obtained from mixture-exposure tests and the CA model. The results from this study indicated that observed ECx,mix (e.g., EC10,mix, EC25,mix, or EC50,mix) values obtained from mixture-exposure tests were higher than the predicted values calculated by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three different UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings will provide important information for hazard or risk assessment of organic UV-filters when they exist together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required.
NASA Astrophysics Data System (ADS)
Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard
2016-08-01
Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent to the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.
Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten
2017-11-01
Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. 
Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.
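A sketch of the CA prediction and the MDR diagnostic using the single-compound LC50s quoted above (the mixture fractions and the "observed" value here are invented for illustration, not the study's measurements):

```python
# Concentration addition: 1 / LC50_mix = sum_i p_i / LC50_i, with p_i
# the fraction of component i in the mixture.  MDR compares predicted
# and observed mixture LC50s; by one common convention MDR > 1
# (observed more toxic than predicted) flags synergism.

def ca_lc50_mix(fractions, lc50s):
    return 1.0 / sum(p / l for p, l in zip(fractions, lc50s))

lc50 = [4.63, 5.93, 55.34]   # ug/L: imidacloprid, clothianidin, thiamethoxam
fractions = [1.0 / 3.0] * 3  # hypothetical equal-fraction ternary mixture
predicted = ca_lc50_mix(fractions, lc50)
observed = 5.0               # hypothetical measured mixture LC50 (ug/L)
mdr = predicted / observed
```

Because thiamethoxam is roughly ten times less toxic than the other two, its fraction dominates how far the CA prediction moves, which is consistent with the dose-ratio-dependent deviations reported above.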
Monte Carlo study of four dimensional binary hard hypersphere mixtures
NASA Astrophysics Data System (ADS)
Bishop, Marvin; Whitlock, Paula A.
2012-01-01
A multithreaded Monte Carlo code was used to study the properties of binary mixtures of hard hyperspheres in four dimensions. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, 0.6, and 0.8. Many total densities of the binary mixtures were investigated. The pair correlation functions and the equations of state were determined and compared with other simulation results and theoretical predictions. At lower diameter ratios the pair correlation functions of the mixture agree with the pair correlation function of a one component fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied.
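The core of such a simulation is only a hard-core overlap test and a trial displacement; a minimal single-threaded sketch (ours, not the paper's multithreaded code):

```python
# One Metropolis trial move for binary hard hyperspheres in 4D with
# periodic boundaries: propose a small displacement and accept it only
# if no overlap is created (hard-core moves are otherwise free).
import random

def overlap(p, q, sigma, box):
    """Minimum-image centre distance below the contact diameter sigma
    (additive mixture: sigma = (d_i + d_j) / 2)."""
    d2 = 0.0
    for a, b in zip(p, q):
        dx = abs(a - b) % box
        dx = min(dx, box - dx)
        d2 += dx * dx
    return d2 < sigma * sigma

def trial_move(coords, diams, i, delta, box):
    new = [(c + random.uniform(-delta, delta)) % box for c in coords[i]]
    for j, q in enumerate(coords):
        if j != i and overlap(new, q, 0.5 * (diams[i] + diams[j]), box):
            return False          # reject: hard-core violation
    coords[i] = new
    return True
```

Pair correlation functions and the equation of state are then accumulated as averages over many such accepted configurations.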
NASA Astrophysics Data System (ADS)
Price, D. J.; Laibe, G.
2015-10-01
Dust-gas mixtures are the simplest example of a two-fluid mixture. We show that when simulating such mixtures with particles, or with particles coupled to grids, a problem arises from the need to resolve a very small length scale when the coupling is strong. Since this occurs in the limit when the fluids are well coupled, we show how the dust-gas equations can be reformulated to describe a single-fluid mixture. The equations are similar to the usual fluid equations supplemented by a diffusion equation for the dust-to-gas ratio or, alternatively, the dust fraction. This solves a number of numerical problems as well as making the physics clear.
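A sketch of why the diffusion formulation is numerically convenient: an explicit, conservatively differenced diffusion step for the dust fraction preserves total dust mass to round-off. The constant diffusion coefficient below is a schematic stand-in for the paper's coupling-dependent coefficient:

```python
def diffuse_dust_fraction(eps, D, dx, dt):
    """One explicit finite-volume step of d(eps)/dt = d/dx(D d(eps)/dx)
    on a periodic 1-D grid. Face fluxes are differenced conservatively,
    so sum(eps) is preserved to round-off. Stable for D*dt/dx**2 <= 0.5."""
    n = len(eps)
    # flux[i] sits on the face between cell i-1 and cell i
    flux = [D * (eps[i] - eps[i - 1]) / dx for i in range(n)]
    return [eps[i] + dt / dx * (flux[(i + 1) % n] - flux[i])
            for i in range(n)]

# a step profile in dust fraction relaxes toward uniform
eps = [0.1] * 8 + [0.5] * 8
for _ in range(400):
    eps = diffuse_dust_fraction(eps, D=0.1, dx=1.0, dt=1.0)
```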
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.
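As a much-simplified sketch of the idea behind growth mixture modeling (the full model estimates class-specific growth factors with random effects), the example below clusters hypothetical per-person growth slopes into two latent classes with a Gaussian mixture fit by EM:

```python
import math
import random

def em_gmm_1d(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture.
    Returns (weights, means, standard deviations)."""
    mu = [min(x), max(x)]  # crude but adequate initialisation
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior class responsibilities for each observation
        resp = []
        for xi in x:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((xi - mu[k]) / sd[k]) ** 2)
                 for k in (0, 1)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: responsibility-weighted parameter updates
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            # guard against a degenerate zero-variance component
            sd[k] = math.sqrt(sum(r[k] * (xi - mu[k]) ** 2
                                  for r, xi in zip(resp, x)) / nk) or 1e-6
    return w, mu, sd

# hypothetical per-person slopes: a "stable" class near 0, a "growth" class near 2
rng = random.Random(0)
slopes = ([rng.gauss(0.0, 0.3) for _ in range(100)]
          + [rng.gauss(2.0, 0.3) for _ in range(100)])
w, mu, sd = em_gmm_1d(slopes)
```

The class means play the role of class-specific growth trajectories; real growth mixture models estimate these jointly with within-class variability rather than in two stages.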
Numerical simulation of asphalt mixtures fracture using continuum models
NASA Astrophysics Data System (ADS)
Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz
2018-01-01
The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures the random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.
Diky, Vladimir; Chirico, Robert D; Muzny, Chris D; Kazakov, Andrei F; Kroenlein, Kenneth; Magee, Joseph W; Abdulagatov, Ilmutdin; Frenkel, Michael
2013-12-23
ThermoData Engine (TDE) is the first full-scale software implementation of the dynamic data evaluation concept, as reported in this journal. The present article describes the background and implementation for new additions in the latest release of TDE. Advances are in the areas of program architecture and quality improvement for automatic property evaluations, particularly for pure compounds. It is shown that selection of an appropriate program architecture supports improvement of the quality of the on-demand property evaluations through application of a readily extensible collection of constraints. The basis and implementation for other enhancements to TDE are described briefly. Other enhancements include the following: (1) implementation of model-validity enforcement for specific equations that can provide unphysical results if unconstrained, (2) newly refined group-contribution parameters for estimation of enthalpies of formation for pure compounds containing carbon, hydrogen, and oxygen, (3) implementation of an enhanced group-contribution method (NIST-Modified UNIFAC) in TDE for improved estimation of phase-equilibrium properties for binary mixtures, (4) tools for mutual validation of ideal-gas properties derived through statistical calculations and those derived independently through combination of experimental thermodynamic results, (5) improvements in program reliability and function that stem directly from the recent redesign of the TRC-SOURCE Data Archival System for experimental property values, and (6) implementation of the Peng-Robinson equation of state for binary mixtures, which allows for critical evaluation of mixtures involving supercritical components. Planned future developments are summarized.
Introduction to the special section on mixture modeling in personality assessment.
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.
A Survey of Studies on Ignition and Burn of Inertially Confined Fuels
NASA Astrophysics Data System (ADS)
Atzeni, Stefano
2016-10-01
A survey of studies on ignition and burn of inertial fusion fuels is presented. Potentials and issues of different approaches to ignition (central ignition, fast ignition, volume ignition) are addressed by means of simple models and numerical simulations. Both equimolar DT and T-lean mixtures are considered. Crucial issues concerning hot spot formation (implosion symmetry for central ignition; igniting pulse parameters for fast ignition) are briefly discussed. Recent results concerning the scaling of the ignition energy with the implosion velocity and constrained gain curves are also summarized.
Formation mechanism of complex pattern on fishes' skin
NASA Astrophysics Data System (ADS)
Li, Xia; Liu, Shuhua
2009-10-01
In this paper, the formation mechanism of the complex patterns observed on the skin of fishes has been investigated with a coupled two-layer reaction-diffusion model. The coupling strength between the two layers plays an important role in the pattern-forming process. It is found that only the epidermis layer can produce complicated patterns that have structures on more than one length scale. These complicated patterns, including super-stripe patterns, mixtures of spots and stripes, and white-eye patterns, are similar to the pigmentation patterns on fishes' skin.
D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.
Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W
2005-12-01
Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
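D-optimality can be made concrete with a toy grid search: evaluate the determinant of the Fisher information, at prior parameter guesses, for every candidate design and keep the best. The two-parameter logistic below is a stand-in for the authors' nonlinear threshold model, and all numbers are illustrative:

```python
import math
from itertools import combinations

def fisher_info(doses, n_per_dose, a, b):
    """2x2 Fisher information for a two-parameter logistic dose-response
    p(d) = 1 / (1 + exp(-(a + b*log(d)))) with binomial observations."""
    I = [[0.0, 0.0], [0.0, 0.0]]
    for d in doses:
        eta = a + b * math.log(d)
        p = 1.0 / (1.0 + math.exp(-eta))
        wgt = n_per_dose * p * (1.0 - p)  # binomial variance weight
        g = [1.0, math.log(d)]            # gradient of eta wrt (a, b)
        for r in range(2):
            for c in range(2):
                I[r][c] += wgt * g[r] * g[c]
    return I

def det2(I):
    return I[0][0] * I[1][1] - I[0][1] * I[1][0]

# grid-search all two-dose designs from a candidate set, at hypothetical
# prior parameter guesses (a prior specification is required, as noted above)
a, b = 0.0, 1.5
candidates = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 10.0]
best = max(combinations(candidates, 2),
           key=lambda ds: det2(fisher_info(ds, 10, a, b)))
```

Minimizing the generalized variance of the parameter estimates (maximizing this determinant) is what yields the increased power to detect departures from additivity.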
NASA Astrophysics Data System (ADS)
Konishi, C.
2014-12-01
A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). The well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and the rest of the volume as that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle; i.e., only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of considering a gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as an average of the two cases, weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. 
Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The results predicted by this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy from the River Environment Fund of Japan.
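The final permeability step uses the Kozeny-Carman equation; a sketch in one common form (the abstract does not state the exact constants used), with illustrative porosity and grain-size values:

```python
def kozeny_carman(phi_e, d_e):
    """Kozeny-Carman permeability (m^2) from effective porosity phi_e
    (fraction) and effective grain diameter d_e (m), in the common
    k = d^2 * phi^3 / (180 * (1 - phi)^2) form."""
    return d_e ** 2 * phi_e ** 3 / (180.0 * (1.0 - phi_e) ** 2)

# illustrative values: a clean sand versus the same sand whose effective
# porosity is reduced by clay (clay-associated pore space counted as
# non-effective, as in the model above)
k_sand = kozeny_carman(0.35, 250e-6)    # 250-micron effective grain size
k_clayey = kozeny_carman(0.20, 250e-6)  # clay has lowered effective porosity
```

The strong sensitivity of k to both inputs is why a few percent of gravel, which raises the effective grain size, can change permeability significantly.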
A numerical study of granular dam-break flow
NASA Astrophysics Data System (ADS)
Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.
2017-12-01
Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows and tailings-dam break flows. So far, most successful models for these types of flows focus on either pure granular flows or flows of saturated grain-fluid mixtures by employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when they are applied to simulate flows of partially saturated mixtures. Therefore, more advanced models are needed. A numerical model was developed for granular flow employing a constant friction and μ(I) rheology (Jop et al., J. Fluid Mech. 2005) coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model using the Finite Volume Method (FVM). The Volume-of-Fluid (VOF) technique is used to capture the free surface motion. The constant friction and μ(I) rheological models are incorporated in the mixture model. The seepage flow is modeled by solving the Richards equation. A framework is developed to couple these two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters is performed. The simulation results obtained from the constant friction and μ(I) rheological models are compared with laboratory experiments for the granular free surface interface, front position and velocity field during the flows. The numerical predictions indicate that the proposed model is promising in predicting dynamics of the flow and deposition process. The proposed model may provide more reliable insight than the previously assumed saturated mixture model when saturated and partially saturated portions of a granular mixture co-exist.
Freezable Radiator Coupon Testing and Full Scale Radiator Design
NASA Technical Reports Server (NTRS)
Lillibridge, Sean T.; Guinn, John; Cognata, Thomas; Navarro, Moses
2009-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and of different thermal loads during different mission phases. However, freezing and thawing (recovering) a radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. This paper summarizes tests on three test articles that were performed to further empirically quantify the behavior of a simple freezable radiator, and the culmination of those tests into a full-scale design. Each test article explored the bounds of freezing and recovery behavior, as well as providing thermo-physical data for the working fluid, a 50-50 mixture of DowFrost HD and water. These results were then used to develop a correlated thermal model in Thermal Desktop, which could be used to model the behavior of a full-scale thermal control system for a lunar mission. The final design of a thermal control system for a lunar mission is also documented in this paper.
Mixture theory-based poroelasticity as a model of interstitial tissue growth
Cowin, Stephen C.; Cardoso, Luis
2011-01-01
This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481
Experimental and Numerical Study of Ammonium Perchlorate Counterflow Diffusion Flames
NASA Technical Reports Server (NTRS)
Smooke, M. D.; Yetter, R. A.; Parr, T. P.; Hanson-Parr, D. M.; Tanoff, M. A.
1999-01-01
Many solid rocket propellants are based on a composite mixture of ammonium perchlorate (AP) oxidizer and polymeric binder fuels. In these propellants, complex three-dimensional diffusion flame structures between the AP and binder decomposition products, dependent upon the length scales of the heterogeneous mixture, drive the combustion via heat transfer back to the surface. Changing the AP crystal size changes the burn rate of such propellants. Large AP crystals are governed by the cooler AP self-deflagration flame and burn slowly, while small AP crystals are governed more by the hot diffusion flame with the binder and burn faster. This allows control of composite propellant ballistic properties via particle size variation. Previous measurements on these diffusion flames in the planar two-dimensional sandwich configuration yielded insight into controlling flame structure, but there are several drawbacks that make comparison with modeling difficult. First, the flames are two-dimensional and this makes modeling much more complex computationally than with one-dimensional problems, such as RDX self- and laser-supported deflagration. In addition, little is known about the nature, concentration, and evolution rates of the gaseous chemical species produced by the various binders as they decompose. This makes comparison with models quite difficult. Alternatively, counterflow flames provide an excellent geometric configuration within which AP/binder diffusion flames can be studied both experimentally and computationally.
NASA Astrophysics Data System (ADS)
Benyamine, Mebirika; Aussillous, Pascale; Dalloz-Dubrujeaud, Blanche
2017-06-01
Silos are widely used in industry. While empirical predictions of the flow rate, based on scaling laws, have existed for more than a century (Hagen 1852, translated in [1]; Beverloo et al. [2]), recent advances have been made in understanding the control parameters of the flow. In particular, continuous modeling together with a μ(I) granular rheology seems successful in predicting the flow rate for large numbers of beads at the aperture (Staron et al. [3], [4]). Moreover, Janda et al. [5] have shown that the packing fraction at the outlet plays an important role when the number of beads at the aperture decreases. Based on these considerations, we have studied experimentally the discharge flow of a granular medium from a rectangular silo. We have varied two main parameters: the angle of the hopper, and the bulk packing fraction of the granular material, by using bidisperse mixtures. We propose a simple physical model to describe the effect of these parameters, considering a continuous granular medium with a dilatancy law at the outlet. This model predicts well the dependence of the flow rate on the hopper angle, as well as the dependence of the flow rate on the fine mass fraction of a bidisperse mixture.
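The century-old scaling law referred to is the Beverloo correlation; a sketch of its common three-dimensional form, with typical fitted constants (C ≈ 0.58, k ≈ 1.5) that are not from this work:

```python
import math

def beverloo_rate(rho_b, D, d, C=0.58, k=1.5, g=9.81):
    """Beverloo mass flow rate (kg/s) through a circular orifice:
    Q = C * rho_b * sqrt(g) * (D - k*d)**2.5, with bulk density rho_b
    (kg/m^3), orifice diameter D (m), and grain diameter d (m).
    The k*d term is the empirical "empty annulus" correction."""
    if D <= k * d:
        return 0.0  # aperture too small: flow jams
    return C * rho_b * math.sqrt(g) * (D - k * d) ** 2.5

# the rate grows with aperture and, for a fixed aperture, shrinks with grain size
q_small_grains = beverloo_rate(1500.0, 0.02, 0.5e-3)
q_large_grains = beverloo_rate(1500.0, 0.02, 2.0e-3)
```

Through the bulk density rho_b, the correlation already hints at the packing-fraction dependence that the bidisperse experiments above probe directly.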
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifiers (UMIs). Despite these technological advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. In particular, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that, overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. Contact: wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author 2017. 
Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Geraldi, Nicasio R; Dodd, Linzi E; Xu, Ben B; Wood, David; Wells, Gary G; McHale, Glen; Newton, Michael I
2018-02-02
Much of the inspiration for the creation of superhydrophobic surfaces has come from nature, from plants such as the sacred lotus (Nelumbo nucifera), where the micro-scale papillae epidermal cells on the surfaces of the leaves are covered with nano-scale epicuticular wax crystalloids. The combination of the surface roughness and the hydrophobic wax coating produces a superhydrophobic wetting state on the leaves, allowing them to self-clean and easily shed water. Here, a simple scaled-up carbon nanoparticle spray coating is presented that mimics the surface of sacred lotus leaves and can be applied to a wide variety of materials, complex structures, and flexible substrates, rendering them superhydrophobic, with contact angles above 160°. The sprayable mixture is produced by combining toluene, polydimethylsiloxane, and inherently hydrophobic rapeseed soot. The ability to spray the superhydrophobic coating allows for the hydrophobisation of complex structures such as metallic meshes, which allows for the production of flexible porous superhydrophobic materials that, when formed into U-shaped channels, can be used to direct flows. The porous meshes, whilst being superhydrophobic, are also oleophilic. Being both superhydrophobic and oleophilic allows oil to pass through the mesh, whilst water remains on the surface. The meshes were tested for their ability to separate mixtures of oil and water in flow conditions. When silicone oil/water mixtures were passed over the meshes, all meshes tested were capable of separating more than 93% of the oil from the mixture.
A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.
Chen, D G; Pounds, J G
1998-12-01
The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as a new experimental data from our laboratory for mixtures of mercury and cadmium.
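The transform-both-sides device applies the same Box-Cox transformation to the observed and model-predicted responses before forming residuals, which is what improves the homogeneity and normality of the residuals. A sketch of the transformation and the transformed-scale residual (the bracketed model formula is not reproduced in this record, so the isobologram equation itself is omitted here):

```python
import math

def box_cox(y, lam):
    """Box-Cox power transformation for y > 0:
    (y**lam - 1)/lam, with the log limit at lam = 0."""
    if y <= 0:
        raise ValueError("Box-Cox requires positive y")
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def tbs_residual(y_obs, y_pred, lam):
    """'Transform both sides': the residual is computed after applying
    the same transformation to observation and model prediction."""
    return box_cox(y_obs, lam) - box_cox(y_pred, lam)
```

In practice lam is estimated jointly with the dose-response parameters, e.g. by profiling the likelihood over a grid of lam values.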
Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun
2015-01-01
In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990
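The 16 fractional factorial, 8 axial, and 4 center runs described constitute a central composite design. A sketch that generates such a design; the half-fraction generator (last factor aliased to the product of the others) and the choice of which four factors receive axial points are assumptions, since the abstract does not state them:

```python
from itertools import product

def central_composite(n_factors, axial_factors, alpha=2.0, n_center=4):
    """Runs of a half-fraction central composite design: 2**(n-1) cube
    points at +/-1 (last factor aliased to the product of the others),
    +/-alpha axial points for the listed factors, and center points."""
    runs = []
    for lv in product((-1.0, 1.0), repeat=n_factors - 1):
        last = 1.0
        for v in lv:
            last *= v  # defining relation: last factor = product of the rest
        runs.append(list(lv) + [last])
    for k in axial_factors:
        for s in (-alpha, alpha):
            pt = [0.0] * n_factors
            pt[k] = s
            runs.append(pt)
    runs += [[0.0] * n_factors for _ in range(n_center)]
    return runs

# 16 cube runs, 8 axial runs (axial points for four of the five factors,
# as the stated run count suggests; which four is an assumption here),
# and 4 center runs: 28 mixtures in total
design = central_composite(5, axial_factors=[1, 2, 3, 4])
```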
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prinja, A. K.
The Karhunen-Loeve stochastic spectral expansion of a random binary mixture of immiscible fluids in planar geometry is used to explore asymptotic limits of radiation transport in such mixtures. Under appropriate scalings of mixing parameters - correlation length, volume fraction, and material cross sections - and employing multiple-scale expansion of the angular flux, previously established atomic mix and diffusion limits are reproduced. When applied to highly contrasting material properties in the small correlation length limit, the methodology yields a nonstandard reflective medium transport equation that merits further investigation. Finally, a hybrid closure is proposed that produces both small and large correlation length limits of the closure condition for the material averaged equations.
NGMIX: Gaussian mixture models for 2D images
NASA Astrophysics Data System (ADS)
Sheldon, Erin
2015-08-01
NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
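The core operation, evaluating a sum of 2D Gaussians on a pixel grid, can be sketched as below. This generic illustration uses circular Gaussians only; NGMIX's actual profiles carry shape parameters and are convolved analytically with the PSF mixture:

```python
import math

def render_gmix(gaussians, nx, ny):
    """Render a sum of circular 2-D Gaussians on an nx-by-ny pixel grid.
    Each Gaussian is (flux, x0, y0, sigma); the pixel value is the sum of
    flux-weighted, normalised Gaussians evaluated at the pixel centre."""
    img = [[0.0] * nx for _ in range(ny)]
    for flux, x0, y0, sigma in gaussians:
        norm = flux / (2.0 * math.pi * sigma ** 2)
        for y in range(ny):
            for x in range(nx):
                r2 = (x - x0) ** 2 + (y - y0) ** 2
                img[y][x] += norm * math.exp(-0.5 * r2 / sigma ** 2)
    return img

# e.g. a centrally concentrated profile approximated by two concentric
# Gaussians of different widths (illustrative fluxes and sizes)
img = render_gmix([(1.0, 16.0, 16.0, 1.5), (0.5, 16.0, 16.0, 4.0)], 32, 32)
total = sum(sum(row) for row in img)
```

Because a Gaussian convolved with a Gaussian is again a Gaussian, a PSF modeled this way can be folded in analytically, which is the source of the speed advantage noted above.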
Simultaneous head tissue conductivity and EEG source location estimation.
Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott
2016-01-01
Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.
Liaw, Horng-Jang; Wang, Tzu-Ai
2007-03-06
Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Liquids with dissolved salts are used in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm marked increases in liquid flash point increment with addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard for solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
Horton, Leslie E; Tarbox, Sarah I; Olino, Thomas M; Haas, Gretchen L
2015-06-30
Evidence of social and behavioral problems preceding the onset of schizophrenia-spectrum psychoses is consistent with a neurodevelopmental model of these disorders. Here we predict that individuals with a first episode of schizophrenia-spectrum psychoses will evidence one of three patterns of premorbid adjustment: an early deficit, a deteriorating pattern, or adequate or good social adjustment. Participants were 164 (38% female; 31% black) individuals ages 15-50 with a first episode of schizophrenia-spectrum psychoses. Premorbid adjustment was assessed using the Cannon-Spoor Premorbid Adjustment Scale. We compared the fit of a series of growth mixture models to examine premorbid adjustment trajectories, and found that the following three-class model provided the best fit: a "stable-poor" adjustment class (54%), a "stable-good" adjustment class (39%), and a "deteriorating" adjustment class (7%). Relative to the "stable-good" class, the "stable-poor" class experienced worse negative symptoms at 1-year follow-up, particularly in the social amotivation domain. This represents the first known growth mixture modeling study to examine premorbid functioning patterns in first-episode schizophrenia-spectrum psychoses. Given that the stable-poor adjustment pattern was most prevalent, detection of social and academic maladjustment as early as childhood may help identify people at increased risk for schizophrenia-spectrum psychoses, potentially increasing feasibility of early interventions. Published by Elsevier Ireland Ltd.
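Growth mixture models assign individuals to latent trajectory classes; the core machinery is expectation-maximization over a finite mixture. The following is a bare-bones 1-D two-component Gaussian mixture EM, purely to illustrate the latent-class idea (this is not the authors' model, which additionally fits growth curves within each class):

```python
import math
import random

# Minimal EM for a two-component 1-D Gaussian mixture: alternate
# soft assignment of points to classes (E-step) with re-estimation
# of class weight, means, and standard deviations (M-step).

def em_two_gaussians(xs, iters=50):
    m1, m2 = min(xs), max(xs)        # crude initialisation from the data range
    s1 = s2 = (m2 - m1) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        # (the 1/sqrt(2*pi) constant cancels in the ratio)
        r = []
        for x in xs:
            p1 = w * math.exp(-0.5 * ((x - m1) / s1) ** 2) / s1
            p2 = (1 - w) * math.exp(-0.5 * ((x - m2) / s2) ** 2) / s2
            r.append(p1 / (p1 + p2))
        # M-step
        n1 = sum(r)
        n2 = len(xs) - n1
        w = n1 / len(xs)
        m1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        s1 = max(1e-6, math.sqrt(sum(ri * (x - m1) ** 2
                                     for ri, x in zip(r, xs)) / n1))
        s2 = max(1e-6, math.sqrt(sum((1 - ri) * (x - m2) ** 2
                                     for ri, x in zip(r, xs)) / n2))
    return w, (m1, s1), (m2, s2)

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(6, 1) for _ in range(100)]
w, c1, c2 = em_two_gaussians(data)
# recovers class means near 0 and 6 with a roughly 2:1 weight split
```

Class enumeration in practice is then handled exactly as in the study: fit models with 1, 2, 3, ... classes and compare fit indices.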
Dondi, Daniele; Merli, Daniele; Albini, Angelo; Zeffiro, Alberto; Serpone, Nick
2012-05-01
When a chemical system is submitted to high energy sources (UV, ionizing radiation, plasma sparks, etc.), as is expected to be the case of prebiotic chemistry studies, a plethora of reactive intermediates could form. If oxygen is present in excess, carbon dioxide and water are the major products. More interesting is the case of reducing conditions where synthetic pathways are also possible. This article examines the theoretical modeling of such systems with random-generated chemical networks. Four types of random-generated chemical networks were considered that originated from a combination of two connection topologies (viz., Poisson and scale-free) with reversible and irreversible chemical reactions. The results were analyzed taking into account the number of the most abundant products required for reaching 50% of the total number of moles of compounds at equilibrium, as this may be related to an actual problem of complex mixture analysis. The model accounts for multi-component reaction systems with no a priori knowledge of reacting species and the intermediates involved if system components are sufficiently interconnected. The approach taken is relevant to an earlier study on reactions that may have occurred in prebiotic systems where only a few compounds were detected. A validation of the model was attained on the basis of results of UVC and radiolytic reactions of prebiotic mixtures of low molecular weight compounds likely present on the primeval Earth.
Coupled nonequilibrium flow, energy and radiation transport for hypersonic planetary entry
NASA Astrophysics Data System (ADS)
Frederick, Donald Jerome
An ever increasing demand for energy coupled with a need to mitigate climate change necessitates technology (and lifestyle) changes globally. An aspect of the needed change is a decrease in the amount of anthropogenically generated CO2 emitted to the atmosphere. The decrease needed cannot be expected to be achieved through only one source of change or technology, but rather a portfolio of solutions is needed. One possible technology is Carbon Capture and Storage (CCS), which is likely to play some role due to its combination of mature and promising emerging technologies, such as the burning in gas turbines of hydrogen produced by pre-combustion CCS separation processes. Thus research on effective methods of burning turbulent hydrogen jet flames (mimicking gas turbine environments) is needed, both in terms of experimental investigation and model development. The challenge in burning (and modeling the burning of) hydrogen lies in its wide range of flammable conditions, its high diffusivity (often requiring a diluent such as nitrogen to produce a lifted turbulent jet flame), and its behavior under a wide range of pressures. In this work, numerical models are used to simulate the environment of a gas turbine combustion chamber. Concurrent experimental investigations are separately conducted using a vitiated coflow burner (which mimics the gas turbine environment) to guide the numerical work in this dissertation. A variety of models are used to simulate, and occasionally guide, the experiment. On the fundamental side, mixing and chemistry interactions motivated by a H2/N2 jet flame in a vitiated coflow are investigated using a 1-D numerical model for laminar flows and the Linear Eddy Model for turbulent flows. A radial profile of the jet in coflow can be modeled as fuel and oxidizer separated by an initial mixing width. 
The effects of species diffusion model, pressure, coflow composition, and turbulent mixing on the predicted autoignition delay times and mixture composition at ignition are considered. We find that in laminar simulations the differential diffusion model allows the mixture to autoignite sooner and at a fuel-richer mixture than the equal diffusion model. The effect of turbulence on autoignition is classified in two regimes, which are dependent on a reference laminar autoignition delay and turbulence time scale. For a turbulence timescale larger than the reference laminar autoignition time, turbulence has little influence on autoignition or the mixture at ignition. However, for a turbulence timescale smaller than the reference laminar timescale, the influence of turbulence on autoignition depends on the diffusion model. Differential diffusion simulations show an increase in autoignition delay time and a subsequent change in mixture composition at ignition with increasing turbulence. Equal diffusion simulations suggest the effect of increasing turbulence on autoignition delay time and the mixture fraction at ignition is minimal. More practically, the stabilizing mechanism of a lifted jet flame is thought to be controlled by either autoignition, flame propagation, or a combination of the two. Experimental data for a turbulent nitrogen-diluted hydrogen jet flame in a vitiated coflow at atmospheric pressure demonstrate distinct stability regimes where the jet flame is either attached, lifted, lifted-unsteady, or blown out. A 1-D parabolic RANS model is used, where turbulence-chemistry interactions are modeled with the joint scalar-PDF approach, and mixing is modeled with the Linear Eddy Model. The model only accounts for autoignition as a flame stabilization mechanism. 
However, by comparing the local turbulent flame speed to the local turbulent mean velocity, maps of regions where the flame speed is greater than the flow speed are created, which allow an estimate of lift-off heights based on flame propagation. Model results for the attached, lifted, and lifted-unsteady regimes show that the correct trend is captured. Additionally, at lower coflow equivalence ratios flame propagation appears dominant, while at higher coflow equivalence ratios autoignition appears dominant.
Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method
NASA Astrophysics Data System (ADS)
Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.
2018-03-01
The failure point of the results of fatigue tests of asphalt mixtures performed in controlled stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number Method and to determine the best Flow Number model for the asphalt mixtures tested. In order to achieve these goals, firstly the best asphalt mixture of three was selected based on their Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture is EVA (Ethyl Vinyl Acetate) polymer-modified asphalt mixture with 6.5% OBC (Optimum Bitumen Content). Furthermore, the results of this study show that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model, and 6901, 6841, and 1291 for the Stepwise Method, respectively. These different results show that the bigger the loading, the smaller the number of cycles to failure. However, the best FN results are shown by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.
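One common operational definition behind flow-number methods takes FN as the load cycle at which the rate of accumulated permanent strain reaches its minimum, i.e. the transition from the steady secondary stage to accelerating tertiary flow. A hedged sketch on a synthetic three-stage strain curve (illustrative only; the study's named models fit specific functional forms rather than raw differences):

```python
import math

# Flow number as the cycle of minimum strain rate: compute first
# differences of the accumulated-strain series and locate the minimum.

def flow_number(strain):
    """1-based cycle index of the minimum first difference of strain."""
    rates = [b - a for a, b in zip(strain, strain[1:])]
    return rates.index(min(rates)) + 1

# synthetic curve: decelerating primary stage + near-linear secondary
# stage + exponentially accelerating tertiary stage
cycles = range(1, 301)
strain = [50.0 * n ** 0.3 + 0.5 * n + math.exp((n - 250) / 15.0)
          for n in cycles]
fn = flow_number(strain)
# fn falls where primary deceleration is balanced by tertiary growth
```

Real test data are noisy, which is why the study's parametric approaches (fitting a three-stage or Francken-type model and differentiating the fit) are preferred over raw differencing.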
Model Selection Methods for Mixture Dichotomous IRT Models
ERIC Educational Resources Information Center
Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo
2009-01-01
This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information coefficient (AIC), Bayesian information coefficient (BIC), deviance information coefficient (DIC), pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
Considering the cumulative risk of mixtures of chemicals – A challenge for policy makers
2012-01-01
Background The current paradigm for the assessment of the health risk of chemical substances focuses primarily on the effects of individual substances for determining the doses of toxicological concern in order to inform appropriately the regulatory process. These policy instruments place varying requirements on health and safety data of chemicals in the environment. REACH focuses on safety of individual substances; yet all the other facets of public health policy that relate to chemical stressors put emphasis on the effects of combined exposure to mixtures of chemical and physical agents. This emphasis brings about methodological problems linked to the complexity of the respective exposure pathways; the effect (more complex than simple additivity) of mixtures (the so-called 'cocktail effect'); dose extrapolation, i.e. the extrapolation of the validity of dose-response data to dose ranges that extend beyond the levels used for the derivation of the original dose-response relationship; the integrated use of toxicity data across species (including human clinical, epidemiological and biomonitoring data); and variation in inter-individual susceptibility associated with both genetic and environmental factors. Methods In this paper we give an overview of the main methodologies available today to estimate the human health risk of environmental chemical mixtures, ranging from dose addition to independent action, and from ignoring interactions among the mixture constituents to modelling their biological fate taking into account the biochemical interactions affecting both internal exposure and the toxic potency of the mixture. Results We discuss their applicability, possible options available to policy makers and the difficulties and potential pitfalls in implementing these methodologies in the frame of the currently existing policy framework in the European Union. 
Finally, we suggest a pragmatic solution for policy/regulatory action that would facilitate the evaluation of the health effects of chemical mixtures in the environment and consumer products. Conclusions One universally applicable methodology does not yet exist. Therefore, a pragmatic, tiered approach to regulatory risk assessment of chemical mixtures is suggested, encompassing (a) the use of dose addition to calculate a hazard index that takes into account interactions among mixture components; and (b) the use of the connectivity approach in data-rich situations to integrate mechanistic knowledge at different scales of biological organization. PMID:22759500
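The dose-addition hazard index in step (a) of the suggested tiered approach is conventionally computed as HI = sum over components of exposure_i / reference_dose_i, with HI > 1 flagging the mixture for further, more mechanistic evaluation. A sketch with illustrative numbers (the substances and doses are hypothetical, not from the paper):

```python
# Screening-level hazard index under dose addition.

def hazard_index(exposures, reference_doses):
    """HI = sum of exposure/RfD ratios; > 1 flags the mixture."""
    return sum(e / rfd for e, rfd in zip(exposures, reference_doses))

# hypothetical three-chemical mixture, doses in mg/(kg*day)
exposure = [0.002, 0.010, 0.0005]
rfd = [0.01, 0.05, 0.001]
hi = hazard_index(exposure, rfd)
# 0.2 + 0.2 + 0.5 = 0.9, just below the screening threshold of 1
```

The paper's refinement is to adjust such ratios for known interactions among components before summing, moving beyond the simple additivity shown here.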
Quality improvement of melt extruded laminar systems using mixture design.
Hasa, D; Perissutti, B; Campisi, B; Grassi, M; Grabnar, I; Golob, S; Mian, M; Voinovich, D
2015-07-30
This study investigates the application of melt extrusion for the development of an oral retard formulation with a precise drug release over time. Since adjusting the formulation appears to be of the utmost importance in achieving the desired drug release patterns, different formulations of laminar extrudates were prepared according to the principles of Experimental Design, using a design for mixtures to assess the influence of formulation composition on the in vitro drug release from the extrudates after 1h and after 8h. The effect of each component on the two response variables was also studied. Ternary mixtures of theophylline (model drug), monohydrate lactose and microcrystalline wax (as thermoplastic binder) were extruded in a lab scale vertical ram extruder in the absence of solvents at a temperature below the melting point of the binder (so that the crystalline state of the drug could be maintained), through a rectangular die to obtain suitable laminar systems. Thanks to the desirability approach and a reliability study for ensuring the quality of the formulation, a very restricted optimal zone was defined within the experimental domain. Among the mixture components, the variation of microcrystalline wax content played the most significant role in overall influence on the in vitro drug release. The formulation theophylline:lactose:wax, 57:14:29 (by weight), selected based on the desirability zone, was subsequently used for in vivo studies. The plasma profile, obtained after oral administration of the laminar extruded system in hard gelatine capsules, revealed the typical trend of an oral retard formulation. The application of the mixture experimental design combined with a desirability function made it possible to optimize the extruded system and to determine the composition space that ensures final product quality. Copyright © 2015 Elsevier B.V. All rights reserved.
Campetella, Marco; Mariani, Alessandro; Sadun, Claudia; Wu, Boning; Castner, Edward W; Gontrani, Lorenzo
2018-04-07
In this article, we report the study of structural and dynamical properties for a series of acetonitrile/propylammonium nitrate mixtures as a function of their composition. These systems display an unusual increase in intensity in their X-ray diffraction patterns in the low-q regime, and their ¹H-NMR diffusion-ordered NMR spectroscopy (DOSY) spectra display unusual diffusivities. However, the magnitude of both phenomena for mixtures of propylammonium nitrate is smaller than those observed for ethylammonium nitrate mixtures with the same cosolvent, suggesting that the cation alkyl tail plays an important role in these observations. The experimental X-ray scattering data are compared with the results of molecular dynamics simulations, including both ab initio studies used to interpret short-range interactions and classical simulations to describe longer range interactions. The higher level calculations highlight the presence of a strong hydrogen bond network within the ionic liquid, only slightly perturbed even at high acetonitrile concentration. These strong interactions lead to the symmetry breaking of the NO₃⁻ vibrations, with a splitting of about 88 cm⁻¹ in the ν₃ antisymmetric stretch. The classical force field simulations use a greater number of ion pairs, but are not capable of fully describing the longest range interactions, although they do successfully account for the observed concentration trend, and the analysis of the models confirms the nano-inhomogeneity of these kinds of samples.
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
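The beta-binomial mixture at the center of this paper arises when an individual's capture count over n occasions is Binomial(n, p) with p drawn from a Beta(a, b) distribution of individual detection rates, giving P(K = k) = C(n, k) B(k + a, n - k + b) / B(a, b). A sketch of the pmf, computed through log-gamma for numerical stability (this illustrates the distribution, not the authors' estimation code):

```python
import math

# Beta-binomial pmf: binomial counts with a Beta-distributed
# individual detection probability, marginalized analytically.

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(k, n, a, b):
    log_choose = (math.lgamma(n + 1) - math.lgamma(k + 1)
                  - math.lgamma(n - k + 1))
    return math.exp(log_choose
                    + log_beta(k + a, n - k + b) - log_beta(a, b))

# sanity checks: the pmf sums to 1, and a = b = 1 (uniform detection
# rates) makes every count 0..n equally likely
total = sum(beta_binomial_pmf(k, 10, 2.0, 5.0) for k in range(11))
uniform_case = beta_binomial_pmf(3, 10, 1.0, 1.0)   # = 1/11
```

Larger variance in the Beta prior spreads probability toward extreme counts, which is exactly the individual heterogeneity in detection that the abstract warns can bias population-size estimates.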
Concepts and methods for describing critical phenomena in fluids
NASA Technical Reports Server (NTRS)
Sengers, J. V.; Sengers, J. M. H. L.
1977-01-01
The predictions of theoretical models for a critical-point phase transition in fluids, namely the classical equation with third-degree critical isotherm, that with fifth-degree critical isotherm, and the lattice gas, are reviewed. The renormalization group theory of critical phenomena and the hypothesis of universality of critical behavior supported by this theory are discussed as well as the nature of gravity effects and how they affect critical-region experimentation in fluids. The behavior of the thermodynamic properties and the correlation function is formulated in terms of scaling laws. The predictions of these scaling laws and of the hypothesis of universality of critical behavior are compared with experimental data for one-component fluids and it is indicated how the methods can be extended to describe critical phenomena in fluid mixtures.
Solvent effects on the polar network of ionic liquid solutions
NASA Astrophysics Data System (ADS)
Bernardes, Carlos E. S.; Shimizu, Karina; Canongia Lopes, José N.
2015-05-01
Molecular dynamics simulations were used to probe mixtures of ionic liquids (ILs) with common molecular solvents. Four types of systems were considered: (i) 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide plus benzene, hexafluorobenzene or 1,2-difluorobenzene mixtures; (ii) choline-based ILs plus ether mixtures; (iii) choline-based ILs plus n-alkanol mixtures; and (iv) 1-butyl-3-methylimidazolium nitrate and 1-ethyl-3-methylimidazolium ethyl sulfate aqueous mixtures. The results produced a wealth of structural and aggregation information that highlight the resilience of the polar network of the ILs (formed by clusters of alternating ions and counter-ions) to the addition of different types of molecular solvent. The analysis of the MD data also shows that the intricate balance between different types of interaction (electrostatic, van der Waals, H-bond-like) between the different species present in the mixtures has a profound effect on the morphology of the mixtures at a mesoscopic scale. In the case of the IL aqueous solutions, the present results suggest an alternative interpretation for very recently published x-ray and neutron diffraction data on similar systems.
NASA Astrophysics Data System (ADS)
Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.
2012-12-01
Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, accurately modeling abundances to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition of mixtures (weight fraction, particle diameter) containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to +/-10% of actual values, but other mixtures showed higher inaccuracies (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found that there was a single global minimum for each mixture (rather than local minima). The minimum was sensitive to modeled particle diameter but comparatively insensitive to modeled endmember weight fraction. Derivation of the endmembers' k optical constant spectra using the Hapke model showed differences with the Shkuratov-derived optical constants originally used. Model runs with different sets of optical constants suggest that slight differences in the optical constants used significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with sieved particle sizes and varied greatly for individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials. 
Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing and further studies will investigate the effect of sample hydration, permitted variability in particle size, assumed photometric functions and use of different wavelength ranges on model results. Such studies will advance understanding of how to best apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu
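The fitting loop described above (search endmember parameters to minimize the RMS error between modeled and measured spectra) can be sketched compactly. For brevity the forward model below is simple linear (areal) mixing of two hypothetical endmember spectra; a full Hapke treatment instead mixes in single-scattering-albedo space and is nonlinear in reflectance:

```python
# Grid-search the endmember weight fraction minimizing RMS spectral
# misfit, using linear mixing as a stand-in forward model.

def rms(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def best_fraction(measured, em1, em2, steps=101):
    best = None
    for i in range(steps):
        f = i / (steps - 1)
        model = [f * x + (1 - f) * y for x, y in zip(em1, em2)]
        err = rms(measured, model)
        if best is None or err < best[1]:
            best = (f, err)
    return best

bright = [0.80, 0.70, 0.75, 0.90]     # hypothetical endmember spectra
dark = [0.20, 0.15, 0.10, 0.25]
measured = [0.6 * b + 0.4 * d for b, d in zip(bright, dark)]
frac, err = best_fraction(measured, bright, dark)
# recovers the true fraction of 0.6 with near-zero RMS error
```

The single-global-minimum behavior the authors report makes exactly this kind of simple search viable; their sensitivity findings concern how that minimum moves when the optical constants or particle diameters change.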
Neale, Peta A; Leusch, Frederic D L; Escher, Beate I
2017-04-01
Pharmaceuticals and antibiotics co-occur in the aquatic environment but mixture studies to date have mainly focused on pharmaceuticals alone or antibiotics alone, although differences in mode of action may lead to different effects in mixtures. In this study we used the Bacterial Luminescence Toxicity Screen (BLT-Screen) after acute (0.5 h) and chronic (16 h) exposure to evaluate how non-specifically acting pharmaceuticals and specifically acting antibiotics act together in mixtures. Three models were applied to predict mixture toxicity including concentration addition, independent action and the two-step prediction (TSP) model, which groups similarly acting chemicals together using concentration addition, followed by independent action to combine the two groups. All non-antibiotic pharmaceuticals had similar EC50 values at both 0.5 and 16 h, indicating together with a QSAR (Quantitative Structure-Activity Relationship) analysis that they act as baseline toxicants. In contrast, the antibiotics' EC50 values decreased by up to three orders of magnitude after 16 h, which can be explained by their specific effect on bacteria. Equipotent mixtures of non-antibiotic pharmaceuticals only, antibiotics only and both non-antibiotic pharmaceuticals and antibiotics were prepared based on the single chemical results. The mixture toxicity models were all in close agreement with the experimental results, with predicted EC50 values within a factor of two of the experimental results. This suggests that concentration addition can be applied to bacterial assays to model the mixture effects of environmental samples containing both specifically and non-specifically acting chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
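The two basic models named in the abstract have standard closed forms: concentration addition predicts 1/EC50_mix = sum(p_i / EC50_i) for mixture fractions p_i, while independent action combines fractional effects as E_mix = 1 - prod(1 - E_i). A sketch with illustrative numbers (not the study's chemicals or data):

```python
# Concentration addition (CA) and independent action (IA) for
# predicting mixture toxicity from single-chemical results.

def ca_ec50(fractions, ec50s):
    """CA: reciprocal of the sum of toxic-unit contributions."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_effect(effects):
    """IA: combined probability of effect for independent mechanisms."""
    prod = 1.0
    for e in effects:
        prod *= (1.0 - e)
    return 1.0 - prod

# hypothetical two-chemical mixture, half the concentration from each
ec50_mix = ca_ec50([0.5, 0.5], [10.0, 40.0])   # 1/(0.05 + 0.0125) = 16.0
combined = ia_effect([0.2, 0.3])               # 1 - 0.8*0.7 = 0.44
```

The study's two-step prediction model chains these: CA within each similarly acting group, then IA across the groups.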
Self-diffusion Coefficient and Structure of Binary n-Alkane Mixtures at the Liquid-Vapor Interfaces.
Chilukoti, Hari Krishna; Kikugawa, Gota; Ohara, Taku
2015-10-15
The self-diffusion coefficient and molecular-scale structure of several binary n-alkane liquid mixtures in the liquid-vapor interface regions have been examined using molecular dynamics simulations. It was observed that in the hexane-tetracosane mixture, hexane molecules accumulate in the liquid-vapor interface region, and the accumulation intensity decreases with increasing molar fraction of hexane over the examined range. Molecular alignment and configuration in the interface region of the liquid mixture change with the molar fraction of hexane. The self-diffusion coefficient of both tetracosane and hexane in their binary mixture, in the direction parallel to the interface, increases in the interface region. The self-diffusion coefficient of both species is considerably higher on the vapor side of the interface region at lower molar fractions of hexane, which is mostly due to the increase in local free volume caused by the local structure of the liquid in the interface region.
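Self-diffusion coefficients in MD studies like this one are typically extracted via the Einstein relation, D = MSD(t) / (2 d t), where d is the number of spatial dimensions considered (d = 2 for diffusion parallel to an interface). A hedged sketch on a lattice random walk rather than real alkane trajectories:

```python
import random

# Einstein-relation estimate of D from mean squared displacement,
# demonstrated on an ensemble of 3-D unit-step random walkers.

random.seed(1)
d = 3                      # use d = 2 or 1 for in-plane or 1-D components
steps = 2000
walkers = [[0.0, 0.0, 0.0] for _ in range(400)]
for _ in range(steps):
    for w in walkers:
        axis = random.randrange(3)          # move along one random axis
        w[axis] += random.choice((-1.0, 1.0))

# MSD averaged over the ensemble (origins are all at zero)
mean_sq = sum(x * x + y * y + z * z for x, y, z in walkers) / len(walkers)
D = mean_sq / (2 * d * steps)
# for unit steps and unit time the expected value is 1/6 ~= 0.167
```

In practice MSD is averaged over time origins as well as molecules, and D is taken from the slope of the linear (diffusive) regime rather than a single time point.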
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grove, John W.
We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.
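The two classical closures named above can be illustrated for the simplest case of ideal-gas components with specific gas constants R_i (this sketches the mass-fraction-average construction only, not the paper's general equation-of-state machinery):

```python
# Pressure/temperature vs volume/temperature (Dalton) equilibrium
# closures for an ideal-gas mixture with mass fractions y_i.

def pt_equilibrium_volume(mass_fracs, R, P, T):
    """P/T equilibrium: every component at the common (P, T); the
    mixture specific volume is the mass-fraction average of
    v_i = R_i * T / P."""
    return sum(y * Ri * T / P for y, Ri in zip(mass_fracs, R))

def vt_equilibrium_pressure(mass_fracs, R, v, T):
    """V/T (Dalton) equilibrium: every component fills the mixture
    volume at T; partial pressures p_i = y_i * R_i * T / v add."""
    return sum(y * Ri * T / v for y, Ri in zip(mass_fracs, R))

y = [0.7, 0.3]
R = [287.0, 461.5]          # roughly air and steam, J/(kg*K)
P, T = 101325.0, 350.0

v = pt_equilibrium_volume(y, R, P, T)
P_back = vt_equilibrium_pressure(y, R, v, T)
# for ideal gases both closures reduce to P*v = R_mix*T, so P_back == P
```

For real (non-ideal) component equations of state the two closures generally disagree, which is where the consistency conditions discussed in the abstract become nontrivial.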
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
Simulation for Supporting Scale-Up of a Fluidized Bed Reactor for Advanced Water Oxidation
Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan
2014-01-01
Simulation of a fluidized bed reactor (FBR) was accomplished for treating wastewater using the Fenton reaction, an advanced oxidation process (AOP). The simulation was performed to determine characteristics of FBR performance, the concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. The simulation was implemented for a 2.8 L working volume using hydrodynamic correlations, the continuity equation, and simplified kinetic information for phenol degradation as a model. The simulation shows that, by using Fe3+ and Fe2+ mixtures as catalyst, TOC degradation of up to 45% was achieved for a contaminant range of 40–90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was conducted using the similitude method. The analysis shows that the models developed are applicable up to a 10 L working volume. The study shows that, with appropriate modeling and simulation, data can be predicted for designing and operating an FBR for wastewater treatment. PMID:25309949
Kumar, Rajesh; Pant, H J; Goswami, Sunil; Sharma, V K; Dash, A; Mishra, S; Bhanja, K; Mohan, Sadhana; Mahajani, S M
2017-03-01
Holdup and axial dispersion of the liquid phase in a catalytic exchange column were investigated by measuring residence time distributions (RTD) using a radiotracer technique. RTD experiments were independently carried out with two different types of packing: a hydrophobic water-repellent supported platinum catalyst, and a 50% (v/v) mixture of the hydrophobic catalyst and a hydrophilic wettable packing. Mean residence times and holdups of the liquid phase were estimated at different operating conditions. An axial dispersion model (ADM) and an axial dispersion with exchange model (ADEM) were used to simulate the measured RTD data, and both models were found equally suitable to describe the measurements. The degree of axial mixing was estimated in terms of the Peclet number (Pe) and Bodenstein number (Bo). Based on the obtained ADM parameters, correlations for total liquid holdup (HT) and axial mixing in terms of Bo were proposed for design and scale-up of the full-scale catalytic exchange column. Copyright © 2016 Elsevier Ltd. All rights reserved.
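The moment-based route from a measured RTD to a mean residence time and a Peclet number can be sketched as follows; the small-dispersion relation Pe ≈ 2τ²/σ² is an illustrative assumption, whereas the paper fits full ADM/ADEM models to the measured curves:

```python
import numpy as np

# Sketch: mean residence time and Peclet number from RTD moments.
# The synthetic tracer curve and the small-dispersion approximation
# are assumptions for illustration only.

def rtd_moments(t, c):
    """Normalize a tracer curve to E(t) and return (tau, variance)."""
    dt = t[1] - t[0]                      # uniform sampling assumed
    E = c / (np.sum(c) * dt)
    tau = np.sum(t * E) * dt
    var = np.sum((t - tau) ** 2 * E) * dt
    return tau, var

# Synthetic near-Gaussian tracer response (time in seconds)
t = np.linspace(0.0, 200.0, 2001)
c = np.exp(-0.5 * ((t - 60.0) / 8.0) ** 2)
tau, var = rtd_moments(t, c)
Pe = 2.0 * tau ** 2 / var                 # small-dispersion estimate
```

In practice the ADM (and ADEM) parameters are obtained by fitting the model response to the full measured RTD rather than from moments alone.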
NASA Astrophysics Data System (ADS)
Ballesteros, Guillermo; Redondo, Javier; Ringwald, Andreas; Tamarit, Carlos
2017-08-01
We present a minimal extension of the Standard Model (SM) providing a consistent picture of particle physics from the electroweak scale to the Planck scale and of cosmology from inflation until today. Three right-handed neutrinos Ni, a new color triplet Q and a complex SM-singlet scalar σ, whose vacuum expectation value vσ ~ 10¹¹ GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously, are added to the SM. At low energies, the model reduces to the SM, augmented by seesaw generated neutrino masses and mixing, plus the axion. The latter solves the strong CP problem and accounts for the cold dark matter in the Universe. The inflaton is a mixture of σ and the SM Higgs, and reheating of the Universe after inflation proceeds via the Higgs portal. Baryogenesis occurs via thermal leptogenesis. Thus, five fundamental problems of particle physics and cosmology are solved at one stroke in this unified Standard Model—axion—seesaw—Higgs portal inflation (SMASH) model. It can be probed decisively by upcoming cosmic microwave background and axion dark matter experiments.
Estimating and modeling the cure fraction in population-based cancer survival analysis.
Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W
2007-07-01
In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
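The contrast between the two model families can be illustrated numerically. A minimal sketch, assuming a Weibull latency distribution and illustrative parameter values (the paper additionally incorporates background mortality):

```python
import math

# Sketch of the mixture and non-mixture cure fraction models discussed
# above. The Weibull shape/scale and the cure fraction pi are assumed
# values for illustration, not estimates from the paper.

def weibull_cdf(t, shape, scale):
    return 1.0 - math.exp(-((t / scale) ** shape))

def mixture_cure_survival(t, pi, shape, scale):
    """Mixture model: a fraction pi is cured; the rest follow Weibull survival."""
    return pi + (1.0 - pi) * (1.0 - weibull_cdf(t, shape, scale))

def nonmixture_cure_survival(t, pi, shape, scale):
    """Non-mixture model: S(t) = pi ** F(t), so S(t) -> pi as t grows."""
    return pi ** weibull_cdf(t, shape, scale)

pi = 0.4                      # assumed cure fraction
S_mix = mixture_cure_survival(50.0, pi, 1.2, 5.0)
S_non = nonmixture_cure_survival(50.0, pi, 1.2, 5.0)
```

Both survival functions plateau at the cure fraction π at long follow-up; the two families differ in how survival approaches that plateau, which is why modeling the ancillary (shape) parameters matters.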
Process dissociation and mixture signal detection theory.
DeCarlo, Lawrence T
2008-11-01
The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.
Statistical-thermodynamic model for light scattering from eye lens protein mixtures
NASA Astrophysics Data System (ADS)
Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.
2017-02-01
We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations where light scattering intensity depends on molecular weights and virial coefficients, to realistically high concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism, that prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. 
The model indicates that increased γ-γ attraction can raise γ-α mixture light scattering far more than it does for solutions of γ-crystallin alone, and can produce marked turbidity tens of degrees Celsius above liquid-liquid separation.
Investigation of surface fluctuating pressures on a 1/4 scale YC-14 upper surface blown flap model
NASA Technical Reports Server (NTRS)
Pappa, R. S.
1979-01-01
Fluctuating pressures were measured at 30 positions on the surface of a 1/4-scale YC-14 wing and fuselage model during an outdoor static testing program. These data were obtained as part of a NASA program to study the fluctuating loads imposed on STOL aircraft configurations and to further the understanding of the scaling laws of unsteady surface pressure fields. Fluctuating pressure data were recorded at several discrete engine thrust settings for each of 16 configurations of the model. These data were reduced using the techniques of random data analysis to obtain auto- and cross-spectral density functions and coherence functions for frequencies from 0 to 10 kHz, and cross-correlation functions for time delays from 0 to 10.24 ms. Results of this program provide the following items of particular interest: (1) Good collapse of normalized PSD functions on the USB flap was found using a technique applied by Lilley and Hodgson to data from a laboratory wall-jet apparatus. (2) Results indicate that the fluctuating pressure loading on surfaces washed by the jet exhaust flow was dominated by hydrodynamic pressure variations, that loading on surfaces well outside the flow region was dominated by acoustic pressure variations, and that loading near the flow boundaries arose from a mixture of the two.
An Active Fire Temperature Retrieval Model Using Hyperspectral Remote Sensing
NASA Astrophysics Data System (ADS)
Quigley, K. W.; Roberts, D. A.; Miller, D.
2017-12-01
Wildfire is both an important ecological process and a dangerous natural threat that humans face. In situ measurements of wildfire temperature are notoriously difficult to collect due to dangerous conditions. Imaging spectrometry data has the potential to provide some of the most accurate and highest temporally-resolved active fire temperature retrieval information for monitoring and modeling. Recent studies on fire temperature retrieval have used Multiple Endmember Spectral Mixture Analysis applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) bands to model fire temperatures within regions marked as containing fire, but these methods are less effective at coarser spatial resolutions, as linear mixing methods are degraded by saturation within the pixel. Assuming a distribution of temperatures within each pixel allows us to model pixels with an effective maximum and likely minimum temperature, giving a more robust approach to modeling temperature at different spatial scales. In this study, instrument-corrected radiance is forward-modeled for different ranges of temperatures, with weighted temperatures from an effective maximum temperature to a likely minimum temperature contributing to the total radiance of the modeled pixel. The effective maximum fire temperature is estimated by minimizing the Root Mean Square Error (RMSE) between modeled and measured fires. The model was tested using AVIRIS data collected over the 2016 Sherpa Fire in Santa Barbara County, California. While only in situ experimentation could confirm active fire temperatures, the fit of the data to modeled radiance can be assessed, as can the similarity of the temperature distributions seen at different spatial resolutions.
Results show that this model improves upon current modeling methods in producing similar effective temperatures on multiple spatial scales as well as a similar modeled area distribution of those temperatures.
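The forward-modelling idea reduces, in its simplest form, to computing Planck radiance at a few bands and grid-searching for the temperature that minimizes RMSE against the measurement. The sketch below uses a single effective temperature and assumed SWIR band centers; the paper instead weights a distribution of temperatures between an effective maximum and a likely minimum:

```python
import numpy as np

# Sketch: single-temperature fire retrieval by Planck forward modelling
# and RMSE minimization. Band centers and the single-temperature
# simplification are assumptions for illustration.

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck(wl, T):
    """Spectral radiance (W m^-2 sr^-1 m^-1) at wavelength wl (m), temperature T (K)."""
    return (2.0 * H * C ** 2 / wl ** 5) / np.expm1(H * C / (wl * K * T))

bands = np.array([1.6e-6, 2.2e-6, 2.5e-6])     # assumed SWIR band centers (m)
measured = planck(bands, 900.0)                # synthetic 900 K fire pixel

T_grid = np.arange(600.0, 1300.0, 1.0)
rmse = [float(np.sqrt(np.mean((planck(bands, T) - measured) ** 2))) for T in T_grid]
T_best = float(T_grid[int(np.argmin(rmse))])
```

Extending this to the paper's approach would mean summing band radiances over a weighted range of temperatures (and sub-pixel area fractions) before computing the misfit.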
Lab-Scale Stimulation Results on Surrogate Fused Silica Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlos Fernandez
Lab-scale stimulation work on non-porous fused silica (mechanical properties similar to igneous rock) was performed using pure water, pure CO2, and water/CO2 mixtures to compare the back-to-back fracturing performance of these fluids with PNNL's StimuFrac.
Advertising and Irreversible Opinion Spreading in Complex Social Networks
NASA Astrophysics Data System (ADS)
Candia, Julián
Irreversible opinion spreading phenomena are studied on small-world and scale-free networks by means of the magnetic Eden model, a nonequilibrium kinetic model for the growth of binary mixtures in contact with a thermal bath. In this model, the opinion of an individual is affected by those of their acquaintances, but opinion changes (analogous to spin flips in an Ising-like model) are not allowed. We focus on the influence of advertising, which is represented by external magnetic fields. The interplay and competition between temperature and fields lead to order-disorder transitions, which are found to also depend on the link density and the topology of the complex network substrate. The effects of advertising campaigns with variable duration, as well as the best cost-effective strategies to achieve consensus within different scenarios, are also discussed.
Statistical processing of large image sequences.
Khellah, F; Fieguth, P; Murray, M J; Allen, M
2005-01-01
The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
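One way to read the "mixture of stationary models" idea is as a spatially weighted blend of stationary covariances that stays symmetric positive-definite. The correlation models, grid size, and weighting scheme below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Sketch: emulating a nonstationary prior with a spatially weighted
# mixture of two stationary exponential correlation models. Correlation
# lengths and the blending weights are assumed for illustration.

def exp_cov(n, length):
    """Stationary exponential correlation matrix on a 1-D grid."""
    idx = np.arange(n)
    return np.exp(-np.abs(idx[:, None] - idx[None, :]) / length)

n = 64
w = np.linspace(0.0, 1.0, n)            # smooth spatial mixing weight
P = (np.sqrt(np.outer(w, w)) * exp_cov(n, 2.0)
     + np.sqrt(np.outer(1.0 - w, 1.0 - w)) * exp_cov(n, 10.0))

# The blend equals D_w A D_w + D_{1-w} B D_{1-w} (D = diag of sqrt weights),
# a sum of congruences of positive-definite matrices, so it stays
# symmetric positive-definite:
eigmin = float(np.linalg.eigvalsh(P).min())
```

This mirrors the abstract's claim that the mixture construction yields an efficient, stable, positive-definite prior consistent with a prescribed correlation structure.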
Model of chromosomal loci dynamics in bacteria as fractional diffusion with intermittent transport
NASA Astrophysics Data System (ADS)
Gherardi, Marco; Calabrese, Ludovico; Tamm, Mikhail; Cosentino Lagomarsino, Marco
2017-10-01
The short-time dynamics of bacterial chromosomal loci is a mixture of subdiffusive and active motion, in the form of rapid relocations with near-ballistic dynamics. While previous work has shown that such rapid motions are ubiquitous, we still have little grasp on their physical nature, and no positive model is available that describes them. Here, we propose a minimal theoretical model for loci movements as a fractional Brownian motion subject to a constant but intermittent driving force, and compare simulations and analytical calculations to data from high-resolution dynamic tracking in E. coli. This analysis yields the characteristic time scales for intermittency. Finally, we discuss the possible shortcomings of this model, and show that an increase in the effective local noise felt by the chromosome is associated with the active relocations.
Evolutionary model of stock markets
NASA Astrophysics Data System (ADS)
Kaldasch, Joachim
2014-12-01
The paper presents an evolutionary economic model for the price evolution of stocks. Treating a stock market as a self-organized system governed by a fast purchase process and slow variations of demand and supply, the model suggests that the short-term price distribution has the form of a logistic (Laplace) distribution. The long-term return can be described by Laplace-Gaussian mixture distributions. The long-term mean price evolution is governed by a Walras equation, which can be transformed into a replicator equation. This allows quantifying the evolutionary price competition between stocks. The theory suggests that stock prices scaled by the price over all stocks can be used to investigate long-term trends in a Fisher-Pry plot. The price competition that follows from the model is illustrated by examining the empirical long-term price trends of two stocks.
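The Laplace form of short-term returns is straightforward to check on data: the maximum-likelihood fit uses the sample median for location and the mean absolute deviation for scale. A sketch on synthetic Laplace returns (real stock returns would take their place):

```python
import numpy as np

# Sketch: maximum-likelihood fit of a Laplace (double-exponential)
# distribution to returns. The synthetic data below stand in for
# empirical short-term returns; the true scale is 0.02 by construction.

rng = np.random.default_rng(0)
returns = rng.laplace(loc=0.0, scale=0.02, size=100_000)

loc_hat = float(np.median(returns))               # MLE of location
b_hat = float(np.mean(np.abs(returns - loc_hat)))  # MLE of scale
```

A heavier-than-Gaussian fit of this kind (log-density linear in |x|) is the signature the model predicts for the fast purchase process.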
NASA Astrophysics Data System (ADS)
Farhidzadeh, Alireza; Dehghan-Niri, Ehsan; Salamone, Salvatore
2013-04-01
Reinforced concrete (RC) has been widely used in infrastructure construction for many decades. The cracking behavior of concrete is crucial due to its harmful effects on structural performance, such as serviceability and durability. In general, when such structures are loaded to failure, tensile cracks develop in the initial stages of loading, while shear cracks dominate later. Monitoring the cracking modes is therefore of paramount importance, as it can lead to prediction of structural performance. In the past two decades, significant efforts have been made toward the development of automated structural health monitoring (SHM) systems. Among them, a technique that shows promise for monitoring RC structures is acoustic emission (AE). This paper introduces a novel probabilistic approach based on Gaussian Mixture Modeling (GMM) to classify the AE signals related to each crack mode. The system provides an early warning by recognizing the nucleation of numerous critical shear cracks. The algorithm is validated through an experimental study on a full-scale reinforced concrete shear wall subjected to reversed cyclic loading. A modified conventional classification scheme and a new criterion for crack classification are also proposed.
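A minimal version of GMM-based AE classification in one feature dimension, fit with a hand-rolled EM loop; the synthetic feature values and two-class setup are assumptions for illustration (the paper's classifier works on real multivariate AE waveform features):

```python
import numpy as np

# Sketch: two-component 1-D Gaussian mixture fit by EM, separating
# shear-like from tensile-like AE signals. The feature (e.g. average
# frequency, kHz) and the two synthetic clusters are assumed here.

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(100.0, 10.0, 500),    # shear-like signals
                    rng.normal(300.0, 20.0, 500)])   # tensile-like signals

mu = np.array([50.0, 400.0])        # deliberately poor initial guesses
sig = np.array([50.0, 50.0])
w = np.array([0.5, 0.5])
for _ in range(200):                # EM iterations
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)           # E-step: responsibilities
    w = r.mean(axis=0)                          # M-step: weights, means, stds
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))

centers = np.sort(mu)               # recovered class centers
```

New AE hits would then be assigned to the crack mode whose component gives the higher responsibility.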
Lu, Cailing; Svoboda, Kurt R; Lenz, Kade A; Pattison, Claire; Ma, Hongbo
2018-06-01
Manganese (Mn) is considered an emerging metal contaminant in the environment. However, its potential interactions with co-occurring toxic metals and the associated mixture effects are largely unknown. Here, we investigated the toxicity interactions between Mn and two common co-occurring toxic metals, Pb and Cd, in the model organism Caenorhabditis elegans. The acute lethal toxicity of mixtures of Mn+Pb and Mn+Cd was first assessed using a toxic unit model. Multiple toxicity endpoints including reproduction, lifespan, stress response, and neurotoxicity were then examined to evaluate the mixture effects at sublethal concentrations. Stress response was assessed using a daf-16::GFP transgenic strain that expresses GFP under the control of the DAF-16 promoter. Neurotoxicity was assessed using a dat-1::GFP transgenic strain that expresses GFP in dopaminergic neurons. The mixture of Mn+Pb induced a more-than-additive (synergistic) lethal toxicity in the worm, whereas the mixture of Mn+Cd induced a less-than-additive (antagonistic) toxicity. Mixture effects on sublethal toxicity showed more complex patterns and depended on the toxicity endpoints as well as the modes of toxic action of the metals. The mixture of Mn+Pb induced additive effects on both reproduction and lifespan, whereas the mixture of Mn+Cd induced additive effects on lifespan but not reproduction. Both mixtures seemed to induce additive effects on stress response and neurotoxicity, although a quantitative assessment was not possible due to the single concentrations used in the mixture tests. Our findings demonstrate the complexity of metal interactions and the associated mixture effects. Assessment of metal mixture toxicity should take into consideration the unique properties of individual metals, their potential toxicity mechanisms, and the toxicity endpoints examined.
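The toxic unit (TU) bookkeeping behind the lethal-toxicity assessment can be sketched in a few lines; the EC50 values below are placeholders, not the paper's measurements:

```python
# Sketch of the toxic unit approach used to classify mixture
# interactions. EC50 values here are illustrative assumptions.

def toxic_units(concs, ec50s):
    """Sum of toxic units: TU_mix = sum_i (c_i / EC50_i)."""
    return sum(c / e for c, e in zip(concs, ec50s))

# A mixture at half of each metal's EC50 sums to TU = 1, the additivity
# reference line; observed toxicity above/below that line indicates
# synergism (as found for Mn+Pb) or antagonism (as for Mn+Cd).
tu = toxic_units([5.0, 12.5], [10.0, 25.0])
```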
Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity
NASA Astrophysics Data System (ADS)
Chen, Hsieh; Panagiotopoulos, Athanassios Z.
2018-01-01
We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.
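The combining-rule idea can be sketched with a mole-fraction-weighted permittivity; the linear form and the numerical values are illustrative assumptions, since the paper derives its rule from measured concentration-dependent permittivities of the pure electrolytes:

```python
# Sketch: a mole-fraction-weighted combining rule for the dielectric
# permittivity of an electrolyte mixture. Functional form and numbers
# are assumptions standing in for the paper's measured inputs.

def eps_mixture(mole_fractions, eps_pure):
    """Mixture permittivity as a mole-fraction-weighted average."""
    return sum(x * e for x, e in zip(mole_fractions, eps_pure))

# NaCl-CaCl2 brine example with made-up concentration-dependent values
eps = eps_mixture([0.7, 0.3], [65.0, 55.0])
```

In the implicit-solvent simulation, this mixture permittivity would rescale the Coulomb interactions between ions.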
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
Modeling of First-Passage Processes in Financial Markets
NASA Astrophysics Data System (ADS)
Inoue, Jun-Ichi; Hino, Hikaru; Sazuka, Naoya; Scalas, Enrico
2010-03-01
In this talk, we attempt a microscopic model of the first-passage process (or first-exit process) of the BUND future using a minority game with market history. We find that the first-passage process of the minority game with an appropriate history length generates the same properties as the BTP future (middle- and long-term Italian government bonds with fixed interest rates); namely, both first-passage time distributions have a crossover at some specific time scale, as is the case for the Mittag-Leffler function. We also provide a macroscopic (or phenomenological) model of the first-passage process of the BTP future and show analytically that the first-passage time distribution of the simplest mixture of normal compound Poisson processes does not have such a crossover.
New Challenges in Computational Thermal Hydraulics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadigaroglu, George; Lakehal, Djamel
New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.
Mixture IRT Model with a Higher-Order Structure for Latent Traits
ERIC Educational Resources Information Center
Huang, Hung-Yu
2017-01-01
Mixture item response theory (IRT) models have been suggested as an efficient method of detecting the different response patterns derived from latent classes when developing a test. In testing situations, multiple latent traits measured by a battery of tests can exhibit a higher-order structure, and mixtures of latent classes may occur on…
NASA Astrophysics Data System (ADS)
Corradini, Dario; Coudert, François-Xavier; Vuilleumier, Rodolphe
2016-03-01
We use molecular dynamics simulations to study the thermodynamics, structure, and dynamics of the Li2CO3-K2CO3 (62:38 mol. %) eutectic mixture. We present a new classical non-polarizable force field for this molten salt mixture, optimized using experimental and first principles molecular dynamics simulations data as reference. This simple force field allows efficient molecular simulations of phenomena at long time scales. We use this optimized force field to describe the behavior of the eutectic mixture in the 900-1100 K temperature range, at pressures between 0 and 5 GPa. After studying the equation of state in these thermodynamic conditions, we present molecular insight into the structure and dynamics of the melt. In particular, we present an analysis of the temperature and pressure dependence of the eutectic mixture's self-diffusion coefficients, viscosity, and ionic conductivity.
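Transport coefficients such as the self-diffusion coefficients reported above are typically extracted from trajectories via the Einstein relation MSD(t) ≈ 6Dt. A sketch on a synthetic random walk standing in for ion trajectories (the time step and step size are arbitrary assumptions; for this walk the true value is D = σ²/(2Δt) = 1.25 in these units):

```python
import numpy as np

# Sketch: self-diffusion coefficient from the Einstein relation
# MSD(t) ~ 6 D t, using a synthetic 3-D random walk in place of the
# ion trajectories a melt simulation would provide.

rng = np.random.default_rng(2)
n_part, n_steps, dt, sigma = 100, 5000, 1e-3, 0.05
steps = rng.normal(0.0, sigma, size=(n_part, n_steps, 3))
pos = np.cumsum(steps, axis=1)                      # displacement from origin

t = np.arange(1, n_steps + 1) * dt
msd = np.mean(np.sum(pos ** 2, axis=2), axis=0)     # average over particles
D = np.polyfit(t, msd, 1)[0] / 6.0                  # slope of MSD(t), divided by 6
```

For a real melt trajectory one would restrict the linear fit to the diffusive regime and average over time origins as well as particles.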
A regularized vortex-particle mesh method for large eddy simulation
NASA Astrophysics Data System (ADS)
Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.
2017-11-01
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for large eddy simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Furthermore, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
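The core of such a solver, stripped of the regularized Green's functions for open directions, is a spectral Poisson solve on a periodic mesh. The sketch below verifies one against an analytic solution:

```python
import numpy as np

# Sketch: FFT-based Poisson solve on a fully periodic 2-D mesh, the
# building block behind the vortex particle-mesh solver described
# above (the paper's regularized Green's functions additionally
# handle open and mixed domains).

n, L = 64, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
rhs = -2.0 * np.sin(X) * np.sin(Y)        # Laplacian of phi = sin(x) sin(y)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX ** 2 + KY ** 2
k2[0, 0] = 1.0                            # dodge 0/0; mean mode fixed below

phi_hat = -np.fft.fft2(rhs) / k2          # -k^2 phi_hat = rhs_hat
phi_hat[0, 0] = 0.0                       # pin the arbitrary mean to zero
phi = np.real(np.fft.ifft2(phi_hat))
err = float(np.max(np.abs(phi - np.sin(X) * np.sin(Y))))
```

On a periodic smooth problem the error is at machine precision; the regularization machinery in the paper is what preserves high order when boundaries are open.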
Beta Regression Finite Mixture Models of Polarization and Priming
ERIC Educational Resources Information Center
Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay
2011-01-01
This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…
Huang, Wei Ying; Liu, Fei; Liu, Shu Shen; Ge, Hui Lin; Chen, Hong Han
2011-09-01
The predictions of mixture toxicity for chemicals are commonly based on two models: concentration addition (CA) and independent action (IA). We studied whether CA and IA can predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms. The mixture toxicity was predicted on the basis of the concentration-response data of the individual compounds. Test mixtures at different concentration ratios and concentration levels were designed using two methods. The results showed that the Weibull function fit the concentration-response data of all the components and their mixtures well, with all correlation coefficients (R) greater than 0.99 and root mean squared errors (RMSEs) less than 0.04. The values predicted by the CA and IA models conformed to the observed values for the mixtures. Therefore, it can be concluded that both CA and IA can reliably predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
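The two reference models can be written down directly from fitted single-compound curves: CA inverts the individual curves at a common effect level, IA multiplies response probabilities. The Weibull parameters below are illustrative assumptions, not the paper's fits:

```python
import math

# Sketch: concentration addition (CA) and independent action (IA)
# predictions from individual Weibull concentration-response curves.
# Parameter values (a, b) are assumed for illustration.

def effect(c, a, b):
    """Weibull concentration-response: E = 1 - exp(-exp(a + b*log10(c)))."""
    return 1.0 - math.exp(-math.exp(a + b * math.log10(c)))

def ia_effect(concs, params):
    """IA: E_mix = 1 - prod_i (1 - E_i)."""
    surv = 1.0
    for c, (a, b) in zip(concs, params):
        surv *= 1.0 - effect(c, a, b)
    return 1.0 - surv

def ca_mixture_ec(x, params, ratios):
    """CA: total mixture concentration producing effect x at fixed mixture
    ratios (ratios sum to 1): C_mix = 1 / sum_i (p_i / EC_x,i)."""
    ecx = [10.0 ** ((math.log(-math.log(1.0 - x)) - a) / b) for a, b in params]
    return 1.0 / sum(p / e for p, e in zip(ratios, ecx))

params = [(0.5, 1.2), (0.3, 1.0)]             # illustrative Weibull fits
E_ia = ia_effect([1.0, 1.0], params)          # IA effect at unit concentrations
EC50_mix = ca_mixture_ec(0.5, params, [0.5, 0.5])
```

Comparing observed mixture responses against both predictions is the test the study performs across its concentration ratios and levels.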
Plastination of macroparasites: An eco-friendly method of long-term preservation
Kumar, Niranjan; Das, Bhupamani; Solanki, Jayesh B.; Jadav, Mehul M.; Menaka, Ramasamy
2017-01-01
Aim: Preservation of macroparasites by infiltrating a polymer into the tissues can overcome the inherent shortcomings of the classical wet-preservation method. Materials and Methods: Specimens were preserved by infiltrating melamine, either alone or mixed 1:1 with xylene (MX), chloroform (MC), or turpentine oil (MT), or 9:1 with hardener (MH), into the tissues of gross specimens of animal parasites. Results: The plastinated models withstood microbial decomposition and remained intact under ambient environmental conditions. The polymer mixture resisted the entry of water molecules, and the models dried immediately after removal from the water tank. Overall, the plastinated parasites were dry, non-sticky, glossy, odorless, chemical-free, harmless, and somewhat flexible, with detectable morphological structure; they retained their natural form but lost their natural color. On a five-point scale, full marks were assigned for degree of dryness, non-stickiness, and odorlessness for models plastinated in all solutions. For flexibility, the scores were 1.2, 2.2, and 2.4 for models plastinated in melamine/MH, MX/MC, and MT solutions, respectively. The average glossiness score was 4.6 for specimens plastinated in melamine/MH and 5 for those in MX/MC/MT solutions. The degree of dryness, glossiness, stickiness, and flexibility varied non-significantly with the polymer mixture used. Conclusion: The prepared models can be used to educate students and the general public. PMID:29263605
Mixture optimization for mixed gas Joule-Thomson cycle
NASA Astrophysics Data System (ADS)
Detlor, J.; Pfotenhauer, J.; Nellis, G.
2017-12-01
An appropriate gas mixture used in a Joule-Thomson (JT) cycle can provide lower temperatures and higher cooling power than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters, expanding on prior research by exploring higher heat-rejection temperatures and lower pressure ratios. A mixture optimization model has been developed that determines an optimal three-component mixture by maximizing the minimum isothermal enthalpy change, ΔhT, over the operating temperature range. This allows optimal mixture compositions to be determined for a mixed-gas JT system with load temperatures down to 110 K and supply temperatures above room temperature, for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the fraction of the heat exchanger that operates in the two-phase region, as a first step toward selecting a mixture for experimental investigation.
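The max-min ΔhT search can be sketched as an exhaustive scan over ternary compositions. A real tool would evaluate ΔhT = h(T, p_low) - h(T, p_high) from a mixture equation of state (e.g., REFPROP-style property routines); the surrogate property function below is purely illustrative and carries no physical meaning.

```python
import numpy as np

def min_isothermal_dh(y, T_grid, p_hi, p_lo):
    """Stand-in for min over T of the isothermal enthalpy change dh_T of
    mixture y between p_hi and p_lo. A real implementation would call a
    mixture equation of state; this toy surrogate only gives the search
    something smooth to optimize."""
    curves = np.array([np.sin(T_grid/80.0 + i) + 1.5 for i in range(len(y))])
    dh = (y @ curves) * np.log(p_hi/p_lo)
    return dh.min()

def optimize_ternary(step=0.05, T_grid=np.linspace(110.0, 300.0, 50),
                     p_hi=3.0, p_lo=1.0):
    """Exhaustive scan over ternary compositions y1 + y2 + y3 = 1,
    maximizing the minimum isothermal enthalpy change over T_grid."""
    best_val, best_y = -np.inf, None
    for y1 in np.arange(0.0, 1.0 + 1e-9, step):
        for y2 in np.arange(0.0, 1.0 - y1 + 1e-9, step):
            y = np.array([y1, y2, 1.0 - y1 - y2])
            val = min_isothermal_dh(y, T_grid, p_hi, p_lo)
            if val > best_val:
                best_val, best_y = val, y
    return best_y, best_val
```

Maximizing the minimum ΔhT is what guarantees the JT stage can absorb the design heat load at every temperature along the recuperator, which is why the objective is max-min rather than a simple average.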
Existence, uniqueness and positivity of solutions for BGK models for mixtures
NASA Astrophysics Data System (ADS)
Klingenberg, C.; Pirner, M.
2018-01-01
We consider kinetic models for a multicomponent gas mixture without chemical reactions. In the literature, one finds two types of BGK models for gas mixtures. One type has a sum of BGK-type interaction terms in the relaxation operator, for example the model described by Klingenberg, Pirner and Puppo [20], which contains well-known models used by physicists and engineers, for example those of Hamel [16] and of Gross and Krook [15], as special cases. The other type contains only one collision term on the right-hand side, for example the well-known model of Andries, Aoki and Perthame [1]. For each of these two models, [20] and [1], we prove existence, uniqueness and positivity of solutions in the first part of the paper. In the second part, we use the first model [20] to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described by Dellacherie [11].
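The relaxation structure shared by these BGK operators can be illustrated with a space-homogeneous, single-species, discrete-velocity sketch. The mixture models of [20] and [1] add cross-relaxation terms between species; this minimal single-species version only shows why moments are conserved and why f relaxes to a Maxwellian.

```python
import numpy as np

def maxwellian(v, n, u, T):
    """1-D Maxwellian with density n, bulk velocity u, temperature T."""
    return n/np.sqrt(2.0*np.pi*T)*np.exp(-(v - u)**2/(2.0*T))

def moments(f, v, dv):
    """Density, bulk velocity and temperature of a discrete-velocity f."""
    n = f.sum()*dv
    u = (f*v).sum()*dv/n
    T = (f*(v - u)**2).sum()*dv/n
    return n, u, T

def bgk_relax(f, v, dv, nu=1.0, dt=0.01, steps=800):
    """Space-homogeneous single-species BGK: df/dt = nu*(M[f] - f).
    M[f] shares f's moments, so n, u, T are conserved at every step."""
    for _ in range(steps):
        n, u, T = moments(f, v, dv)
        f = f + dt*nu*(maxwellian(v, n, u, T) - f)
    return f

# relax a bimodal (two-beam) distribution toward a Maxwellian
v = np.linspace(-15.0, 15.0, 600)
dv = v[1] - v[0]
f0 = maxwellian(v, 0.5, -2.0, 1.0) + maxwellian(v, 0.5, 2.0, 1.0)
f_end = bgk_relax(f0, v, dv)
```

Because the target Maxwellian is rebuilt from the current moments at each step, the deviation f - M decays geometrically while density, momentum and energy stay fixed; the positivity proofs in the paper concern the analogous structure in the full mixture setting.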
New methods to quantify the cracking performance of cementitious systems made with internal curing
NASA Astrophysics Data System (ADS)
Schlitter, John L.
The use of high-performance concretes with low water-cement ratios has been promoted for infrastructure based on their potential to increase durability and service life, because such concretes are stronger and less porous. Unfortunately, these benefits are not always realized, owing to the susceptibility of high-performance concrete to early-age cracking caused by shrinkage. This problem is widespread and affects federal, state, and local budgets, which must pay to maintain or replace infrastructure deteriorated by cracking. As a result, methods to reduce or eliminate early-age shrinkage cracking have been investigated. Internal curing is one such method, in which prewetted lightweight sand is incorporated into the concrete mixture to provide internal water as the concrete cures. This can significantly reduce or eliminate shrinkage and, in some cases, causes a beneficial early-age expansion. Standard laboratory tests have been developed to quantify the shrinkage-cracking potential of concrete. Unfortunately, many of these tests may not be appropriate for internally cured mixtures and provide only limited information; in particular, most standard tests are not designed to capture the expansive behavior of internally cured mixtures. This thesis describes the design and implementation of two new testing devices that overcome the limitations of current standards. The first is the dual ring, a device that quantifies the early-age restrained-shrinkage performance of cementitious mixtures. Its design is based on the ASTM C 1581-04 standard test, which uses a single steel ring to restrain a cementitious specimen. The dual ring overcomes two important limitations of the standard test. First, the standard single-ring test cannot restrain the expansion that takes place at early ages, which is not representative of field conditions.
The dual ring incorporates a second restraining ring, located outside the sample, to provide restraint against expansion. Second, the standard ring test is a passive test that relies only on the autogenous and drying shrinkage of the mixture to induce cracking. The dual-ring test can be an active test because it can vary the temperature of the specimen to induce thermal stress and produce cracking. This makes it possible to study the restrained cracking capacity as the mixture ages and to identify crack-sensitive periods. Measurements made with the dual ring quantify the benefits of using larger amounts of internal curing: mixtures that resupplied internal curing water to match chemical shrinkage could sustain three times the magnitude of thermal change before cracking. The second device discussed in this thesis is a large-scale slab testing device, which tests the cracking potential of 15' long by 4" thick by 24" wide slab specimens in an environmentally controlled chamber. Current standard testing devices can be considered small-scale and encounter problems in linking their results to the field because of size effects. The large-scale slab testing device was therefore developed to calibrate the results of smaller-scale tests to real-world field conditions such as a pavement or bridge deck. Measurements made with it showed that the cracking propensity of the internally cured mixtures was reduced and that a significant benefit could be realized.
NASA Astrophysics Data System (ADS)
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, the actuary needs to evaluate the claims-cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull, and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well adapted to capture the skewness, the long tails, and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure; a bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in estimating quantiles, both with simulated nonnegative data and for the individual claims distribution in a non-life insurance application.
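The transform-smooth-back-transform pipeline can be sketched as follows. This is a minimal illustration with a single lognormal playing the role of the fitted parsimonious mixture and a fixed bandwidth; the paper's bandwidth rule is not reproduced, and all function names are placeholders.

```python
import numpy as np
from scipy import stats

def beta_kernel_density(u_eval, u_data, bw):
    """Beta-kernel density estimate on (0,1) (Chen-style boundary kernels):
    fhat(u) = mean_i Beta(u/bw + 1, (1-u)/bw + 1).pdf(u_i)."""
    out = np.empty(len(u_eval), dtype=float)
    for j, u in enumerate(u_eval):
        out[j] = stats.beta.pdf(u_data, u/bw + 1.0, (1.0 - u)/bw + 1.0).mean()
    return out

def semiparametric_claims_density(y_eval, y_data, mix_cdf, mix_pdf, bw=0.05):
    """Transform claims through the parsimonious mixture CDF, smooth on
    (0,1) with a beta kernel, and back-transform:
    f(y) = fhat(F_mix(y)) * f_mix(y)."""
    u_data = mix_cdf(y_data)
    return beta_kernel_density(mix_cdf(y_eval), u_data, bw) * mix_pdf(y_eval)

# demo: if the "mixture" is exactly right, fhat is near 1 and f ~ mix_pdf
rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=1.0, size=2000)
fit = stats.lognorm(s=1.0)              # stand-in for the fitted mixture
grid = np.linspace(0.05, 25.0, 400)
dens = semiparametric_claims_density(grid, y, fit.cdf, fit.pdf)
```

When the parametric mixture already fits well, the transformed data are near-uniform and the beta-kernel factor stays close to 1; where the mixture misses, the kernel factor bends the density toward the data, which is the "automatic fine-tuning" described above.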
Natural deep eutectic solvents: cytotoxic profile.
Hayyan, Maan; Mbous, Yves Paul; Looi, Chung Yeng; Wong, Won Fen; Hayyan, Adeeb; Salleh, Zulhaziman; Mohd-Ali, Ozair
2016-01-01
The purpose of this study was to investigate the cytotoxic profiles of different ternary natural deep eutectic solvents (NADESs) containing water. Five NADESs were prepared using choline chloride as the salt alongside five hydrogen bond donors (HBDs), namely glucose, fructose, sucrose, glycerol, and malonic acid. Water was added as the third component during eutectic preparation, except for the malonic acid-based mixture. Notably, the latter was found to be more toxic than any of the water-based NADESs. A trend was observed between the cellular requirements of cancer cells, the viscosity of the NADESs, and their cytotoxicity. This study also reports the first application of the conductor-like screening model for real solvents (COSMO-RS) software to the analysis of the cytotoxic mechanism of NADESs. COSMO-RS simulation of the interactions between NADESs and cell-membrane phospholipids suggested that NADESs interact strongly with cell surfaces and that their accumulation and aggregation possibly define their cytotoxicity. This reinforces the idea that careful selection of NADES components is necessary, as organic-acid HBDs evidently contribute strongly to the toxicity of these neoteric mixtures. Nevertheless, NADESs in general appear to have less acute toxicity profiles than their parent DESs. This opens the door to future large-scale utilization of these mixtures.
Diffusion of Magnetized Binary Ionic Mixtures at Ultracold Plasma Conditions
NASA Astrophysics Data System (ADS)
Vidal, Keith R.; Baalrud, Scott D.
2017-10-01
Ultracold plasma experiments offer an accessible means to test transport theories for strongly coupled systems. Application of an external magnetic field might further increase their utility by inhibiting ion and electron heating mechanisms and increasing the temperature at which strong coupling effects are observed. We present results focused on developing and validating a transport theory to describe binary ionic mixtures across a wide range of coupling and magnetization strengths relevant to ultracold plasma experiments. The transport theory is an extension of the Effective Potential Theory (EPT), which has been shown to accurately model correlation effects at these conditions, to include magnetization. We focus on diffusion because it can be measured in ultracold plasma experiments. Using EPT within the framework of the Chapman-Enskog expansion, the parallel and perpendicular self- and interdiffusion coefficients for binary ionic mixtures with varying mass ratios are calculated and compared with molecular dynamics simulations. The theory is found to accurately extend Braginskii-like transport to stronger coupling, but to break down when the magnetization becomes strong enough that the typical gyroradius is smaller than the interaction scale length. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0221.
Finite mixture modeling for vehicle crash data with application to hotspot identification.
Park, Byung-Jung; Lord, Dominique; Lee, Chungwon
2014-10-01
The application of finite mixture regression models has recently gained interest among highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations in a sample arise from two or more unobserved components with unknown proportions. Both fixed- and varying-weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study used observed and simulated data to investigate the relative performance of the finite mixture model and the traditional negative binomial (NB) model for hotspot identification. For the observed data, rural multilane-segment crash data for divided highways in California and Texas were used. The results showed that the difference, measured by the percentage deviation in ranking orders, was relatively small for this dataset. Nevertheless, the rankings from the finite mixture model were considered more reliable than those from the NB model because of the better model specification. This finding was supported by the simulation study, which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated a trade-off between the false discovery rate and the false negative rate: as one increases, the other decreases. Since the costs associated with false positives and false negatives differ, it is suggested that the optimal threshold be chosen by weighing these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
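As a simplified stand-in for the finite mixture regression models discussed above, a two-component Poisson mixture fitted by EM already illustrates how mixture-based posterior rates can rank sites for hotspot screening. This is a hypothetical sketch: it uses Poisson rather than NB components and no covariates.

```python
import numpy as np

def em_poisson_mixture(y, iters=200):
    """EM for a two-component Poisson mixture over site crash counts y."""
    lam = np.array([y.mean()*0.5 + 1e-6, y.mean()*1.5 + 1e-6])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities (log Poisson pmf; the k! term cancels
        # in the normalization across components)
        logp = y[:, None]*np.log(lam) - lam + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weights and component means
        w = r.mean(axis=0)
        lam = (r*y[:, None]).sum(axis=0)/r.sum(axis=0)
    return w, lam, r

def rank_hotspots(y):
    """Rank sites by their posterior-expected crash rate under the mixture."""
    w, lam, r = em_poisson_mixture(np.asarray(y, dtype=float))
    score = r @ lam
    return np.argsort(score)[::-1]
```

Ranking by the posterior-expected rate, rather than the raw count, is what lets the mixture shrink ordinary sites toward the low-rate component while leaving genuine hotspots in the high-rate component.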
NASA Astrophysics Data System (ADS)
Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.
2017-03-01
We have developed and implemented in software a mathematical model of the nonstationary separation processes occurring in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. Using this model, the parameters of the separation of germanium isotopes were calculated. It is shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters during the separation of multicomponent isotope mixtures.
Xia, Junchao; Case, David A.
2012-01-01
We report 100-ns molecular dynamics simulations, at various temperatures, of sucrose in water (with sucrose concentrations ranging from 0.02 to 4 M) and in a 7:3 water-DMSO mixture. Convergence of the resulting conformational ensembles was checked using adaptive-bias simulations along the glycosidic φ and ψ torsion angles. NMR relaxation parameters, including longitudinal (R1) and transverse (R2) relaxation rates, nuclear Overhauser enhancements (NOE), and generalized order parameters (S2), were computed from the resulting time-correlation functions. The amplitudes and time scales of molecular motions change with temperature and concentration in ways that track closely with experimental results and are consistent with a model in which sucrose conformational fluctuations are limited (with 80-90% of conformations having φ and ψ values within 20° of an average conformation), but with some important differences in conformation between pure water and DMSO-water mixtures. PMID:22058066
Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M
2015-09-15
A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force via Ewald summation, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119-fold increase in performance is achieved compared with the serial code, while two GPUs lead to a 69-287-fold improvement and three GPUs yield a 101-377-fold speedup. © 2015 Wiley Periodicals, Inc.
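The ingredients named above, a pair potential evaluated over a neighbor list with minimum-image periodic boundaries, can be sketched in plain Python. The paper's code is a CUDA GPU implementation with cell lists; this O(N^2)-build illustration only mirrors the data flow (build list once, reuse it in the energy/force kernel).

```python
import numpy as np

def build_neighbor_list(pos, box, r_cut, skin=0.3):
    """Verlet neighbor list by an O(N^2) scan; production codes use cell
    lists, but the pair list consumed by the kernel is the same."""
    rc2 = (r_cut + skin)**2
    pairs = []
    for i in range(len(pos) - 1):
        d = pos[i+1:] - pos[i]
        d -= box*np.round(d/box)              # minimum-image convention
        for j in np.nonzero((d**2).sum(axis=1) < rc2)[0]:
            pairs.append((i, i + 1 + int(j)))
    return pairs

def lj_energy(pos, box, pairs, eps=1.0, sigma=1.0, r_cut=2.5):
    """Truncated 12-6 Lennard-Jones energy accumulated over the pair list."""
    e = 0.0
    for i, j in pairs:
        d = pos[j] - pos[i]
        d -= box*np.round(d/box)
        r2 = (d**2).sum()
        if r2 < r_cut**2:
            s6 = (sigma*sigma/r2)**3
            e += 4.0*eps*(s6*s6 - s6)
    return e

# demo: a dimer at the LJ minimum distance 2^(1/6)*sigma has energy -eps
box = np.array([10.0, 10.0, 10.0])
pos = np.array([[0.0, 0.0, 0.0], [2.0**(1.0/6.0), 0.0, 0.0]])
pairs = build_neighbor_list(pos, box, r_cut=2.5)
```

The skin distance lets the list be reused for several steps before rebuilding, which is the main lever for the size-scaling improvement the abstract mentions.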
Evaluation of fuel preparation systems for lean premixing-prevaporizing combustors
NASA Technical Reports Server (NTRS)
Dodds, W. J.; Ekstedt, E. E.
1985-01-01
A series of experiments was carried out to produce design data for a premixing-prevaporizing fuel-air mixture preparation system for aircraft gas turbine engine combustors. The fuel-air mixture uniformity of four system design concepts was evaluated over a range of conditions representing the cruise operation of a modern commercial turbofan engine. Operating conditions, including pressure, temperature, fuel-to-air ratio, and velocity, exhibited no clear effect on the mixture uniformity of systems using pressure-atomizing fuel nozzles and large-scale mixing devices. However, the performance of systems using atomizing fuel nozzles and large-scale mixing devices was found to be sensitive to operating conditions. Variations in system design variables were also evaluated and correlated: mixing uniformity was found to improve with system length, pressure drop, and the number of fuel injection points per unit area. A premixing system capable of providing mixing uniformity to within 15 percent over a typical range of cruise operating conditions was demonstrated.
NASA Astrophysics Data System (ADS)
Danel, J.-F.; Kazandjian, L.
2018-06-01
It is shown that the equation of state (EOS) and the radial distribution functions obtained by density-functional theory molecular dynamics (DFT-MD) obey a simple scaling law: at a given temperature, the thermodynamic properties and radial distribution functions given by a DFT-MD simulation remain unchanged if the mole fractions of nuclei of given charge and the average volume per atom remain unchanged. One practical interest of this scaling law is that an EOS table for a fluid can be obtained from a table already computed for another fluid with the right characteristics. Another is that an asymmetric mixture made up of light and heavy atoms, which would require very different time steps, can be replaced by a mixture of atoms of equal mass, which facilitates the exploration of configuration space in a DFT-MD simulation. The scaling law is illustrated by numerical results.
Six-Tube Freezable Radiator Testing and Model Correlation
NASA Technical Reports Server (NTRS)
Lillibridge, Sean; Navarro, Moses
2011-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and because of different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full scale design. Two working fluids were tested, namely MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.
Six-Tube Freezable Radiator Testing and Model Correlation
NASA Technical Reports Server (NTRS)
Lilibridge, Sean T.; Navarro, Moses
2012-01-01
Freezable Radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and because of different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full scale design. Two working fluids were tested: MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.
The Updated BaSTI Stellar Evolution Models and Isochrones. I. Solar-scaled Calculations
NASA Astrophysics Data System (ADS)
Hidalgo, Sebastian L.; Pietrinferni, Adriano; Cassisi, Santi; Salaris, Maurizio; Mucciarelli, Alessio; Savino, Alessandro; Aparicio, Antonio; Silva Aguirre, Victor; Verma, Kuldeep
2018-04-01
We present an updated release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library for a solar-scaled heavy element distribution. The main input physics that have been changed from the previous BaSTI release include the solar metal mixture, electron conduction opacities, a few nuclear reaction rates, bolometric corrections, and the treatment of the overshooting efficiency for shrinking convective cores. The new model calculations cover a mass range between 0.1 and 15 M ⊙, 22 initial chemical compositions between [Fe/H] = ‑3.20 and +0.45, with helium to metal enrichment ratio dY/dZ = 1.31. The isochrones cover an age range between 20 Myr and 14.5 Gyr, consistently take into account the pre-main-sequence phase, and have been translated to a large number of popular photometric systems. Asteroseismic properties of the theoretical models have also been calculated. We compare our isochrones with results from independent databases and with several sets of observations to test the accuracy of the calculations. All stellar evolution tracks, asteroseismic properties, and isochrones are made available through a dedicated web site.
NASA Astrophysics Data System (ADS)
Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de
2018-03-01
Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture, defined by a convex combination of local and nonlocal phases, is presented. The analysis reveals that the Eringen fully nonlocal strain-driven model cannot be used in structural mechanics since it is ill-posed, and that local-nonlocal mixtures based on the Eringen integral model only partially resolve this ill-posedness: a singular behaviour of continuous nano-structures appears as the local fraction tends to vanish, so the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions for inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal stress-driven integral mixture relation. The effectiveness of the new nonlocal approach is tested by comparing the present results with those corresponding to the Eringen mixture theory.