Sample records for adaptive mixture modelling

  1. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

This paper considers three crucial issues in processing a scaled-down image: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to preserve image details effectively and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.
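The core machinery shared by many records in this list is expectation-maximization (EM) estimation of a Gaussian mixture. As an illustrative sketch only (not the authors' scan-and-select or MBF procedure), a minimal two-component 1-D EM fit looks like:

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by EM (illustrative sketch)."""
    # crude initialisation: split the sorted data in half
    xs = np.sort(x)
    mu = np.array([xs[: len(xs) // 2].mean(), xs[len(xs) // 2 :].mean()])
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = (w / np.sqrt(2 * np.pi * var)) * np.exp(
            -0.5 * (x[:, None] - mu) ** 2 / var
        )
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# two well-separated clusters, mimicking two local image features
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(8.0, 1.0, 500)])
w, mu, var = em_gmm_1d(data)
```

The maximum likelihood estimates `(w, mu, var)` are exactly the per-component summaries that the paper compares across partial images via the modified Bayes factor.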

  2. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing

    PubMed Central

    Leong, Siow Hoo

    2017-01-01

This paper considers three crucial issues in processing a scaled-down image: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to preserve image details effectively and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634

  3. A Mixture Rasch Model-Based Computerized Adaptive Test for Latent Class Identification

    ERIC Educational Resources Information Center

    Jiao, Hong; Macready, George; Liu, Junhui; Cho, Youngmi

    2012-01-01

    This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was…

  4. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

The Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions is chosen adaptively to learn and update the background model. A morphological reconstruction method is adopted to eliminate shadows. Experiments prove that the proposed method not only has good robustness and detection performance but also good adaptability; even in special cases where the grayscale changes greatly, the proposed method performs well.
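A heavily simplified, hypothetical sketch of the per-pixel background-modeling idea (a single running Gaussian per pixel rather than the paper's improved multi-component mixture; all parameter values are illustrative):

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a per-pixel running-Gaussian background model.

    Pixels within k standard deviations of the background mean are treated
    as background and used to update the model; the rest are foreground.
    """
    dist = np.abs(frame - mean)
    foreground = dist > k * np.sqrt(var)
    background = ~foreground
    # update only background pixels, with learning rate alpha
    new_mean = (1 - alpha) * mean + alpha * frame
    mean = np.where(background, new_mean, mean)
    new_var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    var = np.where(background, np.maximum(new_var, 1e-6), var)
    return mean, var, foreground

# learn a static background, then present a frame with a bright moving target
rng = np.random.default_rng(0)
mean = np.full((20, 20), 50.0)
var = np.full((20, 20), 25.0)
for _ in range(10):
    frame = 50.0 + rng.normal(0.0, 2.0, (20, 20))
    mean, var, fg = update_background(frame, mean, var)
frame = 50.0 + rng.normal(0.0, 2.0, (20, 20))
frame[5:10, 5:10] = 200.0  # the "moving target"
mean, var, fg = update_background(frame, mean, var)
```

The full method additionally varies the number of Gaussians per pixel and cleans the mask with morphological reconstruction, neither of which is shown here.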

  5. A Mathematical Model of the Olfactory Bulb for the Selective Adaptation Mechanism in the Rodent Olfactory System.

    PubMed

    Soh, Zu; Nishikawa, Shinya; Kurita, Yuichi; Takiguchi, Noboru; Tsuji, Toshio

    2016-01-01

    To predict the odor quality of an odorant mixture, the interaction between odorants must be taken into account. Previously, an experiment in which mice discriminated between odorant mixtures identified a selective adaptation mechanism in the olfactory system. This paper proposes an olfactory model for odorant mixtures that can account for selective adaptation in terms of neural activity. The proposed model uses the spatial activity pattern of the mitral layer obtained from model simulations to predict the perceptual similarity between odors. Measured glomerular activity patterns are used as input to the model. The neural interaction between mitral cells and granular cells is then simulated, and a dissimilarity index between odors is defined using the activity patterns of the mitral layer. An odor set composed of three odorants is used to test the ability of the model. Simulations are performed based on the odor discrimination experiment on mice. As a result, we observe that part of the neural activity in the glomerular layer is enhanced in the mitral layer, whereas another part is suppressed. We find that the dissimilarity index strongly correlates with the odor discrimination rate of mice: r = 0.88 (p = 0.019). We conclude that our model has the ability to predict the perceptual similarity of odorant mixtures. In addition, the model also accounts for selective adaptation via the odor discrimination rate, and the enhancement and inhibition in the mitral layer may be related to this selective adaptation.

  6. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Sardet, Laure; Patilea, Valentin

When pricing a specific insurance premium, the actuary needs to evaluate the claims-cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull, and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well adapted to capture the skewness, the long tails, and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
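The CDF-transform-then-smooth pipeline can be sketched as follows, assuming a known Gaussian mixture in place of a fitted claims mixture and a Chen-style beta kernel; the paper's bandwidth rule and the final back-transformation are omitted:

```python
import numpy as np
from math import erf, lgamma

_erf = np.vectorize(erf)

def norm_mix_cdf(x, w, mu, sigma):
    """CDF of a Gaussian mixture (stand-in for the fitted claims mixture)."""
    z = (x[:, None] - mu) / (sigma * np.sqrt(2.0))
    return (w * 0.5 * (1.0 + _erf(z))).sum(axis=1)

def beta_pdf(x, a, b):
    logc = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(logc + (a - 1.0) * np.log(x) + (b - 1.0) * np.log(1.0 - x))

def beta_kernel_density(u, grid, bw):
    """Chen-style beta-kernel estimate of a density on the unit interval."""
    return np.array(
        [beta_pdf(u, g / bw + 1.0, (1.0 - g) / bw + 1.0).mean() for g in grid]
    )

# claims-like sample from a two-component mixture, CDF-transformed to (0, 1)
rng = np.random.default_rng(0)
w, mu, sigma = np.array([0.6, 0.4]), np.array([0.0, 5.0]), np.array([1.0, 2.0])
k = rng.choice(2, size=4000, p=w)
sample = rng.normal(mu[k], sigma[k])
u = np.clip(norm_mix_cdf(sample, w, mu, sigma), 1e-9, 1 - 1e-9)
grid = np.linspace(0.1, 0.9, 17)
dens = beta_kernel_density(u, grid, bw=0.05)
```

When the parsimonious mixture already fits well, the transformed data are nearly uniform and the beta-kernel estimate stays close to 1; departures from uniformity are exactly what the smoothing step "fine-tunes" before back-transforming.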

  7. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.

  8. A Gaussian mixture model based adaptive classifier for fNIRS brain-computer interfaces and its testing via simulation

    NASA Astrophysics Data System (ADS)

    Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe

    2017-08-01

    Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus  <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
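A toy stand-in for the unsupervised tracking idea: assign each unlabeled point to the nearest class mean, then nudge that mean toward the point. This is a deliberate simplification, not the paper's variational Bayesian GMMAC (whose updates carry full prior-to-posterior inference over all mixture parameters); the learning rate and drift scenario are illustrative assumptions.

```python
import numpy as np

def adapt_step(x, means, lr=0.05):
    """Assign x to the nearest class mean, then move that mean toward x.

    A crude stand-in for unsupervised adaptation of mixture means.
    """
    k = int(np.argmin(((means - x) ** 2).sum(axis=1)))
    means[k] += lr * (x - means[k])
    return k

# two classes whose true activation centres drift over the session
rng = np.random.default_rng(0)
true_a, true_b = np.array([0.0, 0.0]), np.array([6.0, 6.0])
means = np.stack([true_a, true_b]).astype(float)
for t in range(400):
    drift = np.array([2.0, 0.0]) * (t / 400.0)  # slow shift of both regions
    centre = true_a + drift if t % 2 == 0 else true_b + drift
    x = centre + rng.normal(0.0, 0.3, size=2)
    adapt_step(x, means)
```

Even this crude tracker follows the drifting activation centres without ground-truth labels, which is the property the simulations in the paper test under shifts, expansions, and contractions of the activation region.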

  9. Large-eddy simulation of turbulent cavitating flow in a micro channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Hickel, Stefan; Schmidt, Steffen J.

    2014-08-15

Large-eddy simulations (LES) of cavitating flow of a Diesel-fuel-like fluid in a generic throttle geometry are presented. Two-phase regions are modeled by a parameter-free thermodynamic equilibrium mixture model, and compressibility of the liquid and the liquid-vapor mixture is taken into account. The Adaptive Local Deconvolution Method (ALDM), adapted for cavitating flows, is employed for discretizing the convective terms of the Navier-Stokes equations for the homogeneous mixture. ALDM is a finite-volume-based implicit LES approach that merges physically motivated turbulence modeling and numerical discretization. Validation of the numerical method is performed for a cavitating turbulent mixing layer. Comparisons with experimental data of the throttle flow at two different operating conditions are presented. The LES with the employed cavitation modeling predicts relevant flow and cavitation features accurately within the uncertainty range of the experiment. The turbulence structure of the flow is further analyzed with an emphasis on the interaction between cavitation and coherent motion, and on the statistically averaged flow evolution.

  10. Adapting cultural mixture modeling for continuous measures of knowledge and memory fluency.

    PubMed

    Tan, Yin-Yin Sarah; Mueller, Shane T

    2016-09-01

    Previous research (e.g., cultural consensus theory (Romney, Weller, & Batchelder, American Anthropologist, 88, 313-338, 1986); cultural mixture modeling (Mueller & Veinott, 2008)) has used overt response patterns (i.e., responses to questionnaires and surveys) to identify whether a group shares a single coherent attitude or belief set. Yet many domains in social science have focused on implicit attitudes that are not apparent in overt responses but still may be detected via response time patterns. We propose a method for modeling response times as a mixture of Gaussians, adapting the strong-consensus model of cultural mixture modeling to model this implicit measure of knowledge strength. We report the results of two behavioral experiments and one simulation experiment that establish the usefulness of the approach, as well as some of the boundary conditions under which distinct groups of shared agreement might be recovered, even when the group identity is not known. The results reveal that the ability to recover and identify shared-belief groups depends on (1) the level of noise in the measurement, (2) the differential signals for strong versus weak attitudes, and (3) the similarity between group attitudes. Consequently, the method shows promise for identifying latent groups among a population whose overt attitudes do not differ, but whose implicit or covert attitudes or knowledge may differ.

  11. Human Language Technology: Opportunities and Challenges

    DTIC Science & Technology

    2005-01-01

because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker... ...to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using... maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with...

  12. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
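The two-stage flavor of the method, minus the polynomial chaos surrogate, can be sketched on a 1-D bimodal toy posterior: explore with a broad proposal, fit one Gaussian per mode from the weighted sample, then importance-sample from the adapted mixture. The mode-splitting rule here (split at zero) is a hard-coded simplification; the paper's algorithm chooses the number of modes automatically.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalised bimodal 'posterior' used as a stand-in."""
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

# Stage 1: a single broad Gaussian proposal N(0, 5^2) to explore the posterior
x0 = rng.normal(0.0, 5.0, 20000)
logw0 = log_target(x0) - (-0.5 * (x0 / 5.0) ** 2 - np.log(5.0))
w0 = np.exp(logw0 - logw0.max())
w0 /= w0.sum()

# Adapt: place one Gaussian component on each weighted mode
comp = []
for mask in (x0 < 0, x0 >= 0):
    ww = w0[mask] / w0[mask].sum()
    m = float(np.sum(ww * x0[mask]))
    s = float(np.sqrt(np.sum(ww * (x0[mask] - m) ** 2)))
    comp.append((m, s))

# Stage 2: self-normalised importance sampling from the adapted GM proposal
mus = np.array([c[0] for c in comp])
sds = np.array([c[1] for c in comp])
k = rng.integers(0, 2, 20000)
x1 = rng.normal(mus[k], sds[k])
logq = np.logaddexp(  # mixture log-density up to a constant
    -0.5 * ((x1 - mus[0]) / sds[0]) ** 2 - np.log(sds[0]),
    -0.5 * ((x1 - mus[1]) / sds[1]) ** 2 - np.log(sds[1]),
)
logw1 = log_target(x1) - logq
w1 = np.exp(logw1 - logw1.max())
w1 /= w1.sum()
post_mean = float(np.sum(w1 * x1))
```

The adapted components land on the two posterior modes near ±3, and the weighted sample recovers the (symmetric) posterior mean; in the paper each target evaluation would additionally be replaced by a cheap PC surrogate.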

  13. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang, E-mail: guanglin@purdue.edu

    2015-08-01

Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  14. A Mixture Modeling Framework for Differential Analysis of High-Throughput Data

    PubMed Central

    Taslim, Cenny; Lin, Shili

    2014-01-01

The inventions of microarray and next-generation sequencing technologies have revolutionized research in genomics; these platforms have led to massive amounts of data on gene expression, methylation, and protein-DNA interactions. A common theme among a number of biological problems using high-throughput technologies is differential analysis. Despite the common theme, different data types have their own unique features, creating a “moving target” scenario. As such, methods specifically designed for one data type may not lead to satisfactory results when applied to another data type. To meet this challenge so that not only currently existing data types but also data from future problems, platforms, or experiments can be analyzed, we propose a mixture modeling framework that is flexible enough to automatically adapt to any moving target. More specifically, the approach considers several classes of mixture models and essentially provides a model-based procedure whose model is adaptive to the particular data being analyzed. We demonstrate the utility of the methodology by applying it to three types of real data: gene expression, methylation, and ChIP-seq. We also carried out simulations to gauge the performance and showed that the approach can be more efficient than any individual model without inflating the type I error. PMID:25057284

  15. Youngs-Type Material Strength Model in the Besnard-Harlow-Rauenzahn Turbulence Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denissen, Nicholas Allen; Plohr, Bradley J.

    Youngs [AWE Report Number 96/96, 1992] has augmented a two-phase turbulence model to account for material strength. Here we adapt the model of Youngs to the turbulence model for the mixture developed by Besnard, Harlow, and Rauenzahn [LANL Report LA-10911, 1987].

  16. Design guidelines for adapting scientific research articles: An example from an introductory level, interdisciplinary program on soft matter

    NASA Astrophysics Data System (ADS)

    Langbeheim, Elon; Safran, Samuel A.; Yerushalmi, Edit

    2013-01-01

We present design guidelines for using Adapted Primary Literature (APL) to introduce current interdisciplinary topics to introductory physics students. APL is a text genre that allows students to comprehend a scientific article while maintaining the core features of communication among scientists, thus representing authentic scientific discourse. We describe the adaptation of a research paper by Nobel Laureate Paul Flory on phase equilibrium in polymer-solvent mixtures that was presented to high school students in a project-based unit on soft matter. The adaptation followed two design strategies: a) making explicit the interplay between theory and experiment; b) re-structuring the text to map the theory onto the students' prior knowledge. Specifically, we map the theory of polymer-solvent systems onto a model for binary mixtures of small molecules of equal size that was already studied in class.

  17. Recognition of the Component Odors in Mixtures

    PubMed Central

    Fletcher, Dane B; Hettinger, Thomas P

    2017-01-01

Natural olfactory stimuli are volatile-chemical mixtures in which relative perceptual saliencies determine which odor-components are identified. Odor identification also depends on rapid selective adaptation, as shown for 4 odor stimuli in an earlier experimental simulation of natural conditions. Adapt-test pairs of mixtures of water-soluble, distinct odor stimuli with chemical features in common were studied. Identification decreased for adapted components but increased for unadapted mixture-suppressed components, showing compound identities were retained, not degraded to individual molecular features. Four additional odor stimuli, 1 with 2 perceptible odor notes, and an added “water-adapted” control tested whether this finding would generalize to other 4-compound sets. Selective adaptation of mixtures of the compounds (odors): 3 mM benzaldehyde (cherry), 5 mM maltol (caramel), 1 mM guaiacol (smoke), and 4 mM methyl anthranilate (grape-smoke) again reciprocally unmasked odors of mixture-suppressed components in 2-, 3-, and 4-component mixtures with 2 exceptions. The cherry note of “benzaldehyde” (itself) and the shared note of “methyl anthranilate and guaiacol” (together) were more readily identified. The pervasive mixture-component dominance and dynamic perceptual salience may be mediated through peripheral adaptation and central mutual inhibition of neural responses. Originating in individual olfactory receptor variants, it limits odor identification and provides analytic properties for momentary recognition of a few remaining mixture-components. PMID:28641388

  18. A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.

    PubMed

    Carreau, Julie; Bengio, Yoshua

    2009-07-01

In many cases, we observe some variables X that contain predictive information about a scalar variable of interest Y, with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
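A simplified, unconditional version of the hybrid density can be written down directly: a Gaussian body whose upper tail mass is replaced by a generalized Pareto tail, with the GPD scale chosen so the density is continuous at the junction. This stitching is an assumption for illustration; Carreau and Bengio link the parameters differently and make them functions of X via a neural network.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def hybrid_pareto_pdf(x, mu=0.0, sigma=1.0, u=1.0, xi=0.3):
    """Gaussian body with a generalized-Pareto upper tail above junction u.

    The GPD scale beta is chosen so the density is continuous at u and the
    tail keeps the same probability mass as the Gaussian tail it replaces.
    (Simplified stitching for illustration only.)
    """
    x = np.asarray(x, dtype=float)
    phi_u = exp(-0.5 * ((u - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))
    p_tail = 0.5 * (1.0 - erf((u - mu) / (sigma * sqrt(2.0))))
    beta = p_tail / phi_u  # continuity: p_tail * (1 / beta) == phi(u)
    body = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))
    # clip the GPD argument so the unused branch stays finite below u
    z = np.maximum(1.0 + xi * (x - u) / beta, 1e-12)
    tail = phi_u * z ** (-1.0 / xi - 1.0)
    return np.where(x <= u, body, tail)
```

The tail-index parameter `xi > 0` gives a polynomial tail, which is what lets a mixture of such components capture asymmetric, fat-tailed conditional densities.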

  19. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    NASA Astrophysics Data System (ADS)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  20. The Professional Context as a Predictor for Response Distortion in the Adaption-Innovation Inventory--An Investigation Using Mixture Distribution Item Response Theory Models

    ERIC Educational Resources Information Center

    Fischer, Sebastian; Freund, Philipp Alexander

    2014-01-01

    The Adaption-Innovation Inventory (AII), originally developed by Kirton (1976), is a widely used self-report instrument for measuring problem-solving styles at work. The present study investigates how scores on the AII are affected by different response styles. Data are collected from a combined sample (N = 738) of students, employees, and…

  1. Portable Brain-Computer Interface for the Intensive Care Unit Patient Communication Using Subject-Dependent SSVEP Identification.

    PubMed

    Dehzangi, Omid; Farooq, Muhamed

    2018-01-01

A major predicament for Intensive Care Unit (ICU) patients is inconsistent and ineffective means of communication. Patients rated most communication sessions as difficult and unsuccessful. This, in turn, can cause distress, unrecognized pain, anxiety, and fear. As such, we designed a portable BCI system for ICU communications (BCI4ICU) optimized to operate effectively in an ICU environment. The system utilizes a wearable EEG cap coupled with an Android app on a mobile device that serves as the visual stimulus and data-processing module. Furthermore, to overcome the challenges that BCI systems face today in real-world scenarios, we propose a novel subject-specific Gaussian Mixture Model- (GMM-) based training and adaptation algorithm. First, we incorporate subject-specific information in the training phase of the SSVEP identification model using GMM-based training and adaptation. We evaluate subject-specific models against other subjects. Subsequently, from the GMM discriminative scores, we generate the transformed vectors, which are passed to our predictive model. Finally, the adapted mixture mean scores of the subject-specific GMMs are utilized to generate the high-dimensional supervectors. Our experimental results demonstrate that the proposed system achieved 98.7% average identification accuracy, which is promising for providing effective and consistent communication for patients in intensive care.
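The supervector construction at the end of the pipeline can be sketched with classical relevance-MAP mean adaptation of a universal background GMM, a stand-in assumption for the paper's subject-specific training (all names and parameter values below are illustrative):

```python
import numpy as np

def map_adapt_supervector(ubm_means, ubm_vars, ubm_w, data, r=16.0):
    """MAP-adapt the means of a diagonal-covariance GMM to subject data and
    stack them into a supervector, GMM-UBM style.

    r is the relevance factor; weights and variances stay fixed.
    """
    # responsibilities of each frame under each component
    diff = data[:, None, :] - ubm_means                       # (N, K, D)
    logp = (
        np.log(ubm_w)
        - 0.5 * np.sum(np.log(2 * np.pi * ubm_vars), axis=1)
        - 0.5 * np.sum(diff ** 2 / ubm_vars, axis=2)
    )
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    nk = resp.sum(axis=0)                                     # (K,)
    ex = (resp[:, :, None] * data[:, None, :]).sum(axis=0)
    ex /= np.maximum(nk, 1e-9)[:, None]
    alpha = (nk / (nk + r))[:, None]                          # adaptation gain
    adapted = alpha * ex + (1 - alpha) * ubm_means
    return adapted.reshape(-1)                                # length K * D

# a 2-component, 2-D background model adapted to one subject's data
rng = np.random.default_rng(0)
ubm_means = np.array([[0.0, 0.0], [5.0, 5.0]])
ubm_vars = np.ones((2, 2))
ubm_w = np.array([0.5, 0.5])
subject = np.vstack([
    rng.normal([1.0, 0.0], 0.5, size=(200, 2)),
    rng.normal([5.0, 6.0], 0.5, size=(200, 2)),
])
sv = map_adapt_supervector(ubm_means, ubm_vars, ubm_w, subject)
```

Each component mean is pulled toward the subject's data in proportion to how much data it explains; the stacked means form the high-dimensional supervector fed to the downstream predictive model.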

  2. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, S.; Tebby, C.

Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  3. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    PubMed

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.
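The two reference mixture concepts named in the abstract have standard closed forms. A sketch, assuming simple Hill dose-response curves with unit slope (the paper's BK/TD dynamics and metabolic interactions are not reproduced here):

```python
def hill_effect(c, ec50, n=1.0):
    """Fractional effect of a single compound under a Hill model."""
    return c ** n / (ec50 ** n + c ** n)

def concentration_addition(concs, ec50s):
    """Concentration addition: doses scaled by potency (toxic units) sum
    to an equivalent dose of a single compound (unit Hill slope)."""
    tu = sum(c / e for c, e in zip(concs, ec50s))  # total toxic units
    return tu / (1.0 + tu)

def independent_action(concs, ec50s):
    """Independent action: single-compound effects combine like
    probabilities of independent events."""
    p_unaffected = 1.0
    for c, e in zip(concs, ec50s):
        p_unaffected *= 1.0 - hill_effect(c, e)
    return 1.0 - p_unaffected
```

Both predictions reduce to the single-compound effect when only one component is present; they diverge for true mixtures, which is what allows the data to discriminate between the two concepts.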

  4. Lattice model for water-solute mixtures.

    PubMed

    Furlan, A P; Almarza, N G; Barbosa, M C

    2016-10-14

A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low-density liquid, and high-density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to attain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as functions of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.

  5. Modelling carotid artery adaptations to dynamic alterations in pressure and flow over the cardiac cycle

    PubMed Central

    Cardamone, L.; Valentín, A.; Eberth, J. F.; Humphrey, J. D.

    2010-01-01

    Motivated by recent clinical and laboratory findings of important effects of pulsatile pressure and flow on arterial adaptations, we employ and extend an established constrained mixture framework of growth (change in mass) and remodelling (change in structure) to include such dynamical effects. New descriptors of cell and tissue behavior (constitutive relations) are postulated and refined based on new experimental data from a transverse aortic arch banding model in the mouse that increases pulsatile pressure and flow in one carotid artery. In particular, it is shown that there was a need to refine constitutive relations for the active stress generated by smooth muscle, to include both stress- and stress rate-mediated control of the turnover of cells and matrix and to account for a cyclic stress-mediated loss of elastic fibre integrity and decrease in collagen stiffness in order to capture the reported evolution, over 8 weeks, of luminal radius, wall thickness, axial force and in vivo axial stretch of the hypertensive mouse carotid artery. We submit, therefore, that complex aspects of adaptation by elastic arteries can be predicted by constrained mixture models wherein individual constituents are produced or removed at individual rates and to individual extents depending on changes in both stress and stress rate from normal values. PMID:20484365

  6. Statistical-thermodynamic model for light scattering from eye lens protein mixtures

    NASA Astrophysics Data System (ADS)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.

    2017-02-01

    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations, where light scattering intensity depends on molecular weights and virial coefficients, to realistically high-concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism: prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. The model indicates that increased γ-γ attraction can raise γ-α mixture light scattering far more than it does for solutions of γ-crystallin alone, and can produce marked turbidity tens of degrees Celsius above liquid-liquid separation.

  7. Numerical simulation by the molecular collision theory of two-phase mixture explosion characteristics in closed or vented vessels

    NASA Astrophysics Data System (ADS)

    Pascaud, J. M.; Brossard, J.; Lombard, J. M.

    1999-09-01

    The aim of this work is to present a simple model (the molecular collision theory), easily usable in an industrial environment, to predict the evolution of the thermodynamic characteristics of the combustion of two-phase mixtures in a closed or a vented vessel. Basic characteristics of the model have been developed for ignition and combustion of propulsive powders and adapted with appropriate parameters linked to simplified kinetics. A simple representation of the combustion phenomena based on energy transfers and the action of specific molecules is presented. The model is generalized to various mixtures such as dust suspensions, liquid fuel drops and hybrid mixtures composed of dust and a gaseous supply such as methane or propane in the general case of vented explosions. The pressure venting due to the vent breaking is calculated from thermodynamic characteristics given by the model, taking into account the mass rate of discharge of the different products deduced from the standard orifice equations. The application conditions determine the fuel ratio of the mixtures used, the nature of the chemical kinetics and the calculation of a universal set of parameters. The model allows study of the influence of the fuel concentration and the supply of gaseous additives, the influence of the vessel volume (2400 ℓ ≤ V_b ≤ 250 000 ℓ) and the influence of the venting pressure or the vent area. The first results have been compared with various experimental works available for two-phase mixtures and indicate quite correct predictions.

  8. Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.

    PubMed

    Li, Haoxiang; Hua, Gang

    2018-04-01

    Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Face in the Wild (LFW) dataset, the YouTube video face database, and the CMU MultiPIE dataset.
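The descriptor-selection step this record describes can be sketched minimally: for each spherical Gaussian component, pick the location-augmented patch descriptor with maximum likelihood. The toy descriptors and component parameters below are illustrative assumptions, not a trained PEP model.

```python
import math

def log_spherical_gauss(x, mean, var):
    """Log-density of a spherical Gaussian (single shared variance var)."""
    d = len(x)
    sq = sum((xi - mi) ** 2 for xi, mi in zip(x, mean))
    return -0.5 * (d * math.log(2 * math.pi * var) + sq / var)

def pep_representation(descriptors, components):
    """For each (mean, var) component, return the index of the
    maximum-likelihood descriptor, mimicking the PEP selection step."""
    return [max(range(len(descriptors)),
                key=lambda i: log_spherical_gauss(descriptors[i], mean, var))
            for mean, var in components]

# Toy location-augmented descriptors and two toy "parts":
descs = [[0.0, 0.0, 0.1, 0.1], [1.0, 1.0, 0.9, 0.9]]
comps = [([0.0, 0.0, 0.0, 0.0], 1.0), ([1.0, 1.0, 1.0, 1.0], 1.0)]
print(pep_representation(descs, comps))  # [0, 1]
```

Concatenating the selected descriptors, one per component, yields the fixed-length face representation.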

  9. Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures

    DTIC Science & Technology

    2012-03-27

    …pulse-detonation engines (PDE), stage separation, supersonic cavity oscillations, hypersonic aerodynamics, detonation-induced structural… (Grant FA9550-…) CCL Report TR-2012-03-03, Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures.

  10. Adaptive Annealed Importance Sampling for Multimodal Posterior Exploration and Model Selection with Application to Extrasolar Planet Detection

    NASA Astrophysics Data System (ADS)

    Liu, Bin

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated from the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods simply preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
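The effective-sample-size diagnostic this record uses to score how well the mixture proposal resembles the posterior has a standard closed form: for unnormalised importance weights w_i, ESS = (Σw)² / Σw². A minimal sketch:

```python
def effective_sample_size(weights):
    """ESS = (sum w)^2 / sum w^2 for unnormalised importance weights."""
    s = sum(weights)
    s2 = sum(w * w for w in weights)
    return s * s / s2

# Equal weights: the proposal matches the target, ESS equals the draw count.
print(effective_sample_size([1.0] * 100))        # 100.0
# One dominant weight: the proposal misses the target, ESS collapses.
print(effective_sample_size([100.0, 1.0, 1.0]))  # ≈ 1.04
```

Maximising this quantity over the proposal's parameters and component count is exactly the tailoring objective described above.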

  11. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.

  12. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  13. Adaptation of aeronautical engines to high altitude flying

    NASA Technical Reports Server (NTRS)

    Kutzbach, K

    1923-01-01

    Issues and techniques relative to the adaptation of aircraft engines to high altitude flight are discussed. Covered here are the limits of engine output, modifications and characteristics of high altitude engines, the influence of air density on the proportions of fuel mixtures, methods of varying the proportions of fuel mixtures, the automatic prevention of fuel waste, and the design and application of air pressure regulators to high altitude flying.

  14. ADAPTIVE ANNEALED IMPORTANCE SAMPLING FOR MULTIMODAL POSTERIOR EXPLORATION AND MODEL SELECTION WITH APPLICATION TO EXTRASOLAR PLANET DETECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Bin, E-mail: bins@ieee.org

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated from the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods simply preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.

  15. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. While the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
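The ZIP mixture this record describes has a simple probability mass function: a point mass at zero with weight π blended with a standard Poisson. A minimal sketch (the hurdle variant would instead pair the point mass with a zero-truncated Poisson, renormalising the positive counts):

```python
import math

def zip_pmf(k, pi, lam):
    """P(K = k) under a zero-inflated Poisson: point mass at zero
    (weight pi) mixed with Poisson(lam) (weight 1 - pi)."""
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * pois if k == 0 else (1 - pi) * pois

# Zero inflation: P(0) exceeds the plain Poisson zero probability.
print(zip_pmf(0, 0.3, 2.0))  # ≈ 0.3947, vs exp(-2) ≈ 0.1353 for plain Poisson
```

In the functional models above, both π and λ are linked to the functional and cross-sectional predictors rather than held fixed.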

  16. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950

  17. Primordial soup was edible: abiotically produced Miller-Urey mixture supports bacterial growth.

    PubMed

    Xie, Xueshu; Backman, Daniel; Lebedev, Albert T; Artaev, Viatcheslav B; Jiang, Liying; Ilag, Leopold L; Zubarev, Roman A

    2015-09-28

    Sixty years after the seminal Miller-Urey experiment that abiotically produced a mixture of racemized amino acids, we provide definitive proof that this primordial soup, when properly cooked, was edible for primitive organisms. Direct admixture of even small amounts of the Miller-Urey mixture strongly inhibits the growth of E. coli bacteria due to the toxicity of abundant components, such as cyanides. However, these toxic compounds are both volatile and extremely reactive, while bacteria are highly capable of adaptation. Consequently, after bacterial adaptation to a mixture of the two most abundant abiotic amino acids, glycine and racemized alanine, dried and reconstituted MU soup was found to support bacterial growth and even accelerate it compared to a simple mixture of the two amino acids. Therefore, primordial Miller-Urey soup was perfectly suitable as a growth medium for early life forms.

  18. Mixture-based gatekeeping procedures in adaptive clinical trials.

    PubMed

    Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji

    2018-01-01

    Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
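The combination function approach mentioned in this record merges stage-wise p-values into a single test statistic. A standard choice (not prescribed by the paper; the weights and p-values below are illustrative) is the inverse-normal combination function:

```python
import math
from statistics import NormalDist

def combine_pvalues(p1, p2, w1=0.5):
    """Inverse-normal combination of two stage-wise p-values with
    prespecified weight w1 on stage 1 (w2 = 1 - w1)."""
    nd = NormalDist()
    z = math.sqrt(w1) * nd.inv_cdf(1 - p1) + math.sqrt(1 - w1) * nd.inv_cdf(1 - p2)
    return 1 - nd.cdf(z)  # combined p-value

# Two moderately significant stages combine into stronger evidence:
print(combine_pvalues(0.05, 0.05))  # ≈ 0.010
```

Because the weights are fixed in advance, the combined test keeps its level even when interim data drive adaptations such as a sample-size increase, which is the flexibility the record highlights.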

  19. Quantitative analysis of terahertz spectra for illicit drugs using adaptive-range micro-genetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Ma, Yong; Lu, Zheng; Peng, Bei; Chen, Qin

    2011-08-01

    In the field of anti-illicit drug applications, many suspicious mixture samples may consist of various drug components—for example, a mixture of methamphetamine, heroin, and amoxicillin—which makes spectral identification very difficult. A terahertz spectroscopic quantitative analysis method using an adaptive-range micro-genetic algorithm with a variable internal population (ARVIPɛμGA) has been proposed. Five mixture cases are discussed using ARVIPɛμGA-driven quantitative terahertz spectroscopic analysis in this paper. The simulation results agree with previous experimental results, suggesting that the proposed technique has potential applications for terahertz spectral identification of drug mixture components. The results also agree with those obtained using other experimental and numerical techniques.

  20. Unconventional signal detection techniques with Gaussian probability mixtures adaptation in non-AWGN channels: full resolution receiver

    NASA Astrophysics Data System (ADS)

    Chabdarov, Shamil M.; Nadeev, Adel F.; Chickrin, Dmitry E.; Faizullin, Rashid R.

    2011-04-01

    In this paper we discuss an unconventional detection technique also known as the "full resolution receiver". This receiver uses Gaussian probability mixtures for interference structure adaptation. The full resolution receiver is an alternative to conventional matched filter receivers in the case of non-Gaussian interference. For the DS-CDMA forward channel in the presence of complex interference, a significant performance increase was shown.

  1. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics Published by John Wiley & Sons Ltd.
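The TN/LN mixture density at the heart of this record can be sketched directly: a normal truncated to [0, ∞) and a log-normal, both natural for non-negative wind speed, blended with a weight. The parameter values below are illustrative only; in the EMOS model they would be linked to the ensemble forecasts and fitted by scoring-rule optimization.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def tn_pdf(x, mu, sigma):
    """Density of a normal(mu, sigma) truncated to x >= 0."""
    if x < 0:
        return 0.0
    return norm_pdf((x - mu) / sigma) / (sigma * (1 - norm_cdf(-mu / sigma)))

def ln_pdf(x, m, s):
    """Log-normal density with log-scale mean m and log-scale sd s."""
    if x <= 0:
        return 0.0
    return norm_pdf((math.log(x) - m) / s) / (x * s)

def mixture_pdf(x, w, mu, sigma, m, s):
    """Weighted TN/LN mixture predictive density for wind speed x."""
    return w * tn_pdf(x, mu, sigma) + (1 - w) * ln_pdf(x, m, s)

print(mixture_pdf(5.0, 0.6, 4.0, 2.0, 1.4, 0.5))
```

Both components integrate to one on the non-negative axis, so any weight in [0, 1] yields a valid predictive density.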

  2. Origin and Function of Tuning Diversity in Macaque Visual Cortex

    PubMed Central

    Goris, Robbe L.T.; Simoncelli, Eero P.; Movshon, J. Anthony

    2016-01-01

    Neurons in visual cortex vary in their orientation selectivity. We measured responses of V1 and V2 cells to orientation mixtures and fit them with a model whose stimulus selectivity arises from the combined effects of filtering, suppression, and response nonlinearity. The model explains the diversity of orientation selectivity with neuron-to-neuron variability in all three mechanisms, of which variability in the orientation bandwidth of linear filtering is the most important. The model also accounts for the cells’ diversity of spatial frequency selectivity. Tuning diversity is matched to the needs of visual encoding. The orientation content found in natural scenes is diverse, and neurons with different selectivities are adapted to different stimulus configurations. Single orientations are better encoded by highly selective neurons, while orientation mixtures are better encoded by less selective neurons. A diverse population of neurons therefore provides better overall discrimination capabilities for natural images than any homogeneous population. PMID:26549331

  3. A flamelet model for transcritical LOx/GCH4 flames

    NASA Astrophysics Data System (ADS)

    Müller, Hagen; Pfitzner, Michael

    2017-03-01

    This work presents a numerical framework to efficiently simulate methane combustion at supercritical pressures. A LES flamelet approach is adapted to account for real-gas thermodynamics effects which are a prominent feature of flames at near-critical injection conditions. The thermodynamics model is based on the Peng-Robinson equation of state (PR-EoS) in conjunction with a novel volume-translation method to correct deficiencies in the transcritical regime. The resulting formulation is more accurate than standard cubic EoSs without deteriorating their good computational performance. To consistently account for pressure and strain fluctuations in the flamelet model, an additional enthalpy equation is solved along with the transport equations for mixture fraction and mixture fraction variance. The method is validated against available experimental data for a laboratory scale LOx/GCH4 flame at conditions that resemble those in liquid-propellant rocket engines. The LES result is in good agreement with the measured OH* radiation.

  4. Strategies to intervene on causal systems are adaptively selected.

    PubMed

    Coenen, Anna; Rehder, Bob; Gureckis, Todd M

    2015-06-01

    How do people choose interventions to learn about causal systems? Here, we considered two possibilities. First, we test an information sampling model, information gain, which values interventions that can discriminate between a learner's hypotheses (i.e. possible causal structures). We compare this discriminatory model to a positive testing strategy that instead aims to confirm individual hypotheses. Experiment 1 shows that individual behavior is described best by a mixture of these two alternatives. In Experiment 2 we find that people are able to adaptively alter their behavior and adopt the discriminatory model more often after experiencing that the confirmatory strategy leads to a subjective performance decrement. In Experiment 3, time pressure leads to the opposite effect of inducing a change towards the simpler positive testing strategy. These findings suggest that there is no single strategy that describes how intervention decisions are made. Instead, people select strategies in an adaptive fashion that trades off their expected performance and cognitive effort. Copyright © 2015 Elsevier Inc. All rights reserved.
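The information-gain strategy this record tests scores an intervention by the expected reduction in entropy over the hypothesis set. A minimal sketch, where `likelihoods[h][o]` is P(outcome o | hypothesis h) under a given intervention and the values are illustrative:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction over hypotheses from one intervention."""
    gain = entropy(prior)
    for o in range(len(likelihoods[0])):
        p_o = sum(prior[h] * likelihoods[h][o] for h in range(len(prior)))
        if p_o == 0:
            continue
        posterior = [prior[h] * likelihoods[h][o] / p_o for h in range(len(prior))]
        gain -= p_o * entropy(posterior)
    return gain

# A perfectly discriminating intervention recovers the full prior entropy:
print(expected_info_gain([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]]))  # 1.0
```

A positive-testing strategy would instead pick the intervention most likely to produce a confirming outcome under the currently favoured hypothesis, regardless of how well it discriminates between hypotheses.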

  5. Resonant Laser Ignition Study of HAN-HEHN Propellant Mixture (Preprint)

    DTIC Science & Technology

    2008-07-17

    …results to larger samples can be predicted by the adaptation of the modeling formalism previously reported for solid propellant laser ignition (15-17)… The inclusion of a chemical heat release term in the form of an Arrhenius expression within a heat conduction model can also give valuable… the face of the pressure transducer. In this case the BaF2 cell entrance window failed quietly at 30 µs following the initial shock sequence.

  6. Sucrose in Aqueous Solution Revisited: 2. Adaptively Biased Molecular Dynamics Simulations and Computational Analysis of NMR Relaxation

    PubMed Central

    Xia, Junchao; Case, David A.

    2012-01-01

    We report 100 ns molecular dynamics simulations, at various temperatures, of sucrose in water (with concentrations of sucrose ranging from 0.02 to 4 M), and in a 7:3 water-DMSO mixture. Convergence of the resulting conformational ensembles was checked using adaptive-biased simulations along the glycosidic φ and ψ torsion angles. NMR relaxation parameters, including longitudinal (R1) and transverse (R2) relaxation rates, nuclear Overhauser enhancements (NOE), and generalized order parameter (S2) were computed from the resulting time-correlation functions. The amplitude and time scales of molecular motions change with temperature and concentration in ways that track closely with experimental results, and are consistent with a model in which sucrose conformational fluctuations are limited (with 80–90% of the conformations having φ – ψ values within 20° of an average conformation), but with some important differences in conformation between pure water and DMSO-water mixtures. PMID:22058066

  7. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
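The automatic choice of cluster count this record attributes to the DPMM comes from the Chinese-restaurant-process prior: an incoming trajectory joins an existing cluster with probability proportional to its size, or opens a new cluster with probability proportional to the concentration α. A minimal sketch of just the prior (the full DPMM multiplies these terms by per-cluster likelihoods):

```python
def crp_probs(cluster_sizes, alpha):
    """Prior assignment probabilities for a new item under a CRP:
    one entry per existing cluster, plus a final entry for a new cluster."""
    denom = sum(cluster_sizes) + alpha
    return [c / denom for c in cluster_sizes] + [alpha / denom]

# Existing clusters are favoured in proportion to size; the last entry
# is the probability of spawning a brand-new cluster online.
print(crp_probs([5, 3], 1.0))
```

This is what lets new trajectory clusters be identified online and added without retraining on previous data.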

  8. Adaptive Gaussian mixture models for pre-screening in GPR data

    NASA Astrophysics Data System (ADS)

    Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.

    2011-06-01

    Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antenna arrays, advanced feature extraction and classification can only be performed on a small subset of the data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms to process all of the data generated by the antenna array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain a high probability of detection, but may permit a false alarm rate higher than the overall system requirement. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well suited to pre-screening in GPR data due to its computational efficiency, its non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of this adaptive GMM-based anomaly detection approach from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
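    As a sketch of the underlying idea, here is a 1-D online k-means approximation to an adaptive GMM in the spirit of the Stauffer–Grimson background model. The component count, learning rate, and 2.5-sigma match threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class OnlineGMM:
    """K 1-D Gaussians updated by an online k-means rule: match a sample
    to the nearest component within 2.5 sigma and nudge that component's
    mean and variance; a sample that matches no component is anomalous."""
    def __init__(self, k=3, alpha=0.05, init_var=10.0):
        self.w = np.full(k, 1.0 / k)
        self.mu = np.linspace(0.0, 1.0, k)
        self.var = np.full(k, init_var)
        self.alpha = alpha

    def update(self, x):
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        k = int(np.argmin(d))
        matched = bool(d[k] < 2.5)
        self.w = (1 - self.alpha) * self.w
        if matched:
            self.w[k] += self.alpha
            self.mu[k] += self.alpha * (x - self.mu[k])
            self.var[k] += self.alpha * ((x - self.mu[k]) ** 2 - self.var[k])
        else:
            j = int(np.argmin(self.w))          # replace weakest component
            self.mu[j], self.var[j], self.w[j] = x, 10.0, self.alpha
        self.w /= self.w.sum()
        return matched

model = OnlineGMM()
for x in np.random.default_rng(0).normal(0.0, 1.0, 500):
    model.update(x)
background_like = model.update(0.2)    # typical sample: matches a component
anomaly = not model.update(50.0)       # far outlier: matches nothing
```

Each update touches only one component, which is what makes the scheme cheap enough for a pre-screener.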

  9. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClanahan, Richard; De Leon, Phillip L.

    The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.

  10. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE PAGES

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
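    Runnalls' technique repeatedly applies a moment-preserving merge to the pair of components whose merge minimizes a KL-based dissimilarity bound. A minimal 1-D sketch (scalar Gaussians and the target mixture size are illustrative assumptions):

```python
import numpy as np

def merge(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two weighted 1-D Gaussians."""
    w = w1 + w2
    a, b = w1 / w, w2 / w
    m = a * m1 + b * m2
    v = a * v1 + b * v2 + a * b * (m1 - m2) ** 2
    return w, m, v

def runnalls_cost(w1, m1, v1, w2, m2, v2):
    """Upper bound on the KL discrepancy incurred by merging the pair."""
    w, _, v = merge(w1, m1, v1, w2, m2, v2)
    return 0.5 * (w * np.log(v) - w1 * np.log(v1) - w2 * np.log(v2))

def reduce_mixture(comps, target):
    """Greedily merge the cheapest pair until `target` components remain."""
    comps = list(comps)
    while len(comps) > target:
        pairs = [(runnalls_cost(*comps[i], *comps[j]), i, j)
                 for i in range(len(comps)) for j in range(i + 1, len(comps))]
        _, i, j = min(pairs)
        merged = merge(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps

# four components (weight, mean, variance) forming two well-separated groups
mix = [(0.25, 0.0, 1.0), (0.25, 0.2, 1.0), (0.25, 5.0, 1.0), (0.25, 5.3, 1.0)]
reduced = reduce_mixture(mix, 2)
# moment-matching merges preserve total weight and the overall mixture mean
```

In the tree-structured UBM, each such reduced layer acts as a coarse hash over its children, so only a few branches need full posterior evaluation.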

  11. Physiological responses to salt stress of salt-adapted and directly salt (NaCl and NaCl+Na2SO4 mixture)-stressed cyanobacterium Anabaena fertilissima.

    PubMed

    Swapnil, Prashant; Rai, Ashwani K

    2018-05-01

    Soil salinity in nature is generally of mixed type; however, most studies of salt toxicity are performed with NaCl, and little is known about the sulfur type of salinity (Na2SO4). The present study discerns the physiological mechanisms responsible for salt tolerance in salt-adapted Anabaena fertilissima and the responses of directly stressed parent cells to NaCl and a NaCl+Na2SO4 mixture. NaCl at 500 mM was lethal to the cyanobacterium, whereas salt-adapted cells grew luxuriantly. Salinity impaired gross photosynthesis, electron transport activities, and respiration in parent cells, but not in the salt-adapted cells, except for a marginal increase in PSI activity. Despite the higher Na+ concentration in the salt mixture, equimolar NaCl appeared more inhibitory to growth. Sucrose and trehalose content and antioxidant activities were maximal in 250 mM NaCl-treated cells, followed by the salt mixture, and were almost identical in salt-adapted (exposed to 500 mM NaCl) and control cells, except for a marginal increase in ascorbate peroxidase activity and an additional fourth superoxide dismutase isoform. A catalase isoform of 63 kDa was induced only in salt-stressed cells. Salinity increased the uptake of intracellular Na+ and Ca2+ and the leakage of K+ in parent cells, while the cation level in salt-adapted cells was comparable to the control. Though there was a differential increase in intracellular Ca2+ under the different salt treatments, the ratio of Ca2+/Na+ remained the same. It is inferred that stepwise increments in the salt concentration enabled the cyanobacterium to undergo a priming effect and acquire a robust and efficient defense system involving the least energy.

  12. Teaching Thermodynamics of Ideal Solutions: An Entropy-Based Approach to Help Students Better Understand and Appreciate the Subtleties of Solution Models

    ERIC Educational Resources Information Center

    Tomba, J. Pablo

    2015-01-01

    The thermodynamic formalism of ideal solutions is developed in most of the textbooks postulating a form for the chemical potential of a generic component, which is adapted from the thermodynamics of ideal gas mixtures. From this basis, the rest of useful thermodynamic properties can be derived straightforwardly without further hypothesis. Although…

  13. 3D PIC-MCC simulations of discharge inception around a sharp anode in nitrogen/oxygen mixtures

    NASA Astrophysics Data System (ADS)

    Teunissen, Jannis; Ebert, Ute

    2016-08-01

    We investigate how photoionization, electron avalanches and space charge affect the inception of nanosecond pulsed discharges. Simulations are performed with a 3D PIC-MCC (particle-in-cell, Monte Carlo collision) model with adaptive mesh refinement for the field solver. This model, whose source code is available online, is described in the first part of the paper. Then we present simulation results in a needle-to-plane geometry, using different nitrogen/oxygen mixtures at atmospheric pressure. In these mixtures non-local photoionization is important for the discharge growth. The typical length scale for this process depends on the oxygen concentration. With 0.2% oxygen the discharges grow quite irregularly, due to the limited supply of free electrons around them. With 2% or more oxygen the development is much smoother. An almost spherical ionized region can form around the electrode tip, which increases in size with the electrode voltage. Eventually this inception cloud destabilizes into streamer channels. In our simulations, discharge velocities are almost independent of the oxygen concentration. We discuss the physical mechanisms behind these phenomena and compare our simulations with experimental observations.

  14. A New LES/PDF Method for Computational Modeling of Turbulent Reacting Flows

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Muradoglu, Metin; Pope, Stephen B.

    2013-11-01

    A new LES/PDF method is developed for computational modeling of turbulent reacting flows. The open-source package OpenFOAM is adopted as the LES solver and combined with a particle-based Monte Carlo method to solve the LES/PDF model equations. The dynamic Smagorinsky model is employed to account for the subgrid-scale motions. The LES solver is first validated for the Sandia Flame D using a steady flamelet method in which the chemical compositions, density, and temperature fields are parameterized by the mean mixture fraction and its variance. In this approach, the modeled transport equations for the mean mixture fraction and the mean square of the mixture fraction are solved, and the variance is then computed from its definition. The results are found to be in good agreement with the experimental data. The LES solver is then combined with the particle-based Monte Carlo algorithm to form a complete solver for the LES/PDF model equations. The in situ adaptive tabulation (ISAT) algorithm is incorporated into the LES/PDF method for efficient implementation of detailed chemical kinetics. The LES/PDF method is also applied to the Sandia Flame D using the GRI-Mech 3.0 chemical mechanism, and the results are compared with the experimental data and earlier PDF simulations. Supported by the Scientific and Technical Research Council of Turkey (TUBITAK), Grant No. 111M067.
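    Computing the variance "from its definition", as the abstract puts it, is just the second-moment identity applied pointwise to the two transported fields. A one-line sketch with illustrative field values:

```python
import numpy as np

z_mean = np.array([0.2, 0.5, 0.8])       # transported mean mixture fraction <Z>
zsq_mean = np.array([0.05, 0.27, 0.65])  # transported mean square <Z^2>
z_var = zsq_mean - z_mean ** 2           # Var(Z) = <Z^2> - <Z>^2
```

Solving for the mean of Z² rather than the variance itself avoids a separate variance transport equation with its extra modeled terms.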

  15. Getting More Ecologically Relevant Information from Laboratory Tests: Recovery of Lemna minor After Exposure to Herbicides and Their Mixtures.

    PubMed

    Knežević, Varja; Tunić, Tanja; Gajić, Pero; Marjan, Patricija; Savić, Danko; Tenji, Dina; Teodorović, Ivana

    2016-11-01

    Recovery after exposure to the herbicides atrazine, isoproturon, and trifluralin, and to their binary and ternary mixtures, was studied under laboratory conditions using a slightly adapted standard protocol for Lemna minor. The objectives of the present study were (1) to compare empirical to predicted toxicity of selected herbicide mixtures; (2) to assess L. minor recovery potential after exposure to selected individual herbicides and their mixtures; and (3) to suggest an appropriate recovery potential assessment approach and endpoint in a modified laboratory growth inhibition test. The deviation of empirical from predicted toxicity was highest in binary mixtures of dissimilarly acting herbicides. The concentration addition model slightly underestimated mixture effects, indicating potential synergistic interactions between photosynthetic inhibitors (atrazine and isoproturon) and a cell mitosis inhibitor (trifluralin). Recovery after exposure to the binary mixture of atrazine and isoproturon was fast and concentration-independent: no significant differences between relative growth rates (RGRs) in any of the mixtures (IC10Mix, IC25Mix, and IC50Mix) versus the control level were recorded in the last interval of the recovery phase. The recovery of plants exposed to binary and ternary mixtures of dissimilarly acting herbicides was strictly concentration-dependent. Only plants exposed to IC10Mix, regardless of the herbicides, recovered RGRs close to the control level in the last interval of the recovery phase. The inhibition of the RGRs in the last interval of the recovery phase compared with the control level is a proposed endpoint that could inform on the reversibility of effects and indicate possible mixture effects on plant population recovery potential.

  16. Using a genetic mixture model to study phenotypic traits: Differential fecundity among Yukon river Chinook Salmon

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Evenson, D.F.; McLain, T.H.; Flannery, B.G.

    2011-01-01

    Fecundity is a vital population characteristic that is directly linked to the productivity of fish populations. Historic data from Yukon River (Alaska) Chinook salmon Oncorhynchus tshawytscha suggest that length‐adjusted fecundity differs among populations within the drainage and either is temporally variable or has declined. Yukon River Chinook salmon have been harvested in large‐mesh gill‐net fisheries for decades, and a decline in fecundity was considered a potential evolutionary response to size‐selective exploitation. The implications for fishery conservation and management led us to further investigate the fecundity of Yukon River Chinook salmon populations. Matched observations of fecundity, length, and genotype were collected from a sample of adult females captured from the multipopulation spawning migration near the mouth of the Yukon River in 2008. These data were modeled by using a new mixture model, which was developed by extending the conditional maximum likelihood mixture model that is commonly used to estimate the composition of multipopulation mixtures based on genetic data. The new model facilitates maximum likelihood estimation of stock‐specific fecundity parameters without first using individual assignment to a putative population of origin, thus avoiding potential biases caused by assignment error. The hypothesis that fecundity of Chinook salmon has declined was not supported; this result implies that fecundity exhibits high interannual variability. However, length‐adjusted fecundity estimates decreased as migratory distance increased, and fecundity was more strongly dependent on fish size for populations spawning in the middle and upper portions of the drainage. These findings provide insights into potential constraints on reproductive investment imposed by long migrations and warrant consideration in fisheries management and conservation. 
The new mixture model extends the utility of genetic markers to new applications and can be easily adapted to study any observable trait or condition that may vary among populations.
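    The soft-assignment idea, estimating stock-specific trait parameters without first assigning each fish to a population, can be sketched with a small EM on synthetic data. Posterior memberships combine mixing proportions, genotype likelihoods, and a normal trait model; the two-stock setup, fixed genotype likelihoods, and common trait variance below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def em_trait_mixture(geno_lik, trait, n_iter=200):
    """geno_lik: (n, K) genotype likelihoods P(g_i | stock k); trait: (n,).
    EM for mixing proportions pi and stock-specific trait means mu,
    using soft memberships instead of hard individual assignment."""
    pi = np.full(geno_lik.shape[1], 1.0 / geno_lik.shape[1])
    # genotype-informed starting point keeps component labels aligned
    mu = (geno_lik * trait[:, None]).sum(axis=0) / geno_lik.sum(axis=0)
    sigma = trait.std()   # common trait spread, held fixed for simplicity
    for _ in range(n_iter):
        # E-step: membership weights combine genotype AND trait likelihoods
        trait_lik = np.exp(-0.5 * ((trait[:, None] - mu) / sigma) ** 2)
        r = pi * geno_lik * trait_lik
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates; no fish is ever hard-assigned to a stock
        pi = r.mean(axis=0)
        mu = (r * trait[:, None]).sum(axis=0) / r.sum(axis=0)
    return pi, mu

rng = np.random.default_rng(1)
z = rng.integers(0, 2, 400)                                   # true, unobserved stock
trait = rng.normal(np.where(z == 0, 8000.0, 6000.0), 500.0)   # e.g. fecundity
g = np.where(z[:, None] == [0, 1], 0.8, 0.2)                  # noisy genotype evidence
pi_hat, mu_hat = em_trait_mixture(g, trait)
```

Because individuals are never hard-assigned, estimation errors in genotype-based classification cannot bias the stock-specific trait means, which is the point the abstract makes.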

  17. Diversifying mechanisms in the on-farm evolution of crop mixtures.

    PubMed

    Thomas, Mathieu; Thépot, Stéphanie; Galic, Nathalie; Jouanne-Pin, Sophie; Remoué, Carine; Goldringer, Isabelle

    2015-06-01

    While modern agriculture relies on genetic homogeneity, diversifying practices associated with seed exchange and seed recycling may allow crops to adapt to their environment. This socio-genetic model is an original experimental evolution design referred to as on-farm dynamic management of crop diversity. Investigating such a model can help in understanding how evolutionary mechanisms shape crop diversity under diverse agro-environments. We studied a French farmer-led initiative where a mixture of four wheat landraces called 'Mélange de Touselles' (MDT) was created and circulated within a farmers' network. The 15 sampled MDT subpopulations were simultaneously subjected to diverse environments (e.g. altitude, rainfall) and diverse farmers' practices (e.g. field size, sowing and harvesting date). Twenty-one space-time samples of 80 individuals each were genotyped using 17 microsatellite markers and characterized for their heading date in a 'common-garden' experiment. Gene polymorphism was studied using four markers located in earliness genes. An original network-based approach was developed to depict the particular and complex genetic structure of the landraces composing the mixture. Rapid differentiation among populations within the mixture was detected, larger at the phenotypic and gene levels than at the neutral genetic level, indicating potential divergent selection. We identified two interacting selection processes, variation in the mixture component frequencies and evolution of within-variety diversity, that shaped the standing variability available within the mixture. These results confirmed that diversifying practices and environments maintain genetic diversity and allow for crop evolution in the context of global change. Including concrete measurements of farmers' practices is critical to disentangle crop evolution processes. © 2015 John Wiley & Sons Ltd.

  18. The evaluation of distributed damage in concrete based on sinusoidal modeling of the ultrasonic response.

    PubMed

    Sepehrinezhad, Alireza; Toufigh, Vahab

    2018-05-25

    Ultrasonic wave attenuation is an effective descriptor of distributed damage in inhomogeneous materials. Methods developed to measure wave attenuation have the potential to provide an in situ evaluation of existing concrete structures insofar as they are accurate and time-efficient. In this study, material classification and distributed damage evaluation were investigated based on the sinusoidal modeling of the response from through-transmission ultrasonic tests on polymer concrete specimens. The response signal was modeled as a single damping sinusoid or a sum of damping sinusoids. Due to the inhomogeneous nature of concrete materials, model parameters may vary from one specimen to another. Therefore, these parameters are not known in advance and should be estimated while the response signal is being received; the modeling procedure used in this study accordingly involves a data-adaptive algorithm to estimate the parameters online. The damping factor was estimated as a descriptor of the distributed damage. The results were compared in two different cases: (1) constant excitation frequency with varying concrete mixtures and (2) constant mixture with varying excitation frequencies. The specimens were also loaded up to their ultimate compressive strength to investigate the effect of distributed damage on the response signal. The estimation results indicated that the damping was highly sensitive to changes in material inhomogeneity, even in comparable mixtures. In addition to the proposed method, three methods were employed to compare the results based on their accuracy in the classification of materials and the evaluation of the distributed damage. It is shown that the estimated damping factor is not only sensitive to damage in the final stages of loading but is also applicable to evaluating micro-damage in the earlier stages, providing a reliable descriptor of damage. In addition, the modified amplitude ratio method is introduced as an improvement on the classical method. The proposed methods proved to be effective descriptors of distributed damage, and the presented models were in good agreement with the experimental data. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    NASA Astrophysics Data System (ADS)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications because of the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR), and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, and the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet-coefficient and concentration matrices, and validation was performed with both cross-validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations.
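    The resolving power of the CWT on overlapped signals comes from convolving with a zero-mean wavelet at a matching scale, which responds to band curvature while cancelling smooth baselines. A minimal Ricker-wavelet sketch on synthetic overlapped bands (not the paper's spectra or its chosen wavelet family):

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet, `points` samples wide, scale a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(signal, widths):
    """One row of coefficients per scale, via convolution with the wavelet."""
    return np.array([np.convolve(signal,
                                 ricker(min(10 * int(a), len(signal)), a),
                                 mode="same")
                     for a in widths])

x = np.arange(400, dtype=float)
# two overlapped synthetic "bands" riding on a sloping baseline
sig = (np.exp(-0.5 * ((x - 180) / 12) ** 2)
       + 0.6 * np.exp(-0.5 * ((x - 215) / 12) ** 2)
       + 0.002 * x)
coeffs = cwt(sig, widths=[12])
# the zero-mean wavelet suppresses the linear baseline, and the
# coefficient magnitude peaks near the band centres
```

Scanning `widths` over a range of scales gives the full scalogram; the regression methods in the abstract then work on these coefficients instead of the raw overlapped spectra.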

  20. Accelerated Gaussian mixture model and its application on image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi

    2013-03-01

    The Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, the traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed using the following approaches: a lookup table for the Gaussian probability matrix is established to avoid repetitive probability calculations on all pixels; a blocking detection method is employed on each block of pixels to further decrease the complexity; and the structure of the lookup table is changed from 3D to 1D, with a simpler data type, to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the Otsu method, which decides the threshold value automatically. Our algorithm has been tested by segmenting flames and faces from a set of real pictures, and the experimental results prove its efficiency in segmentation precision and computational cost.
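    For 8-bit images, the lookup-table idea replaces per-pixel Gaussian evaluations with a 256-entry table built once from the fitted components. A minimal sketch combined with an Otsu threshold (the hand-set component parameters and synthetic image are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def gmm_pdf(g, weights, means, stds):
    """Mixture density evaluated at gray level(s) g."""
    g = np.asarray(g, dtype=float)[..., None]
    comp = (weights * np.exp(-0.5 * ((g - means) / stds) ** 2)
            / (stds * np.sqrt(2 * np.pi)))
    return comp.sum(axis=-1)

def otsu_threshold(hist):
    """Otsu's threshold from a 256-bin histogram
    (maximize the between-class variance)."""
    p = hist / hist.sum()
    omega = np.cumsum(p)
    mu = np.cumsum(p * np.arange(256))
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

# build the 256-entry lookup table ONCE instead of evaluating pdfs per pixel
w, m, s = np.array([0.6, 0.4]), np.array([60.0, 180.0]), np.array([20.0, 25.0])
table = gmm_pdf(np.arange(256), w, m, s)

rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 20, 5000),
                              rng.normal(180, 25, 5000)]),
              0, 255).astype(np.uint8).reshape(100, 100)
probs = table[img]                   # O(1) array lookup per pixel
thresh = otsu_threshold(np.bincount(img.ravel(), minlength=256))
mask = img > thresh
```

The table is exact at integer gray levels, so the lookup matches direct evaluation while doing the transcendental math only 256 times.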

  1. Electro-olfactogram and multiunit olfactory receptor responses to binary and trinary mixtures of amino acids in the channel catfish, Ictalurus punctatus

    PubMed Central

    1989-01-01

    In vivo electrophysiological recordings from populations of olfactory receptor neurons in the channel catfish, Ictalurus punctatus, clearly showed that responses to binary and trinary mixtures of amino acids were predictable with knowledge obtained from previous cross-adaptation studies of the relative independence of the respective binding sites of the component stimuli. All component stimuli, from which equal aliquots were drawn to form the mixtures, were adjusted in concentration to provide for approximately equal response magnitudes. The magnitude of the response to a mixture whose component amino acids showed significant cross-reactivity was equivalent to the response to any single component used to form that mixture. A mixture whose component amino acids showed minimal cross-adaptation produced a significantly larger relative response than a mixture whose components exhibited considerable cross-reactivity. This larger response approached the sum of the responses to the individual component amino acids tested at the resulting concentrations in the mixture, even though olfactory receptor dose-response functions for amino acids in this species are characterized by extreme sensory compression (i.e., successive concentration increments produce progressively smaller physiological responses). Thus, the present study indicates that the response to sensory stimulation of olfactory receptor sites is more enhanced by the activation of different receptor site types than by stimulus interaction at a single site type. PMID:2703818

  2. Origin and Function of Tuning Diversity in Macaque Visual Cortex.

    PubMed

    Goris, Robbe L T; Simoncelli, Eero P; Movshon, J Anthony

    2015-11-18

    Neurons in visual cortex vary in their orientation selectivity. We measured responses of V1 and V2 cells to orientation mixtures and fit them with a model whose stimulus selectivity arises from the combined effects of filtering, suppression, and response nonlinearity. The model explains the diversity of orientation selectivity with neuron-to-neuron variability in all three mechanisms, of which variability in the orientation bandwidth of linear filtering is the most important. The model also accounts for the cells' diversity of spatial frequency selectivity. Tuning diversity is matched to the needs of visual encoding. The orientation content found in natural scenes is diverse, and neurons with different selectivities are adapted to different stimulus configurations. Single orientations are better encoded by highly selective neurons, while orientation mixtures are better encoded by less selective neurons. A diverse population of neurons therefore provides better overall discrimination capabilities for natural images than any homogeneous population. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Climate change adaptation frameworks: an evaluation of plans for coastal Suffolk, UK

    NASA Astrophysics Data System (ADS)

    Armstrong, J.; Wilby, R.; Nicholls, R. J.

    2015-11-01

    This paper asserts that three principal frameworks for climate change adaptation can be recognised in the literature: scenario-led (SL), vulnerability-led (VL) and decision-centric (DC) frameworks. A criterion is developed to differentiate these frameworks in recent adaptation projects. The criterion features six key hallmarks as follows: (1) use of climate model information; (2) analysis of metrics/units; (3) socio-economic knowledge; (4) stakeholder engagement; (5) adaptation of implementation mechanisms; (6) tier of adaptation implementation. The paper then tests the validity of this approach using adaptation projects on the Suffolk coast, UK. Fourteen adaptation plans were identified in an online survey. They were analysed in relation to the hallmarks outlined above and assigned to an adaptation framework. The results show that while some adaptation plans are primarily SL, VL or DC, the majority are hybrid, showing a mixture of DC/VL and DC/SL characteristics. Interestingly, the SL/VL combination is not observed, perhaps because the DC framework is intermediate and attempts to overcome weaknesses of both SL and VL approaches. The majority (57 %) of adaptation projects generated a risk assessment or advice notes. Further development of this type of framework analysis would allow better guidance on approaches for organisations when implementing climate change adaptation initiatives, and other similar proactive long-term planning.

  4. Climate change adaptation frameworks: an evaluation of plans for coastal Suffolk, UK

    NASA Astrophysics Data System (ADS)

    Armstrong, J.; Wilby, R.; Nicholls, R. J.

    2015-06-01

    This paper asserts that three principal frameworks for climate change adaptation can be recognised in the literature: Scenario-Led (SL), Vulnerability-Led (VL) and Decision-Centric (DC) frameworks. A criterion is developed to differentiate these frameworks in recent adaptation projects. The criterion features six key hallmarks as follows: (1) use of climate model information; (2) analysis metrics/units; (3) socio-economic knowledge; (4) stakeholder engagement; (5) adaptation implementation mechanisms; (6) tier of adaptation implementation. The paper then tests the validity of this approach using adaptation projects on the Suffolk coast, UK. Fourteen adaptation plans were identified in an online survey. They were analysed in relation to the hallmarks outlined above and assigned to an adaptation framework. The results show that while some adaptation plans are primarily SL, VL or DC, the majority are hybrid showing a mixture of DC/VL and DC/SL characteristics. Interestingly, the SL/VL combination is not observed, perhaps because the DC framework is intermediate and attempts to overcome weaknesses of both SL and VL approaches. The majority (57 %) of adaptation projects generated a risk assessment or advice notes. Further development of this type of framework analysis would allow better guidance on approaches for organisations when implementing climate change adaptation initiatives, and other similar proactive long-term planning.

  5. Adaptive skin detection based on online training

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Tang, Liang; Zhou, Jie; Rong, Gang

    2007-11-01

    Skin is a widely used cue for pornographic image classification. Most conventional methods are off-line training schemes: they use a fixed boundary to segment skin regions in images and are effective only under restricted conditions, e.g. good lighting and a single ethnicity. This paper presents an adaptive online training scheme for skin detection which can handle these tough cases. In our approach, skin detection is treated as a classification problem on a Gaussian mixture model. For each image, a human face is detected and the face color is used to establish a primary estimate of the skin color distribution. An adaptive online training algorithm is then used to find the real boundary between skin color and background color in the current image. Experimental results on 450 images showed that the proposed method is more robust in general situations than conventional ones.
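    The face-seeded primary estimate of the skin color distribution can be sketched with a single regularized Gaussian and a Mahalanobis-distance test. This is a simplification of the paper's GMM with online refinement, and the synthetic image, face box, and threshold are assumptions:

```python
import numpy as np

def skin_mask(img, face_box, thresh=2.5):
    """Seed a single-Gaussian skin-colour model from the detected face
    region, then label pixels within `thresh` Mahalanobis units as skin."""
    r0, r1, c0, c1 = face_box
    face = img[r0:r1, c0:c1].reshape(-1, img.shape[-1]).astype(float)
    mu = face.mean(axis=0)
    cov = np.cov(face.T) + 1e-6 * np.eye(img.shape[-1])   # regularized
    inv = np.linalg.inv(cov)
    diff = img.astype(float) - mu
    d2 = np.einsum("...i,ij,...j->...", diff, inv, diff)  # squared Mahalanobis
    return d2 < thresh ** 2

img = np.zeros((20, 20, 3))
img[5:15, 5:15] = [180.0, 120.0, 100.0]      # synthetic "skin" patch
mask = skin_mask(img, face_box=(7, 12, 7, 12))
# mask is True on the patch, False on the dark background
```

Because the model is re-seeded from the face in every image, the decision boundary adapts per image to lighting and skin tone rather than relying on one fixed global boundary.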

  6. Kernel Regression Estimation of Fiber Orientation Mixtures in Diffusion MRI

    PubMed Central

    Cabeen, Ryan P.; Bastin, Mark E.; Laidlaw, David H.

    2016-01-01

    We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524

  7. Developmental exposure to a complex PAH mixture causes persistent behavioral effects in naive Fundulus heteroclitus (killifish) but not in a population of PAH-adapted killifish.

    PubMed

    Brown, D R; Bailey, J M; Oliveri, A N; Levin, E D; Di Giulio, R T

    2016-01-01

    Acute exposure to some individual polycyclic aromatic hydrocarbons (PAHs) and complex PAH mixtures is known to cause cardiac malformations and edema in the developing fish embryo. However, the heart is not the only organ impacted by developmental PAH exposure. The developing brain is also affected, resulting in lasting behavioral dysfunction. While acute exposures to some PAHs are teratogenically lethal in fish, little is known about the later-life consequences of early-life, lower-dose subteratogenic PAH exposures. We sought to determine and characterize the long-term behavioral consequences of subteratogenic developmental PAH mixture exposure in both naive killifish and PAH-adapted killifish using sediment pore water derived from the Atlantic Wood Industries Superfund Site. Killifish offspring were embryonically treated with two low-level dilutions of Elizabeth River sediment extract (ERSE) (TPAH 5.04 μg/L and 50.4 μg/L) at 24 h post-fertilization. Following exposure, killifish were raised to larval, juvenile, and adult life stages and subjected to a series of behavioral tests: a locomotor activity test (4 days post-hatch), a sensorimotor response tap/habituation test (3 months post-hatch), and a novel tank diving and exploration test (3 months post-hatch). Killifish were also monitored for survival at 1, 2, and 5 months over the 5-month rearing period. Developmental PAH exposure caused short-term as well as persistent behavioral impairments in naive killifish. In contrast, the PAH-adapted killifish did not show behavioral alterations following PAH exposure. PAH mixture exposure caused increased mortality in reference killifish over time; the PAH-adapted killifish, while demonstrating long-term rearing mortality, had no significant changes in mortality associated with ERSE exposure.
This study demonstrated that early embryonic exposure to PAH-contaminated sediment pore water caused long-term locomotor and behavioral alterations in killifish, and that locomotor alterations could be observed in early larval stages. Additionally, our study highlights the resistance to behavioral alterations caused by low-level PAH mixture exposure in the adapted killifish population. Furthermore, this is the first longitudinal behavioral study to use killifish, an environmentally important estuarine teleost fish, and this testing framework can be used for future contaminant assessment. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures.

    PubMed

    Hegazy, Maha A; Lotfy, Hayam M; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-05

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study on the efficiency of continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS) was conducted. These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). In contrast, the univariate CWT failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra of the samples were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied for determination of the studied drugs in pharmaceutical formulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. The Use of Growth Mixture Modeling for Studying Resilience to Major Life Stressors in Adulthood and Old Age: Lessons for Class Size and Identification and Model Selection.

    PubMed

    Infurna, Frank J; Grimm, Kevin J

    2017-12-15

    Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings drastically differ when constraining the variances to be homogeneous versus heterogeneous across trajectories, with overextraction being more common when constraining the variances to be homogeneous across trajectories. In instances when the data are non-normally distributed, assuming normality increases the number of latent classes extracted. Our findings showcase that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS and avenues for future research. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
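    The variance-constraint issue the authors raise can be demonstrated with ordinary Gaussian mixtures: the sketch below, using scikit-learn, compares BIC-selected class counts under a shared ("tied") covariance versus class-specific ("full") covariances. The simulated data and class counts are illustrative assumptions, not the HILDA/HRS analysis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two simulated latent classes with very different within-class spreads
X = np.vstack([rng.normal(0.0, 0.5, size=(200, 1)),
               rng.normal(4.0, 3.0, size=(200, 1))])

best_k = {}
for cov_type in ("tied", "full"):   # "tied": one covariance shared by all classes
    bics = [GaussianMixture(n_components=k, covariance_type=cov_type,
                            n_init=5, random_state=0).fit(X).bic(X)
            for k in range(1, 5)]
    best_k[cov_type] = int(np.argmin(bics)) + 1   # BIC-selected number of classes
print(best_k)
```

    When class variances truly differ, the homogeneous-variance model can only fit the wide class by adding extra components, which is one mechanism behind the overextraction the abstract describes.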

  10. A Bayesian framework based on a Gaussian mixture model and radial-basis-function Fisher discriminant analysis (BayGmmKda V1.1) for spatial prediction of floods

    NASA Astrophysics Data System (ADS)

    Tien Bui, Dieu; Hoang, Nhat-Duc

    2017-09-01

    In this study, a probabilistic model named BayGmmKda is proposed for flood susceptibility assessment in a study area in central Vietnam. The new model is a Bayesian framework constructed from a combination of a Gaussian mixture model (GMM), radial-basis-function Fisher discriminant analysis (RBFDA), and a geographic information system (GIS) database. In the Bayesian framework, the GMM is used for modeling the data distribution of flood-influencing factors in the GIS database, whereas RBFDA is utilized to construct a latent variable that aims at enhancing the model performance. As a result, the posterior probabilistic output of the BayGmmKda model is used as a flood susceptibility index. Experimental results showed that the proposed hybrid framework is superior to other benchmark models, including the adaptive neuro-fuzzy inference system and the support vector machine. To facilitate the model implementation, a software program for BayGmmKda has been developed in MATLAB. The BayGmmKda program can accurately establish a flood susceptibility map for the study region. Accordingly, local authorities can overlay this susceptibility map onto various land-use maps for the purpose of land-use planning or management.
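    A minimal sketch of the Bayesian idea behind such a framework, assuming hypothetical two-dimensional flood-influencing factors: fit one GMM per class and convert the class-conditional densities into a posterior susceptibility index via Bayes' rule. This omits the RBFDA latent variable and the GIS database entirely.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical 2-D flood-influencing factors (e.g. normalized slope, rainfall)
X_flood = rng.normal([0.2, 0.8], 0.1, size=(150, 2))   # past flood locations
X_dry   = rng.normal([0.7, 0.3], 0.1, size=(150, 2))   # non-flood locations

gm_flood = GaussianMixture(n_components=2, random_state=0).fit(X_flood)
gm_dry   = GaussianMixture(n_components=2, random_state=0).fit(X_dry)

def susceptibility(x, prior_flood=0.5):
    """Posterior P(flood | x) from class-conditional GMM densities (Bayes' rule)."""
    lf = np.exp(gm_flood.score_samples(x))
    ld = np.exp(gm_dry.score_samples(x))
    return prior_flood * lf / (prior_flood * lf + (1.0 - prior_flood) * ld)

idx = susceptibility(np.array([[0.25, 0.75], [0.70, 0.30]]))
print(idx.round(3))
```

    Evaluating this posterior over a grid of factor values is what produces a susceptibility map in such frameworks.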

  11. WNN 92; Proceedings of the 3rd Workshop on Neural Networks: Academic/Industrial/NASA/Defense, Auburn Univ., AL, Feb. 10-12, 1992 and South Shore Harbour, TX, Nov. 4-6, 1992

    NASA Technical Reports Server (NTRS)

    Padgett, Mary L. (Editor)

    1993-01-01

    The present conference discusses such neural networks (NN) related topics as their current development status, NN architectures, NN learning rules, NN optimization methods, NN temporal models, NN control methods, NN pattern recognition systems and applications, biological and biomedical applications of NNs, VLSI design techniques for NNs, NN systems simulation, fuzzy logic, and genetic algorithms. Attention is given to missileborne integrated NNs, adaptive-mixture NNs, implementable learning rules, an NN simulator for travelling salesman problem solutions, similarity-based forecasting, NN control of hypersonic aircraft takeoff, NN control of the Space Shuttle Arm, an adaptive NN robot manipulator controller, a synthetic approach to digital filtering, NNs for speech analysis, adaptive spline networks, an anticipatory fuzzy logic controller, and encoding operations for fuzzy associative memories.

  12. Mitochondrial phylogenomics of Hemiptera reveals adaptive innovations driving the diversification of true bugs

    PubMed Central

    Li, Hu; Leavengood, John M.; Chapman, Eric G.; Burkhardt, Daniel; Song, Fan; Jiang, Pei; Liu, Jinpeng; Cai, Wanzhi

    2017-01-01

    Hemiptera, the largest non-holometabolous order of insects, represents approximately 7% of metazoan diversity. With extraordinary life histories and highly specialized morphological adaptations, hemipterans have exploited diverse habitats and food sources through approximately 300 Myr of evolution. To elucidate the phylogeny and evolutionary history of Hemiptera, we carried out the most comprehensive mitogenomics analysis on the richest taxon sampling to date covering all the suborders and infraorders, including 34 newly sequenced and 94 published mitogenomes. With optimized branch length and sequence heterogeneity, Bayesian analyses using a site-heterogeneous mixture model resolved the higher-level hemipteran phylogeny as (Sternorrhyncha, (Auchenorrhyncha, (Coleorrhyncha, Heteroptera))). Ancestral character state reconstruction and divergence time estimation suggest that the success of true bugs (Heteroptera) is probably due to angiosperm coevolution, but key adaptive innovations (e.g. prognathous mouthpart, predatory behaviour, and haemelytron) facilitated multiple independent shifts among diverse feeding habits and multiple independent colonizations of aquatic habitats. PMID:28878063

  13. A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.

    PubMed

    Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F

    2016-01-01

    Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimal criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive designs, either frequentist or fully Bayesian. Noninformative normal priors of the log hazard ratio were used, as well as mixtures of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with mixtures of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement from using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed proportion of deaths under the alternative hypothesis compared with the adaptive designs with no stopping rules. Such Bayesian hybrid adaptive survival trials may be promising alternatives to traditional designs, reducing the duration of survival trials as well as addressing the ethical concerns for patients enrolled in the trial.
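    The conjugate normal-prior/normal-likelihood update for the log hazard ratio mentioned above can be sketched as follows. The allocation rule shown is a generic posterior-probability rule assumed for illustration, not the optimal target derived in the paper, and all numbers are invented.

```python
import numpy as np
from scipy.stats import norm

def posterior_log_hr(prior_mean, prior_var, obs_log_hr, obs_var):
    """Normal prior x normal likelihood -> normal posterior for the log hazard ratio."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_log_hr / obs_var)
    return post_mean, post_var

# Skeptical prior centred on "no effect"; interim estimate favours the new arm
m, v = posterior_log_hr(prior_mean=0.0, prior_var=0.5,
                        obs_log_hr=np.log(0.7), obs_var=0.1)

# Hypothetical allocation rule: posterior probability that the new arm is better
p_benefit = norm.cdf(0.0, loc=m, scale=np.sqrt(v))
print(m, v, p_benefit)
```

    The same posterior can drive stopping rules, e.g. halting when the probability of benefit crosses a prespecified boundary.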

  14. An Overview of Literature Topics Related to Current Concepts, Methods, Tools, and Applications for Cumulative Risk Assessment (2007-2016).

    PubMed

    Fox, Mary A; Brewer, L Elizabeth; Martin, Lawrence

    2017-04-07

    Cumulative risk assessments (CRAs) address combined risks from exposures to multiple chemical and nonchemical stressors and may focus on vulnerable communities or populations. Significant contributions have been made to the development of concepts, methods, and applications for CRA over the past decade. Work in both human health and ecological cumulative risk has advanced in two different contexts. The first context is the effects of chemical mixtures that share common modes of action, or that cause common adverse outcomes. In this context two primary models are used for predicting mixture effects, dose addition or response addition. The second context is evaluating the combined effects of chemical and nonchemical (e.g., radiation, biological, nutritional, economic, psychological, habitat alteration, land-use change, global climate change, and natural disasters) stressors. CRA can be adapted to address risk in many contexts, and this adaptability is reflected in the range of disciplinary perspectives in the published literature. This article presents the results of a literature search and discusses a range of selected work with the intention to give a broad overview of relevant topics and provide a starting point for researchers interested in CRA applications.
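    The two mixture-prediction models named above can be stated concretely. A minimal sketch, assuming a shared Hill-type concentration-response curve for dose addition and independently acting components for response addition; the chemicals, EC50s, and doses are invented:

```python
import numpy as np

def dose_addition_response(doses, ec50s, hill=1.0):
    """Dose addition: sum toxic units (dose/EC50), then apply one shared Hill curve."""
    tu = float(np.sum(np.asarray(doses) / np.asarray(ec50s)))
    return tu ** hill / (1.0 + tu ** hill)

def response_addition(responses):
    """Response addition (independent action): 1 - product of non-responses."""
    return 1.0 - float(np.prod(1.0 - np.asarray(responses)))

# Two hypothetical chemicals, each present at half its EC50
da = dose_addition_response(doses=[1.0, 2.0], ec50s=[2.0, 4.0])   # -> 0.5
ra = response_addition([1.0 / 3.0, 1.0 / 3.0])   # each alone elicits a 1/3 response
print(da, ra)
```

    The two models generally give different predictions for the same mixture, which is why the choice between them hinges on whether the components share a mode of action.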

  15. An Overview of Literature Topics Related to Current Concepts, Methods, Tools, and Applications for Cumulative Risk Assessment (2007–2016)

    PubMed Central

    Fox, Mary A.; Brewer, L. Elizabeth; Martin, Lawrence

    2017-01-01

    Cumulative risk assessments (CRAs) address combined risks from exposures to multiple chemical and nonchemical stressors and may focus on vulnerable communities or populations. Significant contributions have been made to the development of concepts, methods, and applications for CRA over the past decade. Work in both human health and ecological cumulative risk has advanced in two different contexts. The first context is the effects of chemical mixtures that share common modes of action, or that cause common adverse outcomes. In this context two primary models are used for predicting mixture effects, dose addition or response addition. The second context is evaluating the combined effects of chemical and nonchemical (e.g., radiation, biological, nutritional, economic, psychological, habitat alteration, land-use change, global climate change, and natural disasters) stressors. CRA can be adapted to address risk in many contexts, and this adaptability is reflected in the range of disciplinary perspectives in the published literature. This article presents the results of a literature search and discusses a range of selected work with the intention to give a broad overview of relevant topics and provide a starting point for researchers interested in CRA applications. PMID:28387705

  16. EuroForMix: An open source software based on a continuous model to evaluate STR DNA profiles from a mixture of contributors with artefacts.

    PubMed

    Bleka, Øyvind; Storvik, Geir; Gill, Peter

    2016-03-01

    We have released a software package named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second is a Bayesian approach, which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model; however, we find that there is a practical computing-time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely open source, continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation) to be reported in the literature. It therefore serves an important purpose as an unrestricted platform to compare the different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature of EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
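    A toy semi-continuous (presence/absence) likelihood-ratio calculation for a single locus, with maximum likelihood estimation of the drop-out probability, conveys the flavor of the first inference approach; EuroForMix's actual model is continuous (peak heights) and far richer. The allele frequencies, drop-in rate, and genotypes here are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

freqs = {"A": 0.2, "B": 0.3, "C": 0.5}   # invented allele frequencies (one locus)

def lik_given_genotype(d, observed, genotype, drop_in=0.001):
    """P(observed allele set | genotype) under a toy drop-out/drop-in model."""
    p = 1.0
    for a in set(genotype):
        p *= d if a not in observed else (1.0 - d)   # each true allele may drop out
    for a in observed:
        if a not in genotype:
            p *= drop_in * freqs[a]                  # unexplained allele -> drop-in
    return p

observed = {"A"}            # allele B of the suspect apparently dropped out
suspect = ("A", "B")

# Hp: suspect is the contributor; maximise the likelihood over drop-out d
res = minimize_scalar(lambda d: -lik_given_genotype(d, observed, suspect),
                      bounds=(1e-6, 1.0 - 1e-6), method="bounded")
d_hat, lik_hp = res.x, -res.fun

# Hd: unknown contributor; average over genotypes at Hardy-Weinberg proportions
alleles = list(freqs)
lik_hd = 0.0
for i, a in enumerate(alleles):
    for b in alleles[i:]:
        g_prob = freqs[a] ** 2 if a == b else 2.0 * freqs[a] * freqs[b]
        lik_hd += g_prob * lik_given_genotype(d_hat, observed, (a, b))

print(d_hat, lik_hp / lik_hd)
```

    A full calculation multiplies such per-locus ratios across all typed loci and, in the continuous setting, replaces the presence/absence likelihood with a peak-height model.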

  17. Folding of a salivary intrinsically disordered protein upon binding to tannins.

    PubMed

    Canon, Francis; Ballivian, Renaud; Chirot, Fabien; Antoine, Rodolphe; Sarni-Manchado, Pascale; Lemoine, Jérôme; Dugourd, Philippe

    2011-05-25

    We used ion mobility spectrometry to explore conformational adaptability of intrinsically disordered proteins bound to their targets in complex mixtures. We investigated the interactions between a human salivary proline-rich protein IB5 and a model of wine and tea tannin: epigallocatechin gallate (EgCG). Collisional cross sections of naked IB5 and IB5 complexed with N = 1-15 tannins were recorded. The data demonstrate that IB5 undergoes an unfolded to folded structural transition upon binding with EgCG.

  18. Real time tracking by LOPF algorithm with mixture model

    NASA Astrophysics Data System (ADS)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new local optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centres. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on negligible ones; in addition, we can partly overcome the conventional degeneracy phenomenon and decrease the computational costs. Moreover, the threshold is a key factor that strongly affects the results, so we adopt an adaptive threshold-selection method to get the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use both the contour cue to select the particles and the color cue to describe the targets as the mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
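    A minimal bootstrap particle filter for a 1-D random-walk state shows the predict/weight/resample loop (including the resampling step that combats the degeneracy phenomenon mentioned above); LOPF's Sobel-based initialization and contour/color mixture model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter(observations, n_particles=500, proc_sd=0.3, obs_sd=1.0):
    """Minimal bootstrap particle filter for a 1-D random-walk state."""
    particles = rng.normal(observations[0], 1.0, n_particles)  # init near first obs
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, proc_sd, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_sd) ** 2)             # weight
        w /= w.sum()
        estimates.append(float(w @ particles))                         # posterior mean
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.array(estimates)

true_path = np.cumsum(rng.normal(0.0, 0.3, 100))
obs = true_path + rng.normal(0.0, 1.0, 100)
est = particle_filter(obs)
print(np.mean((est - true_path) ** 2), np.mean((obs - true_path) ** 2))
```

    Even this generic filter reduces the error of the raw observations; LOPF's contribution is spending the particle budget on edge-derived candidate locations instead of random ones.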

  19. Managing hardwood-softwood mixtures for future forests in eastern North America: assessing suitability to projected climate change

    Treesearch

    John M. Kabrick; Kenneth L. Clark; Anthony W. D' Amato; Daniel C. Dey; Laura S. Kenefic; Christel C. Kern; Benjamin O. Knapp; David A. MacLean; Patricia Raymond; Justin D. Waskiewicz

    2017-01-01

    Despite growing interest in management strategies for climate change adaptation, there are few methods for assessing the ability of stands to endure or adapt to projected future climates. We developed a means for assigning climate "Compatibility" and "Adaptability" scores to stands for assessing the suitability of tree species for projected climate...

  20. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. Which coders are selected to code any given image region is made through a threshold driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
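    The threshold-driven decision of how many transform coefficients to spend on a block can be sketched with a plain DCT coder; this is a simplification of MBC, which switches among a mixture of coders rather than thresholding a single one. The block size, thresholds, and random test image are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, threshold):
    """Transform-code one block: zero out DCT coefficients below the threshold."""
    coeffs = dctn(block, norm="ortho")
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return idctn(coeffs, norm="ortho"), int(np.count_nonzero(coeffs))

rng = np.random.default_rng(3)
image = rng.random((64, 64))        # stand-in for a natural image

rates = []
for threshold in (0.05, 0.2):       # larger threshold -> coarser, cheaper coder
    kept = 0
    for i in range(0, 64, 8):
        for j in range(0, 64, 8):
            _, n = code_block(image[i:i + 8, j:j + 8], threshold)
            kept += n
    rates.append(kept / image.size) # fraction of coefficients retained
print(rates)
```

    Coding coarse approximations first and refining them with later passes is what lets a scheme like this extend naturally to progressive transmission.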

  1. Regiospecific dechlorination of pentachlorophenol by dichlorophenol-adapted microorganisms in freshwater, anaerobic sediment slurries.

    PubMed Central

    Bryant, F O; Hale, D D; Rogers, J E

    1991-01-01

    The reductive dechlorination of pentachlorophenol (PCP) was investigated in anaerobic sediments that contained nonadapted or 2,4- or 3,4-dichlorophenol (DCP)-adapted microbial communities. Adaptation of sediment communities increased the rate of conversion of 2,4- or 3,4-DCP to monochlorophenols (CPs) and eliminated the lag phase before dechlorination was observed. Both 2,4- and 3,4-DCP-adapted sediment communities dechlorinated the six DCP isomers to CPs. The specificity of chlorine removal from the DCP isomers indicated a preference for ortho-chlorine removal by 2,4-DCP-adapted sediment communities and for para-chlorine removal by 3,4-DCP-adapted sediment communities. Sediment slurries containing nonadapted microbial communities either did not dechlorinate PCP or did so following a lag phase of at least 40 days. Sediment communities adapted to dechlorinate 2,4- or 3,4-DCP dechlorinated PCP without an initial lag phase. The 2,4-DCP-adapted communities initially removed the ortho-chlorine from PCP, whereas the 3,4-DCP-adapted communities initially removed the para-chlorine from PCP. A 1:1 mixture of the adapted sediment communities also dechlorinated PCP without a lag phase. Dechlorination by the mixture was regiospecific, following a para > ortho > meta order of chlorine removal. Intermediate products of degradation, 2,3,5,6-tetrachlorophenol, 2,3,5-trichlorophenol, 3,5-DCP, 3-CP, and phenol, were identified by a combination of cochromatography (high-pressure liquid chromatography) with standards and gas chromatography-mass spectrometry. PMID:1768102

  2. A hydrodynamic model for granular material flows including segregation effects

    NASA Astrophysics Data System (ADS)

    Gilberg, Dominik; Klar, Axel; Steiner, Konrad

    2017-06-01

    The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate, but very time-consuming. To overcome the long computation times, a macroscopic model is a natural choice. Therefore, we couple a mixture-theory-based segregation model to a hydrodynamic model of Navier-Stokes type, describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil-mechanical approach to cover the regime of fast dilute flow as well as slow dense flow, where the density of the granular material is close to the maximum packing density. Originally, the segregation model was formulated by Thornton and Gray for idealized avalanches. It is modified and adapted to be in the preferred form for the coupling. In the final coupled model the segregation process depends on the local state of the granular system. On the other hand, the granular system changes as differently mixed regions of the granular material differ, for example, in packing density. The modeling process focuses on dry granular material flows of two particle types differing only in size, but the model can easily be extended to arbitrary granular mixtures of different particle sizes and densities. To solve the coupled system, a finite volume approach is used. To test the model, the rotational mixing of small and large particles in a tumbler is simulated.

  3. In vitro immunotoxicology of quantum dots and comparison with dissolved cadmium and tellurium.

    PubMed

    Bruneau, Audrey; Fortier, Marlene; Gagne, Francois; Gagnon, Christian; Turcotte, Patrice; Tayabali, Azam; Davis, Thomas A; Auffret, Michel; Fournier, Michel

    2015-01-01

    The increasing use of products derived from nanotechnology has raised concerns about their potential toxicity, especially at the immunocompetence level in organisms. This study compared the immunotoxicity of cadmium sulfate/cadmium telluride (CdS/Cd-Te) mixture quantum dots (QDs) and their dissolved components, cadmium chloride (CdCl2)/sodium telluride (NaTeO3) salts, and a CdCl2/NaTeO3 mixture on four animal models commonly used in risk assessment studies: one bivalve (Mytilus edulis), one fish (Oncorhynchus mykiss), and two mammals (mice and humans). Our viability and phagocytosis biomarker results revealed that QDs were more toxic than dissolved metals for blue mussels. For the other species, dissolved metals (Cd, Te, and the Cd-Te mixture) were more toxic than the nanoparticles (NPs). The most sensitive species toward QDs, according to innate immune cells, was humans (inhibitory concentration [IC50] = 217 μg/mL). However, for adaptive immunity, lymphoblastic transformation in mice was decreased at low QD concentrations (EC50 = 4 μg/mL), making mice more sensitive than the other model species tested. Discriminant function analysis revealed that blue mussel hemocytes were able to discriminate the toxicity of QDs, Cd, Te, and the Cd-Te mixture (partial Wilks' λ = 0.021, p < 0.0001). For rainbow trout and human cells, the immunotoxic effects of QDs were similar to those obtained with the dissolved fraction of the Cd and Te mixture. For mice, the toxicity of QDs markedly differed from that observed with Cd, Te, and the dissolved Cd-Te mixture. The results also suggest that the aquatic species responded to these compounds more variably than the vertebrates did. The results lead to the recommendation that mussels and mice were best able to discriminate the effects of Cd-based NPs from the effects of dissolved Cd and Te at the immunocompetence level. © 2013 Wiley Periodicals, Inc.

  4. Perceptual Characterization and Analysis of Aroma Mixtures Using Gas Chromatography Recomposition-Olfactometry

    PubMed Central

    Johnson, Arielle J.; Hirson, Gregory D.; Ebeler, Susan E.

    2012-01-01

    This paper describes the design of a new instrumental technique, Gas Chromatography Recomposition-Olfactometry (GC-R), that adapts the reconstitution technique used in flavor chemistry studies by extracting volatiles from a sample by headspace solid-phase microextraction (SPME), separating the extract on a capillary GC column, and recombining individual compounds selectively as they elute off of the column into a mixture for sensory analysis (Figure 1). Using the chromatogram of a mixture as a map, the GC-R instrument allows the operator to "cut apart" and recombine the components of the mixture at will, selecting compounds, peaks, or sections based on retention time to include or exclude in a reconstitution for sensory analysis. Selective recombination is accomplished with the installation of a Deans Switch directly in-line with the column, which directs compounds either to waste or to a cryotrap at the operator's discretion. This enables the creation of, for example, aroma reconstitutions incorporating all of the volatiles in a sample, including instrumentally undetectable compounds as well as those present at concentrations below sensory thresholds, thus correcting for the "reconstitution discrepancy" sometimes noted in flavor chemistry studies. Using only flowering lavender (Lavandula angustifolia 'Hidcote Blue') as a source for volatiles, we used the instrument to build mixtures of subsets of lavender volatiles in-instrument and characterized their aroma qualities with a sensory panel. We showed evidence of additive, masking, and synergistic effects in these mixtures and of "lavender" aroma character as an emergent property of specific mixtures. This was accomplished without the need for chemical standards, reductive aroma models, or calculation of Odor Activity Values, and is broadly applicable to any aroma or flavor. PMID:22912722

  5. Perceptual characterization and analysis of aroma mixtures using gas chromatography recomposition-olfactometry.

    PubMed

    Johnson, Arielle J; Hirson, Gregory D; Ebeler, Susan E

    2012-01-01

    This paper describes the design of a new instrumental technique, Gas Chromatography Recomposition-Olfactometry (GC-R), that adapts the reconstitution technique used in flavor chemistry studies by extracting volatiles from a sample by headspace solid-phase microextraction (SPME), separating the extract on a capillary GC column, and recombining individual compounds selectively as they elute off of the column into a mixture for sensory analysis (Figure 1). Using the chromatogram of a mixture as a map, the GC-R instrument allows the operator to "cut apart" and recombine the components of the mixture at will, selecting compounds, peaks, or sections based on retention time to include or exclude in a reconstitution for sensory analysis. Selective recombination is accomplished with the installation of a Deans Switch directly in-line with the column, which directs compounds either to waste or to a cryotrap at the operator's discretion. This enables the creation of, for example, aroma reconstitutions incorporating all of the volatiles in a sample, including instrumentally undetectable compounds as well as those present at concentrations below sensory thresholds, thus correcting for the "reconstitution discrepancy" sometimes noted in flavor chemistry studies. Using only flowering lavender (Lavandula angustifolia 'Hidcote Blue') as a source for volatiles, we used the instrument to build mixtures of subsets of lavender volatiles in-instrument and characterized their aroma qualities with a sensory panel. We showed evidence of additive, masking, and synergistic effects in these mixtures and of "lavender" aroma character as an emergent property of specific mixtures. This was accomplished without the need for chemical standards, reductive aroma models, or calculation of Odor Activity Values, and is broadly applicable to any aroma or flavor.

  6. Grass-legume mixtures sustain strong yield advantage over monocultures under cool maritime growing conditions over a period of 5 years.

    PubMed

    Helgadóttir, Áslaug; Suter, Matthias; Gylfadóttir, Thórey Ó; Kristjánsdóttir, Thórdís A; Lüscher, Andreas

    2018-05-22

    Grassland-based livestock systems in cool maritime regions are commonly dominated by grass monocultures receiving relatively high levels of fertilizer. The current study investigated whether grass-legume mixtures can improve the productivity, resource efficiency and robustness of yield persistence of cultivated grassland under extreme growing conditions over a period of 5 years. Monocultures and mixtures of two grasses (Phleum pratense and Festuca pratensis) and two legumes (Trifolium pratense and Trifolium repens), one of which was fast establishing and the other temporally persistent, were sown in a field trial. Relative abundance of the four species in the mixtures was systematically varied at sowing. The plots were maintained under three N levels (20, 70 and 220 kg N ha-1 year-1) and harvested twice a year for five consecutive years. Yields of individual species and interactions between all species present were modelled to estimate the species diversity effects. Significant positive diversity effects were observed in all individual years and averaged across the 5 years. Across years, the four-species equi-proportional mixture was 71% (N20: 20 kg N ha-1 year-1) and 51% (N70: 70 kg N ha-1 year-1) more productive than the average of the monocultures, and the highest yielding mixture was 36% (N20) and 39% (N70) more productive than the highest yielding monoculture. Importantly, diversity effects were also evident at low relative abundances of either species group, grasses or legumes, in the mixture. Throughout the experiment, mixtures suppressed weeds significantly better than monocultures at all N levels. The results show that even in the less productive agricultural systems of cool maritime regions, grass-legume mixtures can contribute substantially and persistently to a more sustainable agriculture. Positive grass-legume interactions suggest that symbiotic N2 fixation is maintained even under these marginal conditions, provided that adapted species and cultivars are used.

  7. The Manhattan Frame Model-Manhattan World Inference in the Space of Surface Normals.

    PubMed

    Straub, Julian; Freifeld, Oren; Rosman, Guy; Leonard, John J; Fisher, John W

    2018-01-01

    Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches exploit these regularities via the restrictive, and rather local, Manhattan World (MW) assumption, which posits that every plane is perpendicular to one of the axes of a single coordinate system. These regularities are especially evident in the surface normal distribution of a scene, where they manifest as orthogonally-coupled clusters. This motivates the introduction of the Manhattan-Frame (MF) model, which captures the notion of an MW in the space of surface normals (the unit sphere), and of two probabilistic MF models over this space. First, for a single MF we propose novel real-time MAP inference algorithms, evaluate their performance and demonstrate their use in drift-free rotation estimation. Second, to capture the complexity of real-world scenes at a global scale, we extend the MF model to a probabilistic mixture of Manhattan Frames (MMF). For MMF inference we propose a simple MAP inference algorithm and an adaptive Markov-Chain Monte-Carlo sampling algorithm with Metropolis-Hastings split/merge moves that lets us infer the unknown number of mixture components. We demonstrate the versatility of the MMF model and inference algorithms across several scales of man-made environments.

  8. An Improved Gaussian Mixture Model for Damage Propagation Monitoring of an Aircraft Wing Spar under Changing Structural Boundary Conditions.

    PubMed

    Qiu, Lei; Yuan, Shenfang; Mei, Hanfei; Fang, Fang

    2016-02-26

    Structural Health Monitoring (SHM) technology is considered a key technology for reducing maintenance costs while ensuring the operational safety of aircraft structures. It has gradually developed from theoretical and fundamental research to real-world engineering applications in recent decades. Reliable damage monitoring under time-varying conditions is a main issue for aerospace engineering applications of SHM technology. Among the existing SHM methods, the Guided Wave (GW) and piezoelectric sensor-based technique is promising due to its high damage sensitivity and long monitoring range; nevertheless, its reliability problem must be addressed. Several methods, including environmental parameter compensation, baseline signal dependency reduction and data normalization, have been well studied, but limitations remain. This paper proposes a damage propagation monitoring method based on an improved Gaussian Mixture Model (GMM). It can be used on-line without any structural mechanical model or a priori knowledge of the damage and time-varying conditions. With this method, a baseline GMM is first constructed from the GW features obtained under time-varying conditions while the monitored structure is in the healthy state. When a new GW feature is obtained during the on-line damage monitoring process, the GMM is updated by an adaptive migration mechanism that includes dynamic learning and Gaussian component split-merge operations; the mixture probability distribution structure of the GMM and the number of Gaussian components are optimized adaptively, yielding an on-line GMM. Finally, a best-match-based Kullback-Leibler (KL) divergence is used to measure the degree of migration between the baseline GMM and the on-line GMM, revealing the weak cumulative changes of damage propagation mixed into the time-varying influence. A wing spar of an aircraft is used to validate the proposed method. The results indicate that crack propagation under changing structural boundary conditions can be monitored reliably. The method is not limited by the properties of the structure, and thus is also feasible for composite structures.
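    The abstract does not give the exact form of the best-match KL migration index, so the following is a minimal numpy sketch of one plausible reading (function names are illustrative): pair each baseline Gaussian component with its closest on-line component by closed-form Gaussian KL, and average the pairwise divergences with the baseline mixture weights.

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """Closed-form KL divergence KL(N(m0, S0) || N(m1, S1))."""
    d = m0.shape[0]
    S1inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1inv @ S0) + diff @ S1inv @ diff
                  - d + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def best_match_kl(weights_a, means_a, covs_a, means_b, covs_b):
    """Best-match divergence between two GMMs: each baseline component is
    matched to the on-line component minimizing component-wise KL, and the
    matched KLs are averaged with the baseline mixture weights."""
    total = 0.0
    for w, m, S in zip(weights_a, means_a, covs_a):
        kls = [kl_gauss(m, S, m2, S2) for m2, S2 in zip(means_b, covs_b)]
        total += w * min(kls)
    return total
```

A value near zero indicates the on-line GMM has not migrated from the baseline; growth of the index over successive updates would flag cumulative damage-related change.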

  9. The influence of oxygen concentration on the combustion of a fuel/oxidizer mixture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biteau, H.; Institut National de l'Environnement Industriel et des Risques, Parc Technologique Alata, Verneuil en Halatte; Fuentes, A.

    2010-04-15

    The aim of the present study is to investigate the influence of the O2 concentration on the combustion behaviour of a fuel/oxidizer mixture. The material tested is a ternary mixture of lactose, starch, and potassium nitrate, which has already been used in an attempt to estimate heat release rate using the FM-Global Fire Propagation Apparatus. It provides a well-controlled combustion chamber to study the evolution of the combustion products when varying the O2 concentration between air and low-oxidizer conditions. Different chemical behaviours were exhibited. When the O2 concentration was reduced below 18%, large variations were observed in the CO2 and CO concentrations. This critical O2 concentration seems to be the limit below which the material only uses its own oxidizer to react. On the other hand, mass loss did not highlight this change in chemical reactions and remained similar whatever the test conditions. This suggests that the oxidation of CO into CO2 is due to reactions occurring in the gas phase, especially at large O2 concentrations. This behaviour can be verified using a simplified flammability limit model adapted for the current work. Finally, a sensitivity analysis has been carried out to underline the influence of the CO concentration on the evaluation of heat release rate using typical calorimetric methods. The results of this study provide a critical basis for the investigation of the combustion of a fuel/oxidizer mixture and for the validation of future numerical models.

  10. Lamellar pro-inflammatory cytokine expression patterns in laminitis at the developmental stage and at the onset of lameness: innate vs. adaptive immune response.

    PubMed

    Belknap, J K; Giguère, S; Pettigrew, A; Cochran, A M; Van Eps, A W; Pollitt, C C

    2007-01-01

    Recent research has indicated that inflammation plays a role in the early stages of laminitis and that, similar to organ failure in human sepsis, early inflammatory mechanisms may lead to downstream events resulting in lamellar failure. Characterisation of the type of immune response (i.e. innate vs. adaptive) is essential in order to develop therapeutic strategies to counteract these deleterious events. The objective was to quantitate gene expression of pro-inflammatory cytokines known to be important in the innate and adaptive immune response during the early stages of laminitis, using both the black walnut extract (BWE) and oligofructose (OF) models of laminitis. Real-time qPCR was used to assess lamellar mRNA expression of interleukins-1beta, 2, 4, 6, 8, 10, 12 and 18, tumour necrosis factor alpha and interferon gamma at the developmental stage and at the onset of lameness. Significantly increased lamellar mRNA expression of cytokines important in the innate immune response was present at the developmental stage of the BWE model, and at the onset of acute lameness in both the BWE and OF models. Of the cytokines characteristic of the Th1 and Th2 arms of the adaptive immune response, a mixed response was noted at the onset of acute lameness in the BWE model, whereas the response was skewed towards Th1 at the onset of lameness in the OF model. Lamellar inflammation is characterised by a strong innate immune response in the developmental stages of laminitis, and by a mixture of innate and adaptive immune responses at the onset of lameness. These results indicate that anti-inflammatory treatment of early-stage laminitis (and of the horse at risk of laminitis) should include not only therapeutic drugs that address prostanoid activity, but should also address the marked increases in lamellar cytokine expression.

  11. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach outperforms the single model for the Guadalupe catchment, where multiple dominant processes are evidenced by the diagnostic measures, whereas the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture the response.
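    The HME framework itself is hierarchical, but its core idea of weighting expert model outputs by a gate driven by an indicator variable can be sketched with a single logistic gate. This is only an illustrative simplification; the expert outputs, indicator, and parameter names below are hypothetical and not taken from the study.

```python
import numpy as np

def gate(indicator, theta=0.0, steepness=1.0):
    """Logistic gating function of the indicator variable: returns the
    probability weight assigned to the first expert."""
    return 1.0 / (1.0 + np.exp(-steepness * (indicator - theta)))

def hme_predict(indicator, expert_fast, expert_slow):
    """Single-level mixture of experts: blend two expert predictions
    (e.g., quick-runoff vs. slow-baseflow model structures) by the gate."""
    g = gate(indicator)
    return g * expert_fast + (1 - g) * expert_slow
```

When the indicator is far above the threshold the gated prediction follows the first expert almost exclusively, and far below it the second; near the threshold the two structures are smoothly blended rather than switched.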

  12. Modeling Joint Effects of Mixtures of Chemicals on Microorganisms Using Quantitative Structure Activity Relationships

    DTIC Science & Technology

    1993-08-22

    [Garbled table fragment: single-chemical and mixture toxicity values for alkanes (cyclohexane, pentane, hexane, heptane, octane), bis(2-chloroethyl) ether, triethanolamine and other compounds; class abbreviations: Aro = aromatic; Hal = halogenated aliphatic; Alk = alkanes; Alc = alcohols, esters, ketones and ethers; Ami = amines.] Correlation ... chemicals using laboratory-grown activated sludge on synthetic feed. They adapted OECD Method 209, using inhibition of the oxygen uptake rate as the measure

  13. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback–Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253

  14. Gaussian mixtures on tensor fields for segmentation: applications to medical imaging.

    PubMed

    de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos

    2011-01-01

    In this paper, we introduce a new approach for tensor field segmentation based on the definition of mixtures of Gaussians on tensors as a statistical model. Working over the well-known Geodesic Active Regions segmentation framework, this scheme presents several interesting advantages. First, it yields a more flexible model than the use of a single Gaussian distribution, which enables the method to better adapt to the complexity of the data. Second, it can work directly on tensor-valued images or, through a parallel scheme that processes independently the intensity and the local structure tensor, on scalar textured images. Two different applications have been considered to show the suitability of the proposed method for medical imaging segmentation. First, we address DT-MRI segmentation on a dataset of 32 volumes, showing a successful segmentation of the corpus callosum and favourable comparisons with related approaches in the literature. Second, the segmentation of bones from hand radiographs is studied, and a complete automatic-semiautomatic approach has been developed that makes use of anatomical prior knowledge to produce accurate segmentation results. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Climate-change influences on the response of macroinvertebrate communities to pesticide contamination in the Sacramento River, California watershed.

    PubMed

    Chiu, Ming-Chih; Hunt, Lisa; Resh, Vincent H

    2017-03-01

    Limited studies have addressed how future climate-change scenarios may alter the effects of pesticides on biotic assemblages or the effects of exposure to repeated pulses of pesticide mixtures. We used reported pesticide-use data as input to a hydrological fate and transport model (Soil and Water Assessment Tool) under multiple climate-change scenarios to simulate spatiotemporal dynamics of pesticide mixtures in streams on a daily time-step in the Sacramento River watershed of California. We predicted that there will be increased pesticide application with warming across the watershed, especially in upstream areas. Using a statistical model describing the relationship between macroinvertebrate communities and pesticide dynamics, we found that, compared to the baseline period of 1970-1999: (1) most climate-change scenarios predicted increased rainfall and warming across the watershed during 2070-2099; (2) increasing pesticide contamination and increased impact on macroinvertebrates will likely occur in most areas of the watershed by 2070-2099; and (3) lower increases in the effects of pesticides on macroinvertebrates were predicted for the downstream areas with intensive agriculture compared to some upstream areas with less-intensive agriculture. Future efforts on practical adaptation and mitigation strategies can be improved by awareness of the altered threats of pesticide mixtures under future climate-change conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, such as different versions of EM, and numeric optimization methods, such as conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
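    One common reading of a mixture-density kernel — offered here only as a hedged sketch, since the abstract does not spell out the construction — is the ensemble-averaged inner product of GMM posterior membership probabilities, which is symmetric positive semidefinite by construction:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_density_kernel(X, n_models=5, n_components=3, seed=0):
    """Gram matrix K[i, j]: averaged over an ensemble of GMM fits, the
    inner product of the posterior membership vectors of x_i and x_j,
    i.e. roughly the probability the two points share a component."""
    rng = np.random.RandomState(seed)
    K = np.zeros((len(X), len(X)))
    for _ in range(n_models):
        gmm = GaussianMixture(n_components=n_components,
                              random_state=rng.randint(1 << 30)).fit(X)
        P = gmm.predict_proba(X)   # responsibilities, shape (n, k)
        K += P @ P.T               # Gram matrix of posterior vectors
    return K / n_models
```

Each term `P @ P.T` is a Gram matrix of posterior vectors, so the average remains a valid Mercer kernel and can be fed to any kernel method.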

  17. Identity-Specific Face Adaptation Effects: Evidence for Abstractive Face Representations

    ERIC Educational Resources Information Center

    Hole, Graham

    2011-01-01

    The effects of selective adaptation on familiar face perception were examined. After prolonged exposure to photographs of a celebrity, participants saw a series of ambiguous morphs that were varying mixtures between the face of that person and a different celebrity. Participants judged fewer of the morphs to resemble the celebrity to which they…

  18. Financial Data Analysis by means of Coupled Continuous-Time Random Walk in Rachev-Rűschendorf Model

    NASA Astrophysics Data System (ADS)

    Jurlewicz, A.; Wyłomańska, A.; Żebrowski, P.

    2008-09-01

    We adapt the continuous-time random walk formalism to describe asset price evolution. We expand the idea proposed by Rachev and Rűschendorf, who analyzed the binomial pricing model in discrete time with randomization of the number of price changes. As a result, in the framework of the proposed model we obtain a mixture of the Gaussian and generalized arcsine laws as the limiting distribution of log-returns. Moreover, we derive a European call option price that is an extension of the Black-Scholes formula. We apply the obtained theoretical results to model actual financial data and try to show that the continuous-time random walk offers alternative tools to deal with several complex issues of financial markets.

  19. Methods and apparatuses for making cathodes for high-temperature, rechargeable batteries

    DOEpatents

    Meinhardt, Kerry D; Sprenkle, Vincent L; Coffey, Gregory W

    2014-05-20

    The approaches for fabricating cathodes can be adapted to improve control over cathode composition and to better accommodate batteries of any shape and their assembly. For example, a first solid having an alkali metal halide, a second solid having a transition metal, and a third solid having an alkali metal aluminum halide are combined into a mixture. The mixture can be heated in a vacuum to a temperature that is greater than or equal to the melting point of the third solid. When the third solid is substantially molten liquid, the mixture is compressed into a desired cathode shape and then cooled to solidify the mixture in the desired cathode shape.

  20. Apparatuses for making cathodes for high-temperature, rechargeable batteries

    DOEpatents

    Meinhardt, Kerry D.; Sprenkle, Vincent L.; Coffey, Gregory W.

    2016-09-13

    The approaches and apparatuses for fabricating cathodes can be adapted to improve control over cathode composition and to better accommodate batteries of any shape and their assembly. For example, a first solid having an alkali metal halide, a second solid having a transition metal, and a third solid having an alkali metal aluminum halide are combined into a mixture. The mixture can be heated in a vacuum to a temperature that is greater than or equal to the melting point of the third solid. When the third solid is substantially molten liquid, the mixture is compressed into a desired cathode shape and then cooled to solidify the mixture in the desired cathode shape.

  1. Effects of Post-Treatment Hydrogen Gas Inhalation on Uveitis Induced by Endotoxin in Rats.

    PubMed

    Yan, Weiming; Chen, Tao; Long, Pan; Zhang, Zhe; Liu, Qian; Wang, Xiaocheng; An, Jing; Zhang, Zuoming

    2018-06-07

    BACKGROUND Molecular hydrogen (H2) has been widely reported to have beneficial effects in diverse animal models and human diseases through reduction of oxidative stress and inflammation. The aim of this study was to investigate whether hydrogen gas could ameliorate endotoxin-induced uveitis (EIU) in rats. MATERIAL AND METHODS Male Sprague-Dawley rats were divided into a normal group, a model group, a nitrogen-oxygen (N-O) group, and a hydrogen-oxygen (H-O) group. EIU was induced in rats of the latter 3 groups by injection of lipopolysaccharide (LPS). After that, rats in the N-O group inhaled a gas mixture of 67% N2 and 33% O2, while those in the H-O group inhaled a gas mixture of 67% H2 and 33% O2. All rats were graded according to the signs of uveitis after electroretinography (ERG) examination. Protein concentration in the aqueous humor (AqH) was measured. Furthermore, hematoxylin-eosin staining and immunostaining of anti-ionized calcium-binding adapter molecule 1 (Iba1) in the iris and ciliary body (ICB) were carried out. RESULTS No statistically significant differences existed in the graded score of uveitis or the b-wave peak time in the dark-adapted 3.0 ERG among the model, N-O, and H-O groups (P>0.05), while rats of the H-O group showed a lower concentration of AqH protein than the model or N-O group (P<0.05). The number of infiltrating cells in the ICB of rats from the H-O group was not significantly different from that of the model or N-O group (P>0.05), while the activation of microglia cells in the H-O group was significantly reduced (P<0.05). CONCLUSIONS Post-treatment hydrogen gas inhalation did not ameliorate the clinical signs or reduce the infiltrating cells of EIU. However, it inhibited the elevation of protein in the AqH and reduced microglia activation.

  2. Inhibition of the acetoclastic methanogenic activity by phenol and alkyl phenols.

    PubMed

    Olguin-Lora, P; Puig-Grajales, L; Razo-Flores, E

    2003-08-01

    Chemical and petrochemical industries are important sources of aromatic pollutants. Petrochemical processes such as caustic washing of middle distillates produce spent caustic liquors highly concentrated in phenol and alkyl phenols. Anaerobic technology is considered a feasible strategy for petrochemical wastewater pre-treatment, although high concentrations of phenol could limit its efficiency. The goal of this work was to determine the toxicity of both selected alkyl phenols and a synthetic "spent-caustic phenols mixture" on the acetoclastic Specific Methanogenic Activity (SMA) of unadapted and phenol-adapted granular sludge. For unadapted granular sludge, alkyl phenols were responsible for 50% (IC50) and 100% (IC100) inhibition of the SMA at concentrations ranging from 1.6 to 5.0 mM and from 4.1 to 27.5 mM, respectively. In the case of phenol-adapted granular sludge, the inhibitory concentrations ranged from 1.7 to 14.9 mM and from 4.0 to 83.0 mM for IC50 and IC100, respectively, highlighting the impact of sludge acclimation. The inhibition produced by 2-ethylphenol was more acute than that of phenol and was not reduced by the phenol acclimation process. The IC50 and IC100 values obtained for the synthetic "spent-caustic phenols mixture" were 9.5 mM and 88.4 mM, respectively. The inhibitory concentrations of the phenol compounds were closely correlated with compound apolarity (log P), indicating that the lipophilic character of the tested compounds was responsible for their methanogenic toxicity. An inhibition model was confirmed for estimating the IC50 and IC100 values.

  3. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast-equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To account for the hypothesis that homogeneous regions in the image represent homogeneous silences (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than, or comparable to, those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
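    A minimal sketch of the partitioning step described above, assuming a 1-D GMM fitted to the gray levels and numerically locating the crossing points of adjacent weighted components (the subsequent interval-to-interval gray-level mapping via the CDF is omitted):

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmm_gray_partition(gray_values, n_components=3):
    """Fit a 1-D GMM to gray levels and return the intersection points of
    adjacent weighted Gaussian components; these points partition the
    dynamic range into input gray-level intervals."""
    g = GaussianMixture(n_components=n_components, random_state=0)
    g.fit(gray_values.reshape(-1, 1))
    order = np.argsort(g.means_.ravel())
    mu = g.means_.ravel()[order]
    sd = np.sqrt(g.covariances_.ravel()[order])
    w = g.weights_[order]
    cuts = []
    for i in range(n_components - 1):
        # numerically locate where the two weighted densities cross
        xs = np.linspace(mu[i], mu[i + 1], 2000)
        d = (w[i] * norm.pdf(xs, mu[i], sd[i])
             - w[i + 1] * norm.pdf(xs, mu[i + 1], sd[i + 1]))
        sign_change = np.where(np.diff(np.sign(d)) != 0)[0]
        if len(sign_change):
            cuts.append(xs[sign_change[0]])
    return cuts
```

On a clearly bimodal gray-level histogram, the single returned cut falls between the two modes, splitting the dynamic range into one interval per dominant component.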

  4. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.

  5. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611
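    To illustrate the mixture idea behind these models, here is a hedged, intercept-only simplification: EM for a two-component Poisson mixture without covariates. The full concomitant-variable regression case would replace the weighted-mean M-step with weighted Poisson GLM fits.

```python
import numpy as np
from scipy.stats import poisson

def em_poisson_mixture(y, n_iter=200):
    """EM for a two-component Poisson mixture (intercept-only simplification
    of the mixture-regression case): returns mixing proportions and rates."""
    # crude initialization from the lower/upper quartiles of the counts
    lam = np.array([np.percentile(y, 25) + 0.5, np.percentile(y, 75) + 0.5])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each count
        R = pi * poisson.pmf(y[:, None], lam)
        R /= R.sum(axis=1, keepdims=True)
        # M-step: weighted mixing proportions and weighted-mean rates
        pi = R.mean(axis=0)
        lam = (R * y[:, None]).sum(axis=0) / R.sum(axis=0)
    return pi, lam
```

On counts drawn half from Poisson(2) and half from Poisson(15), the fitted rates recover the two risk regimes, which is the clustering-plus-componentwise-rate behaviour the abstract describes.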

  6. Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1997-01-01

    An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.

  7. Comparing geophysical measurements to theoretical estimates for soil mixtures at low pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wildenschild, D; Berge, P A; Berryman, K G

    1999-01-15

    The authors obtained good estimates of the measured velocities of sand-peat samples at low pressures by using a theoretical method, the self-consistent theory of Berryman (1980), with sand and porous peat representing the microstructure of the mixture. They were unable to obtain useful estimates with several other theoretical approaches, because the properties of the quartz, air and peat components of the samples vary over several orders of magnitude. Methods that are useful for consolidated rock cannot be applied directly to unconsolidated materials; instead, careful consideration of microstructure is necessary to adapt the methods successfully. Future work includes comparison of the measured velocity values to additional theoretical estimates, investigation of Vp/Vs ratios and wave amplitudes, as well as modeling of dry and saturated sand-clay mixtures (e.g., Bonner et al., 1997, 1998). The results suggest that field data can be interpreted by comparing laboratory measurements of soil velocities to theoretical estimates of velocities in order to establish a systematic method for predicting velocities for a full range of sand-organic material mixtures at various pressures. Once the theoretical relationship is obtained, it can be used to estimate the soil composition at various depths from field measurements of seismic velocities. Further refinement of the method for relating velocities to soil characteristics is useful for developing inversion algorithms.

  8. Bayesian nonparametric regression with varying residual density

    PubMed Central

    Pati, Debdeep; Dunson, David B.

    2013-01-01

    We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
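    The probit stick-breaking construction mentioned above maps a sequence of real-valued scores to mixture weights through the standard normal CDF. A minimal numerical sketch, with the leftover stick mass assigned to a final component:

```python
import numpy as np
from scipy.stats import norm

def psb_weights(alpha):
    """Probit stick-breaking: map scores alpha_1..alpha_K to mixture weights
    w_k = Phi(alpha_k) * prod_{j<k} (1 - Phi(alpha_j)), with the remaining
    stick mass assigned to a final (K+1)-th component."""
    v = norm.cdf(alpha)                 # break proportions in (0, 1)
    w = np.empty(len(alpha) + 1)
    remaining = 1.0
    for k, vk in enumerate(v):
        w[k] = vk * remaining           # take a fraction of the remaining stick
        remaining *= (1.0 - vk)
    w[-1] = remaining                   # leftover mass
    return w
```

In the predictor-dependent extension described in the abstract, each score alpha_k would itself be a Gaussian process evaluated at the predictor value, so the weights (and hence the residual density) vary smoothly across covariate space.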

  9. Mixture and odorant processing in the olfactory systems of insects: a comparative perspective.

    PubMed

    Clifford, Marie R; Riffell, Jeffrey A

    2013-11-01

    Natural olfactory stimuli are often complex mixtures of volatiles, of which the identities and ratios of constituents are important for odor-mediated behaviors. Despite this importance, the mechanism by which the olfactory system processes this complex information remains an area of active study. In this review, we describe recent progress in how odorants and mixtures are processed in the brain of insects. We use a comparative approach toward contrasting olfactory coding and the behavioral efficacy of mixtures in different insect species, and organize these topics around four sections: (1) Examples of the behavioral efficacy of odor mixtures and the olfactory environment; (2) mixture processing in the periphery; (3) mixture coding in the antennal lobe; and (4) evolutionary implications and adaptations for olfactory processing. We also include pertinent background information about the processing of individual odorants and comparative differences in wiring and anatomy, as these topics have been richly investigated and inform the processing of mixtures in the insect olfactory system. Finally, we describe exciting studies that have begun to elucidate the role of the processing of complex olfactory information in evolution and speciation.

  10. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    PubMed

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

    The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
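
    The two prediction models compared in this record can be illustrated with a minimal sketch (hypothetical helper names; two-parameter log-logistic concentration-response curves are an assumption, not the authors' fitted model). Concentration addition sums toxic units at a common effect level; independent action multiplies survival probabilities:

```python
def logistic_effect(c, ec50, slope):
    """Fraction affected at concentration c (two-parameter log-logistic)."""
    return 0.0 if c <= 0 else 1.0 / (1.0 + (ec50 / c) ** slope)

def ia_effect(concs, ec50s, slopes):
    """Independent action: E_mix = 1 - prod_i (1 - E_i)."""
    survival = 1.0
    for c, ec50, slope in zip(concs, ec50s, slopes):
        survival *= 1.0 - logistic_effect(c, ec50, slope)
    return 1.0 - survival

def ca_effect(concs, ec50s, slopes, tol=1e-9):
    """Concentration addition: the mixture effect E solves
    sum_i c_i / EC_E,i = 1 (toxic units sum to one); bisection on E."""
    def toxic_units(E):
        total = 0.0
        for c, ec50, slope in zip(concs, ec50s, slopes):
            # EC_E,i: concentration of chemical i alone producing effect E
            ec_e = ec50 * (E / (1.0 - E)) ** (1.0 / slope)
            total += c / ec_e
        return total
    lo, hi = tol, 1.0 - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toxic_units(mid) > 1.0:  # toxic units fall as E rises
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    For a single chemical both models reduce to its own concentration-response curve; the divergence between the two predictions for a given binary mixture mirrors the model comparison reported in the abstract.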

  11. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  12. The Effect of Semi-Brittle Rheology on the Seismicity at the Subduction Interface: Coseismic and Aseismic Events

    NASA Astrophysics Data System (ADS)

    Tong, X.; Lavier, L.

    2017-12-01

    Cold and warm subduction zones usually have different seismicity and tectonic structure. Aseismic events such as episodic tremor and slip (ETS) and slow slip events (SSE) are often observed in warm, young slabs, which typically have less megathrust seismicity and a smaller seismogenic area (e.g. southwest Japan). On the other hand, cold, old slabs (e.g. Northeast Japan) have more megathrust events, a larger seismogenic area, and few aseismic events. Recent studies have tried to model these differences in seismic behavior with different approaches, including rheological heterogeneity (e.g. frictional vs. viscous), petrological heterogeneity (e.g. hydration-dehydration processes and mineral phase changes), and frictional heterogeneity (e.g. rate-and-state dependent friction). Following previous work, we propose a new model in which the subduction channel has a temperature-dependent material assembly composed of an explicit mixture of basalt/eclogite and mantle peridotite. Our model also takes into account rate-and-state dependent friction and pore fluid pressure. Depending on the temperature, the basalt and peridotite mixture can behave either as an elastoplastic frictional or a Maxwell viscoelastic material. To model the mixture numerically, we use DynEarthSol3D (DES3D), a robust, adaptive, multi-dimensional finite element solver with a composite elasto-visco-plastic rheology. We vary the temperature profile, the ratio of basalt to peridotite, the rheology of the mantle peridotites, and the loading rate of the subduction interface. Over multiple earthquake cycles, our two end-member experiments show that megathrust earthquakes dominate the seismicity under cold conditions (e.g. Japan trench), while both coseismic and aseismic events account for the seismicity under warm conditions (e.g. Nankai trench).

  13. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted in several papers with preliminary results, the mixture modeling approach had so far not been developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the fragment mixture models are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing peak detection algorithms and demonstrate improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the algorithm to real proteomic datasets of low and high resolution. PMID:26230717
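
    The partition-then-decompose idea can be sketched as follows (a minimal illustration, not the authors' implementation; the function names and the valley-based splitting threshold are assumptions). Each fragment is fitted with an intensity-weighted EM, and the per-fragment parameters could then be concatenated into a whole-spectrum model:

```python
import numpy as np

def split_fragments(x, w, thresh=1e-3):
    """Partition the signal at near-zero intensity valleys; each run of
    above-threshold samples becomes one fragment (x: positions, w: intensities)."""
    mask = w > thresh * w.max()
    edges = np.flatnonzero(np.diff(mask.astype(int)))
    idx = np.split(np.arange(len(x)), edges + 1)
    return [(x[i], w[i]) for i in idx if mask[i[0]]]

def weighted_gmm_1d(x, w, k, iters=200, seed=0):
    """EM for a 1-D Gaussian mixture fitted to an intensity-weighted signal."""
    rng = np.random.default_rng(seed)
    w = w / w.sum()
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, np.var(x) / k + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample point
        d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * d
        r /= r.sum(axis=1, keepdims=True) + 1e-300
        # M-step: intensity-weighted parameter updates
        nk = (w[:, None] * r).sum(axis=0)
        mu = (w[:, None] * r * x[:, None]).sum(axis=0) / nk
        var = (w[:, None] * r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        pi = nk
    return pi, mu, var
```

    Decomposing short fragments independently keeps each EM cheap and local, which is the source of the efficiency gain the abstract describes.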

  14. One-step catalytic conversion of biomass-derived carbohydrates to liquid fuels

    DOEpatents

    Sen, Ayusman; Yang, Weiran

    2014-03-18

    The invention relates to a method for the manufacture of hydrocarbon fuels and oxygenated hydrocarbon fuels, such as alkyl-substituted tetrahydrofurans including 2,5-dimethyltetrahydrofuran, 2-methyltetrahydrofuran, 5-methylfurfural, and mixtures thereof. The method generally entails forming a mixture of reactants that includes carbonaceous material, water, a metal catalyst, and an acid, and reacting that mixture in the presence of hydrogen. The reaction is performed at a temperature and for a time sufficient to produce a furan-type hydrocarbon fuel. The process may be adapted to provide continuous manufacture of hydrocarbon fuels such as a furan-type fuel.

  15. Adrenal hormones and the anorectic response and adaptation of rats to amino acid imbalance.

    PubMed

    Hammer, V A; Gietzen, D W; Sworts, V D; Beverly, J L; Rogers, Q R

    1990-12-01

    The role of adrenal function in the anorectic response and adaptation of rats to a diet with an isoleucine (Ile) imbalance was investigated. In the first of four experiments, rats were fed a mildly Ile-imbalanced diet after treatment with metyrapone, an inhibitor of glucocorticoid synthesis. In two separate experiments, rats were presented with either a mildly or severely Ile-imbalanced diet (4.93 and 9.86% imbalanced amino acid mixture, respectively) after bilateral adrenalectomy. Finally, the effects of ICS 205-930, a serotonin-3 receptor antagonist, on the intake of the mildly Ile-imbalanced diet were tested in adrenalectomized animals. In each experiment a 2 X 2 factorial design was used. Neither metyrapone nor adrenalectomy altered the initial depression in the intake of an imbalanced diet. The adaptation phase in the response of adrenalectomized rats fed a mildly Ile-imbalanced diet was not different from that of controls, but adrenalectomized rats fed severely Ile-imbalanced diets were unable to adapt. Adrenalectomy did not alter the anti-anorectic activity of ICS 205-930 in this model. These results suggest that adrenal hormones are not necessary for the initial anorectic response or adaptation of rats to an Ile-imbalanced diet, nor are they implicated in the anti-anorectic effect of serotonin-3 blockade.

  16. Model of experts for decision support in the diagnosis of leukemia patients.

    PubMed

    Corchado, Juan M; De Paz, Juan F; Rodríguez, Sara; Bajo, Javier

    2009-07-01

    Recent advances in the field of biomedicine, specifically in the field of genomics, have led to an increase in the information available for conducting expression analysis. Expression analysis is a technique used in transcriptomics, a branch of genomics that deals with the study of messenger ribonucleic acid (mRNA) and the extraction of information contained in the genes. This increase in information is reflected in the exon arrays, which require the use of new techniques in order to extract the information. The purpose of this study is to provide a tool based on a mixture of experts model that allows the analysis of the information contained in the exon arrays, from which automatic classifications for decision support in diagnoses of leukemia patients can be made. The proposed model integrates several cooperative algorithms characterized by their efficiency for data processing, filtering, classification and knowledge extraction. The Cancer Institute of the University of Salamanca is making an effort to develop tools to automate the evaluation of data and to facilitate the analysis of information. This proposal is a step forward in this direction and the first step toward the development of a mixture of experts tool that integrates different cognitive and statistical approaches to deal with the analysis of exon arrays. The mixture of experts model presented within this work provides great capacities for learning and adaptation to the characteristics of the problem in consideration, using novel algorithms in each of the stages of the analysis process that can be easily configured and combined, and provides results that notably improve those provided by the existing methods for exon array analysis. The material used consists of data from exon arrays provided by the Cancer Institute that contain samples from leukemia patients. The methodology used consists of a system based on a mixture of experts.
Each one of the experts incorporates novel artificial intelligence techniques that improve the process of carrying out various tasks such as pre-processing, filtering, classification and extraction of knowledge. This article will detail the manner in which individual experts are combined so that together they generate a system capable of extracting knowledge, thus permitting patients to be classified in an automatic and efficient manner that is also comprehensible for medical personnel. The system has been tested in a real setting and has been used for classifying patients who suffer from different forms of leukemia at various stages. Personnel from the Cancer Institute supervised and participated throughout the testing period. Preliminary results are promising, notably improving the results obtained with previously used tools. The medical staff from the Cancer Institute considers the tools that have been developed to be positive and very useful in a supporting capacity for carrying out their daily tasks. Additionally the mixture of experts supplies a tool for the extraction of necessary information in order to explain the associations that have been made in simple terms. That is, it permits the extraction of knowledge for each classification made and generalized in order to be used in subsequent classifications. This allows for a large amount of learning and adaptation within the proposed system.

  17. Identifiability in N-mixture models: a large-scale screening test with bird data.

    PubMed

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
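
    The Poisson binomial N-mixture likelihood on which this screening test rests can be sketched in a few lines (an illustrative re-implementation, not the author's code; the truncation bound n_max, link functions, and simulated parameter values are assumptions). The latent site abundance N is marginalized out of repeated counts:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def nmix_nll(params, y, n_max=80):
    """Negative log-likelihood of a Poisson binomial N-mixture model.
    y: count matrix (sites x repeat visits); the latent abundance N at each
    site is summed out up to the truncation bound n_max."""
    lam = np.exp(params[0])                 # log link for mean abundance
    p = 1.0 / (1.0 + np.exp(-params[1]))    # logit link for detection
    N = np.arange(n_max + 1)
    log_prior = poisson.logpmf(N, lam)      # P(N | lambda)
    nll = 0.0
    for yi in y:
        # P(y_i | N, p) for every candidate N, multiplied over visits
        log_obs = binom.logpmf(yi[:, None], N[None, :], p).sum(axis=0)
        nll -= np.logaddexp.reduce(log_prior + log_obs)
    return nll

# simulate counts at 150 sites x 3 visits and recover the parameters
rng = np.random.default_rng(1)
true_N = rng.poisson(5.0, size=150)
y = rng.binomial(true_N[:, None], 0.4, size=(150, 3))
fit = minimize(nmix_nll, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat = np.exp(fit.x[0])
p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
```

    Under the negative-binomial mixture criticized in the abstract, an extra dispersion parameter enters the prior over N, and it is along that ridge in the likelihood that the identifiability problems tend to appear.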

  18. Identification and Control of Aircrafts using Multiple Models and Adaptive Critics

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2007-01-01

    We compared two possible implementations of local linear models for control: one approach is based on a self-organizing map (SOM) to cluster the dynamics, followed by a set of linear models operating at each cluster. The gating function is therefore hard (a single local model represents the regional dynamics), which simplifies the controller design since there is a one-to-one mapping between controllers and local models. The second approach uses a soft gate within a probabilistic framework based on a Gaussian mixture model (also called a dynamic mixture of experts). In this approach several models may be active at a given time; we can expect a smaller number of models, but the controller design is more involved, with potentially better noise rejection characteristics. Our experiments showed that the SOM provides the overall best performance at high SNRs, but its performance degrades faster than the GMM's under the same noise conditions. The SOM approach required about an order of magnitude more models than the GMM, so in terms of implementation cost the GMM is preferable. The design of the SOM is straightforward, while the design of the GMM controllers, although still reasonable, is more involved and needs more care in the selection of parameters. Either of these locally linear approaches outperforms global nonlinear controllers based on neural networks, such as the time delay neural network (TDNN). Therefore, in essence, the local model approach warrants practical implementation. In order to call the attention of the control community to this design methodology, we successfully extended the multiple model approach to PID controllers (still the most widely used control scheme in industry today) and wrote a paper on this subject. The echo state network (ESN) is a recurrent neural network with the special characteristic that only the output parameters are trained. The recurrent connections are preset according to the problem domain and are fixed. 
In a nutshell, the states of the reservoir of recurrent processing elements implement a projection space onto which the desired response is optimally projected. This architecture trades a large increase in the dimension of the recurrent layer for training efficiency. However, the power of recurrent neural networks can be brought to bear on practical, difficult problems. Our goal was to implement an adaptive critic architecture implementing Bellman's approach to optimal control. However, we could only characterize the ESN performance as a critic in value function evaluation, which is just one of the pieces of the overall adaptive critic controller. The results were very convincing, and the simplicity of the implementation was unparalleled.

  19. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.

  20. Characterization of Metal Matrix Composites

    NASA Technical Reports Server (NTRS)

    Daniel, I. M.; Chun, H. J.; Karalekas, D.

    1994-01-01

    Experimental methods were developed, adapted, and applied to the characterization of a metal matrix composite system, namely, silicon carbide/aluminum (SCS-2/6061 Al), and its constituents. The silicon carbide fiber was characterized by determining its modulus, strength, and coefficient of thermal expansion. The aluminum matrix was characterized thermomechanically up to 399 C (750 F) at two strain rates. The unidirectional SiC/Al composite was characterized mechanically under longitudinal, transverse, and in-plane shear loading up to 399 C (750 F). Isothermal and non-isothermal creep behavior was also measured. The applicability of a proposed set of multifactor thermoviscoplastic nonlinear constitutive relations and a computer code was investigated. Agreement between predictions and experimental results was shown in a few cases. The elastoplastic thermomechanical behavior of the composite was also described by a number of new analytical models developed or adapted for the material system studied. These models include the rule of mixtures, a composite cylinder model with various thermoelastoplastic analyses, and a model based on average field theory. In most cases satisfactory agreement was demonstrated between analytical predictions and experimental results for the cases of stress-strain behavior and thermal deformation behavior at different temperatures. In addition, some models yielded detailed three-dimensional stress distributions in the constituents within the composite.

  1. A Topical Overview of Cumulative Risk Assessment Concepts ...

    EPA Pesticide Factsheets

    Cumulative risk assessments (CRAs) address combined risks from exposures to multiple chemical and nonchemical stressors and may focus on vulnerable communities or populations. Significant contributions have been made to the development of concepts, methods, and applications for CRA over the past decade. Work in both human health and ecological cumulative risk has advanced in two different contexts. The first is assessing the effects of chemical mixtures that share common modes of action, or that cause common adverse outcomes; in this context two primary models are used for predicting mixture effects, dose addition and response addition. The second context is evaluating the combined effects of chemical and nonchemical (e.g., radiation, biological, nutritional, economic, psychological, habitat alteration, land-use change, global climate change, and natural disasters) stressors. CRA can be adapted to address risk in many contexts, and this adaptability is reflected in the range of disciplinary perspectives in the published literature. This article presents the results of a literature search by presenting a range of selected work with the intention to give a broad overview of relevant topics and provide a starting point for researchers interested in CRA applications. This is a select literature review of topics in CRA. As a published article it will allow the citation of an analysis conducted on a rich and diverse set of CRA publications relevant to assessment methods.

  2. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    PubMed

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly on the additive and potential synergistic actions of environmental mixtures. We investigated the 24h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and 96h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models with different assumptions in the toxic action mode in toxicodynamic processes through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the toxic action mode may depend on the combinations and concentrations of tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. 
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906

  4. Detecting Mixtures from Structural Model Differences Using Latent Variable Mixture Modeling: A Comparison of Relative Model Fit Statistics

    ERIC Educational Resources Information Center

    Henson, James M.; Reise, Steven P.; Kim, Kevin H.

    2007-01-01

    The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) [times] 3 (exogenous latent mean difference) [times] 3 (endogenous latent mean difference) [times] 3 (correlation between factors) [times] 3 (mixture proportions) factorial design. In addition, the efficacy of several…

  5. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
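
    The two-component normal mixture fit described here can be sketched with a plain EM loop (an illustrative re-implementation, not the authors' code; synthetic data stand in for the price series):

```python
import numpy as np

def em_two_normal(x, iters=300):
    """Maximum likelihood fit of a two-component normal mixture via EM.
    Returns (weights, means, standard deviations)."""
    mu = np.array([x.min(), x.max()])   # deterministic, well-spread start
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior component membership for every observation
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
               / (sd * np.sqrt(2.0 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum likelihood updates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return w, mu, sd
```

    Each EM iteration is guaranteed not to decrease the likelihood, which is why this simple alternation converges to the maximum likelihood estimates the paper relies on.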

  6. Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation.

    PubMed

    Marcel, Sébastien; Millán, José Del R

    2007-04-01

    In this paper, we investigate the use of brain activity for person authentication. It has been shown in previous studies that the brain-wave pattern of every individual is unique and that the electroencephalogram (EEG) can be used for biometric identification. EEG-based biometry is an emerging research topic and we believe that it may open new research directions and applications in the future. However, very little work has been done in this area, and it has focused mainly on person identification rather than person authentication. Person authentication aims to accept or to reject a person claiming an identity, i.e., comparing a biometric data sample to one template, while the goal of person identification is to match the biometric data against all the records in a database. We propose the use of a statistical framework based on Gaussian mixture models and maximum a posteriori model adaptation, successfully applied to speaker and face authentication, which can deal with only one training session. We perform intensive experimental simulations using several strict train/test protocols to show the potential of our method. We also show that some mental tasks are more appropriate for person authentication than others.
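
    The GMM/MAP scheme borrowed from speaker verification can be sketched for the mean parameters (a simplified illustration with diagonal covariances; the relevance factor r and the function name are assumptions). A background model is adapted toward a user's enrollment data, with components seen often in that data pulled strongly and unseen components left near their priors:

```python
import numpy as np

def map_adapt_means(ubm_w, ubm_mu, ubm_var, X, r=16.0):
    """Relevance-MAP adaptation of GMM means (diagonal covariance).
    ubm_*: background model parameters; X: enrollment frames (n, d);
    r: relevance factor controlling the pull of the prior."""
    # responsibilities of each UBM component for each frame
    diff = X[:, None, :] - ubm_mu[None, :, :]
    log_d = -0.5 * (diff ** 2 / ubm_var).sum(-1) \
            - 0.5 * np.log(2.0 * np.pi * ubm_var).sum(-1)
    log_p = np.log(ubm_w) + log_d
    log_p -= log_p.max(axis=1, keepdims=True)
    p = np.exp(log_p)
    p /= p.sum(axis=1, keepdims=True)
    # zeroth- and first-order sufficient statistics
    n_k = p.sum(axis=0)
    xbar = (p[:, :, None] * X[:, None, :]).sum(0) / (n_k[:, None] + 1e-12)
    # MAP interpolation between the data mean and the UBM mean
    alpha = n_k / (n_k + r)
    return alpha[:, None] * xbar + (1.0 - alpha[:, None]) * ubm_mu
```

    Authentication then scores a claimed identity by the likelihood ratio between the adapted model and the background model, which is the accept/reject decision the abstract describes.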

  7. Toxicity of Pesticide Tank Mixtures from Rice Crops Against Telenomus podisi Ashmead (Hymenoptera: Platygastridae).

    PubMed

    de B Pazini, J; Pasini, R A; Rakes, M; de Armas, F S; Seidel, E J; da S Martins, J F; Grützmacher, A D

    2017-08-01

    Insecticides, herbicides, and fungicides are commonly applied as tank mixtures in order to control phytosanitary problems in crops. However, there is no information regarding the effects of these mixtures on non-target organisms associated with the rice agroecosystem. The aim of this study was to determine the toxicity of pesticide tank mixtures from rice crops against Telenomus podisi Ashmead (Hymenoptera: Platygastridae). Based on methods adapted from the International Organisation for Biological and Integrated Control of Noxious Animals and Plants (IOBC), adults of T. podisi were exposed to residues of insecticides, herbicides, and fungicides, individually or in mixtures commonly used by growers, in the laboratory and on rice plants in a greenhouse. The mixture of the fungicides tebuconazole, tricyclazole, and azoxystrobin and the mixture of the herbicides cyhalofop-butyl, imazethapyr, imazapyr/imazapic, and penoxsulam are harmless to T. podisi and can be used in irrigated rice crops without harming natural biological control. The insecticides cypermethrin, thiamethoxam, and bifenthrin/carbosulfan increase the toxicity of tank mixtures with herbicides and fungicides, being more toxic to T. podisi and less preferred for use in phytosanitary treatments in rice crop protection.

  8. Optimized Unlike-Pair Interactions for Water-Carbon Dioxide Mixtures described by the SPC/E and EPM2 Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlcek, Lukas; Chialvo, Ariel A; Cole, David

    The unlike-pair interaction parameters for the SPC/E-EPM2 models have been optimized to reproduce the mutual solubility of water and carbon dioxide at the conditions of liquid-supercritical fluid phase equilibria. An efficient global optimization of the parameters is achieved through an implementation of the coupling parameter approach, adapted to phase equilibria calculations in the Gibbs ensemble, that explicitly corrects for the over-polarization of the SPC/E water molecule in the non-polar CO2 environments. The resulting H2O-CO2 force field accurately reproduces the available experimental solubilities at the two fluid phases in equilibria as well as the corresponding species tracer diffusion coefficients.

  9. Estimating vapor pressures of pure liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraburda, S.S.

    1996-03-01

    Calculating the vapor pressures for pure liquid chemicals is a key step in designing equipment for separation of liquid mixtures. Here is a useful way to develop an equation for predicting vapor pressures over a range of temperatures. The technique uses known vapor pressure points for different temperatures. Although a vapor-pressure equation is being showcased in this article, the basic method has much broader applicability -- in fact, users can apply it to develop equations for any temperature-dependent model. The method can be easily adapted for use in mathematics-evaluation software, minimizing the need for any programming. The model used is the Antoine equation, which typically provides a good correlation with experimental or measured data.
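
The technique of building a vapor-pressure equation from known points can be sketched concretely with the Antoine model, log10(P) = A - B/(C + T). A minimal Python illustration (the water constants used in the check are standard textbook values, included only to exercise the fit, not taken from this article): with exactly three (T, P) points, the three constants can be recovered in closed form.

```python
import math

def fit_antoine(points):
    """Solve the Antoine constants A, B, C exactly from three (T, P) points.
    Model: log10(P) = A - B / (C + T). Eliminating A and B from the three
    equations leaves a single linear condition for C."""
    (T1, P1), (T2, P2), (T3, P3) = points
    y1, y2, y3 = (math.log10(P) for P in (P1, P2, P3))
    r = (y1 - y2) * (T2 - T3) / ((y2 - y3) * (T1 - T2))  # r = (C+T3)/(C+T1)
    C = (T3 - r * T1) / (r - 1.0)
    B = (y1 - y2) * (C + T1) * (C + T2) / (T1 - T2)
    A = y1 + B / (C + T1)
    return A, B, C

def vapor_pressure(T, A, B, C):
    """Antoine vapor pressure at temperature T (units follow the constants)."""
    return 10.0 ** (A - B / (C + T))
```

With more than three measured points, the same model would instead be fitted by least squares; the closed-form route above is the minimal version of the point-based approach the article describes.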

  10. Silicon release coating, method of making same, and method of using same

    DOEpatents

    Jonczyk, Ralf [Wilmington, DE]

    2011-11-22

    A method of making a release coating includes the following steps: forming a mixture that includes (a) solid components comprising (i) 20-99% silicon by weight and (ii) 1-80% silicon nitride by weight and (b) a solvent; applying the mixture to an inner portion of a crucible or graphite board adapted to form an ingot or wafer comprising silicon; and annealing the mixture in a nitrogen atmosphere at a temperature ranging from 1000 to 2000°C. The invention may also relate to release coatings and methods of making a silicon ingot or wafer including the use of a release coating.

  11. Laboratory studies of cometary ice analogues

    NASA Astrophysics Data System (ADS)

    Schmitt, B.; Espinasse, S.; Grim, R. J. A.; Greenberg, J. M.; Klinger, J.

    1989-12-01

    Laboratory studies were performed in order to simulate the physico-chemical processes that are likely to occur in the near surface layers of short and intermediate period comets. Pure H2O ice as well as CO:H2O, CO2:H2O, CH4:H2O, CO:CO2:H2O, and NH3:H2O ice mixtures were studied in the temperature range between 10 and 180 K. The evolution of the composition of ice mixtures, the crystallization of H2O ice, as well as the formation and decomposition of clathrate hydrate by different processes were studied as a function of temperature and time. Using the results together with numerical modeling, predictions are made about the survival of amorphous ice, CO, CO2, CH4, and NH3 in the near surface layers of short period comets. The likelihood of finding clathrate and molecular hydrates is discussed. It is proposed that the analytical methods developed here could be fruitfully adapted to the analysis of returned comet samples.

  12. Computational Study of Near-limit Propagation of Detonation in Hydrogen-air Mixtures

    NASA Technical Reports Server (NTRS)

    Yungster, S.; Radhakrishnan, K.

    2002-01-01

    A computational investigation of the near-limit propagation of detonation in lean and rich hydrogen-air mixtures is presented. The calculations were carried out over an equivalence ratio range of 0.4 to 5.0, pressures ranging from 0.2 bar to 1.0 bar, and ambient initial temperature. The computations involved solution of the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing (TVD) scheme and a point implicit, first-order-accurate time marching algorithm. The hydrogen-air combustion was modeled with a 9-species, 19-step reaction mechanism. A multi-level, dynamically adaptive grid was utilized in order to resolve the structure of the detonation. The results of the computations indicate that when hydrogen concentrations are reduced below certain levels, the detonation wave switches from a high-frequency, low-amplitude oscillation mode to a low-frequency mode exhibiting large fluctuations in the detonation wave speed; that is, a 'galloping' propagation mode is established.

  13. Application of phase-trafficking methods to natural products research.

    PubMed

    Araya, Juan J; Montenegro, Gloria; Mitscher, Lester A; Timmermann, Barbara N

    2010-09-24

    A novel simultaneous phase-trafficking approach using spatially separated solid-supported reagents for rapid separation of neutral, basic, and acidic compounds from organic plant extracts with minimum labor is reported. Acidic and basic ion-exchange resins were physically separated into individual sacks ("tea bags") for trapping basic and acidic compounds, respectively, leaving behind in solution neutral components of the natural mixtures. Trapped compounds were then recovered from solid phase by appropriate suspension in acidic or basic solutions. The feasibility of the proposed separation protocol was demonstrated and optimized with an "artificial mixture" of model compounds. In addition, the utility of this methodology was illustrated with the successful separation of the alkaloid skytanthine from Skytanthus acutus Meyen and the main catechins and caffeine from Camellia sinensis L. (Kuntze). This novel approach offers multiple advantages over traditional extraction methods, as it is not labor intensive, makes use of only small quantities of solvents, produces fractions in adequate quantities for biological assays, and can be easily adapted to field conditions for bioprospecting activities.

  14. By Capturing Inflammatory Lipids Released from Dying Cells, the Receptor CD14 Induces Inflammasome-Dependent Phagocyte Hyperactivation.

    PubMed

    Zanoni, Ivan; Tan, Yunhao; Di Gioia, Marco; Springstead, James R; Kagan, Jonathan C

    2017-10-17

    A heterogeneous mixture of lipids called oxPAPC, derived from dying cells, can hyperactivate dendritic cells (DCs) but not macrophages. Hyperactive DCs are defined by their ability to release interleukin-1 (IL-1) while maintaining cell viability, endowing these cells with potent aptitude to stimulate adaptive immunity. Herein, we found that the bacterial lipopolysaccharide receptor CD14 captured extracellular oxPAPC and delivered these lipids into the cell to promote inflammasome-dependent DC hyperactivation. Notably, we identified two specific components within the oxPAPC mixture that hyperactivated macrophages, allowing these cells to release IL-1 for several days, by a CD14-dependent process. In murine models of sepsis, conditions that promoted cell hyperactivation resulted in inflammation but not lethality. Thus, multiple phagocytes are capable of hyperactivation in response to oxPAPC, with CD14 acting as the earliest regulator in this process, serving to capture and transport these lipids to promote inflammatory cell fate decisions. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Laboratory studies of molecular growth in the Titan ionosphere.

    PubMed

    Thissen, Roland; Vuitton, Veronique; Lavvas, Panayotis; Lemaire, Joel; Dehon, Christophe; Dutuit, Odile; Smith, Mark A; Turchini, Stefano; Catone, Daniele; Yelle, Roger V; Pernot, Pascal; Somogyi, Arpad; Coreno, Marcello

    2009-10-22

    Experimental simulations of the initial steps of the ion-molecule reactions occurring in the ionosphere of Titan were performed at the synchrotron source Elettra in Italy. The measurements consisted of irradiating gas mixtures with a monochromatic photon beam, from the methane ionization threshold at 12.6 eV, up to and beyond the molecular nitrogen dissociative ionization threshold at 24.3 eV. Three gas mixtures of increasing complexity were used: N(2)/CH(4) (0.96/0.04), N(2)/CH(4)/C(2)H(2) (0.96/0.04/0.001), and N(2)/CH(4)/C(2)H(2)/C(2)H(4) (0.96/0.04/0.001/0.001). The resulting ions were detected with a high-resolution (1 T) FT-ICR mass spectrometer as a function of time and VUV photon energy. In order to interpret the experimental results, a Titan ionospheric model was adapted to the laboratory conditions. This model had previously allowed the identification of the ions detected in the Titan upper atmosphere by the ion neutral mass spectrometer (INMS) onboard the Cassini spacecraft. Comparison between observed and modeled ion densities validates the kinetic model (reactions, rate constants, product branching ratios) for the primary steps of molecular growth. It also reveals differences that we attribute to an intense surface chemistry. This result implies that heterogeneous chemistry on aerosols might efficiently produce HCN and NH(3) in the Titan upper atmosphere.

  16. Blended particle filters for large-dimensional chaotic dynamical systems

    PubMed Central

    Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.

    2014-01-01

    A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886

  17. Adaptation of Timing Behavior to a Regular Change in Criterion

    PubMed Central

    Sanabria, Federico; Oldenburg, Liliana

    2013-01-01

    This study examined how operant behavior adapted to an abrupt but regular change in the timing of reinforcement. Pigeons were trained on a fixed interval (FI) 15-s schedule of reinforcement during half of each experimental session, and on an FI 45-s (Experiment 1), FI 60-s (Experiment 2), or extinction schedule (Experiment 3) during the other half. FI performance was well characterized by a mixture of two gamma-shaped distributions of responses. When a longer FI schedule was in effect in the first half of the session (Experiment 1), a constant interference by the shorter FI was observed. When a shorter FI schedule was in effect in the first half of the session (Experiments 1, 2, and 3), the transition between schedules involved a decline in responding and a progressive rightward shift in the mode of the response distribution initially centered around the short FI. These findings are discussed in terms of the constraints they impose on quantitative models of timing, and in relation to the implications for information-based models of associative learning. PMID:23962672

  18. Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.

    PubMed

    Nagai, Takashi

    2017-10-01

    The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.
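
The two reference models compared above have simple closed forms, which can be sketched as follows (a hedged illustration: concentration addition is written with a shared log-logistic slope, a common simplifying assumption, and all concentrations and EC50s are illustrative, not the study's data):

```python
def independent_action(effects):
    """IA model for dissimilarly acting chemicals: each entry of
    `effects` is the fractional effect (0..1) of one component alone;
    the combined effect multiplies the unaffected fractions."""
    unaffected = 1.0
    for e in effects:
        unaffected *= (1.0 - e)
    return 1.0 - unaffected

def concentration_addition(concs, ec50s, hill=1.0):
    """CA model for similarly acting chemicals: concentrations are
    summed as toxic units (c / EC50), then one shared log-logistic
    dose-response curve with slope `hill` is applied to the total."""
    tu = sum(c / e for c, e in zip(concs, ec50s))  # total toxic units
    return tu**hill / (1.0 + tu**hill)
```

For example, two dissimilarly acting components each causing a 50% effect combine to 75% under independent action, while any mixture totalling one toxic unit sits exactly at the 50% effect level under concentration addition.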

  19. Bayesian demosaicing using Gaussian scale mixture priors with local adaptivity in the dual tree complex wavelet packet transform domain

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Aelterman, Jan; Luong, Hiep; Pizurica, Aleksandra; Philips, Wilfried

    2013-02-01

    In digital cameras and mobile phones, there is an ongoing trend to increase the image resolution, decrease the sensor size, and use lower exposure times. Because smaller sensors inherently lead to more noise and a worse spatial resolution, digital post-processing techniques are required to resolve many of the artifacts. Color filter arrays (CFAs), which use alternating patterns of color filters, are very popular for price and power consumption reasons. However, color filter arrays require the use of a post-processing technique such as demosaicing to recover full-resolution RGB images. Recently, there has been some interest in techniques that jointly perform demosaicing and denoising. This has the advantage that the demosaicing and denoising can be performed optimally (e.g., in the MSE sense) for the considered noise model, while avoiding artifacts introduced when applying demosaicing and denoising sequentially. In this paper, we continue the research line of wavelet-based demosaicing techniques. These approaches are computationally simple and well suited for combination with denoising. We therefore derive Bayesian minimum mean squared error (MMSE) joint demosaicing and denoising rules in the complex wavelet packet domain, taking local adaptivity into account. As an image model, we use Gaussian scale mixtures, thereby taking advantage of the directionality of the complex wavelets. Our results show that this technique is well capable of reconstructing fine details in the image, while removing all of the noise, at a relatively low computational cost. In particular, the complete reconstruction (including color correction, white balancing, etc.) of a 12-megapixel RAW image takes 3.5 s on a recent mid-range GPU.
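
Under a Gaussian scale mixture prior, the MMSE estimate of a noisy wavelet coefficient is a data-dependent shrinkage; in its simplest Gaussian special case it reduces to a locally adaptive Wiener rule. A hedged 1-D sketch of that special case (not the paper's full complex wavelet packet estimator):

```python
def wiener_shrink(coeffs, noise_var, window=5):
    """Locally adaptive MMSE shrinkage of 1-D wavelet coefficients,
    the Gaussian special case of a Gaussian scale mixture prior.
    The signal variance is estimated in a sliding window, and each
    coefficient is scaled by sig2 / (sig2 + noise_var)."""
    n = len(coeffs)
    half = window // 2
    out = []
    for i in range(n):
        neigh = coeffs[max(0, i - half):min(n, i + half + 1)]
        local_energy = sum(c * c for c in neigh) / len(neigh)   # E[y^2] estimate
        sig2 = max(local_energy - noise_var, 0.0)               # signal variance
        out.append(sig2 / (sig2 + noise_var) * coeffs[i])
    return out
```

Coefficients in flat regions (local energy near the noise floor) are suppressed toward zero, while large edge coefficients pass almost unchanged; the full GSM estimator additionally integrates over the hidden scale variable and the directional complex-wavelet subbands.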

  20. The perception of odor objects in everyday life: a review on the processing of odor mixtures

    PubMed Central

    Thomas-Danguin, Thierry; Sinding, Charlotte; Romagny, Sébastien; El Mountassir, Fouzia; Atanasova, Boriana; Le Berre, Elodie; Le Bon, Anne-Marie; Coureaud, Gérard

    2014-01-01

    Smelling monomolecular odors hardly ever occurs in everyday life, and the daily functioning of the sense of smell relies primarily on the processing of complex mixtures of volatiles that are present in the environment (e.g., emanating from food or conspecifics). Such processing allows for the instantaneous recognition and categorization of smells and also for the discrimination of odors among others to extract relevant information and to adapt efficiently in different contexts. The neurophysiological mechanisms underpinning this highly efficient analysis of complex mixtures of odorants are beginning to be unraveled and support the idea that olfaction, like vision and audition, relies on odor-object encoding. This configural processing of odor mixtures, which is empirically subject to important applications in our societies (e.g., the art of perfumers, flavorists, and wine makers), has been scientifically studied only during the last decades. This processing depends on many individual factors, among which are the developmental stage, lifestyle, physiological and mood state, and cognitive skills; this processing also presents striking similarities between species. The present review gathers the recent findings, as observed in animals, healthy subjects, and/or individuals with affective disorders, supporting the perception of complex odor stimuli as odor objects. It also discusses peripheral to central processing, and cognitive and behavioral significance. Finally, this review highlights that the study of odor mixtures is an original window allowing for the investigation of daily olfaction and emphasizes the need for knowledge about the underlying biological processes, which appear to be crucial for our representation of and adaptation to the chemical environment. PMID:24917831

  1. Multiple-copy state discrimination: Thinking globally, acting locally

    NASA Astrophysics Data System (ADS)

    Higgins, B. L.; Doherty, A. C.; Bartlett, S. D.; Pryde, G. J.; Wiseman, H. M.

    2011-05-01

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
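
For intuition, the collective-measurement benchmark above has a closed form in the pure-state special case (the paper treats mixed qubit states, so this is a simplified sketch): the Helstrom bound for two equiprobable states, where the overlap of the N-copy states is the single-copy overlap raised to the Nth power.

```python
import math

def helstrom_error_pure(overlap, n_copies=1):
    """Minimum error probability for discriminating two equiprobable
    PURE states with |<psi|phi>| = overlap, given n_copies copies
    measured collectively. The N-copy overlap is overlap**n_copies,
    and the Helstrom bound is (1 - sqrt(1 - s^2)) / 2."""
    s = overlap ** n_copies
    return 0.5 * (1.0 - math.sqrt(1.0 - s * s))
```

The error probability decays exponentially in N, which is the scaling against which the fixed, adaptive, and collective schemes in the abstract are compared.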

  2. Multiple-copy state discrimination: Thinking globally, acting locally

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, B. L.; Pryde, G. J.; Wiseman, H. M.

    2011-05-15

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.

  3. Degradation of pesticides in biobeds: the effect of concentration and pesticide mixtures.

    PubMed

    Fogg, Paul; Boxall, Alistair B A; Walker, Allan

    2003-08-27

    Biobeds aim to create an environment whereby any pesticide spills are retained and then degraded, thus reducing the potential for surface or groundwater contamination. Biobeds may receive high concentrations of relatively complex mixtures of pesticides. The effects of concentration and pesticide interaction on degradation rate were therefore investigated. At concentrations up to 20 times the maximum recommended application rate for isoproturon and chlorothalonil, the rate of degradation in topsoil and biomix decreased with increasing concentration. With the exception of isoproturon at concentrations above 11 mg kg(-1), degradation was quicker in biomix (a composted mixture of topsoil, compost, and wheat straw) than in topsoil. One possible explanation for faster isoproturon degradation in topsoil as compared to biomix may be that previous treatments of isoproturon applied to the field soil as part of normal agricultural practices had resulted in proliferation of microbial communities specifically adapted to use isoproturon as an energy source. Such microbial adaptation could enhance the performance of a biobed. Studies with a mixture of isoproturon and chlorothalonil showed that interactions between pesticides are possible. In biomix, the degradation of either isoproturon or chlorothalonil was unaffected by the presence of the other pesticide, whereas in topsoil, isoproturon DT(50) values increased from 18.5 to 71.5 days in the presence of chlorothalonil. These studies suggest that biobeds appear capable of treating high concentrations of more than one pesticide.

  4. The role of nucleotides in augmentation of lymphocyte locomotion: Adaptional countermeasure development in microgravity analog environments

    NASA Astrophysics Data System (ADS)

    Sundaresan, Alamelu; Kulkarni, Anil D.; Yamauchi, Keiko; Pellis, Neal R.

    2006-09-01

    Space travel and long-term space residence, such as envisaged in the exploration era, place burdens on the immune system. An optimal immune response is required to counter and withstand exposure to pathogens. Countermeasure development is an important avenue in space research, especially for long-term space exploration. Microgravity exposure causes detrimental effects on lymphocyte functions, which may impair the immune response. Impaired lymphocyte function can be remedied by bypassing cell membrane events, using compounds such as phorbol myristate acetate (PMA). Since activation in mouse splenocytes was augmented using nucleotides, it was essential to observe their effects on human lymphocyte locomotion. A nucleotide/nucleoside (NT/NT) mixture from Otsuka Pharmaceuticals (Naruto, Japan) was used at recommended doses. In lymphocytes cultured in modeled microgravity, the NT/NT mixture restored locomotion by more than 87%, similar to the response documented with PMA. Both 12 µM and 120 µM doses performed similarly. These are preliminary results pointing toward the possible use of the NT/NT mixture to mitigate immune suppression in microgravity. More studies in this direction are required to delineate the role of NT/NT in the immune response in microgravity.

  5. Survival of Norway spruce remains higher in mixed stands under a dryer and warmer climate.

    PubMed

    Neuner, Susanne; Albrecht, Axel; Cullmann, Dominik; Engels, Friedrich; Griess, Verena C; Hahn, W Andreas; Hanewinkel, Marc; Härtl, Fabian; Kölling, Christian; Staupendahl, Kai; Knoke, Thomas

    2015-02-01

    Shifts in tree species distributions caused by climatic change are expected to cause severe losses in the economic value of European forestland. However, this projection disregards potential adaptation options such as tree species conversion, shorter production periods, or establishment of mixed-species forests. The effect of tree species mixture has, as yet, not been quantitatively investigated for its potential to mitigate future increases in production risks. For the first time, we use survival time analysis to assess the effects of climate, species mixture, and soil condition on survival probabilities for Norway spruce and European beech. Accelerated failure time (AFT) models based on an extensive dataset of almost 65,000 trees from the European Forest Damage Survey (FDS), part of the Europe-wide Level I monitoring network, predicted a 24% decrease in survival probability for Norway spruce in pure stands at age 120 when unfavorable changes in climate conditions were assumed. Increasing species admixture greatly reduced the negative effects of unfavorable climate conditions, resulting in a decline in survival probabilities of only 7%. We conclude that future studies of forest management under climate change, as well as forest policy measures, need to take this as yet unconsidered, strongly advantageous effect of tree species mixture into account. © 2014 John Wiley & Sons Ltd.
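
For readers unfamiliar with accelerated failure time models, here is a hedged sketch of their mechanics (a Weibull AFT with made-up coefficients, not the fitted FDS estimates): covariates rescale a characteristic lifetime, which in turn sets the survival probability at a given stand age.

```python
import math

def aft_weibull_survival(age, climate, mixture,
                         b0=6.0, b1=-0.4, b2=0.3, shape=2.0):
    """Weibull AFT survival probability S(t) = exp(-(t / scale)^shape),
    where the characteristic lifetime is accelerated or decelerated by
    covariates: scale = exp(b0 + b1*climate + b2*mixture).
    `climate` is an unfavorable-climate index and `mixture` the species
    admixture proportion (0..1); all coefficients are illustrative."""
    scale = math.exp(b0 + b1 * climate + b2 * mixture)
    return math.exp(-((age / scale) ** shape))
```

With these toy coefficients, survival at age 120 falls under an unfavorable climate, and the fall shrinks as the admixture proportion grows, mirroring the qualitative pattern reported above.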

  6. Structure and Stability of One-Dimensional Detonations in Ethylene-Air Mixtures

    NASA Technical Reports Server (NTRS)

    Yungster, S.; Radhakrishnan, K.; Perkins, High D. (Technical Monitor)

    2003-01-01

    The propagation of one-dimensional detonations in ethylene-air mixtures is investigated numerically by solving the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing scheme and a point implicit, first-order-accurate, time marching algorithm. The ethylene-air combustion is modeled with a 20-species, 36-step reaction mechanism. A multi-level, dynamically adaptive grid is utilized, in order to resolve the structure of the detonation. Parametric studies over an equivalence ratio range of 0.5 < phi < 3 for different initial pressures and degrees of detonation overdrive demonstrate that the detonation is unstable for low degrees of overdrive, but the dynamics of wave propagation varies with fuel-air equivalence ratio. For equivalence ratios less than approximately 1.2 the detonation exhibits a short-period oscillatory mode, characterized by high-frequency, low-amplitude waves. Richer mixtures (phi > 1.2) exhibit a low-frequency mode that includes large fluctuations in the detonation wave speed; that is, a galloping propagation mode is established. At high degrees of overdrive, stable detonation wave propagation is obtained. A modified McVey-Toong short-period wave-interaction theory is in excellent agreement with the numerical simulations.

  7. Identifying patterns of adaptation in breast cancer patients with cancer-related fatigue using response shift analyses at subgroup level.

    PubMed

    Salmon, Maxime; Blanchin, Myriam; Rotonda, Christine; Guillemin, Francis; Sébille, Véronique

    2017-11-01

    Fatigue is the most prevalent symptom in breast cancer. It might be perceived differently among patients over time as a consequence of patients' differing adaptation and psychological adjustment to their cancer experience, which can be related to response shift (RS). RS analyses can provide important insights into patients' adaptation to cancer, but it is usually assumed that RS occurs in the same way in all individuals, which is unrealistic. This study aimed to identify subgroups of patients in which different RS effects on self-reported fatigue could occur over time, using a combination of methods for manifest and latent variables. The FATSEIN study comprised 466 breast cancer patients followed over a 2-year period. Fatigue was measured with the Multidimensional Fatigue Inventory questionnaire (MFI-20) during 10 visits. A novel combination of mixed models, growth mixture modeling, and structural equation modeling was used to assess the occurrence of RS in fatigue changes and to identify subgroups displaying different RS patterns over time. An increase in fatigue was evidenced over the 8-month follow-up, followed by a decrease between the 8- and 24-month follow-ups. Four latent classes of patients were identified. Different RS patterns were detected in all latent classes between inclusion and 8 months (last cycle of chemotherapy). No RS was evidenced between the 8- and 24-month follow-ups. Several RS effects were evidenced in different groups of patients. Women seemed to adapt differently to their treatment and breast cancer experience, possibly indicating differing needs for medical/psychological support. © 2017 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  8. Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC

    ERIC Educational Resources Information Center

    Depaoli, Sarah

    2012-01-01

    Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…

  9. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
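
The idea of ODE constrained mixture modelling can be sketched compactly (a hedged toy example: a one-parameter decay ODE per subpopulation, Gaussian observation noise, and made-up rates; the paper's NGF-induced Erk1/2 model is far richer): the ODE supplies each subpopulation's mean trajectory, and the mixture supplies cell-to-cell variability around those means.

```python
import math

def gauss_pdf(y, mu, sigma):
    """Density of a normal distribution with mean mu and std sigma."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def subpop_posterior(y, t, weights, rates, x0=1.0, sigma=0.05):
    """ODE constrained mixture sketch: subpopulation i follows
    dx/dt = -k_i * x, so its mean at time t is x0 * exp(-k_i * t);
    cell-to-cell variability is a Gaussian mixture around these means.
    Returns P(subpopulation i | measurement y at time t)."""
    means = [x0 * math.exp(-k * t) for k in rates]
    likes = [w * gauss_pdf(y, m, sigma) for w, m in zip(weights, means)]
    total = sum(likes)
    return [l / total for l in likes]
```

At t = 1, a cell measured near exp(-0.5) ≈ 0.61 is assigned almost entirely to the slow subpopulation; in a full analysis the weights, rates, and noise level would be estimated jointly across experimental conditions, which is what couples the ODE and mixture parts.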

  10. Variable selection in a flexible parametric mixture cure model with interval-censored data.

    PubMed

    Scolas, Sylvie; El Ghouch, Anouar; Legrand, Catherine; Oulhaj, Abderrahim

    2016-03-30

    In standard survival analysis, it is generally assumed that every individual will someday experience the event of interest. However, this is not always the case, as some individuals may not be susceptible to this event. Moreover, in medical studies it is common that patients attend scheduled visits, so the time to the event is only known to fall between two visits; that is, the data are interval-censored with a cure fraction. Variable selection in such a setting is of particular interest, since the covariates affecting survival are not necessarily the same as those affecting the probability of experiencing the event. The objective of this paper is to develop a parametric but flexible statistical model for data that are interval-censored and include a fraction of cured individuals when the number of potential covariates may be large. We use the parametric mixture cure model with an accelerated failure time regression model for the survival, along with the extended generalized gamma for the error term. To overcome the issue of unstable and non-continuous variable selection procedures, we extend the adaptive LASSO to our model. By means of simulation studies, we show the good performance of our method and discuss the behavior of the estimates under varying cure and censoring proportions. Lastly, our proposed method is illustrated with a real dataset studying the time until conversion to mild cognitive impairment, a possible precursor of Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
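
The adaptive LASSO step can be sketched in isolation (with a plain least-squares loss rather than the paper's interval-censored cure-model likelihood; the data and penalty settings are illustrative): an initial estimate supplies per-coefficient penalty weights, which is what stabilizes the selection.

```python
import numpy as np

def lasso_cd(X, y, alpha, w, n_iter=200):
    """Coordinate-descent LASSO with per-coefficient penalty weights w;
    choosing w_j = 1/|beta_init_j| turns it into the adaptive LASSO."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            rho = X[:, j] @ r
            # soft-threshold update; threshold scales with the adaptive weight
            beta[j] = np.sign(rho) * max(abs(rho) - n * alpha * w[j], 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(0.0, 0.5, size=n)

beta_init = np.linalg.lstsq(X, y, rcond=None)[0]   # step 1: initial OLS estimate
w = 1.0 / (np.abs(beta_init) + 1e-8)               # step 2: adaptive penalty weights
beta_adapt = lasso_cd(X, y, alpha=0.05, w=w)       # step 3: weighted LASSO
print(np.round(beta_adapt, 2))
```

Because truly irrelevant covariates get large weights, they are penalized hard and set exactly to zero, while the large coefficients are only lightly shrunk; this oracle-like behaviour is the reason the adaptive LASSO, rather than the plain LASSO, is extended to the cure model.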

  11. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method provides the framework used to fit the mixture model. Bayesian methods are widely used because their asymptotic properties yield remarkable results; they also exhibit consistency, meaning the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion; identifying the number of components is important because a misspecified number may lead to invalid results. The Bayesian method is then used to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
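
A minimal sketch of BIC-based selection of the number of mixture components (a hand-rolled EM on synthetic univariate data, not the paper's rubber-price analysis; all parameters are illustrative):

```python
import numpy as np

def em_loglik(x, k, n_iter=200, seed=0):
    """Fit a univariate k-component normal mixture by EM; return the log-likelihood."""
    rng = np.random.default_rng(seed)
    n = len(x)
    mu = rng.choice(x, size=k, replace=False)          # initialise means at data points
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)  # E-step: responsibilities
        nk = resp.sum(axis=0)                          # M-step: weights, means, variances
        pi, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-6)
    return np.log(dens.sum(axis=1)).sum()

def bic(loglik, k, n):
    return (3 * k - 1) * np.log(n) - 2 * loglik   # k means + k variances + (k-1) weights

rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1, 600)])   # true k = 2
scores = {k: bic(em_loglik(x, k), k, len(x)) for k in (1, 2, 3)}
print(min(scores, key=scores.get))   # the k with the smallest BIC is selected
```

The BIC penalty term (3k − 1) log n is what prevents the likelihood from always favouring more components; here it is expected to favour k = 2.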

  12. Accounting for non-independent detection when estimating abundance of organisms with a Bayesian approach

    USGS Publications Warehouse

    Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth

    2011-01-01

    Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, which makes individual detections non-independent, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models.
Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
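
The central point of item 12, that correlated surfacing inflates count variance beyond what a binomial detection model expects, can be seen in a few lines of simulation (the abundance, detection probability and correlation below are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
N, p, surveys = 100, 0.4, 20000           # hypothetical true abundance, mean detection

# Independent detections: counts are Binomial(N, p).
y_binom = rng.binomial(N, p, surveys)

# Correlated detections: per-survey detection probability drawn from a Beta
# with mean p; rho controls the within-survey correlation (overdispersion).
rho = 0.3
a = p * (1 - rho) / rho
b = (1 - p) * (1 - rho) / rho
y_betabin = rng.binomial(N, rng.beta(a, b, surveys))

# Same mean, very different variance: Var = N p (1-p) [1 + (N-1) rho].
print(round(y_binom.var(), 1), round(y_betabin.var(), 1))
```

An N-mixture model that assumes Binomial(N, p) must explain this extra variance, and it typically does so by inflating the abundance estimate; the beta-binomial extension absorbs it through the correlation parameter instead.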

  13. A competitive binding model predicts the response of mammalian olfactory receptors to mixtures

    NASA Astrophysics Data System (ADS)

    Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay

    Most natural odors are complex mixtures of many odorants, but, owing to the large number of possible mixtures, only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system, we need methods to predict responses to complex mixtures from single odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions where only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10 - 30 % of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
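
A minimal sketch of a competitive-binding mixture response of the kind described (one binding site, Hill coefficient 1; the EC50s and efficacies below are hypothetical, not the measured receptor parameters):

```python
import numpy as np

def single_response(c, ec50, e, rmax=1.0):
    """Hill-type dose-response for a single odorant (Hill coefficient 1)."""
    return rmax * e * (c / ec50) / (1.0 + c / ec50)

def mixture_response(c, ec50, e, rmax=1.0):
    """Competitive binding: odorants compete for one site, so every component's
    contribution is normalised by the summed occupancy of all components."""
    occ = np.asarray(c, float) / np.asarray(ec50, float)
    return rmax * np.sum(np.asarray(e, float) * occ) / (1.0 + occ.sum())

ec50 = [1.0, 5.0]        # hypothetical half-max concentrations
e = [1.0, 0.4]           # hypothetical efficacies
print(round(mixture_response([1.0, 5.0], ec50, e), 3))
```

The mixture response is necessarily below the sum of the single-odorant responses, so suppression-like nonlinearity is built into the competition for the single site even before odorant-odorant interaction terms are added.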

  14. Health risks associated with inhaled nasal toxicants.

    PubMed

    Feron, V J; Arts, J H; Kuper, C F; Slootweg, P J; Woutersen, R A

    2001-05-01

    Health risks of inhaled nasal toxicants were reviewed with emphasis on chemically induced nasal lesions in humans, sensory irritation, olfactory and trigeminal nerve toxicity, nasal immunopathology and carcinogenesis, nasal responses to chemical mixtures, in vitro models, and nasal dosimetry- and metabolism-based extrapolation of nasal data in animals to humans. Conspicuous findings in humans are the effects of outdoor air pollution on the nasal mucosa, and tobacco smoking as a risk factor for sinonasal squamous cell carcinoma. Objective methods in humans to discriminate between sensory irritation and olfactory stimulation and between adaptation and habituation have been introduced successfully, providing more relevant information than sensory irritation studies in animals. Against the background of chemoperception as a dominant window of the brain on the outside world, nasal neurotoxicology is rapidly developing, focusing on olfactory and trigeminal nerve toxicity. Better insight into the processes underlying neurogenic inflammation may increase our knowledge of the causes of the various chemical sensitivity syndromes. Nasal immunotoxicology is extremely complex, which is mainly due to the pivotal role of nasal lymphoid tissue in the defense of the middle ear, eye, and oral cavity against antigenic substances, and the important function of the nasal passages in brain drainage in rats. The crucial role of tissue damage and reactive epithelial hyperproliferation in nasal carcinogenesis has become overwhelmingly clear as demonstrated by the recently developed biologically based model for predicting formaldehyde nasal cancer risk in humans. The evidence of carcinogenicity of inhaled complex mixtures in experimental animals is very limited, while there is ample evidence that occupational exposure to mixtures such as wood, leather, or textile dust or chromium- and nickel-containing materials is associated with increased risk of nasal cancer.
It is remarkable that these mixtures are aerosols, suggesting that their "particulate nature" may be a major factor in their potential to induce nasal cancer. Studies in rats have been conducted with defined mixtures of nasal irritants such as aldehydes, using a model for competitive agonism to predict the outcome of such mixed exposures. When exposure levels in a mixture of nasal cytotoxicants were equal to or below the "No-Observed-Adverse-Effect-Levels" (NOAELs) of the individual chemicals, neither additivity nor potentiation was found, indicating that the NOAEL of the "most risky chemical" in the mixture would also be the NOAEL of the mixture. In vitro models are increasingly being used to study mechanisms of nasal toxicity. However, considering the complexity of the nasal cavity and the many factors that contribute to nasal toxicity, it is unlikely that in vitro experiments ever will be substitutes for in vivo inhalation studies. It is widely recognized that a strategic approach should be available for the interpretation of nasal effects in experimental animals with regard to potential human health risk. Mapping of nasal lesions combined with airflow-driven dosimetry and knowledge about local metabolism is a solid basis for extrapolation of animal data to humans. However, more research is needed to better understand factors that determine the susceptibility of human and animal tissues to nasal toxicants, in particular nasal carcinogens.

  15. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present its application in risk analysis, where we use the model to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
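
A sketch of VaR and CVaR under a two-component normal mixture (the regime weights, means and volatilities below are illustrative, not the fitted FBMKLCI values): VaR is a quantile of the mixture, obtained here by bisection on the mixture CDF, and CVaR is the expected loss beyond it, estimated by Monte Carlo.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical two-component mixture for returns: a "calm" and a "turbulent" regime.
w, mu, sigma = [0.8, 0.2], [0.01, -0.02], [0.03, 0.09]

def mix_cdf(x):
    return sum(wi * 0.5 * (1 + erf((x - mi) / (si * sqrt(2))))
               for wi, mi, si in zip(w, mu, sigma))

def var_level(alpha, lo=-1.0, hi=1.0):
    """VaR_alpha: the alpha-quantile of the return distribution, by bisection."""
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mix_cdf(mid) < alpha else (lo, mid)
    return (lo + hi) / 2

q = var_level(0.05)            # 5% quantile of returns
print(round(-q, 4))            # VaR reported as a positive loss number

# CVaR_0.05: expected loss conditional on exceeding VaR, by Monte Carlo.
rng = np.random.default_rng(0)
comp = rng.choice(2, 100000, p=w)
r = rng.normal(np.array(mu)[comp], np.array(sigma)[comp])
cvar = -r[r <= q].mean()
print(round(cvar, 4))
```

The fat left tail comes entirely from the low-weight, high-volatility component, which is how the mixture reproduces the leptokurtosis a single normal misses.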

  16. Improving traffic flow at a 2-to-1 lane reduction with wirelessly connected, adaptive cruise control vehicles

    NASA Astrophysics Data System (ADS)

    Davis, L. C.

    2016-06-01

    Wirelessly connected vehicles that exchange information about traffic conditions can reduce delays caused by congestion. At a 2-to-1 lane reduction, the improvement in flow past a bottleneck due to traffic with a random mixture of 40% connected vehicles is found to be 52%. Control is based on connected-vehicle-reported velocities near the bottleneck. In response to indications of congestion the connected vehicles, which are also adaptive cruise control vehicles, reduce their speed in slowdown regions. Early lane changes of manually driven vehicles from the terminated lane to the continuous lane are induced by the slowing connected vehicles. Self-organized congestion at the bottleneck is thus delayed or eliminated, depending upon the incoming flow magnitude. For the large majority of vehicles, travel times past the bottleneck are substantially reduced. Control is responsible for delaying the onset of congestion as the incoming flow increases. Adaptive cruise control increases the flow out of the congested state at the bottleneck. The nature of the congested state, when it occurs, appears to be similar under a variety of conditions. Typically 80-100 vehicles are approximately equally distributed between the lanes in the 500 m region prior to the end of the terminated lane. Without the adaptive cruise control capability, connected vehicles can delay the onset of congestion but do not increase the asymptotic flow past the bottleneck. Calculations are done using the Kerner-Klenov three-phase theory, stochastic discrete-time model for manual vehicles. The dynamics of the connected vehicles is given by a conventional adaptive cruise control algorithm plus commanded deceleration. 
Because time in the model for manual vehicles is discrete (one-second intervals), it is assumed that the acceleration of any vehicle immediately in front of a connected vehicle is constant during the time interval, thereby preserving the computational simplicity and speed of a discrete-time model.

  17. Distinguishing ferritin from apoferritin using magnetic force microscopy

    NASA Astrophysics Data System (ADS)

    Nocera, Tanya M.; Zeng, Yuzhi; Agarwal, Gunjan

    2014-11-01

    Estimating the amount of iron-replete ferritin versus iron-deficient apoferritin proteins is important in biomedical and nanotechnology applications. This work introduces a simple and novel approach to quantify ferritin by using magnetic force microscopy (MFM). We demonstrate how high magnetic moment probes enhance the magnitude of MFM signal, thus enabling accurate quantitative estimation of ferritin content in ferritin/apoferritin mixtures in vitro. We envisage MFM could be adapted to accurately determine ferritin content in protein mixtures or in small aliquots of clinical samples.

  18. Fluidizing a mixture of particulate coal and char

    DOEpatents

    Green, Norman W.

    1979-08-07

    Method of mixing particulate materials comprising contacting a primary source and a secondary source thereof whereby resulting mixture ensues; preferably at least one of the two sources has enough motion to insure good mixing and the particulate materials may be heat treated if desired. Apparatus for such mixing comprising an inlet for a primary source, a reactor communicating therewith, a feeding means for supplying a secondary source to the reactor, and an inlet for the secondary source. Feeding means is preferably adapted to supply fluidized materials.

  19. High pressure and temperature optical flow cell for near-infra-red spectroscopic analysis of gas mixtures.

    PubMed

    Norton, C G; Suedmeyer, J; Oderkerk, B; Fieback, T M

    2014-05-01

    A new optical flow cell with a new optical arrangement adapted for high pressures and temperatures using glass fibres to connect light source, cell, and spectrometer has been developed, as part of a larger project comprising new methods for in situ analysis of bio and hydrogen gas mixtures in high pressure and temperature applications. The analysis is based on measurements of optical, thermo-physical, and electromagnetic properties in gas mixtures with newly developed high pressure property sensors, which are mounted in a new apparatus which can generate gas mixtures with up to six components with an uncertainty of composition of as little as 0.1 mol. %. Measurements of several pure components of natural gases and biogases to a pressure of 20 MPa were performed on two isotherms, and with binary mixtures of the same pure gases at pressures to 17.5 MPa. Thereby a new method of analyzing the obtained spectra based on the partial density of methane was investigated.

  20. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, when the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can also be used to calculate the thermodynamic parameters of a mixture in a state of chemical equilibrium.

  1. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes, while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamic subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess high sensitivity. PMID:24992156

  2. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder over the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
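
Two of the mixing rules named above are compact enough to state directly (the permittivity values and volume fraction below are illustrative, not the measured titanium/stearic acid data; complex permittivity is written in the ε' − jε'' convention):

```python
import numpy as np

def lichtenecker(eps, vol):
    """Lichtenecker logarithmic mixing rule: ln(eps_eff) = sum_i v_i ln(eps_i)."""
    eps, vol = np.asarray(eps, complex), np.asarray(vol, float)
    return np.exp(np.sum(vol * np.log(eps)))

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett rule for spherical inclusions (eps_i, volume fraction f)
    embedded in a matrix eps_m."""
    k = f * (eps_i - eps_m) / (eps_i + 2 * eps_m)
    return eps_m * (1 + 2 * k) / (1 - k)

# Illustrative values only: a lossy inclusion in a low-loss dielectric matrix.
eps_matrix, eps_incl, f = 2.5 - 0.01j, 12.0 - 3.0j, 0.2
print(lichtenecker([eps_matrix, eps_incl], [1 - f, f]))
print(maxwell_garnett(eps_matrix, eps_incl, f))
```

Both rules interpolate between the matrix and inclusion permittivities, but they disagree in detail; that model-to-model spread is exactly what the paper quantifies against measurements.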

  3. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    PubMed

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. 
    The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity estimation simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the use of the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of the aforementioned properties, it allows segmentation to be undertaken for a wide range of spot shapes, including doughnut, sickle shape and artifacts. We apply our method for gridding, segmentation and estimation to cDNA microarray real images and artificial data. Our method provides better segmentation of spot shapes, as well as better intensity estimation, than the Spot and spotSegmentation R packages. It detected blank spots as well as bright artifacts for the real data, and estimated spot intensities with high accuracy for the synthetic data. The algorithms were implemented in Matlab. The Matlab codes implementing both the gridding and segmentation/estimation are available upon request. Supplementary material is available at Bioinformatics online.

  4. Modeling and analysis of personal exposures to VOC mixtures using copulas

    PubMed Central

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2014-01-01

    Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs), with the aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction, which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of the fitted models is assessed using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often, a single compound dominated the mixture; however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration.
Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10⁻³ for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
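
The Gaussian-copula construction used as one of the candidate models can be sketched with stdlib tools (the correlation and lognormal marginals below are invented, not the fitted RIOPA values): correlated standard normals are pushed through Φ to get dependent uniforms, then through each marginal's inverse CDF.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(3)

# Step 1: correlated standard normals (the copula's dependence structure).
rho = 0.7
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=20000)
u = np.vectorize(nd.cdf)(z)                  # step 2: dependent U(0,1) pairs

# Step 3: hypothetical lognormal marginals for two VOC concentrations;
# for a lognormal, F^{-1}(u) = exp(mu + sigma * Phi^{-1}(u)).
x1 = np.exp(0.0 + 1.0 * np.vectorize(nd.inv_cdf)(u[:, 0]))
x2 = np.exp(0.5 + 0.8 * np.vectorize(nd.inv_cdf)(u[:, 1]))

def spearman(a, b):
    """Rank correlation, which the copula fixes independently of the marginals."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

print(round(spearman(x1, x2), 2))
```

The appeal for exposure mixtures is visible in the construction: the marginals (here lognormal) and the dependence (the copula) are specified separately, so heavy-tailed marginals can be combined with tail-emphasizing dependence such as the Gumbel or t copulas.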

  5. Adapting NDOR's Roadside Seed Mixture for Local Site Conditions

    DOT National Transportation Integrated Search

    2012-09-06

    The Nebraska Department of Roads (NDOR) has considerable challenges with its objectives of rapidly establishing and maintaining a diverse and vigorous vegetation cover on roadsides. Establishing vegetation quickly on NDOR roadsides is important becau...

  6. Adaptive dynamics of competition for nutritionally complementary resources: character convergence, displacement, and parallelism.

    PubMed

    Vasseur, David A; Fox, Jeremy W

    2011-10-01

    Consumers acquire essential nutrients by ingesting the tissues of resource species. When these tissues contain essential nutrients in a suboptimal ratio, consumers may benefit from ingesting a mixture of nutritionally complementary resource species. We investigate the joint ecological and evolutionary consequences of competition for complementary resources, using an adaptive dynamics model of two consumers and two resources that differ in their relative content of two essential nutrients. In the absence of competition, a nutritionally balanced diet rarely maximizes fitness because of the dynamic feedbacks between uptake rate and resource density, whereas in sympatry, nutritionally balanced diets maximize fitness because competing consumers with different nutritional requirements tend to equalize the relative abundances of the two resources. Adaptation from allopatric to sympatric fitness optima can generate character convergence, divergence, and parallel shifts, depending not on the degree of diet overlap but on the match between resource nutrient content and consumer nutrient requirements. Contrary to previous verbal arguments that suggest that character convergence leads to neutral stability, coadaptation of competing consumers always leads to stable coexistence. Furthermore, we show that incorporating costs of consuming or excreting excess nonlimiting nutrients selects for nutritionally balanced diets and so promotes character convergence. This article demonstrates that resource-use overlap has little bearing on coexistence when resources are nutritionally complementary, and it highlights the importance of using mathematical models to infer the stability of ecoevolutionary dynamics.

  7. Approximating the nonlinear density dependence of electron transport coefficients and scattering rates across the gas-liquid interface

    NASA Astrophysics Data System (ADS)

    Garland, N. A.; Boyle, G. J.; Cocks, D. G.; White, R. D.

    2018-02-01

    This study reviews the neutral density dependence of electron transport in gases and liquids and develops a method to determine the nonlinear medium density dependence of electron transport coefficients and scattering rates required for modeling transport in the vicinity of gas-liquid interfaces. The method has its foundations in Blanc’s law for gas-mixtures and adapts the theory of Garland et al (2017 Plasma Sources Sci. Technol. 26) to extract electron transport data across the gas-liquid transition region using known data from the gas and liquid phases only. The method is systematically benchmarked against multi-term Boltzmann equation solutions for Percus-Yevick model liquids. Application to atomic liquids highlights the utility and accuracy of the derived method.
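
Blanc's law, the starting point of the derived method, is a one-liner (the transport-coefficient values below are hypothetical; the paper's contribution is precisely the nonlinear density-dependent correction to this picture):

```python
def blanc_mobility(x_fracs, coeffs):
    """Blanc's law: the mixture transport coefficient is the mole-fraction-weighted
    harmonic mean of the pure-phase values, 1/K_mix = sum_i x_i / K_i."""
    assert abs(sum(x_fracs) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(x / k for x, k in zip(x_fracs, coeffs))

# Illustrative: an electron transport coefficient interpolated between a
# gas-phase and a liquid-phase value at a 70/30 effective composition.
k_gas, k_liquid = 5.0, 1.0          # hypothetical units
print(blanc_mobility([0.7, 0.3], [k_gas, k_liquid]))
```

Across a real gas-liquid interface the coefficients themselves depend nonlinearly on the medium density, so the method in the paper replaces this fixed harmonic-mean interpolation with density-dependent transport data extracted from the two bulk phases.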

  8. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    ERIC Educational Resources Information Center

    Hsu, Jui-Chen

    2011-01-01

    Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to handle if unobserved population heterogeneity exists in the endogenous latent variables of the nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  9. Scale Mixture Models with Applications to Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Qin, Zhaohui S.; Damien, Paul; Walker, Stephen

    2003-11-01

    Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
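
A concrete instance of the construction: the normal distribution is itself a scale mixture of uniforms, via V ~ Gamma(shape 3/2, scale 2) and X | V ~ U(μ − σ√V, μ + σ√V). A quick Monte Carlo check of this known identity (sample size chosen for the sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
n, mu, sigma = 200000, 1.0, 2.0

# Draw the latent scale V, then X uniform on the interval mu +/- sigma*sqrt(V);
# marginally X is N(mu, sigma^2).
v = rng.gamma(shape=1.5, scale=2.0, size=n)
half = sigma * np.sqrt(v)
x = rng.uniform(mu - half, mu + half)

print(round(x.mean(), 2), round(x.std(), 2))   # should be close to (1.0, 2.0)
```

The point of the representation in a Bayesian setting is that, conditional on the latent scales, the data are uniform, which yields simple full conditionals for Gibbs sampling.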

  10. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    PubMed

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models for various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, the excess molar volume (V(E)). In the present study, we use a set of mixture descriptors that we designed earlier to specifically account for intermolecular interactions between the components of a mixture, and which were applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen-bond and thermodynamic descriptors are the most important in determining V(E), in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary, and possibly even more complex, mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    NASA Astrophysics Data System (ADS)

Astuti, Ani Budi; Iriawan, Nur; Irhamah; Kuswanto, Heri

    2017-12-01

Bayesian mixture modeling requires identifying the most appropriate number of mixture components, so that the resulting mixture model fits the data in a data-driven way. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible-jump (RJ) concept with Markov Chain Monte Carlo (MCMC), and has been used by several researchers to solve the problem of identifying a number of mixture components that is not known with certainty. In its application, RJMCMC uses birth/death and split-merge moves, with six move types: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the case under study. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying an unknown number of mixture components in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for the Indonesian microarray data, in which the number of mixture components is not known with certainty.

  12. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half-maximal effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model, with a coefficient of determination of 0.9366 and a root mean square error of 0.1345, predicted the 45 mixture toxicities, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Prediction of the spectral reflectance of laser-generated color prints by combination of an optical model and learning methods.

    PubMed

    Nébouy, David; Hébert, Mathieu; Fournel, Thierry; Larina, Nina; Lesur, Jean-Luc

    2015-09-01

Recent color printing technologies based on the principle of revealing colors on pre-functionalized achromatic supports by laser irradiation offer advanced functionalities, especially for security applications. However, for such technologies, color prediction is challenging compared to classic ink-transfer printing systems: the spectral properties of the coloring materials modified by the lasers are not precisely known and may vary strongly, and nonlinearly, with the laser settings. We show in this study, through the example of the color laser marking (CLM) technology, based on laser bleaching of a mixture of pigments, that combining an adapted optical reflectance model with learning methods for obtaining the model's parameters enables prediction of the spectral reflectance of any printable color with rather good accuracy. Even though the pigment mixture is formulated from three colored pigments, an analysis of the dimensionality of the spectral space generated by CLM printing, using a principal component analysis decomposition, shows that at least four spectral primaries are needed for accurate spectral reflectance predictions. A polynomial interpolation is then used to relate RGB laser intensities with virtual coordinates of new basis vectors. By studying the influence of the number of calibration patches on the prediction accuracy, we conclude that a reasonable number of 130 patches is enough to achieve good accuracy in this application.

  14. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  15. Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution

    NASA Astrophysics Data System (ADS)

    Baldacchino, Tara; Worden, Keith; Rowson, Jennifer

    2017-02-01

A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piecewise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space, allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge, this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.

  16. Development and validation of a metal mixture bioavailability model (MMBM) to predict chronic toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia.

    PubMed

    Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C

    2017-01-01

Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, in the present study we developed a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7 d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; dissolved organic carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e. realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that the observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE: mixture = 16; Zn only = 18; Ni only = 17; Pb only = 23). Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
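The two reference models this abstract compares can each be stated in a few lines: under independent action (IA), component effects combine multiplicatively on the "no effect" scale, while under concentration addition (CA), exposures are summed in toxic units, c_i / EC50_i. A minimal sketch with hypothetical numbers:

```python
def independent_action(effects):
    """IA prediction: combined fractional effect of a mixture whose
    components act through strictly independent mechanisms."""
    unaffected = 1.0
    for e in effects:
        unaffected *= (1.0 - e)
    return 1.0 - unaffected

def toxic_units(concs, ec50s):
    """CA prediction, expressed in toxic units: a mixture summing to
    1.0 toxic unit is predicted to cause a 50% effect."""
    return sum(c / e for c, e in zip(concs, ec50s))

# Hypothetical three-metal mixture, each component alone causing 20%
# reproductive inhibition: IA predicts 1 - 0.8**3, i.e. 48.8% combined.
combined = independent_action([0.2, 0.2, 0.2])
```

The MMBM itself goes further by first passing water chemistry through metal-specific bioavailability models to obtain each single-metal effect, and only then applying IA.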

  17. Adapting NDOR's Roadside Seed Mixture for Local Site Conditions : Tables

    DOT National Transportation Integrated Search

    2012-09-06

    The Nebraska Department of Roads (NDOR) has considerable challenges with its objectives of rapidly establishing and maintaining a diverse and vigorous vegetation cover on roadsides. Establishing vegetation quickly on NDOR roadsides is important becau...

  18. Rasch Mixture Models for DIF Detection

    PubMed Central

    Strobl, Carolin; Zeileis, Achim

    2014-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819

  19. Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data

    ERIC Educational Resources Information Center

    Kim, Su-Young; Kim, Jee-Seon

    2012-01-01

    This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…

  20. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  1. Becoming a Coach in Developmental Adaptive Sailing: A Lifelong Learning Perspective.

    PubMed

    Duarte, Tiago; Culver, Diane M

    2014-10-02

    Life-story methodology and innovative methods were used to explore the process of becoming a developmental adaptive sailing coach. Jarvis's (2009) lifelong learning theory framed the thematic analysis. The findings revealed that the coach, Jenny, was exposed from a young age to collaborative environments. Social interactions with others such as mentors, colleagues, and athletes made major contributions to her coaching knowledge. As Jenny was exposed to a mixture of challenges and learning situations, she advanced from recreational para-swimming instructor to developmental adaptive sailing coach. The conclusions inform future research in disability sport coaching, coach education, and applied sport psychology.

  2. Bayesian Ensemble Trees (BET) for Clustering and Prediction in Heterogeneous Data

    PubMed Central

    Duan, Leo L.; Clancy, John P.; Szczesniak, Rhonda D.

    2016-01-01

    We propose a novel “tree-averaging” model that utilizes the ensemble of classification and regression trees (CART). Each constituent tree is estimated with a subset of similar data. We treat this grouping of subsets as Bayesian Ensemble Trees (BET) and model them as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with the other ensemble methods, BET requires much fewer trees and shows equivalent prediction accuracy using weighted averaging. Moreover, each tree in BET provides variable selection criterion and interpretation for each subset. We developed an efficient estimating procedure with improved estimation strategies in both CART and mixture models. We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplemental materials are available online. PMID:27524872

  3. Local Solutions in the Estimation of Growth Mixture Models

    ERIC Educational Resources Information Center

    Hipp, John R.; Bauer, Daniel J.

    2006-01-01

    Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…

  4. An X-ray diffraction method for semiquantitative mineralogical analysis of Chilean nitrate ore

    USGS Publications Warehouse

Jackson, J.C.; Ericksen, G.E.

    1997-01-01

    Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.

  6. Adaptation of ammonia-oxidizing microorganisms to environment shift of paddy field soil.

    PubMed

    Ke, Xiubin; Lu, Yahai

    2012-04-01

    Adaptation of microorganisms to the environment is a central theme in microbial ecology. The objective of this study was to investigate the response of ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA) to a soil medium shift. We employed two rice field soils collected from Beijing and Hangzhou, China. These soils contained distinct AOB communities dominated by Nitrosomonas in Beijing rice soil and Nitrosospira in Hangzhou rice soil. Three mixtures were generated by mixing equal quantities of Beijing soil and Hangzhou soil (BH), Beijing soil with sterilized Hangzhou soil (BSH), and Hangzhou soil with sterilized Beijing soil (HSB). Pure and mixed soils were permanently flooded, and the surface-layer soil where ammonia oxidation occurred was collected to determine the response of AOB and AOA to the soil medium shift. AOB populations increased during the incubation, and the rates were initially faster in Beijing soil than in Hangzhou soil. Nitrosospira (cluster 3a) and Nitrosomonas (communis cluster) increased with time in correspondence with ammonia oxidation in the Hangzhou and Beijing soils, respectively. The 'BH' mixture exhibited a shift from Nitrosomonas at day 0 to Nitrosospira at days 21 and 60 when ammonia oxidation became most active. In 'HSB' and 'BSH' mixtures, Nitrosospira showed greater stimulation than Nitrosomonas, both with and without N amendment. These results suggest that Nitrosospira spp. were better adapted to soil environment shifts than Nitrosomonas. Analysis of the AOA community revealed that the composition of AOA community was not responsive to the soil environment shifts or to nitrogen amendment. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  7. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.

    PubMed

    Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten

    2017-10-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
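On the 2-sphere (unit vectors in R^3), the von Mises-Fisher density has a simple closed-form normalizer, which makes the model easy to sketch; higher-dimensional hyperspheres replace it with a Bessel-function expression, and this sketch is an illustration, not the paper's collapsed MCMC sampler:

```python
import math

def standardise(v):
    """Project a vector (e.g. a standardized time series) onto the
    unit hypersphere, the preprocessing step that motivates vMF models."""
    norm = math.sqrt(sum(a * a for a in v))
    return [a / norm for a in v]

def vmf_logpdf_3d(x, mu, kappa):
    """log-density of the von Mises-Fisher distribution on the unit
    sphere in R^3: C(kappa) * exp(kappa * mu . x), with normalizer
    C(kappa) = kappa / (4 * pi * sinh(kappa))."""
    dot = sum(a * b for a, b in zip(x, mu))
    return math.log(kappa / (4.0 * math.pi * math.sinh(kappa))) + kappa * dot

# Density peaks at the mean direction mu and is lowest at -mu;
# kappa plays the role of an inverse-variance (concentration) parameter.
mu = [0.0, 0.0, 1.0]
```

A mixture of such densities, one mean direction and concentration per cluster, is the generative model compared against the Gaussian mixture in the abstract.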

  8. Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.

    PubMed

    García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M

    2014-12-01

Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed, aiming at maximising COD conversion into methane while simultaneously maintaining digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further optimisations in terms of methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating for several months different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/Ld and hydraulic retention times between 32 and 40 days at mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
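The blending idea can be illustrated with a toy stand-in for the linear programme: maximise the blend's methane potential subject to quality constraints, implemented here as a coarse grid search so the sketch stays dependency-free. All substrate numbers and constraint values below are invented for illustration and are not taken from the paper:

```python
from itertools import product

# Invented per-unit-feed properties for the three substrates used in the study:
# methane potential (proxy for COD conversion) and nitrogen content.
SUBSTRATES = {"glycerine":  {"ch4": 0.60, "n": 0.00},
              "gelatine":   {"ch4": 0.45, "n": 0.12},
              "pig_manure": {"ch4": 0.20, "n": 0.05}}

def best_blend(max_n=0.06, max_glycerine=0.30, step=0.05):
    """Grid-search stand-in for the LP: maximise methane potential subject
    to a nitrogen cap (digestate quality) and a heuristic cap on the
    glycerine fraction (overload protection)."""
    names = list(SUBSTRATES)
    grid = [i * step for i in range(round(1.0 / step) + 1)]
    best, best_ch4 = None, -1.0
    for fracs in product(grid, repeat=len(names)):
        if abs(sum(fracs) - 1.0) > 1e-9:
            continue  # fractions must form a complete blend
        blend = dict(zip(names, fracs))
        if blend["glycerine"] > max_glycerine + 1e-9:
            continue
        ch4 = sum(f * SUBSTRATES[s]["ch4"] for s, f in blend.items())
        nit = sum(f * SUBSTRATES[s]["n"] for s, f in blend.items())
        if nit <= max_n + 1e-9 and ch4 > best_ch4:
            best, best_ch4 = blend, ch4
    return best, best_ch4
```

The paper's adaptive step corresponds to relaxing `max_n` or `max_glycerine` between optimisations once the digester tolerates the current blend; a real implementation would use an LP solver rather than a grid.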

  9. Cluster kinetics model for mixtures of glassformers

    NASA Astrophysics Data System (ADS)

    Brenskelle, Lisa A.; McCoy, Benjamin J.

    2007-10-01

    For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.

  10. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those from using an interaction term in linear regression. The research questions each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, and that regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903

  11. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  12. The Potential of Growth Mixture Modelling

    ERIC Educational Resources Information Center

    Muthen, Bengt

    2006-01-01

    The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…

  13. Thermal - Hydraulic Behavior of Unsaturated Bentonite and Sand-Bentonite Material as Seal for Nuclear Waste Repository: Numerical Simulation of Column Experiments

    NASA Astrophysics Data System (ADS)

    Ballarini, E.; Graupner, B.; Bauer, S.

    2015-12-01

For deep geological repositories of high-level radioactive waste (HLRW), bentonite and sand-bentonite mixtures are investigated as buffer materials to form a sealing layer. This sealing layer surrounds the canisters and experiences an initial drying due to the heat produced by the HLRW, followed by re-saturation with fluid from the host rock. These complex, interacting thermal, hydraulic and mechanical processes were investigated in laboratory column experiments using MX-80 clay pellets as well as a mixture of 35% sand and 65% bentonite. The aim of this study is both to understand the individual processes taking place in the buffer materials and to identify the key physical parameters that determine the material behavior under heating and hydrating conditions. To this end, detailed and process-oriented numerical modelling was applied to the experiments, simulating heat transport, multiphase flow and mechanical effects from swelling. For both columns, the same set of parameters was assigned to the experimental set-up (i.e. insulation, heater and hydration system), while the parameters of the buffer materials were adapted during model calibration. A good fit between model results and data was achieved for temperature, relative humidity, water intake and swelling pressure, thus explaining the material behavior. The key variables identified by the model are the permeability and relative permeability, the water retention curve and the thermal conductivity of the buffer material. The different hydraulic and thermal behavior of the two buffer materials observed in the laboratory was well reproduced by the numerical model.

  14. An auxiliary adaptive Gaussian mixture filter applied to flowrate allocation using real data from a multiphase producer

    NASA Astrophysics Data System (ADS)

    Lorentzen, Rolf J.; Stordal, Andreas S.; Hewitt, Neal

    2017-05-01

Flowrate allocation in production wells is a complicated task, especially for multiphase flow combined with several reservoir zones and/or branches. The result depends heavily on the available production data and their accuracy. In the application shown here, downhole pressure and temperature data are available, in addition to the total flowrates at the wellhead. The developed methodology inverts these observations to obtain the fluid flowrates (oil, water and gas) entering the two branches of a real full-scale producer. A major challenge is accurate estimation of flowrates during rapid variations in the well, e.g. due to choke adjustments. The Auxiliary Sequential Importance Resampling (ASIR) filter was developed to handle such challenges by introducing an auxiliary step in which the particle weights are recomputed (a second weighting step) based on how well the particles reproduce the observations. However, the ASIR filter suffers from a large computational cost when the number of unknown parameters increases. The Gaussian Mixture (GM) filter combines a linear update with the particle filter's ability to capture non-Gaussian behavior, which makes it possible to achieve good performance with fewer model evaluations. In this work we present a new filter that combines the ASIR filter and the Gaussian Mixture filter (denoted ASGM) and demonstrate improved estimation (compared to the ASIR and GM filters) in cases with rapid parameter variations, while maintaining reasonable computational cost.

  15. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
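In the capture-recapture setting mentioned here, zero-truncation arises because never-captured units are unobservable, and the Horvitz-Thompson estimator inflates the observed count by the probability of being seen at least once. A minimal single-rate (rather than mixture) illustration with invented numbers:

```python
import math

def zt_poisson_pmf(k, lam):
    """Zero-truncated Poisson pmf: Poisson(lam) renormalised over k >= 1
    by dividing out P(K >= 1) = 1 - exp(-lam)."""
    if k < 1:
        return 0.0
    log_p = -lam + k * math.log(lam) - math.lgamma(k + 1)
    return math.exp(log_p) / (1.0 - math.exp(-lam))

def horvitz_thompson(n_observed, lam):
    """Population-size estimate: the observed count divided by the
    detection probability P(K >= 1) under the fitted Poisson rate."""
    return n_observed / (1.0 - math.exp(-lam))

# With 300 units observed and a fitted rate lam = 0.5, the detection
# probability is 1 - exp(-0.5) ~ 0.393, giving roughly 762 units in total.
```

The article's point is that when `lam` itself follows a mixing distribution, one may equivalently fit a mixture of truncated Poisson densities, which is the theoretically easier formulation, and map the answer back.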

  16. Development of PBPK Models for Gasoline in Adult and ...

    EPA Pesticide Factsheets

    Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
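
    The competitive metabolic inhibition assumed in the liver compartment is conventionally written as a Michaelis-Menten rate in which co-occurring constituents inflate the apparent Km. A sketch with illustrative constants (not values from the study):

```python
def metabolism_rate(i, conc, vmax, km):
    """Michaelis-Menten rate for constituent i with competitive inhibition
    by the co-occurring hydrocarbons j != i:
        v_i = Vmax_i * C_i / (Km_i * (1 + sum_{j!=i} C_j / Km_j) + C_i)
    """
    inhibition = sum(conc[j] / km[j] for j in range(len(conc)) if j != i)
    return vmax[i] * conc[i] / (km[i] * (1 + inhibition) + conc[i])

vmax, km = [10.0, 8.0], [1.0, 2.0]       # illustrative kinetic constants
alone = metabolism_rate(0, [0.5, 0.0], vmax, km)
mixed = metabolism_rate(0, [0.5, 1.0], vmax, km)
assert mixed < alone   # a co-exposed HC slows clearance of constituent 0
```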

  17. [Functional state of various physiological systems of the human body during respiration of neon-oxygen mixture at depth up to 400 meters].

    PubMed

    Poleshuk, I P; Genin, A M; Unku, R D; Mikhnenko, A E; Sementsov, V N; Suvorov, A V

    1991-01-01

    The effect of the high density of a hyperbaric neon-oxygen mixture, at a pressure of 41 ata, on basic physiological functions of the human body has been studied. Typical changes in cardiorespiratory and tissue-respiration parameters are revealed, and changes in physical working capacity are shown. Exposure to a gaseous medium of high pressure and density is accompanied by the development of compensatory-adaptive reactions. Moderately hard physical work remains possible, but only at the cost of overstraining the respiratory and circulatory functions.
    The effect of the high density of a hyperbaric neon-oxygen mixture, at a pressure of 41 ata, on basic physiological functions of the human body has been studied. Typical changes in cardiorespiratory and tissue-respiration parameters are revealed, and changes in physical working capacity are shown. Exposure to a gaseous medium of high pressure and density is accompanied by the development of compensatory-adaptive reactions. Moderately hard physical work remains possible, but only at the cost of overstraining the respiratory and circulatory functions.

  18. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    PubMed

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
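
    The special cubic models being multiplied are Scheffé canonical polynomials; for one three-component system the form can be sketched as follows (coefficients are placeholders, not fitted values):

```python
def special_cubic(x, b):
    # Scheffe special cubic model for a three-component mixture (x1+x2+x3 = 1):
    # y = b1 x1 + b2 x2 + b3 x3 + b12 x1 x2 + b13 x1 x3 + b23 x2 x3 + b123 x1 x2 x3
    x1, x2, x3 = x
    b1, b2, b3, b12, b13, b23, b123 = b
    return (b1 * x1 + b2 * x2 + b3 * x3 + b12 * x1 * x2
            + b13 * x1 * x3 + b23 * x2 * x3 + b123 * x1 * x2 * x3)

# the composite design's response model multiplies one such polynomial for the
# extraction solvents by a second one for the mobile-phase components
b = [10, 12, 8, 3, -2, 1, 5]               # placeholder coefficients
assert special_cubic((1, 0, 0), b) == 10   # a pure-component blend returns b1
```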

  19. Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.

    2017-10-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a one-fuel gaseous mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant size of the detonation cell, determined in the calculations, is in good agreement with all known experimental data.

  20. Fitting a Mixture Item Response Theory Model to Personality Questionnaire Data: Characterizing Latent Classes and Investigating Possibilities for Improving Prediction

    ERIC Educational Resources Information Center

    Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk

    2008-01-01

    Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…

  1. Admixture facilitates genetic adaptations to high altitude in Tibet

    PubMed Central

    Jeong, Choongwon; Alkorta-Aranburu, Gorka; Basnyat, Buddha; Neupane, Maniraj; Witonsky, David B.; Pritchard, Jonathan K.; Beall, Cynthia M.; Di Rienzo, Anna

    2015-01-01

    Admixture is recognized as a widespread feature of human populations, renewing interest in the possibility that genetic exchange can facilitate adaptations to new environments. Studies of Tibetans revealed candidates for high-altitude adaptations in the EGLN1 and EPAS1 genes, associated with lower hemoglobin concentration. However, the history of these variants or that of Tibetans remains poorly understood. Here, we analyze genotype data for the Nepalese Sherpa, and find that Tibetans are a mixture of ancestral populations related to the Sherpa and Han Chinese. EGLN1 and EPAS1 genes show a striking enrichment of high-altitude ancestry in the Tibetan genome, indicating that migrants from low altitude acquired adaptive alleles from the highlanders. Accordingly, the Sherpa and Tibetans share adaptive hemoglobin traits. This admixture-mediated adaptation shares important features with adaptive introgression. Therefore, we identify a novel mechanism, beyond selection on new mutations or on standing variation, through which populations can adapt to local environments. PMID:24513612

  2. Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis

    DTIC Science & Technology

    2005-07-25

    analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. ... Spatial resolution permits different materials to be present in the area covered by a single pixel. The linear mixture model considers the pixel reflectance r to be a linear mixture of the endmember signatures m1, m2, …, mP: r = Mα + n (1), where n is included to account for
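
    Under the linear mixture model r = Mα + n, per-pixel abundances α can be recovered by constrained least squares. A minimal projected-gradient sketch with made-up endmember spectra:

```python
# linear mixture model: pixel reflectance r = M @ alpha + n (noise omitted)
M = [[0.2, 0.9],
     [0.5, 0.4],
     [0.8, 0.1]]                 # two endmember spectra over three bands
alpha_true = [0.3, 0.7]
r = [sum(M[i][j] * alpha_true[j] for j in range(2)) for i in range(3)]

# projected-gradient non-negative least squares for the abundances
alpha = [0.5, 0.5]
for _ in range(2000):
    resid = [r[i] - sum(M[i][j] * alpha[j] for j in range(2)) for i in range(3)]
    grad = [-2 * sum(M[i][j] * resid[i] for i in range(3)) for j in range(2)]
    alpha = [max(0.0, alpha[j] - 0.05 * grad[j]) for j in range(2)]

assert all(abs(a - t) < 1e-6 for a, t in zip(alpha, alpha_true))
```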

  3. Microstructure and hydrogen bonding in water-acetonitrile mixtures.

    PubMed

    Mountain, Raymond D

    2010-12-16

    The role of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures of six rigid, three-site acetonitrile models with one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.

  4. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    PubMed

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.
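
    In such a framework, each person's class membership is estimated from the class-conditional likelihoods by Bayes' rule. A generic sketch (the log-likelihoods and priors below are placeholders, not outputs of a fitted generalized partial credit or IRTree model):

```python
import math

def class_posterior(logliks, priors):
    # P(class c | responses) ∝ prior_c * L_c(responses), computed stably
    # via the log-sum-exp trick
    m = max(ll + math.log(p) for ll, p in zip(logliks, priors))
    unnorm = [math.exp(ll + math.log(p) - m) for ll, p in zip(logliks, priors)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# e.g. a respondent whose answers are about 20x more likely under class 2
post = class_posterior([-40.0, -37.0], [0.5, 0.5])
assert abs(sum(post) - 1.0) < 1e-12 and post[1] > post[0]
```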

  5. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and over the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
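
    The sequential selection idea can be caricatured as follows; this is a toy sketch, not the paper's design, which draws each unit from a mixture distribution over an active set and bases inference on averaging over sample paths:

```python
import random

def adaptive_web_sample(links, n, d=0.8, seed=0):
    """Toy sequential design: with probability d (and when the current sample
    has unsampled network neighbours) follow a link out of the sample;
    otherwise select uniformly at random from the remaining units."""
    rng = random.Random(seed)
    sample = [rng.choice(sorted(links))]
    while len(sample) < n:
        frontier = sorted({v for u in sample for v in links[u]} - set(sample))
        if frontier and rng.random() < d:
            sample.append(rng.choice(frontier))          # adaptive selection
        else:
            sample.append(rng.choice([u for u in links if u not in sample]))
    return sample

# small network with an isolated pair of nodes (4 and 5)
net = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
s = adaptive_web_sample(net, 4)
assert len(s) == 4 and len(set(s)) == 4
```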

  6. Applications of the Simple Multi-Fluid Model to Correlations of the Vapor-Liquid Equilibrium of Refrigerant Mixtures Containing Carbon Dioxide

    NASA Astrophysics Data System (ADS)

    Akasaka, Ryo

    This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. The model can therefore be applied to mixtures for which experimental data are limited. Vapor-liquid equilibrium (VLE) of the following seven mixtures has been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2-tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations of calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for the design and simulation of heat pumps and refrigeration systems using these mixtures as working fluids.

  7. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  8. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    PubMed

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.
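
    The key idea, each powder contributing in proportion to its volumetric occupancy under the applied pressure, can be sketched with an assumed occupancy-weighted form (illustrative only; the published model's exact equations differ):

```python
def mixture_strength(components):
    """components: (mass_fraction, specific_volume_at_P, pure_strength_at_P)
    per powder, all evaluated at the same compaction pressure P.
    Volumetric occupancy phi_i = w_i*V_i / sum_j w_j*V_j weights each
    powder's contribution (an assumed linear form, for illustration)."""
    total_volume = sum(w * V for w, V, _ in components)
    return sum((w * V / total_volume) * sigma for w, V, sigma in components)

# 50:50 (by mass) binary blend with illustrative property values
mcc = (0.5, 0.70, 6.0)   # microcrystalline cellulose
dcp = (0.5, 0.40, 1.5)   # anhydrous dicalcium phosphate
print(round(mixture_strength([mcc, dcp]), 3))
```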

  9. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    PubMed

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies.
Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999-2001) and the National Health and Nutrition Examination Survey (NHANES; 1999-2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. Specific Aim 1. To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model's goodness of fit. 
Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2. Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture's components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3. Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). Specific Aim 1. Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10(-4), and 13% of all participants had risk levels above 10(-2). Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. 
DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance. Specific Aim 2. Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual's total exposure (average of 42% across RIOPA participants). Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10(-3) for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions. Specific Aim 3. 
    In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence's AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant, explained from 10% to 40% of the variance in the measurements, and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants.
Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. (ABSTRACT TRUNCATED)
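
    The extreme-value step of Specific Aim 1 can be sketched, using only the standard library, as a method-of-moments fit of the two-parameter Gumbel distribution to the top 10% of a simulated exposure sample (the study itself fitted three-parameter GEV distributions, typically by maximum likelihood):

```python
import math, random, statistics

random.seed(42)
# stand-in "exposure" sample (lognormal body, as VOC data often show)
data = [random.lognormvariate(0.0, 1.0) for _ in range(5000)]
top = sorted(data)[int(0.9 * len(data)):]          # top 10% of exposures

# Gumbel method-of-moments: scale = s*sqrt(6)/pi, loc = mean - gamma*scale
mean, sd = statistics.mean(top), statistics.stdev(top)
scale = sd * math.sqrt(6) / math.pi
loc = mean - 0.5772156649 * scale                  # Euler-Mascheroni constant

def exceedance(x):
    # P(X > x) under the fitted Gumbel distribution
    return 1 - math.exp(-math.exp(-(x - loc) / scale))

assert scale > 0 and abs(exceedance(loc) - (1 - math.exp(-1))) < 1e-12
```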

  10. Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors

    ERIC Educational Resources Information Center

    Guerra-Peña, Kiero; Steinley, Douglas

    2016-01-01

    Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…

  11. Becoming a Coach in Developmental Adaptive Sailing: A Lifelong Learning Perspective

    PubMed Central

    Duarte, Tiago; Culver, Diane M.

    2014-01-01

    Life-story methodology and innovative methods were used to explore the process of becoming a developmental adaptive sailing coach. Jarvis's (2009) lifelong learning theory framed the thematic analysis. The findings revealed that the coach, Jenny, was exposed from a young age to collaborative environments. Social interactions with others such as mentors, colleagues, and athletes made major contributions to her coaching knowledge. As Jenny was exposed to a mixture of challenges and learning situations, she advanced from recreational para-swimming instructor to developmental adaptive sailing coach. The conclusions inform future research in disability sport coaching, coach education, and applied sport psychology. PMID:25210408

  12. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    PubMed

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability.
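
    The simulation design, generating responses from a mixture IRT model in which an item's difficulty differs between latent classes, can be sketched as follows (a Rasch form with illustrative class proportions and drift size):

```python
import math, random

def p_correct(theta, b):
    # Rasch model: P(X = 1 | theta) = logistic(theta - b)
    return 1.0 / (1.0 + math.exp(-(theta - b)))

random.seed(7)
b_by_class = {0: 0.0, 1: 0.5}     # item drifts by 0.5 logits in class 1 (IPD)
responses = []
for _ in range(2000):
    c = 0 if random.random() < 0.6 else 1      # latent class membership
    theta = random.gauss(0.0, 1.0)             # examinee ability
    responses.append(int(random.random() < p_correct(theta, b_by_class[c])))

# pooling over classes mixes the two difficulties, so a unidimensional
# calibration of such data inherits the drift
p_hat = sum(responses) / len(responses)
assert 0.35 < p_hat < 0.6
```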

  13. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    PubMed Central

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699

  14. Solubility modeling of refrigerant/lubricant mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels, H.H.; Sienel, T.H.

    1996-12-31

    A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.
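
    For intuition, bubble-point pressures from an activity-coefficient model follow modified Raoult's law. The sketch below uses the two-parameter Margules equations, a simpler member of the same Wohl family of excess-Gibbs-energy expansions (all parameter values illustrative):

```python
import math

def margules_gammas(x1, A12, A21):
    # two-parameter Margules activity coefficients (a Wohl-type expansion)
    x2 = 1.0 - x1
    g1 = math.exp(x2**2 * (A12 + 2.0 * (A21 - A12) * x1))
    g2 = math.exp(x1**2 * (A21 + 2.0 * (A12 - A21) * x2))
    return g1, g2

def bubble_pressure(x1, psat1, psat2, A12, A21):
    # modified Raoult's law: P = x1*g1*Psat1 + x2*g2*Psat2
    g1, g2 = margules_gammas(x1, A12, A21)
    return x1 * g1 * psat1 + (1.0 - x1) * g2 * psat2

# sanity checks: pure limits recover vapor pressures; A = 0 gives ideal Raoult
assert abs(bubble_pressure(1.0, 900.0, 400.0, 0.3, 0.5) - 900.0) < 1e-9
assert abs(bubble_pressure(0.5, 900.0, 400.0, 0.0, 0.0) - 650.0) < 1e-9
```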

  15. Catabolite repression by intracellular succinate in Campylobacter jejuni

    USDA-ARS?s Scientific Manuscript database

    Bacteria have evolved different mechanisms to catabolize carbon sources from a mixture of nutrients. They first consume their preferred carbon source, before others are used. Regulatory mechanisms adapt the metabolism accordingly to maximize growth and to outcompete other organisms. The human patho...

  16. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    USGS Publications Warehouse

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
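    The marginal likelihood underlying the N-mixture model can be sketched as follows. This is an illustrative maximum-likelihood fit in Python (the record describes a Bayesian fit; all data, variable names, and settings here are invented for illustration, not the authors' code):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(1)

# Simulate replicated counts at R sites with T visits each:
# latent abundance N_i ~ Poisson(lambda), counts y_it ~ Binomial(N_i, p).
R, T, lam_true, p_true = 50, 4, 6.0, 0.4
N = rng.poisson(lam_true, R)                  # latent abundance per site
y = rng.binomial(N[:, None], p_true, (R, T))  # repeated counts

def neg_loglik(theta, y, K=100):
    """Negative marginal log-likelihood: sum over latent N truncated at K."""
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)              # P(N = n)
    ll = 0.0
    for i in range(y.shape[0]):
        # P(y_i | N = n) for each candidate n: product over the T visits
        lik = np.prod(binom.pmf(y[i][:, None], Ns[None, :], p), axis=0)
        ll += np.log(np.dot(lik, prior))
    return -ll

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
```

    The log/logit transforms keep the optimizer unconstrained; a Bayesian fit as in the record would instead place priors on lambda and p and sample the same marginal likelihood.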

  17. Process Dissociation and Mixture Signal Detection Theory

    ERIC Educational Resources Information Center

    DeCarlo, Lawrence T.

    2008-01-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…

  18. Investigating Approaches to Estimating Covariate Effects in Growth Mixture Modeling: A Simulation Study

    ERIC Educational Resources Information Center

    Li, Ming; Harring, Jeffrey R.

    2017-01-01

    Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…

  19. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  20. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures used to estimate the parameters of mixture distributions, and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  1. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
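    A minimal Gibbs sampler for a two-component normal mixture of this kind (common variance, no genetic or permanent-environment random effects, simulated data; an illustration of the sampling scheme, not the authors' model) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "somatic cell score" style data: healthy vs diseased component.
n = 500
z_true = rng.random(n) < 0.3                  # true membership (Pm = 0.3)
x = np.where(z_true, rng.normal(5.0, 1.0, n), rng.normal(2.0, 1.0, n))

# Gibbs sampler with conjugate updates (uniform prior on Pm, flat priors
# on the means, 1/sigma^2 prior on the common variance).
mu = np.array([1.0, 6.0])                     # initial component means
pm, sigma2 = 0.5, 1.0
draws = []
for it in range(2000):
    # 1) sample component labels given current parameters
    logp0 = np.log(1 - pm) - (x - mu[0]) ** 2 / (2 * sigma2)
    logp1 = np.log(pm) - (x - mu[1]) ** 2 / (2 * sigma2)
    w = 1 / (1 + np.exp(logp0 - logp1))       # P(z_i = 1 | rest)
    z = rng.random(n) < w
    # 2) sample mixing proportion Pm from its Beta posterior
    pm = rng.beta(1 + z.sum(), 1 + n - z.sum())
    # 3) sample component means from their normal posteriors
    for k, idx in enumerate([~z, z]):
        m = idx.sum()
        mu[k] = rng.normal(x[idx].mean(), np.sqrt(sigma2 / m)) if m else mu[k]
    # 4) sample the common variance (scaled inverse-chi-square)
    resid2 = ((x - mu[z.astype(int)]) ** 2).sum()
    sigma2 = resid2 / rng.chisquare(n)
    if it >= 500:                             # discard burn-in
        draws.append([pm, mu[0], mu[1], sigma2])

post = np.array(draws).mean(axis=0)           # posterior means
```

    Posterior probabilities of "diseased" status for each record correspond to the per-observation `w` values averaged over the retained draws.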

  2. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    PubMed

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.

  3. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty of choosing optimal model constants when using conventional mixing frequency models. The model is implemented in combination with the interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.

  4. Application of correlation constrained multivariate curve resolution alternating least-squares methods for determination of compounds of interest in biodiesel blends using NIR and UV-visible spectroscopic data.

    PubMed

    de Oliveira, Rodrigo Rocha; de Lima, Kássio Michell Gomes; Tauler, Romà; de Juan, Anna

    2014-07-01

    This study describes two applications of a variant of the multivariate curve resolution alternating least squares (MCR-ALS) method with a correlation constraint. The first application describes the use of MCR-ALS for the determination of biodiesel concentrations in biodiesel blends using near infrared (NIR) spectroscopic data. In the second application, the proposed method allowed the determination of the synthetic antioxidant N,N'-Di-sec-butyl-p-phenylenediamine (PDA) present in biodiesel mixtures from different vegetable sources using UV-visible spectroscopy. A well-established multivariate regression algorithm, partial least squares (PLS), was applied for comparison of the quantification performance of the models developed in both applications. The correlation constraint has been adapted to handle the presence of batch-to-batch matrix effects due to ageing, which might occur when different groups of samples are used to build a calibration model, as in the first application. Different data set configurations and diverse modes of application of the correlation constraint are explored, and guidelines are given to cope with different types of analytical problems, such as the correction of matrix effects among biodiesel samples, where MCR-ALS outperformed PLS by reducing the relative error of prediction RE (%) from 9.82% to 4.85% in the first application, or the determination of a minor compound with overlapped, weak spectroscopic signals, where MCR-ALS gave a higher RE (3.16%) for prediction of PDA than PLS (1.99%), but with the advantage of recovering the related pure spectral profiles of analytes and interferences. The obtained results show the potential of the MCR-ALS method with a correlation constraint to be adapted to diverse data set configurations and analytical problems related to the determination of biodiesel mixtures and added compounds therein. Copyright © 2014 Elsevier B.V. All rights reserved.
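    The core MCR-ALS iteration the record builds on can be sketched on synthetic bilinear data (non-negativity constraints only; the correlation constraint and real NIR/UV-vis spectra are not reproduced here, and all shapes and values are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic bilinear data D = C S + noise: 30 mixtures, 2 components,
# 80 spectral channels (stand-ins for NIR/UV-vis spectra).
wl = np.linspace(0, 1, 80)
S_true = np.stack([np.exp(-(wl - 0.3) ** 2 / 0.01),
                   np.exp(-(wl - 0.6) ** 2 / 0.02)])   # pure spectra (rows)
C_true = rng.uniform(0, 1, (30, 2))                    # concentrations
D = C_true @ S_true + rng.normal(0, 0.01, (30, 80))

# Basic MCR-ALS: alternate least-squares solves for C and S, clipping
# each to be non-negative after every step.
S = S_true + rng.normal(0, 0.05, S_true.shape)         # rough initial guess
for _ in range(100):
    C = np.clip(D @ np.linalg.pinv(S), 0, None)        # LS step for C
    S = np.clip(np.linalg.pinv(C) @ D, 0, None)        # LS step for S

# Lack of fit (%), a standard MCR-ALS quality measure.
lof = 100 * np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

    The correlation constraint in the paper additionally regresses the resolved concentrations of the calibration samples against reference values at each iteration, which is what turns the resolution step into a quantitative calibration.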

  5. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Treesearch

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...

  6. A general mixture model and its application to coastal sandbar migration simulation

    NASA Astrophysics Data System (ADS)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for general description of sediment laden flows is developed and then applied to coastal sandbar migration simulation. Firstly the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment laden flows, which is derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on SMAC scheme is utilized for numerical solutions. The model is validated by suspended sediment motion in steady open channel flows, both in equilibrium and non-equilibrium state, and in oscillatory flows as well. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for the description of water-air free surface and topography reaction model is coupled. The bed load transport rate and suspended load entrainment rate are all decided by the sea bed shear stress, which is obtained from the boundary layer resolved mixture model. The simulation results indicated that, under small amplitude regular waves, erosion occurred on the sandbar slope against the wave propagation direction, while deposition dominated on the slope towards wave propagation, indicating an onshore migration tendency. 
The computational results also show that suspended load makes a substantial contribution to the topography change in the surf zone, a contribution that has often been neglected in previous research.

  7. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] was well predicted by a concentration addition concept (CA) model, whereas the response addition concept (RA) model better predicted the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO₄ (NIS-inhibitor) dosed at a fixed ratio of EC₁₀, which provided similar CA and RA predictions, making it difficult to reach a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. 
• Concentration addition seems to predict better the effect of mixtures of goitrogens.

  8. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks a sufficient comparison of different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity due to different temporal treatments: (I) a linear time trend; (II) a quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction. The mixture component allows the accommodation of global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was evidently more complex due to the addition of information borrowed from neighboring years, but this addition of parameters yielded a significant advantage in posterior deviance, which subsequently benefited overall fit to the crash data. Base models were also developed to compare the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to their much lower deviance. 
For cross-validation comparison of predictive accuracy, linear time trend model was adjudged the best as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated ones. The linear model again performed the best in most scenarios except one case of using model replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of linear trend when random effects were excluded for evaluation. This might be due to the flexible mixture space-time interaction which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models as the mixture ones generated more precise estimated crash counts across all four models, suggesting that the advantages associated with mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of random effect models which validates the importance of incorporating the correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    PubMed

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  10. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
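    A standard EM fit for a two-component exponential mixture of the kind this model generates (illustrative Python on simulated survival times; a routine estimation sketch, not the urn model itself, and all values are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated "query survival" times from a two-component exponential mixture:
# a fast-decaying component (rate 1.0) and a persistent one (rate 0.1).
n, w_true, r1, r2 = 2000, 0.6, 1.0, 0.1
comp = rng.random(n) < w_true
t = np.where(comp, rng.exponential(1 / r1, n), rng.exponential(1 / r2, n))

# EM for f(t) = w * lam1 * exp(-lam1 t) + (1 - w) * lam2 * exp(-lam2 t).
w, lam = 0.5, np.array([2.0, 0.05])
for _ in range(200):
    # E-step: responsibility of component 1 for each observation
    d1 = w * lam[0] * np.exp(-lam[0] * t)
    d2 = (1 - w) * lam[1] * np.exp(-lam[1] * t)
    g = d1 / (d1 + d2)
    # M-step: closed-form updates for the weight and the two rates
    w = g.mean()
    lam = np.array([g.sum() / (g * t).sum(),
                    (1 - g).sum() / ((1 - g) * t).sum()])

# Fitted mixture survival function S(t) = w e^{-lam1 t} + (1-w) e^{-lam2 t}
def surv(u):
    return w * np.exp(-lam[0] * u) + (1 - w) * np.exp(-lam[1] * u)
```

    Matching such a fitted survival function against empirical survival curves is the kind of comparison the record reports for the search-query data.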

  11. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    PubMed

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.

  12. Apparatus for converting hydrocarbon fuel into hydrogen gas and carbon dioxide

    DOEpatents

    Clawson, Lawrence G.; Mitchell, William L.; Bentley, Jeffrey M.; Thijssen, Johannes H. J.

    2002-01-01

    Hydrocarbon fuel reformer 100 is suitable for producing hydrogen synthesis gas from reactions with hydrocarbon fuels, oxygen, and steam. A first tube 108 has a first tube inlet 110 and a first tube outlet 112. The first tube inlet 110 is adapted for receiving a first mixture including an oxygen-containing gas and a first fuel. A partially oxidized first reaction reformate is directed out of the first tube 108 into a mixing zone 114. A second tube 116 is annularly disposed about the first tube 108 and has a second tube inlet 118 and a second tube outlet 120. The second tube inlet 118 is adapted for receiving a second mixture including steam and a second fuel. A steam reformed second reaction reformate is directed out of the second tube 116 and into the mixing zone 114. From the mixing zone 114, the first and second reaction reformates may be directed into a catalytic reforming zone 144 containing a reforming catalyst 147.

  13. Age-associated loss of selectivity in human olfactory sensory neurons

    PubMed Central

    Rawson, Nancy E.; Gomez, George; Cowart, Beverly J.; Kriete, Andres; Pribitkin, Edmund; Restrepo, Diego

    2011-01-01

    We report a cross-sectional study of olfactory impairment with age based on both odorant-stimulated responses of human olfactory sensory neurons (OSNs) and tests of olfactory threshold sensitivity. A total of 621 OSNs from 440 subjects in two age groups of younger ( 45 years) and older (≥60 years) subjects were investigated using fluorescence intensity ratio fura-2 imaging. OSNs were tested for responses to two odorant mixtures, as well as to subsets of and individual odors in those mixtures. Whereas cells from younger donors were highly selective in the odorants to which they responded, cells from older donors were more likely to respond to multiple odor stimuli, despite a loss in these subjects’ absolute olfactory sensitivity, suggesting a loss of specificity. This degradation in peripheral cellular specificity may impact odor discrimination and olfactory adaptation in the elderly. It is also possible that chronic adaptation as a result of reduced specificity contributes to observed declines in absolute sensitivity. PMID:22074806

  14. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  15. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models

    ERIC Educational Resources Information Center

    Liu, Junhui

    2012-01-01

    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  16. Effects of three veterinary antibiotics and their binary mixtures on two green alga species.

    PubMed

    Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A

    2018-03-01

    The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) have been examined in two green algae representative of the freshwater environment, the international standard strain Pseudokirchneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic in A. fusiformis than in the standard strain. The toxicological interactions of the binary mixtures were predicted using the two classical models of additivity, Concentration Addition (CA) and Independent Action (IA), and compared to the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L⁻¹. The CA model predicted the inhibition of algal growth in the three mixtures in P. subcapitata, and in the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained for the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of toxic units (TU) was calculated for each mixture. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas the three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed sensitivity similar to that of P. subcapitata when exposed to binary mixtures of antibiotics. Copyright © 2017 Elsevier Ltd. All rights reserved.
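    The two additivity predictions used here can be sketched for a hypothetical binary mixture with log-logistic dose-response curves (the EC50 and slope values below are invented for illustration, not the measured toxicities):

```python
import numpy as np
from scipy.optimize import brentq

# Log-logistic dose-response: E(c) = 1 / (1 + (EC50/c)^slope), with E the
# growth-inhibition fraction. Parameter values are hypothetical.
def effect(c, ec50, slope):
    return 1.0 / (1.0 + (ec50 / c) ** slope)

def inv_effect(E, ec50, slope):
    """Concentration producing effect E (inverse of the curve above)."""
    return ec50 * (E / (1.0 - E)) ** (1.0 / slope)

ec50 = {"CTC": 2.0, "ENF": 5.0}     # mg/L, illustrative only
slope = {"CTC": 1.5, "ENF": 2.0}

def ca_effect(c_tot):
    # Concentration addition: solve sum_i c_i / EC_x,i = 1 for the effect x
    # of a 50:50 mixture at total concentration c_tot.
    f = lambda E: sum(0.5 * c_tot / inv_effect(E, ec50[k], slope[k])
                      for k in ec50) - 1.0
    return brentq(f, 1e-9, 1 - 1e-9)

def ia_effect(c_tot):
    # Independent action: E = 1 - prod_i (1 - E_i(c_i)).
    return 1.0 - np.prod([1.0 - effect(0.5 * c_tot, ec50[k], slope[k])
                          for k in ec50])

E_ca, E_ia = ca_effect(6.0), ia_effect(6.0)
```

    Comparing such predictions with the observed mixture effect is what classifies a mixture as additive, antagonistic, or synergistic in studies like this one.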

  17. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    PubMed Central

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    INTRODUCTION Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. 
Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999–2001) and the National Health and Nutrition Examination Survey (NHANES; 1999–2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. Specific Aim 1 To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model’s goodness of fit. 
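The extreme value fitting described in Specific Aim 1 (a three-parameter GEV fitted to the top 10% of exposures, compared against a lognormal) can be sketched with SciPy. The exposure values below are simulated stand-ins, not RIOPA data, and the parameter choices are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-in for a VOC exposure sample (lognormal body, heavy upper tail).
exposures = rng.lognormal(mean=0.5, sigma=1.2, size=2000)

# Keep the top 10% of measurements, as in Specific Aim 1.
top10 = np.sort(exposures)[-len(exposures) // 10:]

# Fit a three-parameter GEV (shape, location, scale) to the extreme subsample.
shape, loc, scale = stats.genextreme.fit(top10)

# For comparison, fit a lognormal to the full sample and contrast upper quantiles;
# underestimation of such quantiles is the failure mode the abstract describes.
gev_q99 = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
s, l, sc = stats.lognorm.fit(exposures, floc=0)
lognorm_q99 = stats.lognorm.ppf(0.99, s, loc=l, scale=sc)
```

Simulating from the fitted distributions and comparing with the observations, as the authors did, would then assess goodness of fit.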
Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2 Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture's components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3 Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). RESULTS Specific Aim 1 Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10⁻⁴, and 13% of all participants had risk levels above 10⁻². Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models.
DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance. Specific Aim 2 Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual's total exposure (average of 42% across RIOPA participants). Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10⁻³ for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions.
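The copula construction used in Specific Aim 2 separates marginal distributions from dependence structure. A minimal sketch, using the Gaussian copula (the simplest of the five candidates named above, not the Gumbel or t copulas that fit best) with hypothetical lognormal marginals for two mixture components:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Target correlation for the latent Gaussian layer (illustrative value).
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Draw correlated standard normals (the Gaussian copula's latent layer).
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

# 2. Map to uniforms; these (u1, u2) pairs are the copula sample.
u = stats.norm.cdf(z)

# 3. Apply arbitrary marginals -- here lognormal, as is common for VOC data.
voc_a = stats.lognorm.ppf(u[:, 0], s=1.0, scale=2.0)
voc_b = stats.lognorm.ppf(u[:, 1], s=0.8, scale=1.5)

# The marginals are lognormal but the rank dependence came from the copula:
tau, _ = stats.kendalltau(voc_a, voc_b)
```

Gumbel and t copulas follow the same marginal-plus-copula recipe but, as the abstract notes, place more dependence in the distribution tails, which matters for joint extreme exposures.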
Specific Aim 3 In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence's AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant; they explained from 10% to 40% of the variance in the measurements and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants.
Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. A portion of these differences is due to the nature of the convenience (RIOPA) and representative (NHANES) sampling strategies used in the two studies. CONCLUSIONS Accurate models for exposure data, which can feature extreme values, multiple modes, data below the MDL, heterogeneous interpollutant dependency structures, and other complex characteristics, are needed to estimate exposures and risks and to develop control and management guidelines and policies. Conventional and novel statistical methods were applied to data drawn from two large studies to understand the nature and significance of VOC exposures. Both extreme value distributions and mixture models were found to provide excellent fit to single VOC compounds (univariate distributions), and copulas may be the method of choice for VOC mixtures (multivariate distributions), especially for the highest exposures, which fit parametric models poorly and which may represent the greatest health risk. The identification of exposure determinants, including the influence of both certain activities (e.g., pumping gas) and environments (e.g., residences), provides information that can be used to manage and reduce exposures. The results obtained using the RIOPA data set add to our understanding of VOC exposures, and further investigations using a more representative population and a wider suite of VOCs are suggested to extend and generalize results. PMID:25145040

  18. GENPLAT: an Automated Platform for Biomass Enzyme Discovery and Cocktail Optimization

    PubMed Central

    Walton, Jonathan; Banerjee, Goutami; Car, Suzana

    2011-01-01

The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipetting of biomass slurries and enzymes, and automated colorimetric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiment statistical software. Enzyme mixtures of high complexity are constructed using robotic pipetting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ˜10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E.
coli, Pichia pastoris, or a filamentous fungus such as T. reesei. Proteins can also be purified from commercial enzyme cocktails (e.g., Multifect Xylanase, Novozyme 188). An increasing number of pure enzymes, including glycosyl hydrolases, cell wall-active esterases, proteases, and lyases, are available from commercial sources, e.g., Megazyme, Inc. (www.megazyme.com), NZYTech (www.nzytech.com), and PROZOMIX (www.prozomix.com). Design-Expert software (Stat-Ease, Inc.) is used to create simplex-lattice designs and to analyze responses (in this case, Glc and Xyl release). Mixtures contain 4-20 components, which can vary in proportion between 0 and 100%. Assay points typically include the extreme vertices with a sufficient number of intervening points to generate a valid model. In the terminology of experimental design, most of our studies are "mixture" experiments, meaning that the sum of all components adds to a total fixed protein loading (expressed as mg/g glucan). The number of mixtures in the simplex-lattice depends on both the number of components in the mixture and the degree of polynomial (quadratic or cubic). For example, a 6-component experiment will entail 63 separate reactions with an augmented special cubic model, which can detect three-way interactions, whereas only 23 individual reactions are necessary with an augmented quadratic model. For mixtures containing more than eight components, a quadratic experimental design is more practical, and in our experience such models are usually statistically valid. All enzyme loadings are expressed as a percentage of the final total loading (which for our experiments is typically 15 mg protein/g glucan). For "core" enzymes, the lower percentage limit is set to 5%. This limit was derived from our experience in which yields of Glc and/or Xyl were very low if any core enzyme was present at 0%. Poor models result from too many samples showing very low Glc or Xyl yields. 
Setting a lower limit in turn determines an upper limit. That is, for a six-component experiment, if the lower limit for each single component is set to 5%, then the upper limit of each single component will be 75%. The lower limits of all other enzymes considered as "accessory" are set to 0%. "Core" and "accessory" are somewhat arbitrary designations and will differ depending on the substrate, but in our studies the core enzymes for release of Glc from corn stover comprise the following enzymes from T. reesei: CBH1 (also known as Cel7A), CBH2 (Cel6A), EG1 (Cel7B), BG (β-glucosidase), EX3 (endo-β1,4-xylanase, GH10), and BX (β-xylosidase). PMID:22042431
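The simplex-lattice designs this record relies on enumerate blends whose component proportions are multiples of 1/m and sum to 1. A minimal sketch of the enumeration (not the Design-Expert implementation, which also handles augmentation and the lower-bound constraints described above):

```python
from itertools import product
from fractions import Fraction

def simplex_lattice(q, m):
    """All q-component blends whose proportions are multiples of 1/m and sum to 1."""
    points = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:  # proportions combo/m lie on the simplex
            points.append(tuple(Fraction(c, m) for c in combo))
    return points

# A {3, 2} lattice has C(3+2-1, 2) = 6 points: the 3 vertices and 3 edge midpoints.
points = simplex_lattice(3, 2)
```

The brute-force enumeration is exponential in q, so for the 16-to-20-component mixtures mentioned above a combinatorial generator would be preferable; the point here is only the structure of the design.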

  19. GENPLAT: an automated platform for biomass enzyme discovery and cocktail optimization.

    PubMed

    Walton, Jonathan; Banerjee, Goutami; Car, Suzana

    2011-10-24

The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipetting of biomass slurries and enzymes, and automated colorimetric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiment statistical software. Enzyme mixtures of high complexity are constructed using robotic pipetting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ˜10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E.
coli, Pichia pastoris, or a filamentous fungus such as T. reesei. Proteins can also be purified from commercial enzyme cocktails (e.g., Multifect Xylanase, Novozyme 188). An increasing number of pure enzymes, including glycosyl hydrolases, cell wall-active esterases, proteases, and lyases, are available from commercial sources, e.g., Megazyme, Inc. (www.megazyme.com), NZYTech (www.nzytech.com), and PROZOMIX (www.prozomix.com). Design-Expert software (Stat-Ease, Inc.) is used to create simplex-lattice designs and to analyze responses (in this case, Glc and Xyl release). Mixtures contain 4-20 components, which can vary in proportion between 0 and 100%. Assay points typically include the extreme vertices with a sufficient number of intervening points to generate a valid model. In the terminology of experimental design, most of our studies are "mixture" experiments, meaning that the sum of all components adds to a total fixed protein loading (expressed as mg/g glucan). The number of mixtures in the simplex-lattice depends on both the number of components in the mixture and the degree of polynomial (quadratic or cubic). For example, a 6-component experiment will entail 63 separate reactions with an augmented special cubic model, which can detect three-way interactions, whereas only 23 individual reactions are necessary with an augmented quadratic model. For mixtures containing more than eight components, a quadratic experimental design is more practical, and in our experience such models are usually statistically valid. All enzyme loadings are expressed as a percentage of the final total loading (which for our experiments is typically 15 mg protein/g glucan). For "core" enzymes, the lower percentage limit is set to 5%. This limit was derived from our experience in which yields of Glc and/or Xyl were very low if any core enzyme was present at 0%. Poor models result from too many samples showing very low Glc or Xyl yields. 
Setting a lower limit in turn determines an upper limit. That is, for a six-component experiment, if the lower limit for each single component is set to 5%, then the upper limit of each single component will be 75%. The lower limits of all other enzymes considered as "accessory" are set to 0%. "Core" and "accessory" are somewhat arbitrary designations and will differ depending on the substrate, but in our studies the core enzymes for release of Glc from corn stover comprise the following enzymes from T. reesei: CBH1 (also known as Cel7A), CBH2 (Cel6A), EG1 (Cel7B), BG (β-glucosidase), EX3 (endo-β1,4-xylanase, GH10), and BX (β-xylosidase).

  20. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
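The Scheffé models this record generalizes have no intercept, because the mixture constraint x1 + x2 + x3 = 1 makes one redundant. A minimal sketch of fitting a Scheffé quadratic blending model by least squares, on synthetic data with hypothetical coefficients (not the authors' unified model class):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 3-component mixture data (proportions sum to 1) generated from a
# known Scheffe quadratic response plus noise -- illustrative, not real blends.
X = rng.dirichlet(np.ones(3), size=60)
x1, x2, x3 = X.T
y = 4 * x1 + 2 * x2 + 1 * x3 + 6 * x1 * x2 + rng.normal(scale=0.05, size=60)

# Scheffe quadratic design matrix: linear blending terms plus all pairwise
# cross-products; no intercept column because x1 + x2 + x3 = 1.
D = np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
```

Becker-type models replace the cross-product terms with other blending functions (e.g., min(xi, xj)); the unification proposed here lets the data choose among such forms.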

  1. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
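The core idea of the proposal, removing skewness by a Box-Cox transformation so that a symmetric (normal or t) mixture component fits, can be illustrated with SciPy; the EM algorithm for the full t mixture with per-component transformations is beyond a short sketch, and the data below are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# A right-skewed cluster that a symmetric mixture component would fit poorly.
x = rng.lognormal(mean=1.0, sigma=0.6, size=1000)

# Box-Cox: y = (x**lmbda - 1) / lmbda (log(x) as lmbda -> 0),
# with lmbda chosen by maximum likelihood.
y, lmbda = stats.boxcox(x)

skew_before = stats.skew(x)
skew_after = stats.skew(y)
```

In the paper's model the transformation parameter is estimated jointly with the mixture parameters inside EM, so each cluster gets its own skewness correction.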

  2. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  3. Spatiotemporal SNP analysis reveals pronounced biocomplexity at the northern range margin of Atlantic cod Gadus morhua

    PubMed Central

    Therkildsen, Nina Overgaard; Hemmer-Hansen, Jakob; Hedeholm, Rasmus Berg; Wisz, Mary S; Pampoulie, Christophe; Meldrup, Dorte; Bonanomi, Sara; Retzel, Anja; Olsen, Steffen Malskær; Nielsen, Einar Eg

    2013-01-01

    Accurate prediction of species distribution shifts in the face of climate change requires a sound understanding of population diversity and local adaptations. Previous modeling has suggested that global warming will lead to increased abundance of Atlantic cod (Gadus morhua) in the ocean around Greenland, but the dynamics of earlier abundance fluctuations are not well understood. We applied a retrospective spatiotemporal population genomics approach to examine the temporal stability of cod population structure in this region and to search for signatures of divergent selection over a 78-year period spanning major demographic changes. Analyzing >900 gene-associated single nucleotide polymorphisms in 847 individuals, we identified four genetically distinct groups that exhibited varying spatial distributions with considerable overlap and mixture. The genetic composition had remained stable over decades at some spawning grounds, whereas complete population replacement was evident at others. Observations of elevated differentiation in certain genomic regions are consistent with adaptive divergence between the groups, indicating that they may respond differently to environmental variation. Significantly increased temporal changes at a subset of loci also suggest that adaptation may be ongoing. These findings illustrate the power of spatiotemporal population genomics for revealing biocomplexity in both space and time and for informing future fisheries management and conservation efforts. PMID:23789034

  4. Spatiotemporal SNP analysis reveals pronounced biocomplexity at the northern range margin of Atlantic cod Gadus morhua.

    PubMed

    Therkildsen, Nina Overgaard; Hemmer-Hansen, Jakob; Hedeholm, Rasmus Berg; Wisz, Mary S; Pampoulie, Christophe; Meldrup, Dorte; Bonanomi, Sara; Retzel, Anja; Olsen, Steffen Malskær; Nielsen, Einar Eg

    2013-06-01

    Accurate prediction of species distribution shifts in the face of climate change requires a sound understanding of population diversity and local adaptations. Previous modeling has suggested that global warming will lead to increased abundance of Atlantic cod (Gadus morhua) in the ocean around Greenland, but the dynamics of earlier abundance fluctuations are not well understood. We applied a retrospective spatiotemporal population genomics approach to examine the temporal stability of cod population structure in this region and to search for signatures of divergent selection over a 78-year period spanning major demographic changes. Analyzing >900 gene-associated single nucleotide polymorphisms in 847 individuals, we identified four genetically distinct groups that exhibited varying spatial distributions with considerable overlap and mixture. The genetic composition had remained stable over decades at some spawning grounds, whereas complete population replacement was evident at others. Observations of elevated differentiation in certain genomic regions are consistent with adaptive divergence between the groups, indicating that they may respond differently to environmental variation. Significantly increased temporal changes at a subset of loci also suggest that adaptation may be ongoing. These findings illustrate the power of spatiotemporal population genomics for revealing biocomplexity in both space and time and for informing future fisheries management and conservation efforts.

  5. VOXEL-LEVEL MAPPING OF TRACER KINETICS IN PET STUDIES: A STATISTICAL APPROACH EMPHASIZING TISSUE LIFE TABLES.

    PubMed

    O'Sullivan, Finbarr; Muzi, Mark; Mankoff, David A; Eary, Janet F; Spence, Alexander M; Krohn, Kenneth A

    2014-06-01

Most radiotracers used in dynamic positron emission tomography (PET) scanning act in a linear time-invariant fashion so that the measured time-course data are a convolution between the time course of the tracer in the arterial supply and the local tissue impulse response, known as the tissue residue function. In statistical terms the residue is a life table for the transit time of injected radiotracer atoms. The residue provides a description of the tracer kinetic information measurable by a dynamic PET scan. Decomposition of the residue function allows separation of rapid vascular kinetics from slower blood-tissue exchanges and tissue retention. For voxel-level analysis, we propose that residues be modeled by mixtures of nonparametrically derived basis residues obtained by segmentation of the full data volume. Spatial and temporal aspects of diagnostics associated with voxel-level model fitting are emphasized. Illustrative examples, some involving cancer imaging studies, are presented. Data from cerebral PET scanning with ¹⁸F-fluorodeoxyglucose (FDG) and ¹⁵O-water (H₂O) in normal subjects are used to evaluate the approach. Cross-validation is used to make regional comparisons between residues estimated using adaptive mixture models and residues estimated with more conventional compartmental modeling techniques. Simulation studies are used to theoretically examine mean square error performance and to explore the benefit of voxel-level analysis when the primary interest is a statistical summary of regional kinetics. The work highlights the contribution that multivariate analysis tools and life-table concepts can make in the recovery of local metabolic information from dynamic PET studies, particularly ones in which the assumptions of compartmental-like models, with residues that are sums of exponentials, might not be certain.
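The linear time-invariant relation described above, a tissue time course equal to the arterial input convolved with the residue, can be sketched numerically. The single-exponential residue and gamma-variate-like input below are hypothetical stand-ins, not the nonparametric basis residues the authors derive by segmentation.

```python
import numpy as np

dt = 1.0                       # seconds per frame (illustrative)
t = np.arange(0, 120, dt)

# Hypothetical arterial input function: a gamma-variate-like bolus.
cp = (t / 10.0) * np.exp(-t / 10.0)

# Hypothetical residue R(t): in life-table terms, the (scaled) fraction of
# delivered tracer atoms still in tissue at transit time t; here a single
# compartmental-like exponential with uptake K1 and washout rate k2.
K1, k2 = 0.3, 0.05
residue = K1 * np.exp(-k2 * t)

# Measured tissue time course = convolution of input with residue.
tissue = np.convolve(cp, residue)[:len(t)] * dt
```

Replacing the exponential with a mixture of segmentation-derived basis residues, fitted voxel by voxel, is the adaptive mixture approach the record describes.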

  6. Mixed-up trees: the structure of phylogenetic mixtures.

    PubMed

    Matsen, Frederick A; Mossel, Elchanan; Steel, Mike

    2008-05-01

    In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.

  7. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
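A minimally supported design has exactly as many runs as model parameters, so its size follows directly from the Scheffé parameter counts. A small sketch of those counts (standard Scheffé term counts, not the authors' extended designs):

```python
from math import comb

def scheffe_params(q, model):
    """Number of parameters in a Scheffe mixture model for q components
    (= number of runs in a minimally supported D-optimal design)."""
    if model == "linear":            # terms x_i
        return q
    if model == "quadratic":         # + pairwise x_i x_j
        return q + comb(q, 2)
    if model == "special cubic":     # + three-way x_i x_j x_k
        return q + comb(q, 2) + comb(q, 3)
    if model == "full cubic":        # + x_i x_j (x_i - x_j) terms as well
        return q + 2 * comb(q, 2) + comb(q, 3)
    raise ValueError(model)

# For 3 components: 3, 6, 7, and 10 runs respectively.
sizes = [scheffe_params(3, m)
         for m in ("linear", "quadratic", "special cubic", "full cubic")]
```

With zero residual degrees of freedom at these sizes, Lack of Fit cannot be tested; hence the paper's interest in augmenting the minimal designs with interior points.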

  8. Mixed genotype transmission bodies and virions contribute to the maintenance of diversity in an insect virus

    PubMed Central

    Clavijo, Gabriel; Williams, Trevor; Muñoz, Delia; Caballero, Primitivo; López-Ferber, Miguel

    2010-01-01

An insect nucleopolyhedrovirus naturally survives as a mixture of at least nine genotypes. Infection by multiple genotypes results in the production of virus occlusion bodies (OBs) with greater pathogenicity than those of any genotype alone. We tested the hypothesis that each OB contains a genotypically diverse population of virions. Few insects died following inoculation with an experimental two-genotype mixture at a dose of one OB per insect, but a high proportion of multiple infections were observed (50%), which differed significantly from the frequencies predicted by a non-associated transmission model in which genotypes are segregated into distinct OBs. By contrast, insects that consumed multiple OBs experienced higher mortality and infection frequencies did not differ significantly from those of the non-associated model. Inoculation with genotypically complex wild-type OBs indicated that genotypes tend to be transmitted in association, rather than as independent entities, irrespective of dose. To examine the hypothesis that virions may themselves be genotypically heterogeneous, cell culture plaques derived from individual virions were analysed to reveal that one-third of virions were of mixed genotype, irrespective of the genotypic composition of the OBs. We conclude that co-occlusion of genotypically distinct virions in each OB is an adaptive mechanism that favours the maintenance of virus diversity during insect-to-insect transmission. PMID:19939845
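The "non-associated" null model above can be sketched as a small Monte Carlo: if every OB carries a single genotype drawn independently, a host becomes mixed-infected only by consuming at least one OB of each genotype. The two-genotype, equal-frequency setup below is an illustrative simplification of that null model, not the authors' statistical analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

def mixed_fraction(dose, n_hosts=100_000, p=0.5):
    """Fraction of hosts carrying both of two genotypes, assuming each of
    `dose` consumed OBs holds one genotype drawn independently with
    frequency p (the non-associated null model)."""
    g1 = rng.binomial(dose, p, size=n_hosts)   # OBs of genotype 1 per host
    return ((g1 > 0) & (g1 < dose)).mean()     # mixed = at least one of each

# At a dose of one OB the null model can produce no mixed infections at all,
# so the observed 50% mixed infections argues for co-occlusion within OBs.
frac_dose1 = mixed_fraction(1)
frac_dose10 = mixed_fraction(10)
```

At higher doses the null model itself predicts mostly mixed infections, which is consistent with the abstract's finding that multi-OB doses did not discriminate between the models.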

  9. Bioanalytical evidence that chemicals in tattoo ink can induce adaptive stress responses.

    PubMed

    Neale, Peta A; Stalter, Daniel; Tang, Janet Y M; Escher, Beate I

    2015-10-15

    Tattooing is becoming increasingly popular, particularly amongst young people. However, tattoo inks contain a complex mixture of chemical impurities that may pose a long-term risk for human health. As a first step towards the risk assessment of these complex mixtures we propose to assess the toxicological hazard potential of tattoo ink chemicals with cell-based bioassays. Targeted modes of toxic action and cellular endpoints included cytotoxicity, genotoxicity and adaptive stress response pathways. The studied tattoo inks, which were extracted with hexane as a proxy for the bioavailable fraction, caused effects in all bioassays, with the red and yellow tattoo inks having the greatest response, particularly inducing genotoxicity and oxidative stress response endpoints. Chemical analysis revealed the presence of polycyclic aromatic hydrocarbons in the tested black tattoo ink at concentrations twice the recommended level. The detected polycyclic aromatic hydrocarbons only explained 0.06% of the oxidative stress response of the black tattoo ink, thus the majority of the effect was caused by unidentified components. The study indicates that currently available tattoo inks contain components that induce adaptive stress response pathways, but to evaluate the risk to human health further work is required to understand the toxicokinetics of tattoo ink chemicals in the body. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. New approach in direct-simulation of gas mixtures

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren

    1991-01-01

    Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.

  11. Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation

    NASA Astrophysics Data System (ADS)

    Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter

    2016-11-01

    Two common models describing gas mixtures are Dalton's Law and Amagat's Law (also known as the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models to prediction of effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate experimental data, possible sources of uncertainty associated with experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of disparate gases - helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium, and a virial EOS for SF6. The values for the properties provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
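
    The two laws are easy to contrast numerically once one component is nonideal. The sketch below (Python) uses an ideal EOS for helium and a truncated virial EOS for SF6; the amounts, temperature, and the SF6 second virial coefficient are illustrative assumptions, not values from the experiment.

```python
import math

R, T = 8.314, 300.0              # J/(mol K), K
B_SF6 = -2.75e-4                 # m^3/mol: illustrative (assumed) 2nd virial coeff.
n_he, n_sf6, V = 0.5, 0.5, 0.01  # mol, mol, m^3

def p_virial(rho, B):
    """Truncated virial EOS, P = R*T*rho*(1 + B*rho); B = 0 recovers ideal gas."""
    return R * T * rho * (1.0 + B * rho)

def rho_virial(P, B):
    """Physical (low-density) root of the virial EOS at pressure P."""
    a, b = R * T * B, R * T
    return (-b + math.sqrt(b * b + 4.0 * a * P)) / (2.0 * a)

# Dalton: each component alone fills the mixture volume; pressures add.
p_dalton = p_virial(n_he / V, 0.0) + p_virial(n_sf6 / V, B_SF6)

# Amagat: partial volumes evaluated at the common pressure P must sum to V.
def excess_volume(P):
    return n_he * R * T / P + n_sf6 / rho_virial(P, B_SF6) - V

lo, hi = 1e4, 1e6
for _ in range(100):                 # bisection: excess volume falls with P
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if excess_volume(mid) > 0.0 else (lo, mid)
p_amagat = 0.5 * (lo + hi)

print(f"Dalton : {p_dalton / 1e3:.1f} kPa")
print(f"Amagat : {p_amagat / 1e3:.1f} kPa")  # the laws disagree for nonideal SF6
```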

  12. Bayesian 2-Stage Space-Time Mixture Modeling With Spatial Misalignment of the Exposure in Small Area Health Data.

    PubMed

    Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong

    2012-09-01

    We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The proposed two-stage mixture model allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of exposure and health data. A simulation study is conducted to assess the performance of the two-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.

  13. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.

    PubMed

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-03-13

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
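
    The 16 + 8 + 4 run layout described above resembles a central composite design. The sketch below (Python with NumPy) generates such a matrix in coded units; the factor ordering, the 2^(5-1) generator, and the choice to give the categorical binder-type factor no axial runs are assumptions made only to reproduce the stated run counts.

```python
import itertools
import numpy as np

# Coded factors (assumed order): binder type (categorical, +/-1), binder
# content, VMA dosage, w/cm, S/A.
k = 5

# 2^(5-1) fractional factorial: 16 runs, last column generated as the
# product of the first four (an assumed resolution-V generator).
base = np.array(list(itertools.product([-1, 1], repeat=k - 1)))
factorial = np.column_stack([base, base.prod(axis=1)])

# 8 axial runs at +/-2 for the four continuous factors, others at the centre.
axial = []
for i in range(1, k):                 # skip the categorical binder-type factor
    for level in (-2, 2):
        run = np.zeros(k)
        run[i] = level
        axial.append(run)
axial = np.array(axial)

center = np.zeros((4, k))             # 4 replicate centre runs
design = np.vstack([factorial, axial, center])
print(design.shape)                   # 16 + 8 + 4 = 28 runs, 5 factors
```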

  14. Some comments on thermodynamic consistency for equilibrium mixture equations of state

    DOE PAGES

    Grove, John W.

    2018-03-28

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  15. Adaptive nitrogen and integrated weed management in conservation agriculture: impacts on agronomic productivity, greenhouse gas emissions, and herbicide residues.

    PubMed

    Oyeogbe, Anthony Imoudu; Das, T K; Bhatia, Arti; Singh, Shashi Bala

    2017-04-01

    Increasing nitrogen (N) immobilization and weed interference in the early phase of implementation of conservation agriculture (CA) affects crop yields. Yet, higher fertilizer and herbicide use to improve productivity influences greenhouse gas emissions and herbicide residues. These tradeoffs precipitated a need for adaptive N and integrated weed management in a CA-based maize (Zea mays L.)-wheat [Triticum aestivum (L.) emend Fiori & Paol] cropping system in the Indo-Gangetic Plains (IGP) to optimize N availability and reduce weed proliferation. Adaptive N fertilization was based on soil test value and normalized difference vegetation index measurement (NDVM) by GreenSeeker™ technology, while integrated weed management included brown manuring (Sesbania aculeata L. co-culture, killed at 25 days after sowing), herbicide mixture, and weedy check (control, i.e., without weed management). Results indicated that the 'best-adaptive N rate' (i.e., 50% basal + 25% broadcast at 25 days after sowing + supplementary N guided by NDVM) increased maize and wheat grain yields by 20 and 14% (averaged for 2 years), respectively, compared with the whole recommended N applied at sowing. Weed management by brown manuring (during maize) and herbicide mixture (during wheat) resulted in 10 and 21% higher grain yields (averaged for 2 years), respectively, over the weedy check. The NDVM in-season N fertilization and brown manuring affected N2O and CO2 emissions, but resulted in improved carbon storage efficiency, while herbicide residues in soil were significantly lower in the maize season than in wheat cropping. This study concludes that adaptive N and integrated weed management enhance synergy between agronomic productivity, fertilizer and herbicide efficiency, and greenhouse gas mitigation.

  16. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
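
    The heavier tails can be seen directly from the log-densities. The sketch below (Python, standard library only) compares a standard normal with a Student-t with 3 degrees of freedom (an illustrative choice of the degrees-of-freedom parameter): a 6-sigma outlier is catastrophically improbable under the Gaussian but only mildly surprising under the t, which is why a single outlier dominates a Gaussian fit yet barely moves a Student-t fit.

```python
import math

def norm_logpdf(x):
    """Standard normal log-density."""
    return -0.5 * (math.log(2 * math.pi) + x * x)

def t_logpdf(x, nu=3.0):
    """Student-t log-density with nu degrees of freedom."""
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi)
            - (nu + 1) / 2 * math.log1p(x * x / nu))

for x in (0.0, 2.0, 6.0):
    print(f"x = {x}: normal {norm_logpdf(x):7.2f}, t(3) {t_logpdf(x):7.2f}")
# At x = 6 the normal log-density is about -18.9 versus about -6.1 for t(3).
```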

  17. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed Central

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-01-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
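
    The core of the spurious-peak problem, that a two-component mixture never fits worse than a single normal, can be demonstrated on pure noise. The sketch below (Python with NumPy) fits both models to data with no QTL effect at all; the common-variance mixture, EM settings, and sample size are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 200)          # null phenotype: no QTL effect at all

def phi(y, m, s):
    """Normal density with mean m and standard deviation s."""
    return np.exp(-(y - m) ** 2 / (2 * s * s)) / np.sqrt(2 * np.pi * s * s)

def loglik_single(y):
    return np.log(phi(y, y.mean(), y.std())).sum()

def loglik_mixture(y, iters=300):
    """EM for a two-component normal mixture with common variance,
    the model fitted at a putative QTL in standard interval mapping."""
    mu = np.array([y.mean() - 0.5 * y.std(), y.mean() + 0.5 * y.std()])
    sd, w = y.std(), 0.5
    for _ in range(iters):
        d = np.stack([w * phi(y, mu[0], sd), (1 - w) * phi(y, mu[1], sd)])
        r = d / d.sum(axis=0)                    # posterior genotype weights
        w = r[0].mean()
        mu = (r @ y) / np.maximum(r.sum(axis=1), 1e-12)
        sd = np.sqrt((r * (y - mu[:, None]) ** 2).sum() / len(y))
    return np.log(d.sum(axis=0)).sum()

lod = (loglik_mixture(y) - loglik_single(y)) / np.log(10)
print(f"LOD on pure noise: {lod:.2f}")  # >= 0 (up to EM convergence), no QTL
```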

  18. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  19. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.

  20. Mixture of autoregressive modeling orders and its implication on single trial EEG classification

    PubMed Central

    Atyabi, Adham; Shic, Frederick; Naples, Adam

    2016-01-01

    Autoregressive (AR) models are among the most commonly utilized feature types in Electroencephalogram (EEG) studies due to offering better resolution, smoother spectra and being applicable to short segments of data. Identifying the correct AR modeling order is an open challenge. Lower model orders poorly represent the signal while higher orders increase noise. Conventional methods for estimating modeling order include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to better represent the true signal compared to any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by increasing the timely and correct responsiveness of such systems to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR-mixtures is assessed against several conventional methods utilized by the community, including (1) a well-known set of commonly used orders suggested by the literature, (2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and (3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
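
    As a baseline for the conventional order-selection methods the article compares against, the sketch below (Python with NumPy) fits least-squares AR(p) models to a simulated AR(2) series and scores them with AIC; the simulated coefficients and the candidate order range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
x = np.zeros(n)
e = rng.normal(size=n)
for t in range(2, n):                  # simulate a known AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns the residual variance."""
    X = np.column_stack([x[p - k : len(x) - k] for k in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ coef)

def aic(x, p):
    """AIC for an AR(p) fit: N * log(sigma^2) + 2p."""
    N = len(x) - p
    return N * np.log(fit_ar(x, p)) + 2 * p

scores = {p: aic(x, p) for p in range(1, 11)}
best = min(scores, key=scores.get)
print("AIC-selected AR order:", best)
```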

  1. Single- and mixture toxicity of three organic UV-filters, ethylhexyl methoxycinnamate, octocrylene, and avobenzone on Daphnia magna.

    PubMed

    Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun

    2017-03-01

    In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The values of effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; concentration-combinations of the three different UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. The interactions between the UV-filters were also assessed by the model deviation ratio (MDR) using observed and predicted toxicity values obtained from mixture-exposure tests and the CA model. The results from this study indicated that observed ECxmix (e.g., EC10mix, EC25mix, or EC50mix) values obtained from mixture-exposure tests were higher than the predicted ECxmix values calculated by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three different UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings will provide important information for hazard or risk assessment of organic UV-filters when they exist together in the aquatic environment. To better understand the mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
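
    The CA prediction and the MDR used above reduce to two one-line formulas. The sketch below (Python) shows them with made-up EC50 values and mixture fractions (the real single-substance EC50s are in the paper, not reproduced here); an MDR below 1 corresponds to the antagonism the study reports.

```python
def ca_ecx_mix(fractions, ecx):
    """Concentration-addition prediction: ECx_mix = 1 / sum(p_i / ECx_i),
    where p_i is the fraction of component i in the mixture."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ecx))

def model_deviation_ratio(predicted, observed):
    """MDR = predicted / observed ECx_mix; values < 1 mean the mixture is
    less toxic than concentration addition predicts (antagonism)."""
    return predicted / observed

ec50 = [1.2, 0.8, 2.0]          # hypothetical single-substance EC50s (mg/L)
frac = [0.3, 0.2, 0.5]          # hypothetical mixture fractions
pred = ca_ecx_mix(frac, ec50)
obs = 1.6                       # hypothetical observed EC50_mix (mg/L)
print(round(pred, 3))                              # 1.333
print(round(model_deviation_ratio(pred, obs), 3))  # 0.833 < 1: antagonism
```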

  2. Cumulative toxicity of neonicotinoid insecticide mixtures to Chironomus dilutus under acute exposure scenarios.

    PubMed

    Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten

    2017-11-01

    Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. 
Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.

  3. The 4-parameter Compressible Packing Model (CPM) including a critical cavity size ratio

    NASA Astrophysics Data System (ADS)

    Roquier, Gerard

    2017-06-01

    The 4-parameter Compressible Packing Model (CPM) has been developed to predict the packing density of mixtures of bidisperse spherical particles. The four parameters are: the wall effect and the loosening effect coefficients, the compaction index and a critical cavity size ratio. The two geometrical interactions have been studied theoretically on the basis of a spherical cell centered on a secondary class bead. For the loosening effect, a critical cavity size ratio, below which a fine particle can be inserted into a small cavity created by touching coarser particles, is introduced. This is the only parameter which requires adaptation to extend the model to other types of particles. The 4-parameter CPM demonstrates its efficiency on frictionless glass beads (300 values), numerically simulated spherical particles (20 values), round natural particles (125 values) and crushed particles (335 values), with correlation coefficients of 99.0%, 98.7%, 97.8%, and 96.4%, respectively, and mean deviations of 0.007, 0.006, 0.007, and 0.010, respectively.
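
    For context, the classical two-parameter CPM that this 4-parameter model extends can be sketched in a few lines (Python). The formulas follow de Larrard's compressible packing model as commonly stated; the residual packing values, the interaction coefficients a and b, and the compaction index K = 9 are illustrative assumptions rather than values fitted in the paper.

```python
def gammas(y, beta, a, b):
    """Virtual packing densities of a binary mix (class 1 coarse, class 2
    fine) with loosening coefficient a and wall-effect coefficient b."""
    y1, y2 = y
    b1, b2 = beta
    g1 = b1 / (1.0 - (1.0 - a * b1 / b2) * y2)                     # coarse dominant
    g2 = b2 / (1.0 - (1.0 - b2 + b * b2 * (1.0 - 1.0 / b1)) * y1)  # fine dominant
    return g1, g2

def actual_packing(y, beta, a, b, K=9.0):
    """Actual packing density phi from the compaction index K, solving
    K = sum_i (y_i / beta_i) / (1/phi - 1/gamma_i) by bisection."""
    g = gammas(y, beta, a, b)
    def k_of(phi):
        return sum(yi / bi / (1.0 / phi - 1.0 / gi)
                   for yi, bi, gi in zip(y, beta, g))
    lo, hi = 1e-6, min(g) - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if k_of(mid) < K else (lo, mid)
    return 0.5 * (lo + hi)

# Equal-volume coarse/fine spheres, residual packing 0.64 each; a and b would
# normally come from size-ratio functions and are assumed here.
print(round(actual_packing((0.5, 0.5), (0.64, 0.64), a=0.7, b=0.6), 3))
```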

  4. Effects of a Culturally Adapted HIV Prevention Intervention in Haitian Youth

    PubMed Central

    Malow, Robert M.; Stein, Judith A.; McMahon, Robert C.; Dévieux, Jessy G.; Rosenberg, Rhonda; Jean-Gilles, Michèle

    2009-01-01

    This study assessed the impact of an 8-week community-based translation of Becoming a Responsible Teen (BART), an HIV intervention that has been shown to be effective in other at-risk adolescent populations. A sample of Haitian adolescents living in the Miami area was randomized to a general health education control group (N = 101) or the BART intervention (N = 145), which is based on the information-motivation-behavior (IMB) model. Improvement in various IMB components (i.e., attitudinal, knowledge, and behavioral skills variables) related to condom use was assessed 1 month after the intervention. Longitudinal structural equation models using a mixture of latent and measured multi-item variables indicated that the intervention significantly and positively impacted all IMB variables tested in the model. These BART intervention-linked changes reflected greater knowledge, greater intentions to use condoms in the future, higher safer sex self-efficacy, an improved attitude about condom use and an enhanced ability to use condoms after the 8-week intervention. PMID:19286123

  5. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  6. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.

  7. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.

  8. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  9. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  10. Predicting the shock compression response of heterogeneous powder mixtures

    NASA Astrophysics Data System (ADS)

    Fredenburg, D. A.; Thadhani, N. N.

    2013-06-01

    A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is implemented into a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low and high stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between the model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate-dependencies of the constituents, and specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete compaction regime.
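
    For readers unfamiliar with the P-α framework the densification modulus is plugged into, the sketch below (Python) shows a generic Herrmann-type rate-independent crush curve relating distension α = v/v_solid to pressure; the initial distension, elastic limit, solidification pressure, and exponent are illustrative values, not those of the thermite mixtures studied.

```python
def p_alpha(P, alpha0=1.6, Pe=0.05e9, Ps=1.0e9, n=2):
    """Rate-independent P-alpha crush curve (Herrmann-type):
    distension alpha = v / v_solid as a function of pressure P (Pa)."""
    if P <= Pe:
        return alpha0          # no crushing below the elastic limit (simplified)
    if P >= Ps:
        return 1.0             # fully compacted: pores eliminated
    return 1.0 + (alpha0 - 1.0) * ((Ps - P) / (Ps - Pe)) ** n

for P in (0.0, 0.2e9, 0.5e9, 1.2e9):
    print(f"P = {P / 1e9:.1f} GPa -> alpha = {p_alpha(P):.3f}")
```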

  11. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
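
    The D-optimality criterion described above can be sketched for a simple threshold model. Everything below is illustrative: the piecewise-linear model mu(d) = b0 + b1*max(0, d - delta), the prior parameter guesses, and the candidate dose grid are hypothetical stand-ins, not the chlorpyrifos:carbaryl design of the study. The criterion maximizes the determinant of the Fisher information, which for constant error variance minimizes the generalized variance of the parameter estimates.

```python
from itertools import combinations
import numpy as np

# Illustrative threshold model mu(d) = b0 + b1*max(0, d - delta) with
# hypothetical prior guesses; not the study's chlorpyrifos:carbaryl setup.
b1, delta = 2.0, 1.0

def grad(d):
    """Gradient of mu(d) w.r.t. (b0, b1, delta) at the prior guesses."""
    return np.array([1.0, max(0.0, d - delta), -b1 * (d > delta)])

def d_criterion(doses):
    """log det of the Fisher information for equal allocation to `doses`."""
    M = sum(np.outer(grad(d), grad(d)) for d in doses)
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

# Exhaustive search over all 3-point designs on a candidate dose grid.
grid = np.linspace(0.0, 4.0, 9)
best = max(combinations(grid, 3), key=d_criterion)
best = tuple(map(float, best))
print("D-optimal 3-point design:", best)
```

    Designs lacking a sub-threshold dose, or with fewer than two distinct supra-threshold doses, have a singular information matrix and are rejected automatically, which illustrates why D-optimal designs for threshold models depend on the prior specification of the slope and threshold parameters.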

  12. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    NASA Astrophysics Data System (ADS)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and the rest of the volume as that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle: only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model and suggests the necessity of a gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently from the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as the weighted average of the two cases, weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. Furthermore, elastic properties are obtained from the general Hashin-Shtrikman-Walpole bounds. The predictions of this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy of the River Environment Fund of Japan.
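
    The last two steps, effective grain size and Kozeny-Carman permeability, can be sketched as follows. The volume fractions, representative grain sizes, effective porosity, and the harmonic (surface-area-weighted) averaging rule are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kozeny_carman(phi_eff, d_eff):
    """Permeability (m^2) from effective porosity and effective grain
    size (m): k = phi^3 * d^2 / (180 * (1 - phi)^2)."""
    return phi_eff ** 3 * d_eff ** 2 / (180.0 * (1.0 - phi_eff) ** 2)

# Illustrative gravel/sand/clay mixture; the numbers and the harmonic
# averaging of grain size are assumptions for demonstration only.
fractions = np.array([0.05, 0.75, 0.20])     # gravel, sand, clay
grain_sizes = np.array([1e-2, 2e-4, 1e-6])   # representative diameters, m
d_eff = 1.0 / np.sum(fractions / grain_sizes)
phi_eff = 0.25                               # clay pores excluded
k_pred = kozeny_carman(phi_eff, d_eff)
print(f"k ~ {k_pred:.3e} m^2")
```

    Note how the tiny clay grain size dominates the harmonic mean, which is why even small clay fractions depress the predicted permeability so strongly.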

  13. Gas-fired radiant heater

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rattner, D.

    1987-01-06

    A heater apparatus is described comprising a plurality of porous tile members arranged in an elongated series; fuel distribution means adapted to support the tile members and having a baffled compartmentalized chamber transversely integrally formed therein for delivering a predetermined ignitable fuel mixture evenly to the tile members; and air circulation means adapted to substantially encase the fuel distribution means for circulating cooling air thereabout. The air circulation means is formed to provide an elongated and deflected air gap along opposite edges thereof to direct vented air away from the tile members.

  14. Adaptive simplification of complex multiscale systems.

    PubMed

    Chiavazzo, Eliodoro; Karlin, Ilya

    2011-03-01

    A fully adaptive methodology is developed for reducing the complexity of large dissipative systems. This represents a significant step toward extracting essential physical knowledge from complex systems, by addressing the challenging problem of a minimal number of variables needed to exactly capture the system dynamics. Accurate reduced description is achieved, by construction of a hierarchy of slow invariant manifolds, with an embarrassingly simple implementation in any dimension. The method is validated with the autoignition of the hydrogen-air mixture where a reduction to a cascade of slow invariant manifolds is observed.

  15. Vehicle Detection with Occlusion Handling, Tracking, and OC-SVM Classification: A High Performance Vision-Based System

    PubMed Central

    Velazquez-Pupo, Roxana; Sierra-Romero, Alberto; Torres-Roman, Deni; Shkvarko, Yuriy V.; Romero-Delgado, Misael

    2018-01-01

    This paper presents a high-performance vision-based system with a single static camera for traffic surveillance, performing moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using an adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. Since occlusions are present, an algorithm was implemented to reduce their effect. Tracking is performed with an adaptive Kalman filter. Finally, the selected geometric features (estimated area, height, and width) are used by different classifiers to sort vehicles into three classes: small, midsize, and large. Extensive experiments on eight real traffic videos with more than 4000 ground-truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate (recall), precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles. PMID:29382078
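
    The background-subtraction stage can be illustrated in miniature. The paper uses a full adaptive Gaussian Mixture Model per pixel; the sketch below keeps only a single adaptive Gaussian per pixel (running mean and variance with a learning rate), which shows the same deviation-test-then-update mechanics with far less code. All parameter values are illustrative.

```python
import numpy as np

class AdaptiveBackground:
    """Miniature stand-in for adaptive GMM background subtraction:
    one adaptive Gaussian (running mean/variance) per pixel instead of
    a full mixture. alpha (learning rate) and k (deviation threshold)
    are illustrative choices."""

    def __init__(self, shape, alpha=0.05, k=2.5):
        self.mean = np.zeros(shape)
        self.var = np.full(shape, 15.0 ** 2)  # generous initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        d = frame - self.mean
        # A pixel is foreground if it deviates by more than k sigma.
        foreground = d ** 2 > (self.k ** 2) * self.var
        # Exponentially forgetful update of the per-pixel Gaussian.
        self.mean += self.alpha * d
        self.var += self.alpha * (d ** 2 - self.var)
        return foreground

model = AdaptiveBackground((4, 4))
for _ in range(50):                       # learn a static background
    model.apply(np.full((4, 4), 100.0))
scene = np.where(np.eye(4) > 0, 200.0, 100.0)   # bright "vehicle" pixels
mask = model.apply(scene)
print(mask.astype(int))
```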

  16. Improved patch-based learning for image deblurring

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng

    2015-05-01

    Most recent image deblurring methods use only the valid information found in the input image as the clue for restoring the degraded regions. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. The patch-based method not only uses the valid information of the input image itself but also exploits the prior information of sample images to improve adaptiveness. However, the cost function of this method is quite time-consuming to evaluate, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of the Gaussian mixture model components with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to suppress the ringing artifacts produced by the traditional patch-based method. Extensive experiments verify that our method effectively reduces execution time, suppresses ringing artifacts, and preserves the quality of the deblurred image.

  17. A numerical study of granular dam-break flow

    NASA Astrophysics Data System (ADS)

    Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.

    2017-12-01

    Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows, and tailings-dam-break flows. So far, most successful models for these types of flows address either pure granular flows or flows of saturated grain-fluid mixtures by employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when applied to flows of partially saturated mixtures; therefore, more advanced models are needed. A numerical model was developed for granular flow employing constant friction and μ(I) rheology (Jop et al., J. Fluid Mech. 2005), coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model with the finite volume method (FVM), and the volume-of-fluid (VOF) technique is used to capture the free-surface motion. The constant friction and μ(I) rheological models are incorporated in the mixture model, and the seepage flow is modeled by solving the Richards equation. A framework is developed to couple these two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters was performed. The simulation results obtained from the constant friction and μ(I) rheological models are compared with laboratory experiments in terms of the granular free-surface interface, front position, and velocity field during the flows. The numerical predictions indicate that the proposed model is promising in predicting the dynamics of the flow and the deposition process, and it may provide more reliable insight than previous models that assume a saturated mixture when saturated and partially saturated portions of the granular mixture co-exist.

  18. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    PubMed

    Loxley, P N

    2017-10-01

    The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
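
    For reference, the two-dimensional Gabor function itself is a Gaussian envelope multiplying an oriented sinusoid; the size parameters (sigma_x, sigma_y), their aspect ratio, and the spatial frequency are exactly the quantities the learning adapts. A minimal sketch:

```python
import numpy as np

def gabor_2d(size, sigma_x, sigma_y, freq, theta=0.0, phase=0.0):
    """Two-dimensional Gabor function: an oriented Gaussian envelope
    multiplying a cosine grating. sigma_x/sigma_y set the envelope size
    (their ratio is the aspect ratio), freq the spatial frequency."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates by theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

g = gabor_2d(size=21, sigma_x=3.0, sigma_y=5.0, freq=0.15)
print(g.shape)
```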

  19. Modeling of the Urban Heat Island (UHI) using WRF - Assessment of adaptation and mitigation strategies for the city of Stuttgart.

    NASA Astrophysics Data System (ADS)

    Fallmann, Joachim; Suppan, Peter; Emeis, Stefan

    2013-04-01

    Cities are warmer than their surroundings (the urban heat island, UHI, effect). UHIs influence urban atmospheric circulation, air quality, and ecological conditions: they lead to upward motion and compensating near-surface inflow from the surroundings, which imports rural trace substances, while chemical and aerosol formation processes are modified by increased temperature, reduced humidity, and modified urban-rural trace substance mixtures. UHIs produce enhanced heat stress for humans, animals, and plants, less water availability, and modified air quality. Growing cities and climate change will aggravate the UHI and its effects and urgently require adaptation and mitigation strategies. Prior to this, UHI properties must be assessed by surface observations, ground- and satellite-based vertical remote sensing, and numerical modelling. The Weather Research and Forecasting Model (WRF) is an instrument to simulate and assess this phenomenon based on boundary conditions from observations and global climate models. Three urbanization schemes are available with WRF; these are tested in this study for different weather conditions in central Europe and will be enhanced if necessary. High-resolution land use maps are used for this modeling effort, and in situ measurements and Landsat thermal images are employed for validation of the results. The study focuses on the city of Stuttgart in south-western Germany, situated in a caldera-like orographic feature. This municipality has a long tradition in urban climate research and thus is well equipped with climatological measurement stations. By using Geographical Information Systems (GIS), it is possible to simulate several scenarios for different surface properties. By increasing the albedo of roof and wall layers in the urban canopy model, or by replacing urban land use with natural vegetation, simple urban planning strategies can be tested and their effect on urban heat island formation and air quality investigated. These numerical simulations will then be used to assess the effectiveness and impact of planned adaptation and mitigation actions for the UHI under present and future climate conditions. Urban air quality is the focus of these studies. The study is funded by EU-Project 3CE292P3 - "UHI - Development and application of mitigation and adaptation strategies and measures for counteracting the global UHI phenomenon."

  20. Mixture theory-based poroelasticity as a model of interstitial tissue growth

    PubMed Central

    Cowin, Stephen C.; Cardoso, Luis

    2011-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481

  1. Mixture theory-based poroelasticity as a model of interstitial tissue growth.

    PubMed

    Cowin, Stephen C; Cardoso, Luis

    2012-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues.

  2. Bioanalytical assessment of adaptive stress responses in drinking water: A predictive tool to differentiate between micropollutants and disinfection by-products.

    PubMed

    Hebert, Armelle; Feliers, Cedric; Lecarpentier, Caroline; Neale, Peta A; Schlichting, Rita; Thibert, Sylvie; Escher, Beate I

    2018-04-01

    Drinking water can contain low levels of micropollutants, as well as disinfection by-products (DBPs) that form from the reaction of disinfectants with organic and inorganic matter in water. Due to the complex mixture of trace chemicals in drinking water, targeted chemical analysis alone is not sufficient for monitoring. The current study aimed to apply in vitro bioassays indicative of adaptive stress responses to monitor the toxicological profiles and the formation of DBPs in three drinking water distribution systems in France. Bioanalysis was complemented with chemical analysis of forty DBPs. All water samples were active in the oxidative stress response assay, but only after considerable sample enrichment. As both micropollutants in source water and DBPs formed during treatment can contribute to the effect, the bioanalytical equivalent concentration (BEQ) approach was applied for the first time to determine the contribution of DBPs, with DBPs found to contribute between 17 and 58% of the oxidative stress response. Further, the BEQ approach was also used to assess the contribution of volatile DBPs to the observed effect, with detected volatile DBPs found to have only a minor contribution as compared to the measured effects of the non-volatile chemicals enriched by solid-phase extraction. The observed effects in the distribution systems were below any level of concern, quantifiable only at high enrichment and not different from bottled mineral water. Integrating bioanalytical tools and the BEQ mixture model for monitoring drinking water quality is an additional assurance that chemical monitoring is not overlooking any unknown chemicals or transformation products and can help to ensure chemically safe drinking water. Copyright © 2017. Published by Elsevier Ltd.

  3. The Impact II, a Very High-Resolution Quadrupole Time-of-Flight Instrument (QTOF) for Deep Shotgun Proteomics*

    PubMed Central

    Beck, Scarlet; Michalski, Annette; Raether, Oliver; Lubeck, Markus; Kaspar, Stephanie; Goedecke, Niels; Baessmann, Carsten; Hornburg, Daniel; Meier, Florian; Paron, Igor; Kulak, Nils A.; Cox, Juergen; Mann, Matthias

    2015-01-01

    Hybrid quadrupole time-of-flight (QTOF) mass spectrometry is one of the two major principles used in proteomics. Although based on simple fundamentals, it has over the last decades greatly evolved in terms of achievable resolution, mass accuracy, and dynamic range. The Bruker impact platform of QTOF instruments takes advantage of these developments and here we develop and evaluate the impact II for shotgun proteomics applications. Adaptation of our heated liquid chromatography system achieved very narrow peptide elution peaks. The impact II is equipped with a new collision cell with both axial and radial ion ejection, more than doubling ion extraction at high tandem MS frequencies. The new reflectron and detector improve resolving power compared with the previous model up to 80%, i.e. to 40,000 at m/z 1222. We analyzed the ion current from the inlet capillary and found very high transmission (>80%) up to the collision cell. Simulation and measurement indicated 60% transfer into the flight tube. We adapted MaxQuant for QTOF data, improving absolute average mass deviations to better than 1.45 ppm. More than 4800 proteins can be identified in a single run of HeLa digest in a 90 min gradient. The workflow achieved high technical reproducibility (R2 > 0.99) and accurate fold change determination in spike-in experiments in complex mixtures. Using label-free quantification we rapidly quantified haploid against diploid yeast and characterized overall proteome differences in mouse cell lines originating from different tissues. Finally, after high pH reversed-phase fractionation we identified 9515 proteins in a triplicate measurement of HeLa peptide mixture and 11,257 proteins in single measurements of cerebellum—the highest proteome coverage reported with a QTOF instrument so far. PMID:25991688

  4. Combined contactless conductometric, photometric, and fluorimetric single point detector for capillary separation methods.

    PubMed

    Ryvolová, Markéta; Preisler, Jan; Foret, Frantisek; Hauser, Peter C; Krásenský, Pavel; Paull, Brett; Macka, Mirek

    2010-01-01

    This work for the first time combines three on-capillary detection methods, namely capacitively coupled contactless conductometric (C⁴D), photometric (PD), and fluorimetric (FD) detection, in a single (identical) point of detection, allowing concurrent measurements at a single point for use in capillary electrophoresis, capillary electrochromatography, and capillary/nano-liquid chromatography. The novel design is based on a standard 6.3 mm i.d. fiber-optic SMA adapter with a drilled opening for the separation capillary to pass through, to which two concentrically positioned C⁴D detection electrodes with a detection gap of 7 mm were added on each side, acting simultaneously as capillary guides. The optical fibers in the SMA adapter were used for the photometric signal (absorbance), and another optical fiber at a 45° angle to the capillary was used to collect the emitted light for FD. Light-emitting diodes (255 and 470 nm) were used as light sources for the PD and FD detection modes. LOD values were determined under flow-injection conditions to exclude any stacking effects: with the 470 nm LED, the limits of detection (LODs) for FD and PD were 1 × 10⁻⁸ mol/L for fluorescein and 6 × 10⁻⁶ mol/L for tartrazine, respectively, and the LOD for C⁴D was 5 × 10⁻⁷ mol/L for magnesium chloride. The advantage of the three different detection signals in a single point is demonstrated in capillary electrophoresis using model mixtures and samples, including a mixture of fluorescent and nonfluorescent dyes and common ions, underivatized amino acids, and a fluorescently labeled digest of bovine serum albumin.

  5. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper, a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where Ymin and Ymax represent the minimal and maximal observed toxic responses. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
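
    The transform-both-sides idea can be sketched independently of the specific isobologram (whose formula is elided above): the same Box-Cox transform is applied to the observations and to the model predictions before least squares, stabilizing the residual variance without changing the meaning of the model parameters. The data, the one-parameter model resp = a*dose, and the choice lambda = 0 below are illustrative assumptions.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 reduces to the log transform."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

# Hypothetical dose-response data (not the citrinin/ochratoxin data).
dose = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
resp = np.array([2.1, 4.2, 8.5, 15.5, 33.0])

# Transform both sides: the same transform goes on data and prediction,
# so the slope a keeps its original-scale interpretation.
lam = 0.0
a_grid = np.linspace(0.5, 4.0, 3501)
rss = [np.sum((boxcox(resp, lam) - boxcox(a * dose, lam)) ** 2)
       for a in a_grid]
a_hat = float(a_grid[int(np.argmin(rss))])
print(f"estimated slope a ~ {a_hat:.3f}")
```

    In practice lambda itself is estimated by maximum likelihood, including the Jacobian of the transformation; the grid search over a above is just the simplest way to show the transformed least-squares fit.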

  6. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed Central

    Chen, D G; Pounds, J G

    1998-01-01

    The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper, a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where Ymin and Ymax represent the minimal and maximal observed toxic responses. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium. PMID:9860894

  7. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete

    PubMed Central

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-01-01

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, and autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and to carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990
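
    The run layout described above follows the usual central-composite pattern: a fractional factorial core, axial points, and replicated center points. The sketch below shows the generic construction in coded units for five factors. It is schematic, not the study's exact run set: the study used eight axial mixtures rather than the 2k = 10 the fully generic recipe produces (one factor, binder type, is categorical).

```python
import itertools
import numpy as np

k = 5  # coded factors

# 2^(k-1) fractional factorial core: run the full factorial on the first
# k-1 factors and set the last one from the generator E = ABCD.
core = [list(levels) + [int(np.prod(levels))]
        for levels in itertools.product([-1, 1], repeat=k - 1)]

# Axial (star) points at +/-2, one factor at a time, others at center.
axial = []
for i in range(k):
    for a in (-2, 2):
        pt = [0] * k
        pt[i] = a
        axial.append(pt)

center = [[0] * k for _ in range(4)]     # replicated center points
design = np.array(core + axial + center)
print(design.shape)                       # 16 + 10 + 4 runs, 5 columns
```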

  8. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
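
    The reason convolution is analytic for Gaussian mixtures is that convolving two Gaussians yields a Gaussian whose means and covariances add; a mixture convolved with a mixture is therefore just the set of pairwise component combinations with multiplied weights. A 1D sketch of this bookkeeping, with illustrative numbers (NGMIX itself works with 2D means and covariance matrices):

```python
import numpy as np

# 1D Gaussian mixtures as (weight, mean, variance) triples.
galaxy = [(0.6, 0.0, 1.0), (0.4, 0.5, 4.0)]
psf    = [(0.7, 0.0, 0.3), (0.3, 0.0, 1.2)]

# Convolution of two Gaussians is a Gaussian: means add, variances add.
# So mixture (*) mixture = all pairwise components, weights multiplied.
convolved = [(wg * wp, mg + mp, vg + vp)
             for wg, mg, vg in galaxy
             for wp, mp, vp in psf]

def mixture_pdf(x, components):
    return sum(w * np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
               for w, m, v in components)

total_weight = sum(w for w, _, _ in convolved)
print(len(convolved), round(total_weight, 6))
```

    No transform to Fourier space is needed at any point, which is the source of the fast model generation noted above.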

  9. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    PubMed

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Herein, a liquid with dissolved salt is presented in a salt-distillation process for separating close-boiling or azeotropic systems. The addition of salts to a liquid may reduce fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm marked increases in liquid flash point increment with addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard for solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.

  10. Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method

    NASA Astrophysics Data System (ADS)

    Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.

    2018-03-01

    The failure point of the results of fatigue tests of asphalt mixtures performed in controlled-stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number method and to determine the best Flow Number model for the asphalt mixtures tested. In order to achieve these goals, the best of three asphalt mixtures was first selected based on their Marshall properties. Next, the indirect tensile fatigue test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and a frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture is an EVA (ethylene vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (optimum bitumen content). Furthermore, the results show that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 cycles for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These results show that the greater the loading, the smaller the number of cycles to failure. The best FN results are given by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.

  11. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
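
    Although the study concerns mixture IRT models, the mechanics of information-criterion model selection can be illustrated with any mixture family. A minimal sketch using Gaussian mixtures in scikit-learn, where AIC and BIC are computed for candidate component counts and the lowest value wins:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic data drawn from a two-component mixture.
x = np.concatenate([rng.normal(-2, 1, 300),
                    rng.normal(3, 1, 300)]).reshape(-1, 1)

scores = {}
for k in (1, 2, 3, 4):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
    scores[k] = (gm.aic(x), gm.bic(x))   # lower is better for both

best_bic = min(scores, key=lambda k: scores[k][1])
print(best_bic)   # the two-component model should win under BIC
```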

  12. Mixture models for estimating the size of a closed population when capture rates vary among individuals

    USGS Publications Warehouse

    Dorazio, R.M.; Royle, J. Andrew

    2003-01-01

    We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
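
    The core problem, individual heterogeneity in detection, is easy to demonstrate by simulation: when capture probabilities follow a beta distribution but the estimator assumes a single shared rate, abundance is underestimated. A sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
N_true, T = 500, 6                  # population size, sampling occasions
p = rng.beta(1.0, 4.0, N_true)      # individual capture probabilities (mean 0.2)
caught = rng.binomial(T, p)         # captures per individual over T occasions
observed = int((caught > 0).sum())  # individuals detected at least once

# Naive estimator ignoring heterogeneity: treat the mean per-occasion
# capture rate of *detected* animals as if it applied to everyone.
p_bar = caught[caught > 0].sum() / (observed * T)
N_naive = observed / (1 - (1 - p_bar) ** T)
print(observed, round(N_naive))     # N_naive falls well short of 500
```

    High-probability individuals are over-represented among those detected, so the shared-rate assumption overstates detectability and understates population size, which is exactly why the latent distribution of detection rates matters.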

  13. Chemical mixtures in potable water in the U.S.

    USGS Publications Warehouse

    Ryker, Sarah J.

    2014-01-01

    In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and the lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.

  14. Identification of cortex in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    VanMeter, John W.; Sandon, Peter A.

    1992-06-01

    The overall goal of the work described here is to make available to the neurosurgeon in the operating room an on-line, three-dimensional, anatomically labeled model of the patient's brain, based on pre-operative magnetic resonance (MR) images. A stereotactic operating microscope is currently in experimental use, which allows structures that have been manually identified in MR images to be made available on-line. We have been working to enhance this system by combining image processing techniques applied to the MR data with an anatomically labeled 3-D brain model developed from the Talairach and Tournoux atlas. Here we describe the process of identifying cerebral cortex in the patient's MR images. MR images of brain tissue are reasonably well described by material mixture models, which identify each pixel as corresponding to one of a small number of materials, or as being a composite of two materials. Our classification algorithm consists of three steps. First, we apply hierarchical, adaptive grayscale adjustments to correct for nonlinearities in the MR sensor. The goal of this preprocessing step, based on the material mixture model, is to make the grayscale distribution of each tissue type constant across the entire image. Next, we perform an initial classification of all tissue types according to gray level. We have used a sum-of-Gaussians approximation of the histogram to perform this classification. Finally, we identify pixels corresponding to cortex by taking into account the spatial patterns characteristic of this tissue. For this purpose, we use a set of matched filters to identify image locations having the appropriate configuration of gray matter (cortex), cerebrospinal fluid and white matter, as determined by the previous classification step.
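
    The second step, classifying tissue by gray level with a sum-of-Gaussians histogram model, can be sketched with scikit-learn; the intensity means below are invented stand-ins for CSF, gray matter, and white matter, not values from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic MR-like intensities for three tissue classes (hypothetical means).
pixels = np.concatenate([rng.normal(30, 5, 1000),    # "CSF"
                         rng.normal(80, 8, 1000),    # "gray matter"
                         rng.normal(130, 6, 1000)])  # "white matter"
pixels = pixels.reshape(-1, 1)

# Fit the sum-of-Gaussians intensity model, then label each pixel with
# its most probable component.
gm = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(pixels)
labels = gm.predict(pixels)
means = np.sort(gm.means_.ravel())
print(np.round(means, 1))   # recovered component means, sorted
```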

  15. Testing and Improving Theories of Radiative Transfer for Determining the Mineralogy of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.

    2012-12-01

    Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, accurately modeling abundances to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition of mixtures (weight fraction, particle diameter) containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to +/-10% of actual values, but other mixtures showed higher inaccuracies (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found that there was a single global minimum for each mixture (rather than local minima). The minimum was sensitive to modeled particle diameter but comparatively insensitive to modeled endmember weight fraction. Derivation of the endmembers' k optical constant spectra using the Hapke model showed differences with the Shkuratov-derived optical constants originally used. Model runs with different sets of optical constants suggest that slight differences in the optical constants used significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with sieved particle sizes and varied greatly for individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials. Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing and further studies will investigate the effect of sample hydration, permitted variability in particle size, assumed photometric functions and use of different wavelength ranges on model results. Such studies will advance understanding of how to best apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu
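
    The mixing step at the heart of this kind of modeling can be sketched for the Hapke case: endmember single-scattering albedos combine linearly when weighted by relative cross-section M/(ρD), and reflectance then responds nonlinearly, which is why darker endmembers dominate mixtures. All numbers below are illustrative, not the study's samples.

```python
import numpy as np

# Hapke-style intimate mixing for a binary mixture: average the
# single-scattering albedos weighted by relative cross-sectional area,
# M_i / (rho_i * D_i). Endmember values here are hypothetical.
w = np.array([0.95, 0.60])        # endmember single-scattering albedos
M = np.array([0.5, 0.5])          # mass fractions
rho = np.array([3300.0, 2600.0])  # grain densities [kg/m^3]
D = np.array([75e-6, 75e-6])      # particle diameters [m]

f = M / (rho * D)
f /= f.sum()
w_mix = (f * w).sum()

# Diffusive-reflectance approximation r0 = (1 - gamma)/(1 + gamma),
# gamma = sqrt(1 - w): reflectance drops nonlinearly as albedo falls,
# so the darker endmember pulls the mixture down disproportionately.
gamma = np.sqrt(1 - w_mix)
r0 = (1 - gamma) / (1 + gamma)
print(round(w_mix, 3), round(r0, 3))
```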

  16. Applying mixture toxicity modelling to predict bacterial bioluminescence inhibition by non-specifically acting pharmaceuticals and specifically acting antibiotics.

    PubMed

    Neale, Peta A; Leusch, Frederic D L; Escher, Beate I

    2017-04-01

    Pharmaceuticals and antibiotics co-occur in the aquatic environment but mixture studies to date have mainly focused on pharmaceuticals alone or antibiotics alone, although differences in mode of action may lead to different effects in mixtures. In this study we used the Bacterial Luminescence Toxicity Screen (BLT-Screen) after acute (0.5 h) and chronic (16 h) exposure to evaluate how non-specifically acting pharmaceuticals and specifically acting antibiotics act together in mixtures. Three models were applied to predict mixture toxicity including concentration addition, independent action and the two-step prediction (TSP) model, which groups similarly acting chemicals together using concentration addition, followed by independent action to combine the two groups. All non-antibiotic pharmaceuticals had similar EC50 values at both 0.5 and 16 h, indicating together with a QSAR (Quantitative Structure-Activity Relationship) analysis that they act as baseline toxicants. In contrast, the antibiotics' EC50 values decreased by up to three orders of magnitude after 16 h, which can be explained by their specific effect on bacteria. Equipotent mixtures of non-antibiotic pharmaceuticals only, antibiotics only and both non-antibiotic pharmaceuticals and antibiotics were prepared based on the single chemical results. The mixture toxicity models were all in close agreement with the experimental results, with predicted EC50 values within a factor of two of the experimental results. This suggests that concentration addition can be applied to bacterial assays to model the mixture effects of environmental samples containing both specifically and non-specifically acting chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  18. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are two main types of cure fraction model: the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the two types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
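
    The two model types differ only in how the cure fraction π enters the survival function: mixture, S(t) = π + (1 − π)·S_u(t); non-mixture, S(t) = π^F(t). A small numerical sketch, using a Weibull F(t) with illustrative parameters, showing that both curves plateau at π:

```python
import numpy as np

pi = 0.4                      # cure fraction (illustrative)
t = np.linspace(0, 20, 201)
# Weibull distribution function for the uncured / promotion-time part
# (hypothetical shape and scale).
shape, scale = 1.5, 3.0
F = 1 - np.exp(-(t / scale) ** shape)

S_mixture = pi + (1 - pi) * (1 - F)   # mixture cure model
S_nonmixture = pi ** F                # non-mixture (promotion-time) model

# Both survival curves start at 1 and level off at the cure fraction.
print(round(S_mixture[-1], 3), round(S_nonmixture[-1], 3))
```

    For the same π and F(t), the non-mixture curve lies at or below the mixture curve, since π^F is convex in F while the mixture form is its linear chord.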

  19. Process dissociation and mixture signal detection theory.

    PubMed

    DeCarlo, Lawrence T

    2008-11-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.
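
    In the dual-process special case, the inclusion and exclusion equations I = R + (1 − R)F and E = (1 − R)F solve directly for recollection R and familiarity F. A minimal sketch with hypothetical observed proportions:

```python
# Hypothetical observed proportions from inclusion and exclusion tasks.
p_inclusion = 0.75   # "old" responses when recollection and familiarity both help
p_exclusion = 0.25   # erroneous "old" responses driven by familiarity alone

# Dual-process estimates from I = R + (1 - R) * F and E = (1 - R) * F.
R = p_inclusion - p_exclusion        # recollection
F = p_exclusion / (1 - R)            # familiarity
print(round(R, 3), round(F, 3))
```

    The mixture signal detection formulation generalizes this by modeling the underlying response distributions rather than only the two observed proportions.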

  20. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    PubMed Central

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of its ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible for the acid-base equilibria of wine. The analytical concentrations of the carboxylic acids and of other acid-base active substances were used as input, with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to place the study on a sound thermodynamic footing. Validation of the optimized chemical model is achieved by way of conductometric measurements and using a synthetic “wine” especially adapted for testing. PMID:22566762

  1. Acid-base chemistry of white wine: analytical characterisation and chemical modelling.

    PubMed

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of its ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible for the acid-base equilibria of wine. The analytical concentrations of the carboxylic acids and of other acid-base active substances were used as input, with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to place the study on a sound thermodynamic footing. Validation of the optimized chemical model is achieved by way of conductometric measurements and using a synthetic "wine" especially adapted for testing.

  2. Toxicity interactions between manganese (Mn) and lead (Pb) or cadmium (Cd) in a model organism the nematode C. elegans.

    PubMed

    Lu, Cailing; Svoboda, Kurt R; Lenz, Kade A; Pattison, Claire; Ma, Hongbo

    2018-06-01

    Manganese (Mn) is considered an emerging metal contaminant in the environment. However, its potential interactions with accompanying toxic metals and the associated mixture effects are largely unknown. Here, we investigated the toxicity interactions between Mn and two commonly co-occurring toxic metals, Pb and Cd, in the model organism Caenorhabditis elegans. The acute lethal toxicity of mixtures of Mn+Pb and Mn+Cd was first assessed using a toxic unit model. Multiple toxicity endpoints including reproduction, lifespan, stress response, and neurotoxicity were then examined to evaluate the mixture effects at sublethal concentrations. Stress response was assessed using a daf-16::GFP transgenic strain that expresses GFP under the control of the DAF-16 promoter. Neurotoxicity was assessed using a dat-1::GFP transgenic strain that expresses GFP in dopaminergic neurons. The mixture of Mn+Pb induced a more-than-additive (synergistic) lethal toxicity in the worm whereas the mixture of Mn+Cd induced a less-than-additive (antagonistic) toxicity. Mixture effects on sublethal toxicity showed more complex patterns and were dependent on the toxicity endpoints as well as the modes of toxic action of the metals. The mixture of Mn+Pb induced additive effects on both reproduction and lifespan, whereas the mixture of Mn+Cd induced additive effects on lifespan but not reproduction. Both mixtures seemed to induce additive effects on stress response and neurotoxicity, although a quantitative assessment was not possible due to the single concentrations used in mixture tests. Our findings demonstrate the complexity of metal interactions and the associated mixture effects. Assessment of metal mixture toxicity should take into consideration the unique properties of individual metals, their potential toxicity mechanisms, and the toxicity endpoints examined.

  3. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    NASA Astrophysics Data System (ADS)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.
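
    The combining-rule idea can be sketched as follows; the linear dielectric-decrement form and the coefficient values are illustrative assumptions, not the actual rule or data from the paper.

```python
# Concentration-dependent dielectric permittivity with a mole-fraction
# combining rule (hedged sketch; decrement coefficients are hypothetical).
eps_water = 78.4

def eps_pure(delta, I):
    # Linear dielectric decrement with ionic strength, a common
    # empirical form for pure electrolyte solutions.
    return eps_water - delta * I

def eps_mixture(deltas, fractions, I):
    # Combine pure-electrolyte permittivities evaluated at the total
    # ionic strength, weighted by each electrolyte's mole fraction.
    assert abs(sum(fractions) - 1.0) < 1e-12
    return sum(x * eps_pure(d, I) for d, x in zip(deltas, fractions))

# NaCl-CaCl2 mixture at total ionic strength I = 3 M (illustrative deltas).
eps = eps_mixture(deltas=(8.0, 12.0), fractions=(0.7, 0.3), I=3.0)
print(round(eps, 2))
```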

  4. Mixture IRT Model with a Higher-Order Structure for Latent Traits

    ERIC Educational Resources Information Center

    Huang, Hung-Yu

    2017-01-01

    Mixture item response theory (IRT) models have been suggested as an efficient method of detecting the different response patterns derived from latent classes when developing a test. In testing situations, multiple latent traits measured by a battery of tests can exhibit a higher-order structure, and mixtures of latent classes may occur on…

  5. Beta Regression Finite Mixture Models of Polarization and Priming

    ERIC Educational Resources Information Center

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  6. Predicting mixture toxicity of seven phenolic compounds with similar and dissimilar action mechanisms to Vibrio qinghaiensis sp.nov.Q67.

    PubMed

    Huang, Wei Ying; Liu, Fei; Liu, Shu Shen; Ge, Hui Lin; Chen, Hong Han

    2011-09-01

    The predictions of mixture toxicity for chemicals are commonly based on two models: concentration addition (CA) and independent action (IA). Whether CA and IA can predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms was studied. The mixture toxicity was predicted on the basis of the concentration-response data of the individual compounds. Test mixtures at different concentration ratios and concentration levels were designed using two methods. The results showed that the Weibull function fit the concentration-response data of all the components and their mixtures well, with all correlation coefficients (R) greater than 0.99 and root mean squared errors (RMSEs) less than 0.04. The predicted values from the CA and IA models conformed to the observed values of the mixtures. Therefore, it can be concluded that both CA and IA can predict reliable results for the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
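
    Both reference models are simple to compute from single-compound concentration-response curves. In the sketch below a Hill-type curve stands in for the paper's Weibull fits, and all EC50s and mixture ratios are invented: CA converts components to toxic units, while IA multiplies the probabilities of non-response.

```python
import numpy as np

def hill_effect(c, ec50, beta):
    # Concentration-response curve with effect in [0, 1]; a Hill-type
    # stand-in for the Weibull functions fitted in the study.
    return c**beta / (ec50**beta + c**beta)

ec50 = np.array([2.0, 5.0, 10.0])   # single-compound EC50s (arbitrary units)
beta = np.array([1.5, 1.5, 1.5])
p = np.array([0.5, 0.3, 0.2])       # mixture concentration ratios

# Concentration addition: mixture EC50 from the components' toxic units,
# EC50_mix = 1 / sum(p_i / EC50_i).
ec50_ca = 1.0 / np.sum(p / ec50)

# Independent action: effect at total concentration C is
# E(C) = 1 - prod(1 - E_i(p_i * C)).
def ia_effect(C):
    return 1 - np.prod(1 - hill_effect(p * C, ec50, beta))

print(round(ec50_ca, 3), round(ia_effect(ec50_ca), 3))
```

    The two predictions coincide only when all components share the same concentration-response shape; comparing both against observed mixture data is what the study's design enables.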

  7. [Elaboration and evaluation of infant food based on Andean crops].

    PubMed

    Repo-Carrasco, R; Hoyos, N L

    1993-06-01

    The Andes mountain range of South America is one of the most important centres of crop domestication; potato, corn, and lesser-known grains such as quinua, cañihua, kiwicha and tarwi are indigenous to these highlands. These Andean grains have adapted perfectly to the climatic and geographical conditions present, whereas other grains have not been able to survive. In addition to their hardiness, they also have a high nutritional value. Bearing in mind, on the one hand, the high nutritional value of these indigenous products and, on the other, the high rate of child malnutrition prevalent in the population, it was considered important to look for new variations in their processing which would facilitate their consumption by the poor working classes, especially children. Accordingly, three different flour mixtures were developed based on these Andean grains, and the mixtures were then subjected to bromatological and biological analysis. The three new flour mixtures were: Quinua-Cañihua-Broad Bean (Q-C-B), Quinua-Kiwicha-Bean (Q-K-B) and Kiwicha-Rice (K-R). The protein content of these mixtures varied between 11.35 and 15.46 g/100 g, the K-R mixture having the lowest protein level and the Q-C-B the highest. The Q-K-B mixture had the highest chemical score, PER and NPU values. Its PER value of 2.59 was higher than that of casein, which was 2.50. In addition, this mixture had a chemical score of 0.94 and an NPU value of 59.38. The Q-C-B mixture had a chemical score of 0.88, and its PER, NPU and digestibility values were 2.36, 47.24 and 79.2, respectively. (ABSTRACT TRUNCATED AT 250 WORDS)

  8. Mixture optimization for mixed gas Joule-Thomson cycle

    NASA Astrophysics Data System (ADS)

    Detlor, J.; Pfotenhauer, J.; Nellis, G.

    2017-12-01

    An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters. This study expands on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum value of the isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed-gas JT system with load temperatures down to 110 K and supply temperatures above room temperature for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the fraction of the heat exchanger that operates in the two-phase range in order to begin the process of selecting a mixture for experimental investigation.
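
    The maximin selection rule can be sketched with a brute-force composition search: pick the composition whose worst-case ΔhT over the temperature span is largest. The surrogate dh_T below is a toy stand-in; a real tool would evaluate mixture enthalpies with an equation-of-state property package, and the centers and widths here are hypothetical.

```python
import numpy as np

T = np.linspace(110, 300, 40)       # load-to-supply temperature span [K]

def dh_T(T, x):
    # Toy surrogate: each component contributes a bell-shaped isothermal
    # enthalpy change centered at a different temperature (hypothetical).
    centers = np.array([140.0, 200.0, 260.0])
    widths = np.array([40.0, 50.0, 45.0])
    return sum(xi * np.exp(-((T - c) / w) ** 2)
               for xi, c, w in zip(x, centers, widths))

# Grid search over three-component mole fractions; the score of a
# composition is its minimum Delta h_T over the whole span (maximin).
best, best_val = None, -np.inf
step = 0.05
for a in np.arange(0, 1 + 1e-9, step):
    for b in np.arange(0, 1 - a + 1e-9, step):
        x = (a, b, 1 - a - b)
        val = dh_T(T, x).min()
        if val > best_val:
            best, best_val = x, val
print(best, round(best_val, 4))
```

    Because each pure component only performs well near its own temperature band, the maximin criterion naturally favors blends that cover the whole load-to-supply span.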

  9. Progress of IRSN R&D on ITER Safety Assessment

    NASA Astrophysics Data System (ADS)

    Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.

    2012-08-01

    The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support of the French "Autorité de Sûreté Nucléaire", is analysing the safety of the ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D programme in 2007 to support this safety assessment process. Priority has been given to four technical issues, and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for the risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of the DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixture explosions; for evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influential factors in detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks were used in 2011 for the analysis of the ITER safety file. In the near future, this global R&D programme may be reoriented to account for the feedback from the latter analysis or for new knowledge.

  10. Existence, uniqueness and positivity of solutions for BGK models for mixtures

    NASA Astrophysics Data System (ADS)

    Klingenberg, C.; Pirner, M.

    2018-01-01

    We consider kinetic models for a multi-component gas mixture without chemical reactions. In the literature, one can find two types of BGK models for describing gas mixtures. One type has a sum of BGK-type interaction terms in the relaxation operator, for example the model described by Klingenberg, Pirner and Puppo [20], which contains well-known models from physics and engineering, for example those of Hamel [16] and Gross and Krook [15], as special cases. The other type contains only one collision term on the right-hand side, for example the well-known model of Andries, Aoki and Perthame [1]. For each of these two models [20] and [1], we prove existence, uniqueness and positivity of solutions in the first part of the paper. In the second part, we use the first model [20] to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described by Dellacherie [11].

  11. Remote sensing with intense filaments enhanced by adaptive optics

    NASA Astrophysics Data System (ADS)

    Daigle, J.-F.; Kamali, Y.; Châteauneuf, M.; Tremblay, G.; Théberge, F.; Dubois, J.; Roy, G.; Chin, S. L.

    2009-11-01

    A method involving a closed-loop adaptive optics system is investigated as a tool to significantly enhance the collected optical emissions for remote sensing applications involving ultrafast laser filamentation. The technique combines beam expansion and geometrical focusing, assisted by an adaptive optics system to correct wavefront aberrations. Targets, such as a gaseous mixture of air and hydrocarbons, solid lead, and airborne clouds of contaminated aqueous aerosols, were remotely probed with filaments generated at distances up to 118 m after the focusing beam expander. The integrated backscattered signals collected by the detection system (15-28 m from the filaments) were increased by up to a factor of 7, for atmospheric N2 and solid lead, when the wavefronts were corrected by the adaptive optics system. Moreover, an extrapolation based on a simplified version of the LIDAR equation showed that the adaptive optics system improved the detection distance for N2 molecular fluorescence from 45 m for uncorrected wavefronts to 125 m for corrected wavefronts.

  12. Finite mixture modeling for vehicle crash data with application to hotspot identification.

    PubMed

    Park, Byung-Jung; Lord, Dominique; Lee, Chungwon

    2014-10-01

    The application of finite mixture regression models has recently gained interest among highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference, measured by the percentage deviation in ranking orders, was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than those from the NB model because of the better model specification. This finding was also supported by the simulation study, which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated that there is a trade-off between the false discovery rate (which increases) and the false negative rate (which decreases). Since the costs associated with false positives and false negatives are different, it is suggested that the optimal threshold value be chosen by considering the trade-offs between these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
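
    The unobserved-components idea can be sketched with a two-component count mixture fitted by EM. Poisson components stand in here for the negative binomial mixtures used in the paper, and sites are scored as hotspots by their posterior probability of belonging to the high-mean component; all counts are synthetic.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
# Synthetic segment crash counts: many low-mean sites, a few high-mean.
counts = np.concatenate([rng.poisson(1.0, 180), rng.poisson(8.0, 20)])

w, lam = np.array([0.5, 0.5]), np.array([0.5, 5.0])   # initial guesses
for _ in range(200):
    resp = w * poisson.pmf(counts[:, None], lam)      # E-step
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)                             # M-step
    lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)

resp = w * poisson.pmf(counts[:, None], lam)          # final responsibilities
resp /= resp.sum(axis=1, keepdims=True)
hotspot_score = resp[:, np.argmax(lam)]               # P(high-mean component)
ranking = np.argsort(-hotspot_score)                  # candidate hotspot order
print(np.round(np.sort(lam), 2), int((hotspot_score > 0.5).sum()))
```

    Ranking by posterior component membership, rather than by raw counts, is what lets a mixture model separate genuinely hazardous sites from sites that merely had an unlucky year.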

  13. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed and implemented in software a mathematical model of the nonstationary separation processes proceeding in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. With the use of this model, the parameters of the separation process for germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters during the separation of multicomponent isotope mixtures.

  14. A Fully Coupled Simulation and Optimization Scheme for the Design of 3D Powder Injection Molding Processes

    NASA Astrophysics Data System (ADS)

    Ayad, G.; Song, J.; Barriere, T.; Liu, B.; Gelin, J. C.

    2007-05-01

    The paper is concerned with optimization and parametric identification of the Powder Injection Molding process, which consists first of the injection of a powder mixture with a polymer binder and then of the sintering of the resulting powder parts by solid-state diffusion. In the first part, one describes an original methodology to optimize the injection stage based on the combination of Design of Experiments and adaptive Response Surface Modeling. The second part of the paper describes the identification strategy that one proposes for the sintering stage, using the identification of sintering parameters from dilatometer curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization of the manufacturing of a ceramic femoral implant. One demonstrates that the proposed approach gives satisfactory results.

  15. A manifold learning approach to target detection in high-resolution hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.

    Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content of a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying "targets" such as vehicles, buildings, or boats. Sensors that produce hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well-documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph theory based approach to anomaly detection. This led to a focus on target detection, and to the development of a specific graph-based model of the data and subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). 
We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.
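    As a rough illustration of the graph-building step that manifold learning methods such as LLE start from, here is a minimal symmetric k-nearest-neighbour adjacency matrix in NumPy. The fixed k and the function name are assumptions for illustration; the work described above builds an adaptive graph rather than a fixed-k one.

    ```python
    import numpy as np

    def knn_adjacency(X, k=5):
        """Symmetric k-nearest-neighbour adjacency matrix from pairwise
        Euclidean distances (a common first step in manifold learning)."""
        X = np.asarray(X, dtype=float)
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)          # exclude self-edges
        idx = np.argsort(d2, axis=1)[:, :k]   # k nearest neighbours per point
        n = len(X)
        A = np.zeros((n, n))
        rows = np.repeat(np.arange(n), k)
        A[rows, idx.ravel()] = 1.0
        return np.maximum(A, A.T)             # symmetrise the graph
    ```

    LLE then solves for reconstruction weights over each point's neighbourhood and embeds the data in the low-dimensional space those weights define.
    
    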

  16. Closed-form solutions in stress-driven two-phase integral elasticity for bending of functionally graded nano-beams

    NASA Astrophysics Data System (ADS)

    Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de

    2018-03-01

    Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture, defined by a convex combination of local and nonlocal phases, is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in Structural Mechanics since it is ill-posed, and that local-nonlocal mixtures based on the Eringen integral model only partially resolve the ill-posedness of the model. In fact, a singular behaviour of continuous nano-structures appears if the local fraction tends to vanish, so the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions for inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. The effectiveness of the new nonlocal approach is tested by comparing the contributed results with those corresponding to the mixture Eringen theory.

  17. A modified procedure for mixture-model clustering of regional geochemical data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David B.; Horton, John D.

    2014-01-01

    A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
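    The isometric log-ratio (ilr) transform applied before clustering has a standard closed form (pivot coordinates). A minimal NumPy sketch, independent of the robust principal component step that follows it in the procedure:

    ```python
    import numpy as np

    def ilr(x):
        """Isometric log-ratio transform of compositions (rows sum to 1),
        using the standard pivot-coordinate basis:
        z_i = sqrt(i/(i+1)) * ln(g(x_1..x_i) / x_{i+1}), i = 1..D-1."""
        x = np.asarray(x, dtype=float)
        logx = np.log(x)
        D = x.shape[-1]
        out = np.empty(x.shape[:-1] + (D - 1,))
        for i in range(1, D):
            # log geometric mean of the first i parts
            gmean_log = logx[..., :i].mean(axis=-1)
            out[..., i - 1] = np.sqrt(i / (i + 1.0)) * (gmean_log - logx[..., i])
        return out
    ```

    A D-part composition maps to D-1 unconstrained coordinates, which is what makes the subsequent principal component transformation and Gaussian mixture clustering statistically well behaved.
    
    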

  18. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form via handling the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to assess the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.

  19. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.

  20. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China.

    PubMed

    Ji, Cuicui; Jia, Yonghong; Gao, Zhihai; Wei, Huaidong; Li, Xiaosong

    2017-01-01

    Desert vegetation plays a significant role in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects on photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers, applied to field spectral measurements of two types of typical desert vegetation, namely Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates; in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. Given the trade-off between computational complexity and accuracy, the Linear Spectral Mixture Model (LSMM) could be adopted for the Nitraria shrub plots, but it produces significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects are more pronounced for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Thus, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between the photosynthetic/non-photosynthetic vegetation and the bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models therefore rests on the trade-off between computational complexity and accuracy requirements.
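    For reference, the linear spectral mixture model amounts to solving a constrained least-squares problem for endmember abundances. The sketch below uses unconstrained least squares with clipping and renormalisation as a crude stand-in for fully constrained unmixing; it is not the KNSMM, whose kernel formulation the abstract does not specify.

    ```python
    import numpy as np

    def unmix_linear(spectrum, endmembers):
        """Abundance estimate under the linear spectral mixture model.
        Negative abundances are clipped and the result renormalised to
        sum to one -- a crude stand-in for fully constrained least squares."""
        E = np.asarray(endmembers, dtype=float)   # (bands, n_endmembers)
        s = np.asarray(spectrum, dtype=float)     # (bands,)
        a, *_ = np.linalg.lstsq(E, s, rcond=None)
        a = np.clip(a, 0.0, None)
        total = a.sum()
        return a / total if total > 0 else a
    ```

    Nonlinear models such as the KNSMM add interaction terms (e.g. between vegetation and soil spectra) that this purely linear formulation cannot capture.
    
    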

  1. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China

    PubMed Central

    Jia, Yonghong; Gao, Zhihai; Wei, Huaidong

    2017-01-01

    Desert vegetation plays a significant role in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects on photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers, applied to field spectral measurements of two types of typical desert vegetation, namely Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates; in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. Given the trade-off between computational complexity and accuracy, the Linear Spectral Mixture Model (LSMM) could be adopted for the Nitraria shrub plots, but it produces significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects are more pronounced for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Thus, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between the photosynthetic/non-photosynthetic vegetation and the bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models therefore rests on the trade-off between computational complexity and accuracy requirements. PMID:29240777

  2. Neurotoxicological and statistical analyses of a mixture of five organophosphorus pesticides using a ray design.

    PubMed

    Moser, V C; Casey, M; Hamm, A; Carter, W H; Simmons, J E; Gennings, C

    2005-07-01

    Environmental exposures generally involve chemical mixtures instead of single chemicals. Statistical models such as the fixed-ratio ray design, wherein the mixing ratio (proportions) of the chemicals is fixed across increasing mixture doses, allow for the detection and characterization of interactions among the chemicals. In this study, we tested for interaction(s) in a mixture of five organophosphorus (OP) pesticides (chlorpyrifos, diazinon, dimethoate, acephate, and malathion). The ratio of the five pesticides (full ray) reflected the relative dietary exposure estimates of the general population as projected by the US EPA Dietary Exposure Evaluation Model (DEEM). A second mixture was tested using the same dose levels of all pesticides, but excluding malathion (reduced ray). The experimental approach first required characterization of dose-response curves for the individual OPs to build a dose-additivity model. A series of behavioral measures were evaluated in adult male Long-Evans rats at the time of peak effect following a single oral dose, and then tissues were collected for measurement of cholinesterase (ChE) activity. Neurochemical (blood and brain ChE activity) and behavioral (motor activity, gait score, tail-pinch response score) endpoints were evaluated statistically for evidence of additivity. The additivity model constructed from the single-chemical data was used to predict the effects of the pesticide mixture along the full ray (10-450 mg/kg) and the reduced ray (1.75-78.8 mg/kg). The experimental mixture data were also modeled and statistically compared to the additivity models. Analysis of the 5-OP mixture (the full ray) revealed significant deviation from additivity for all endpoints except tail-pinch response. Greater-than-additive responses (synergism) were observed at the lower doses of the 5-OP mixture, which contained non-effective dose levels of each of the components. 
    The predicted effective doses (ED20, ED50) were about half those predicted by additivity, and for brain ChE and motor activity there was a threshold shift in the dose-response curves. For brain ChE and motor activity, there was no difference between the full (5-OP mixture) and reduced (4-OP mixture) rays, indicating that malathion did not influence the non-additivity. While the reduced ray for blood ChE showed greater deviation from additivity without malathion in the mixture, the non-additivity observed for the gait score was reversed when malathion was removed. Thus, greater-than-additive interactions were detected for both the full and reduced ray mixtures, and the role of malathion in the interactions varied depending on the endpoint. In all cases, the deviations from additivity occurred at the lower end of the dose-response curves.
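    The dose-addition benchmark that such ray designs are compared against can, for parallel dose-response curves, be written as 1/ED_mix = Σ f_i/ED_i, where f_i is the fraction of component i in the fixed mixing ratio. A minimal sketch of that textbook simplification (not the authors' fitted additivity model):

    ```python
    def dose_additive_ed(fractions, single_eds):
        """Predicted mixture ED under dose addition for a fixed-ratio ray:
        1/ED_mix = sum_i f_i / ED_i, with f_i the mass fraction of
        component i. Assumes parallel dose-response curves -- a textbook
        simplification of the threshold-additivity models used in practice."""
        assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
        return 1.0 / sum(f / ed for f, ed in zip(fractions, single_eds))
    ```

    An observed mixture ED substantially below this prediction indicates a greater-than-additive (synergistic) interaction, which is the pattern reported above at the lower doses.
    
    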

  3. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks a time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a globally non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used to give the mixture model a more flexible structure. Furthermore, a new similarity measure is introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial and benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
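    The local models in a SOMAR-style network are ordinary autoregressive fits on segments; a minimal least-squares AR(p) fit in NumPy (the segmentation, clustering and neural-gas machinery is omitted):

    ```python
    import numpy as np

    def fit_ar(segment, order=2):
        """Ordinary least-squares fit of AR(p) coefficients for one segment:
        x[t] ~ sum_k coef[k] * x[t-k-1]. In a SOMAR-style mixture, each
        cluster of segments carries its own local model of this form."""
        x = np.asarray(segment, dtype=float)
        p = order
        y = x[p:]
        # Column k holds the series lagged by k+1 steps
        X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef
    ```

    Forecasting a non-stationary series then amounts to picking, for each new segment, the cluster whose local AR model matches it best under the similarity measure.
    
    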

  4. A new portable generator to dynamically produce SI-traceable reference gas mixtures for VOCs and water vapour at atmospheric concentration

    NASA Astrophysics Data System (ADS)

    Guillevic, Myriam; Pascale, Céline; Ackermann, Andreas; Leuenberger, Daiana; Niederhauser, Bernhard

    2016-04-01

    In the framework of the KEY-VOCs and AtmoChem-ECV projects, we are currently developing new facilities to dynamically generate reference gas mixtures for a variety of reactive compounds, at concentrations measured in the atmosphere and in an SI-traceable way (i.e. the amount-of-substance fraction in mole per mole is traceable to SI units). Here we present the realisation of such standards for water vapour in the range 1-10 μmol/mol and for volatile organic compounds (VOCs) such as limonene, alpha-pinene, MVK and MEK in the nmol/mol range. The matrix gas can be nitrogen or synthetic air. Further development in gas purification techniques could make it possible to use purified atmospheric air as the carrier gas. The method is based on permeation and dynamic dilution: a permeator containing a pure substance (either water, limonene, MVK, MEK or α-pinene) is kept in a permeation chamber under a constant gas flow. The mass loss is precisely calibrated using a magnetic suspension balance. The carrier gas is purified beforehand of the compounds of interest to the required level, using commercially available purification cartridges. This primary mixture is then diluted to reach the required amount-of-substance fraction. All flows are controlled by mass flow controllers, which makes the production process flexible and easily adaptable to generate the required concentration. All parts in contact with the gas mixture are passivated using coated surfaces to reduce adsorption/desorption processes as much as possible. Two setups are currently being developed: a fixed installation already built in our laboratory in Bern, and a portable generator, still under construction, that could be used anywhere in the field. The permeation chamber of the portable generator has multiple individual cells, allowing the generation of mixtures of up to 5 different components if needed. 
    Moreover, the presented technique can be adapted and applied to a large variety of molecules (e.g., NO2, BTEX, CFCs, HCFCs, HFCs and other refrigerants) and is particularly suitable for gas species and/or concentration ranges that are not stable in cylinders.
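    The amount fraction delivered by a permeation source follows directly from the calibrated mass-loss rate and the carrier flow. A hedged sketch: the unit choices and the single-stage treatment are simplifications of the multi-stage dilution described above, and the function name is illustrative.

    ```python
    def amount_fraction_nmol_per_mol(perm_rate_ng_min, molar_mass_g_mol,
                                     carrier_flow_mol_min):
        """Amount-of-substance fraction from a permeation source:
        x = (q_m / M) / n_carrier, with units chosen for the nmol/mol
        range. Illustrative only; real setups also account for
        temperature, pressure and the subsequent dilution stages."""
        # ng/min divided by g/mol gives nmol/min of analyte
        n_analyte_nmol_min = perm_rate_ng_min / molar_mass_g_mol
        # dividing by the carrier flow in mol/min yields nmol/mol
        return n_analyte_nmol_min / carrier_flow_mol_min
    ```

    Because the mass-loss rate is measured gravimetrically against a calibrated balance, the resulting fraction inherits its traceability to SI units.
    
    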

  5. Numerical study of underwater dispersion of dilute and dense sediment-water mixtures

    NASA Astrophysics Data System (ADS)

    Chan, Ziying; Dao, Ho-Minh; Tan, Danielle S.

    2018-05-01

    As part of the nodule-harvesting process, sediment tailings are released underwater. Due to the long period of clouding in the water during the settling process, this presents a significant environmental and ecological concern. One possible solution is to release a mixture of sediment tailings and seawater, with the aim of reducing the settling duration as well as the amount of spreading. In this paper, we present some results of numerical simulations using the smoothed particle hydrodynamics (SPH) method to model the release of a fixed volume of pre-mixed sediment-water mixture into a larger body of quiescent water. Both the sediment-water mixture and the “clean” water are modeled as two different fluids, with concentration-dependent bulk properties of the sediment-water mixture adjusted according to the initial solids concentration. This numerical model was validated in a previous study, which indicated significant differences in the dispersion and settling process between dilute and dense mixtures, and that a dense mixture may be preferable. For this study, we investigate a wider range of volumetric concentration with the aim of determining the optimum volumetric concentration, as well as its overall effectiveness compared to the original process (100% sediment).

  6. Space-time variation of respiratory cancers in South Carolina: a flexible multivariate mixture modeling approach to risk estimation.

    PubMed

    Carroll, Rachel; Lawson, Andrew B; Kirby, Russell S; Faes, Christel; Aregay, Mehreteab; Watjou, Kevin

    2017-01-01

    Many types of cancer have an underlying spatiotemporal distribution. Spatiotemporal mixture modeling can offer a flexible approach to risk estimation via the inclusion of latent variables. In this article, we examine the application and benefits of using four different spatiotemporal mixture modeling methods in the modeling of cancer of the lung and bronchus as well as "other" respiratory cancer incidences in the state of South Carolina. Of the methods tested, no single method outperforms the other methods; which method is best depends on the cancer under consideration. The lung and bronchus cancer incidence outcome is best described by the univariate modeling formulation, whereas the "other" respiratory cancer incidence outcome is best described by the multivariate modeling formulation. Spatiotemporal multivariate mixture methods can aid in the modeling of cancers with small and sparse incidences when including information from a related, more common type of cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Impact of chemical proportions on the acute neurotoxicity of a mixture of seven carbamates in preweanling and adult rats.

    PubMed

    Moser, Virginia C; Padilla, Stephanie; Simmons, Jane Ellen; Haber, Lynne T; Hertzberg, Richard C

    2012-09-01

    Statistical design and environmental relevance are important aspects of studies of chemical mixtures, such as pesticides. We used a dose-additivity model to test experimentally the default assumptions of dose additivity for two mixtures of seven N-methylcarbamates (carbaryl, carbofuran, formetanate, methomyl, methiocarb, oxamyl, and propoxur). The best-fitting models were selected for the single-chemical dose-response data and used to develop a combined prediction model, which was then compared with the experimental mixture data. We evaluated behavioral (motor activity) and cholinesterase (ChE)-inhibitory (brain, red blood cells) outcomes at the time of peak acute effects following oral gavage in adult and preweanling (17 days old) Long-Evans male rats. The mixtures varied only in their mixing ratios. In the relative potency mixture, proportions of each carbamate were set at equitoxic component doses. A California environmental mixture was based on the 2005 sales of each carbamate in California. In adult rats, the relative potency mixture showed dose additivity for red blood cell ChE and motor activity, and brain ChE inhibition showed a modest greater-than additive (synergistic) response, but only at a middle dose. In rat pups, the relative potency mixture was either dose-additive (brain ChE inhibition, motor activity) or slightly less-than additive (red blood cell ChE inhibition). On the other hand, at both ages, the environmental mixture showed greater-than additive responses on all three endpoints, with significant deviations from predicted at most to all doses tested. Thus, we observed different interactive properties for different mixing ratios of these chemicals. These approaches for studying pesticide mixtures can improve evaluations of potential toxicity under varying experimental conditions that may mimic human exposures.

  8. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    ERIC Educational Resources Information Center

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
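    The mean structure of a linear-linear piecewise growth model can be written with min/max terms so that the trajectory is continuous at the knot. A minimal sketch of that mean function only; the latent-class and random-effects machinery of an LGMM, and the estimation of an unknown knot, are omitted, and parameter names are illustrative.

    ```python
    def piecewise_linear(t, b0, slope1, slope2, knot):
        """Mean trajectory of a linear-linear piecewise growth model:
        intercept b0, slope1 before the knot, slope1 + adjustment after.
        Here slope2 is the post-knot slope increment, so the curve is
        continuous at t = knot."""
        return b0 + slope1 * min(t, knot) + slope2 * max(t - knot, 0.0)
    ```

    In the mixture setting, each latent class carries its own (b0, slope1, slope2, knot), which is what lets distinct developmental phases differ across unobserved subgroups.
    
    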

  9. Natural Selection Causes Adaptive Genetic Resistance in Wild Emmer Wheat against Powdery Mildew at “Evolution Canyon” Microsite, Mt. Carmel, Israel

    PubMed Central

    Yin, Huayan; Ben-Abu, Yuval; Wang, Hongwei; Li, Anfei; Nevo, Eviatar; Kong, Lingrang

    2015-01-01

    Background: “Evolution Canyon” (ECI) at Lower Nahal Oren, Mount Carmel, Israel, is an optimal natural microscale model for unraveling evolution in action, highlighting the basic evolutionary processes of adaptation and speciation. A major model organism in ECI is wild emmer, Triticum dicoccoides, the progenitor of cultivated wheat, which displays dramatic interslope adaptive and speciational divergence on the tropical-xeric “African” slope (AS) and the temperate-mesic “European” slope (ES), separated on average by 250 m. Methods: We examined 278 simple sequence repeats (SSRs) and the phenotypic diversity of resistance to powdery mildew between the opposite slopes. Furthermore, 18 phenotypes on the AS and 20 phenotypes on the ES were inoculated with both Bgt E09 and a mixture of powdery mildew races. Results: In the genetic diversity experiment, very little polymorphism was identified intra-slope in the accessions from either the AS or the ES. By contrast, 148 pairs of SSR primers (53.23%) amplified polymorphic products between the phenotypes of the AS and ES. There are some differences between the two wild emmer wheat genomes and in the inter-slope SSR polymorphic products between genomes A and B. Interestingly, all wild emmer types growing on the south-facing slope (SFS=AS) were susceptible to a composite of Blumeria graminis, while the ones growing on the north-facing slope (NFS=ES) were highly resistant to Blumeria graminis at both seedling and adult stages. Conclusion/Significance: Remarkable inter-slope evolutionary divergent processes occur in wild emmer wheat, T. dicoccoides, at ECI, despite the short average distance of 250 meters. The AS, a dry and hot slope, did not develop resistance to powdery mildew, whereas the ES, a cool and humid slope, did develop resistance, since the disease stress was strong there. This is a remarkable demonstration of host-pathogen interaction, showing how resistance develops when stress causes an adaptive result at a micro-scale distance. 
PMID:25856164

  10. Natural selection causes adaptive genetic resistance in wild emmer wheat against powdery mildew at "Evolution Canyon" microsite, Mt. Carmel, Israel.

    PubMed

    Yin, Huayan; Ben-Abu, Yuval; Wang, Hongwei; Li, Anfei; Nevo, Eviatar; Kong, Lingrang

    2015-01-01

    "Evolution Canyon" (ECI) at Lower Nahal Oren, Mount Carmel, Israel, is an optimal natural microscale model for unraveling evolution in action, highlighting the basic evolutionary processes of adaptation and speciation. A major model organism in ECI is wild emmer, Triticum dicoccoides, the progenitor of cultivated wheat, which displays dramatic interslope adaptive and speciational divergence on the tropical-xeric "African" slope (AS) and the temperate-mesic "European" slope (ES), separated on average by 250 m. We examined 278 simple sequence repeats (SSRs) and the phenotypic diversity of resistance to powdery mildew between the opposite slopes. Furthermore, 18 phenotypes on the AS and 20 phenotypes on the ES were inoculated with both Bgt E09 and a mixture of powdery mildew races. In the genetic diversity experiment, very little polymorphism was identified intra-slope in the accessions from either the AS or the ES. By contrast, 148 pairs of SSR primers (53.23%) amplified polymorphic products between the phenotypes of the AS and ES. There are some differences between the two wild emmer wheat genomes and in the inter-slope SSR polymorphic products between genomes A and B. Interestingly, all wild emmer types growing on the south-facing slope (SFS=AS) were susceptible to a composite of Blumeria graminis, while the ones growing on the north-facing slope (NFS=ES) were highly resistant to Blumeria graminis at both seedling and adult stages. Remarkable inter-slope evolutionary divergent processes occur in wild emmer wheat, T. dicoccoides, at ECI, despite the short average distance of 250 meters. The AS, a dry and hot slope, did not develop resistance to powdery mildew, whereas the ES, a cool and humid slope, did develop resistance, since the disease stress was strong there. This is a remarkable demonstration of host-pathogen interaction, showing how resistance develops when stress causes an adaptive result at a micro-scale distance.

  11. Dielectric relaxation and hydrogen bonding interaction in xylitol-water mixtures using time domain reflectometry

    NASA Astrophysics Data System (ADS)

    Rander, D. N.; Joshi, Y. S.; Kanse, K. S.; Kumbharkhane, A. C.

    2016-01-01

    The measurements of complex dielectric permittivity of xylitol-water mixtures have been carried out in the frequency range of 10 MHz-30 GHz using a time domain reflectometry technique. Measurements have been made at six temperatures from 0 to 25 °C and at different weight fractions of xylitol (0 < W_X ≤ 0.7) in water. There are different models for describing the dielectric relaxation behaviour of binary mixtures, such as the Debye, Cole-Cole and Cole-Davidson models. We observed that the dielectric relaxation behaviour of binary xylitol-water mixtures is well described by the Cole-Davidson model, which has an asymmetric distribution of relaxation times. The dielectric parameters, such as the static dielectric constant and relaxation time, have been evaluated for the mixtures. The molecular interaction between xylitol and water molecules is discussed using the Kirkwood correlation factor (g_eff) and a thermodynamic parameter.
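    The Cole-Davidson form referred to above is ε*(ω) = ε∞ + (εs − ε∞)/(1 + iωτ)^β with 0 < β ≤ 1; β = 1 recovers the symmetric Debye model. A minimal sketch (parameter values in the usage are illustrative, not those fitted in the study):

    ```python
    import math

    def cole_davidson(freq_hz, eps_static, eps_inf, tau_s, beta):
        """Cole-Davidson complex permittivity:
        eps*(w) = eps_inf + (eps_s - eps_inf) / (1 + 1j*w*tau)**beta,
        where 0 < beta <= 1 gives an asymmetric distribution of
        relaxation times (beta = 1 reduces to the Debye model)."""
        w = 2.0 * math.pi * freq_hz
        return eps_inf + (eps_static - eps_inf) / (1.0 + 1j * w * tau_s) ** beta
    ```

    Fitting eps_static, eps_inf, tau_s and beta to the measured complex spectrum yields the static dielectric constant and relaxation time discussed in the abstract.
    
    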

  12. Lifetimes and stabilities of familiar explosives molecular adduct complexes during ion mobility measurements

    PubMed Central

    McKenzie, Alan; DeBord, John Daniel; Ridgeway, Mark; Park, Melvin; Eiceman, Gary; Fernandez-Lima, Francisco

    2015-01-01

    Trapped ion mobility spectrometry coupled to mass spectrometry (TIMS-MS) was utilized for the separation and identification of familiar explosives in complex mixtures. For the first time, molecular adduct complex lifetimes, relative stabilities, binding energies and candidate structures are reported for familiar explosives. Experimental and theoretical results showed that the adduct size and reactivity, the complex binding energy and the explosive structure tailor the stability of the molecular adduct complex. The flexibility of TIMS to adapt the mobility separation to the stability of the molecular adduct complex (i.e., short or long IMS experiments / low or high IMS resolution) permits targeted measurements of explosives in complex mixtures with higher confidence levels. PMID:26153567

  13. Broad Feshbach resonance in the 6Li-40K mixture.

    PubMed

    Tiecke, T G; Goosen, M R; Ludewig, A; Gensemer, S D; Kraft, S; Kokkelmans, S J J M F; Walraven, J T M

    2010-02-05

    We study the widths of interspecies Feshbach resonances in a mixture of the fermionic quantum gases 6Li and 40K. We develop a model to calculate the width and position of all available Feshbach resonances for a system. Using the model, we select the optimal resonance to study the 6Li/40K mixture. Experimentally, we obtain the asymmetric Fano line shape of the interspecies elastic cross section by measuring the distillation rate of 6Li atoms from a potassium-rich 6Li/40K mixture as a function of magnetic field. This provides us with the first experimental determination of the width of a resonance in this mixture, ΔB = 1.5(5) G. Our results offer good perspectives for the observation of universal crossover physics using this mass-imbalanced fermionic mixture.
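    The asymmetric Fano profile mentioned above has the standard reduced form (q + x)²/(1 + x²). A minimal sketch; the mapping of x to the magnetic-field detuning is indicated only schematically and is an assumption about how the profile is parameterised here:

    ```python
    def fano(x, q):
        """Fano line shape (q + x)^2 / (1 + x^2), where x is the reduced
        detuning (schematically x ~ 2*(B - B0)/DeltaB for a field-tuned
        resonance). The asymmetry parameter q sets the profile shape;
        |q| -> infinity approaches a symmetric Lorentzian."""
        return (q + x) ** 2 / (1.0 + x ** 2)
    ```

    The characteristic Fano zero at x = -q, next to the resonance peak, is what makes the measured elastic cross section asymmetric in field.
    
    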

  14. How does pea architecture influence light sharing in virtual wheat–pea mixtures? A simulation study based on pea genotypes with contrasting architectures

    PubMed Central

    Barillot, Romain; Combes, Didier; Chevalier, Valérie; Fournier, Christian; Escobar-Gutiérrez, Abraham J.

    2012-01-01

    Background and aims: Light interception is a key factor driving the functioning of wheat–pea intercrops. The sharing of light is related to the canopy structure, which results from the architectural parameters of the mixed species. In the present study, we characterized six contrasting pea genotypes and identified architectural parameters whose range of variability leads to various levels of light sharing within virtual wheat–pea mixtures. Methodology: Virtual plants were derived from magnetic digitizations performed during the growing cycle in a greenhouse experiment. Plant mock-ups were used as inputs to a radiative transfer model in order to estimate light interception in virtual wheat–pea mixtures. The turbid medium approach, extended to well-mixed canopies, was used as a framework for assessing the effects of leaf area index (LAI) and mean leaf inclination on light sharing. Principal results: Three groups of pea genotypes were distinguished: (i) early and leafy cultivars, (ii) late semi-leafless cultivars and (iii) low-development semi-leafless cultivars. Within open canopies, light sharing was well described by the turbid medium approach and was therefore determined by the architectural parameters that composed LAI and foliage inclination. When canopy closure started, the turbid medium approach was unable to properly infer light partitioning because of the vertical structure of the canopy, which is related to the architectural parameters that determine the height of pea genotypes. Light capture was therefore affected by the development of leaflets, the number of branches and phytomers, and internode length. Conclusions: This study provides information on pea architecture and identifies parameters whose variability can be used to drive light sharing within wheat–pea mixtures. These results could be used to design the architecture of pea ideotypes adapted to multi-species stands with respect to competition for light. PMID:23240074

  15. Recognizing visual focus of attention from head pose in natural meetings.

    PubMed

    Ba, Sileye O; Odobez, Jean-Marc

    2009-02-01

    We address the problem of recognizing the visual focus of attention (VFOA) of meeting participants based on their head pose. To this end, the head pose observations are modeled using a Gaussian mixture model (GMM) or a hidden Markov model (HMM) whose hidden states correspond to the VFOA. The novelties of this paper are threefold. First, contrary to previous studies on the topic, in our setup, the potential VFOA of a person is not restricted to other participants only. It includes environmental targets as well (a table and a projection screen), which increases the complexity of the task, with more VFOA targets spread in the pan as well as tilt gaze space. Second, we propose a geometric model to set the GMM or HMM parameters by exploiting results from cognitive science on saccadic eye motion, which allows the prediction of the head pose given a gaze target. Third, an unsupervised parameter adaptation step not using any labeled data is proposed, which accounts for the specific gazing behavior of each participant. Using a publicly available corpus of eight meetings featuring four persons, we analyze the above methods by evaluating, through objective performance measures, the recognition of the VFOA from head pose information obtained either using a magnetic sensor device or a vision-based tracking system. The results clearly show that in such complex but realistic situations, the VFOA recognition performance is highly dependent on how well the visual targets are separated for a given meeting participant. In addition, the results show that the use of a geometric model with unsupervised adaptation achieves better results than the use of training data to set the HMM parameters.

  16. Real-time, adaptive machine learning for non-stationary, near chaotic gasoline engine combustion time series.

    PubMed

    Vaughan, Adam; Bohac, Stanislav V

    2015-10-01

    Fuel efficient Homogeneous Charge Compression Ignition (HCCI) engine combustion timing predictions must contend with non-linear chemistry, non-linear physics, period doubling bifurcation(s), turbulent mixing, model parameters that can drift day-to-day, and air-fuel mixture state information that cannot typically be resolved on a cycle-to-cycle basis, especially during transients. In previous work, an abstract cycle-to-cycle mapping function coupled with ϵ-Support Vector Regression was shown to predict experimentally observed cycle-to-cycle combustion timing over a wide range of engine conditions, despite some of the aforementioned difficulties. The main limitation of the previous approach was that a partially acausal, randomly sampled training dataset was used to train proof-of-concept offline predictions. The objective of this paper is to address this limitation by proposing a new online adaptive Extreme Learning Machine (ELM) extension named Weighted Ring-ELM. This extension enables fully causal combustion timing predictions at randomly chosen engine set points, and is shown to achieve results that are as good as or better than the previous offline method. The broader objective of this approach is to enable a new class of real-time model predictive control strategies for high variability HCCI and, ultimately, to bring HCCI's low engine-out NOx and reduced CO2 emissions to production engines. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

    Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing such types of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. As for cartilage experiments, FMMs indicate that at least three mixture components are needed for describing the measured histogram. While the mechanical properties of the softer mixture components, often assumed to be associated with Glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. collagen network) considerably changed depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use. 
Copyright © 2014 Elsevier B.V. All rights reserved.
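    The finite-mixture workflow described above (fit Gaussian mixtures with increasing numbers of components, then pick the count favored by an objective criterion) can be sketched with a small, stdlib-only EM routine and the BIC. This is an illustrative sketch, not the paper's code; the quantile-based initialization and convergence tolerance are assumptions.

```python
import math
import random

def em_gmm_1d(data, k, iters=300, tol=1e-9):
    """Fit a k-component 1-D Gaussian mixture by EM.

    Returns (weights, means, sds, log_likelihood). Components start at
    evenly spaced sample quantiles (an arbitrary but deterministic choice).
    """
    n = len(data)
    srt = sorted(data)
    mu = [srt[int((j + 0.5) * n / k)] for j in range(k)]
    sd = [max((srt[-1] - srt[0]) / (2.0 * k), 1e-3)] * k
    w = [1.0 / k] * k
    ll = float("-inf")
    for _ in range(iters):
        resp, ll_new = [], 0.0  # E-step: posterior responsibilities
        for x in data:
            dens = [w[j] * math.exp(-0.5 * ((x - mu[j]) / sd[j]) ** 2)
                    / (sd[j] * math.sqrt(2.0 * math.pi)) for j in range(k)]
            tot = sum(dens) or 1e-300
            ll_new += math.log(tot)
            resp.append([d / tot for d in dens])
        for j in range(k):  # M-step: update weight, mean, sd of each component
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj
            sd[j] = math.sqrt(max(var, 1e-6))
        converged = abs(ll_new - ll) < tol
        ll = ll_new
        if converged:
            break
    return w, mu, sd, ll

def bic(loglik, k, n):
    """Bayesian information criterion; a k-component 1-D GMM has 3k - 1 free parameters."""
    return (3 * k - 1) * math.log(n) - 2.0 * loglik

# Clearly bimodal synthetic data: BIC should prefer two components over one.
rng = random.Random(1)
data = [rng.gauss(0, 1) for _ in range(100)] + [rng.gauss(8, 1) for _ in range(100)]
_, mu2, _, ll2 = em_gmm_1d(data, 2)
_, _, _, ll1 = em_gmm_1d(data, 1)
```

    For this synthetic example the two-component fit recovers means near 0 and 8, and its BIC is lower (better) than that of the single-component fit, mirroring the model-count selection used in the abstract.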

  18. Assessment of the Risks of Mixtures of Major Use Veterinary Antibiotics in European Surface Waters.

    PubMed

    Guo, Jiahua; Selby, Katherine; Boxall, Alistair B A

    2016-08-02

    Effects of single veterinary antibiotics on a range of aquatic organisms have been explored in many studies. In reality, surface waters will be exposed to mixtures of these substances. In this study, we present an approach for establishing risks of antibiotic mixtures to surface waters and illustrate this by assessing risks of mixtures of three major use antibiotics (trimethoprim, tylosin, and lincomycin) to algal and cyanobacterial species in European surface waters. Ecotoxicity tests were initially performed to assess the combined effects of the antibiotics to the cyanobacteria Anabaena flos-aquae. The results were used to evaluate two mixture prediction models: concentration addition (CA) and independent action (IA). The CA model performed best at predicting the toxicity of the mixture with the experimental 96 h EC50 for the antibiotic mixture being 0.248 μmol/L compared to the CA predicted EC50 of 0.21 μmol/L. The CA model was therefore used alongside predictions of exposure for different European scenarios and estimations of hazards obtained from species sensitivity distributions to estimate risks of mixtures of the three antibiotics. Risk quotients for the different scenarios ranged from 0.066 to 385 indicating that the combination of three substances could be causing adverse impacts on algal communities in European surface waters. This could have important implications for primary production and nutrient cycling. Tylosin contributed most to the risk followed by lincomycin and trimethoprim. While we have explored only three antibiotics, the combined experimental and modeling approach could readily be applied to the wider range of antibiotics that are in use.
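    The two reference models evaluated above have simple closed forms: under concentration addition (CA) the mixture EC50 follows from the components' molar fractions and individual EC50s, while under independent action (IA) component effects combine as probabilities. A minimal sketch with invented component values (not the paper's data):

```python
def ca_ec50(fractions, ec50s):
    """Concentration addition: EC50_mix = 1 / sum(p_i / EC50_i)."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_effect(effects):
    """Independent action: E_mix = 1 - prod(1 - E_i) for component effects E_i."""
    out = 1.0
    for e in effects:
        out *= (1.0 - e)
    return 1.0 - out

# Hypothetical equimolar binary mixture with component EC50s of 1.0 and 3.0 umol/L.
mix_ec50 = ca_ec50([0.5, 0.5], [1.0, 3.0])   # ~1.5 umol/L under CA
mix_effect = ia_effect([0.2, 0.5])           # ~0.6 under IA
```

    The same CA formula, applied with the three antibiotics' fractions and single-substance EC50s, is how a predicted mixture EC50 like the 0.21 μmol/L quoted above would be obtained.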

  19. Mesoscale Modeling of LX-17 Under Isentropic Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, H K; Willey, T M; Friedman, G

    Mesoscale simulations of LX-17 incorporating different equilibrium mixture models were used to investigate the unreacted equation-of-state (UEOS) of TATB. Candidate TATB UEOS were calculated using the equilibrium mixture models and benchmarked with mesoscale simulations of isentropic compression experiments (ICE). X-ray computed tomography (XRCT) data provided the basis for initializing the simulations with realistic microstructural details. Three equilibrium mixture models were used in this study. The single constituent with conservation equations (SCCE) model was based on a mass-fraction weighted specific volume and the conservation of mass, momentum, and energy. The single constituent equation-of-state (SCEOS) model was based on a mass-fraction weighted specific volume and the equation-of-state of the constituents. The kinetic energy averaging (KEA) model was based on a mass-fraction weighted particle velocity mixture rule and the conservation equations. The SCEOS model yielded the stiffest TATB EOS (0.121μ + 0.4958μ² + 2.0473μ³) and, when incorporated in mesoscale simulations of the ICE, demonstrated the best agreement with VISAR velocity data for both specimen thicknesses. The SCCE model yielded a relatively more compliant EOS (0.1999μ - 0.6967μ² + 4.9546μ³) and the KEA model yielded the most compliant EOS (0.1999μ - 0.6967μ² + 4.9546μ³) of all the equilibrium mixture models. Mesoscale simulations with the lower density TATB adiabatic EOS data demonstrated the least agreement with VISAR velocity data.

  20. Plant selection and soil legacy enhance long-term biodiversity effects.

    PubMed

    Zuppinger-Dingley, Debra; Flynn, Dan F B; De Deyn, Gerlinde B; Petermann, Jana S; Schmid, Bernhard

    2016-04-01

    Plant-plant and plant-soil interactions can help maintain plant diversity and ecosystem functions. Changes in these interactions may underlie experimentally observed increases in biodiversity effects over time via the selection of genotypes adapted to low or high plant diversity. Little is known, however, about such community-history effects and particularly the role of plant-soil interactions in this process. Soil-legacy effects may occur if co-evolved interactions with soil communities either positively or negatively modify plant biodiversity effects. We tested how plant selection and soil legacy influence biodiversity effects on productivity, and whether such effects increase the resistance of the communities to invasion by weeds. We used two plant selection treatments: parental plants growing in monoculture or in mixture over 8 yr in a grassland biodiversity experiment in the field, which we term monoculture types and mixture types. The two soil-legacy treatments used in this study were neutral soil inoculated with live or sterilized soil inocula collected from the same plots in the biodiversity experiment. For each of the four factorial combinations, seedlings of eight species were grown in monocultures or four-species mixtures in pots in an experimental garden over 15 weeks. Soil legacy (live inoculum) strongly increased biodiversity complementarity effects for communities of mixture types, and to a significantly weaker extent for communities of monoculture types. This may be attributed to negative plant-soil feedbacks suffered by mixture types in monocultures, whereas monoculture types had positive plant-soil feedbacks, in both monocultures and mixtures. Monocultures of mixture types were most strongly invaded by weeds, presumably due to increased pathogen susceptibility, reduced biomass, and altered plant-soil interactions of mixture types. 
These results show that biodiversity effects in experimental grassland communities can be modified by the evolution of positive vs. negative plant-soil feedbacks of plant monoculture vs. mixture types.

  1. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  2. Native perennial forb tolerance to rates and mixtures of postemergence herbicides

    Treesearch

    Clinton C. Shock; Erik Feibert; Nancy Shaw

    2009-01-01

    Native forb seed is needed to restore rangelands of the Intermountain West. Commercial seed production is necessary to provide the quantity of seed needed for restoration efforts. A major limitation to economically viable commercial production of native forb seed is weed competition. Weeds are adapted to growing in disturbed soil, and native forbs are not competitive...

  3. Second Language Learners' Performance and Strategies When Writing Direct and Translated Essays

    ERIC Educational Resources Information Center

    Ismail, Sadiq Abdulwahed Ahmed; Alsheikh, Negmeldin Omer

    2012-01-01

    The purpose of this study was to investigate ESL students' performance and strategies when writing direct and translated essays. The study also aimed at exploring students' strategies when writing in L2 (English) and L1 (Arabic). The study used a mixture of quantitative and qualitative procedures for data collection and analysis. Adapted strategy…

  4. Ignition points and combustion reactions in Diesel engines. Part I

    NASA Technical Reports Server (NTRS)

    Tausz, J; Schulte, F

    1928-01-01

    The question of whether the fuel should be adapted to the engine or whether it is possible to improve equipment such as carburetors and engines so that as much of the crude oil as possible may be used without further transformation is examined in this report. Various ignition points and fuel mixtures are investigated in this regard.

  5. Activities of mixtures of soil-applied herbicides with different molecular targets.

    PubMed

    Kaushik, Shalini; Streibig, Jens Carl; Cedergreen, Nina

    2006-11-01

    The joint action of soil-applied herbicide mixtures with similar or different modes of action has been assessed by using the additive dose model (ADM). The herbicides chlorsulfuron, metsulfuron-methyl, pendimethalin and pretilachlor, applied either singly or in binary mixtures, were used on rice (Oryza sativa L.). The growth (shoot) response curves were described by a logistic dose-response model. The ED50 values and their corresponding standard errors obtained from the response curves were used to test statistically if the shape of the isoboles differed from the reference model (ADM). Results showed that mixtures of herbicides with similar molecular targets, i.e. chlorsulfuron and metsulfuron (acetolactate synthase (ALS) inhibitors), and with different molecular targets, i.e. pendimethalin (microtubule assembly inhibitor) and pretilachlor (very long chain fatty acids (VLCFAs) inhibitor), followed the ADM. Mixing herbicides with different molecular targets gave different results depending on whether pretilachlor or pendimethalin was involved. In general, mixtures of pretilachlor and sulfonylureas showed synergistic interactions, whereas mixtures of pendimethalin and sulfonylureas exhibited either antagonistic or additive activities. Hence, there is a large potential for both increasing the specificity of herbicides by using mixtures and lowering the total dose for weed control, while at the same time delaying the development of herbicide resistance by using mixtures with different molecular targets. Copyright (c) 2006 Society of Chemical Industry.
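    The dose-response machinery used in the study reduces to a few formulas: a log-logistic growth-response curve with its ED50, and the ADM reference read as a toxic-unit sum on the isobole. The sketch below uses the standard three-parameter log-logistic form with made-up numbers, not the paper's fitted values:

```python
def log_logistic(dose, upper, ed50, slope):
    """Three-parameter log-logistic response: upper / (1 + (dose/ed50)**slope)."""
    return upper / (1.0 + (dose / ed50) ** slope)

def adm_toxic_units(dose_a, ed50_a, dose_b, ed50_b):
    """Toxic-unit sum for a binary mixture observed to give 50% effect.

    A sum of 1.0 lies on the ADM isobole (additivity); sums below 1 point to
    synergism and sums above 1 to antagonism.
    """
    return dose_a / ed50_a + dose_b / ed50_b

# At the ED50 the response equals half the upper asymptote.
half = log_logistic(2.0, upper=100.0, ed50=2.0, slope=1.8)
# A 50%-effect dose combination sitting exactly on the additive isobole.
tu = adm_toxic_units(2.0, 4.0, 1.0, 2.0)
```

    Testing whether the isobole bends below or above the straight ADM line, using the ED50s and their standard errors, is the statistical comparison the abstract describes.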

  6. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.

  7. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    ERIC Educational Resources Information Center

    Lu, Yi

    2016-01-01

    To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…

  8. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  9. Numerical modeling and analytical modeling of cryogenic carbon capture in a de-sublimating heat exchanger

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.

    2017-12-01

    Both a numerical and an analytical model of the heat and mass transfer processes in a CO2/N2 mixture-gas de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimation rate of CO2 in the mixture gas. The mixture gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution and de-sublimation rate of CO2 through the whole heat exchanger were computed using both the numerical and the analytical model. The numerical model is built using EES (Engineering Equation Solver) [1]. Based on the simulations, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of inlet flue gas.

  10. Structure investigations on assembled astaxanthin molecules

    NASA Astrophysics Data System (ADS)

    Köpsel, Christian; Möltgen, Holger; Schuch, Horst; Auweter, Helmut; Kleinermanns, Karl; Martin, Hans-Dieter; Bettermann, Hans

    2005-08-01

    The carotenoid r,r-astaxanthin (3R,3′R-dihydroxy-4,4′-diketo-β-carotene) forms different types of aggregates in acetone-water mixtures. H-type aggregates were found in mixtures with a high proportion of water (e.g. a 1:9 acetone-water mixture), whereas two different types of J-aggregates were identified in mixtures with a lower proportion of water (a 3:7 acetone-water mixture). These aggregates were characterized by recording UV/vis absorption spectra, CD spectra and fluorescence emissions. The sizes of the molecular assemblies were determined by dynamic light scattering experiments. The hydrodynamic diameter of the assemblies amounts to 40 nm in 1:9 acetone-water mixtures and reaches up to 1 μm in 3:7 acetone-water mixtures. Scanning tunneling microscopy monitored astaxanthin aggregates on graphite surfaces. The structure of the H-aggregate was obtained by molecular modeling calculations and confirmed by calculating the electronic absorption spectrum and the CD spectrum with the molecular modeling structure as input.

  11. Mixture modelling for cluster analysis.

    PubMed

    McLachlan, G J; Chang, S U

    2004-10-01

    Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
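    The outright-clustering rule described above (assign each observation to the component with the highest estimated posterior probability) is a one-liner once the mixture parameters are in hand. A minimal 1-D sketch with assumed, already-fitted parameters:

```python
import math

def normal_pdf(x, mu, sd):
    """Density of a normal distribution with mean mu and standard deviation sd."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def assign(x, weights, means, sds):
    """Return (cluster index, posterior probabilities) for observation x."""
    joint = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sds)]
    total = sum(joint)
    post = [j / total for j in joint]  # posterior probability of each component
    return max(range(len(post)), key=post.__getitem__), post

# Two assumed components centred at 0 and 5; a point near 0 joins cluster 0.
cluster, post = assign(0.1, [0.5, 0.5], [0.0, 5.0], [1.0, 1.0])
```

    With g components the same rule partitions the data into g clusters, the ith cluster being the observations whose posterior is maximized by the ith component.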

  12. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

    Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, its PDF cannot be well represented by just a single-peak function. For the purpose of representing the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen and then used as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for the description of the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error, and enhances the fitting accuracy, and the load spectra compiled by this method can better reflect the actual load characteristics of the gear component.
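    The three judging criteria named above (coefficient of determination, mean square error, maximum deviation) compare the fitted mixture density against the load histogram bin by bin. A sketch of how they might be computed, with hypothetical per-bin values:

```python
def fit_criteria(observed, fitted):
    """R^2, mean square error, and maximum absolute deviation between
    histogram frequencies and the corresponding mixture-model values."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - f) ** 2 for o, f in zip(observed, fitted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot          # coefficient of determination
    mse = ss_res / n                    # mean square error
    max_dev = max(abs(o - f) for o, f in zip(observed, fitted))
    return r2, mse, max_dev

# Hypothetical histogram frequencies vs. fitted mixture values per bin.
r2, mse, max_dev = fit_criteria([0.1, 0.3, 0.4, 0.2], [0.12, 0.28, 0.38, 0.22])
```

    Candidate component functions can then be ranked on these three numbers, which is the automatic best-fit selection the abstract describes.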

  13. Compact determination of hydrogen isotopes

    DOE PAGES

    Robinson, David

    2017-04-06

    Scanning calorimetry of a confined, reversible hydrogen sorbent material has been previously proposed as a method to determine compositions of unknown mixtures of diatomic hydrogen isotopologues and helium. Application of this concept could result in greater process knowledge during the handling of these gases. Previously published studies have focused on mixtures that do not include tritium. This paper focuses on modeling to predict the effect of tritium in mixtures of the isotopologues on a calorimetry scan. Furthermore, the model predicts that tritium can be measured with a sensitivity comparable to that observed for hydrogen-deuterium mixtures, and that under some conditions, it may be possible to determine the atomic fractions of all three isotopes in a gas mixture.

  14. ALF: a strategy for identification of unauthorized GMOs in complex mixtures by a GW-NGS method and dedicated bioinformatics analysis.

    PubMed

    Košir, Alexandra Bogožalec; Arulandhu, Alfred J; Voorhuijzen, Marleen M; Xiao, Hongmei; Hagelaar, Rico; Staats, Martijn; Costessi, Adalberto; Žel, Jana; Kok, Esther J; Dijk, Jeroen P van

    2017-10-26

    The majority of feed products in industrialised countries contains materials derived from genetically modified organisms (GMOs). In parallel, the number of reports of unauthorised GMOs (UGMOs) is gradually increasing. There is a lack of specific detection methods for UGMOs, due to the absence of detailed sequence information and reference materials. In this research, an adapted genome walking approach was developed, called ALF: Amplification of Linearly-enriched Fragments. Coupling of ALF to NGS aims for simultaneous detection and identification of all GMOs, including UGMOs, in one sample, in a single analysis. The ALF approach was assessed on a mixture made of DNA extracts from four reference materials, in an uneven distribution, mimicking a real life situation. The complete insert and genomic flanking regions were known for three of the included GMO events, while for MON15985 only partial sequence information was available. Combined with a known organisation of elements, this GMO served as a model for a UGMO. We successfully identified sequences matching with this organisation of elements serving as proof of principle for ALF as new UGMO detection strategy. Additionally, this study provides a first outline of an automated, web-based analysis pipeline for identification of UGMOs containing known GM elements.

  15. Naproxen-imprinted xerogels in the micro- and nanospherical forms by emulsion technique.

    PubMed

    Ornelas, Mariana; Azenha, Manuel; Pereira, Carlos; Silva, A Fernando

    2015-11-27

    Naproxen-imprinted xerogels in the microspherical and nanospherical forms were prepared by W/O emulsion and microemulsion, respectively. The work evolved from a sol–gel mixture previously reported for bulk synthesis. It was relatively simple to convert the original sol–gel mixture to one amenable to the emulsion technique. The microspheres thus produced presented a mean diameter of 3.7 μm, surface areas ranging from 220 to 340 m²/g, a selectivity factor of 4.3 (against ibuprofen) and an imprinting factor of 61. A superior capacity (9.4 μmol/g) was found compared with imprints obtained from similar pre-gelification mixtures. However, slow mass-transfer kinetics was deduced from column efficiency results. Concerning the nanospherical format, which constituted the first example of the production of molecularly imprinted xerogels in that format by the microemulsion technique, adapting the sol–gel mixture was troublesome. In the end, nanoparticles with diameters on the order of 10 nm were obtained, exhibiting good indications of an efficient molecular imprinting process. Future refinements are necessary to solve serious aggregation issues before moving to more accurate characterization of the binding characteristics or to real applications of the nanospheres.

  16. Competition in high dimensional spaces using a sparse approximation of neural fields.

    PubMed

    Quinton, Jean-Charles; Girau, Bernard; Lefort, Mathieu

    2011-01-01

    The Continuum Neural Field Theory implements competition within topologically organized neural networks with lateral inhibitory connections. However, due to the polynomial complexity of matrix-based implementations, updating dense representations of the activity becomes computationally intractable when an adaptive resolution or an arbitrary number of input dimensions is required. This paper proposes an alternative to self-organizing maps with a sparse implementation based on Gaussian mixture models, trading redundancy for higher computational efficiency and alleviating constraints on the underlying substrate. This version reproduces the emergent attentional properties of the original equations by directly applying them within a continuous approximation of a high-dimensional neural field. The model is compatible with preprocessed sensory flows but can also be interfaced with artificial systems. This is particularly important for sensorimotor systems, where decisions and motor actions must be taken and updated in real time. Preliminary tests are performed on a reactive color-tracking application, using spatially distributed color features.

  17. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. III. Kernel density estimators vs mixture models

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
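    A two-component Weibull mixture density of the kind fitted in this series can be written down directly. The sketch below evaluates such a density and checks that it integrates to one; the weights and (shape, scale) parameters are invented for illustration, not the study's fitted values:

```python
import math

def weibull_pdf(x, shape, scale):
    """Weibull density f(x) = (k/lam) * (x/lam)**(k-1) * exp(-(x/lam)**k), x >= 0."""
    return (shape / scale) * (x / scale) ** (shape - 1) * math.exp(-((x / scale) ** shape))

def weibull_mixture_pdf(x, weights, params):
    """Density of a Weibull mixture; params is a list of (shape, scale) pairs."""
    return sum(w * weibull_pdf(x, k, lam) for w, (k, lam) in zip(weights, params))

# Invented two-cohort dbh mixture: a younger cohort around 10 cm, an older one around 30 cm.
weights = [0.4, 0.6]
params = [(2.0, 10.0), (3.0, 30.0)]

# Riemann-sum check that the mixture density integrates to ~1 over its support.
dx = 0.05
area = sum(weibull_mixture_pdf(i * dx, weights, params) * dx for i in range(1, 4001))
```

    Replacing one or both Weibull components with gamma densities, or the whole mixture with a kernel density estimator, gives the alternative dbh models the paper compares.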

  18. The nonlinear model for emergence of stable conditions in gas mixture in force field

    NASA Astrophysics Data System (ADS)

    Kalutskov, Oleg; Uvarova, Liudmila

    2016-06-01

    The case of M-component liquid evaporation from a straight cylindrical capillary into an N-component gas mixture in the presence of external forces was considered. It is assumed that the gas mixture is not ideal. Stable states in the gas phase can form during the evaporation process for certain model parameter values because of the nonlinearity of the initial mass transfer equations. The critical concentrations of the resulting gas mixture components (the concentrations at which stable states occur in the mixture) were determined mathematically for the case of single-component fluid evaporation into a two-component atmosphere. It was concluded that this equilibrium concentration ratio of the mixture components can be achieved by external force influence on the mass transfer processes. This is one way to create sustainable gas clusters that can be used effectively in modern nanotechnology.

  19. A general mixture theory. I. Mixtures of spherical molecules

    NASA Astrophysics Data System (ADS)

    Hamad, Esam Z.

    1996-08-01

    We present a new general theory for obtaining mixture properties from the pure species equations of state. The theory addresses the composition dependence and the unlike-interaction dependence of the mixture equation of state. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike-interaction parameters. In this first part of the work, details are presented for the first and second levels of approximation for spherical molecules. The second order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar with a size ratio of three). For nonadditive hard spheres, comparison with compressibility factor simulation data over a wide range of density, composition, and nonadditivity parameter gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Andersen perturbation theory.

  20. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    PubMed

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null-hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
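
    The core of such a p-curve mixture can be illustrated with a small EM fit (the paper itself is Bayesian; the maximum-likelihood sketch below is a simplification). Significant p values are rescaled to (0, 1); H₀ contributes a uniform density, and H₁ is represented by a hypothetical Beta(a, 1) spike near zero:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05
# Simulated significant p values: 30% from H0 (uniform), 70% from H1 (skewed to 0)
p_h0 = rng.uniform(0, alpha, 60)
p_h1 = alpha * rng.beta(0.3, 1.0, 140)
p = np.concatenate([p_h0, p_h1])

u = p / alpha                      # rescale significant p values to (0, 1)
pi0, a = 0.5, 0.5                  # initial guesses
for _ in range(200):
    f0 = np.ones_like(u)           # H0: uniform density on (0, 1)
    f1 = a * u ** (a - 1.0)        # H1: Beta(a, 1) density, a < 1 -> spike at 0
    r0 = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)   # E step: P(H0 | p)
    pi0 = r0.mean()                               # M step: mixing proportion
    r1 = 1.0 - r0
    a = -r1.sum() / (r1 * np.log(u)).sum()        # closed-form Beta(a,1) MLE
print(f"estimated P(H0) ~ {pi0:.2f}, H1 shape a ~ {a:.2f}")
```

    The per-p responsibilities `r0` correspond to the paper's probability that each specific p value originates from H₀.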

  1. Thermodynamics of concentrated electrolyte mixtures and the prediction of mineral solubilities to high temperatures for mixtures in the system Na-K-Mg-Cl-SO₄-OH-H₂O

    NASA Astrophysics Data System (ADS)

    Pabalan, Roberto T.; Pitzer, Kenneth S.

    1987-09-01

    Mineral solubilities in binary and ternary electrolyte mixtures in the system Na-K-Mg-Cl-SO₄-OH-H₂O are calculated to high temperatures using available thermodynamic data for solids and for aqueous electrolyte solutions. Activity and osmotic coefficients are derived from the ion-interaction model of Pitzer (1973, 1979) and co-workers, the parameters of which are evaluated from experimentally determined solution properties or from solubility data in binary and ternary mixtures. Excellent to good agreement with experimental solubilities for binary and ternary mixtures indicates that the model can be successfully used to predict mineral-solution equilibria to high temperatures. Although there are currently no theoretical forms for the temperature dependencies of the various model parameters, the solubility data in ternary mixtures can be adequately represented by constant values of the mixing term θij and values of ψijk which are either constant or have a simple temperature dependence. Since no additional parameters are needed to describe the thermodynamic properties of more complex electrolyte mixtures, the calculations can be extended to equilibrium studies relevant to natural systems. Examples of predicted solubilities are given for the quaternary system NaCl-KCl-MgCl₂-H₂O.
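
    For a single 1:1 salt, the ion-interaction model reduces to a short formula for the osmotic coefficient. The sketch below uses commonly tabulated 25 °C NaCl parameters (treated here as illustrative) and omits the mixing terms θ and ψ that the full multicomponent calculations require:

```python
import math

# Minimal Pitzer ion-interaction sketch: osmotic coefficient of a 1:1 salt
A_PHI, B = 0.392, 1.2       # Debye-Hückel slope at 25 °C and fixed parameter
ALPHA = 2.0
beta0, beta1, c_phi = 0.0765, 0.2664, 0.00127   # tabulated NaCl binary values

def osmotic_phi(m):
    I = m                                        # ionic strength of a 1:1 salt
    f = -A_PHI * math.sqrt(I) / (1 + B * math.sqrt(I))
    b_phi = beta0 + beta1 * math.exp(-ALPHA * math.sqrt(I))
    return 1.0 + f + m * b_phi + m ** 2 * c_phi

print(round(osmotic_phi(1.0), 3))   # ~0.936, close to the measured NaCl value
```

    The multicomponent predictions in the paper add θij and ψijk cross terms to the same ionic-strength framework.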

  2. Bioanalytical effect-balance model to determine the bioavailability of organic contaminants in sediments affected by black and natural carbon.

    PubMed

    Bräunig, Jennifer; Tang, Janet Y M; Warne, Michael St J; Escher, Beate I

    2016-08-01

    In sediments, several binding phases dictate the fate and bioavailability of organic contaminants. Black carbon (BC) has a high sorptive capacity for organic contaminants and can limit their bioavailability, while the fraction bound to organic carbon (OC) is considered readily desorbable and bioavailable. We investigated the bioavailability and mixture toxicity of sediment-associated contaminants by combining different extraction techniques with in vitro bioanalytical tools. Sediments from a harbour with a high fraction of BC, and sediments from remote, agricultural and urban areas with lower BC, were treated with exhaustive solvent extraction, Tenax extraction and passive sampling to estimate the total, bioaccessible and bioavailable fractions, respectively. The extracts were characterized with cell-based bioassays that measure dioxin-like activity (AhR-CAFLUX) and the adaptive stress response to oxidative stress (AREc32). The resulting bioanalytical equivalents, which are effect-scaled concentrations, were applied in an effect-balance model, consistent with a mass balance-partitioning model for single chemicals. Sediments containing BC had most of the bioactivity associated with the BC fraction, while the OC fraction played a role for sediments with lower BC. As effect-based sediment-water distribution ratios demonstrated, most of the bioactivity in the AhR-CAFLUX was attributable to hydrophobic chemicals, while more hydrophilic chemicals activated AREc32, even though bioanalytical equivalents in the aqueous phase remained negligible. This approach can be used to understand the fate and effects of mixtures of diverse organic contaminants in sediments in a way that would not be possible if single chemicals were targeted by chemical analysis, and to make informed risk-based decisions concerning the management of contaminated sediments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Spectrophotometric Analysis of Pigments: A Critical Assessment of a High-Throughput Method for Analysis of Algal Pigment Mixtures by Spectral Deconvolution

    PubMed Central

    Thrane, Jan-Erik; Kyle, Marcia; Striebel, Maren; Haande, Sigrid; Grung, Merete; Rohrlack, Thomas; Andersen, Tom

    2015-01-01

    The Gauss-peak spectra (GPS) method represents individual pigment spectra as weighted sums of Gaussian functions, and uses these to model absorbance spectra of phytoplankton pigment mixtures. We here present several improvements for this type of methodology, including adaptation to plate reader technology and efficient model fitting by open source software. We use a one-step modeling of both pigment absorption and background attenuation with non-negative least squares, following a one-time instrument-specific calibration. The fitted background is shown to be higher than a solvent blank, with features reflecting contributions from both scatter and non-pigment absorption. We assessed pigment aliasing due to absorption spectra similarity by Monte Carlo simulation, and used this information to select a robust set of identifiable pigments that are also expected to be common in natural samples. To test the method’s performance, we analyzed absorbance spectra of pigment extracts from sediment cores, 75 natural lake samples, and four phytoplankton cultures, and compared the estimated pigment concentrations with concentrations obtained using high performance liquid chromatography (HPLC). The deviance between observed and fitted spectra was generally very low, indicating that measured spectra could successfully be reconstructed as weighted sums of pigment and background components. Concentrations of total chlorophylls and total carotenoids could accurately be estimated for both sediment and lake samples, but individual pigment concentrations (especially carotenoids) proved difficult to resolve due to similarity between their absorbance spectra. In general, our modified-GPS method provides an improvement of the GPS method that is a fast, inexpensive, and high-throughput alternative for screening of pigment composition in samples of phytoplankton material. PMID:26359659
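
    The one-step fit of pigment absorption plus background with non-negative least squares can be sketched as follows; the Gaussian "pigment" spectra and concentrations are invented stand-ins, not calibrated GPS components:

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(400, 700, 301)          # wavelength grid, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Hypothetical component spectra built from Gaussian peaks (GPS-style basis)
chl_a = gauss(430, 15) + 0.7 * gauss(662, 10)
chl_b = gauss(455, 15) + 0.4 * gauss(645, 10)
carot = gauss(480, 25)
background = np.ones_like(wl)            # flat attenuation term fitted jointly
A = np.column_stack([chl_a, chl_b, carot, background])

true_conc = np.array([1.2, 0.4, 0.8, 0.05])
measured = A @ true_conc + np.random.default_rng(2).normal(0, 0.01, wl.size)

conc, resid = nnls(A, measured)          # non-negative least squares unmixing
print(conc.round(2))
```

    The non-negativity constraint is what keeps fitted concentrations physically interpretable; aliasing between similar basis spectra (e.g., the two chlorophyll bands here) is exactly the identifiability issue the paper probes by Monte Carlo simulation.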

  4. Lattice Boltzmann scheme for mixture modeling: analysis of the continuum diffusion regimes recovering Maxwell-Stefan model and incompressible Navier-Stokes equations.

    PubMed

    Asinari, Pietro

    2009-11-01

    A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model reduces automatically to Fickian cases. Moreover, (2) some tests based on the Stefan diffusion tube are reported, proving the complete capabilities of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.
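
    The Stefan diffusion tube mentioned in test (2) has, in the binary case, a classical analytical steady-state flux that such schemes are checked against. A sketch with illustrative values (not those of the paper):

```python
import math

# Stefan tube benchmark, binary case: component A evaporates through
# stagnant gas B; the steady molar flux has the classical log-mean form
#   N_A = (c * D_AB / L) * ln((1 - x_A,L) / (1 - x_A,0))
def stefan_flux(c_tot, d_ab, length, x_a0, x_aL):
    return (c_tot * d_ab / length) * math.log((1 - x_aL) / (1 - x_a0))

c_tot = 101325 / (8.314 * 298.15)    # ideal-gas molar density, mol/m^3
flux = stefan_flux(c_tot, d_ab=2.5e-5, length=0.2, x_a0=0.3, x_aL=0.0)
print(f"{flux:.2e} mol/(m^2 s)")
```

    A multicomponent Maxwell-Stefan solution no longer has this closed form, which is why a numerical scheme like the one above is needed there.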

  5. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    NASA Astrophysics Data System (ADS)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

    A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, showing the underlying algorithm for each and comparing them to indicate their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (a genetic algorithm, GA). To project the comparison in a sensible way, the methods are used for the stability indicating quantitative analysis of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components), in raw materials and pharmaceutical dosage form, via handling the UV spectral data. For proper analysis, a 6-factor 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results highlight the problem of nonlinearity and how models like SVR and ANN can handle it. The methods demonstrate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, using cheap and easy-to-handle instruments like the UV spectrophotometer.
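
    As a much simpler stand-in for SVR/GA-ANN, the sketch below shows the general multivariate-calibration setup on simulated overlapped UV spectra (6 hypothetical components, 25 training and 5 validation mixtures, mirroring the design sizes above) using plain inverse least squares:

```python
import numpy as np

rng = np.random.default_rng(3)
wl = np.arange(200.0, 321.0)                    # UV grid, nm

def band(mu, s):
    return np.exp(-0.5 * ((wl - mu) / s) ** 2)

# Six hypothetical overlapping component spectra (drug + impurities/degradants)
S = np.stack([band(210 + 18 * k, 12) for k in range(6)])

C_train = rng.uniform(0.2, 1.0, (25, 6))        # 25-mixture training design
A_train = C_train @ S + rng.normal(0, 0.002, (25, wl.size))
C_test = rng.uniform(0.2, 1.0, (5, 6))          # 5-mixture validation set
A_test = C_test @ S + rng.normal(0, 0.002, (5, wl.size))

# Inverse least-squares calibration: regress concentrations on absorbances
B, *_ = np.linalg.lstsq(A_train, C_train, rcond=None)
rmsep = np.sqrt(np.mean((A_test @ B - C_test) ** 2))
print(f"RMSEP ~ {rmsep:.3f}")
```

    SVR or an ANN replaces the final linear regression step when the spectra-to-concentration mapping is nonlinear, which is the situation the paper targets.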

  6. A Mixtures-of-Trees Framework for Multi-Label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the abilities of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods. PMID:25927011

  7. Liquid class predictor for liquid handling of complex mixtures

    DOEpatents

    Seglke, Brent W [San Ramon, CA]; Lekin, Timothy P [Livermore, CA]

    2008-12-09

    A method of establishing liquid classes of complex mixtures for liquid handling equipment. The mixtures are composed of components and the equipment has equipment parameters. The first step comprises preparing a response curve for the components. The next step comprises using the response curve to prepare a response indicator for the mixtures. The next step comprises deriving a model that relates the components and the mixtures to establish the liquid classes.

  8. Regression mixture models: Does modeling the covariance between independent variables and latent classes improve the results?

    PubMed Central

    Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models – i.e., independent variables in the model are not directly related to latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture based on these findings are noted. PMID:26881956
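
    The model class itself, a mixture of linear regressions with latent classes, can be sketched with a small EM routine; the data-generating slopes and class weight below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
x = rng.uniform(0, 1, n)
z = rng.random(n) < 0.6                          # latent class indicator
# Two hypothetical classes with different effects of the predictor on y
y = np.where(z, 1.0 + 3.0 * x, 1.0 - 1.0 * x) + rng.normal(0, 0.25, n)

X = np.column_stack([np.ones(n), x])
beta = np.array([[0.5, 2.0], [1.5, 0.0]])        # crude starting coefficients
sigma, w = 0.5, 0.5
for _ in range(200):
    resid = np.stack([y - X @ b for b in beta])              # (2, n)
    dens = np.exp(-0.5 * (resid / sigma) ** 2) / sigma       # Gaussian kernels
    num = np.array([w, 1 - w])[:, None] * dens
    r = num / num.sum(axis=0)                                # responsibilities
    w = r[0].mean()
    for k in range(2):                                       # weighted LS per class
        sw = np.sqrt(r[k])
        beta[k] = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    sigma = np.sqrt((r * resid ** 2).sum() / n)
print(w.round(2), beta.round(2))
```

    The assumption the paper examines corresponds to whether the class membership probability is itself modeled as depending on x, which this basic sketch does not do.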

  9. A Concentration Addition Model to Assess Activation of the Pregnane X Receptor (PXR) by Pesticide Mixtures Found in the French Diet

    PubMed Central

    de Sousa, Georges; Nawaz, Ahmad; Cravedi, Jean-Pierre; Rahmani, Roger

    2014-01-01

    French consumers are exposed to mixtures of pesticide residues in part through food consumption. As a xenosensor, the pregnane X receptor (hPXR) is activated by numerous pesticides, the combined effect of which is currently unknown. We examined the activation of hPXR by seven pesticide mixtures most likely found in the French diet and their individual components. The mixture's effect was estimated using the concentration addition (CA) model. PXR transactivation was measured by monitoring luciferase activity in hPXR/HepG2 cells and CYP3A4 expression in human hepatocytes. The three mixtures with the highest potency were evaluated using the CA model, at equimolar concentrations and at their relative proportion in the diet. The seven mixtures significantly activated hPXR and induced the expression of CYP3A4 in human hepatocytes. Of the 14 pesticides which constitute the three most active mixtures, four were found to be strong hPXR agonists, four medium, and six weak. Depending on the mixture and pesticide proportions, additive, greater than additive or less than additive effects between compounds were demonstrated. Predictions of the combined effects were obtained with both real-life and equimolar proportions at low concentrations. Pesticides act mostly additively to activate hPXR, when present in a mixture. Modulation of hPXR activation and its target genes induction may represent a risk factor contributing to exacerbate the physiological response of the hPXR signaling pathways and to explain some adverse effects in humans. PMID:25028461
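
    The CA prediction used above reduces, for a fixed-ratio mixture, to a harmonic combination of component potencies. A minimal sketch with invented EC50 values:

```python
# Concentration addition (CA): the predicted mixture EC50 follows from
# component potencies and mixture fractions via 1/EC50_mix = sum_i p_i/EC50_i
def ca_ec50(fractions, ec50s):
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

# Hypothetical three-pesticide mixture at equimolar proportions
ec50s = [2.0, 10.0, 50.0]          # illustrative potencies, μM
print(ca_ec50([1 / 3, 1 / 3, 1 / 3], ec50s))
```

    Comparing observed mixture potency against this prediction is how additive, greater-than-additive, and less-than-additive effects are distinguished.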

  10. Biodegradation of chloro- and bromobenzoic acids: effect of milieu conditions and microbial community analysis.

    PubMed

    Gaza, Sarah; Felgner, Annika; Otto, Johannes; Kushmaro, Ariel; Ben-Dov, Eitan; Tiehm, Andreas

    2015-04-28

    Monohalogenated benzoic acids often appear in industrial wastewaters where biodegradation can be hampered by complex mixtures of pollutants and prevailing extreme milieu conditions. In this study, the biodegradation of chlorinated and brominated benzoic acids was conducted at a pH range of 5.0-9.0, at elevated salt concentrations and with pollutant mixtures including fluorinated and iodinated compounds. In mixtures of the isomers, the degradation order was primarily 4-substituted followed by 3-substituted and then 2-substituted halogenated benzoic acids. If the pH and salt concentration were altered simultaneously, long adaptation periods were required. Community analyses were conducted in liquid batch cultures and after immobilization on sand columns. The Alphaproteobacteria represented an important fraction in all of the enrichment cultures. On the genus level, Afipia sp. was detected most frequently. In particular, Bacteroidetes were detected in high numbers with chlorinated benzoic acids. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Apparatus and method for phosphate-accelerated bioremediation

    DOEpatents

    Looney, Brian B.; Pfiffner, Susan M.; Phelps, Tommy J.; Lombard, Kenneth H.; Hazen, Terry C.; Borthen, James W.

    1998-01-01

    An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site and provides for the use of a passive delivery system. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.
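
    The saturation constraint stated above (nutrient partial pressure bounded by the liquid's vapor pressure) gives a direct upper bound on the nutrient mass carried per volume of gas, via the ideal gas law. The vapor pressure used below is illustrative, not a measured value for triethyl phosphate:

```python
# Upper bound on nutrient loading of the carrier gas: the partial pressure
# of the vaporized nutrient cannot exceed the liquid's vapor pressure, so
# the ideal-gas saturation concentration bounds what the wells can deliver.
R = 8.314          # J/(mol*K)

def max_vapor_conc(p_vap_pa, molar_mass_g, temp_k):
    """Saturation mass concentration (g/m^3) when partial pressure = vapor pressure."""
    return p_vap_pa * molar_mass_g / (R * temp_k)

# Triethyl phosphate (molar mass ~182 g/mol) with an assumed vapor pressure
print(max_vapor_conc(p_vap_pa=52.0, molar_mass_g=182.15, temp_k=298.15))
```

    Heating the nutrient or the gas raises `p_vap_pa` and hence this ceiling, which is why the patent provides for optional heating.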

  12. Apparatus and method for phosphate-accelerated bioremediation

    DOEpatents

    Looney, B.B.; Phelps, T.J.; Hazen, T.C.; Pfiffner, S.M.; Lombard, K.H.; Borthen, J.W.

    1994-01-01

    An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in fluid communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.

  13. Method for phosphate-accelerated bioremediation

    DOEpatents

    Looney, Brian B.; Lombard, Kenneth H.; Hazen, Terry C.; Pfiffner, Susan M.; Phelps, Tommy J.; Borthen, James W.

    1996-01-01

    An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in fluid communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.

  14. Transient Catalytic Combustor Model With Detailed Gas and Surface Chemistry

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Dietrich, Daniel L.; Mellish, Benjamin P.; Miller, Fletcher J.; Tien, James S.

    2005-01-01

    In this work, we numerically investigate the transient combustion of a premixed gas mixture in a narrow, perfectly-insulated, catalytic channel which can represent an interior channel of a catalytic monolith. The model assumes a quasi-steady gas-phase and a transient, thermally thin solid phase. The gas phase is one-dimensional, but it does account for heat and mass transfer in a direction perpendicular to the flow via appropriate heat and mass transfer coefficients. The model neglects axial conduction in both the gas and in the solid. The model includes both detailed gas-phase reactions and catalytic surface reactions. The reactants modeled so far include lean mixtures of dry CO and CO/H2 mixtures, with pure oxygen as the oxidizer. The results include transient computations of light-off and system response to inlet condition variations. In some cases, the model predicts two different steady-state solutions depending on whether the channel is initially hot or cold. Additionally, the model suggests that the catalytic ignition of CO/O2 mixtures is extremely sensitive to small variations of inlet equivalence ratios and parts per million levels of H2.

  15. Using dynamic N-mixture models to test cavity limitation on northern flying squirrel demographic parameters using experimental nest box supplementation.

    PubMed

    Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica

    2014-06-01

    Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact design (BACI). Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture models and generalized linear models. We compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparent wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture-mark-recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. 
Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful to evaluate management effects on animal populations, especially for species that are difficult to detect in situations where individuals cannot be uniquely identified. They also allow investigating the effects of covariates at the site level, when low recapture rates would require restricting classic CMR analyses to a subset of sites with the most captures.
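
    The computational core of an N-mixture model, marginalizing the latent abundance N out of repeated-count data, can be sketched in a few lines; the counts and parameter values below are invented, and no covariates or dynamics (the Dail-Madsen extensions) are included:

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

def nmix_loglik(counts, lam, p, n_max=200):
    """Site log-likelihood under the binomial N-mixture model:
    y_t | N ~ Binomial(N, p) for each visit t, with N ~ Poisson(lam);
    the latent abundance N is marginalized out by direct summation."""
    Ns = np.arange(n_max + 1)
    log_prior = stats.poisson.logpmf(Ns, lam)
    log_obs = stats.binom.logpmf(np.asarray(counts)[:, None], Ns, p).sum(axis=0)
    return logsumexp(log_prior + log_obs)

# Hypothetical repeated counts from one trapping site over three visits
print(nmix_loglik([3, 5, 4], lam=8.0, p=0.5))
```

    Summing this quantity over sites and maximizing in (lam, p) yields abundance and detection estimates without individual marks, which is the property exploited in the study above.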

  16. Spurious Latent Classes in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Alexeev, Natalia; Templin, Jonathan; Cohen, Allan S.

    2011-01-01

    Mixture Rasch models have been used to study a number of psychometric issues such as goodness of fit, response strategy differences, strategy shifts, and multidimensionality. Although these models offer the potential for improving understanding of the latent variables being measured, under some conditions overextraction of latent classes may…

  17. Individual and binary toxicity of anatase and rutile nanoparticles towards Ceriodaphnia dubia.

    PubMed

    Iswarya, V; Bhuvaneshwari, M; Chandrasekaran, N; Mukherjee, Amitava

    2016-09-01

    Increasing usage of engineered nanoparticles, especially titanium dioxide (TiO2), in various commercial products has necessitated their toxicity evaluation and risk assessment, especially in the aquatic ecosystem. In the present study, a comprehensive toxicity assessment of anatase and rutile NPs (individually as well as in a binary mixture) was carried out in a freshwater matrix on Ceriodaphnia dubia under different irradiation conditions, viz., visible and UV-A. Anatase and rutile NPs produced LC50 values of about 37.04 and 48 mg/L, respectively, under visible irradiation. However, lower LC50 values of about 22.56 (anatase) and 23.76 (rutile) mg/L were noted under UV-A irradiation. A toxic unit (TU) approach was followed to determine the concentrations of binary mixtures of anatase and rutile. The binary mixture resulted in an antagonistic and an additive effect under visible and UV-A irradiation, respectively. Of the two modeling approaches used in the study, the Marking-Dawson model was found to be more appropriate than the Abbott model for the toxicity evaluation of binary mixtures. The agglomeration of NPs played a significant role in the induction of antagonistic and additive effects by the mixture, depending on the irradiation applied. TEM and zeta potential analysis confirmed the surface interactions between anatase and rutile NPs in the mixture. Maximum uptake was noticed at 0.25 total TU of the binary mixture under visible irradiation and at 1 TU of anatase NPs under UV-A irradiation. Individual NPs showed the highest uptake under UV-A rather than visible irradiation. In contrast, the binary mixture showed a difference in uptake pattern based on the type of irradiation. Copyright © 2016 Elsevier B.V. All rights reserved.
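
    The toxic-unit bookkeeping behind such mixture designs is simple arithmetic; the sketch below uses the visible-light LC50 values quoted above and an invented 0.5 + 0.5 TU mixture:

```python
# Toxic unit (TU) accounting for a binary NP mixture: TU_i = C_i / LC50_i.
# Under strict additivity a mixture at 1 total TU is predicted lethal to
# 50%; observed deviations below/above suggest synergism/antagonism.
def toxic_units(concs, lc50s):
    return [c / l for c, l in zip(concs, lc50s)]

lc50 = {"anatase": 37.04, "rutile": 48.0}        # mg/L, visible irradiation
mix = {"anatase": 18.52, "rutile": 24.0}         # an assumed 0.5 + 0.5 TU mix
tus = toxic_units(mix.values(), lc50.values())
print(sum(tus))   # 1.0 total TU
```

    The Marking-Dawson and Abbott approaches compared in the study are alternative ways of scoring observed mixture mortality against this additive expectation.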

  18. Rasch Mixture Models for DIF Detection: A Comparison of Old and New Score Specifications

    ERIC Educational Resources Information Center

    Frick, Hannah; Strobl, Carolin; Zeileis, Achim

    2015-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch…

  19. The effect of relative solubility on crystal purity

    NASA Astrophysics Data System (ADS)

    Givand, Jeffrey Christopher

    This study establishes the relationship between impurity incorporation in a crystal by lattice substitution and the solubility of that impurity in solution. The model system studied was L-isoleucine crystals contaminated by the isomorphic impurity L-leucine. Upon crystallization from aqueous solution by cooling, leucine is concentrated in the isoleucine unit cell through lattice substitution mechanisms. Attempts to reduce the degree of leucine incorporation via adjustments of the rate at which supersaturation is generated yielded marginal success. This work demonstrates that incorporation of leucine in the crystal can be considerably suppressed by reducing the solubility of product relative to the solubility of impurity. Changes to the relative solubility of the impurity were accomplished by the addition of various electrolytes and organic co-solvents to the aqueous amino acid solutions. The solubilities of the two amino acids were measured and compared to their solubilities in pure water. Changes in the ratio of pure-component solubilities were directly related to changes in crystal purity. This thermodynamic quantity of relative solubility was shown to be a key factor in determining impurity uptake by lattice substitution. In addition to the experimental observations, a fundamental thermodynamic link between relative solubility and crystal purity is established through this research. First, the amino acid solubility data as a function of temperature in all solvent mixtures were accurately correlated using a thermodynamic model. The parameters from this model were then adapted to a novel solid-solution thermodynamic model to express the crystal purity in terms of equilibrium solution impurity concentration. After the determination of one system specific parameter, the model is able to predict the crystal purity in a new solvent in which the pure-component solubilities are known. 
The ability of an electrolyte or co-solvent to improve crystal purity from a given level can now be determined based on existing solubility and purity measurements and solubilities of the product and impurity in the new solvent mixture.

  20. Myocardium Segmentation From DE MRI Using Multicomponent Gaussian Mixture Model and Coupled Level Set.

    PubMed

    Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu

    2017-11-01

    Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintain spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which have outperformed three variants of the classic segmentation methods. Conclusion: The results can provide a benchmark for the myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management for MI patients.
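The multicomponent-GMM step of such a framework can be sketched in isolation: given per-component intensity statistics, the E-step posterior probability of each tissue class is computed per pixel, and it is this posterior that a coupled level set would then regularize spatially. A minimal sketch; the intensity parameters below are purely illustrative and not taken from the paper.

```python
import numpy as np

def gmm_posteriors(x, means, stds, weights):
    """E-step of a 1-D Gaussian mixture: posterior probability of each
    component for every intensity value in x."""
    x = np.asarray(x, dtype=float)[:, None]          # shape (n, 1)
    means = np.asarray(means, dtype=float)[None, :]  # shape (1, k)
    stds = np.asarray(stds, dtype=float)[None, :]
    weights = np.asarray(weights, dtype=float)[None, :]
    # weighted Gaussian densities, one column per mixture component
    dens = weights * np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return dens / dens.sum(axis=1, keepdims=True)    # normalize: rows sum to 1

# illustrative two-component case: healthy myocardium (low intensity)
# versus enhanced infarct (high intensity)
post = gmm_posteriors([40.0, 200.0], means=[50.0, 180.0],
                      stds=[15.0, 25.0], weights=[0.7, 0.3])
```

In the paper's pipeline these posteriors are not thresholded directly; the coupled level set imposes spatial continuity on them before the final labelling.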

  1. Modeling biofiltration of VOC mixtures under steady-state conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baltzis, B.C.; Wojdyla, S.M.; Zarook, S.M.

    1997-06-01

    Treatment of air streams contaminated with binary volatile organic compound (VOC) mixtures in classical biofilters under steady-state conditions of operation was described with a general mathematical model. The model accounts for potential kinetic interactions among the pollutants, effects of oxygen availability on biodegradation, and biomass diversification in the filter bed. While the effects of oxygen were always taken into account, two distinct cases were considered for the experimental model validation. The first involves kinetic interactions, but no biomass differentiation, used for describing data from biofiltration of benzene/toluene mixtures. The second case assumes that each pollutant is treated by a different type of biomass. Each biomass type is assumed to form separate patches of biofilm on the solid packing material, thus kinetic interference does not occur. This model was used for describing biofiltration of ethanol/butanol mixtures. Experiments were performed with classical biofilters packed with mixtures of peat moss and perlite (2:3, volume:volume). The model equations were solved through the use of computer codes based on the fourth-order Runge-Kutta technique for the gas-phase mass balances and the method of orthogonal collocation for the concentration profiles in the biofilms. Good agreement between model predictions and experimental data was found in almost all cases. Oxygen was found to be extremely important in the case of polar VOCs (ethanol/butanol).

  2. A new predictive multi-zone model for HCCI engine combustion

    DOE PAGES

    Bissoli, Mattia; Frassoldati, Alessio; Cuoci, Alberto; ...

    2016-06-30

    Here, this work introduces a new predictive multi-zone model for the description of combustion in Homogeneous Charge Compression Ignition (HCCI) engines. The model exploits the existing OpenSMOKE++ computational suite to handle detailed kinetic mechanisms, providing reliable predictions of the in-cylinder auto-ignition processes. All the elements with a significant impact on combustion performance and emissions, such as turbulence, heat and mass exchanges, crevices, residual burned gases, and thermal and feed stratification, are taken into account. Compared to other computational approaches, this model improves the description of mixture stratification phenomena by coupling a wall heat transfer model derived from CFD applications with a proper turbulence model. Furthermore, the calibration of this multi-zone model requires only three parameters, which can be derived from a non-reactive CFD simulation: these adaptive variables depend only on the engine geometry and remain fixed across a wide range of operating conditions, allowing the prediction of auto-ignition, pressure traces and pollutants. This computational framework enables the use of detailed kinetic mechanisms, as well as Rate of Production Analysis (RoPA) and Sensitivity Analysis (SA), to investigate the complex chemistry involved in the auto-ignition and pollutant formation processes. In the final sections of the paper, these capabilities are demonstrated through comparison with experimental data.

  3. Proposed Guidance for Preparing and Reviewing Molten Salt Nonpower Reactor License Applications (NUREG-1537)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belles, Randy; Flanagan, George F.; Voth, Marcus

    Development of non-power molten salt reactor (MSR) test facilities is under consideration to support the analyses needed for development of a full-scale MSR. These non-power MSR test facilities will require review by the US Nuclear Regulatory Commission (NRC) staff. This report proposes chapter adaptations for NUREG-1537 in the form of interim staff guidance to address preparation and review of molten salt non-power reactor license applications. The proposed adaptations are based on a previous regulatory gap analysis of select chapters from NUREG-1537 for their applicability to non-power MSRs operating with a homogeneous fuel salt mixture.

  4. Adaptive computations of multispecies mixing between scramjet nozzle flows and hypersonic freestream

    NASA Technical Reports Server (NTRS)

    Baysa, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.; Pittman, James L.

    1989-01-01

    The objective of this paper is to compute the expansion of a supersonic flow through an internal-external nozzle and its viscous mixing with the hypersonic flow of air. The supersonic jet may be that of a multispecies gas other than air. Calculations are performed for one case where both flows are those of air, and another case where a mixture of freon-12 and argon is discharged supersonically to mix with the hypersonic airflow. Comparisons are made between these two cases with respect to gas compositions, and fixed versus flow-adaptive grids. All the computational results are compared successfully with the wind-tunnel tests results.

  5. Modeling the soil water retention curves of soil-gravel mixtures with regression method on the Loess Plateau of China.

    PubMed

    Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an

    2013-01-01

    Soil water retention parameters are critical to quantify flow and solute transport in vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel contents, and had a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish the water retention curved surface models of the power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable by using either the measured data of separate gravel size group or those of all the three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of the soil-gravel mixtures with two representative gravel contents or bulk densities. 
Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present.
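The revised formulas themselves are not reproduced in the abstract, but the structure they describe can be sketched: a classic van Genuchten water retention curve for the fine-earth fraction, scaled by a hypothetical linear function of gravel content (the abstract reports a linear relation between effective saturation and gravel content). All parameter values and the coefficient `b` below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Classic van Genuchten water retention curve for the fine-earth fraction.
    h: suction head (positive, in units consistent with 1/alpha)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)      # effective saturation, 0..1
    return theta_r + (theta_s - theta_r) * Se  # volumetric water content

def mixture_theta(h, gravel_fraction, b=1.0, **vg):
    """Hypothetical linear gravel-content correction: the water content of the
    soil-gravel mixture scales linearly with gravel fraction (b illustrative)."""
    return (1.0 - b * gravel_fraction) * van_genuchten_theta(h, **vg)

# illustrative clay-loam-like parameters
vg = dict(theta_r=0.05, theta_s=0.45, alpha=0.02, n=1.6)
```

A fitted version of such a model would estimate `b` (and possibly the van Genuchten parameters) by regression on measured WRCs at two representative gravel contents, as the abstract suggests.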

  6. Modeling the Soil Water Retention Curves of Soil-Gravel Mixtures with Regression Method on the Loess Plateau of China

    PubMed Central

    Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an

    2013-01-01

    Soil water retention parameters are critical to quantify flow and solute transport in vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel contents, and had a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish the water retention curved surface models of the power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable by using either the measured data of separate gravel size group or those of all the three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of the soil-gravel mixtures with two representative gravel contents or bulk densities. 
Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present. PMID:23555040

  7. Phenomenological Modeling and Laboratory Simulation of Long-Term Aging of Asphalt Mixtures

    NASA Astrophysics Data System (ADS)

    Elwardany, Michael Dawoud

    The accurate characterization of asphalt mixture properties as a function of pavement service life is becoming more important as more powerful pavement design and performance prediction methods are implemented. Oxidative aging is a major distress mechanism of asphalt pavements. Aging increases the stiffness and brittleness of the material, which leads to a high cracking potential. Thus, an improved understanding of the aging phenomenon and its effect on asphalt binder chemical and rheological properties will allow for the prediction of mixture properties as a function of pavement service life. Many researchers have conducted laboratory binder thin-film aging studies; however, this approach does not allow for studying the physicochemical effects of mineral fillers on age hardening rates in asphalt mixtures. Moreover, aging phenomenon in the field is governed by kinetics of binder oxidation, oxygen diffusion through mastic phase, and oxygen percolation throughout the air voids structure. In this study, laboratory aging trials were conducted on mixtures prepared using component materials of several field projects throughout the USA and Canada. Laboratory aged materials were compared against field cores sampled at different ages. Results suggested that oven aging of loose mixture at 95°C is the most promising laboratory long-term aging method. Additionally, an empirical model was developed in order to account for the effect of mineral fillers on age hardening rates in asphalt mixtures. Kinetics modeling was used to predict field aging levels throughout pavement thickness and to determine the required laboratory aging duration to match field aging. Kinetics model outputs are calibrated using measured data from the field to account for the effects of oxygen diffusion and percolation. Finally, the calibrated model was validated using independent set of field sections. 
This work is expected to provide a basis for improved asphalt mixture and pavement design procedures and, in turn, to save taxpayers' money.

  8. Kinetics of methane production from the codigestion of switchgrass and Spirulina platensis algae.

    PubMed

    El-Mashad, Hamed M

    2013-03-01

    Anaerobic batch digestion of four feedstocks was conducted at 35 and 50 °C: switchgrass; Spirulina platensis algae; and two mixtures of both switchgrass and S. platensis. Mixture 1 was composed of 87% switchgrass (based on volatile solids) and 13% S. platensis. Mixture 2 was composed of 67% switchgrass and 33% S. platensis. The kinetics of methane production from these feedstocks was studied using four first-order models: exponential, Gompertz, Fitzhugh, and Cone. The methane yields after 40 days of digestion at 35 °C were 355, 127, 143 and 198 ml/g VS, respectively for S. platensis, switchgrass, and Mixtures 1 and 2, while the yields at 50 °C were 358, 167, 198, and 236 ml/g VS, respectively. Based on Akaike's information criterion, the Cone model best described the experimental data. The Cone model was validated with experimental data collected from the digestion of a third mixture that was composed of 83% switchgrass and 17% S. platensis. Published by Elsevier Ltd.
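The Cone model that best described these data is commonly written as B(t) = B0 / (1 + (k·t)^(−n)), where B0 is the ultimate methane yield, k a rate constant and n a shape factor. A minimal sketch with hypothetical rate parameters (the abstract does not report the fitted values), alongside the simple first-order exponential model for comparison:

```python
import math

def cone(t, B0, k, n):
    """Cone model for cumulative methane yield (ml CH4 / g VS); t > 0 days
    (the term (k*t)**(-n) is undefined at t = 0)."""
    return B0 / (1.0 + (k * t) ** (-n))

def first_order(t, B0, k):
    """Simple first-order (exponential) model: B(t) = B0 * (1 - exp(-k t))."""
    return B0 * (1.0 - math.exp(-k * t))
```

Both curves rise monotonically toward the asymptote B0; model selection among such fits is what the Akaike information criterion in the abstract arbitrates.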

  9. Advanced stability indicating chemometric methods for quantitation of amlodipine and atorvastatin in their quinary mixture with acidic degradation products

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2016-02-01

    Two advanced, accurate and precise chemometric methods are developed for the simultaneous determination of amlodipine besylate (AML) and atorvastatin calcium (ATV) in the presence of their acidic degradation products in tablet dosage forms. The first method was Partial Least Squares (PLS-1) and the second was Artificial Neural Networks (ANN). PLS was compared to ANN models with and without variable selection procedure (genetic algorithm (GA)). For proper analysis, a 5-factor 5-level experimental design was established resulting in 25 mixtures containing different ratios of the interfering species. Fifteen mixtures were used as calibration set and the other ten mixtures were used as validation set to validate the prediction ability of the suggested models. The proposed methods were successfully applied to the analysis of pharmaceutical tablets containing AML and ATV. The methods indicated the ability of the mentioned models to solve the highly overlapped spectra of the quinary mixture, yet using inexpensive and easy to handle instruments like the UV-VIS spectrophotometer.

  10. Thermal conductivity of disperse insulation materials and their mixtures

    NASA Astrophysics Data System (ADS)

    Geža, V.; Jakovičs, A.; Gendelis, S.; Usiļonoks, I.; Timofejevs, J.

    2017-10-01

    Development of new, more efficient thermal insulation materials is key to reducing heat losses and greenhouse gas emissions. Two innovative materials developed at Thermeko LLC are Izoprok and Izopearl. This research is devoted to an experimental study of the thermal insulation properties of both materials as well as of their mixture. Results show that a mixture of 40% Izoprok and 60% Izopearl has lower thermal conductivity than either pure material. The dependence of thermal conductivity on temperature is also measured. A novel modelling approach is used to model the spatial distribution of the disperse insulation material, and a computational fluid dynamics approach is used to estimate the role of different heat transfer phenomena in such a porous mixture. Modelling results show that thermal convection plays a small role in heat transfer despite the large fraction of air within the material pores.

  11. A comparative study of mixture cure models with covariate

    NASA Astrophysics Data System (ADS)

    Leng, Oh Yit; Khalid, Zarina Mohd

    2017-05-01

    In survival analysis, the survival time is assumed to follow a non-negative distribution, such as the exponential, Weibull, or log-normal distribution. In some cases, the survival time is influenced by observed factors, and omitting these factors may cause inaccurate estimation of the survival function. Therefore, a survival model which incorporates the influence of observed factors is more appropriate in such cases; these observed factors are included in the survival model as covariates. Besides that, there are cases where a group of individuals is cured, that is, never experiences the event of interest. Ignoring this cure fraction may lead to overestimation of the survival function. Thus, a mixture cure model is more suitable for modelling survival data in the presence of a cure fraction. In this study, three mixture cure survival models are used to analyse survival data with a covariate and a cure fraction. The first model includes the covariate in the parameterization of the survival function of susceptible individuals, the second model allows the cure fraction to depend on the covariate, and the third model incorporates the covariate in both the cure fraction and the survival function of susceptible individuals. This study aims to compare the performance of these models via a simulation approach: survival data with varying sample sizes and cure fractions are simulated, with the survival time assumed to follow the Weibull distribution, and the simulated data are then modelled using the three mixture cure survival models. The results show that the three mixture cure models are more appropriate for modelling survival data in the presence of a cure fraction and an observed factor.
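The second model described above (covariate in the cure fraction only) has the standard mixture cure form S(t|x) = π(x) + (1 − π(x))·Su(t): cured individuals never experience the event, susceptible ones follow an ordinary survival function, here Weibull as in the simulation design. A minimal sketch; the logistic link for π(x) and all coefficient values are illustrative assumptions.

```python
import math

def cure_fraction(x, b0=-1.0, b1=0.8):
    """Cure probability as a logistic function of a covariate x
    (illustrative coefficients)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

def weibull_survival(t, scale=5.0, shape=1.5):
    """Survival function of susceptible individuals (Weibull)."""
    return math.exp(-((t / scale) ** shape))

def mixture_cure_survival(t, x):
    """Population survival: cured individuals contribute a plateau pi(x),
    susceptible individuals follow the Weibull survival function."""
    pi = cure_fraction(x)
    return pi + (1.0 - pi) * weibull_survival(t)
```

The defining feature is the plateau: as t grows, S(t|x) levels off at π(x) instead of decaying to zero, which is exactly what an ordinary survival model overestimates away.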

  12. Optimal Design of Material and Process Parameters in Powder Injection Molding

    NASA Astrophysics Data System (ADS)

    Ayad, G.; Barriere, T.; Gelin, J. C.; Song, J.; Liu, B.

    2007-04-01

    The paper is concerned with optimization and parametric identification for the different stages of the Powder Injection Molding process, which consists first of injecting a powder mixture with a polymer binder and then of sintering the resulting powder part by solid-state diffusion. The first part describes an original methodology to optimize the process and geometry parameters in the injection stage, based on the combination of design of experiments and an adaptive Response Surface Modeling. The second part of the paper describes the identification strategy proposed for the sintering stage, using the identification of sintering parameters from dilatometric curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization of material and process parameters for manufacturing a ceramic femoral implant, and it is demonstrated that they give satisfactory results.

  13. Sonic Thermometer for High-Altitude Balloons

    NASA Technical Reports Server (NTRS)

    Bognar, John

    2012-01-01

    The sonic thermometer is a specialized application of well-known sonic anemometer technology. Adaptations have been made to the circuit, including the addition of supporting sensors, which enable its use in the high-altitude environment and in non-air gas mixtures. There is a need to measure gas temperatures inside and outside of superpressure balloons that are flown at high altitudes. These measurements will allow the performance of the balloon to be modeled more accurately, leading to better flight performance. Small thermistors (solid-state temperature sensors) have been used for this general purpose, and for temperature measurements on radiosondes. A disadvantage of thermistors and other physical (as distinct from sonic) temperature sensors is that they are subject to solar heating errors when exposed to the Sun, which leads to issues with their use in a very high-altitude environment.

  14. A BGK model for reactive mixtures of polyatomic gases with continuous internal energy

    NASA Astrophysics Data System (ADS)

    Bisi, M.; Monaco, R.; Soares, A. J.

    2018-03-01

    In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.

  15. Metal-Polycyclic Aromatic Hydrocarbon Mixture Toxicity in Hyalella azteca. 1. Response Surfaces and Isoboles To Measure Non-additive Mixture Toxicity and Ecological Risk.

    PubMed

    Gauthier, Patrick T; Norwood, Warren P; Prepas, Ellie E; Pyle, Greg G

    2015-10-06

    Mixtures of metals and polycyclic aromatic hydrocarbons (PAHs) occur ubiquitously in aquatic environments, yet relatively little is known regarding their potential to produce non-additive toxicity (i.e., antagonism or potentiation). A review of the lethality of metal-PAH mixtures in aquatic biota revealed that more-than-additive lethality is as common as strictly additive effects. Approaches to ecological risk assessment do not consider non-additive toxicity of metal-PAH mixtures. Forty-eight-hour water-only binary mixture toxicity experiments were conducted to determine the additive toxic nature of mixtures of Cu, Cd, V, or Ni with phenanthrene (PHE) or phenanthrenequinone (PHQ) using the aquatic amphipod Hyalella azteca. In cases where more-than-additive toxicity was observed, we calculated the possible mortality rates at Canada's environmental water quality guideline concentrations. We used a three-dimensional response surface isobole model-based approach to compare the observed co-toxicity in juvenile amphipods to predicted outcomes based on concentration addition or effects addition mixtures models. More-than-additive lethality was observed for all Cu-PHE, Cu-PHQ, and several Cd-PHE, Cd-PHQ, and Ni-PHE mixtures. Our analysis predicts Cu-PHE, Cu-PHQ, Cd-PHE, and Cd-PHQ mixtures at the Canadian Water Quality Guideline concentrations would produce 7.5%, 3.7%, 4.4% and 1.4% mortality, respectively.
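Under the concentration-addition reference model used in such isobole analyses, components are assumed to act as dilutions of one another, so the predicted mixture EC50 follows from summing toxic units. A minimal sketch of that reference calculation (the concentrations and EC50 values are hypothetical, not the paper's data):

```python
def ca_mixture_ec50(fractions, ec50s):
    """Concentration-addition prediction of the mixture EC50, given each
    chemical's proportion in the mixture and its single-chemical EC50."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def toxic_units(concentrations, ec50s):
    """Sum of toxic units; under concentration addition, a sum > 1
    implies a greater-than-50% effect."""
    return sum(c / e for c, e in zip(concentrations, ec50s))
```

More-than-additive (potentiated) mixtures, such as the Cu-PHE combinations reported above, are exactly those whose observed toxicity exceeds what this reference model predicts.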

  16. Olfactory cortical adaptation facilitates detection of odors against background.

    PubMed

    Kadohisa, Mikiko; Wilson, Donald A

    2006-03-01

    Detection and discrimination of odors generally, if not always, occurs against an odorous background. On any given inhalation, olfactory receptor neurons will be activated by features of both the target odorant and features of background stimuli. To identify a target odorant against a background therefore, the olfactory system must be capable of grouping a subset of features into an odor object distinct from the background. Our previous work has suggested that rapid homosynaptic depression of afferents to the anterior piriform cortex (aPCX) contributes to both cortical odor adaptation to prolonged stimulation and habituation of simple odor-evoked behaviors. We hypothesize here that this process may also contribute to figure-ground separation of a target odorant from background stimulation. Single-unit recordings were made from both mitral/tufted cells and aPCX neurons in urethan-anesthetized rats and mice. Single-unit responses to odorant stimuli and their binary mixtures were determined. One of the odorants was randomly selected as the background and presented for 50 s. Forty seconds after the onset of the background stimulus, the second target odorant was presented, producing a binary mixture. The results suggest that mitral/tufted cells continue to respond to the background odorant and, when the target odorant is presented, had response magnitudes similar to that evoked by the binary mixture. In contrast, aPCX neurons filter out the background stimulus while maintaining responses to the target stimulus. Thus the aPCX acts as a filter driven most strongly by changing stimuli, providing a potential mechanism for olfactory figure-ground separation and selective reading of olfactory bulb output.

  17. The simultaneous mass and energy evaporation (SM2E) model.

    PubMed

    Choudhary, Rehan; Klauda, Jeffery B

    2016-01-01

    In this article, the Simultaneous Mass and Energy Evaporation (SM2E) model is presented. The SM2E model is based on theoretical models for mass and energy transfer. These theoretical models systematically under- or over-predicted at various flow conditions: laminar, transition, and turbulent. They were harmonized with experimental measurements to eliminate systematic under- or over-prediction; a total of 113 measured evaporation rates were used. The SM2E model can be used to estimate evaporation rates for pure liquids as well as liquid mixtures at laminar, transition, and turbulent flow conditions. However, due to limited availability of evaporation data, the model has so far only been tested against data for pure liquids and binary mixtures. The model can take evaporative cooling into account, and when the temperature of the evaporating liquid or liquid mixture is known (e.g., isothermal evaporation), the SM2E model reduces to a mass transfer-only model.

  18. PACE: Probabilistic Assessment for Contributor Estimation- A machine learning-based assessment of the number of contributors in DNA mixtures.

    PubMed

    Marciano, Michael A; Adelman, Jonathan D

    2017-03-01

    The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state of the art model versus seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. 
Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
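The abstract does not disclose PACE's feature set or its six candidate algorithms, so the following is only an illustration of the underlying idea: per-locus allele counts tend to grow with the number of contributors, which makes contributor number learnable from labelled profiles. A toy nearest-centroid classifier on fully synthetic features; every modelling choice here (binomial drop-out, 15 loci, centroid classification) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_features(n_contributors, n_loci=15):
    """Toy feature vector: observed distinct-allele count per locus.
    Each of the 2*n contributed alleles is observed with probability 0.8
    (crude drop-out; allele sharing between contributors is ignored)."""
    max_alleles = 2 * n_contributors
    return rng.binomial(max_alleles, 0.8, size=n_loci).astype(float)

# training set: labelled synthetic mixtures of 1..4 contributors
X, y = [], []
for n in range(1, 5):
    for _ in range(200):
        X.append(simulate_features(n))
        y.append(n)
X, y = np.array(X), np.array(y)

# one centroid (mean feature vector) per contributor count
centroids = {n: X[y == n].mean(axis=0) for n in range(1, 5)}

def predict(features):
    """Nearest-centroid guess at the number of contributors."""
    return min(centroids, key=lambda n: np.linalg.norm(features - centroids[n]))
```

Real contributor-number estimation is far harder than this toy setup suggests, precisely because of allele sharing, stutter, and drop-out; that difficulty is what motivates the machine learning comparison reported above.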

  19. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    NASA Astrophysics Data System (ADS)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data taking the respective slope as estimation for the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • Number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. 
The analyst influences only a few control parameters of the algorithm, i.e. the maximum number of ratios and the minimum relative group size of data points belonging to each ratio. Computation of the models can be done with statistical software. In this study, Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended for a more objective and straightforward calculation of isotope ratios in geochemistry than is current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3) [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007. (doi:10.1016/j.csda.2006.08.014)
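    The EM loop described above (fit several regression lines to latent subgroups, take each slope as an isotope ratio) can be illustrated with a toy mixture-of-regressions in Python. This is a minimal sketch of the idea, not the flexmix implementation: the fixed residual sigma, the quantile-based initialization, and all function names are illustrative assumptions.

```python
import math
import random

def em_slope_mixture(xs, ys, k=2, iters=100, sigma=0.05):
    """EM for a k-component mixture of regression lines through the origin
    (y ~ b_j * x); each slope b_j plays the role of one isotope ratio.
    Toy illustration only; sigma is assumed known and fixed."""
    # Initialise slopes from quantiles of the observed y/x ratios.
    ratios = sorted(y / x for x, y in zip(xs, ys) if x != 0)
    b = [ratios[int((j + 0.5) / k * len(ratios))] for j in range(k)]
    pi = [1.0 / k] * k                      # mixing weights (relative group sizes)
    for _ in range(iters):
        # E-step: responsibility of each component for each point,
        # assuming Gaussian residuals with standard deviation sigma.
        resp = []
        for x, y in zip(xs, ys):
            w = [pi[j] * math.exp(-((y - b[j] * x) ** 2) / (2 * sigma ** 2))
                 for j in range(k)]
            s = sum(w) or 1e-300
            resp.append([wj / s for wj in w])
        # M-step: weighted least squares through the origin, per component.
        for j in range(k):
            den = sum(r[j] * x * x for r, x in zip(resp, xs))
            if den > 0:
                b[j] = sum(r[j] * x * y for r, x, y in zip(resp, xs, ys)) / den
            pi[j] = sum(r[j] for r in resp) / len(xs)
    order = sorted(range(k), key=lambda j: b[j])
    return [b[j] for j in order], [pi[j] for j in order]
```

    On simulated data from two ratio groups, the recovered slopes correspond to the two isotope ratios and the weights to the relative group sizes.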

  20. Analyzing gene expression time-courses based on multi-resolution shape mixture model.

    PubMed

    Li, Ying; He, Ye; Zhang, Yu

    2016-11-01

    Biological processes are dynamic molecular processes that unfold over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biological processes and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to discover significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which captures patterns of change in gene expression over time at different resolutions. The proposed algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm on yeast time-course gene expression profiles against several popular clustering methods for gene expression profiles. The gene groups identified by the different methods were evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed; it provides a new perspective and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
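    The core idea of multi-resolution shape features from a wavelet decomposition can be sketched with a plain Haar transform: the energies of the detail coefficients at successive levels summarize how a profile fluctuates at different time scales. This is a simplified stand-in for the paper's multi-resolution fractal features; the function name and the mean-energy summary are assumptions.

```python
def haar_multiresolution_features(profile, levels=3):
    """Detail-energy shape features of a time-course at several resolutions,
    computed with a plain Haar wavelet decomposition. profile should have
    at least 2**levels points."""
    a = list(profile)
    feats = []
    for _ in range(levels):
        # One Haar step: pairwise averages (approximation) and
        # pairwise half-differences (detail).
        approx = [(a[i] + a[i + 1]) / 2.0 for i in range(0, len(a) - 1, 2)]
        detail = [(a[i] - a[i + 1]) / 2.0 for i in range(0, len(a) - 1, 2)]
        # Mean detail energy at this level = amount of fluctuation
        # at this time scale.
        feats.append(sum(d * d for d in detail) / max(len(detail), 1))
        a = approx
    return feats
```

    A flat profile yields all-zero features, while a rapidly oscillating profile concentrates its energy in the finest level; such feature vectors could then feed any mixture-model clustering.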

  1. Native perennial forb tolerance to rates and mixtures of postemergence herbicides, 2009

    Treesearch

    Clinton C. Shock; Erik Feibert; Nancy Shaw

    2010-01-01

    Native forb seed is needed to restore rangelands of the Intermountain West. Commercial seed production is necessary to provide the quantity of seed needed for restoration efforts. A major limitation to economically viable commercial production of native forb seed is weed competition. Weeds are adapted to growing in disturbed soil, and native forbs are not competitive...

  2. Discovering Electronic Effects of Substituents in Nitrations of Benzene Derivatives Using GC-MS Analysis

    ERIC Educational Resources Information Center

    Clennan, Malgorzata M.; Clennan, Edward L.

    2007-01-01

    The nitration of six benzene derivatives having a range of substituents that differ in electronic effects was followed by GC-MS analysis of the crude reaction mixtures and adapted for the second-year organic laboratory. Students pool their results and identify the products by analyzing the mass spectral data of the isomers and by comparing them…

  3. An analysis of lethal and sublethal interactions among type I and type II pyrethroid pesticide mixtures using standard Hyalella azteca water column toxicity tests.

    PubMed

    Hoffmann, Krista Callinan; Deanovic, Linda; Werner, Inge; Stillway, Marie; Fong, Stephanie; Teh, Swee

    2016-10-01

    A novel 2-tiered analytical approach was used to characterize and quantify interactions between type I and type II pyrethroids in Hyalella azteca using standardized water column toxicity tests. Bifenthrin, permethrin, cyfluthrin, and lambda-cyhalothrin were tested in all possible binary combinations across 6 experiments. All mixtures were analyzed for 4-d lethality, and 2 of the 6 mixtures (permethrin-bifenthrin and permethrin-cyfluthrin) were tested for subchronic 10-d lethality and sublethal effects on swimming motility and growth. Mixtures were initially analyzed for interactions using regression analyses, and subsequently compared with the additive models of concentration addition and independent action to further characterize mixture responses. Negative interactions (antagonistic) were significant in 2 of the 6 mixtures tested, including cyfluthrin-bifenthrin and cyfluthrin-permethrin, but only on the acute 4-d lethality endpoint. In both cases mixture responses fell between the additive models of concentration addition and independent action. All other mixtures were additive across 4-d lethality, and bifenthrin-permethrin and cyfluthrin-permethrin were also additive in terms of subchronic 10-d lethality and sublethal responses. Environ Toxicol Chem 2016;35:2542-2549. © 2016 SETAC.

  4. Heat transfer during condensation of steam from steam-gas mixtures in the passive safety systems of nuclear power plants

    NASA Astrophysics Data System (ADS)

    Portnova, N. M.; Smirnov, Yu B.

    2017-11-01

    A theoretical model for calculating heat transfer during condensation of multicomponent vapor-gas mixtures on vertical surfaces, based on film theory and the heat and mass transfer analogy, is proposed. Calculations were performed for the conditions implemented in experimental studies of heat transfer during condensation of steam-gas mixtures in the passive safety systems of PWR-type reactors of different designs. Calculated values of heat transfer coefficients were obtained for condensation of steam-air, steam-air-helium and steam-air-hydrogen mixtures at pressures of 0.2 to 0.6 MPa and of a steam-nitrogen mixture at pressures of 0.4 to 2.6 MPa. The composition of the mixtures and the vapor-to-surface temperature difference were varied within wide limits. Tube length ranged from 0.65 to 9.79 m. The condensation of all steam-gas mixtures took place in a laminar-wave flow mode of the condensate film with turbulent free convection in the diffusion boundary layer. The heat transfer coefficients calculated using the proposed model are in good agreement with the considered experimental data for both the binary and ternary mixtures.

  5. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    PubMed Central

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-01-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127

  6. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate.

    PubMed

    Pradines, Joël R; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-26

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  7. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    NASA Astrophysics Data System (ADS)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  8. Plasticity and genetic adaptation mediate amphibian and reptile responses to climate change.

    PubMed

    Urban, Mark C; Richardson, Jonathan L; Freidenfelds, Nicole A

    2014-01-01

    Phenotypic plasticity and genetic adaptation are predicted to mitigate some of the negative biotic consequences of climate change. Here, we evaluate evidence for plastic and evolutionary responses to climate variation in amphibians and reptiles via a literature review and meta-analysis. We included studies that document phenotypic changes either through time or across space. Plasticity had a clear and ubiquitous role in promoting phenotypic changes in response to climate variation. For adaptive evolution, we found no direct evidence for evolution of amphibians or reptiles in response to climate change over time. However, we found many studies that documented adaptive responses to climate along spatial gradients. Plasticity provided a mixture of adaptive and maladaptive responses to climate change, highlighting that plasticity frequently, but not always, could ameliorate the effects of climate change. Based on our review, we advocate for more experiments that survey genetic changes through time in response to climate change. Overall, plastic and genetic variation in amphibians and reptiles could buffer some of the formidable threats from climate change, but large uncertainties remain owing to limited data.

  9. Plasticity and genetic adaptation mediate amphibian and reptile responses to climate change

    PubMed Central

    Urban, Mark C; Richardson, Jonathan L; Freidenfelds, Nicole A

    2014-01-01

    Phenotypic plasticity and genetic adaptation are predicted to mitigate some of the negative biotic consequences of climate change. Here, we evaluate evidence for plastic and evolutionary responses to climate variation in amphibians and reptiles via a literature review and meta-analysis. We included studies that document phenotypic changes either through time or across space. Plasticity had a clear and ubiquitous role in promoting phenotypic changes in response to climate variation. For adaptive evolution, we found no direct evidence for evolution of amphibians or reptiles in response to climate change over time. However, we found many studies that documented adaptive responses to climate along spatial gradients. Plasticity provided a mixture of adaptive and maladaptive responses to climate change, highlighting that plasticity frequently, but not always, could ameliorate the effects of climate change. Based on our review, we advocate for more experiments that survey genetic changes through time in response to climate change. Overall, plastic and genetic variation in amphibians and reptiles could buffer some of the formidable threats from climate change, but large uncertainties remain owing to limited data. PMID:24454550

  10. High-Resolution Numerical Simulation and Analysis of Mach Reflection Structures in Detonation Waves in Low-Pressure H 2 –O 2 –Ar Mixtures: A Summary of Results Obtained with the Adaptive Mesh Refinement Framework AMROC

    DOE PAGES

    Deiterding, Ralf

    2011-01-01

    Numerical simulation can be key to the understanding of the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends, are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.

  11. Nature and prevalence of non-additive toxic effects in industrially relevant mixtures of organic chemicals.

    PubMed

    Parvez, Shahid; Venkataraman, Chandra; Mukherji, Suparna

    2009-06-01

    The concentration addition (CA) and independent action (IA) models are widely used for predicting mixture toxicity based on the mixture's composition and the dose-response profiles of the individual components. However, predictions based on these models may be inaccurate due to interaction among mixture components. In this work, the nature and prevalence of non-additive effects were explored for binary, ternary and quaternary mixtures composed of hydrophobic organic compounds (HOCs). The toxicity of each individual component and mixture was determined using the Vibrio fischeri bioluminescence inhibition assay. For each combination of chemicals specified by the 2^n factorial design, the percent deviation of the predicted toxic effect from the measured value was used to characterize mixtures as synergistic (positive deviation) or antagonistic (negative deviation). An arbitrary classification scheme was proposed based on the magnitude of the deviation d: additive (d ≤ 10%, class I), and moderately (10% < d ≤ 30%, class II), highly (30% < d ≤ 50%, class III) or very highly (d > 50%, class IV) antagonistic/synergistic. Naphthalene, n-butanol, o-xylene, catechol and p-cresol led to synergism in mixtures, while 1,2,4-trimethylbenzene and 1,3-dimethylnaphthalene contributed to antagonism. Most of the mixtures depicted additive or antagonistic effects. Synergism was prominent in some of the mixtures, such as pulp and paper, textile dyes, and a mixture composed of polynuclear aromatic hydrocarbons. The organic chemical industry mixture depicted the highest abundance of antagonism and the least synergism. Mixture toxicity was found to depend on partition coefficient, molecular connectivity index and the relative concentration of the components.
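    The two additive reference models named above have simple closed forms when each component follows a Hill dose-response curve with equal slopes. The sketch below is the generic textbook formulation, not the authors' analysis code; the Hill-curve assumption and all function names are mine.

```python
def hill_effect(c, ec50, slope=1.0):
    """Fractional effect (0 to 1) of one chemical at concentration c (Hill curve)."""
    return c ** slope / (c ** slope + ec50 ** slope)

def ca_mixture_ec(fractions, ec50s, slope=1.0, effect=0.5):
    """Concentration addition (CA): total mixture concentration predicted to
    produce `effect`, given mixture fractions p_i and single-chemical EC50s
    (equal Hill slopes assumed for simplicity)."""
    # For a Hill curve, ECx = EC50 * (x / (1 - x)) ** (1 / slope).
    ecx = [e * (effect / (1.0 - effect)) ** (1.0 / slope) for e in ec50s]
    return 1.0 / sum(p / e for p, e in zip(fractions, ecx))

def ia_mixture_effect(concs, ec50s, slope=1.0):
    """Independent action (IA): mixture effect from the individual effects,
    E_mix = 1 - prod(1 - E_i)."""
    unaffected = 1.0
    for c, e in zip(concs, ec50s):
        unaffected *= 1.0 - hill_effect(c, e, slope)
    return 1.0 - unaffected
```

    Comparing a measured mixture response against these two predictions is what flags a deviation as synergistic or antagonistic.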

  12. Gaseous emissions from the combustion of a waste mixture containing a high concentration of N2O.

    PubMed

    Dong, Changqing; Yang, Yongping; Zhang, Junjiao; Lu, Xuefeng

    2009-01-01

    This paper is focused on reducing the emissions from the combustion of a waste mixture containing a high concentration of N2O. A rate model and an equilibrium model were used to predict gaseous emissions from the combustion of the mixture. The influences of temperature and methane were considered, and the experimental research was carried out in a tubular reactor and a pilot combustion furnace. The results showed that for this waste mixture, the combustion temperature should be in the range of 950-1100 °C and the gas residence time should be 2 s or longer to reduce emissions.

  13. Mixtures of charged colloid and neutral polymer: Influence of electrostatic interactions on demixing and interfacial tension

    NASA Astrophysics Data System (ADS)

    Denton, Alan R.; Schmidt, Matthias

    2005-06-01

    The equilibrium phase behavior of a binary mixture of charged colloids and neutral, nonadsorbing polymers is studied within free-volume theory. A model mixture of charged hard-sphere macroions and ideal, coarse-grained, effective-sphere polymers is mapped first onto a binary hard-sphere mixture with nonadditive diameters and then onto an effective Asakura-Oosawa model [S. Asakura and F. Oosawa, J. Chem. Phys. 22, 1255 (1954)]. The effective model is defined by a single dimensionless parameter—the ratio of the polymer diameter to the effective colloid diameter. For high salt-to-counterion concentration ratios, a free-volume approximation for the free energy is used to compute the fluid phase diagram, which describes demixing into colloid-rich (liquid) and colloid-poor (vapor) phases. Increasing the range of electrostatic interactions shifts the demixing binodal toward higher polymer concentration, stabilizing the mixture. The enhanced stability is attributed to a weakening of polymer depletion-induced attraction between electrostatically repelling macroions. Comparison with predictions of density-functional theory reveals a corresponding increase in the liquid-vapor interfacial tension. The predicted trends in phase stability are consistent with observed behavior of protein-polysaccharide mixtures in food colloids.

  14. Four common pesticides, their mixtures and a formulation solvent in the hive environment have high oral toxicity to honey bee larvae.

    PubMed

    Zhu, Wanyi; Schmehl, Daniel R; Mullin, Christopher A; Frazier, James L

    2014-01-01

    Recently, the widespread distribution of pesticides detected in the hive has raised serious concerns about the effects of pesticide exposure on honey bee (Apis mellifera L.) health. A larval rearing method was adapted to assess the chronic oral toxicity to honey bee larvae of the four pesticides most commonly detected in pollen and wax--fluvalinate, coumaphos, chlorothalonil, and chlorpyrifos--tested alone and in all combinations. All pesticides at hive-residue levels triggered a more than twofold increase in larval mortality compared to untreated larvae, with a strong increase after 3 days of exposure. Among these four pesticides, honey bee larvae were most sensitive to chlorothalonil, in contrast to adults. Synergistic toxicity was observed in the binary mixture of chlorothalonil with fluvalinate at concentrations of 34 mg/L and 3 mg/L, respectively; whereas, when diluted 10-fold, the interaction switched to antagonism. Chlorothalonil at 34 mg/L was also found to synergize the miticide coumaphos at 8 mg/L. The addition of coumaphos significantly reduced the toxicity of the fluvalinate and chlorothalonil mixture, the only significant non-additive effect in all tested ternary mixtures. We also tested the common 'inert' ingredient N-methyl-2-pyrrolidone at seven concentrations, and documented its high toxicity to larval bees. We have shown that chronic dietary exposure to a fungicide, pesticide mixtures, and a formulation solvent have the potential to impact honey bee populations, and this warrants further investigation. We suggest that pesticide mixtures in pollen be evaluated by adding their toxicities together, until complete data on interactions can be accumulated.

  15. Assessment of combined antiandrogenic effects of binary parabens mixtures in a yeast-based reporter assay.

    PubMed

    Ma, Dehua; Chen, Lujun; Zhu, Xiaobiao; Li, Feifei; Liu, Cong; Liu, Rui

    2014-05-01

    To date, toxicological studies of endocrine disrupting chemicals (EDCs) have typically focused on single chemical exposures and associated effects. However, exposure to EDCs mixtures in the environment is common. Antiandrogens represent a group of EDCs, which draw increasing attention due to their resultant demasculinization and sexual disruption of aquatic organisms. Although there are a number of in vivo and in vitro studies investigating the combined effects of antiandrogen mixtures, these studies are mainly on selected model compounds such as flutamide, procymidone, and vinclozolin. The aim of the present study is to investigate the combined antiandrogenic effects of parabens, which are widely used antiandrogens in industrial and domestic commodities. A yeast-based human androgen receptor (hAR) assay (YAS) was applied to assess the antiandrogenic activities of n-propylparaben (nPrP), iso-propylparaben (iPrP), methylparaben (MeP), and 4-n-pentylphenol (PeP), as well as the binary mixtures of nPrP with each of the other three antiandrogens. All of the four compounds could exhibit antiandrogenic activity via the hAR. A linear interaction model was applied to quantitatively analyze the interaction between nPrP and each of the other three antiandrogens. The isoboles method was modified to show the variation of combined effects as the concentrations of mixed antiandrogens were changed. Graphs were constructed to show isoeffective curves of three binary mixtures based on the fitted linear interaction model and to evaluate the interaction of the mixed antiandrogens (synergism or antagonism). The combined effect of equimolar combinations of the three mixtures was also considered with the nonlinear isoboles method. The main effect parameters and interaction effect parameters in the linear interaction models of the three mixtures were different from zero. 
The results showed that any two antiandrogens in their binary mixtures tended to exert equal antiandrogenic activity in the linear concentration ranges. The antiandrogenicity of the binary mixture and the concentration of nPrP were fitted to a sigmoidal model if the concentrations of the other antiandrogens (iPrP, MeP, and PeP) in the mixture were lower than the AR saturation concentrations. Some concave isoboles above the additivity line appeared in all three mixtures. There were some synergistic effects of the binary mixture of nPrP and MeP at low concentrations in the linear concentration ranges. Interestingly, when the antiandrogen concentrations approached saturation, the interactions between chemicals were antagonistic for all three mixtures tested. When the toxicity of the three mixtures was assessed using nonlinear isoboles, only antagonism was observed for equimolar combinations of nPrP and iPrP as the concentrations were increased from the no-observed-effect concentration (NOEC) to the 80% effective concentration. In addition, the interactions changed from synergistic to antagonistic as effective concentrations were increased in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP. The combined effects of the three binary antiandrogen mixtures in the linear ranges were successfully evaluated by curve fitting and isoboles. The combined effects of specific binary mixtures varied depending on the concentrations of the chemicals in the mixtures. At low concentrations in the linear concentration ranges, there was synergistic interaction in the binary mixture of nPrP and MeP. The interaction tended to be antagonistic as the antiandrogens approached saturation concentrations in mixtures of nPrP with each of the other three antiandrogens. The synergistic interaction was also found in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP, at low concentrations with the alternative method of nonlinear isoboles.
The mixture activities of binary antiandrogens had a tendency towards antagonism at high concentrations and synergism at low concentrations.
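    The isobole reading of synergism and antagonism used above can be reduced to the classic combination (interaction) index at an iso-effect level. This is a generic sketch of that diagnostic, not the authors' fitted linear interaction model; the index form, the tolerance, and the names are illustrative.

```python
def interaction_index(c1, c2, ecx1, ecx2):
    """Combination index at an iso-effect level: each component's concentration
    in the mixture divided by the concentration at which it alone produces the
    same effect. A point below the isobologram's additivity line gives an index
    < 1 (synergism); above it gives > 1 (antagonism)."""
    return c1 / ecx1 + c2 / ecx2

def classify(ci, tol=0.1):
    """Label an interaction, with a tolerance band around additivity."""
    if ci < 1.0 - tol:
        return "synergism"
    if ci > 1.0 + tol:
        return "antagonism"
    return "additivity"
```

    For example, if each chemical alone needs 4 units for a given antiandrogenic effect but the mixture reaches it at 1 + 1 units, the index is 0.5 and the pair is synergistic at that effect level.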

  16. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random-effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities differ considerably, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to modeling between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Teaching learning based optimization-functional link artificial neural network filter for mixed noise reduction from magnetic resonance image.

    PubMed

    Kumar, M; Mishra, S K

    2017-01-01

    Clinical magnetic resonance imaging (MRI) images may be corrupted by a mixture of different types of noise, such as Rician, Gaussian, and impulse noise. Most of the available filtering algorithms are noise specific, linear, and non-adaptive. There is a need for a nonlinear adaptive filter that adapts itself to the requirement and can be effectively applied to suppress mixed noise from different MRI images. In view of this, a novel nonlinear neural-network-based adaptive filter, i.e. a functional link artificial neural network (FLANN) whose weights are trained by a recently developed derivative-free meta-heuristic technique, i.e. teaching-learning-based optimization (TLBO), is proposed and implemented. The performance of the proposed filter is compared with five other adaptive filters and analyzed by considering quantitative metrics and evaluating a nonparametric statistical test. The convergence curve and computational time are also included for investigating the efficiency of the proposed as well as the competing filters. In the simulations, the proposed filter outperforms the other adaptive filters. The proposed filter can also be hybridized with other evolutionary techniques and utilized for removing different noise and artifacts from other medical images.
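    A FLANN is just a single linear layer acting on a fixed nonlinear (here trigonometric) expansion of the input, which is what makes it cheap enough for adaptive filtering. The sketch below trains the weights with plain LMS instead of TLBO purely to keep the example short; the expansion order, step size, and function names are assumptions, not the paper's settings.

```python
import math

def flann_expand(x, order=2):
    """Trigonometric functional expansion used in FLANN filters: maps a scalar
    input to the basis [x, sin(pi x), cos(pi x), ..., sin(n pi x), cos(n pi x)]."""
    feats = [x]
    for n in range(1, order + 1):
        feats.append(math.sin(n * math.pi * x))
        feats.append(math.cos(n * math.pi * x))
    return feats

def train_flann_lms(samples, targets, order=2, mu=0.2, epochs=1000):
    """Train the linear FLANN weights with plain LMS. (The paper trains the
    weights with TLBO; LMS stands in here only for brevity.)"""
    w = [0.0] * (2 * order + 1)
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            phi = flann_expand(x, order)
            y = sum(wi * pi for wi, pi in zip(w, phi))
            err = t - y                                   # instantaneous error
            w = [wi + mu * err * pi for wi, pi in zip(w, phi)]
    return w

def flann_predict(w, x, order=2):
    return sum(wi * pi for wi, pi in zip(w, flann_expand(x, order)))
```

    Because the expansion already contains the nonlinearity, training reduces to fitting a linear combination, which is why derivative-free optimizers like TLBO can be swapped in for the weight search.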

  18. Adaptive Multi-sensor Data Fusion Model for In-situ Exploration of Mars

    NASA Astrophysics Data System (ADS)

    Schneiderman, T.; Sobron, P.

    2014-12-01

    Laser Raman spectroscopy (LRS) and laser-induced breakdown spectroscopy (LIBS) can be used synergistically to characterize the geochemistry and mineralogy of potential microbial habitats and biosignatures. The value of LRS and LIBS has been recognized by the planetary science community: (i) NASA's Mars2020 mission features a combined LRS-LIBS instrument, SuperCam, and an LRS instrument, SHERLOC; (ii) an LRS instrument, RLS, will fly on ESA's 2018 ExoMars mission. The advantages of combining LRS and LIBS are evident: (1) LRS/LIBS can share hardware components; (2) LIBS reveals the relative concentration of major (and often trace) elements present in a sample; and (3) LRS yields information on the individual mineral species and their chemical/structural nature. Combining data from LRS and LIBS enables definitive mineral phase identification with precise chemical characterization of major, minor, and trace mineral species. New approaches to data processing are needed to analyze large amounts of LRS+LIBS data efficiently and maximize the scientific return of integrated measurements. Multi-sensor data fusion (MSDF) is a method that allows for robust sample identification through automated acquisition, processing, and combination of data. It optimizes information usage, yielding a more robust characterization of a target than could be acquired through single sensor use. We have developed a prototype fuzzy logic adaptive MSDF model aimed towards the unsupervised characterization of Martian habitats and their biosignatures using LRS and LIBS datasets. Our model also incorporates fusion of microimaging (MI) data - critical for placing analyses in geological and spatial context. Here, we discuss the performance of our novel MSDF model and demonstrate that automated quantification of the salt abundance in sulfate/clay/phyllosilicate mixtures is possible through data fusion of collocated LRS, LIBS, and MI data.

  19. Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    PubMed Central

    Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610

  20. Bayesian multivariate Poisson abundance models for T-cell receptor data.

    PubMed

    Greene, Joshua; Birtwistle, Marc R; Ignatowicz, Leszek; Rempala, Grzegorz A

    2013-06-07

    A major feature of an adaptive immune system is its ability to generate B- and T-cell clones capable of recognizing and neutralizing specific antigens. These clones recognize antigens with the help of the surface molecules, called antigen receptors, acquired individually during the clonal development process. In order to ensure a response to a broad range of antigens, the number of different receptor molecules is extremely large, resulting in a huge clonal diversity of both B- and T-cell receptor populations and making their experimental comparisons statistically challenging. To facilitate such comparisons, we propose a flexible parametric model of multivariate count data and illustrate its use in a simultaneous analysis of multiple antigen receptor populations derived from mammalian T-cells. The model relies on a representation of the observed receptor counts as a multivariate Poisson abundance mixture (mPAM). A Bayesian parameter fitting procedure is proposed, based on the complete posterior likelihood, rather than the conditional one used typically in similar settings. The new procedure is shown to be considerably more efficient than its conditional counterpart (as measured by the Fisher information) in the regions of mPAM parameter space relevant to modeling T-cell data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
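
    The core ensemble Kalman filter analysis step that the localization and clustering schemes build on can be sketched as follows: a stochastic EnKF update for a single scalar observation, with the localization, block-updating, and GMM extensions omitted.

```python
import random

def enkf_update(ensemble, h, y_obs, obs_var):
    """One stochastic-EnKF analysis step for a scalar observation.
    ensemble: list of state vectors (lists of floats)
    h: observation operator mapping a state vector to a predicted scalar."""
    n = len(ensemble)
    dim = len(ensemble[0])
    hx = [h(x) for x in ensemble]
    hx_mean = sum(hx) / n
    x_mean = [sum(x[i] for x in ensemble) / n for i in range(dim)]
    # sample covariances between each state component and the predicted obs
    p_xh = [sum((x[i] - x_mean[i]) * (hxj - hx_mean)
                for x, hxj in zip(ensemble, hx)) / (n - 1)
            for i in range(dim)]
    p_hh = sum((hxj - hx_mean) ** 2 for hxj in hx) / (n - 1)
    gain = [p / (p_hh + obs_var) for p in p_xh]  # Kalman gain, one per component
    updated = []
    for x, hxj in zip(ensemble, hx):
        y_pert = y_obs + random.gauss(0.0, obs_var ** 0.5)  # perturbed observation
        updated.append([xi + g * (y_pert - hxj) for xi, g in zip(x, gain)])
    return updated
```

Localization would taper the gain with distance from the observation; the GMM extension in the paper applies updates cluster by cluster instead of globally.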

  2. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    PubMed

    Keck, Christian; Savin, Cristina; Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
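
    The generative picture used here, a mixture model with Poisson noise, can also be fit classically with expectation-maximization. The sketch below is that classical EM fit, not the neural (Hebbian plasticity plus synaptic scaling) approximation the paper analyzes; the E-step's softmax over components plays the role of the competition that lateral inhibition introduces.

```python
import math, random

def poisson_logpmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def sample_poisson(lam):
    # Knuth's multiplication method; adequate for small rates
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def em_poisson_mixture(data, lams, weights, iters=50):
    """EM for a one-dimensional Poisson mixture."""
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each count
        resp = []
        for k in data:
            logs = [math.log(w) + poisson_logpmf(k, l) for w, l in zip(weights, lams)]
            m = max(logs)
            ps = [math.exp(s - m) for s in logs]
            z = sum(ps)
            resp.append([p / z for p in ps])
        # M-step: responsibility-weighted means update the rates and weights
        for j in range(len(lams)):
            rj = [r[j] for r in resp]
            s = sum(rj)
            lams[j] = sum(r * k for r, k in zip(rj, data)) / s
            weights[j] = s / len(data)
    return lams, weights
```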

  3. Modeling Grade IV Gas Emboli using a Limited Failure Population Model with Random Effects

    NASA Technical Reports Server (NTRS)

    Thompson, Laura A.; Conkin, Johnny; Chhikara, Raj S.; Powell, Michael R.

    2002-01-01

    Venous gas emboli (VGE) (gas bubbles in venous blood) are associated with an increased risk of decompression sickness (DCS) in hypobaric environments. A high grade of VGE can be a precursor to serious DCS. In this paper, we model time to Grade IV VGE considering a subset of individuals assumed to be immune from experiencing VGE. Our data contain monitoring test results from subjects undergoing up to 13 denitrogenation test procedures prior to exposure to a hypobaric environment. The onset time of Grade IV VGE is recorded as contained within certain time intervals. We fit a parametric (lognormal) mixture survival model to the interval- and right-censored data to account for the possibility of a subset of "cured" individuals who are immune to the event. Our model contains random subject effects to account for correlations between repeated measurements on a single individual. Model assessments and cross-validation indicate that this limited failure population mixture model is an improvement over a model that does not account for the potential of a fraction of cured individuals. We also evaluated some alternative mixture models. Predictions from the best fitted mixture model indicate that the actual process is reasonably approximated by a limited failure population model.
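
    The survival function of such a limited failure population (cure) model is a two-part mixture. A minimal sketch, assuming a lognormal time-to-event for the susceptible fraction and ignoring the random subject effects:

```python
import math

def lognormal_survival(t, mu, sigma):
    """S(t) = 1 - Phi((ln t - mu) / sigma) for a lognormal event time."""
    z = (math.log(t) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def cure_mixture_survival(t, cure_prob, mu, sigma):
    """Limited-failure-population model: a fraction `cure_prob` never
    experiences the event; the rest follow the lognormal distribution."""
    return cure_prob + (1.0 - cure_prob) * lognormal_survival(t, mu, sigma)
```

Interval-censored observations would then contribute differences S(t_lo) - S(t_hi) of this mixture survival function to the likelihood.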

  4. A globally accurate theory for a class of binary mixture models

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  5. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts

    USGS Publications Warehouse

    Dorazio, Robert M.; Martin, Julien; Edwards, Holly H.

    2013-01-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.

  6. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts.

    PubMed

    Dorazio, Robert M; Martin, Julien; Edwards, Holly H

    2013-07-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
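
    For reference, the basic N-mixture likelihood that this work extends marginalizes the latent abundance N out of repeated binomial counts. A minimal single-site sketch, without the hurdle or beta-binomial components:

```python
import math

def nmixture_site_loglik(counts, lam, p, n_max=200):
    """Marginal log-likelihood of repeated counts at one site under the
    basic N-mixture model: N ~ Poisson(lam), each survey y ~ Binomial(N, p).
    The sum over N is truncated at n_max."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        log_pn = n * math.log(lam) - lam - math.lgamma(n + 1)  # Poisson(N = n)
        log_obs = 0.0
        for y in counts:  # product of binomial survey likelihoods given N = n
            log_obs += (math.lgamma(n + 1) - math.lgamma(y + 1)
                        - math.lgamma(n - y + 1)
                        + y * math.log(p) + (n - y) * math.log(1.0 - p))
        total += math.exp(log_pn + log_obs)
    return math.log(total)
```

The extension in the paper replaces the Poisson prior with a hurdle model and the binomial detection layer with a beta-binomial one.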

  7. Bayesian Finite Mixtures for Nonlinear Modeling of Educational Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; And Others

    A Bayesian approach for finding latent classes in data is discussed. The approach uses finite mixture models to describe the underlying structure in the data and demonstrates that the possibility of using full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated…

  8. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    ERIC Educational Resources Information Center

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  9. Mixture Distribution Latent State-Trait Analysis: Basic Ideas and Applications

    ERIC Educational Resources Information Center

    Courvoisier, Delphine S.; Eid, Michael; Nussbeck, Fridtjof W.

    2007-01-01

    Extensions of latent state-trait models for continuous observed variables to mixture latent state-trait models with and without covariates of change are presented that can separate individuals differing in their occasion-specific variability. An empirical application to the repeated measurement of mood states (N = 501) revealed that a model with 2…

  10. Kinetic model for the vibrational energy exchange in flowing molecular gas mixtures. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Offenhaeuser, F.

    1987-01-01

    The present study is concerned with the development of a computational model for the description of the vibrational energy exchange in flowing gas mixtures, taking into account a given number of energy levels for each vibrational degree of freedom. It is possible to select an arbitrary number of energy levels. The presented model uses values in the range from 10 to approximately 40. The distribution of energy with respect to these levels can differ from the equilibrium distribution. The kinetic model developed can be employed for arbitrary gaseous mixtures with an arbitrary number of vibrational degrees of freedom for each type of gas. The application of the model to CO2-H2O-N2-O2-He mixtures is discussed. The obtained relations can be utilized in a study of the suitability of radiation-related transitional processes, involving the CO2 molecule, for laser applications. It is found that the computational results provided by the model agree very well with experimental data obtained for a CO2 laser. Possibilities for the activation of a 16-micron and 14-micron laser are considered.
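
    The equilibrium reference that the model's level populations are allowed to deviate from is the Boltzmann distribution over vibrational levels. A minimal sketch with generic, evenly spaced level energies (not the molecular constants used in the study):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_populations(level_energies_j, temperature_k):
    """Equilibrium (Boltzmann) populations over a truncated set of
    vibrational levels, normalized by the partition function."""
    weights = [math.exp(-e / (K_B * temperature_k)) for e in level_energies_j]
    z = sum(weights)  # partition function over the truncated level set
    return [w / z for w in weights]
```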

  11. MODEL OF ADDITIVE EFFECTS OF MIXTURES OF NARCOTIC CHEMICALS

    EPA Science Inventory

    Biological effects data with single chemicals are far more abundant than with mixtures. Yet, environmental exposures to chemical mixtures, for example near hazardous waste sites or nonpoint sources, are very common and using test data from single chemicals to approximate effects o...

  12. Thermodynamic properties of model CdTe/CdSe mixtures

    DOE PAGES

    van Swol, Frank; Zhou, Xiaowang W.; Challa, Sivakumar R.; ...

    2015-02-20

    We report on the thermodynamic properties of binary compound mixtures of model groups II–VI semiconductors. We use the recently introduced Stillinger–Weber Hamiltonian to model binary mixtures of CdTe and CdSe. We use molecular dynamics simulations to calculate the volume and enthalpy of mixing as a function of mole fraction. The lattice parameter of the mixture closely follows Vegard's law: a linear relation. This implies that the excess volume is a cubic function of mole fraction. A connection is made with hard sphere models of mixed fcc and zincblende structures. We found that the potential energy exhibits a positive deviation from ideal solution behaviour; the excess enthalpy is nearly independent of the temperatures studied (300 and 533 K) and is well described by a simple cubic function of the mole fraction. Using a regular solution approach (combining non-ideal behaviour for the enthalpy with ideal solution behaviour for the entropy of mixing), we arrive at the Gibbs free energy of the mixture. The Gibbs free energy results indicate that the CdTe and CdSe mixtures exhibit phase separation. The upper consolute temperature is found to be 335 K. Finally, we provide the surface energy as a function of composition. It roughly follows ideal solution theory, but with a negative deviation (negative excess surface energy). This indicates that alloying increases the stability, even for nano-particles.
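
    The regular-solution construction described above (non-ideal enthalpy plus ideal entropy of mixing) can be sketched as follows. The symmetric interaction parameter omega is a stand-in for illustration, not the cubic excess-enthalpy fit of the paper:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs_mix(x, temperature, omega):
    """Regular-solution Gibbs energy of mixing per mole at mole fraction x:
    symmetric non-ideal enthalpy omega*x*(1-x) minus T times the ideal
    entropy of mixing."""
    h_ex = omega * x * (1.0 - x)
    s_mix = -R * (x * math.log(x) + (1.0 - x) * math.log(1.0 - x))
    return h_ex - temperature * s_mix

def upper_consolute_temperature(omega):
    # for the symmetric regular solution, Tc = omega / (2R)
    return omega / (2.0 * R)
```

Below the consolute temperature the curve develops two minima, which is the phase-separation behaviour reported for the CdTe/CdSe mixtures.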

  13. Second law of thermodynamics in volume diffusion hydrodynamics in multicomponent gas mixtures

    NASA Astrophysics Data System (ADS)

    Dadzie, S. Kokou

    2012-10-01

    We present the thermodynamic structure of a new continuum flow model for multicomponent gas mixtures. The continuum model is based on a volume diffusion concept involving specific species. It is independent of the observer's reference frame and enables a straightforward tracking of a selected species within a mixture composed of a large number of constituents. A method to derive the second law and constitutive equations accompanying the model is presented. Using the configuration of a rotating fluid, we illustrate an example of non-classical flow physics predicted by new contributions in the entropy and constitutive equations.

  14. Estimation of the performance of J-T refrigerators operating with nitrogen-hydrocarbon mixtures and a coiled tubes-in-tube heat exchanger

    NASA Astrophysics Data System (ADS)

    Satya Meher, R.; Venkatarathnam, G.

    2018-06-01

    The exergy efficiency of Joule-Thomson (J-T) refrigerators operating with mixtures (MRC systems) strongly depends on the choice of refrigerant mixture and the performance of the heat exchanger used. Helically coiled, multiple tubes-in-tube heat exchangers with an effectiveness of over 96% are widely used in these types of systems. All the current studies focus only on the different heat transfer correlations and the uncertainty in predicting performance of the heat exchanger alone. The main focus of this work is to estimate the uncertainty in cooling capacity when the homogenous model is used by comparing the theoretical and experimental studies. The comparisons have been extended to some two-phase models present in the literature as well. Experiments have been carried out on a J-T refrigerator at a fixed heat load of 10 W with different nitrogen-hydrocarbon mixtures in the evaporator temperature range of 100-120 K. Different heat transfer models have been used to predict the temperature profiles as well as the cooling capacity of the refrigerator. The results show that the homogenous two-phase flow model is probably the most suitable model for rating the cooling capacity of a J-T refrigerator operating with nitrogen-hydrocarbon mixtures.

  15. Interactions and Toxicity of Cu-Zn mixtures to Hordeum vulgare in Different Soils Can Be Rationalized with Bioavailability-Based Prediction Models.

    PubMed

    Qiu, Hao; Versieren, Liske; Rangel, Georgina Guzman; Smolders, Erik

    2016-01-19

    Soil contamination with copper (Cu) is often associated with zinc (Zn), and the biological response to such mixed contamination is complex. Here, we investigated Cu and Zn mixture toxicity to Hordeum vulgare in three different soils, the premise being that the observed interactions are mainly due to effects on bioavailability. The toxic effect of Cu and Zn mixtures on seedling root elongation was more than additive (i.e., synergism) in soils with high and medium cation-exchange capacity (CEC) but less than additive (antagonism) in a low-CEC soil. This was found when we expressed the dose as the conventional total soil concentration. In contrast, antagonism was found in all soils when we expressed the dose as free-ion activities in soil solution, indicating that there is metal-ion competition for binding to the plant roots. Neither a concentration addition nor an independent action model explained mixture effects, irrespective of the dose expressions. In contrast, a multimetal BLM model and a WHAM-Ftox model successfully explained the mixture effects across all soils and showed that bioavailability factors mainly explain the interactions in soils. The WHAM-Ftox model is a promising tool for the risk assessment of mixed-metal contamination in soils.

  16. Estimating Lion Abundance using N-mixture Models for Social Species

    PubMed Central

    Belant, Jerrold L.; Bled, Florent; Wilton, Clay M.; Fyumagwa, Robert; Mwampeta, Stanslaus B.; Beyer, Dean E.

    2016-01-01

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170–551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species. PMID:27786283

  17. Estimating Lion Abundance using N-mixture Models for Social Species.

    PubMed

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.

  18. A comparison of direct and indirect methods for the estimation of health utilities from clinical outcomes.

    PubMed

    Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb

    2014-10-01

    Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.
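
    The final step of the indirect, response-mapping route can be sketched as follows: per-dimension level probabilities (here, what the ordered probit models would output) are combined into an expected utility. The additive decrement table below is hypothetical, not the UK EQ-5D value set:

```python
def expected_tariff(level_probs, decrements):
    """Expected utility under a simple additive tariff: start from full
    health (1.0) and subtract the probability-weighted decrement for each
    dimension. Both tables are illustrative, not a published value set.
    level_probs: dim -> list of level probabilities (summing to 1)
    decrements: dim -> utility decrement per level."""
    utility = 1.0
    for dim, probs in level_probs.items():
        utility -= sum(p * d for p, d in zip(probs, decrements[dim]))
    return utility
```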

  19. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    NASA Astrophysics Data System (ADS)

    Gulliver, Eric A.

    The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop and roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations.
    Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent-quality, high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.
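
    The control-chart comparison of tile-area measurements can be sketched with simple Shewhart x-bar limits on subgroup means (a simplified version; production charts use bias-corrected pooled standard deviations rather than a plain average of subgroup standard deviations):

```python
import statistics

def xbar_limits(subgroups):
    """Approximate Shewhart x-bar limits (mean +/- 3 sigma / sqrt(n)) for
    subgroup means of, e.g., tessellation tile areas. Points outside the
    limits would flag a shift in mixture microstructure."""
    n = len(subgroups[0])  # subgroup size
    grand = statistics.mean(statistics.mean(g) for g in subgroups)
    s_bar = statistics.mean(statistics.stdev(g) for g in subgroups)
    half_width = 3.0 * s_bar / n ** 0.5
    return grand - half_width, grand + half_width
```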

  20. A framework for the use of single-chemical transcriptomics data in predicting the hazards associated with complex mixtures of polycyclic aromatic hydrocarbons.

    PubMed

    Labib, Sarah; Williams, Andrew; Kuo, Byron; Yauk, Carole L; White, Paul A; Halappanavar, Sabina

    2017-07-01

    The assumption of additivity applied in the risk assessment of environmental mixtures containing carcinogenic polycyclic aromatic hydrocarbons (PAHs) was investigated using transcriptomics. Muta™Mouse mice were gavaged for 28 days with three doses of eight individual PAHs, two defined mixtures of PAHs, or coal tar, an environmentally ubiquitous complex mixture of PAHs. Microarrays were used to identify differentially expressed genes (DEGs) in lung tissue collected 3 days post-exposure. Cancer-related pathways perturbed by the individual or mixtures of PAHs were identified, and dose-response modeling of the DEGs was conducted to calculate gene/pathway benchmark doses (BMDs). Individual PAH-induced pathway perturbations (the median gene expression changes for all genes in a pathway relative to controls) and pathway BMDs were applied to models of additivity [i.e., concentration addition (CA), generalized concentration addition (GCA), and independent action (IA)] to generate predicted pathway-specific dose-response curves for each PAH mixture. The predicted and observed pathway dose-response curves were compared to assess the sensitivity of different additivity models. Transcriptomics-based additivity calculation showed that IA accurately predicted the pathway perturbations induced by all mixtures of PAHs. CA did not support the additivity assumption for the defined mixtures; however, GCA improved the CA predictions. Moreover, pathway BMDs derived for coal tar were comparable to BMDs derived from previously published coal tar-induced mouse lung tumor incidence data. These results suggest that in the absence of tumor incidence data, individual chemical-induced transcriptomics changes associated with cancer can be used to investigate the assumption of additivity and to predict the carcinogenic potential of a mixture.
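
    The two classical additivity references compared above have simple closed forms. A minimal sketch with generic effect fractions and a shared Hill slope (not the paper's transcriptomic benchmark-dose calculations):

```python
def independent_action(effects):
    """Independent action (response addition): combined fractional effect of
    components assumed to act by dissimilar modes of action."""
    unaffected = 1.0
    for e in effects:
        unaffected *= (1.0 - e)
    return 1.0 - unaffected

def concentration_addition(doses, ec50s, hill=1.0):
    """Concentration addition: sum toxic units across components sharing a
    mode of action, then map through a common Hill response curve."""
    tu = sum(d / e for d, e in zip(doses, ec50s))
    return tu ** hill / (1.0 + tu ** hill)
```

Generalized concentration addition extends the CA form to partial agonists with differing maximal effects.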

  1. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    USGS Publications Warehouse

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely remain largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution.
Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
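As a concrete illustration of the data-generating process such simulations draw from, the sketch below generates latent Poisson abundances and binomially thinned counts. The settings R, T, lam, and p are invented for the example, not one of the paper's 837 scenarios:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical settings (invented): R sample units, T survey occasions,
# mean abundance lam, per-visit detection probability p.
R, T, lam, p = 100, 4, 5.0, 0.3

# Latent site abundances, then binomial thinning on each survey occasion.
N = rng.poisson(lam, size=R)                  # true abundances (unobserved)
y = rng.binomial(N[:, None], p, size=(R, T))  # observed counts

# A naive index such as the site maximum understates true abundance when
# p is low, which is what the N-mixture likelihood is meant to correct.
naive = y.max(axis=1).mean()
print(naive, N.mean())
```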

  2. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    NASA Astrophysics Data System (ADS)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

    Zeotropic mixtures never have the same liquid and vapor composition in the liquid-vapor equilibrium. Also, the bubble and the dew point are separated; this gap is called glide temperature (Tglide). Those characteristics have made these mixtures suitable for cryogenics Joule-Thomson (JT) refrigeration cycles. Zeotropic mixtures as working fluid in JT cycles improve their performance in an order of magnitude. Optimization of JT cycles have earned substantial importance for cryogenics applications (e.g, gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchangers design on those cycles is a critical point; consequently, heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, it will be applied a methodology in order to calculate the local convective heat transfer coefficients based on the law of the wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and locally applied in a fully developed, constant temperature wall, and two-phase annular flow in a duct. Numerical results have been obtained using this model taking into account continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016), and they have shown good agreement.

  3. Structure-related aspects on water diffusivity in fatty acid-soap and skin lipid model systems.

    PubMed

    Norlén, L; Engblom, J

    2000-01-03

    Simplified skin barrier models are necessary to get a first hand understanding of the very complex morphology and physical properties of the human skin barrier. In addition, it is of great importance to construct relevant models that will allow for rational testing of barrier perturbing/occlusive effects of a large variety of substances. The primary objective of this work was to study the effect of lipid morphology on water permeation through various lipid mixtures (i.e., partly neutralised free fatty acids, as well as a skin lipid model mixture). In addition, the effects of incorporating Azone((R)) (1-dodecyl-azacycloheptan-2-one) into the skin lipid model mixture was studied. Small- and wide-angle X-ray diffraction was used for structure determinations. It is concluded that: (a) the water flux through a crystalline fatty acid-sodium soap-water mixture (s) is statistically significantly higher than the water flux through the corresponding lamellar (L(alpha)) and reversed hexagonal (H(II)) liquid crystalline phases, which do not differ between themselves; (b) the water flux through mixtures of L(alpha)/s decreases statistically significantly with increasing relative amounts of lamellar (L(alpha)) liquid crystalline phase; (c) the addition of Azone((R)) to a skin lipid model system induces a reduction in water flux. However, further studies are needed to more closely characterise the structural basis for the occlusive effects of Azone((R)) on water flux.

  4. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    PubMed

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
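The MRL function at the center of this abstract, m(t) = E[T − t | T > t] = ∫_t^∞ S(u) du / S(t), can be evaluated numerically for a fixed gamma mixture. The sketch below does so with invented weights and gamma parameters; it illustrates only the definition, not the paper's Dirichlet process inference:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Illustrative two-component gamma mixture; the weights and shape/scale
# values are invented for the sketch, not estimated from data.
w = [0.4, 0.6]
comps = [stats.gamma(a=2.0, scale=1.5), stats.gamma(a=6.0, scale=0.8)]

def surv(t):
    # Mixture survival function S(t) = sum_i w_i * S_i(t).
    return sum(wi * c.sf(t) for wi, c in zip(w, comps))

def mrl(t):
    # Mean residual life: m(t) = E[T - t | T > t] = int_t^inf S(u) du / S(t).
    integral, _ = quad(surv, t, np.inf)
    return integral / surv(t)

print(mrl(0.0))  # at t = 0 this is the overall mean, 0.4*3.0 + 0.6*4.8 = 4.08
```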

  5. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    PubMed

    Yu, Kezi; Quirk, J Gerald; Djurić, Petar M

    2017-01-01

In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy fetuses and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
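A minimal sketch of the two-model classification scheme, with finite Gaussian mixtures (scikit-learn) standing in for the HDP models and synthetic 2-D features standing in for the FHR recordings; the data, component counts, and prior are all invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)

# Stand-in for the paper's HDP models: one finite Gaussian mixture per
# class, fit to synthetic 2-D "feature vectors" (the actual FHR features
# and the nonparametric HDP machinery are not reproduced here).
X_healthy = rng.normal(0.0, 1.0, size=(500, 2))
X_adverse = rng.normal(2.0, 1.0, size=(500, 2))

gm_h = GaussianMixture(n_components=3, random_state=0).fit(X_healthy)
gm_a = GaussianMixture(n_components=3, random_state=0).fit(X_adverse)

def prob_healthy(x, prior=0.5):
    # Bayes' rule on the two class-conditional mixture likelihoods.
    like_h = np.exp(gm_h.score_samples(x))
    like_a = np.exp(gm_a.score_samples(x))
    return prior * like_h / (prior * like_h + (1 - prior) * like_a)

p = prob_healthy(np.array([[0.0, 0.0], [2.0, 2.0]]))
print(p)  # high probability for the first point, low for the second
```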

  6. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models

    PubMed Central

    Yu, Kezi; Quirk, J. Gerald

    2017-01-01

In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy fetuses and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting. PMID:28953927

  7. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  8. An introduction to mixture item response theory models.

    PubMed

    De Ayala, R J; Santiago, S Y

    2017-02-01

Mixture item response theory (IRT) allows one to address situations that involve a mixture of latent subpopulations that are qualitatively different but within which a measurement model based on a continuous latent variable holds. In this modeling framework, one can characterize students both by their location on a continuous latent variable and by their latent class membership. For example, in a study of risky youth behavior this approach would make it possible to estimate an individual's propensity to engage in risky youth behavior (i.e., on a continuous scale) and to use these estimates to identify youth who might be at the greatest risk given their class membership. Mixture IRT can be used with binary response data (e.g., true/false, agree/disagree, endorsement/non-endorsement, correct/incorrect, presence/absence of a behavior), Likert response scales, partial credit scoring, nominal scales, or rating scales. In the following, we present mixture IRT modeling and two examples of its use. Data needed to reproduce analyses in this article are available as supplemental online materials at http://dx.doi.org/10.1016/j.jsp.2016.01.002. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
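A minimal numerical sketch of the mixture IRT idea for binary responses: the same items carry class-specific Rasch difficulties, and marginal response probabilities mix over latent class membership. All parameter values below are invented:

```python
import numpy as np

def rasch_p(theta, b):
    # Rasch model: P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)).
    return 1.0 / (1.0 + np.exp(-(np.asarray(theta) - b)))

# Hypothetical two-class mixture: the same three items carry different
# difficulties in each latent class (all values invented).
b_class1 = np.array([-1.0, 0.0, 1.0])   # item difficulties in class 1
b_class2 = np.array([0.5, 0.5, 0.5])    # item difficulties in class 2
pi = np.array([0.7, 0.3])               # latent class proportions

def marginal_p(theta):
    # Marginal item response probabilities mix over class membership.
    return pi[0] * rasch_p(theta, b_class1) + pi[1] * rasch_p(theta, b_class2)

print(marginal_p(0.0))
```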

  9. Population heterogeneity in the salience of multiple risk factors for adolescent delinquency.

    PubMed

    Lanza, Stephanie T; Cooper, Brittany R; Bray, Bethany C

    2014-03-01

    To present mixture regression analysis as an alternative to more standard regression analysis for predicting adolescent delinquency. We demonstrate how mixture regression analysis allows for the identification of population subgroups defined by the salience of multiple risk factors. We identified population subgroups (i.e., latent classes) of individuals based on their coefficients in a regression model predicting adolescent delinquency from eight previously established risk indices drawn from the community, school, family, peer, and individual levels. The study included N = 37,763 10th-grade adolescents who participated in the Communities That Care Youth Survey. Standard, zero-inflated, and mixture Poisson and negative binomial regression models were considered. Standard and mixture negative binomial regression models were selected as optimal. The five-class regression model was interpreted based on the class-specific regression coefficients, indicating that risk factors had varying salience across classes of adolescents. Standard regression showed that all risk factors were significantly associated with delinquency. Mixture regression provided more nuanced information, suggesting a unique set of risk factors that were salient for different subgroups of adolescents. Implications for the design of subgroup-specific interventions are discussed. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
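The mixture idea behind the analysis can be sketched with a two-component Poisson mixture fit by EM. Unlike the paper's mixture regression, this intercept-only toy has no risk-factor covariates, and the counts are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic counts from two latent subgroups, a stand-in for the
# delinquency outcome (the survey data and covariates are not modeled).
y = np.concatenate([rng.poisson(1.0, 700), rng.poisson(6.0, 300)])

# EM for a two-component Poisson mixture.
pi, lam = np.array([0.5, 0.5]), np.array([0.5, 5.0])
for _ in range(200):
    # E-step: posterior responsibilities (log(y!) cancels in the ratio).
    logw = np.log(pi) + y[:, None] * np.log(lam) - lam
    logw -= logw.max(axis=1, keepdims=True)
    r = np.exp(logw)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update class weights and class-specific Poisson rates.
    pi = r.mean(axis=0)
    lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)

print(pi, lam)  # roughly recovers the 70/30 split and rates near 1 and 6
```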

  10. Evaluation and improvement of micro-surfacing mix design method and modelling of asphalt emulsion mastic in terms of filler-emulsion interaction

    NASA Astrophysics Data System (ADS)

    Robati, Masoud

This doctoral program focuses on evaluating and improving the rutting resistance of micro-surfacing mixtures. Many research problems related to the rutting resistance of micro-surfacing mixtures still require further work. The main objective of this Ph.D. program is to experimentally and analytically study and improve the rutting resistance of micro-surfacing mixtures. During this Ph.D. program, the major aspects related to the rutting resistance of micro-surfacing mixtures are investigated and presented as follows: 1) evaluation of a modification of current micro-surfacing mix design procedures: on the basis of this effort, a new mix design procedure is proposed for type III micro-surfacing mixtures as rut-fill materials on the road surface. Unlike the current mix design guidelines and specifications, the new mix design is capable of selecting the optimum mix proportions for micro-surfacing mixtures; 2) evaluation of test methods and selection of aggregate grading for type III application of micro-surfacing: within this study, a new specification for selection of aggregate grading for type III application of micro-surfacing is proposed; 3) evaluation of repeatability and reproducibility of micro-surfacing mixture design tests: in this study, limits for repeatability and reproducibility of micro-surfacing mix design tests are presented; 4) a new conceptual model for the filler stiffening effect on asphalt mastic of micro-surfacing: a new model is proposed that is able to establish limits for minimum and maximum filler concentrations in the micro-surfacing mixture based only on the filler's important physical and chemical properties; 5) incorporation of reclaimed asphalt pavement and post-fabrication asphalt shingles in micro-surfacing mixtures: the effectiveness of the newly developed mix design procedure for micro-surfacing mixtures is further validated using recycled materials. 
The results present limits for the amounts of RAP and RAS that can be used in micro-surfacing mixtures; 6) new colored micro-surfacing formulations with improved durability and performance: a significant improvement of around 45% in the rutting resistance of colored and conventional micro-surfacing mixtures is achieved by employing a low-penetration-grade bitumen, polymer-modified asphalt emulsion stabilized with nanoparticles.

  11. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Treesearch

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  12. Cure modeling in real-time prediction: How much does it help?

    PubMed

    Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F

    2017-08-01

Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intended to cover situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model did, but the difference was unremarkable until late in the trial, when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
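A Weibull cure-mixture survival function can be sketched directly from its definition: a cured fraction never experiences the event, and the remainder follows a standard Weibull. The cure fraction and Weibull parameters below are invented, not estimates from RTOG 0129:

```python
import numpy as np

def cure_mixture_surv(t, pi_cure, shape, scale):
    # Weibull cure-mixture survival: S(t) = pi + (1 - pi) * exp(-(t/scale)^shape).
    # A cured fraction pi never experiences the event; the rest follow a
    # standard Weibull. All parameter values here are invented.
    t = np.asarray(t, dtype=float)
    return pi_cure + (1.0 - pi_cure) * np.exp(-((t / scale) ** shape))

t = np.array([0.0, 12.0, 60.0, 1e6])
s = cure_mixture_surv(t, pi_cure=0.35, shape=1.2, scale=24.0)
print(s)  # decreases from 1.0 and plateaus at the cure fraction 0.35
```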

  13. LES/PDF studies of joint statistics of mixture fraction and progress variable in piloted methane jet flames with inhomogeneous inlet flows

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng

    2016-11-01

    The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.
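One commonly presumed shape for the joint PDF can be sketched under the (strong) assumption of statistically independent, moment-matched beta marginals for mixture fraction and progress variable; the moments fed in below are invented rather than taken from an LES field:

```python
import numpy as np
from scipy import stats

def beta_from_moments(mean, var):
    # Moment-matched beta parameters; requires var < mean * (1 - mean).
    k = mean * (1 - mean) / var - 1.0
    return mean * k, (1 - mean) * k

# Invented first and second moments for mixture fraction Z and progress
# variable c; a real LES would supply these from the resolved fields.
aZ, bZ = beta_from_moments(0.3, 0.02)
ac, bc = beta_from_moments(0.6, 0.05)

def joint_pdf(z, c):
    # Presumed joint PDF under the assumption that Z and c are
    # statistically independent -- one of the shapes such studies evaluate.
    return stats.beta(aZ, bZ).pdf(z) * stats.beta(ac, bc).pdf(c)

print(joint_pdf(0.3, 0.6))
```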

  14. EVALUATING QUANTITATIVE FORMULAS FOR DOSE-RESPONSE ASSESSMENT OF CHEMICAL MIXTURES

    EPA Science Inventory

    Risk assessment formulas are often distinguished from dose-response models by being rough but necessary. The evaluation of these rough formulas is described here, using the example of mixture risk assessment. Two conditions make the dose-response part of mixture risk assessment d...

  15. SELECTIVE CHANGES IN BRAIN PROTEIN KINASE C ISOFORMS FOLLOWING DEVELOPMENTAL EXPOSURE TO A PCB MIXTURE.

    EPA Science Inventory

    Introduction
    Polychlorinated biphenyls (PCBs) offer a unique model to understand the major issues related to complex environmental mixtures. These environmental pollutants are ubiquitous, persistent, bioaccumulate in human body through the food chain, and exist as mixtures of ...

  16. Physiologically based pharmacokinetic modeling of tea catechin mixture in rats and humans.

    PubMed

    Law, Francis C P; Yao, Meicun; Bi, Hui-Chang; Lam, Stephen

    2017-06-01

Although green tea (Camellia sinensis) (GT) contains a large number of polyphenolic compounds with anti-oxidative and anti-proliferative activities, little is known of the pharmacokinetics and tissue dose of tea catechins (TCs) as a chemical mixture in humans. The objectives of this study were to develop and validate a physiologically based pharmacokinetic (PBPK) model of the tea catechin mixture (TCM) in rats and humans, and to predict an integrated or total concentration of TCM in the plasma of humans after consuming GT or Polyphenon E (PE). To this end, a PBPK model of epigallocatechin gallate (EGCg), consisting of 13 first-order, blood-flow-limited tissue compartments, was first developed in rats. The rat model was scaled up to humans by replacing its physiological parameters, pharmacokinetic parameters, and tissue/blood partition coefficients (PCs) with human-specific values. Both the rat and human EGCg models were then extrapolated to other TCs by substituting their physicochemical parameters, pharmacokinetic parameters, and PCs with catechin-specific values. Finally, a PBPK model of TCM was constructed by linking three rat (or human) tea catechin models together without including a description of pharmacokinetic interactions between the TCs. The mixture PBPK model accurately predicted the pharmacokinetic behaviors of three individual TCs in the plasma of rats and humans after GT or PE consumption. The model-predicted total TCM concentration in the plasma was linearly related to the dose consumed by humans. The mixture PBPK model is able to translate an external dose of TCM into internal target-tissue doses for future safety assessment and dose-response analysis studies in humans. The modeling framework described in this paper is also applicable to bioactive chemicals in other plant-based health products.
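The building block of such a flow-limited PBPK model is a single well-mixed tissue compartment, dC/dt = Q(C_art − C/P)/V. The sketch below integrates one such compartment with invented placeholder parameters, not catechin-specific values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One flow-limited, first-order tissue compartment, the building block of
# a multi-compartment PBPK model: dC/dt = Q * (C_art - C / P) / V.
# All parameter values are illustrative placeholders.
Q, V, P = 1.2, 0.5, 4.0                     # flow (L/h), volume (L), partition coeff.
c_art = lambda t: 10.0 * np.exp(-0.5 * t)   # declining arterial concentration

def rhs(t, c):
    return [Q * (c_art(t) - c[0] / P) / V]

sol = solve_ivp(rhs, (0.0, 24.0), [0.0], dense_output=True, rtol=1e-8)
c = sol.sol(np.linspace(0.0, 24.0, 2001))[0]
print(c.max())  # tissue concentration rises, peaks, then washes out
```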

  17. A Mechanistic Design Approach for Graphite Nanoplatelet (GNP) Reinforced Asphalt Mixtures for Low-Temperature Applications

    DOT National Transportation Integrated Search

    2018-01-01

    This report explores the application of a discrete computational model for predicting the fracture behavior of asphalt mixtures at low temperatures based on the results of simple laboratory experiments. In this discrete element model, coarse aggregat...

  18. On hydrodynamic phase field models for binary fluid mixtures

    NASA Astrophysics Data System (ADS)

    Yang, Xiaogang; Gong, Yuezheng; Li, Jun; Zhao, Jia; Wang, Qi

    2018-05-01

Two classes of thermodynamically consistent hydrodynamic phase field models have been developed for binary fluid mixtures of incompressible viscous fluids of possibly different densities and viscosities. One is quasi-incompressible, while the other is incompressible. For the same binary mixture of two incompressible viscous fluid components, which one is more appropriate? To answer this question, we conduct a comparative study in this paper. First, we revisit their derivation, conservation, and energy dissipation properties and show that the quasi-incompressible model conserves both mass and linear momentum, while the incompressible one does not. We then show in a linear stability analysis that the quasi-incompressible model is sensitive to the density deviation of the fluid components, while the incompressible model is not. Second, we conduct a numerical investigation of coarsening or coalescence dynamics of protuberances using the two models. We find that they can predict quite different transient dynamics depending on the initial conditions and the density difference, although they predict essentially the same quasi-steady results in some cases. This study thus casts doubt on the applicability of the incompressible model for describing the dynamics of binary mixtures of two incompressible viscous fluids, especially when the two fluid components have a large density deviation.

  19. Ab Initio Studies of Shock-Induced Chemical Reactions of Inter-Metallics

    NASA Astrophysics Data System (ADS)

    Zaharieva, Roussislava; Hanagud, Sathya

    2009-06-01

Shock-induced and shock-assisted chemical reactions of intermetallic mixtures are studied by many researchers using both experimental and theoretical techniques. The theoretical studies are primarily at continuum scales. The model frameworks include mixture theories and meso-scale models of grains of porous mixtures. The reaction models vary from equilibrium thermodynamic models to several non-equilibrium thermodynamic models. The shock effects are primarily studied using appropriate conservation equations and numerical techniques to integrate the equations. All these models require material constants from experiments and estimates of transition states. Thus, the objective of this paper is to present studies based on ab initio techniques. The ab initio studies, to date, use ab initio molecular dynamics. This paper presents a study that uses shock pressures and associated temperatures as starting variables. The intermetallic mixtures are modeled as slabs, and the required shock stresses are created by straining the lattice. Ab initio binding energy calculations are then used to examine the stability of the reactions. Binding energies are obtained for different strain components superimposed on uniform compression and finite temperatures. Then, vibrational frequencies and nudged-elastic-band techniques are used to study reactivity and transition states. Examples include Ni and Al.

  20. An odor interaction model of binary odorant mixtures by a partial differential equation method.

    PubMed

    Yan, Luchun; Liu, Jiemin; Wang, Guihua; Wu, Chuandong

    2014-07-09

A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes by a partial differential equation (PDE) method. Based on the measurement method (tangent-intercept method) for partial molar volume, the original parameters of the corresponding formulas were replaced by perceptual measures. These substitutions made it possible to relate a mixture's odor intensity to each individual odorant's relative odor activity value (OAV). Several binary mixtures of benzene and substituted benzenes were tested to establish the PDE models. The results showed that the PDE model provides an easily interpretable method relating individual components to their joint odor intensity. Moreover, both the predictive performance and the feasibility of the PDE model were demonstrated through a series of odor intensity matching tests. By combining the PDE model with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be achieved by instruments instead of odor assessors, avoiding many disadvantages (e.g., the expense of a fixed panel of odor assessors). Thus, the PDE model is expected to be helpful in the monitoring and management of odor pollution.
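The odor activity value (OAV) that the model takes as input is, in standard olfactometry usage, a component's concentration divided by its odor detection threshold. A minimal sketch with invented numbers (the PDE model itself is not reproduced):

```python
# Odor activity value (OAV) of each component: concentration divided by its
# odor detection threshold. The concentrations and thresholds below are
# invented; the paper's PDE machinery built on top of OAVs is not shown.
def odor_activity_values(conc, thresholds):
    return [c / t for c, t in zip(conc, thresholds)]

conc = [2.0, 0.5]          # component concentrations (mg/m^3, invented)
thresholds = [0.5, 0.25]   # odor detection thresholds (mg/m^3, invented)
oav = odor_activity_values(conc, thresholds)
print(oav)  # [4.0, 2.0]: the first component contributes more to the odor
```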

  1. A Twin Factor Mixture Modeling Approach to Childhood Temperament: Differential Heritability

    ERIC Educational Resources Information Center

    Scott, Brandon G.; Lemery-Chalfant, Kathryn; Clifford, Sierra; Tein, Jenn-Yun; Stoll, Ryan; Goldsmith, H. Hill

    2016-01-01

    Twin factor mixture modeling was used to identify temperament profiles while simultaneously estimating a latent factor model for each profile with a sample of 787 twin pairs (M_age = 7.4 years, SD = 0.84; 49% female; 88.3% Caucasian), using mother- and father-reported temperament. A four-profile, one-factor model fit the data well.…

  2. Effect of surface ionization on wetting layers

    NASA Technical Reports Server (NTRS)

    Kayser, R. F.

    1986-01-01

    A surface ionization model due to Langmuir is generalized to liquid mixtures of polar and nonpolar components in contact with ionizable substrates. When a predominantly nonpolar mixture is near a miscibility gap, thick wetting layers of the conjugate polar phase form on the substrate. Such charged layers can be much thicker than similar wetting layers stabilized by dispersion forces. This model may explain the 0.4- to 0.6-micron-thick wetting layers formed in stirred mixtures of nitromethane and carbon disulfide in contact with glass.

  3. Investigating Individual Differences in Toddler Search with Mixture Models

    ERIC Educational Resources Information Center

    Berthier, Neil E.; Boucher, Kelsea; Weisner, Nina

    2015-01-01

    Children's performance on cognitive tasks is often described in categorical terms in that a child is described as either passing or failing a test, or knowing or not knowing some concept. We used binomial mixture models to determine whether individual children could be classified as passing or failing two search tasks, the DeLoache model room…

  4. The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Choi, In-Hee; Paek, Insu; Cho, Sun-Joo

    2017-01-01

    The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], and sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
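The four criteria can be computed directly from a fitted model's log-likelihood; the sketch below uses the standard formulas with invented log-likelihood and parameter-count values for one- to three-class solutions:

```python
import numpy as np

def info_criteria(loglik, k, n):
    # The four criteria compared in the study (k = free parameters, n = sample size).
    aic = -2.0 * loglik + 2.0 * k
    aicc = aic + 2.0 * k * (k + 1) / (n - k - 1)        # small-sample corrected AIC
    bic = -2.0 * loglik + k * np.log(n)
    sabic = -2.0 * loglik + k * np.log((n + 2) / 24.0)  # sample-size adjusted BIC
    return {"AIC": aic, "AICC": aicc, "BIC": bic, "SABIC": sabic}

# Hypothetical log-likelihoods for 1-3 latent classes (numbers invented);
# the preferred class count is the one minimizing a given criterion.
for classes, (ll, k) in enumerate([(-5210.4, 10), (-5150.2, 21), (-5146.8, 32)], start=1):
    print(classes, info_criteria(ll, k, n=1000))
```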

  5. Mixture Item Response Theory-MIMIC Model: Simultaneous Estimation of Differential Item Functioning for Manifest Groups and Latent Classes

    ERIC Educational Resources Information Center

    Bilir, Mustafa Kuzey

    2009-01-01

    This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…

  6. Evidence for phase separation of ethanol-water mixtures at the hydrogen terminated nanocrystalline diamond surface.

    PubMed

    Janssens, Stoffel D; Drijkoningen, Sien; Saitner, Marc; Boyen, Hans-Gerd; Wagner, Patrick; Larsson, Karin; Haenen, Ken

    2012-07-28

Interactions between ethanol-water mixtures and a hydrophobic hydrogen-terminated nanocrystalline diamond surface are investigated by sessile-drop contact angle measurements. The surface free energy of the hydrophobic surface obtained with pure liquids differs strongly from the values obtained with ethanol-water mixtures. Here, a model which explains this difference is presented. The model suggests that, due to the higher affinity of ethanol for the hydrophobic surface compared to water, a phase separation occurs when a mixture of both liquids is in contact with the H-terminated diamond surface. These results are supported by a computational study giving insight into the affinity and related interactions at the liquid-solid interface.

  7. New theoretical framework for designing nonionic surfactant mixtures that exhibit a desired adsorption kinetics behavior.

    PubMed

    Moorkanikkara, Srinivas Nageswaran; Blankschtein, Daniel

    2010-12-21

How does one design a surfactant mixture using a set of available surfactants such that it exhibits a desired adsorption kinetics behavior? The traditional approach used to address this design problem involves conducting trial-and-error experiments with specific surfactant mixtures. This approach is typically time-consuming and resource-intensive and becomes increasingly challenging when the number of surfactants that can be mixed increases. In this article, we propose a new theoretical framework to identify a surfactant mixture that most closely meets a desired adsorption kinetics behavior. Specifically, the new theoretical framework involves (a) formulating the surfactant mixture design problem as an optimization problem using an adsorption kinetics model and (b) solving the optimization problem using a commercial optimization package. The proposed framework aims to identify the surfactant mixture that most closely satisfies the desired adsorption kinetics behavior subject to the predictive capabilities of the chosen adsorption kinetics model. Experiments can then be conducted at the identified surfactant mixture condition to validate the predictions. We demonstrate the reliability and effectiveness of the proposed theoretical framework through a realistic case study by identifying a nonionic surfactant mixture consisting of up to four alkyl poly(ethylene oxide) surfactants (C10E4, C12E5, C12E6, and C10E8) such that it most closely exhibits a desired dynamic surface tension (DST) profile. Specifically, we use the Mulqueen-Stebe-Blankschtein (MSB) adsorption kinetics model (Mulqueen, M.; Stebe, K. J.; Blankschtein, D. Langmuir 2001, 17, 5196-5207) to formulate the optimization problem as well as the SNOPT commercial optimization solver to identify a surfactant mixture consisting of these four surfactants that most closely exhibits the desired DST profile. 
Finally, we compare the experimental DST profile measured at the surfactant mixture condition identified by the new theoretical framework with the desired DST profile and find good agreement between the two profiles.
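The optimization formulation in step (a) can be caricatured as a constrained least-squares fit of mixture fractions to a target profile. The sketch below substitutes a toy exponential model for the MSB adsorption kinetics model and SciPy's SLSQP solver for SNOPT; every number is invented:

```python
import numpy as np
from scipy.optimize import minimize

# Toy version of the design problem: choose mole fractions x of four
# surfactants so a modeled DST profile matches a target profile. The
# exponential "model" below is a placeholder, not the MSB model.
t = np.linspace(0.1, 100.0, 50)                        # time grid (s)
basis = np.array([50.0 * np.exp(-t / tau) + 30.0       # per-surfactant DST curves
                  for tau in (1.0, 5.0, 20.0, 60.0)])
target = 0.2 * basis[0] + 0.5 * basis[2] + 0.3 * basis[3]

def objective(x):
    # Least-squares mismatch between modeled and desired DST profiles.
    return np.sum((x @ basis - target) ** 2)

cons = {"type": "eq", "fun": lambda x: x.sum() - 1.0}  # fractions sum to 1
res = minimize(objective, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, constraints=cons)
print(res.x)  # should recover roughly [0.2, 0.0, 0.5, 0.3]
```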

  8. Mixture toxicity revisited from a toxicogenomic perspective.

    PubMed

    Altenburger, Rolf; Scholz, Stefan; Schmitt-Jansen, Mechthild; Busch, Wibke; Escher, Beate I

    2012-03-06

The advent of new genomic techniques has raised expectations that central questions of mixture toxicology, such as the mechanisms of low-dose interactions, can now be answered. This review provides an overview of experimental studies from the past decade that address diagnostic and/or mechanistic questions regarding the combined effects of chemical mixtures using toxicogenomic techniques. From 2002 to 2011, 41 studies were published with a focus on mixture toxicity assessment. Multiplexed quantification of gene transcripts was the primary approach, though metabolomic and proteomic analyses of joint exposures have also been undertaken. It is now standard to explicitly state criteria for selecting concentrations and to provide insight into data transformation and statistical treatment with respect to minimizing sources of undue variability. Bioinformatic analysis of toxicogenomic data, by contrast, is still a field with diverse and rapidly evolving tools. The reported combined-effect assessments are discussed in the light of established toxicological dose-response and mixture toxicity models. Receptor-based assays seem the most advanced toward establishing quantitative relationships between exposure and biological responses. Transcriptomic responses are often discussed based on the presence or absence of signals, where the interpretation may remain ambiguous due to methodological problems. Most mixture studies are designed to compare the recorded mixture outcome against responses for the individual components only. This stands in stark contrast to our existing understanding of joint biological activity at the levels of chemical target interactions and apical combined effects. By joining established mixture effect models with toxicokinetic and toxicodynamic thinking, we suggest a conceptual framework that may help to overcome the current limitation of providing mainly anecdotal evidence on mixture effects. 
To achieve this, we suggest (i) designing studies to establish quantitative relationships between the dose and time dependency of responses and (ii) adopting mixture toxicity models. Moreover, (iii) novel bioinformatic tools and (iv) stress response concepts could help translate multiple responses into hypotheses on the relationships between general stress and specific toxicity reactions of organisms.

  9. Multiscale Constitutive Modeling of Asphalt Concrete

    NASA Astrophysics Data System (ADS)

    Underwood, Benjamin Shane

Multiscale modeling of asphalt concrete has become a popular technique for gaining improved insight into the physical mechanisms that affect the material's behavior and ultimately its performance. This type of modeling considers asphalt concrete not as a homogeneous mass, but rather as an assemblage of materials at different characteristic length scales. For proper modeling, these characteristic scales should be functionally definable and should have known properties. Thus far, research in this area has not focused significant attention on functionally defining what the characteristic scales within asphalt concrete should be. Instead, many have made assumptions about the characteristic scales, and even the characteristic behaviors of these scales, with little to no support. This research addresses these shortcomings by directly evaluating the microstructure of the material and using the results to create materials of the different characteristic length scales as they exist within the asphalt concrete mixture. The objectives of this work are to: (1) develop mechanistic models for the linear viscoelastic (LVE) and damage behaviors of asphalt concrete at different length scales and (2) develop a mechanistic, mechanistic/empirical, or phenomenological formulation to link the different length scales into a model capable of predicting the effects of microstructural changes on the linear viscoelastic behaviors of asphalt concrete mixture, i.e., a microstructure association model for asphalt concrete mixture. Through the microstructural study, it is found that asphalt concrete mixture can be considered a build-up of three different phases: asphalt mastic, fine aggregate matrix (FAM), and the coarse aggregate particles. The asphalt mastic is found to exist as a homogeneous material throughout the mixture and FAM, and the filler content within this material is consistent with the volumetrically averaged concentration, which can be calculated from the job mix formula. 
It is also found that the maximum aggregate size of the FAM is mixture dependent, but consistent with a gradation parameter from the Bailey method of mixture design. Mechanistic modeling of these different length scales reveals that although many consider asphalt concrete to be an LVE material, it is in fact only quasi-LVE because it shows some tendencies that are inconsistent with LVE theory. Asphalt FAM and asphalt mastic show similar nonlinear tendencies, although the exact magnitude of the effect differs. These tendencies can be ignored for damage modeling at the mixture and FAM scales as long as the effects are consistently ignored, but it is found that they must be accounted for in mastic and binder damage modeling. The viscoelastic continuum damage (VECD) model is used for damage modeling in this research. To aid in characterization and application of the VECD model for cyclic testing, a simplified version (S-VECD) is rigorously derived and verified. Through the modeling efforts at each scale, various factors affecting the fundamental and engineering properties at each scale are observed and documented. A microstructure association model that accounts for particle interaction through physico-chemical processes and for the effects of aggregate structuralization is developed to link the moduli at each scale. This model is shown to be capable of upscaling the mixture modulus from either the experimentally determined mastic modulus or the FAM modulus. Finally, an initial attempt at upscaling the damage and nonlinearity phenomena is presented.

  10. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
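The generative structure described above (a Poisson-distributed number of distinct atoms, each carrying a logarithmically distributed count, so that the total count is negative-binomially distributed) can be sketched directly. The sampler names and parameterization below are illustrative, not the paper's notation:

```python
import math
import random

def _poisson(lam, rng):
    # Knuth's product method; adequate for the modest rates used here.
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

def _logarithmic(p, rng):
    # Inverse-CDF draw from the logarithmic (log-series) distribution,
    # P(K = k) = -p**k / (k * log(1 - p)),  k = 1, 2, ...
    u, k, cdf = rng.random(), 1, 0.0
    norm = -1.0 / math.log1p(-p)
    while True:
        cdf += norm * p**k / k
        if u <= cdf:
            return k
        k += 1

def nb_process_draw(gamma_mass, p, rng):
    """One draw: Poisson-many distinct atoms, each with a logarithmic count.
    The total count across atoms is then NegBin(gamma_mass, p) distributed."""
    n_atoms = _poisson(-gamma_mass * math.log1p(-p), rng)
    return [_logarithmic(p, rng) for _ in range(n_atoms)]
```

The compound Poisson-logarithmic construction is exactly why the total count is negative binomial, which is the Poisson-logarithmic bivariate connection the abstract mentions.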

  11. Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.

    PubMed

    Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi

    2013-12-01

Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry have many missing values, which arise when a compound is absent from a sample or is present at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes that all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation of a mixture model with those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point-mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, whereas estimates from the mixture model were unbiased except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrate this approach through application to glycomics data from serum samples of women with ovarian cancer and matched controls.
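The two missingness mechanisms the abstract contrasts can be made concrete with a small simulation (all numbers are invented for illustration): a value is missing either because the compound is truly absent (a point mass) or because it falls below the detection limit (censoring). Treating every missing value as censored, as the AFT model does, overstates the censored fraction whenever true absences exist.

```python
import math
import random

rng = random.Random(1)
LOD = 1.0          # detection limit (illustrative)
PI_ABSENT = 0.3    # true point-mass (absence) proportion (illustrative)

observations = []
for _ in range(50_000):
    if rng.random() < PI_ABSENT:
        observations.append(None)                     # absent -> missing
    else:
        x = math.exp(rng.gauss(0.5, 1.0))             # lognormal concentration
        observations.append(x if x >= LOD else None)  # below LOD -> missing

detected = [x for x in observations if x is not None]
frac_missing = 1.0 - len(detected) / len(observations)
# Under the pure-censoring (AFT) view, all of frac_missing is attributed to
# the detection limit, though only part of it actually is.
```

A mixture model instead estimates the absence proportion and the censored tail separately, which is what the abstract recommends for estimation.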

  12. MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS

    EPA Science Inventory

    Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...

  13. Kinetic model of water disinfection using peracetic acid including synergistic effects.

    PubMed

    Flores, Marina J; Brandi, Rodolfo J; Cassano, Alberto E; Labas, Marisol D

    2016-01-01

    The disinfection efficiencies of a commercial mixture of peracetic acid against Escherichia coli were studied in laboratory scale experiments. The joint and separate action of two disinfectant agents, hydrogen peroxide and peracetic acid, were evaluated in order to observe synergistic effects. A kinetic model for each component of the mixture and for the commercial mixture was proposed. Through simple mathematical equations, the model describes different stages of attack by disinfectants during the inactivation process. Based on the experiments and the kinetic parameters obtained, it could be established that the efficiency of hydrogen peroxide was much lower than that of peracetic acid alone. However, the contribution of hydrogen peroxide was very important in the commercial mixture. It should be noted that this improvement occurred only after peracetic acid had initiated the attack on the cell. This synergistic effect was successfully explained by the proposed scheme and was verified by experimental results. Besides providing a clearer mechanistic understanding of water disinfection, such models may improve our ability to design reactors.

  14. Weaker Ligands Can Dominate an Odor Blend due to Syntopic Interactions

    PubMed Central

    2013-01-01

    Most odors in natural environments are mixtures of several compounds. Perceptually, these can blend into a new “perfume,” or some components may dominate as elements of the mixture. In order to understand such mixture interactions, it is necessary to study the events at the olfactory periphery, down to the level of single-odorant receptor cells. Does a strong ligand present at a low concentration outweigh the effect of weak ligands present at high concentrations? We used the fruit fly receptor dOr22a and a banana-like odor mixture as a model system. We show that an intermediate ligand at an intermediate concentration alone elicits the neuron’s blend response, despite the presence of both weaker ligands at higher concentration, and of better ligands at lower concentration in the mixture. Because all of these components, when given alone, elicited significant responses, this reveals specific mixture processing already at the periphery. By measuring complete dose–response curves we show that these mixture effects can be fully explained by a model of syntopic interaction at a single-receptor binding site. Our data have important implications for how odor mixtures are processed in general, and what preprocessing occurs before the information reaches the brain. PMID:23315042

  15. The CPA Equation of State and an Activity Coefficient Model for Accurate Molar Enthalpy Calculations of Mixtures with Carbon Dioxide and Water/Brine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myint, P. C.; Hao, Y.; Firoozabadi, A.

    2015-03-27

Thermodynamic property calculations of mixtures containing carbon dioxide (CO2) and water, including brines, are essential in theoretical models of many natural and industrial processes. The properties of greatest practical interest are density, solubility, and enthalpy. Many models for density and solubility calculations have been presented in the literature, but there exists only one study, by Spycher and Pruess, that has compared theoretical molar enthalpy predictions with experimental data [1]. In this report, we recommend two different models for enthalpy calculations: the CPA equation of state by Li and Firoozabadi [2], and the CO2 activity coefficient model by Duan and Sun [3]. We show that the CPA equation of state, which has been demonstrated to provide good agreement with density and solubility data, also accurately calculates molar enthalpies of pure CO2, pure water, and both CO2-rich and aqueous (H2O-rich) mixtures of the two species. It is applicable to a wider range of conditions than the Spycher and Pruess model. In aqueous sodium chloride (NaCl) mixtures, we show that Duan and Sun's model yields accurate results for the partial molar enthalpy of CO2. It can be combined with another model for the brine enthalpy to calculate the molar enthalpy of H2O-CO2-NaCl mixtures. We conclude by explaining how the CPA equation of state may be modified to further improve agreement with experiments. This generalized CPA is the basis of our future work on this topic.

  16. Advanced stability indicating chemometric methods for quantitation of amlodipine and atorvastatin in their quinary mixture with acidic degradation products.

    PubMed

    Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A

    2016-02-05

Two advanced, accurate and precise chemometric methods are developed for the simultaneous determination of amlodipine besylate (AML) and atorvastatin calcium (ATV) in the presence of their acidic degradation products in tablet dosage forms. The first method was Partial Least Squares (PLS-1) and the second was Artificial Neural Networks (ANN). PLS was compared to ANN models with and without a variable selection procedure (genetic algorithm, GA). For proper analysis, a 5-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the interfering species. Fifteen mixtures were used as the calibration set and the other ten as the validation set to validate the prediction ability of the suggested models. The proposed methods were successfully applied to the analysis of pharmaceutical tablets containing AML and ATV. The results demonstrate the ability of the models to resolve the highly overlapped spectra of the quinary mixture while using inexpensive and easy-to-handle instruments such as the UV-VIS spectrophotometer. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling.

    PubMed

    Heggeseth, Brianna C; Jewell, Nicholas P

    2013-07-20

Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence: that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or when the assumed correlation is close to the truth, even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Original predictive approach to the compressibility of pharmaceutical powder mixtures based on the Kawakita equation.

    PubMed

    Mazel, Vincent; Busignies, Virginie; Duca, Stéphane; Leclerc, Bernard; Tchoreloff, Pierre

    2011-05-30

    In the pharmaceutical industry, tablets are obtained by the compaction of two or more components which have different physical properties and compaction behaviours. Therefore, it could be interesting to predict the physical properties of the mixture using the single-component results. In this paper, we have focused on the prediction of the compressibility of binary mixtures using the Kawakita model. Microcrystalline cellulose (MCC) and L-alanine were compacted alone and mixed at different weight fractions. The volume reduction, as a function of the compaction pressure, was acquired during the compaction process ("in-die") and after elastic recovery ("out-of-die"). For the pure components, the Kawakita model is well suited to the description of the volume reduction. For binary mixtures, an original approach for the prediction of the volume reduction without using the effective Kawakita parameters was proposed and tested. The good agreement between experimental and predicted data proved that this model was efficient to predict the volume reduction of MCC and L-alanine mixtures during compaction experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
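The Kawakita model referred to above relates volume reduction to pressure as C(P) = abP/(1 + bP), with C = (V0 - V)/V0. One simple additivity assumption for a binary mixture, volume-averaging the single-component curves, can be sketched as follows; the parameter values used in testing are illustrative, not the fitted MCC/L-alanine values:

```python
def kawakita_C(P, a, b):
    """Kawakita compressibility: C = (V0 - V)/V0 = a*b*P / (1 + b*P)."""
    return a * b * P / (1.0 + b * P)

def mixture_C(P, components):
    """Predict mixture compressibility by volume-averaging single-component
    Kawakita curves (a simple additivity assumption).
    components: list of (initial volume fraction, a, b)."""
    relative_volume = sum(w * (1.0 - kawakita_C(P, a, b))
                          for w, a, b in components)
    return 1.0 - relative_volume
```

Whether this particular averaging matches the authors' exact prediction scheme is an assumption; the sketch only shows how single-component Kawakita curves can be combined without fitting effective mixture parameters.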

  19. Modeling CO2 mass transfer in amine mixtures: PZ-AMP and PZ-MDEA.

    PubMed

    Puxty, Graeme; Rowland, Robert

    2011-03-15

The most common method of carbon dioxide (CO(2)) capture is the absorption of CO(2) into a falling thin film of an aqueous amine solution. Modeling mass transfer during CO(2) absorption is an important way to gain insight into the underlying processes that occur. In this work, a new software tool has been used to model CO(2) absorption into aqueous piperazine (PZ) and binary mixtures of PZ with 2-amino-2-methyl-1-propanol (AMP) or methyldiethanolamine (MDEA). The tool solves partial differential and simultaneous equations describing diffusion and chemical reaction, automatically derived from reactions written in chemical notation. It has been demonstrated that, by using reactions that are chemically plausible, the mass transfer in binary mixtures can be fully described by combining the chemical reactions and their associated parameters determined for the single amines. The observed enhanced mass transfer in binary mixtures can be explained through chemical interactions occurring in the mixture without needing to resort to additional reactions or unusual transport phenomena such as the "shuttle mechanism".

  20. An approach for evaluating the respiratory irritation of mixtures: application to metalworking fluids.

    PubMed

    Schaper, M M; Detwiler-Okabayashi, K A

    1995-01-01

Recently, the sensory and pulmonary irritating properties of ten metalworking fluids (MWFs) were assessed using a mouse bioassay. The relative potency of the MWFs was estimated, but it was not possible to identify the component(s) responsible for the respiratory irritation induced by each MWF. One of the ten fluids, MWF "E", produced sensory and pulmonary irritation in mice, and it was of moderate potency in comparison to the other nine MWFs. MWF "E" had three major components: tall oil fatty acids (TOFA), sodium sulfonate (SA), and paraffinic oil (PO). In the present study, the sensory and pulmonary irritating properties of these individual components of MWF "E" were evaluated. Mixtures of the three components were also prepared and similarly evaluated. This analysis revealed that the sensory irritation from MWF "E" was largely due to TOFA, whereas SA produced the pulmonary irritation observed with MWF "E". Both TOFA and SA were more potent irritants than MWF "E", and the potency of TOFA and/or SA was diminished through combination with PO. There was no evidence of synergism among the components when combined to form MWF "E". This approach for identifying the biologically "active" component(s) in a mixture should be useful for other MWFs. Furthermore, the approach should be easily adapted for other applications involving concerns with mixtures.

  1. Molecular simulations of Hugoniots of detonation product mixtures at chemical equilibrium: Microscopic calculation of the Chapman-Jouguet state

    NASA Astrophysics Data System (ADS)

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-08-01

In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. An NPT simulation at PCJ and TCJ is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: heat capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several usual energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.
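The AE-EOS step constrains each state to the Rankine-Hugoniot energy relation, E - E0 = (P + P0)(V0 - V)/2. As a minimal illustration of that constraint, the sketch below solves it for an ideal-gas equation of state (E = PV/(γ - 1)) by bisection; the simulated equation of state in the paper is far richer, and all numbers here are illustrative.

```python
def hugoniot_pressure(v_ratio, gamma=1.4, P0=1.0, V0=1.0):
    """Pressure on the ideal-gas Hugoniot at compression V/V0 = v_ratio.
    v_ratio must exceed the limiting compression (gamma-1)/(gamma+1)."""
    V = v_ratio * V0
    E0 = P0 * V0 / (gamma - 1.0)

    def residual(P):
        E = P * V / (gamma - 1.0)
        return E - E0 - 0.5 * (P + P0) * (V0 - V)  # Hugoniot energy relation

    lo, hi = P0, 1e6 * P0   # residual is negative at lo, positive at hi
    for _ in range(200):    # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

In the paper this root-finding happens self-consistently with the ReMC chemical-equilibrium step, since the composition, and hence the EOS, changes along the curve.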

  2. Molecular simulations of Hugoniots of detonation product mixtures at chemical equilibrium: microscopic calculation of the Chapman-Jouguet state.

    PubMed

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-08-28

In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. An NPT simulation at P(CJ) and T(CJ) is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: heat capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several usual energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.

  3. A high-resolution Godunov method for compressible multi-material flow on overlapping grids

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Schwendeman, D. W.; Kapila, A. K.; Henshaw, W. D.

    2007-04-01

    A numerical method is described for inviscid, compressible, multi-material flow in two space dimensions. The flow is governed by the multi-material Euler equations with a general mixture equation of state. Composite overlapping grids are used to handle complex flow geometry and block-structured adaptive mesh refinement (AMR) is used to locally increase grid resolution near shocks and material interfaces. The discretization of the governing equations is based on a high-resolution Godunov method, but includes an energy correction designed to suppress numerical errors that develop near a material interface for standard, conservative shock-capturing schemes. The energy correction is constructed based on a uniform-pressure-velocity flow and is significant only near the captured interface. A variety of two-material flows are presented to verify the accuracy of the numerical approach and to illustrate its use. These flows assume an equation of state for the mixture based on the Jones-Wilkins-Lee (JWL) forms for the components. This equation of state includes a mixture of ideal gases as a special case. Flow problems considered include unsteady one-dimensional shock-interface collision, steady interaction of a planar interface and an oblique shock, planar shock interaction with a collection of gas-filled cylindrical inhomogeneities, and the impulsive motion of the two-component mixture in a rigid cylindrical vessel.

  4. Research on odor interaction between aldehyde compounds via a partial differential equation (PDE) model.

    PubMed

    Yan, Luchun; Liu, Jiemin; Qu, Chen; Gu, Xingye; Zhao, Xia

    2015-01-28

In order to explore the odor interaction of binary odor mixtures, a series of odor intensity evaluation tests were performed using both individual components and binary mixtures of aldehydes. Based on the linear relation between the logarithm of the odor activity value and the odor intensity of individual substances, the relationship between the concentrations of individual constituents and their joint odor intensity was investigated by employing a partial differential equation (PDE) model. The results showed that the binary odor interaction was mainly influenced by the mixing ratio of the two constituents, rather than by the concentration level of the odor sample. In addition, an extended PDE model was proposed on the basis of these experiments. Through a series of odor intensity matching tests for several different binary odor mixtures, the extended PDE model proved effective at odor intensity prediction. Furthermore, odorants of the same chemical group and similar odor type exhibited similar characteristics in the binary odor interaction. The overall results suggested that the PDE model is a more interpretable way of demonstrating the odor interactions of binary odor mixtures.
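The starting point mentioned above, a linear relation between odor intensity and the logarithm of the odor activity value (OAV = concentration / odor threshold), can be written down directly. The coefficients `a` and `b` below are illustrative placeholders, not the study's fitted values:

```python
import math

def odor_intensity(conc, threshold, a=1.0, b=2.0):
    """Single-substance odor intensity, assumed linear in log10(OAV),
    where OAV = conc / threshold. a, b are illustrative coefficients."""
    return a * math.log10(conc / threshold) + b
```

The PDE model of the paper then couples these single-substance relations across the two constituents of a mixture; that coupling is not reproduced here.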

  5. Fine-tuning molecular acoustic models: sensitivity of the predicted attenuation to the Lennard-Jones parameters

    NASA Astrophysics Data System (ADS)

    Petculescu, Andi G.; Lueptow, Richard M.

    2005-01-01

In a previous paper [Y. Dain and R. M. Lueptow, J. Acoust. Soc. Am. 109, 1955 (2001)], a model of acoustic attenuation due to vibration-translation and vibration-vibration relaxation in multiple polyatomic gas mixtures was developed. In this paper, the model is improved by treating binary molecular collisions via fully pairwise vibrational transition probabilities. The sensitivity of the model to small variations in the Lennard-Jones parameters, the collision diameter (σ) and potential depth (ε), is investigated for nitrogen-water-methane mixtures. For an N2(98.97%)-H2O(338 ppm)-CH4(1%) test mixture, the transition probabilities and acoustic absorption curves are much more sensitive to σ than they are to ε. Additionally, when the 1% methane is replaced by nitrogen, the resulting mixture [N2(99.97%)-H2O(338 ppm)] becomes considerably more sensitive to changes in σ for water. The current model minimizes the underprediction of the acoustic absorption peak magnitudes reported by S. G. Ejakov et al. [J. Acoust. Soc. Am. 113, 1871 (2003)].

  6. Deformation of debris-ice mixtures

    NASA Astrophysics Data System (ADS)

    Moore, Peter L.

    2014-09-01

    Mixtures of rock debris and ice are common in high-latitude and high-altitude environments and are thought to be widespread elsewhere in our solar system. In the form of permafrost soils, glaciers, and rock glaciers, these debris-ice mixtures are often not static but slide and creep, generating many of the landforms and landscapes associated with the cryosphere. In this review, a broad range of field observations, theory, and experimental work relevant to the mechanical interactions between ice and rock debris are evaluated, with emphasis on the temperature and stress regimes common in terrestrial surface and near-surface environments. The first-order variables governing the deformation of debris-ice mixtures in these environments are debris concentration, particle size, temperature, solute concentration (salinity), and stress. A key observation from prior studies, consistent with expectations, is that debris-ice mixtures are usually more resistant to deformation at low temperatures than their pure end-member components. However, at temperatures closer to melting, the growth of unfrozen water films at ice-particle interfaces begins to reduce the strengthening effect and can even lead to profound weakening. Using existing quantitative relationships from theoretical and experimental work in permafrost engineering, ice mechanics, and glaciology combined with theory adapted from metallurgy and materials science, a simple constitutive framework is assembled that is capable of capturing most of the observed dynamics. This framework highlights the competition between the role of debris in impeding ice creep and the mitigating effects of unfrozen water at debris-ice interfaces.

  7. Effects of Nickel, Chlorpyrifos and Their Mixture on the Dictyostelium discoideum Proteome

    PubMed Central

    Boatti, Lara; Robotti, Elisa; Marengo, Emilio; Viarengo, Aldo; Marsano, Francesco

    2012-01-01

Mixtures of chemicals can have additive, synergistic or antagonistic interactions. We investigated the effects of exposure to nickel, the organophosphate insecticide chlorpyrifos at effect concentrations (EC) of 25% and 50%, and their binary mixture (EC25 + EC25) on Dictyostelium discoideum amoebae, based on lysosomal membrane stability (LMS). We treated D. discoideum with these compounds under controlled laboratory conditions and evaluated the changes in protein levels using a two-dimensional gel electrophoresis (2DE) proteomic approach. Nickel treatment at EC25 induced changes in 14 protein spots, 12 of which were down-regulated. Treatment with nickel at EC50 resulted in changes in 15 spots, 10 of which were down-regulated. Treatment with chlorpyrifos at EC25 induced changes in six spots, all of which were down-regulated; treatment with chlorpyrifos at EC50 induced changes in 13 spots, five of which were down-regulated. The mixture corresponding to EC25 of each compound induced changes in 19 spots, 13 of which were down-regulated. Together, the data reveal that a different protein expression signature exists for each treatment, and that only a few proteins are modulated in multiple different treatments. For a simple binary mixture, the proteomic response does not allow for the identification of each toxicant. The protein spots that showed significant differences were identified by mass spectrometry, which revealed modulations of proteins involved in metal detoxification, stress adaptation, the oxidative stress response and other cellular processes. PMID:23443088

  8. Integral equation model for warm and hot dense mixtures.

    PubMed

    Starrett, C E; Saumon, D; Daligault, J; Hamel, S

    2014-09-01

    In a previous work [C. E. Starrett and D. Saumon, Phys. Rev. E 87, 013104 (2013)] a model for the calculation of electronic and ionic structures of warm and hot dense matter was described and validated. In that model the electronic structure of one atom in a plasma is determined using a density-functional-theory-based average-atom (AA) model and the ionic structure is determined by coupling the AA model to integral equations governing the fluid structure. That model was for plasmas with one nuclear species only. Here we extend it to treat plasmas with many nuclear species, i.e., mixtures, and apply it to a carbon-hydrogen mixture relevant to inertial confinement fusion experiments. Comparison of the predicted electronic and ionic structures with orbital-free and Kohn-Sham molecular dynamics simulations reveals excellent agreement wherever chemical bonding is not significant.

  9. Mixture effects of benzene, toluene, ethylbenzene, and xylenes (BTEX) on lung carcinoma cells via a hanging drop air exposure system.

    PubMed

    Liu, Faye F; Escher, Beate I; Were, Stephen; Duffy, Lesley; Ng, Jack C

    2014-06-16

    A recently developed hanging drop air exposure system for toxicity studies of volatile chemicals was applied to evaluate the cell viability of lung carcinoma A549 cells after 1 and 24 h of exposure to benzene, toluene, ethylbenzene, and xylenes (BTEX) as individual compounds and as mixtures of four or six components. The cellular chemical concentrations causing 50% reduction of cell viability (EC50) were calculated using a mass balance model and came to 17, 12, 11, 9, 4, and 4 mmol/kg cell dry weight for benzene, toluene, ethylbenzene, m-xylene, o-xylene, and p-xylene, respectively, after 1 h of exposure. The EC50 decreased by a factor of 4 after 24 h of exposure. All mixture effects were best described by the mixture toxicity model of concentration addition, which is valid for chemicals with the same mode of action. Good agreement with the model predictions was found for benzene, toluene, ethylbenzene, and m-xylene at four different representative fixed concentration ratios after 1 h of exposure, but lower agreement with the mixture prediction was obtained after 24 h of exposure. A recreated car exhaust mixture, which included contributions from the more toxic p-xylene and o-xylene, was also predicted acceptably, although with lower quality.
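The concentration-addition model named above can be sketched directly from the reported single-compound EC50s. A minimal sketch follows; the equal-ratio mixture is illustrative only, since the study's own fixed concentration ratios are not reproduced in the abstract.

```python
def ca_ec50(ec50s, fractions):
    """Concentration addition: 1/EC50_mix = sum(p_i / EC50_i), valid
    for chemicals with the same mode of action, where p_i is the
    fraction of component i in the mixture (fractions sum to 1)."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

# 1-h single-compound EC50s reported above (mmol/kg cell dry weight):
# benzene, toluene, ethylbenzene, m-xylene, o-xylene, p-xylene
ec50 = [17, 12, 11, 9, 4, 4]
equal = [1 / 6] * 6                 # hypothetical equal-ratio mixture
mix_ec50 = ca_ec50(ec50, equal)     # predicted mixture EC50, ~7.1
```

Note that the predicted mixture EC50 sits between the most and least toxic components, weighted toward the more toxic ones, as concentration addition requires.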

  10. Novel positioning method using Gaussian mixture model for a monolithic scintillator-based detector in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Bae, Seungbin; Lee, Kisung; Seo, Changwoo; Kim, Jungmin; Joo, Sung-Kwan; Joung, Jinhun

    2011-09-01

    We developed a high precision position decoding method for a positron emission tomography (PET) detector that consists of a thick slab scintillator coupled with a multichannel photomultiplier tube (PMT). The DETECT2000 simulation package was used to validate light response characteristics for a 48.8 mm×48.8 mm×10 mm slab of lutetium oxyorthosilicate coupled to a 64 channel PMT. These data were then combined to produce light collection histograms. We employed a Gaussian mixture model (GMM) to parameterize the composite light response with multiple Gaussian mixtures. In the training step, the light photons acquired by the N PMT channels were used as an N-dimensional feature vector and fed into a GMM training procedure to generate optimal parameters for M mixtures. In the positioning step, we decoded the spatial locations of incident photons by evaluating a sample feature vector with respect to the trained mixture parameters. The average spatial resolutions after positioning with four mixtures were 1.1 mm full width at half maximum (FWHM) at the corner and 1.0 mm FWHM at the center section. This indicates that the proposed algorithm achieved high performance in both spatial resolution and positioning bias, especially at the corner section of the detector.
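The train-then-decode idea can be sketched as follows, assuming scikit-learn's GaussianMixture as a stand-in for the paper's GMM training step: one GMM is trained per calibration position, and an event is assigned to the position whose GMM gives it the highest likelihood. The synthetic 8-channel "PMT" features and calibration positions below are hypothetical placeholders, not DETECT2000 output.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_ch, n_events, n_mix = 8, 200, 4

def simulate(center):
    """Fake light-sharing feature vectors peaked at one PMT channel;
    a placeholder for the DETECT2000 light-collection histograms."""
    base = np.exp(-0.5 * ((np.arange(n_ch) - center) / 1.5) ** 2)
    return base + 0.05 * rng.standard_normal((n_events, n_ch))

positions = [1.0, 4.0, 6.5]          # hypothetical calibration points
models = [GaussianMixture(n_components=n_mix, random_state=0).fit(simulate(c))
          for c in positions]

event = simulate(4.0)[:1]            # one unseen event near x = 4.0
decoded = positions[int(np.argmax([m.score_samples(event)[0]
                                   for m in models]))]
```

In the paper the decoded quantity is a continuous position rather than a discrete calibration point, but the likelihood-evaluation step is the same.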

  11. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a Poisson mixture denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
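The tail-behavior argument can be made concrete with a small sketch: a mixture of Poissons is overdispersed (variance exceeds the mean), unlike a single Poisson (variance equals the mean), and that extra tail mass is what such a model can use to match heavy-tailed sensor noise. The rates and mixture weight below are illustrative, not fitted sensor values.

```python
import numpy as np

rng = np.random.default_rng(1)
w, lam1, lam2 = 0.9, 10.0, 40.0        # mixture weight and Poisson rates
n = 200_000
comp = rng.random(n) < w               # pick a component per sample
samples = np.where(comp, rng.poisson(lam1, n), rng.poisson(lam2, n))

mean = samples.mean()
var = samples.var()

# Analytic moments of the two-component Poisson mixture:
m = w * lam1 + (1 - w) * lam2                                   # 13.0
v = m + w * (1 - w) * (lam1 - lam2) ** 2                        # 94.0
assert var > mean                      # overdispersion: heavier tails
```

A single Poisson with the same mean would have variance 13; the mixture's variance of ~94 is where the longer tail comes from.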

  12. Mixture Hidden Markov Models in Finance Research

    NASA Astrophysics Data System (ADS)

    Dias, José G.; Vermunt, Jeroen K.; Ramos, Sofia

    Finite mixture models have proven to be a powerful framework whenever unobserved heterogeneity cannot be ignored. We introduce into finance research the Mixture Hidden Markov Model (MHMM), which takes into account time and space heterogeneity simultaneously. This approach is flexible in the sense that it can deal with the specific features of financial time series data, such as asymmetry, kurtosis, and unobserved heterogeneity. This methodology is applied to simultaneously model 12 time series of Asian stock market indexes. Because we selected a heterogeneous sample of countries including both developed and emerging countries, we expect that heterogeneity in market returns due to country idiosyncrasies will show up in the results. The best-fitting model was the one with two clusters at the country level, with different dynamics between the two regimes.
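The likelihood structure of an MHMM can be sketched compactly: each mixture component (a "cluster" of countries) is an HMM capturing time heterogeneity via regimes, and a series' likelihood is the weight-averaged likelihood over components, capturing cross-series (space) heterogeneity. All parameters below are toy values for a discrete up/down return symbol, not estimates from the study.

```python
import numpy as np

def hmm_likelihood(obs, pi, A, B):
    """Forward algorithm: P(obs) for a discrete-emission HMM with
    initial distribution pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def mhmm_likelihood(obs, weights, hmms):
    """MHMM: a weighted mixture over component HMMs."""
    return sum(w * hmm_likelihood(obs, *h) for w, h in zip(weights, hmms))

# Two clusters, each a 2-state HMM over a binary return symbol (0/1)
h1 = (np.array([0.5, 0.5]),
      np.array([[0.9, 0.1], [0.1, 0.9]]),    # persistent regimes
      np.array([[0.8, 0.2], [0.3, 0.7]]))
h2 = (np.array([0.5, 0.5]),
      np.array([[0.5, 0.5], [0.5, 0.5]]),    # fast regime switching
      np.array([[0.6, 0.4], [0.4, 0.6]]))

obs = [0, 0, 1, 0]
lik = mhmm_likelihood(obs, [0.6, 0.4], [h1, h2])
```

In estimation the weights and per-cluster HMM parameters would be fitted jointly (typically by EM); the sketch only shows the likelihood being mixed.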

  13. Impact of Chemical Proportions on the Acute Neurotoxicity of a Mixture of Seven Carbamates in Preweanling and Adult Rats

    EPA Science Inventory

    Statistical design and environmental relevance are important aspects of studies of chemical mixtures, such as pesticides. We used a dose-additivity model to test experimentally the default assumptions of dose-additivity for two mixtures of seven N-methylcarbamates (carbaryl, carb...

  14. CHANGES IN NUCLEAR TRANSCRIPTION FACTORS IN RAT HIPPOCAMPUS AND CEREBELLUM FOLLOWING DEVELOPMENTAL EXPOSURE TO A COMMERCIAL PCB MIXTURE.

    EPA Science Inventory

    Polychlorinated biphenyls (PCBs) offer a unique model to understand the major issues related to complex environmental mixtures. These pollutants are ubiquitous and exist as mixtures of several congeners in the environment. Human exposures to PCBs are associated with a variety of ...

  15. CHANGES IN HIPPOCAMPAL SPINE DENSITY AND PROTEIN KINASE C ISOFORMS FOLLOWING DEVELOPMENTAL EXPOSURE TO A MIXTURE OF PERSISTENT CHEMICALS.

    EPA Science Inventory

    Polychlorinated biphenyls (PCBs) offer a unique model to understand the major issues related to complex environmental mixtures of persistent chemicals. These pollutants are ubiquitous, persistent, bioaccumulate in human body through the food chain, and exist as mixtures of severa...

  16. Joint model-based clustering of nonlinear longitudinal trajectories and associated time-to-event data analysis, linked by latent class membership: with application to AIDS clinical studies.

    PubMed

    Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam

    2017-10-27

    Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by releasing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance, and be associated with clinically important time-to-event data. This article develops a joint modeling approach to a finite mixture of NLME models for longitudinal data and a proportional hazards Cox model for time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and a naive two-step model, in which the finite mixture model and the Cox model are fitted separately.

  17. Combined effects of pharmaceuticals, personal care products, biocides and organic contaminants on the growth of Skeletonema pseudocostatum.

    PubMed

    Petersen, Karina; Heiaas, Harald Hasle; Tollefsen, Knut Erik

    2014-05-01

    Organisms in the environment are exposed to a number of pollutants from different compound groups. In addition to classic pollutants like the polychlorinated biphenyls, polyaromatic hydrocarbons (PAHs), alkylphenols, biocides, etc., other compound groups of concern are constantly emerging. Pharmaceuticals and personal care products (PPCPs) can be expected to co-occur with other organic contaminants like biocides, PAHs and alkylphenols in areas affected by wastewater, industrial effluents and intensive recreational activity. In this study, representatives from these four compound groups were tested individually and in mixtures in a growth inhibition assay with the marine alga Skeletonema pseudocostatum (formerly Skeletonema costatum) to determine whether the combined effects could be predicted by models for additive effects: the concentration addition (CA) and independent action (IA) prediction models. The eleven tested compounds reduced the growth of S. pseudocostatum in the microplate test in a concentration-dependent manner. The order of toxicity of these chemicals was irgarol>fluoxetine>diuron>benzo(a)pyrene>thioguanine>triclosan>propranolol>benzophenone 3>cetrimonium bromide>4-tert-octylphenol>endosulfan. Several binary mixtures and a mixture of eight compounds from the four compound groups were tested. All tested mixtures were additive, as the model deviation ratios (the deviation between experimental and predicted effect concentrations) were within a factor of 2 from one or both prediction models (i.e. CA and IA). Interestingly, a concentration-dependent shift from IA to CA, potentially due to activation of similar toxicity pathways at higher concentrations, was observed for the mixture of eight compounds. The combined effects of the multi-compound mixture were clearly additive, and it should therefore be expected that PPCPs, biocides, PAHs and alkylphenols will collectively contribute to the risk in areas contaminated by such complex mixtures. Copyright © 2014 Elsevier B.V. All rights reserved.
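The two additivity predictions and the model deviation ratio (MDR) criterion used above can be sketched as follows. The concentration-response curves and the numbers below are hypothetical; "additive" means the MDR falls within a factor of 2 of a prediction, as in the study.

```python
def ca_ec50(ec50s, fractions):
    """Concentration addition: 1/EC50_mix = sum(p_i / EC50_i)."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_effect(concs, ec50s, slope=1.0):
    """Independent action on log-logistic single-compound curves:
    E_mix = 1 - prod(1 - E_i), with E_i a Hill/log-logistic effect."""
    prod = 1.0
    for c, e in zip(concs, ec50s):
        effect = 1.0 / (1.0 + (e / c) ** slope)
        prod *= 1.0 - effect
    return 1.0 - prod

ec50 = [2.0, 8.0]                  # hypothetical single-compound EC50s
pred = ca_ec50(ec50, [0.5, 0.5])   # CA-predicted mixture EC50 = 3.2
observed = 2.5                     # hypothetical observed mixture EC50
mdr = pred / observed              # model deviation ratio
additive = 0.5 <= mdr <= 2.0       # within a factor of 2 -> additive
```

At each compound's own EC50, IA predicts a combined effect of 1 - 0.5 * 0.5 = 0.75, which is why IA and CA generally diverge even for the same single-compound data.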

  18. Additivity and Interactions in Ecotoxicity of Pollutant Mixtures: Some Patterns, Conclusions, and Open Questions

    PubMed Central

    Rodea-Palomares, Ismael; González-Pleiter, Miguel; Martín-Betancor, Keila; Rosal, Roberto; Fernández-Piñas, Francisca

    2015-01-01

    Understanding the effects of exposure to chemical mixtures is a common goal of pharmacology and ecotoxicology. In risk assessment-oriented ecotoxicology, defining the scope of application of additivity models has received utmost attention in the last 20 years, since they potentially allow one to predict the effect of any chemical mixture relying on individual chemical information only. The gold standard for additivity in ecotoxicology has proven to be Loewe additivity, which gave rise to the so-called Concentration Addition (CA) additivity model. In pharmacology, the search for interactions or deviations from additivity (synergism and antagonism) has similarly captured the attention of researchers over the last 20 years and has resulted in the definition and application of the Combination Index (CI) Theorem. CI is based on Loewe additivity, but focused on the identification and quantification of synergism and antagonism. Although additive models have demonstrated surprisingly good predictive power in chemical mixture risk assessment, concerns still exist due to the occurrence of unpredictable synergism or antagonism in certain experimental situations. In the present work, we summarize the parallel history of development of the CA, Independent Action (IA), and CI models. We also summarize the applicability of these concepts in ecotoxicology and how their information may be integrated, as well as the possibility of prediction of synergism. Inside the box, the main question remaining is whether it is worthwhile to consider departures from additivity in mixture risk assessment and how to predict interactions among certain mixture components. Outside the box, the main question is whether the results observed under the experimental constraints imposed by fractional approaches are a bona fide reflection of what would be expected from chemical mixtures in real-world circumstances. PMID:29051468
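The Combination Index at effect level x can be sketched in a few lines: CI = sum(d_i / Dx_i), where d_i is the dose of compound i in the mixture producing effect x and Dx_i is the dose of i alone producing the same effect. CI < 1 flags synergism, CI = 1 Loewe additivity, and CI > 1 antagonism. The doses below are hypothetical.

```python
def combination_index(mixture_doses, solo_doses):
    """CI = sum(d_i / Dx_i) at a common effect level x; the dose pairs
    must refer to the same effect level for the index to be valid."""
    return sum(d / dx for d, dx in zip(mixture_doses, solo_doses))

# Hypothetical: in the mixture, 1.0 and 2.0 units produce effect x;
# alone, 4.0 and 8.0 units are needed for the same effect.
ci = combination_index([1.0, 2.0], [4.0, 8.0])   # 0.25 + 0.25 = 0.5
verdict = ("synergism" if ci < 1
           else "antagonism" if ci > 1 else "additive")
```

Note that CA additivity is exactly the CI = 1 surface, which is why CI is described above as Loewe additivity refocused on quantifying deviations.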

  19. Catalytic effects of inorganic acids on the decomposition of ammonium nitrate.

    PubMed

    Sun, Jinhua; Sun, Zhanhui; Wang, Qingsong; Ding, Hui; Wang, Tong; Jiang, Chuansheng

    2005-12-09

    In order to evaluate the catalytic effects of inorganic acids on the decomposition of ammonium nitrate (AN), the heat release from the decomposition or reaction of pure AN and of its mixtures with inorganic acids was analyzed with a C80 heat flux calorimeter. Through these experiments, the different reaction mechanisms of AN and its mixtures were analyzed. The chemical reaction kinetic parameters, such as reaction order, activation energy and frequency factor, were calculated from the C80 experimental results for the different samples. Based on these parameters and the thermal runaway models (the Semenov and Frank-Kamenetskii models), the self-accelerating decomposition temperatures (SADTs) of AN and its mixtures were calculated and compared. The results show that the mixtures of AN with acid are less thermally stable than pure AN: the AN decomposition reaction is catalyzed by acid, and the calculated SADTs of AN mixtures with acid are much lower than that of pure AN.
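The Semenov-model step can be sketched numerically: Arrhenius heat generation is balanced against Newtonian heat loss to the surroundings, and the critical ambient temperature (a rough stand-in for the SADT) is the lowest ambient temperature at which no stable steady state exists. All physical parameters below are illustrative placeholders, not the paper's C80-derived kinetics for ammonium nitrate; lowering the activation energy is used here as a crude stand-in for acid catalysis.

```python
import math

R = 8.314                                     # gas constant, J/(mol K)

def is_runaway(t_amb, e_a, qa=1.0e12, us=1.0):
    """True if Arrhenius heat generation exceeds Newtonian heat loss at
    every temperature scanned above t_amb (no stable steady state)."""
    for k in range(1, 2001):                  # scan t_amb .. t_amb+200 K
        t = t_amb + 0.1 * k
        gen = qa * math.exp(-e_a / (R * t))   # heat generation
        loss = us * (t - t_amb)               # heat loss
        if loss >= gen:
            return False                      # stable intersection found
    return True

def critical_ambient(e_a, lo=300.0, hi=550.0, tol=0.1):
    """Bisect for the Semenov-type critical ambient temperature (K)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_runaway(mid, e_a):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

t_pure = critical_ambient(1.0e5)   # baseline activation energy, J/mol
t_acid = critical_ambient(0.9e5)   # lower barrier mimics acid catalysis
assert t_acid < t_pure             # catalyzed mixture runs away sooner
```

The qualitative result matches the abstract: lowering the kinetic barrier drops the critical temperature, which is the mechanism behind the lower SADTs reported for the acid mixtures.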

  20. Measurement Of Multiphase Flow Water Fraction And Water-cut

    NASA Astrophysics Data System (ADS)

    Xie, Cheng-gang

    2007-06-01

    This paper describes a microwave transmission multiphase flow water-cut meter that measures the amplitude attenuation and phase shift across a pipe diameter at multiple frequencies using cavity-backed antennas. The multiphase flow mixture permittivity and conductivity are derived from a unified microwave transmission model for both water- and oil-continuous flows over a wide water-conductivity range; this is far beyond the capability of microwave-resonance-based sensors currently on the market. The water fraction and water cut are derived from a three-component gas-oil-water mixing model using the mixture permittivity or the mixture conductivity and an independently measured mixture density. Water salinity variations caused, for example, by changing formation water or formation/injection water breakthrough can be detected and corrected using an online water-conductivity tracking technique based on the interpretation of the mixture permittivity and conductivity, simultaneously measured by a single-modality microwave sensor.
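The fraction-recovery step can be sketched with a simple mixing rule. The CRIM rule (sqrt of the mixture permittivity is the volume-weighted sum of the phase sqrt-permittivities) is used here as a common stand-in, since the paper's own unified transmission model is not reproduced in the abstract; combined with the independently measured mixture density and the closure constraint, it gives a 3x3 linear system for the three phase fractions. The phase properties below are illustrative round numbers.

```python
import numpy as np

eps = {"gas": 1.0, "oil": 2.2, "water": 68.0}       # rel. permittivity
rho = {"gas": 50.0, "oil": 800.0, "water": 1025.0}  # density, kg/m^3

def fractions(eps_mix, rho_mix):
    """Solve for (phi_gas, phi_oil, phi_water) from the CRIM rule,
    the density mixing rule, and the closure sum(phi) = 1."""
    A = np.array([
        [np.sqrt(eps["gas"]), np.sqrt(eps["oil"]), np.sqrt(eps["water"])],
        [rho["gas"], rho["oil"], rho["water"]],
        [1.0, 1.0, 1.0],
    ])
    b = np.array([np.sqrt(eps_mix), rho_mix, 1.0])
    return np.linalg.solve(A, b)

# Forward-simulate a known mixture, then invert it
phi_true = np.array([0.3, 0.4, 0.3])                # gas, oil, water
eps_mix = (phi_true @ np.sqrt([eps["gas"], eps["oil"], eps["water"]])) ** 2
rho_mix = phi_true @ np.array([rho["gas"], rho["oil"], rho["water"]])
phi = fractions(eps_mix, rho_mix)
water_cut = phi[2] / (phi[1] + phi[2])              # water / liquid
```

The water cut is the water fraction of the liquid phases only, which is why it differs from the total water fraction whenever gas is present.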
