Sample records for constrained mixture model

  1. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess high sensitivity.
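    The idea described above can be sketched in a few lines: each subpopulation's mean response is given by the solution of a shared ODE with a subpopulation-specific rate constant, and the snapshot data are modeled as a Gaussian mixture around those ODE means. The ODE form dx/dt = k(u - x), the noise level, and all parameter values below are hypothetical illustrations, not the authors' NGF/Erk1/2 model.

```python
import math

def ode_mean(k, u, x0, t):
    """Closed-form solution of the toy ODE dx/dt = k*(u - x)."""
    return u + (x0 - u) * math.exp(-k * t)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_loglik(data, t, weights, ks, u=1.0, x0=0.0, sigma=0.05):
    """Log-likelihood of snapshot data at time t under the ODE-constrained mixture:
    each component's mean is pinned to the ODE solution for its rate constant."""
    ll = 0.0
    for x in data:
        dens = sum(w * normal_pdf(x, ode_mean(k, u, x0, t), sigma)
                   for w, k in zip(weights, ks))
        ll += math.log(dens)
    return ll

# Snapshot generated at the ODE means of a fast (k=2.0) and slow (k=0.2) subpopulation.
data = [ode_mean(2.0, 1.0, 0.0, 1.0), ode_mean(0.2, 1.0, 0.0, 1.0)]
ll_true = mixture_loglik(data, 1.0, [0.5, 0.5], [2.0, 0.2])
ll_wrong = mixture_loglik(data, 1.0, [0.5, 0.5], [5.0, 0.01])
```

    Maximizing such a likelihood over the weights and kinetic rates jointly is what lets the method estimate subpopulation sizes and mechanistic parameters from the same data.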

  2. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess high sensitivity. PMID:24992156

  3. Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis

    DTIC Science & Technology

    2005-07-25

    … analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. … The spatial resolution permits different materials to be present in the area covered by a single pixel. The linear mixture model treats the pixel reflectance r as a linear mixture of the endmember signatures m1, m2, …, mP: r = Mα + n (1), where n is included to account for …
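    The linear mixture model r = Mα + n from this snippet can be inverted per pixel by nonnegative least squares. The following sketch uses made-up two-endmember, three-band spectra and a plain projected-gradient solver; real hyperspectral unmixing would use the paper's constrained matrix factorization over a full image, not this toy.

```python
def unmix(r, M, steps=2000, lr=0.1):
    """Nonnegative least squares for abundances alpha in r = M @ alpha + n,
    via projected gradient descent (pure Python, lists of rows)."""
    n_bands, n_end = len(M), len(M[0])
    alpha = [1.0 / n_end] * n_end
    for _ in range(steps):
        # residual = M @ alpha - r
        resid = [sum(M[i][j] * alpha[j] for j in range(n_end)) - r[i]
                 for i in range(n_bands)]
        # gradient = M^T @ residual
        grad = [sum(M[i][j] * resid[i] for i in range(n_bands))
                for j in range(n_end)]
        # gradient step, then project onto the nonnegative orthant
        alpha = [max(0.0, a - lr * g) for a, g in zip(alpha, grad)]
    return alpha

# Two toy endmember spectra (columns of M) over three bands.
M = [[1.0, 0.0],
     [0.5, 0.5],
     [0.0, 1.0]]
r = [0.7, 0.5, 0.3]      # pixel composed of 70% endmember 1, 30% endmember 2
alpha = unmix(r, M)
```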

  4. Style consistent classification of isogenous patterns.

    PubMed

    Sarkar, Prateek; Nagy, George

    2005-01-01

    In many applications of pattern recognition, patterns appear together in groups (fields) that have a common origin. For example, a printed word is usually a field of character patterns printed in the same font. A common origin induces consistency of style in features measured on patterns. The features of patterns co-occurring in a field are statistically dependent because they share the same, albeit unknown, style. Style constrained classifiers achieve higher classification accuracy by modeling such dependence among patterns in a field. Effects of style consistency on the distributions of field-features (concatenation of pattern features) can be modeled by hierarchical mixtures. Each field derives from a mixture of styles, while, within a field, a pattern derives from a class-style conditional mixture of Gaussians. Based on this model, an optimal style constrained classifier processes entire fields of patterns rendered in a consistent but unknown style. In a laboratory experiment, style constrained classification reduced errors on fields of printed digits by nearly 25 percent over singlet classifiers. Longer fields favor our classification method because they furnish more information about the underlying style.
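    The field-feature model above can be illustrated with a deliberately tiny numerical example. Assume (hypothetically) two classes, two styles, 1-D Gaussian features with style-dependent class means, and uniform priors; the field classifier scores each joint labeling by summing over the unknown shared style, p(labels | field) ∝ Σ_s Π_i p(x_i | c_i, s), whereas the singlet classifier marginalizes the style per pattern.

```python
import math, itertools

# Hypothetical class-conditional means per (style, class); equal priors; sigma fixed.
MEANS = {('A', 0): 0.0, ('A', 1): 1.0,
         ('B', 0): 2.0, ('B', 1): 3.0}
SIGMA = 0.5

def pdf(x, mu):
    # Unnormalized Gaussian density (constants cancel under equal priors).
    return math.exp(-0.5 * ((x - mu) / SIGMA) ** 2)

def singlet(x):
    """Classify one pattern alone, marginalizing the style per pattern."""
    score = {c: sum(pdf(x, MEANS[(s, c)]) for s in 'AB') for c in (0, 1)}
    return max(score, key=score.get)

def field(xs):
    """Style-constrained field classifier: all patterns share one unknown style."""
    best, best_score = None, -1.0
    for labels in itertools.product((0, 1), repeat=len(xs)):
        score = sum(math.prod(pdf(x, MEANS[(s, c)]) for x, c in zip(xs, labels))
                    for s in 'AB')
        if score > best_score:
            best, best_score = labels, score
    return best
```

    With these numbers, the ambiguous pattern x = 1.6 is labeled class 0 in isolation (style B, class 0 explains it best), but a co-occurring pattern near 0.0 reveals the field's style as A, flipping the first label to class 1: the style constraint resolves the ambiguity.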

  5. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    PubMed Central

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-01-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127

  6. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate.

    PubMed

    Pradines, Joël R; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-26

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  7. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    NASA Astrophysics Data System (ADS)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  8. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
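    The analytic-convolution property NGMIX exploits reduces, for a single 1-D component, to the fact that the convolution of two Gaussians is again a Gaussian whose means and variances add. The sketch below (not NGMIX code) verifies this against a brute-force numerical convolution.

```python
import math

def gauss(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def convolve_params(mu1, var1, mu2, var2):
    """Convolution of N(mu1, var1) with N(mu2, var2): means and variances add."""
    return mu1 + mu2, var1 + var2

def numeric_conv(x, mu1, var1, mu2, var2, lo=-20.0, hi=20.0, n=4000):
    """Brute-force (f * g)(x) = integral of f(t) g(x - t) dt by Riemann sum."""
    h = (hi - lo) / n
    return sum(gauss(lo + i * h, mu1, var1) * gauss(x - lo - i * h, mu2, var2)
               for i in range(n)) * h

mu_c, var_c = convolve_params(0.0, 1.0, 1.0, 2.0)   # galaxy * PSF, toy 1-D case
```

    Because a mixture of Gaussians convolved with a mixture of Gaussians is itself a (larger) mixture of Gaussians, the model image never requires a Fourier-space convolution.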

  9. A second-generation constrained reaction volume shock tube

    NASA Astrophysics Data System (ADS)

    Campbell, M. F.; Tulgestke, A. M.; Davidson, D. F.; Hanson, R. K.

    2014-05-01

    We have developed a shock tube that features a sliding gate valve in order to mechanically constrain the reactive test gas mixture to an area close to the shock tube endwall, separating it from a specially formulated non-reactive buffer gas mixture. This second-generation Constrained Reaction Volume (CRV) strategy enables near-constant-pressure shock tube test conditions for reactive experiments behind reflected shocks, thereby enabling improved modeling of the reactive flow field. Here we provide details of the design and operation of the new shock tube. In addition, we detail special buffer gas tailoring procedures, analyze the buffer/test gas interactions that occur on gate valve opening, and outline the size range of fuels that can be studied using the CRV technique in this facility. Finally, we present example low-temperature ignition delay time data to illustrate the CRV shock tube's performance.

  10. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    PubMed Central

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938
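    The core of the uncertainty-minimization idea can be sketched for a single normalized rate parameter. This is not the MUM-PCE implementation; it is a one-parameter, linear-Gaussian analogue with invented numbers: the rate parameter x lives in its prior uncertainty interval, the flame speed responds through an assumed linear response surface S(x) ≈ S0 + a·x, and one measurement shrinks the posterior variance of x.

```python
def constrain(S0, a, prior_var, S_meas, meas_var):
    """Posterior mean/variance of a normalized rate parameter x after one
    flame-speed measurement, under a linear response surface S(x) = S0 + a*x
    and Gaussian prior/measurement errors (standard conjugate update)."""
    post_var = 1.0 / (1.0 / prior_var + a * a / meas_var)
    post_mean = post_var * (a * (S_meas - S0) / meas_var)
    return post_mean, post_var

# Invented numbers: nominal flame speed 38 cm/s, sensitivity 4 cm/s per unit x,
# unit prior variance, measurement 40 +/- 1 cm/s.
mean, var = constrain(S0=38.0, a=4.0, prior_var=1.0, S_meas=40.0, meas_var=1.0)
```

    The posterior variance is always smaller than the prior variance when the sensitivity a is nonzero, which is the sense in which a flame-speed target "constrains" the kinetic model; MUM-PCE does this jointly over many parameters and many targets.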

  11. Chemical kinetic model uncertainty minimization through laminar flame speed measurements.

    PubMed

    Park, Okjoo; Veloo, Peter S; Sheen, David A; Tao, Yujie; Egolfopoulos, Fokion N; Wang, Hai

    2016-10-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358-2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel.

  12. Designing a mixture experiment when the components are subject to a nonlinear multiple-component constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Greg F.; Cooley, Scott K.; Vienna, John D.

    This article presents a case study of developing an experimental design for a constrained mixture experiment when the experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this article. The case study involves a 15-component nuclear waste glass example in which SO3 is one of the components. SO3 has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture (PQM) model expressed in the relative proportions of the 14 other components. The PQM model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This article discusses the waste glass example and how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study.

  13. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
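    Two of the ingredients named above are easy to show concretely. The Box-Cox family is y(λ) = (y^λ - 1)/λ for λ ≠ 0, with log(y) as the λ → 0 limit, and the Ymin/Ymax extension rescales a standard sigmoid so the response need not lie in (0, 1). The exact isobologram formula is elided in this record ("[formula: see text]"), so the sketch below covers only these two generic pieces.

```python
import math

def boxcox(y, lam):
    """Box-Cox transform; the lam -> 0 limit is log(y)."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def scaled_logistic(z, ymin, ymax):
    """Sigmoid rescaled between Ymin and Ymax, for responses not bounded by (0, 1)."""
    return ymin + (ymax - ymin) / (1.0 + math.exp(-z))
```

    In a transform-both-sides fit, the same boxcox() is applied to the observed response and to the model prediction before computing residuals, which is what stabilizes their variance and normality.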

  14. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed Central

    Chen, D G; Pounds, J G

    1998-01-01

    The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium. PMID:9860894

  15. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    PubMed Central

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
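    The constrained estimation step above, with the measured intensity modeled as a weighted combination of basis intensities and the weights forced to be a probability vector, can be sketched with a toy basis. The three-point "profiles" below are invented, not SAXS curves, and exponentiated-gradient updates stand in for the paper's convex solver; they keep the weights nonnegative and summing to one by construction.

```python
import math

def estimate_weights(I, B, steps=5000, lr=0.2):
    """Minimize ||I - B w||^2 over the probability simplex (w >= 0, sum w = 1)
    using exponentiated-gradient (mirror descent) updates."""
    n_pts, K = len(B), len(B[0])
    w = [1.0 / K] * K
    for _ in range(steps):
        # residual = B @ w - I ; gradient = B^T @ residual
        resid = [sum(B[i][k] * w[k] for k in range(K)) - I[i] for i in range(n_pts)]
        grad = [sum(B[i][k] * resid[i] for i in range(n_pts)) for k in range(K)]
        # multiplicative update preserves positivity; renormalizing keeps sum w = 1
        w = [wk * math.exp(-lr * g) for wk, g in zip(w, grad)]
        s = sum(w)
        w = [wk / s for wk in w]
    return w

B = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]               # basis "intensity profiles" as columns
I = [0.6, 0.4, 1.0]            # mixture measured with true abundances (0.6, 0.4)
w = estimate_weights(I, B)
```

    The paper's subset-selection step (k-means plus the Cramér-Rao bound) decides how many basis columns such an estimator can reliably support, 5 conformations in their ADK example.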

  16. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    DOE PAGES

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; ...

    2016-07-25

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel.

  17. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel.

  18. The Use of Growth Mixture Modeling for Studying Resilience to Major Life Stressors in Adulthood and Old Age: Lessons for Class Size and Identification and Model Selection.

    PubMed

    Infurna, Frank J; Grimm, Kevin J

    2017-12-15

    Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings differ drastically when constraining the variances to be homogeneous versus heterogeneous across trajectories, with overextraction being more common when constraining the variances to be homogeneous across trajectories. In instances when the data are non-normally distributed, assuming normally distributed data increases the extraction of latent classes. Our findings showcase that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS and avenues for future research.
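    The homogeneity-of-variance constraint the authors vary can be demonstrated in miniature with a 1-D, two-class EM fit. The six data points below are invented (one tight and one diffuse cluster), not HILDA/HRS trajectories, and this is a plain Gaussian mixture rather than a growth mixture model; the point is only that forcing a pooled variance across classes lowers the achievable log-likelihood when the classes genuinely differ in spread.

```python
import math

def npdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def gmm_loglik(data, mu_init, tied, iters=200):
    """EM for a 1-D K-class Gaussian mixture; tied=True imposes the
    homogeneity-of-variance constraint across classes. Returns final log-likelihood."""
    K = len(mu_init)
    w, mu, var = [1.0 / K] * K, list(mu_init), [1.0] * K
    for _ in range(iters):
        # E-step: responsibilities of each class for each observation
        resp = []
        for x in data:
            p = [w[k] * npdf(x, mu[k], var[k]) for k in range(K)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: weights, means, then class variances (floored for stability)
        for k in range(K):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
        if tied:
            pooled = sum(w[k] * var[k] for k in range(K))
            var = [pooled] * K
    return sum(math.log(sum(w[k] * npdf(x, mu[k], var[k]) for k in range(K)))
               for x in data)

data = [-1.2, -1.0, -0.8, 1.0, 2.0, 3.0]   # tight cluster near -1, diffuse near 2
ll_full = gmm_loglik(data, [-1.0, 2.0], tied=False)
ll_tied = gmm_loglik(data, [-1.0, 2.0], tied=True)
```

    The misfit of the tied model is exactly what can be absorbed, spuriously, by extracting extra classes, which is the overextraction mechanism the abstract describes.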

  19. PREDICTING EVAPORATION RATES AND TIMES FOR SPILLS OF CHEMICAL MIXTURES

    EPA Science Inventory


    Spreadsheet and short-cut methods have been developed for predicting evaporation rates and evaporation times for spills (and constrained baths) of chemical mixtures. Steady-state and time-varying predictions of evaporation rates can be made for six-component mixtures, includ...

  20. In situ measurements of the photochemical formation rates and optical properties of organic aerosols in CH4/CO2 mixtures.

    NASA Astrophysics Data System (ADS)

    Adamkovics, M.; Boering, K. A.

    2003-12-01

    The presence of photochemically-generated hazes has a significant impact on radiative transfer in planetary atmospheres. While the rates of particle formation have been inferred from photochemical or microphysical models constrained to match observations, these rates have not been determined experimentally. Thus, the fundamental kinetics of particle formation are not known and remain highly parameterized in planetary atmospheric models. We have developed instrumentation for measuring the formation rates and optical properties of organic aerosols produced by irradiating mixtures of precursor gases via in situ optical (633 nm) scattering and online quadrupole mass spectrometry (1-200 amu). Results for the generation of particulate hydrocarbons from the irradiation of pure, gas-phase CH4 as well as CH4/CO2 mixtures with vacuum ultraviolet (120-160 nm) light, along with simultaneous measurements of the evolution of higher gas-phase hydrocarbons, will be presented.

  21. Effect of Substrate Wetting on the Morphology and Dynamics of Phase Separating Multi-Component Mixture

    NASA Astrophysics Data System (ADS)

    Goyal, Abheeti; Toschi, Federico; van der Schoot, Paul

    2017-11-01

    We study the morphological evolution and dynamics of phase separation of a multi-component mixture in a thin film constrained by a substrate. Specifically, we have explored the surface-directed spinodal decomposition of a multicomponent mixture numerically with free-energy lattice Boltzmann (LB) simulations. The distinguishing feature of this model over the Shan-Chen (SC) model is that we have explicit and independent control over the free energy functional and the equation of state (EoS) of the system. This vastly expands the range of physical systems that can be realistically simulated by LB methods. We investigate the effect of composition, film thickness, and substrate wetting on the phase morphology and the mechanism of growth in the vicinity of the substrate. The phase morphology and average domain size in the vicinity of the substrate fluctuate greatly due to the wetting of the substrate, in both the parallel and perpendicular directions. Additionally, we describe how the model presented here can be extended to include an arbitrary number of fluid components.

  2. Modelling carotid artery adaptations to dynamic alterations in pressure and flow over the cardiac cycle

    PubMed Central

    Cardamone, L.; Valentín, A.; Eberth, J. F.; Humphrey, J. D.

    2010-01-01

    Motivated by recent clinical and laboratory findings of important effects of pulsatile pressure and flow on arterial adaptations, we employ and extend an established constrained mixture framework of growth (change in mass) and remodelling (change in structure) to include such dynamical effects. New descriptors of cell and tissue behavior (constitutive relations) are postulated and refined based on new experimental data from a transverse aortic arch banding model in the mouse that increases pulsatile pressure and flow in one carotid artery. In particular, it is shown that there was a need to refine constitutive relations for the active stress generated by smooth muscle, to include both stress- and stress rate-mediated control of the turnover of cells and matrix and to account for a cyclic stress-mediated loss of elastic fibre integrity and decrease in collagen stiffness in order to capture the reported evolution, over 8 weeks, of luminal radius, wall thickness, axial force and in vivo axial stretch of the hypertensive mouse carotid artery. We submit, therefore, that complex aspects of adaptation by elastic arteries can be predicted by constrained mixture models wherein individual constituents are produced or removed at individual rates and to individual extents depending on changes in both stress and stress rate from normal values. PMID:20484365
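    The turnover idea at the heart of such constrained mixture models (each constituent produced and removed at an individual, stress-mediated rate) can be sketched in a few lines. This is an illustrative toy, not the paper's calibrated relations: the gain K, time constant T, and the omission of the stress-rate terms the authors found necessary are all simplifying assumptions.

```python
def constituent_mass(t_end, dt, m0, T, K, dsigma):
    """Minimal constrained-mixture turnover sketch for one constituent:
    basal production m0/T is scaled up by (1 + K*dsigma) when stress
    deviates from its homeostatic value by a fraction dsigma, while
    removal is first-order with time constant T (forward Euler)."""
    m = m0
    for _ in range(int(t_end / dt)):
        production = (m0 / T) * (1.0 + K * dsigma)   # stress-mediated gain
        removal = m / T                              # first-order degradation
        m += dt * (production - removal)
    return m

# A sustained 25% stress excess with gain K = 2 drives the mass toward
# the new equilibrium m0 * (1 + K*dsigma) = 1.5 * m0.
m_inf = constituent_mass(t_end=500.0, dt=0.1, m0=1.0, T=10.0, K=2.0, dsigma=0.25)
```

    The point of the sketch is the structure: because production and removal rates are separate, a sustained change in stress shifts the equilibrium mass rather than merely scaling it, which is what lets such models grow or resorb individual constituents independently.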

  3. Informing Aerosol Transport Models With Satellite Multi-Angle Aerosol Measurements

    NASA Technical Reports Server (NTRS)

    Limbacher, J.; Patadia, F.; Petrenko, M.; Martin, M. Val; Chin, M.; Gaitley, B.; Garay, M.; Kalashnikova, O.; Nelson, D.; Scollo, S.

    2011-01-01

    As the aerosol products from the NASA Earth Observing System's Multi-angle Imaging SpectroRadiometer (MISR) mature, we are placing greater focus on ways of using the aerosol amount and type data products, and aerosol plume heights, to constrain aerosol transport models. We have demonstrated the ability to map aerosol air-mass-types regionally, and have identified product upgrades required to apply them globally, including the need for a quality flag indicating the aerosol type information content, that varies depending upon retrieval conditions. We have shown that MISR aerosol type can distinguish smoke from dust, volcanic ash from sulfate and water particles, and can identify qualitative differences in mixtures of smoke, dust, and pollution aerosol components in urban settings. We demonstrated the use of stereo imaging to map smoke, dust, and volcanic effluent plume injection height, and the combination of MISR and MODIS aerosol optical depth maps to constrain wildfire smoke source strength. This talk will briefly highlight where we stand on these applications, with emphasis on the steps we are taking toward applying these capabilities to constraining aerosol transport models, planet-wide.

  4. [Retrieval of crown closure of moso bamboo forest using unmanned aerial vehicle (UAV) remotely sensed imagery based on geometric-optical model].

    PubMed

    Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long

    2015-05-01

    This research focused on the application of remotely sensed imagery from an unmanned aerial vehicle (UAV) with high spatial resolution to the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differences in estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with a coefficient of determination (R2) of 0.63 at the 0.01 level against the measured values acquired during the field survey. The root mean square error (RMSE) of approximately 0.04 was low, indicating that the use of fully constrained linear SMA could bring about better results in crown closure estimation, closer to the actual condition in moso bamboo forest.
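    A minimal sketch of the fully constrained linear SMA step (abundances nonnegative and summing to one) can be written with nonnegative least squares plus the standard trick of appending a heavily weighted sum-to-one row. The endmember matrix and pixel below are invented for illustration, not taken from the study:

```python
import numpy as np
from scipy.optimize import nnls

def fully_constrained_sma(pixel, endmembers, delta=1e3):
    """Fully constrained linear SMA: abundances are nonnegative and sum
    to one. The sum-to-one constraint is enforced (approximately) by
    appending a heavily weighted row of ones to the endmember matrix
    before solving with nonnegative least squares."""
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    y = np.append(pixel, delta)
    abundances, _ = nnls(E, y)
    return abundances

# Made-up 3-band endmember spectra (columns) and a 60/40 mixed pixel.
E = np.array([[0.2, 0.8],
              [0.4, 0.6],
              [0.9, 0.1]])
pixel = E @ np.array([0.6, 0.4])
a = fully_constrained_sma(pixel, E)
```

    Unconstrained SMA would instead solve the plain least-squares problem, which can return negative or non-physical abundance fractions; the constraints are what tie the estimates to area proportions, which is why the two variants differ in accuracy above.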

  5. Optical Constants of Mars Candidate Materials used to Model Laboratory Reflectance Spectra of Mixtures

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; Brown, Adrian Jon; Blake, D.; Bristow, T.

    2014-01-01

    Data obtained at visible and near-infrared wavelengths by OMEGA on Mars Express and CRISM on MRO provide definitive evidence for the presence of phyllosilicates and other hydrated phases on Mars. A diverse range of both Fe/Mg-OH and Al-OH-bearing phyllosilicates were identified, including the smectites nontronite, saponite, and montmorillonite. To constrain the abundances of these phyllosilicates, spectral analyses of mixtures are needed. We report on our effort to enable the quantitative evaluation of the abundance of hydrated-hydroxylated silicates when they are contained in mixtures. Here we focus on two-component mixtures of the hydrated/hydroxylated silicates saponite and montmorillonite (Mg- and Al-rich smectites) with each other and with two analogs for other Martian materials: pyroxene (enstatite) and palagonitic soil (an alteration product of basaltic glass, hereafter referred to as palagonite). We prepared three size separates of each end-member for study: 20-45, 63-90, and 125-150 micron. Here we focus upon mixtures of the 63-90 micron size fractions.

  6. Phase diagrams of Janus fluids with up-down constrained orientations

    NASA Astrophysics Data System (ADS)

    Fantoni, Riccardo; Giacometti, Achille; Maestre, Miguel Ángel G.; Santos, Andrés

    2013-11-01

    A class of binary mixtures of Janus fluids formed by colloidal spheres with the hydrophobic hemispheres constrained to point either up or down are studied by means of Gibbs ensemble Monte Carlo simulations and simple analytical approximations. These fluids can be experimentally realized by the application of an external static electrical field. The gas-liquid and demixing phase transitions in five specific models with different patch-patch affinities are analyzed. It is found that a gas-liquid transition is present in all the models, even if only one of the four possible patch-patch interactions is attractive. Moreover, provided the attraction between like particles is stronger than between unlike particles, the system demixes into two subsystems with different composition at sufficiently low temperatures and high densities.

  7. Discriminant analysis of fused positive and negative ion mobility spectra using multivariate self-modeling mixture analysis and neural networks.

    PubMed

    Chen, Ping; Harrington, Peter B

    2008-02-01

    A new method coupling multivariate self-modeling mixture analysis and pattern recognition has been developed to identify toxic industrial chemicals using fused positive and negative ion mobility spectra (dual scan spectra). A Smiths lightweight chemical detector (LCD), which can measure positive and negative ion mobility spectra simultaneously, was used to acquire the data. Simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) was used to separate the analytical peaks in the ion mobility spectra from the background reactant ion peaks (RIP). The SIMPLISMA analytical components of the positive and negative ion peaks were combined together in a butterfly representation (i.e., negative spectra are reported with negative drift times and reflected with respect to the ordinate and juxtaposed with the positive ion mobility spectra). Temperature constrained cascade-correlation neural network (TCCCN) models were built to classify the toxic industrial chemicals. Seven common toxic industrial chemicals were used in this project to evaluate the performance of the algorithm. Ten bootstrapped Latin partitions demonstrated that the classification of neural networks using the SIMPLISMA components was statistically better than neural network models trained with fused ion mobility spectra (IMS).
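    The butterfly representation described in the abstract amounts to a simple array fusion; a sketch under the stated convention (argument names are illustrative, not from the paper's software):

```python
import numpy as np

def butterfly(drift_times, pos_spectrum, neg_spectrum):
    """Fuse positive- and negative-mode IMS spectra into the butterfly
    form: negative-mode intensities are reported at negated drift times
    (i.e., mirrored about the ordinate) and juxtaposed with the
    positive-mode spectrum on the positive drift-time axis."""
    x = np.concatenate([-drift_times[::-1], drift_times])
    y = np.concatenate([neg_spectrum[::-1], pos_spectrum])
    return x, y

x, y = butterfly(np.array([1.0, 2.0, 3.0]),
                 np.array([10.0, 20.0, 30.0]),   # positive-mode intensities
                 np.array([5.0, 15.0, 25.0]))    # negative-mode intensities
```

    The fused vector doubles the feature space seen by the downstream classifier, which is how both ionization modes contribute to one pattern-recognition model.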

  8. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background Bioinformatics data analysis often uses a linear mixture model representing samples as an additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis; existing methods, by contrast, factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific or not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated directly from each sample. Due to the locality of the decomposition, the strength of expression of each feature can vary across samples, yet features are still allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, case- and control-specific components can be used for classification, which is not the case with standard factorization methods. 
    Moreover, the component selected by the proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. Unlike standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from the disease- and control-specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally increases prediction accuracy. PMID:22208882

  9. An upscaling method and a numerical analysis of swelling/shrinking processes in a compacted bentonite/sand mixture

    NASA Astrophysics Data System (ADS)

    Xie, M.; Agus, S. S.; Schanz, T.; Kolditz, O.

    2004-12-01

    This paper presents an upscaling concept for swelling/shrinking processes of a compacted bentonite/sand mixture, which also applies to swelling of porous media in general. A constitutive approach for highly compacted bentonite/sand mixture is developed accordingly. The concept is based on the diffuse double layer theory and connects microstructural properties of the bentonite as well as chemical properties of the pore fluid with swelling potential. Main factors influencing the swelling potential of bentonite, i.e. variation of water content, dry density, chemical composition of pore fluid, as well as the microstructures and the amount of swelling minerals, are taken into account. According to the proposed model, porosity is divided into interparticle and interlayer porosity. Swelling is the potential of interlayer porosity increase, which reveals itself as volume change in the case of free expansion, or as swelling pressure in the case of constrained swelling. The constitutive equations for swelling/shrinking are implemented in the software GeoSys/RockFlow as a new chemo-hydro-mechanical model, which is able to simulate isothermal multiphase flow in bentonite. Details of the mathematical and numerical multiphase flow formulations, as well as the code implementation, are described. The proposed model is verified using experimental data of tests on a highly compacted bentonite/sand mixture. Comparison of the 1D modelling results with the experimental data demonstrates the capability of the proposed model to satisfactorily predict free swelling of the material under investigation.

  10. A simulation assessment of the thermodynamics of dense ion-dipole mixtures with polarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bastea, Sorin, E-mail: sbastea@llnl.gov

    Molecular dynamics (MD) simulations are employed to ascertain the relative importance of various electrostatic interaction contributions, including induction interactions, to the thermodynamics of dense, hot ion-dipole mixtures. In the absence of polarization, we find that an MD-constrained free energy term accounting for the ion-dipole interactions, combined with well tested ionic and dipolar contributions, yields a simple, fairly accurate free energy form that may be a better option for describing the thermodynamics of such mixtures than the mean spherical approximation (MSA). Polarization contributions induced by the presence of permanent dipoles and ions are found to be additive to a good approximation, simplifying the thermodynamic modeling. We suggest simple free energy corrections that account for these two effects, based in part on standard perturbative treatments and partly on comparisons with MD simulation. Even though the proposed approximations likely need further study, they provide a first quantitative assessment of polarization contributions at high densities and temperatures and may serve as a guide for future modeling efforts.

  11. Thermodynamic optimization of mixed refrigerant Joule-Thomson systems constrained by heat transfer considerations

    NASA Astrophysics Data System (ADS)

    Hinze, J. F.; Klein, S. A.; Nellis, G. F.

    2015-12-01

    Mixed refrigerant (MR) working fluids can significantly increase the cooling capacity of a Joule-Thomson (JT) cycle. The optimization of MRJT systems has been the subject of substantial research. However, most optimization techniques do not model the recuperator in sufficient detail. For example, the recuperator is usually assumed to have a heat transfer coefficient that does not vary with the mixture. Ongoing work at the University of Wisconsin-Madison has shown that the heat transfer coefficients for two-phase flow are approximately three times greater than for a single-phase mixture when the mixture quality is between 15% and 85%. As a result, a system that optimizes a MR without also requiring that the flow be in this quality range may require an extremely large recuperator or not achieve the performance predicted by the model. To ensure optimal performance of the JT cycle, the MR should be selected such that it is entirely two-phase within the recuperator. To determine the optimal MR composition, a parametric study was conducted assuming a thermodynamically ideal cycle. The results of the parametric study are graphically presented on a contour plot in the parameter space consisting of the extremes of the qualities that exist within the recuperator. The contours show constant values of the normalized refrigeration power. This ‘map’ shows the effect of MR composition on the cycle performance, and it can be used to select the MR that provides a high cooling load while also constraining the recuperator to be two-phase. The predicted best MR composition can be used as a starting point for experimentally determining the best MR.

  12. Objective determination of image end-members in spectral mixture analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.

    1993-01-01

    Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members are selected from an image cube (image end-members) that best account for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial and error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.

  13. Characterizing Intimate Mixtures of Materials in Hyperspectral Imagery with Albedo-based and Kernel-based Approaches

    DTIC Science & Technology

    2015-09-01

    scattering albedo (SSA) according to Hapke theory assuming bidirectional scattering at nadir look angles and uses a constrained linear model on the computed ... following Hapke (1993) and Mustard and Pieters (1987), assuming the reflectance spectra are bidirectional. SSA spectra were also generated ... from AVIRIS data collected during a JPL/USGS campaign in response to the Deep Water Horizon (DWH) oil spill incident. Out of the numerous ...

  14. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model building process residual variances are often disregarded and simplifying assumptions made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512

  15. Multinomial N-mixture models improve the applicability of electrofishing for developing population estimates of stream-dwelling Smallmouth Bass

    USGS Publications Warehouse

    Mollenhauer, Robert; Brewer, Shannon K.

    2017-01-01

    Failure to account for variable detection across survey conditions constrains progressive stream ecology and can lead to erroneous stream fish management and conservation decisions. Variable detection not only confounds long-term stream fish population trends; reliable abundance estimates across a wide range of survey conditions are also fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. 
Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.
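    The removal-sampling building block of such models can be sketched for a single site: with per-pass capture probability p, the probability of first capture on pass k is p(1-p)^(k-1), and the counts follow a multinomial whose last cell holds the never-captured fish. The sketch below profiles that likelihood over N for a fixed p; in the actual multinomial N-mixture framework, p is instead modeled across sites as a function of covariates (effort, water clarity, width, depth), and the counts and p here are invented.

```python
import numpy as np

def removal_loglik(N, p, counts):
    """Log-likelihood (up to terms constant in N) of multi-pass removal
    counts at one site, given site abundance N and per-pass capture
    probability p. Cell probability for first capture on pass k is
    p*(1-p)**(k-1); uncaptured animals fall in the remaining cell."""
    counts = np.asarray(counts)
    k = np.arange(1, len(counts) + 1)
    pi = p * (1 - p) ** (k - 1)
    caught = counts.sum()
    if N < caught:
        return -np.inf
    return (np.sum(counts * np.log(pi))
            + (N - caught) * np.log(1 - pi.sum())
            + np.log(np.arange(N - caught + 1, N + 1)).sum())  # log N!/(N-caught)!

counts = [40, 24, 14]              # invented 3-pass removal counts
Ns = np.arange(sum(counts), 300)   # profile the likelihood over N
lls = [removal_loglik(N, 0.4, counts) for N in Ns]
N_hat = int(Ns[np.argmax(lls)])
```

    Sharing detection parameters across sites through covariates, rather than profiling each site separately as above, is what produces the tighter confidence intervals the authors report.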

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlou, A. T.; Betzler, B. R.; Burke, T. P.

    Uncertainties in the composition and fabrication of fuel compacts for the Fort St. Vrain (FSV) high temperature gas reactor have been studied by performing eigenvalue sensitivity studies that represent the key uncertainties for the FSV neutronic analysis. The uncertainties for the TRISO fuel kernels were addressed by developing a suite of models for an 'average' FSV fuel compact that models the fuel as (1) a mixture of two different TRISO fuel particles representing fissile and fertile kernels, (2) a mixture of four different TRISO fuel particles representing small and large fissile kernels and small and large fertile kernels and (3) a stochastic mixture of the four types of fuel particles where every kernel has its diameter sampled from a continuous probability density function. All of the discrete diameter and continuous diameter fuel models were constrained to have the same fuel loadings and packing fractions. For the non-stochastic discrete diameter cases, the MCNP compact model arranged the TRISO fuel particles on a hexagonal honeycomb lattice. This lattice-based fuel compact was compared to a stochastic compact where the locations (and kernel diameters for the continuous diameter cases) of the fuel particles were randomly sampled. Partial core configurations were modeled by stacking compacts into fuel columns containing graphite. The differences in eigenvalues between the lattice-based and stochastic models were small, but the runtime of the lattice-based fuel model was roughly 20 times shorter than with the stochastic-based fuel model. (authors)

  17. On the Theory of Reactive Mixtures for Modeling Biological Growth

    PubMed Central

    Ateshian, Gerard A.

    2013-01-01

    Mixture theory, which can combine continuum theories for the motion and deformation of solids and fluids with general principles of chemistry, is well suited for modeling the complex responses of biological tissues, including tissue growth and remodeling, tissue engineering, mechanobiology of cells and a variety of other active processes. A comprehensive presentation of the equations of reactive mixtures of charged solid and fluid constituents is lacking in the biomechanics literature. This study provides the conservation laws and entropy inequality, as well as interface jump conditions, for reactive mixtures consisting of a constrained solid mixture and multiple fluid constituents. The constituents are intrinsically incompressible and may carry an electrical charge. The interface jump condition on the mass flux of individual constituents is shown to define a surface growth equation, which predicts deposition or removal of material points from the solid matrix, complementing the description of volume growth described by the conservation of mass. A formulation is proposed for the reference configuration of a body whose material point set varies with time. State variables are defined which can account for solid matrix volume growth and remodeling. Constitutive constraints are provided on the stresses and momentum supplies of the various constituents, as well as the interface jump conditions for the electrochemical potential of the fluids. Simplifications appropriate for biological tissues are also proposed, which help reduce the governing equations into a more practical format. It is shown that explicit mechanisms of growth-induced residual stresses can be predicted in this framework. PMID:17206407

  18. Collaborative Learning across Physical and Virtual Worlds: Factors Supporting and Constraining Learners in a Blended Reality Environment

    ERIC Educational Resources Information Center

    Bower, Matt; Lee, Mark J. W.; Dalgarno, Barney

    2017-01-01

    This article presents the outcomes of a pilot study investigating factors that supported and constrained collaborative learning in a blended reality environment. Pre-service teachers at an Australian university took part in a hybrid tutorial lesson involving a mixture of students who were co-located in the same face-to-face (F2F) classroom along…

  19. Flow of variably fluidized granular masses across three-dimensional terrain I. Coulomb mixture theory

    USGS Publications Warehouse

    Iverson, R.M.; Denlinger, R.P.

    2001-01-01

    Rock avalanches, debris flows, and related phenomena consist of grain-fluid mixtures that move across three-dimensional terrain. In all these phenomena the same basic forces govern motion, but differing mixture compositions, initial conditions, and boundary conditions yield varied dynamics and deposits. To predict motion of diverse grain-fluid masses from initiation to deposition, we develop a depth-averaged, three-dimensional mathematical model that accounts explicitly for solid- and fluid-phase forces and interactions. Model input consists of initial conditions, path topography, basal and internal friction angles of solid grains, viscosity of pore fluid, mixture density, and a mixture diffusivity that controls pore pressure dissipation. Because these properties are constrained by independent measurements, the model requires little or no calibration and yields readily testable predictions. In the limit of vanishing Coulomb friction due to persistent high fluid pressure the model equations describe motion of viscous floods, and in the limit of vanishing fluid stress they describe one-phase granular avalanches. Analysis of intermediate phenomena such as debris flows and pyroclastic flows requires use of the full mixture equations, which can simulate interaction of high-friction surge fronts with more-fluid debris that follows. Special numerical methods (described in the companion paper) are necessary to solve the full equations, but exact analytical solutions of simplified equations provide critical insight. An analytical solution for translational motion of a Coulomb mixture accelerating from rest and descending a uniform slope demonstrates that steady flow can occur only asymptotically. A solution for the asymptotic limit of steady flow in a rectangular channel explains why shear may be concentrated in narrow marginal bands that border a plug of translating debris. 
Solutions for static equilibrium of source areas describe conditions of incipient slope instability, and other static solutions show that nonuniform distributions of pore fluid pressure produce bluntly tapered vertical profiles at the margins of deposits. Simplified equations and solutions may apply in additional situations identified by a scaling analysis. Assessment of dimensionless scaling parameters also reveals that miniature laboratory experiments poorly simulate the dynamics of full-scale flows in which fluid effects are significant. Therefore large geophysical flows can exhibit dynamics not evident at laboratory scales.

  1. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    PubMed

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, often residual variances are disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.
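    The constraint under study can be made concrete with a small EM sketch for a two-class regression mixture. This is an illustrative implementation, not the authors' simulation setup: the data-generating values, the crude median-split initialization, and the fixed iteration count are all invented for the example.

```python
import numpy as np

def em_regression_mixture(x, y, equal_var=False, n_iter=200):
    """Two-class regression mixture y = b0_k + b1_k*x + e_k, e_k ~ N(0, s2_k),
    fit by EM. equal_var=True imposes the equality constraint s2_0 == s2_1
    on the class-specific residual variances."""
    X = np.column_stack([np.ones_like(x), x])
    r = np.where(y > np.median(y), 0.9, 0.1)   # crude init: responsibility for class 0
    beta, s2, w = np.zeros((2, 2)), np.ones(2), np.array([0.5, 0.5])
    for _ in range(n_iter):
        R = np.column_stack([r, 1.0 - r])
        for k in range(2):                      # M-step: per-class weighted least squares
            sw = np.sqrt(R[:, k])
            beta[k] = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
            resid = y - X @ beta[k]
            s2[k] = (R[:, k] * resid**2).sum() / R[:, k].sum()
        if equal_var:                           # pool residual variance across classes
            s2[:] = (R * (y[:, None] - X @ beta.T)**2).sum() / len(y)
        w = R.mean(axis=0)
        dens = np.column_stack([                # E-step: posterior class probabilities
            w[k] * np.exp(-(y - X @ beta[k])**2 / (2 * s2[k]))
            / np.sqrt(2 * np.pi * s2[k]) for k in range(2)])
        r = dens[:, 0] / dens.sum(axis=1)
    return beta, s2, w

# Invented data: two classes with very different residual variances.
rng = np.random.default_rng(0)
n = 400
z = rng.random(n) < 0.5
x = rng.uniform(-1, 1, n)
y = np.where(z, 2 + x + 0.1 * rng.standard_normal(n),
                -2 - x + 1.0 * rng.standard_normal(n))
beta_u, s2_u, w_u = em_regression_mixture(x, y)                  # free variances
beta_c, s2_c, w_c = em_regression_mixture(x, y, equal_var=True)  # constrained
```

    With heterogeneous true variances, the unconstrained fit recovers one small and one large residual variance, while the constrained fit is forced to a pooled value; comparing the two fits' class weights and coefficients reproduces, in miniature, the distortion the simulation study quantifies.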

  2. Sediment unmixing using detrital geochronology

    USGS Publications Warehouse

    Sharman, Glenn R.; Johnstone, Samuel

    2017-01-01

    Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the influence of environmental forcings (e.g., tectonism, climate) on the Earth's surface. Here we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First, we summarize 'top-down' mixing, which has been successfully employed in the past to characterize the fractions of prescribed source distributions ('parents') that make up a derived sample or set of samples ('daughters'). Second, we propose the use of 'bottom-up' methods, previously used primarily for grain-size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures rather than any single best-fit mixture. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has the potential to supplement more widely applied approaches to comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
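
    The 'top-down' step reduces to a constrained least-squares problem: find nonnegative, sum-to-one weights on known parent distributions that best reproduce a daughter distribution. A minimal sketch with invented parent age distributions (not the authors' data or code):

```python
import numpy as np
from scipy.optimize import nnls

def unmix(parents, daughter, weight=100.0):
    """Nonnegative weights, summing to one, that best reproduce `daughter`."""
    # Append a heavily weighted sum-to-one row so NNLS honors the constraint.
    A = np.vstack([parents.T, weight * np.ones(parents.shape[0])])
    b = np.concatenate([daughter, [weight]])
    w, _ = nnls(A, b)
    return w / w.sum()

ages = np.linspace(0.0, 300.0, 301)       # hypothetical age grid, Ma

def peak(mu, sig):
    """Idealized unimodal parent age distribution (synthetic)."""
    p = np.exp(-0.5 * ((ages - mu) / sig) ** 2)
    return p / p.sum()

parents = np.array([peak(80, 10), peak(180, 15), peak(250, 8)])
true_w = np.array([0.5, 0.3, 0.2])
daughter = true_w @ parents               # a 'daughter' mixed from the parents
w = unmix(parents, daughter)
```

    Sweeping over near-optimal weight combinations, rather than reporting only `w`, is the "range of allowable mixtures" point made in the abstract.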

  3. A Survey of Studies on Ignition and Burn of Inertially Confined Fuels

    NASA Astrophysics Data System (ADS)

    Atzeni, Stefano

    2016-10-01

    A survey of studies on ignition and burn of inertial fusion fuels is presented. Potentials and issues of different approaches to ignition (central ignition, fast ignition, volume ignition) are addressed by means of simple models and numerical simulations. Both equimolar DT and T-lean mixtures are considered. Crucial issues concerning hot spot formation (implosion symmetry for central ignition; igniting pulse parameters for fast ignition) are briefly discussed. Recent results concerning the scaling of the ignition energy with the implosion velocity and constrained gain curves are also summarized.

  4. Determining the Water Ice Content of Martian Regolith by Nonlinear Spectral Mixture Modeling

    NASA Technical Reports Server (NTRS)

    Gyalay, S.; Noe Dobrea, E. Z.

    2015-01-01

    In the search for evidence of life, Icebreaker will drill into the Martian ice-rich regolith to collect samples, which will then be analyzed by an array of instruments designed to identify biomarkers. In addition, drilling into the subsurface will provide the opportunity to assess the vertical distribution of ice to a depth of 1 meter. The purpose of this project was to understand the uncertainties involved in using the imaging system to constrain the water ice content of regolith samples.

  5. Accuracy assessment of linear spectral mixture model due to terrain undulation

    NASA Astrophysics Data System (ADS)

    Wang, Tianxing; Chen, Songlin; Ma, Ya

    2008-12-01

    Mixed spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. Over the past 30 years, many subpixel models have been developed to investigate the information within mixed pixels. The linear spectral mixture model (LSMM) is a simple and general subpixel model. LSMM, also known as spectral mixture analysis, is a widely used procedure for determining the proportions of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of LSMM is restricted by a variety of factors, but research on LSMM has mostly focused on appraising nonlinear effects and on techniques for selecting endmembers; environmental conditions of the study area that can sway unmixing accuracy, such as atmospheric scattering and terrain undulation, have received little study. This paper focuses on the accuracy uncertainty of LSMM resulting from terrain undulation. An ASTER dataset was chosen and the C terrain-correction algorithm was applied to it. On this basis, fractional abundances for different cover types were extracted from both pre- and post-correction ASTER imagery using LSMM. Regression analyses and an IKONOS image were used to assess the unmixing accuracy. Results showed that terrain undulation can dramatically constrain the application of LSMM in mountainous areas. Specifically, for vegetation abundances, removing terrain undulation improved the unmixing accuracy (R2) by 17.6% (regression against NDVI) and 18.6% (regression against MVI). Overall, this study indicated quantitatively that effective removal or minimization of terrain illumination effects is essential when applying LSMM, and it provides a new instance of LSMM application in mountainous areas. In addition, the methods employed in this study could be used to evaluate different terrain-correction algorithms in further work.
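
    For reference, the LSMM itself is compact: a pixel spectrum is modeled as a convex combination of endmember spectra, and abundances are recovered by least squares under nonnegativity and sum-to-one constraints. A hedged sketch with synthetic endmembers (not this paper's data):

```python
import numpy as np
from scipy.optimize import minimize

def lsmm_unmix(E, pixel):
    """Abundances a >= 0 with sum(a) = 1 minimizing ||E @ a - pixel||^2."""
    m = E.shape[1]
    res = minimize(lambda a: np.sum((E @ a - pixel) ** 2),
                   x0=np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq",
                                 "fun": lambda a: a.sum() - 1.0}])
    return res.x

# Synthetic endmember spectra over a few band centers (illustration only)
bands = np.linspace(0.4, 2.5, 9)
E = np.column_stack([np.exp(-bands),            # 'vegetation'-like shape
                     bands / 2.5,               # 'soil'-like shape
                     np.full_like(bands, 0.3)]) # flat 'shade'-like shape
true_a = np.array([0.6, 0.3, 0.1])
pixel = E @ true_a                              # noise-free mixed pixel
a = lsmm_unmix(E, pixel)
```

    Terrain effects of the kind studied here enter by scaling `pixel` with a pixel-dependent illumination factor, which violates the fixed-endmember assumption unless corrected first.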

  6. Estimating wetland vegetation abundance from Landsat-8 operational land imager imagery: a comparison between linear spectral mixture analysis and multinomial logit modeling methods

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Gong, Zhaoning; Zhao, Wenji; Pu, Ruiliang; Liu, Ke

    2016-01-01

    Mapping vegetation abundance using remote sensing data is an efficient means of detecting changes in an eco-environment. With Landsat-8 Operational Land Imager (OLI) imagery acquired on July 31, 2013, both linear spectral mixture analysis (LSMA) and multinomial logit model (MNLM) methods were applied to estimate and assess vegetation abundance in the Wild Duck Lake Wetland in Beijing, China. To improve the mapping of vegetation abundance and increase the number of endmembers in spectral mixture analysis, the normalized difference vegetation index was extracted from the OLI imagery and used along with the seven reflective bands of the OLI data. Five endmembers were selected: terrestrial plants, aquatic plants, bare soil, high albedo, and low albedo. The vegetation abundance maps derived from the Landsat OLI data were evaluated against a WorldView-2 multispectral image. The fully constrained LSMA algorithm and the MNLM method produced similar spatial patterns of vegetation abundance: higher abundance levels were distributed in agricultural and riparian areas, and lower levels in urban/built-up areas. The experimental results also indicate that the MNLM model outperformed the LSMA algorithm, with a smaller root mean square error (0.0152 versus 0.0252) and a higher coefficient of determination (0.7856 versus 0.7214), as the MNLM model could handle nonlinear reflection better than the LSMA with mixed pixels.
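
    One way to see why a multinomial-logit formulation suits abundance estimation: a softmax link yields abundances that are nonnegative and sum to one by construction, and it can bend to nonlinear mixing. The sketch below is our illustration with synthetic data, not the authors' model specification:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))            # 200 pixels, 7 OLI-like band features
W_true = rng.normal(size=(7, 5))         # 5 endmember classes
A_ref = softmax(X @ W_true)              # synthetic 'reference' abundances

# Fit W by gradient descent on the squared abundance error
W = np.zeros((7, 5))
for _ in range(3000):
    A = softmax(X @ W)
    G = A - A_ref                        # dLoss/dA (up to a constant)
    # apply the softmax Jacobian row-wise: J^T g = a * (g - (g . a))
    GA = A * (G - (G * A).sum(axis=1, keepdims=True))
    W -= 0.2 * X.T @ GA / len(X)
A_hat = softmax(X @ W)
loss0 = np.mean((softmax(X @ np.zeros((7, 5))) - A_ref) ** 2)
loss = np.mean((A_hat - A_ref) ** 2)
```

    Unlike the fully constrained LSMA, no explicit constraints are needed at fit time; the link function carries them.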

  7. Torsion of DNA modeled as a heterogeneous fluctuating rod

    NASA Astrophysics Data System (ADS)

    Argudo, David; Purohit, Prashant K.

    2014-01-01

    We discuss the statistical mechanics of a heterogeneous elastic rod with bending, twisting and stretching. Our model goes beyond earlier works where only homogeneous rods were considered in the limit of high forces and long lengths. Our methods allow us to consider shorter fluctuating rods for which boundary conditions can play an important role. We use our theory to study structural transitions in torsionally constrained DNA where there is coexistence of states with different effective properties. In particular, we examine whether a newly discovered left-handed DNA conformation called L-DNA is a mixture of two known states. We also use our model to investigate the mechanical effects of the binding of small molecules to DNA. For both these applications we make experimentally falsifiable predictions.

  8. Automatic sub-pixel coastline extraction based on spectral mixture analysis using EO-1 Hyperion data

    NASA Astrophysics Data System (ADS)

    Hong, Zhonghua; Li, Xuesu; Han, Yanling; Zhang, Yun; Wang, Jing; Zhou, Ruyan; Hu, Kening

    2018-06-01

    Many megacities (such as Shanghai) are located in coastal areas; therefore, coastline monitoring is critical for urban security and the sustainability of urban development. A shoreline is defined as the intersection between coastal land and a water surface, and it moves as tides rise and fall. Remote sensing techniques have increasingly been used for coastline extraction; however, traditional hard classification methods operate only at the pixel level, and achieving subpixel accuracy with soft classification methods is both challenging and time consuming due to the complex features in coastal regions. This paper presents an automatic sub-pixel coastline extraction method (ASPCE) for hyperspectral satellite imagery that performs coastline extraction based on spectral mixture analysis and, thus, achieves higher accuracy. The ASPCE method consists of three main components: 1) a Water-Vegetation-Impervious-Soil (W-V-I-S) model is first presented to detect mixed W-V-I-S pixels and determine the endmember spectra in coastal regions; 2) linear spectral unmixing based on Fully Constrained Least Squares (FCLS) is applied to the mixed W-V-I-S pixels to estimate seawater abundance; and 3) a spatial attraction model is used to extract the coastline. We tested this new method using EO-1 Hyperion images from three coastal regions in China: the South China Sea, the East China Sea, and the Bohai Sea. The results showed that the method is accurate and robust. Root mean square error (RMSE) was used to evaluate the accuracy by calculating the distance differences between the extracted coastline and a digitized reference coastline. The classifier's performance was compared with that of Multiple Endmember Spectral Mixture Analysis (MESMA), Mixture Tuned Matched Filtering (MTMF), the Sequential Maximum Angle Convex Cone (SMACC), Constrained Energy Minimization (CEM), and a classical Normalized Difference Water Index (NDWI). The results from the three test sites indicated that the proposed ASPCE method extracted coastlines more efficiently than the compared methods, and its extracted coastlines corresponded closely to the digitized coastline, with RMSEs of 0.39, 0.40, and 0.35 pixels in the three test regions, i.e., an accuracy below 12.0 m (0.40 pixels). Moreover, in the quantitative accuracy assessment, the ASPCE method showed the best performance, achieving 0.35 pixels at the Bohai Sea, China test site. Therefore, the proposed ASPCE method can extract coastlines more accurately than hard classification methods or other spectral unmixing methods.

  9. Stabilization of ammonia-rich hydrate inside icy planets.

    PubMed

    Naden Robinson, Victor; Wang, Yanchao; Ma, Yanming; Hermann, Andreas

    2017-08-22

    The interior structure of the giant ice planets Uranus and Neptune, but also of newly discovered exoplanets, is loosely constrained, because limited observational data can be satisfied with various interior models. Although it is known that their mantles comprise large amounts of water, ammonia, and methane ices, it is unclear how these organize themselves within the planets: as homogeneous mixtures, with continuous concentration gradients, or as well-separated layers of specific composition. While individual ices have been studied in great detail under pressure, the properties of their mixtures are much less explored. We show here, using first-principles calculations, that the 2:1 ammonia hydrate, (H2O)(NH3)2, is stabilized at icy planet mantle conditions due to a remarkable structural evolution. Above 65 GPa, we predict it will transform from a hydrogen-bonded molecular solid into a fully ionic phase O2-(NH4+)2, where all water molecules are completely deprotonated, an unexpected bonding phenomenon not seen before. Ammonia hemihydrate is stable in a sequence of ionic phases up to 500 GPa, pressures found deep within Neptune-like planets, and thus at higher pressures than any other ammonia-water mixture. This suggests it precipitates out of any ammonia-water mixture at sufficiently high pressures and thus forms an important component of icy planets.

  10. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    PubMed

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied as a way to impose constraints on the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization problem, so it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  11. Strangeness driven phase transitions in compressed baryonic matter and their relevance for neutron stars and core collapsing supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raduta, Ad. R.; Gulminelli, F.; Oertel, M.

    2015-02-24

    We discuss the thermodynamics of compressed baryonic matter with strangeness within non-relativistic mean-field models with effective interactions. The phase diagram of the full baryonic octet under strangeness equilibrium is built and discussed in connection with its relevance for core-collapse supernovae and neutron stars. A simplified framework corresponding to (n, p, Λ)(+e) mixtures is employed in order to test the sensitivity of the existence of a phase transition to the (poorly constrained) interaction coupling constants, and the compatibility between important hyperonic abundances and 2 M⊙ neutron stars.

  12. Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies

    NASA Astrophysics Data System (ADS)

    Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu

    2015-09-01

    Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database of remotely sensed observations of Mars collected by the Mars Express orbiter demonstrate the effectiveness of the proposed model.
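
    The inverse-regression idea can be sketched compactly (a simplified stand-in for GLLiM that omits the partially-latent part and the MRF prior): fit a Gaussian mixture on the joint (spectrum, parameter) space, then predict the parameter for a new spectrum as the responsibility-weighted conditional mean. Data below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data: a 1-D physical parameter t generates a 4-band 'spectrum'
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 1000)
S = np.column_stack([np.sin(3 * t), t ** 2, np.cos(5 * t), t]) \
    + rng.normal(0, 0.01, (1000, 4))

d = S.shape[1]
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(np.column_stack([S, t]))

def predict_param(s):
    """Responsibility-weighted conditional mean of t given a spectrum s."""
    logr, cond = [], []
    for k in range(gmm.n_components):
        mu_s = gmm.means_[k, :d]
        mu_t = gmm.means_[k, d:]
        C_ss = gmm.covariances_[k][:d, :d]
        C_ts = gmm.covariances_[k][d:, :d]
        diff = s - mu_s
        sol = np.linalg.solve(C_ss, diff)
        logr.append(np.log(gmm.weights_[k]) - 0.5 * diff @ sol
                    - 0.5 * np.linalg.slogdet(C_ss)[1])
        cond.append(mu_t + C_ts @ sol)       # component conditional mean
    logr = np.array(logr)
    r = np.exp(logr - logr.max())
    r /= r.sum()
    return float(r @ np.array(cond).ravel())

t_hat = predict_param(S[10])
```

    Each Gaussian component acts as one locally-linear mapping; the responsibilities decide which local regressions apply to a given spectrum.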

  13. One-, two- and three-phase viscosity treatments for basaltic lava flows

    PubMed Central

    Harris, Andrew J. L.; Allen, John S.

    2009-01-01

    Lava flows comprise three-phase mixtures of melt, crystals, and bubbles. While existing one-phase treatments allow the melt-phase viscosity to be assessed on the basis of composition, water content, and/or temperature, two-phase treatments constrain the effects of crystallinity or vesicularity on mixture viscosity. However, three-phase treatments, allowing for the effects of coexisting crystallinity and vesicularity, are not well understood. We investigate existing one- and two-phase treatments using lava flow case studies from Mauna Loa (Hawaii) and Mount Etna (Italy) and compare these with a three-phase treatment that has not previously been applied to basaltic mixtures. At Etna, melt viscosities of 425 ± 30 Pa s are expected for well-degassed (0.1 wt % H2O), and 135 ± 10 Pa s for less well-degassed (0.4 wt % H2O), melt at 1080°C. Application of a three-phase model yields mixture viscosities (45% crystals, 25–35% vesicles) in the range 5600–12,500 Pa s. This compares with a measured value for Etnean lava of 9400 ± 1500 Pa s. At Mauna Loa, the three-phase treatment fits the full range of field-measured viscosities, giving three-phase mixture viscosities, upon eruption, of 110–140 Pa s (5% crystals, no bubble effect due to sheared vesicles) to 850–1400 Pa s (25–30% crystals, 40–60% spherical vesicles). The ability of the three-phase treatment to characterize the full range of melt-crystal-bubble mixture viscosities in both settings indicates the potential of this method for characterizing basaltic lava mixture viscosity. PMID:21691456
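
    To make the 'treatments' concrete, here is one common way to assemble a three-phase estimate: multiply the melt viscosity by a crystal factor and a bubble factor. The closures below (a Maron-Pierce-type factor for crystals and the two limiting bubble factors of Llewellin-Manga-type treatments) are standard choices, not necessarily the specific treatment applied in this paper, and the inputs are taken loosely from the abstract for illustration.

```python
def mixture_viscosity(eta_melt, phi_xtl, phi_bub, sheared, phi_max=0.6):
    """Three-phase viscosity = melt viscosity x crystal factor x bubble factor.

    Closures are assumptions for illustration: a Maron-Pierce-type factor for
    crystals; limiting bubble factors for the two capillary-number regimes.
    """
    f_xtl = (1.0 - phi_xtl / phi_max) ** -2.0
    # spherical (low capillary number) bubbles stiffen; sheared ones weaken
    f_bub = (1.0 - phi_bub) ** (5.0 / 3.0) if sheared \
        else (1.0 - phi_bub) ** -1.0
    return eta_melt * f_xtl * f_bub

# Etna-like inputs loosely taken from the abstract (not the paper's method)
eta_lo = mixture_viscosity(135.0, 0.45, 0.25, sheared=False)
eta_hi = mixture_viscosity(425.0, 0.45, 0.35, sheared=False)
eta_sheared = mixture_viscosity(135.0, 0.45, 0.25, sheared=True)
```

    The sheared-vesicle case illustrates why the Mauna Loa estimates show "no bubble effect" or even weakening: elongated bubbles can lower, rather than raise, the mixture viscosity.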

  14. Building a functional artery: issues from the perspective of mechanics.

    PubMed

    Gleason, Rudolph L; Hu, Jin-Jia; Humphrey, Jay D

    2004-09-01

    Despite the many successes of arterial tissue engineering, clinically viable implants may be a decade or more away. Fortunately, there is much more that we can learn from native vessels with regard to designing for optimal structure, function, and properties. Herein, we examine recent observations in vascular biology from the perspective of nonlinear mechanics. Moreover, we use a constrained mixture model to study potential contributions of individual wall constituents. In both cases, the unique biological and mechanical roles of elastin come to the forefront, especially its role in generating and modulating residual stress within the wall, which appears to be key to multiple growth and remodeling responses.

  15. Perturbations of the optical properties of mineral dust particles by mixing with black carbon: a numerical simulation study

    DOE PAGES

    Scarnato, B. V.; China, S.; Nielsen, K.; ...

    2015-06-25

    Field observations show that individual aerosol particles are a complex mixture of a wide variety of species, reflecting different sources and physico-chemical transformations. The impacts of individual aerosol morphology and mixing characteristics on the Earth system are not yet fully understood. Here we present a sensitivity study of climate-relevant aerosol optical properties to various approximations. Based on aerosol samples collected in various geographical locations, we have observationally constrained size, morphology, and mixing, and accordingly simulated, using the discrete dipole approximation model (DDSCAT), the optical properties of three aerosol types: (1) bare black carbon (BC) aggregates, (2) bare mineral dust, and (3) an internal mixture of a BC aggregate lying on top of a mineral dust particle, also referred to as polluted dust. DDSCAT predicts optical properties and their spectral dependence consistently with observations for all the studied cases. Predicted values of the mass absorption, scattering, and extinction coefficients (MAC, MSC, MEC) for bare BC show a weak dependence on the BC aggregate size, while the asymmetry parameter (g) shows the opposite behavior. The simulated optical properties of bare mineral dust present a large variability depending on the modeled dust shape, confirming the limited range of applicability of spheroids over different types and sizes of mineral dust aerosols, in agreement with previous modeling studies. The polluted dust cases show a strong decrease in MAC values with increasing dust particle size (for the same BC size) and an increase in the single scattering albedo (SSA). Furthermore, particles with a radius between 180 and 300 nm are characterized by a decrease in SSA values compared to bare dust, in agreement with field observations. This paper demonstrates that observationally constrained DDSCAT simulations allow one to better understand the variability of the measured aerosol optical properties in ambient air and to define benchmark biases due to different approximations in aerosol parametrization.

  16. Atmospheric emissions from the Deepwater Horizon spill constrain air-water partitioning, hydrocarbon fate, and leak rate

    NASA Astrophysics Data System (ADS)

    Ryerson, T. B.; Aikin, K. C.; Angevine, W. M.; Atlas, E. L.; Blake, D. R.; Brock, C. A.; Fehsenfeld, F. C.; Gao, R.-S.; de Gouw, J. A.; Fahey, D. W.; Holloway, J. S.; Lack, D. A.; Lueb, R. A.; Meinardi, S.; Middlebrook, A. M.; Murphy, D. M.; Neuman, J. A.; Nowak, J. B.; Parrish, D. D.; Peischl, J.; Perring, A. E.; Pollack, I. B.; Ravishankara, A. R.; Roberts, J. M.; Schwarz, J. P.; Spackman, J. R.; Stark, H.; Warneke, C.; Watts, L. A.

    2011-04-01

    The fate of deepwater releases of gas and oil mixtures is initially determined by solubility and volatility of individual hydrocarbon species; these attributes determine partitioning between air and water. Quantifying this partitioning is necessary to constrain simulations of gas and oil transport, to predict marine bioavailability of different fractions of the gas-oil mixture, and to develop a comprehensive picture of the fate of leaked hydrocarbons in the marine environment. Analysis of airborne atmospheric data shows massive amounts (~258,000 kg/day) of hydrocarbons evaporating promptly from the Deepwater Horizon spill; these data collected during two research flights constrain air-water partitioning, thus bioavailability and fate, of the leaked fluid. This analysis quantifies the fraction of surfacing hydrocarbons that dissolves in the water column (~33% by mass), the fraction that does not dissolve, and the fraction that evaporates promptly after surfacing (~14% by mass). We do not quantify the leaked fraction lacking a surface expression; therefore, calculation of atmospheric mass fluxes provides a lower limit to the total hydrocarbon leak rate of 32,600 to 47,700 barrels of fluid per day, depending on reservoir fluid composition information. This study demonstrates a new approach for rapid-response airborne assessment of future oil spills.

  17. Performance on perceptual word identification is mediated by discrete states.

    PubMed

    Swagman, April R; Province, Jordan M; Rouder, Jeffrey N

    2015-02-01

    We contrast predictions from discrete-state models of all-or-none information loss with signal-detection models of graded strength for the identification of briefly flashed English words. Previous assessments have focused on whether ROC curves are straight or not, which is a test of a discrete-state model in which detection leads to the highest-confidence response with certainty. We, along with many others, argue that this certainty assumption is too constraining and, consequently, that the straight-line ROC test is too stringent. Instead, we assess a core property of discrete-state models, conditional independence, where the pattern of responses depends only on which state is entered. The conditional independence property implies that confidence ratings are a mixture of detect-state and guess-state responses, and that stimulus strength factors, here the duration of the flashed word, affect only the probability of entering a state and not the responses conditional on a state. To assess this mixture property, 50 participants saw words presented briefly on a computer screen at three flash durations, followed by either a two-alternative confidence-ratings task or a yes-no confidence-ratings task. Comparable discrete-state and signal-detection models were fit to the data for each participant and task. The discrete-state models outperformed the signal-detection models for 90% of participants in the two-alternative task and for 68% of participants in the yes-no task. We conclude that discrete-state models are viable for predicting performance across stimulus conditions in a perceptual word identification task.
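
    The mixture property being tested can be written down directly: the observed confidence-rating distribution is a weighted sum of fixed state-conditional distributions, and stimulus duration moves only the weight. A toy sketch (our illustration, with made-up rating distributions):

```python
import numpy as np

# Fixed state-conditional confidence-rating distributions (made-up numbers)
detect_conf = np.array([0.00, 0.05, 0.15, 0.80])   # detect state: high ratings
guess_conf = np.array([0.25, 0.25, 0.25, 0.25])    # guess state: flat

def response_dist(p_detect):
    """Observed rating distribution: a mixture of the state distributions."""
    return p_detect * detect_conf + (1.0 - p_detect) * guess_conf

# Flash duration changes only the detect probability, not the state profiles
short_flash = response_dist(0.3)
long_flash = response_dist(0.8)
```

    Conditional independence fails, and the discrete-state account with it, if fitting the data requires `detect_conf` or `guess_conf` themselves to change with flash duration.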

  18. Linear mixing model applied to AVHRR LAC data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer (AVHRR). The reflective component of the 3.55 - 3.93 micron channel was extracted and used with the two reflective channels (0.58 - 0.68 microns and 0.725 - 1.1 microns) in a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. Landsat Thematic Mapper data covering the Emas National Park region were used to estimate the spectral response of the mixture components and to evaluate the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of these unmixing techniques when using coarse resolution data for global studies.

  19. WHEN ISOTOPES AREN'T ENOUGH: ADDITIONAL INFORMATION TO CONSTRAIN MIXING PROBLEMS

    EPA Science Inventory

    Stable isotopes are often used as chemical tracers to determine the relative contributions of sources to a mixture. Ecological examples include partitioning pollution sources to air or water bodies, trophic links in food webs, plant water use from different soil horizons, source...

  1. Constrained energy minimization applied to apparent reflectance and single-scattering albedo spectra: a comparison

    NASA Astrophysics Data System (ADS)

    Resmini, Ronald G.; Graver, William R.; Kappus, Mary E.; Anderson, Mark E.

    1996-11-01

    Constrained energy minimization (CEM) has been applied to mapping the quantitative areal distribution of the mineral alunite in an approximately 1.8 km² area of the Cuprite mining district, Nevada. CEM is a powerful technique for rapid quantitative mineral mapping that requires only the spectrum of the mineral to be mapped; a priori knowledge of background spectral signatures is not required. Our investigation applies CEM to calibrated radiance data converted to apparent reflectance (AR) and to single-scattering albedo (SSA) spectra. The radiance data were acquired by the 210-channel, 0.4 micrometer to 2.5 micrometer airborne Hyperspectral Digital Imagery Collection Experiment sensor. CEM applied to AR spectra assumes linear mixing of the spectra of the materials exposed at the surface. This assumption is likely invalid, as surface materials, which are often mixtures of particulates of different substances, are more properly modeled as intimate mixtures, so spectral mixing analyses must account for nonlinear effects. One technique for approximating nonlinear mixing requires the conversion of AR spectra to SSA spectra. The results of CEM applied to SSA spectra are compared to those of CEM applied to AR spectra. The alunite maps produced from the SSA and AR spectra are similar though not identical: alunite is slightly more widespread based on processing with the SSA spectra, and fractional abundances derived from the SSA spectra are, in general, higher than those derived from AR spectra. Implications for the interpretation of quantitative mineral mapping with hyperspectral remote sensing data are discussed.
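
    The CEM filter itself has a closed form: with R the band-by-band sample correlation matrix of the scene and d the target spectrum, w = R⁻¹d / (dᵀR⁻¹d) minimizes the average output energy subject to wᵀd = 1. A self-contained sketch with synthetic spectra (the 'alunite' spectrum below is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_pix = 30, 2000
background = rng.normal(0.5, 0.1, (n_pix, n_bands))
d = np.abs(np.sin(np.linspace(0, 3, n_bands))) + 0.2  # made-up target spectrum
X = background.copy()
X[:100] = 0.7 * d + 0.3 * background[:100]            # 100 mixed target pixels

R = X.T @ X / n_pix               # band-by-band sample correlation matrix
w = np.linalg.solve(R, d)
w /= d @ w                        # enforce the unit-response constraint w'd = 1
scores = X @ w                    # abundance-like detection scores
```

    Because the background statistics come from the scene itself, this is why no a priori background signatures are needed; whether `d` is an AR or an SSA spectrum is exactly the comparison the abstract makes.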

  2. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages derived using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages in such wells than the simple lumped parameter models represent. Binary (or compound) mixing models can represent this more complex mixing, combining water with two different age distributions. The difficulty with these models is that they usually have five parameters, which makes them data-hungry and therefore hard to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model.
We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
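
    A binary lumped parameter model can be sketched in a few lines: the output tracer concentration is the input history convolved with a weighted sum of two age distributions. The sketch below uses the simple exponential transfer function and an invented bomb-peak-like input; real applications often use richer transfer functions (e.g., exponential piston flow):

```python
import numpy as np

tau = np.arange(0.0, 200.0)                  # age grid, years

def exp_model(T):
    """Exponential-model age distribution, normalized on the grid."""
    g = np.exp(-tau / T) / T
    return g / g.sum()

def binary_lpm(input_series, b, T_young, T_old):
    """Output = input convolved with a two-component mixture of age pdfs."""
    g = b * exp_model(T_young) + (1.0 - b) * exp_model(T_old)
    return np.convolve(input_series, g)[:len(input_series)]

# Invented bomb-peak-like tracer input
years = np.arange(1950, 2020)
inp = 2.0 + 100.0 * np.exp(-0.5 * ((years - 1965) / 3.0) ** 2)
young = binary_lpm(inp, b=1.0, T_young=5.0, T_old=80.0)
mixed = binary_lpm(inp, b=0.5, T_young=5.0, T_old=80.0)
```

    The five parameters mentioned in the abstract are the mixing fraction plus two parameters per component; in this simplified exponential form only `b`, `T_young`, and `T_old` remain, which is why multiple tracers with different inputs are needed to pin down the full model.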

  3. Visible Near-infrared Spectral Evolution of Irradiated Mixed Ices and Application to Kuiper Belt Objects and Jupiter Trojans

    NASA Astrophysics Data System (ADS)

    Poston, Michael J.; Mahjoub, Ahmed; Ehlmann, Bethany L.; Blacksberg, Jordana; Brown, Michael E.; Carlson, Robert W.; Eiler, John M.; Hand, Kevin P.; Hodyss, Robert; Wong, Ian

    2018-04-01

    Understanding the history of Kuiper Belt Objects and Jupiter Trojans will help to constrain models of solar system formation and dynamical evolution. Laboratory simulations of a possible thermal and irradiation history of these bodies were conducted on ice mixtures while monitoring their spectral properties. These simulations tested the hypothesis that the presence or absence of sulfur explains the two distinct visible near-infrared spectral groups observed in each population and that Trojans and KBOs share a common formation location. Mixed ices consisting of water, methanol, and ammonia, both with and without hydrogen sulfide, were deposited and irradiated with 10 keV electrons. Deposition and initial irradiation were performed at 50 K to simulate formation at 20 au in the early solar system; the samples were then heated to Trojan-like temperatures and irradiated further. Finally, irradiation was concluded and the resulting samples were observed during heating to room temperature. The results indicated that the presence of sulfur produced steeper spectral slopes. Heating through the 140–200 K range decreased the slopes and total reflectance for both mixtures. In addition, absorption features at 410, 620, and 900 nm appeared under irradiation, but only in the H2S-containing mixture. These features were lost with heating once irradiation was concluded. While the results reported here are consistent with the hypothesis, additional work is needed to address uncertainties and to simulate conditions not included in the present work.

  4. How Evolution May Work Through Curiosity-Driven Developmental Process.

    PubMed

    Oudeyer, Pierre-Yves; Smith, Linda B

    2016-04-01

    Infants' own activities create and actively select their learning experiences. Here we review recent models of embodied information seeking and curiosity-driven learning and show that these mechanisms have deep implications for development and evolution. We discuss how these mechanisms yield self-organized epigenesis with emergent ordered behavioral and cognitive developmental stages. We describe a robotic experiment that explored the hypothesis that progress in learning, in and of itself, generates intrinsic rewards: The robot learners probabilistically selected experiences according to their potential for reducing uncertainty. In these experiments, curiosity-driven learning led the robot learner to successively discover object affordances and vocal interaction with its peers. We explain how a learning curriculum adapted to the current constraints of the learning system automatically formed, constraining learning and shaping the developmental trajectory. The observed trajectories in the robot experiment share many properties with those in infant development, including a mixture of regularities and diversities in the developmental patterns. Finally, we argue that such emergent developmental structures can guide and constrain evolution, in particular with regard to the origins of language. Copyright © 2016 Cognitive Science Society, Inc.

  5. A Physically Based Framework for Modelling the Organic Fractionation of Sea Spray Aerosol from Bubble Film Langmuir Equilibria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrows, Susannah M.; Ogunro, O.; Frossard, Amanda

    2014-12-19

    The presence of a large fraction of organic matter in primary sea spray aerosol (SSA) can strongly affect its cloud condensation nuclei activity and interactions with marine clouds. Global climate models require new parameterizations of the SSA composition in order to improve the representation of these processes. Existing proposals for such a parameterization use remotely-sensed chlorophyll-a concentrations as a proxy for the biogenic contribution to the aerosol. However, both observations and theoretical considerations suggest that existing relationships with chlorophyll-a, derived from observations at only a few locations, may not be representative for all ocean regions. We introduce a novel framework for parameterizing the fractionation of marine organic matter into SSA based on a competitive Langmuir adsorption equilibrium at bubble surfaces. Marine organic matter is partitioned into classes with differing molecular weights, surface excesses, and Langmuir adsorption parameters. The classes include a lipid-like mixture associated with labile dissolved organic carbon (DOC), a polysaccharide-like mixture associated primarily with semi-labile DOC, a protein-like mixture with concentrations intermediate between lipids and polysaccharides, a processed mixture associated with recalcitrant surface DOC, and a deep abyssal humic-like mixture. Box model calculations have been performed for several cases of organic adsorption to illustrate the underlying concepts. We then apply the framework to output from a global marine biogeochemistry model, by partitioning total dissolved organic carbon into several classes of macromolecule. Each class is represented by model compounds with physical and chemical properties based on existing laboratory data. This allows us to globally map the predicted organic mass fraction of the nascent submicron sea spray aerosol.
Predicted relationships between chlorophyll-a and organic fraction are similar to existing empirical parameterizations, but can vary between biologically productive and non-productive regions, and seasonally within a given region. Major uncertainties include the bubble film thickness at bursting and the variability of organic surfactant activity in the ocean, which is poorly constrained. In addition, marine colloids and cooperative adsorption of polysaccharides may make important contributions to the aerosol, but are not included here. This organic fractionation framework is an initial step towards a closer linking of ocean biogeochemistry and aerosol chemical composition in Earth system models. Future work should focus on improving constraints on model parameters through new laboratory experiments or through empirical fitting to observed relationships in the real ocean and atmosphere, as well as on atmospheric implications of the variable composition of organic matter in sea spray.

  6. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to thermal hyperspectral data and eight pixel-based classifiers, i.e., constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter, are tested. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. The LWIR data contain emissivity and temperature information about an object. A highest overall accuracy of 90.99% was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data. The image is segmented into meaningful objects based on properties such as geometry and length, formed by grouping pixels with a watershed algorithm, and then classified with a supervised algorithm, i.e., a support vector machine (SVM). The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
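The best pixel-based classifier here, SAM, reduces to computing the angle between each pixel spectrum and a set of reference spectra and picking the smallest. A minimal sketch (the toy reference spectra are invented for illustration):

```python
import numpy as np

def spectral_angle(pixel, ref):
    """Spectral angle (radians) between a pixel spectrum and a reference.
    Insensitive to overall brightness, since only the direction matters."""
    cos = np.dot(pixel, ref) / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(pixel, references):
    """Assign the pixel to the reference class with the smallest angle."""
    angles = [spectral_angle(pixel, r) for r in references]
    return int(np.argmin(angles))

# Toy example with two hypothetical 3-band class spectra
refs = [np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0])]
label = sam_classify(np.array([2.0, 4.1, 5.9]), refs)  # nearly parallel to refs[0]
```

Brightness invariance is why SAM is attractive for emissivity spectra, where temperature variations rescale the measured radiance.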

  7. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded by a low-resolution hyperspectral remote sensor for a given pixel, even leaving aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is an active frontier in remote sensing. Unmixing algorithms based on geometry have become popular because hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most of these algorithms rely on the pure-pixel assumption, and since the nonlinear mixing model is complex, it is hard to obtain the optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume simplex with a soft abundance constraint. By taking the abundance fractions into account, we obtain the pure endmember set and the corresponding abundance fractions, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data; the results indicate that the proposed method obtains the distinct signatures correctly, without redundant endmembers, and yields much better performance than pure-pixel-based algorithms.
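The abundance constraints discussed here (non-negativity and sum-to-one) can be illustrated with a standard fully constrained least squares solve for a known endmember matrix. Note this is a generic baseline, not the MVSAC algorithm itself, and the 4-band endmember matrix is a toy example:

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(E, x, delta=1e3):
    """Fully constrained least squares: solve x ~ E @ a with a >= 0 and
    sum(a) = 1, by augmenting the system with a heavily weighted
    sum-to-one row and solving with non-negative least squares."""
    m, p = E.shape
    E_aug = np.vstack([E, delta * np.ones((1, p))])
    x_aug = np.append(x, delta)
    a, _ = nnls(E_aug, x_aug)
    return a

# Toy 4-band scene with two hypothetical endmember spectra (columns of E)
E = np.array([[0.1, 0.9],
              [0.2, 0.8],
              [0.7, 0.3],
              [0.9, 0.1]])
x = 0.25 * E[:, 0] + 0.75 * E[:, 1]   # synthetic noiseless mixed pixel
a = fcls_abundances(E, x)
```

Minimum-volume methods such as MVSAC additionally estimate E itself by shrinking the simplex spanned by the endmembers around the data cloud, which is what removes the pure-pixel assumption.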

  8. Sediment unmixing using detrital geochronology

    NASA Astrophysics Data System (ADS)

    Sharman, Glenn R.; Johnstone, Samuel A.

    2017-11-01

    Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the effect of environmental forcing (e.g., tectonism, climate) on the Earth's surface. Here, we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First, we summarize 'top-down' mixing, which has been successfully employed in the past to characterize the different fractions of prescribed source distributions ('parents') that characterize a derived sample or set of samples ('daughters'). Second, we propose the use of 'bottom-up' methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures rather than any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has the potential to supplement more widely applied approaches in comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
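The 'top-down' step, finding the parent proportions that best reproduce a daughter distribution, can be sketched for two parents by scanning the mixing fraction against a squared-misfit criterion. The age distributions below are synthetic, and the paper considers a variety of goodness-of-fit metrics, not just squared misfit:

```python
import numpy as np

def best_mixture_weight(parent_a, parent_b, daughter, n_grid=1001):
    """Top-down unmixing with two parents: scan the mixing fraction phi
    and keep the value minimizing squared misfit between the mixture
    phi * A + (1 - phi) * B and the daughter distribution.
    All distributions are densities on a common age grid."""
    phis = np.linspace(0.0, 1.0, n_grid)
    misfits = [np.sum((p * parent_a + (1 - p) * parent_b - daughter) ** 2)
               for p in phis]
    i = int(np.argmin(misfits))
    return phis[i], misfits[i]

# Synthetic detrital age distributions (e.g. kernel density estimates of ages)
ages = np.linspace(0, 300, 301)
parent_a = np.exp(-0.5 * ((ages - 100) / 15) ** 2)
parent_b = np.exp(-0.5 * ((ages - 220) / 20) ** 2)
daughter = 0.4 * parent_a + 0.6 * parent_b
phi, misfit = best_mixture_weight(parent_a, parent_b, daughter)
```

In practice one would report the whole misfit curve, not just the minimum, precisely because near-equivalent mixtures are the point the abstract makes about considering the range of allowable mixtures.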

  9. Nonlinear Spectral Mixture Modeling to Estimate Water-Ice Abundance of Martian Regolith

    NASA Astrophysics Data System (ADS)

    Gyalay, Szilard; Chu, Kathryn; Zeev Noe Dobrea, Eldar

    2017-10-01

    We present a novel technique to estimate the abundance of water-ice in the Martian permafrost using Phoenix Surface Stereo Imager multispectral data. In previous work, Cull et al. (2010) estimated the abundance of water-ice in trenches dug by the Mars Phoenix lander by modeling the spectra of the icy regolith using the radiative transfer methods described in Hapke (2008) with optical constants for Mauna Kea palagonite (Clancy et al., 1995) as a substitute for unknown Martian regolith optical constants. Our technique, which uses the radiative transfer methods described in Shkuratov et al. (1999), seeks to eliminate the uncertainty that stems from not knowing the composition of the Martian regolith by using observations of the Martian soil before and after the water-ice has sublimated away. We use observations of the desiccated regolith sample to estimate its complex index of refraction from its spectrum. This removes any a priori assumptions of Martian regolith composition, limiting our free parameters to the estimated real index of refraction of the dry regolith at one specific wavelength, ice grain size, and regolith porosity. We can then model mixtures of regolith and water-ice, fitting to the original icy spectrum to estimate the ice abundance. To constrain the uncertainties in this technique, we performed laboratory measurements of the spectra of known mixtures of water-ice and dry soils as well as those of soils after desiccation with controlled viewing geometries. Finally, we applied the technique to Phoenix Surface Stereo Imager observations and estimated water-ice abundances consistent with pore-fill in the near-surface ice. This abundance is consistent with atmospheric diffusion, which has implications for our understanding of the history of water-ice on Mars and the role of the regolith at high latitudes as a reservoir of atmospheric H2O.

  10. Leads Detection Using Mixture Statistical Distribution Based CRF Algorithm from Sentinel-1 Dual Polarization SAR Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting

    2017-04-01

    Synthetic Aperture Radar (SAR) is important for polar remote sensing because it provides continuous observations day and night and in all weather. SAR can extract surface roughness information characterized by the variance of dielectric properties and different polarization channels, making it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the 33rd cruise of the Chinese National Antarctic Research Expedition (CHINARE) set sail into the Antarctic sea ice zone. An accurate picture of the spatial distribution of leads in the sea ice zone is essential for routine planning of ship navigation. In this study, the semantic relationship between lead and sea ice categories is described by a Conditional Random Field (CRF) model, and lead characteristics are modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed that considers the contextual information and the statistical characteristics of sea ice to improve lead detection in Sentinel-1A dual polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probabilities estimated from the statistical distributions. For parameter estimation, the Method of Logarithmic Cumulants (MoLC) is used for single statistical distributions, and an iterative Expectation Maximization (EM) algorithm calculates the parameters of the mixture-distribution-based CRF model. In the posterior probability inference, a graph-cut energy minimization method is adopted for the initial lead detection. Post-processing procedures, including an aspect ratio constraint and spatial smoothing, are applied to improve the visual result.
The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a pixel spacing of 40 meters near the Prydz Bay area, East Antarctica. The main contributions are as follows: 1) a mixture-statistical-distribution-based CRF algorithm has been developed for lead detection from Sentinel-1A dual polarization images; 2) an assessment of the proposed mixture-distribution-based CRF method against a single-distribution-based CRF algorithm is presented; 3) preferable parameter sets, including the statistical distributions, the aspect ratio threshold, and the spatial smoothing window size, are provided. In the future, the proposed algorithm will be developed for operational processing of the Sentinel series data sets owing to its low computational cost and high accuracy in lead detection.

  11. Thermal infrared spectral analysis of compacted fine-grained mineral mixtures: implications for spectral interpretation of lithified sedimentary materials on Mars

    NASA Astrophysics Data System (ADS)

    Pan, C.; Rogers, D.

    2012-12-01

    Characterizing the thermal infrared (TIR) spectral mixing behavior of compacted fine-grained mineral assemblages is necessary for facilitating quantitative mineralogy of sedimentary surfaces from spectral measurements. Previous researchers have demonstrated that TIR spectra from igneous and metamorphic rocks as well as coarse-grained (>63 micron) sand mixtures combine in proportion to their volume abundance. However, the spectral mixing behavior of compacted, fine-grained mineral mixtures that would be characteristic of sedimentary depositional environments has received little attention. Here we characterize the spectral properties of pressed pellet samples of <10 micron mineral mixtures to 1) assess linearity of spectral combinations, 2) determine whether there are consistent over- or under-estimations of different types of minerals in spectral models and 3) determine if model accuracy can be improved by including both fine- and coarse-grained end-members. Major primary and secondary minerals found on the Martian surface, including feldspar, pyroxene, smectite, sulfate and carbonate, were crushed with an agate mortar and pestle and centrifuged to obtain a grain size of less than 10 microns. Pure phases and mixtures of two, three and four components were made in varying proportions by volume. All of the samples were pressed into pellets at 15,000 PSI to minimize volume scattering. Thermal infrared spectra of pellets were measured in the Vibrational Spectroscopy Laboratory at Stony Brook University with a Thermo Fisher Nicolet 6700 Fourier transform infrared Michelson interferometer from ~225 to 2000 cm-1. Our preliminary results indicate that some pelletized samples have contributions from volume scattering, which leads to non-linear spectral combinations.
It is not clear if the transparency features (which arise from multiple surface reflections of incident photons) are due to minor clinging fines on an otherwise specular pellet surface or to partially transmitted energy through optically thin grains in the compacted mixture. Inclusion of loose powder (<10 μm) sample spectra improves mineral abundance estimates for some mixtures. In general, mineral abundances are predicted to within +/- 10% (absolute) for approximately 60% of our samples; thus far, there are no clear trends in which cases produce better model results. With the exception of pyroxene/feldspar ratios being consistently overestimated, there are no consistent trends in over- or under-estimation of minerals. The results described here are based on the unsubstantiated assumption that areal abundance on the pellet surface is equal to the volume abundance. Thus future work will include micro-imaging of our samples to constrain areal abundance. We will also prepare clay mixtures using a wetting/drying sequence rather than pressure, and expand our set of samples to include additional mixture combinations to further characterize the spectral behavior of compacted mixtures. This work will be directly applicable to analysis of TES and Mini-TES data of lithified sedimentary deposits.

  12. Unsupervised Bayesian linear unmixing of gene expression microarrays.

    PubMed

    Bazot, Cécile; Dobigeon, Nicolas; Tourneret, Jean-Yves; Zaas, Aimee K; Ginsburg, Geoffrey S; Hero, Alfred O

    2013-03-19

    This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high-dimensional assays like gene expression microarrays. The basis for uBLU is a Bayesian model for the data samples, which are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. The particularity of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors. Furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted here to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors. These samples are then used to estimate all the unknown parameters. Firstly, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Secondly, we illustrate the application of uBLU on a real time-evolving gene expression dataset from a recent viral challenge study in which individuals have been inoculated with influenza A/H3N2/Wisconsin. We show that the uBLU method significantly outperforms the other methods on the simulated and real data sets considered here. The results obtained on synthetic and real data illustrate the accuracy of the proposed uBLU method when compared to other factor decomposition methods from the literature (PCA, NMF, BFRM, and GB-GMF).
The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.

  13. Mixture model based joint-MAP reconstruction of attenuation and activity maps in TOF-PET

    NASA Astrophysics Data System (ADS)

    Hemmati, H.; Kamali-Asl, A.; Ghafarian, P.; Ay, M. R.

    2018-06-01

    A challenge in producing quantitative positron emission tomography (PET) images is providing an accurate, patient-specific photon attenuation correction. In PET/MR scanners, the nature of MR signals and hardware limitations make extraction of the attenuation map a real challenge. Except for a constant factor, the activity and attenuation maps on a TOF-PET system can be determined from emission data by the maximum likelihood reconstruction of attenuation and activity (MLAA) approach. The aim of the present study is to constrain the joint estimation of activity and attenuation for PET systems using a mixture model prior based on the attenuation map histogram. This novel prior enforces non-negativity, and its hyperparameters can be estimated using a mixture decomposition step from the current estimate of the attenuation map. The proposed method can also help solve the scaling problem and is capable of assigning predefined regional attenuation coefficients, with some degree of confidence, to the attenuation map, similar to segmentation-based attenuation correction approaches. The performance of the algorithm is studied with numerical and Monte Carlo simulations and a phantom experiment, and compared with the MLAA algorithm with and without a smoothing prior. The results demonstrate that the proposed algorithm produces cross-talk-free activity and attenuation images from emission data. The proposed approach has the potential to be a practical and competitive method for joint reconstruction of activity and attenuation maps from emission data in PET/MR and can be integrated with other methods.
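The mixture decomposition step that estimates the prior's hyperparameters from the current attenuation map can be illustrated with a minimal 1-D two-component Gaussian mixture EM fit to a histogram of attenuation values. This is a sketch, not the authors' implementation, and the tissue attenuation values are rough hypothetical numbers:

```python
import numpy as np

def fit_gmm_1d(x, n_iter=100):
    """Minimal two-component 1-D Gaussian mixture EM: returns the weights,
    means, and variances that would serve as hyperparameters of an
    attenuation-histogram prior."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        pdf = w / np.sqrt(2 * np.pi * var) * \
            np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

# Hypothetical attenuation samples: soft tissue ~0.096/cm, bone ~0.17/cm
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.096, 0.004, 800),
                    rng.normal(0.17, 0.008, 200)])
w, mu, var = fit_gmm_1d(x)
```

In the full algorithm this decomposition would be re-run as the attenuation map estimate is updated, which is how the prior tracks the current reconstruction.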

  14. Statistical mechanics of light elements at high pressure. VI - Liquid-state calculations with Thomas-Fermi-Dirac theory

    NASA Technical Reports Server (NTRS)

    Macfarlane, J. J.

    1984-01-01

    A model free energy is developed for hydrogen-helium mixtures based on solid-state Thomas-Fermi-Dirac calculations at pressures relevant to the interiors of giant planets. Using a model potential similar to that for a two-component plasma, effective charges for the nuclei (which are in general smaller than the actual charges because of screening effects) are parameterized, being constrained by calculations at a number of densities, compositions, and lattice structures. These model potentials are then used to compute the equilibrium properties of H-He fluids using a charged hard-sphere model. The results find critical temperatures of about 0 K, 500 K, and 1500 K for pressures of 10, 100, and 1000 Mbar, respectively. These phase separation temperatures are considerably lower than those found from calculations using free-electron perturbation theory (approximately 6,000-10,000 K), and suggest that H-He solutions should be stable against phase separation in the metallic zones of Jupiter and Saturn.

  15. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support that model receives among a collection of hypotheses, as its simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since the evidence is not particularly easy to estimate in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009).
Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
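The core idea, estimating the evidence by importance sampling with a proposal fitted to posterior samples, can be sketched on a toy conjugate model where the marginal likelihood is known in closed form. This simplified sketch omits the bridge-sampling refinement and the Gaussian mixture fit that define GMIS proper; here the proposal is the exact single-Gaussian posterior:

```python
import numpy as np

def norm_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def evidence_is(log_like, prior_pdf, proposal_samples, proposal_pdf):
    """Importance-sampling estimate of the marginal likelihood
    Z = integral of L(theta) * p(theta) d theta, using samples from a
    proposal distribution fitted to the posterior."""
    w = np.exp(log_like(proposal_samples)) * prior_pdf(proposal_samples) \
        / proposal_pdf(proposal_samples)
    return w.mean()

# Toy conjugate model: theta ~ N(0, 1), y | theta ~ N(theta, 1), one datum y0.
# The exact evidence is then N(y0; 0, 2), so the estimate can be checked.
y0 = 1.3
post_mu, post_var = y0 / 2, 0.5            # exact posterior, used as proposal
rng = np.random.default_rng(1)
samples = rng.normal(post_mu, np.sqrt(post_var), 100_000)
Z = evidence_is(lambda t: -0.5 * (y0 - t) ** 2 - 0.5 * np.log(2 * np.pi),
                lambda t: norm_pdf(t, 0.0, 1.0),
                samples,
                lambda t: norm_pdf(t, post_mu, post_var))
Z_exact = norm_pdf(y0, 0.0, 2.0)
```

When the proposal matches the posterior exactly, every importance weight equals Z and the estimator has zero variance; GMIS approaches this ideal by fitting a Gaussian mixture to DREAM posterior samples before integrating.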

  16. Modeling semantic aspects for cross-media image indexing.

    PubMed

    Monay, Florent; Gatica-Perez, Daniel

    2007-10-01

    To go beyond the query-by-example paradigm in image retrieval, there is a need for semantic indexing of large image collections for intuitive text-based image search. Different models have been proposed to learn the dependencies between the visual content of an image set and the associated text captions, then allowing for the automatic creation of semantic indices for unannotated images. The task, however, remains unsolved. In this paper, we present three alternatives to learn a Probabilistic Latent Semantic Analysis model (PLSA) for annotated images, and evaluate their respective performance for automatic image indexing. Under the PLSA assumptions, an image is modeled as a mixture of latent aspects that generates both image features and text captions, and we investigate three ways to learn the mixture of aspects. We also propose a more discriminative image representation than the traditional Blob histogram, concatenating quantized local color information and quantized local texture descriptors. The first learning procedure of a PLSA model for annotated images is a standard EM algorithm, which implicitly assumes that the visual and the textual modalities can be treated equivalently. The other two models are based on an asymmetric PLSA learning, allowing us to constrain the definition of the latent space on the visual or on the textual modality. We demonstrate that the textual modality is more appropriate to learn a semantically meaningful latent space, which translates into improved annotation performance. A comparison of our learning algorithms with respect to recent methods on a standard dataset is presented, and a detailed evaluation of the performance shows the validity of our framework.

  17. MODELING GALACTIC EXTINCTION WITH DUST AND 'REAL' POLYCYCLIC AROMATIC HYDROCARBONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulas, Giacomo; Casu, Silvia; Cecchi-Pestellini, Cesare

    We investigate the remarkable apparent variety of galactic extinction curves by modeling extinction profiles with core-mantle grains and a collection of single polycyclic aromatic hydrocarbons. Our aim is to translate a synthetic description of dust into physically well-grounded building blocks through the analysis of a statistically relevant sample of different extinction curves. All different flavors of observed extinction curves, ranging from the average galactic extinction curve to virtually 'bumpless' profiles, can be described by the present model. We prove that a mixture of a relatively small number (54 species in 4 charge states each) of polycyclic aromatic hydrocarbons can reproduce the features of the extinction curve in the ultraviolet, dismissing an old objection to the contribution of polycyclic aromatic hydrocarbons to the interstellar extinction curve. Despite the large number of free parameters (at most the 54 × 4 column densities of each species in each ionization state included in the molecular ensemble plus the 9 parameters defining the physical properties of classical particles), we can strongly constrain some physically relevant properties such as the total number of C atoms in all species and the mean charge of the mixture. Such properties are found to be largely independent of the adopted dust model whose variation provides effects that are orthogonal to those brought about by the molecular component. Finally, the fitting procedure, together with some physical sense, suggests (but does not require) the presence of an additional component of chemically different very small carbonaceous grains.

  18. Rapid processing of chemosensor transients in a neuromorphic implementation of the insect macroglomerular complex

    PubMed Central

    Pearce, Timothy C.; Karout, Salah; Rácz, Zoltán; Capurro, Alberto; Gardner, Julian W.; Cole, Marina

    2012-01-01

    We present a biologically-constrained neuromorphic spiking model of the insect antennal lobe macroglomerular complex that encodes concentration ratios of chemical components existing within a blend, implemented using a set of programmable logic neuronal modeling cores. Depending upon the level of inhibition and symmetry in its inhibitory connections, the model exhibits two dynamical regimes: fixed point attractor (winner-takes-all type), and limit cycle attractor (winnerless competition type) dynamics. We show that, when driven by chemosensor input in real-time, the dynamical trajectories of the model's projection neuron population activity accurately encode the concentration ratios of binary odor mixtures in both dynamical regimes. By deploying spike timing-dependent plasticity in a subset of the synapses in the model, we demonstrate that a Hebbian-like associative learning rule is able to organize weights into a stable configuration after exposure to a randomized training set comprising a variety of input ratios. Examining the resulting local interneuron weights in the model shows that each inhibitory neuron competes to represent possible ratios across the population, forming a ratiometric representation via mutual inhibition. After training the resulting dynamical trajectories of the projection neuron population activity show amplification and better separation in their response to inputs of different ratios. Finally, we demonstrate that by using limit cycle attractor dynamics, it is possible to recover and classify blend ratio information from the early transient phases of chemosensor responses in real-time more rapidly and accurately compared to a nearest-neighbor classifier applied to the normalized chemosensor data. Our results demonstrate the potential of biologically-constrained neuromorphic spiking models in achieving rapid and efficient classification of early phase chemosensor array transients with execution times well beyond biological timescales. 
PMID:23874265

  19. Thermodynamics of mixtures of patchy and spherical colloids of different sizes: A multi-body association theory with complete reference fluid information.

    PubMed

    Bansal, Artee; Valiya Parambathu, Arjun; Asthagiri, D; Cox, Kenneth R; Chapman, Walter G

    2017-04-28

    We present a theory to predict the structure and thermodynamics of mixtures of colloids of different diameters, building on our earlier work [A. Bansal et al., J. Chem. Phys. 145, 074904 (2016)] that considered mixtures with all particles constrained to have the same size. The patchy, solvent particles have short-range directional interactions, while the solute particles have short-range isotropic interactions. The hard-sphere mixture without any association site forms the reference fluid. An important ingredient within the multi-body association theory is the description of clustering of the reference solvent around the reference solute. Here we account for the physical, multi-body clusters of the reference solvent around the reference solute in terms of occupancy statistics in a defined observation volume. These occupancy probabilities are obtained from enhanced sampling simulations, but we also present statistical mechanical models to estimate these probabilities with limited simulation data. Relative to an approach that describes only up to three-body correlations in the reference, incorporating the complete reference information better predicts the bonding state and thermodynamics of the physical solute for a wide range of system conditions. Importantly, analysis of the residual chemical potential of the infinitely dilute solute from molecular simulation and theory shows that whereas the chemical potential is somewhat insensitive to the description of the structure of the reference fluid, the energetic and entropic contributions are not, with the results from the complete reference approach being in better agreement with particle simulations.

  20. Thermodynamics of mixtures of patchy and spherical colloids of different sizes: A multi-body association theory with complete reference fluid information

    NASA Astrophysics Data System (ADS)

    Bansal, Artee; Valiya Parambathu, Arjun; Asthagiri, D.; Cox, Kenneth R.; Chapman, Walter G.

    2017-04-01

    We present a theory to predict the structure and thermodynamics of mixtures of colloids of different diameters, building on our earlier work [A. Bansal et al., J. Chem. Phys. 145, 074904 (2016)] that considered mixtures with all particles constrained to have the same size. The patchy, solvent particles have short-range directional interactions, while the solute particles have short-range isotropic interactions. The hard-sphere mixture without any association site forms the reference fluid. An important ingredient within the multi-body association theory is the description of clustering of the reference solvent around the reference solute. Here we account for the physical, multi-body clusters of the reference solvent around the reference solute in terms of occupancy statistics in a defined observation volume. These occupancy probabilities are obtained from enhanced sampling simulations, but we also present statistical mechanical models to estimate these probabilities with limited simulation data. Relative to an approach that describes only up to three-body correlations in the reference, incorporating the complete reference information better predicts the bonding state and thermodynamics of the physical solute for a wide range of system conditions. Importantly, analysis of the residual chemical potential of the infinitely dilute solute from molecular simulation and theory shows that whereas the chemical potential is somewhat insensitive to the description of the structure of the reference fluid, the energetic and entropic contributions are not, with the results from the complete reference approach being in better agreement with particle simulations.

  1. Bayesian inference of Earth's radial seismic structure from body-wave traveltimes using neural networks

    NASA Astrophysics Data System (ADS)

    de Wit, Ralph W. L.; Valentine, Andrew P.; Trampert, Jeannot

    2013-10-01

    How do body-wave traveltimes constrain the Earth's radial (1-D) seismic structure? Existing 1-D seismological models underpin 3-D seismic tomography and earthquake location algorithms. It is therefore crucial to assess the quality of such 1-D models, yet quantifying uncertainties in seismological models is challenging and thus often ignored. Ideally, quality assessment should be an integral part of the inverse method. Our aim in this study is twofold: (i) we show how to solve a general Bayesian non-linear inverse problem and quantify model uncertainties, and (ii) we investigate the constraint on spherically symmetric P-wave velocity (VP) structure provided by body-wave traveltimes from the EHB bulletin (phases Pn, P, PP and PKP). Our approach is based on artificial neural networks, which are very common in pattern recognition problems and can be used to approximate an arbitrary function. We use a Mixture Density Network to obtain 1-D marginal posterior probability density functions (pdfs), which provide a quantitative description of our knowledge on the individual Earth parameters. No linearization or model damping is required, which allows us to infer a model which is constrained purely by the data. We present 1-D marginal posterior pdfs for the 22 VP parameters and seven discontinuity depths in our model. P-wave velocities in the inner core, outer core and lower mantle are resolved well, with standard deviations of ~0.2 to 1 per cent with respect to the mean of the posterior pdfs. The maximum likelihoods of VP are in general similar to the corresponding ak135 values, which lie within one or two standard deviations from the posterior means, thus providing an independent validation of ak135 in this part of the radial model. Conversely, the data contain little or no information on P-wave velocity in the D'' layer, the upper mantle and the homogeneous crustal layers. Further, the data do not constrain the depth of the discontinuities in our model.
Using additional phases available in the ISC bulletin, such as PcP, PKKP and the converted phases SP and ScP, may enhance the resolvability of these parameters. Finally, we show how the method can be extended to obtain a posterior pdf for a multidimensional model space. This enables us to investigate correlations between model parameters.
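The 1-D marginal posterior pdfs that a Mixture Density Network outputs are Gaussian mixtures, so summarizing them is straightforward. A minimal sketch, with made-up component weights, means and widths rather than values from this study:

```python
import numpy as np

def mixture_pdf(x, weights, means, sigmas):
    """Evaluate a 1-D Gaussian-mixture pdf, the form an MDN outputs."""
    x = np.asarray(x, dtype=float)[..., None]   # broadcast over components
    comp = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return (comp * weights).sum(axis=-1)

# Hypothetical 3-component marginal posterior for one velocity parameter (km/s)
w = np.array([0.6, 0.3, 0.1])
mu = np.array([11.2, 11.3, 10.9])
sd = np.array([0.05, 0.10, 0.30])

grid = np.linspace(10.0, 12.5, 2001)
pdf = mixture_pdf(grid, w, mu, sd)
dx = grid[1] - grid[0]
total = pdf.sum() * dx           # ~1: pdf is normalized
mean = (grid * pdf).sum() * dx   # posterior mean from the marginal
```

From the same grid one can read off the maximum likelihood (argmax of `pdf`) and standard deviation, the summaries quoted in the abstract.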

  2. Thermal conductivity measurements in porous mixtures of methane hydrate and quartz sand

    USGS Publications Warehouse

    Waite, W.F.; deMartin, B.J.; Kirby, S.H.; Pinkston, J.; Ruppel, C.D.

    2002-01-01

    Using von Herzen and Maxwell's needle probe method, we measured thermal conductivity in four porous mixtures of quartz sand and methane gas hydrate, with hydrate composing 0, 33, 67 and 100% of the solid volume. Thermal conductivities were measured at a constant methane pore pressure of 24.8 MPa between -20 and +15 °C, and at a constant temperature of -10 °C between 3.5 and 27.6 MPa methane pore pressure. Thermal conductivity decreased with increasing temperature and increased with increasing methane pore pressure. Both dependencies weakened with increasing hydrate content. Despite the high thermal conductivity of quartz relative to methane hydrate, the largest thermal conductivity was measured in the mixture containing 33% hydrate rather than in hydrate-free sand. This suggests gas hydrate enhanced grain-to-grain heat transfer, perhaps due to intergranular contact growth during hydrate synthesis. These results for gas-filled porous mixtures can help constrain thermal conductivity estimates in porous, gas hydrate-bearing systems.
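The needle-probe method exploits the line-source solution: at late times the probe temperature rises linearly with ln(t), and the slope gives the conductivity. A sketch with an assumed heater power and an idealized, noise-free response (not the paper's data):

```python
import numpy as np

# Needle-probe principle (line heat source): T(t) ~ (q / 4*pi*k) * ln(t) + C
# at late times, so k = q / (4*pi * slope of T vs ln t).
# q and k_true below are assumed values, not measurements from the study.
q = 2.0        # heater power per unit length, W/m (assumed)
k_true = 2.5   # W/(m K), in the range of quartz/hydrate mixtures (assumed)

t = np.linspace(5.0, 60.0, 100)                     # time, s
T = (q / (4 * np.pi * k_true)) * np.log(t) + 10.0   # idealized probe response

slope = np.polyfit(np.log(t), T, 1)[0]  # fit T against ln(t)
k_est = q / (4 * np.pi * slope)         # recovered thermal conductivity
```

With real probe data one would restrict the fit to the late-time window where the logarithmic approximation holds.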

  3. Predictions of glass transition temperature for hydrogen bonding biomaterials.

    PubMed

    van der Sman, R G M

    2013-12-19

    We show that the glass transition of a multitude of mixtures containing hydrogen bonding materials correlates strongly with the effective number of hydroxyl groups per molecule, which are available for intermolecular hydrogen bonding. This correlation is in compliance with the topological constraint theory, wherein the intermolecular hydrogen bonds constrain the mobility of the hydrogen bonded network. The finding that the glass transition relates to hydrogen bonding rather than free volume agrees with our recent finding that there is little difference in free volume among carbohydrates and polysaccharides. For binary and ternary mixtures of sugars, polyols, or biopolymers with water, our correlation states that the glass transition temperature is linear with the inverse of the number of effective hydroxyl groups per molecule. Only for dry biopolymer/sugar or sugar/polyol mixtures do we find deviations due to nonideal mixing, imposed by microheterogeneity.

  4. Powder agglomeration in a microgravity environment

    NASA Technical Reports Server (NTRS)

    Cawley, James D.

    1994-01-01

    This is the final report for NASA Grant NAG3-755 entitled 'Powder Agglomeration in a Microgravity Environment.' The research program included two types of numerical models and two types of experiments. The numerical modeling included the use of Monte Carlo type simulations of agglomerate growth including hydrodynamic screening and molecular dynamics type simulations of the rearrangement of particles within an agglomerate under a gravitational field. Experiments included direct observation of the agglomeration of submicron alumina and indirect observation, using small angle light scattering, of the agglomeration of colloidal silica and aluminum monohydroxide. In the former class of experiments, the powders were constrained to move on a two-dimensional surface oriented to minimize the effect of gravity. In the latter, some experiments involved mixtures of suspensions containing particles of opposite charge, which resulted in agglomeration on a very short time scale relative to settling under gravity.

  5. Spectral Quasi-Equilibrium Manifold for Chemical Kinetics.

    PubMed

    Kooshkbaghi, Mahdi; Frouzakis, Christos E; Boulouchos, Konstantinos; Karlin, Iliya V

    2016-05-26

    The Spectral Quasi-Equilibrium Manifold (SQEM) method is a model reduction technique for chemical kinetics based on entropy maximization under constraints built by the slowest eigenvectors at equilibrium. The method is revisited here and discussed and validated through the Michaelis-Menten kinetic scheme, and the quality of the reduction is related to the temporal evolution and the gap between eigenvalues. SQEM is then applied to detailed reaction mechanisms for the homogeneous combustion of hydrogen, syngas, and methane mixtures with air in adiabatic constant pressure reactors. The system states computed using SQEM are compared with those obtained by direct integration of the detailed mechanism, and good agreement between the reduced and the detailed descriptions is demonstrated. The SQEM reduced model of hydrogen/air combustion is also compared with another similar technique, the Rate-Controlled Constrained-Equilibrium (RCCE). For the same number of representative variables, SQEM is found to provide a more accurate description.

  6. Fractal dust constrains the collisional history of comets

    NASA Astrophysics Data System (ADS)

    Fulle, M.; Blum, J.

    2017-07-01

    The fractal dust particles observed by Rosetta cannot form in the physical conditions observed today in comet 67P/Churyumov-Gerasimenko (67P hereinafter), being instead consistent with models of the pristine dust aggregates coagulated in the solar nebula. Since bouncing collisions in the protoplanetary disc restructure fractals into compact aggregates (pebbles), the only way to preserve fractals in a comet is the gentle gravitational collapse of a mixture of pebbles and fractals, which must occur before their mutual collision speeds overcome ≈1 m s-1. This condition fixes the pebble radius to ≲1 cm, as confirmed by the Comet Nucleus Infrared and Visible Analyser onboard Philae. Here, we show that the flux of fractal particles measured by Rosetta constrains the 67P nucleus to a random packing of cm-sized pebbles, with all the voids among them filled by fractal particles. This structure is inconsistent with any catastrophic collision, which would have compacted or dispersed most fractals, thus leaving empty most voids in the reassembled nucleus. Comets are less numerous than current estimates, as confirmed by the lack of small craters on Pluto and Charon. Bilobate comets accreted at speeds <1 m s-1 from cometesimals born in the same disc stream.

  7. Software for analysis of chemical mixtures--composition, occurrence, distribution, and possible toxicity

    USGS Publications Warehouse

    Scott, Jonathon C.; Skach, Kenneth A.; Toccalino, Patricia L.

    2013-01-01

    The composition, occurrence, distribution, and possible toxicity of chemical mixtures in the environment are research concerns of the U.S. Geological Survey and others. The presence of specific chemical mixtures may serve as indicators of natural phenomena or human-caused events. Chemical mixtures may also have ecological, industrial, geochemical, or toxicological effects. Chemical-mixture occurrences vary by analyte composition and concentration. Four related computer programs have been developed by the National Water-Quality Assessment Program of the U.S. Geological Survey for research of chemical-mixture compositions, occurrences, distributions, and possible toxicities. The compositions and occurrences are identified for the user-supplied data, and therefore the resultant counts are constrained by the user’s choices for the selection of chemicals, reporting limits for the analytical methods, spatial coverage, and time span for the data supplied. The distribution of chemical mixtures may be spatial, temporal, and (or) related to some other variable, such as chemical usage. Possible toxicities optionally are estimated from user-supplied benchmark data. The software for the analysis of chemical mixtures described in this report is designed to work with chemical-analysis data files retrieved from the U.S. Geological Survey National Water Information System but can also be used with appropriately formatted data from other sources. Installation and usage of the mixture software are documented. This mixture software was designed to function with minimal changes on a variety of computer-operating systems. To obtain the software described herein and other U.S. Geological Survey software, visit http://water.usgs.gov/software/.
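The core bookkeeping behind such mixture software can be sketched in a few lines: a sample's mixture composition is the set of analytes detected at or above their reporting limits, and occurrences are counts of each composition. The analytes, limits and concentrations below are hypothetical, not taken from the USGS programs:

```python
from collections import Counter

# Hypothetical reporting limits (same units as the sample concentrations)
reporting_limits = {"atrazine": 0.02, "nitrate": 0.5, "chloride": 1.0}

# Hypothetical per-sample analytical results
samples = [
    {"atrazine": 0.05, "nitrate": 0.8, "chloride": 0.3},
    {"atrazine": 0.01, "nitrate": 2.0, "chloride": 5.0},
    {"atrazine": 0.10, "nitrate": 0.9, "chloride": 0.2},
]

def composition(sample):
    """A sample's mixture composition: analytes at/above their reporting limit."""
    return frozenset(a for a, c in sample.items() if c >= reporting_limits[a])

# Occurrence counts of each observed mixture composition
counts = Counter(composition(s) for s in samples)
```

As the abstract notes, the resulting counts are entirely conditioned on the user's choice of analytes and reporting limits.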

  8. Sizing Up the Milky Way: A Bayesian Mixture Model Meta-analysis of Photometric Scale Length Measurements

    NASA Astrophysics Data System (ADS)

    Licquia, Timothy C.; Newman, Jeffrey A.

    2016-11-01

    The exponential scale length (L_d) of the Milky Way's (MW's) disk is a critical parameter for describing the global physical size of our Galaxy, important both for interpreting other Galactic measurements and helping us to understand how our Galaxy fits into extragalactic contexts. Unfortunately, current estimates span a wide range of values and are often statistically incompatible with one another. Here, we perform a Bayesian meta-analysis to determine an improved, aggregate estimate for L_d, utilizing a mixture-model approach to account for the possibility that any one measurement has not properly accounted for all statistical or systematic errors. Within this machinery, we explore a variety of ways of modeling the nature of problematic measurements, and then employ a Bayesian model averaging technique to derive net posterior distributions that incorporate any model-selection uncertainty. Our meta-analysis combines 29 different (15 visible and 14 infrared) photometric measurements of L_d available in the literature; these involve a broad assortment of observational data sets, MW models and assumptions, and methodologies, all tabulated herein. Analyzing the visible and infrared measurements separately yields estimates for L_d of 2.71 +0.22/-0.20 kpc and 2.51 +0.15/-0.13 kpc, respectively, whereas considering them all combined yields 2.64 ± 0.13 kpc. The ratio between the visible and infrared scale lengths determined here is very similar to that measured in external spiral galaxies. We use these results to update the model of the Galactic disk from our previous work, constraining its stellar mass to be 4.8 +1.5/-1.1 × 10^10 M⊙, and the MW's total stellar mass to be 5.7 +1.5/-1.1 × 10^10 M⊙.
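One simple way to implement such a mixture model treats each measurement as either well-calibrated or as having an understated error bar, here modeled by inflating the quoted error. The mixture weight, inflation factor and data below are illustrative inventions, not the 29 tabulated measurements:

```python
import numpy as np

def log_posterior(L, x, s, f_good=0.8, inflate=3.0):
    """Mixture likelihood: each measurement x_i (quoted error s_i) is 'good'
    with probability f_good, otherwise its error bar is inflated.
    Flat prior on L; f_good and inflate are illustrative choices."""
    def norm(v, mu, sd):
        return np.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    like = f_good * norm(x, L, s) + (1 - f_good) * norm(x, L, inflate * s)
    return np.log(like).sum()

# Hypothetical scale-length measurements (kpc) with quoted errors;
# the 3.5 +/- 0.1 value plays the role of a problematic outlier.
x = np.array([2.5, 2.7, 2.6, 3.5, 2.55])
s = np.array([0.2, 0.15, 0.25, 0.1, 0.2])

grid = np.linspace(2.0, 3.6, 801)
logp = np.array([log_posterior(L, x, s) for L in grid])
post = np.exp(logp - logp.max())
L_map = grid[post.argmax()]   # the outlier is down-weighted, not dominant
```

A plain inverse-variance average of the same numbers would be dragged to about 3.0 kpc by the tight outlier; the mixture posterior instead peaks near the consistent cluster.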

  9. On-Going Laboratory Efforts to Quantitatively Address Clay Abundance on Mars

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; Bishop, Janice L.; Brown, Adrian J.; Blake, David F.; Bristow, Thomas F.

    2012-01-01

    Data obtained at visible and near-infrared wavelengths by OMEGA on Mars Express and CRISM on MRO provide definitive evidence for the presence of phyllosilicates and other hydrated phases on Mars. A diverse range of both Fe/Mg-OH and Al-OH-bearing phyllosilicates were identified, including the smectites nontronite, saponite, and montmorillonite. In order to constrain the abundances of these phyllosilicates, spectral analyses of mixtures are needed. We report on our on-going effort to enable the quantitative evaluation of the abundance of hydrated/hydroxylated silicates when they are contained in mixtures. We include two-component mixtures of hydrated/hydroxylated silicates with each other and with two analogs for other martian materials: pyroxene (enstatite) and palagonitic soil (an alteration product of basaltic glass). For the hydrated/hydroxylated silicates we include saponite and montmorillonite (Mg- and Al-rich smectites). We prepared three size separates of each end-member for study: 20-45, 63-90, and 125-150 µm. As the second phase of our effort, we used scanning electron microscopy imaging and x-ray diffraction to characterize the grain size distribution and structural nature, respectively, of the mixtures. Visible and near-infrared reflectance spectra of the 63-90 µm grain size of the mixture samples are shown in Figure 1. We discuss the results of our measurements of these mixtures.

  10. An olfactory cocktail party: figure-ground segregation of odorants in rodents.

    PubMed

    Rokni, Dan; Hemmelder, Vivian; Kapoor, Vikrant; Murthy, Venkatesh N

    2014-09-01

    In odorant-rich environments, animals must be able to detect specific odorants of interest against variable backgrounds. However, studies have found that both humans and rodents are poor at analyzing the components of odorant mixtures, suggesting that olfaction is a synthetic sense in which mixtures are perceived holistically. We found that mice could be easily trained to detect target odorants embedded in unpredictable and variable mixtures. To relate the behavioral performance to neural representation, we imaged the responses of olfactory bulb glomeruli to individual odors in mice expressing the Ca(2+) indicator GCaMP3 in olfactory receptor neurons. The difficulty of segregating the target from the background depended strongly on the extent of overlap between the glomerular responses to target and background odors. Our study indicates that the olfactory system has powerful analytic abilities that are constrained by the limits of combinatorial neural representation of odorants at the level of the olfactory receptors.

  11. Structural and energetic properties of La3+ in water/DMSO mixtures

    NASA Astrophysics Data System (ADS)

    Montagna, Maria; Spezia, Riccardo; Bodo, Enrico

    2017-11-01

    By using molecular dynamics based on a custom polarizable force field, we have studied the solvation of La3+ in an equimolar mixture of dimethylsulfoxide (DMSO) with water. An extended structural analysis has been performed to provide a complete picture of the physical properties at the basis of the interaction of La3+ with both solvents. Through our simulations we found that, very likely, the first solvation shell in the mixture is not unlike the one found in pure water or pure DMSO and contains 9 solvent molecules. We have also found that the solvation is preferentially due to DMSO molecules, with the water initially present in the first shell quickly leaving for the bulk. The dehydration process of the first shell has been analyzed by both plain MD simulations and a constrained dynamics approach; the free energy profiles for the extraction of water from the first shell have also been computed.
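Solvation-shell occupancies like the 9 reported here come from integrating the radial distribution function g(r) out to its first minimum. A sketch with a synthetic first-shell peak; the density and peak parameters are chosen purely for illustration (tuned to give roughly nine molecules), not extracted from these simulations:

```python
import numpy as np

# Coordination number from the radial distribution function:
#   n = 4 * pi * rho * integral_0^rmin g(r) * r^2 dr
# rho and the g(r) peak below are assumed, idealized values.
rho = 0.033                                # solvent number density, 1/A^3
r = np.linspace(0.01, 4.0, 2000)           # radius out to the first minimum, A
g = 9.0 * np.exp(-0.5 * ((r - 2.6) / 0.15) ** 2)   # synthetic first-shell peak

dr = r[1] - r[0]
n_coord = 4 * np.pi * rho * np.sum(g * r**2) * dr  # shell occupancy
```

In practice g(r) comes from histogramming MD trajectories and the upper integration limit is set at the first minimum of the measured curve.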

  12. Constraints on Ceres' Internal Structure and Evolution From Its Shape and Gravity Measured by the Dawn Spacecraft

    NASA Astrophysics Data System (ADS)

    Ermakov, A. I.; Fu, R. R.; Castillo-Rogez, J. C.; Raymond, C. A.; Park, R. S.; Preusker, F.; Russell, C. T.; Smith, D. E.; Zuber, M. T.

    2017-11-01

    Ceres is the largest body in the asteroid belt with a radius of approximately 470 km. In part due to its large mass, Ceres more closely approaches hydrostatic equilibrium than major asteroids. Pre-Dawn mission shape observations of Ceres revealed a shape consistent with a hydrostatic ellipsoid of revolution. The Dawn spacecraft Framing Camera has been imaging Ceres since March 2015, which has led to high-resolution shape models of the dwarf planet, while the gravity field has been globally determined to spherical harmonic degree 14 (equivalent to a spatial wavelength of 211 km) and locally to degree 18 (a wavelength of 164 km). We use these shape and gravity models to constrain Ceres' internal structure. We find a negative correlation and admittance between topography and gravity at degree 2 and order 2. Low admittances between spherical harmonic degrees 3 and 16 are well explained by an Airy isostatic compensation mechanism. Different models of isostasy give crustal densities between 1,200 and 1,400 kg/m3, with our preferred model giving a crustal density of 1,287 +70/-87 kg/m3. The mantle density is constrained to be 2,434 +5/-8 kg/m3. We compute the isostatic gravity anomaly and find evidence for mascon-like structures in the two biggest basins. The topographic power spectrum of Ceres and its latitude dependence suggest that viscous relaxation occurred at the long wavelengths (>246 km). Our density constraints, combined with finite element modeling of viscous relaxation, suggest that the rheology and density of the shallow surface are most consistent with a rock, ice, salt and clathrate mixture.
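Degree-wise admittance and correlation between gravity and topography are simple ratios of cross- and auto-power spectra. A sketch using toy coefficient arrays (random numbers that ignore the m ≤ l structure of real spherical-harmonic expansions, not the Dawn fields):

```python
import numpy as np

def admittance(g_lm, t_lm):
    """Degree-wise admittance Z_l = S_gt(l)/S_tt(l) and correlation
    from coefficient arrays indexed [l, m] (toy layout)."""
    S_gt = (g_lm * t_lm).sum(axis=1)   # cross-power per degree
    S_tt = (t_lm ** 2).sum(axis=1)     # topography power per degree
    S_gg = (g_lm ** 2).sum(axis=1)     # gravity power per degree
    Z = S_gt / S_tt
    corr = S_gt / np.sqrt(S_tt * S_gg)
    return Z, corr

rng = np.random.default_rng(0)
t_lm = rng.normal(size=(17, 17))                        # degrees 0..16, toy
g_lm = 0.05 * t_lm + 0.01 * rng.normal(size=(17, 17))   # correlated gravity

Z, corr = admittance(g_lm, t_lm)
```

Comparing the observed Z_l curve against the predictions of compensation models (Airy isostasy at various crustal densities) is how the crustal density bounds above are obtained.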

  13. Isotopically constrained lead sources in fugitive dust from unsurfaced roads in the southeast Missouri mining district.

    PubMed

    Witt, Emitt C; Pribil, Michael J; Hogan, John P; Wronkiewicz, David J

    2016-09-01

    The isotopic composition of lead (Pb) in fugitive dust suspended by a vehicle from 13 unsurfaced roads in Missouri was measured to identify the source of Pb within an established long-term mining area. A three end-member model using (207)Pb/(206)Pb and concentration as tracers resulted in fugitive dust samples plotting in the mixing field of well characterized heterogeneous end members. End members selected for this investigation include the (207)Pb/(206)Pb for 1) a Pb-mixture representing mine tailings, 2) aerosol Pb-impacted soils within close proximity to the Buick secondary recycling smelter, and 3) an average of soils, rock cores and drill cuttings representing the background conditions. Aqua regia total concentrations and (207)Pb/(206)Pb of mining area dust suggest that 35.4-84.3% of the source Pb in dust is associated with the mine tailings mixture, 9.1-52.7% is associated with the smelter mixture, and 0-21.6% is associated with background materials. Isotope ratios varied minimally within the operational phases of sequential extraction suggesting that mixing of all three Pb mixtures occurs throughout. Labile forms of Pb were attributed to all three end members. The extractable carbonate phase had as much as 96.6% of the total concentration associated with mine tailings, 51.8% associated with smelter deposition, and 34.2% with background. The next most labile geochemical phase (Fe + Mn Oxides) showed similar results with as much as 85.3% associated with mine tailings, 56.8% associated with smelter deposition, and 4.2% associated with the background soil. Published by Elsevier Ltd.
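Once concentration-weighted ratio mixing is written out, the three end-member calculation reduces to a 3 × 3 linear system. A sketch with hypothetical end-member concentrations and ratios, not the study's measured values:

```python
import numpy as np

# Three end-member mixing with Pb concentration C and 207Pb/206Pb ratio R
# (ratios mix weighted by concentration):
#   sum f_i = 1,  sum f_i*C_i = C_mix,  sum f_i*C_i*R_i = C_mix*R_mix
# All numbers below are hypothetical, chosen only to illustrate the algebra.
C = np.array([4000.0, 1500.0, 40.0])   # ppm Pb: tailings, smelter, background
R = np.array([0.780, 0.850, 0.830])    # 207Pb/206Pb of each end member

C_mix, R_mix = 2500.0, 0.800           # hypothetical fugitive-dust sample

A = np.vstack([np.ones(3), C, C * R])
b = np.array([1.0, C_mix, C_mix * R_mix])
f = np.linalg.solve(A, b)              # mixing fractions: tailings, smelter, background
```

Samples whose solved fractions fall outside [0, 1] plot outside the mixing field, i.e., the chosen end members cannot explain them.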

  14. Isotopically constrained lead sources in fugitive dust from unsurfaced roads in the southeast Missouri mining district

    USGS Publications Warehouse

    Witt, Emitt C.; Pribil, Michael; Hogan, John P; Wronkiewicz, David

    2016-01-01

    The isotopic composition of lead (Pb) in fugitive dust suspended by a vehicle from 13 unsurfaced roads in Missouri was measured to identify the source of Pb within an established long-term mining area. A three end-member model using 207Pb/206Pb and concentration as tracers resulted in fugitive dust samples plotting in the mixing field of well characterized heterogeneous end members. End members selected for this investigation include the 207Pb/206Pb for 1) a Pb-mixture representing mine tailings, 2) aerosol Pb-impacted soils within close proximity to the Buick secondary recycling smelter, and 3) an average of soils, rock cores and drill cuttings representing the background conditions. Aqua regia total concentrations and 207Pb/206Pb of mining area dust suggest that 35.4–84.3% of the source Pb in dust is associated with the mine tailings mixture, 9.1–52.7% is associated with the smelter mixture, and 0–21.6% is associated with background materials. Isotope ratios varied minimally within the operational phases of sequential extraction suggesting that mixing of all three Pb mixtures occurs throughout. Labile forms of Pb were attributed to all three end members. The extractable carbonate phase had as much as 96.6% of the total concentration associated with mine tailings, 51.8% associated with smelter deposition, and 34.2% with background. The next most labile geochemical phase (Fe + Mn Oxides) showed similar results with as much as 85.3% associated with mine tailings, 56.8% associated with smelter deposition, and 4.2% associated with the background soil.

  15. Fermentation profiles of Manzanilla-Aloreña cracked green table olives in different chloride salt mixtures.

    PubMed

    Bautista-Gallego, J; Arroyo-López, F N; Durán-Quintana, M C; Garrido-Fernández, A

    2010-05-01

    NaCl plays an important role in table olive processing, affecting the flavour and microbiological stability of the final product. However, consumers demand foods low in sodium, which makes it necessary to decrease levels of this mineral in the fruit. In this work, the effects of diverse mixtures of NaCl, CaCl(2) and KCl on the fermentation profiles of cracked, directly brined Manzanilla-Aloreña olives were studied by means of response surface methodology based on a simplex lattice mixture design with constraints. All salt combinations led to lactic acid processes. The growth of Enterobacteriaceae populations was always limited and partially inhibited by the presence of CaCl(2). Only the time to reach half-maximum populations and the decline rates of yeasts, which were higher as concentrations of NaCl or KCl increased, were affected by, and correspondingly modelled as a function of, the salt mixtures. However, lactic acid bacteria growth parameters could not be related to initial environmental conditions. They had a longer lag phase, slower growth and higher population levels than yeasts. Overall, the presence of CaCl(2) led to slower growth of Enterobacteriaceae and lactic acid bacteria than the traditional NaCl brine, but to higher yeast activity. The presence of CaCl(2) in the fermentation brines also led to higher water activity, lower pH and combined acidity, as well as a faster acidification, while NaCl and KCl had fairly similar behaviours. Apparently, NaCl may be substituted in diverse proportions with KCl or CaCl(2) without substantially disturbing water activity or the usual fermentation profiles, while producing olives with lower salt content. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
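A simplex-lattice or simplex-centroid mixture design of the kind used here blends every non-empty subset of components in equal proportions; the study's actual design additionally constrained the component ranges. A minimal, unconstrained sketch of the candidate runs for the three salts:

```python
from itertools import combinations

def simplex_centroid(components):
    """Simplex-centroid mixture design: every non-empty subset of
    components blended in equal proportions (2^q - 1 runs for q components)."""
    runs = []
    for k in range(1, len(components) + 1):
        for subset in combinations(components, k):
            runs.append({c: 1.0 / k for c in subset})
    return runs

design = simplex_centroid(["NaCl", "KCl", "CaCl2"])
# 7 runs: 3 pure salts, 3 binary 50:50 blends, 1 ternary centroid
```

Constrained versions restrict each proportion to an allowed interval and map the design onto the resulting sub-region of the simplex.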

  16. Chemistry of decomposition of freshwater wetland sedimentary organic material during ramped pyrolysis

    NASA Astrophysics Data System (ADS)

    Williams, E. K.; Rosenheim, B. E.

    2011-12-01

    Ramped pyrolysis methodology, such as that used in the programmed-temperature pyrolysis/combustion system (PTP/CS), improves radiocarbon analysis of geologic materials devoid of authigenic carbonate compounds and with low concentrations of extractable autochthonous organic molecules. The approach has improved sediment chronology in organic-rich sediments proximal to Antarctic ice shelves (Rosenheim et al., 2008) and constrained the carbon sequestration potential of suspended sediments in the lower Mississippi River (Roe et al., in review). Although ramped pyrolysis allows for separation of sedimentary organic material based upon relative reactivity, chemical information (i.e. chemical composition of pyrolysis products) is lost during the in-line combustion of pyrolysis products. A first order approximation of ramped pyrolysis/combustion system CO2 evolution, employing a simple Gaussian decomposition routine, has been useful (Rosenheim et al., 2008), but improvements may be possible. First, without prior compound-specific extractions, the molecular composition of sedimentary organic matter is unknown and/or unidentifiable. Second, even if determined as constituents of sedimentary organic material, many organic compounds have unknown or variable decomposition temperatures. Third, mixtures of organic compounds may result in significant chemistry within the pyrolysis reactor, prior to introduction of oxygen along the flow path. Gaussian decomposition of the reaction rate may be too simple to fully explain the combination of these factors. To relate both the radiocarbon age over different temperature intervals and the pyrolysis reaction thermograph (temperature (°C) vs. CO2 evolved (μmol)) obtained from PTP/CS to chemical composition of sedimentary organic material, we present a modeling framework based upon the ramped pyrolysis decomposition of simple mixtures of organic compounds (i.e. cellulose, lignin, plant fatty acids, etc.) often found in sedimentary organic material, to account for changes in thermograph shape. The decompositions will be compositionally verified by 13C NMR analysis of pyrolysis residues from interrupted reactions. This will allow for constraint of decomposition temperatures of individual compounds as well as chemical reactions between volatilized moieties in mixtures of these compounds. We will apply this framework with 13C NMR analysis of interrupted pyrolysis residues and radiocarbon data from PTP/CS analysis of sedimentary organic material from a freshwater marsh wetland in Barataria Bay, Louisiana. We expect to characterize the bulk chemical composition during pyrolysis as well as diagenetic changes with depth. Most importantly, we expect to constrain the potential and the limitations of this modeling framework for application to other depositional environments.
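The Gaussian decomposition discussed above models a thermogram as a sum of Gaussian components, one per pool of organic-matter reactivity. A sketch fitting synthetic, noise-free two-pool data (not PTP/CS output) with nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(T, a1, mu1, s1, a2, mu2, s2):
    """CO2-evolution model: sum of two Gaussian reactivity pools."""
    g = lambda a, mu, s: a * np.exp(-0.5 * ((T - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2)

T = np.linspace(100.0, 800.0, 351)         # ramp temperature, deg C
true = (3.0, 330.0, 40.0, 1.5, 520.0, 60.0)  # synthetic pool parameters
co2 = two_gaussians(T, *true)              # idealized, noise-free thermogram

p0 = (2.0, 300.0, 50.0, 1.0, 550.0, 50.0)  # rough initial guesses
popt, _ = curve_fit(two_gaussians, T, co2, p0=p0)
# popt recovers the component centers (~330 and ~520 deg C)
```

Real thermograms are noisier, the number of pools is unknown a priori, and (as the abstract argues) reactions between pools can distort peak shapes beyond what any sum of Gaussians captures.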

  17. A New Approach to Modeling Densities and Equilibria of Ice and Gas Hydrate Phases

    NASA Astrophysics Data System (ADS)

    Zyvoloski, G.; Lucia, A.; Lewis, K. C.

    2011-12-01

    The Gibbs-Helmholtz Constrained (GHC) equation is a new cubic equation of state that was recently derived by Lucia (2010) and Lucia et al. (2011) by constraining the energy parameter in the Soave form of the Redlich-Kwong equation to satisfy the Gibbs-Helmholtz equation. The key attributes of the GHC equation are: 1) It is a multi-scale equation because it uses the internal energy of departure, UD, as a natural bridge between the molecular and bulk phase length scales. 2) It does not require acentric factors, volume translation, regression of parameters to experimental data, binary (kij) interaction parameters, or other forms of empirical correlations. 3) It is a predictive equation of state because it uses a database of values of UD determined from NTP Monte Carlo simulations. 4) It can readily account for differences in molecular size and shape. 5) It has been successfully applied to non-electrolyte mixtures as well as weak and strong aqueous electrolyte mixtures over wide ranges of temperature, pressure and composition to predict liquid density and phase equilibrium with up to four phases. 6) It has been extensively validated with experimental data. 7) The AAD% error between predicted and experimental liquid density is 1% while the AAD% error in phase equilibrium predictions is 2.5%. 8) It has been used successfully within the subsurface flow simulation program FEHM. In this work we describe recent extensions of the multi-scale predictive GHC equation to modeling the phase densities and equilibrium behavior of hexagonal ice and gas hydrates. In particular, we show that radial distribution functions, which can be determined by NTP Monte Carlo simulations, can be used to establish correct standard state fugacities of ice Ih and gas hydrates. From this, it is straightforward to determine both the phase density of ice or gas hydrates as well as any equilibrium involving ice and/or hydrate phases.
    A number of numerical results for mixtures of N2, O2, CH4, CO2, water, and NaCl in permafrost conditions are presented to illustrate the predictive capabilities of the multi-scale GHC equation. In particular, we show that the GHC equation correctly predicts 1) The density of ice Ih and methane hydrate to within 1%. 2) The melting curve for hexagonal ice. 3) The hydrate-gas phase co-existence curve. 4) Various phase equilibria involving ice and hydrate phases. We also show that the GHC equation approach can be readily incorporated into subsurface flow simulation programs like FEHM to predict the behavior of permafrost and other reservoirs where ice and/or hydrates are present. Many geometric illustrations are used to elucidate key concepts. References: A. Lucia, A Multi-Scale Gibbs-Helmholtz Constrained Cubic Equation of State. J. Thermodynamics: Special Issue on Advances in Gas Hydrate Thermodynamics and Transport Properties. Available online [doi:10.1155/2010/238365]. A. Lucia, B.M. Bonk, A. Roy and R.R. Waterman, A Multi-Scale Framework for Multi-Phase Equilibrium Flash. Comput. Chem. Engng. In press.
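
    The cubic form that the GHC equation builds on can be illustrated with a short sketch. The following solves the standard Soave-Redlich-Kwong cubic (not the GHC equation itself, whose Gibbs-Helmholtz-constrained energy parameter is not reproduced here) for the vapor-phase compressibility factor; the methane critical constants are standard textbook values.

```python
import math

def srk_z(T, P, Tc, Pc, omega):
    """Vapor-phase compressibility factor from the standard
    Soave-Redlich-Kwong equation of state (NOT the GHC equation:
    the GHC form replaces this acentric-factor correlation with a
    Gibbs-Helmholtz constraint on the energy parameter)."""
    R = 8.314462618                      # J/(mol K)
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc))) ** 2
    a = 0.42748 * R ** 2 * Tc ** 2 * alpha / Pc
    b = 0.08664 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)

    def f(z):                            # Z^3 - Z^2 + (A-B-B^2)Z - AB
        return ((z - 1.0) * z + (A - B - B * B)) * z - A * B

    # scan (0, 2] for sign changes, then bisect each bracket
    roots, grid = [], [i / 1000.0 for i in range(1, 2001)]
    for z0, z1 in zip(grid, grid[1:]):
        if f(z0) * f(z1) <= 0.0:
            lo, hi = z0, z1
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return max(roots)                    # largest root = vapor phase

# methane near ambient conditions is nearly ideal, so Z is close to 1
z = srk_z(300.0, 1.0e5, Tc=190.6, Pc=4.599e6, omega=0.011)
```

    The bisection scan is deliberately simple; a production EOS solver would use Cardano's formula for the cubic.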

  18. Emulsion of Chloramphenicol: an Overwhelming Approach for Ocular Delivery.

    PubMed

    Ashara, Kalpesh C; Shah, Ketan V

    2017-03-01

    Ophthalmic formulations of chloramphenicol show poor bioavailability in the ocular cavity. The present study aimed at exploring the impact of different oil mixtures, in the form of an emulsion, on the permeability of chloramphenicol after ocular application. The oil mixture and the ratio of its components were selected by an equilibrium solubility method. An emulsifier was chosen according to its emulsification properties. A constrained simplex-centroid design was used for the development of the emulsion. Emulsions were evaluated for physicochemical properties, zone of inhibition, in-vitro diffusion, and ex-vivo local accumulation of chloramphenicol. The design was validated with a check-point batch, and reduced polynomial equations were developed. The emulsion was optimized using Design-Expert® 6.0.8 software. Osmolarity, ocular irritation, sterility, and isotonicity of the optimized batch were also assessed. Parker Neem®, olive, and peppermint oils were selected as the oil phase in the ratio 63.64:20.2:16.16. PEG-400 was selected as the emulsifier according to a pseudo-ternary phase diagram. The constrained simplex-centroid design was applied in the range of 25-39% water, 55-69% PEG-400, 5-19% optimized oil mixture, and 1% chloramphenicol. For the in-vitro and ex-vivo studies, an unpaired Student's t-test showed a significant difference between the optimized batch of the emulsion and Chloramphenicol eye caps (a commercial product), while both were equally safe. The optimized batch of the chloramphenicol emulsion was found to be as safe as, and more effective than, Chloramphenicol eye caps.
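
    A simplex-centroid design like the one used above is easy to generate. The sketch below enumerates the classical (unconstrained) design points for a three-component mixture; the paper's design additionally restricts each component to a sub-range, which is not reproduced here.

```python
from itertools import combinations

def simplex_centroid(n):
    """All 2^n - 1 runs of a simplex-centroid mixture design for n
    components: pure blends, 50/50 binary blends, ..., up to the
    overall centroid. Every run's proportions sum to 1."""
    points = []
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            p = [0.0] * n
            for i in subset:
                p[i] = 1.0 / k           # equal shares over the subset
            points.append(tuple(p))
    return points

design = simplex_centroid(3)
# 7 runs: 3 vertices, 3 edge midpoints, 1 centroid
```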

  19. Constrained Fisher Scoring for a Mixture of Factor Analyzers

    DTIC Science & Technology

    2016-09-01

    Gene T. Whipps. Approved for public release; distribution is unlimited. …expectation-maximization algorithm with similar computational requirements. Lastly, we demonstrate the efficacy of the proposed method for learning a mixture of factor analyzers on synthetic and manifold-learning examples.

  20. Behaviour of mudflows realized in a laboratory apparatus and relative numerical calibration

    NASA Astrophysics Data System (ADS)

    Brezzi, Lorenzo; Gabrieli, Fabio; Kaitna, Roland; Cola, Simonetta

    2016-04-01

    Nowadays, numerical simulations are indispensable tools for researchers reproducing phenomena such as earth-flows, debris-flows and mudflows. One of the most difficult and problematic phases is the choice and calibration of the parameters to be included in the model at the real scale. It can be useful to start from laboratory experiments that simplify the case study as much as possible, with the aim of reducing the uncertainties related to the triggering and propagation of a real flow. In this way, the geometry of the problem and the identification of the triggering mass are well known and constrained in the experimental tests as well as in the numerical simulations, and the focus of the study may be moved to the material parameters. This article analyzes the behavior of different mixtures of water and kaolin flowing in a laboratory channel. The simple experimental apparatus consists of a 10 dm3 prismatic container that discharges the material into a channel 2 m long and 0.16 m wide. The chute base was roughened with glued sand and inclined at 21°. Initially, we evaluated the run-out lengths and the spread and shape of the deposit for five different mixtures. A large quantity of information was obtained from 3 laser sensors attached to the channel and from photogrammetry, which yields a 3D model of the deposit shape at the end of the flow. Subsequently, we reproduced these physical phenomena using the numerical model Geoflow-SPH (Pastor et al., 2008; 2014), governed by a Bingham rheological law (O'Brien & Julien, 1988), and calibrated the different tests by back-analysis to assess the optimum parameters. The final goal was to understand how the parameters vary with the kaolin content of the mixtures.
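
    The Bingham rheological law used in the back-analysis relates shear stress to shear rate through a yield stress and a plastic viscosity. A minimal sketch with hypothetical parameter values (the actual calibrated kaolin parameters are not given here):

```python
def bingham_stress(gamma_dot, tau_y, mu):
    """Shear stress of a Bingham fluid under simple shear:
    tau = tau_y + mu * gamma_dot (no flow below the yield stress tau_y)."""
    return tau_y + mu * gamma_dot

def bingham_shear_rate(tau, tau_y, mu):
    """Inverse relation: the material flows only above the yield stress."""
    return max(0.0, (tau - tau_y) / mu)

# hypothetical kaolin-water mixture: 50 Pa yield stress, 10 Pa s viscosity
rate = bingham_shear_rate(120.0, tau_y=50.0, mu=10.0)   # -> 7.0 1/s
stress = bingham_stress(7.0, tau_y=50.0, mu=10.0)       # -> 120.0 Pa
```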

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobolewska, M. A.; Siemiginowska, A.; Kelly, B. C.

    We study the γ-ray variability of 13 blazars observed with the Fermi/Large Area Telescope (LAT). These blazars have the most complete light curves collected during the first four years of the Fermi sky survey. We model them with the Ornstein-Uhlenbeck (OU) process or a mixture of OU processes. The OU process has a power spectral density (PSD) proportional to 1/f^α, with α changing at a characteristic timescale, τ_0, from 0 (τ >> τ_0) to 2 (τ << τ_0). The PSD of the mixed OU process has two characteristic timescales and an additional intermediate region with 0 < α < 2. We show that the OU model provides a good description of the Fermi/LAT light curves of three blazars in our sample. For the first time, we constrain a characteristic γ-ray timescale of variability in two BL Lac sources, 3C 66A and PKS 2155-304 (τ_0 ≅ 25 days and τ_0 ≅ 43 days, respectively, in the observer's frame), which are longer than the soft X-ray timescales detected in blazars and Seyfert galaxies. We find that the mixed OU process approximates the light curves of the remaining 10 blazars better than the OU process. We derive limits on their long and short characteristic timescales, and infer that their Fermi/LAT PSDs resemble power-law functions. We constrain the PSD slopes for all but one source in the sample. We find hints of sub-hour Fermi/LAT variability in four flat spectrum radio quasars. We discuss the implications of our results for theoretical models of blazar variability.
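
    The OU process described above has a simple exact discretization, which makes the characteristic timescale easy to illustrate. A minimal sketch (the parameters are illustrative, not the fitted blazar values):

```python
import math, random

def simulate_ou(n, tau, sigma, dt, seed=1):
    """Exact discretization of a zero-mean Ornstein-Uhlenbeck process.
    Its PSD is flat for timescales >> tau and falls as 1/f^2 for
    timescales << tau, i.e. the slope alpha bends from 0 to 2 at tau."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)                 # one-step memory
    s = sigma * math.sqrt(1.0 - a * a)      # stationary innovation scale
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + s * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

x = simulate_ou(200000, tau=25.0, sigma=1.0, dt=1.0)
# the lag-1 autocorrelation of the discretized process is exp(-dt/tau)
mean = sum(x) / len(x)
num = sum((u - mean) * (v - mean) for u, v in zip(x, x[1:]))
den = sum((u - mean) ** 2 for u in x)
rho1 = num / den
```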

  2. Predictive models attribute effects on fish assemblages to toxicity and habitat alteration.

    PubMed

    de Zwart, Dick; Dyer, Scott D; Posthuma, Leo; Hawkins, Charles P

    2006-08-01

    Biological assessments should both estimate the condition of a biological resource (magnitude of alteration) and provide environmental managers with a diagnosis of the potential causes of impairment. Although methods of quantifying condition are well developed, identifying and proportionately attributing impairment to probable causes remain problematic. Furthermore, analyses of both condition and cause have often been difficult to communicate. We developed an approach that (1) links fish, habitat, and chemistry data collected from hundreds of sites in Ohio (USA) streams, (2) assesses the biological condition at each site, (3) attributes impairment to multiple probable causes, and (4) provides the results of the analyses in simple-to-interpret pie charts. The data set was managed using a geographic information system. Biological condition was assessed using a RIVPACS (river invertebrate prediction and classification system)-like predictive model. The model provided probabilities of capture for 117 fish species based on the geographic location of sites and local habitat descriptors. Impaired biological condition was defined as the proportion of those native species predicted to occur at a site that were observed. The potential toxic effects of exposure to mixtures of contaminants were estimated using species sensitivity distributions and mixture toxicity principles. Generalized linear regression models described species abundance as a function of habitat characteristics. Statistically linking biological condition, habitat characteristics including mixture risks, and species abundance allowed us to evaluate the losses of species with environmental conditions. Results were mapped as simple effect and probable-cause pie charts (EPC pie diagrams), with pie sizes corresponding to magnitude of local impairment, and slice sizes to the relative probable contributions of different stressors. 
The types of models we used have been successfully applied in ecology and ecotoxicology, but they have not previously been used in concert to quantify impairment and its likely causes. Although data limitations constrained our ability to examine complex interactions between stressors and species, the direct relationships we detected likely represent conservative estimates of stressor contributions to local impairment. Future refinements of the general approach and specific methods described here should yield even more promising results.

  3. Reflected stochastic differential equation models for constrained animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path, and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
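
    The idea of reflecting a process at a barrier can be sketched in one dimension. The toy below (a reflected Gaussian random walk on an interval, not the authors' SDE machinery or data-augmentation scheme) keeps every simulated position inside the allowed region:

```python
import random

def reflected_walk(n, dt, sigma, lo, hi, x0, seed=7):
    """Euler-type scheme for a Brownian motion reflected at two barriers:
    any step that would cross a barrier is folded back inside. This is a
    simple stand-in for reflected stochastic differential equations."""
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(n):
        x += sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        # fold the position back until it lies inside [lo, hi]
        while not (lo <= x <= hi):
            if x < lo:
                x = 2.0 * lo - x
            else:
                x = 2.0 * hi - x
        path.append(x)
    return path

path = reflected_walk(50000, dt=1.0, sigma=1.0, lo=0.0, hi=10.0, x0=5.0)
# the stationary distribution of this reflected walk is uniform on [0, 10]
```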

  4. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. 
The concepts presented are general and can be applied to any measurement scenario; here, the idea is illustrated using a short-arc, angles-only observation scenario.

  5. Ultimate fate of constrained voters

    NASA Astrophysics Data System (ADS)

    Vazquez, F.; Redner, S.

    2004-09-01

    We examine the ultimate fate of individual opinions in a socially interacting population of leftists, centrists and rightists. In an elemental interaction between agents, a centrist and a leftist can both become centrists or both become leftists with equal rates (and similarly for a centrist and a rightist). However leftists and rightists do not interact. This interaction step between pairs of agents is applied repeatedly until the system can no longer evolve. In the mean-field limit, we determine the exact probability that the system reaches consensus (either leftist, rightist or centrist) or a frozen mixture of leftists and rightists as a function of the initial composition of the population. We also determine the mean time until the final state is reached. Some implications of our results for the ultimate fate in a limit of the Axelrod model are discussed.
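
    The interaction rule described above is straightforward to simulate. A minimal mean-field Monte Carlo sketch (any pair of agents can interact; the population sizes are illustrative):

```python
import random

def constrained_voter(n_left, n_center, n_right, seed=3):
    """Mean-field constrained voter model: a random pair interacts;
    centrists compromise with either extreme, but leftists and
    rightists ignore each other. Runs until the population freezes
    (no centrists left) or reaches centrist consensus."""
    rng = random.Random(seed)
    pop = ["L"] * n_left + ["C"] * n_center + ["R"] * n_right
    while 0 < pop.count("C") < len(pop):
        i, j = rng.sample(range(len(pop)), 2)
        a, b = pop[i], pop[j]
        if "C" in (a, b) and a != b:
            pop[i] = pop[j] = rng.choice([a, b])  # both adopt one opinion
    return pop.count("L"), pop.count("C"), pop.count("R")

final = constrained_voter(30, 40, 30)
# final state: either centrist consensus or a frozen leftist/rightist mix
```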

  6. A probabilistic approach to remote compositional analysis of planetary surfaces

    USGS Publications Warehouse

    Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.

    2017-01-01

    Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.
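
    The value of a Bayesian treatment is that it returns a distribution of acceptable fits rather than a single best fit. The toy below applies a random-walk Metropolis sampler to a linear two-endmember unmixing problem with made-up spectra; it is a linear stand-in for the nonlinear Hapke model used in the paper, and all numbers are hypothetical.

```python
import math, random

random.seed(11)
e1 = [0.2, 0.4, 0.6, 0.5, 0.3]   # hypothetical endmember spectrum 1
e2 = [0.7, 0.5, 0.3, 0.4, 0.6]   # hypothetical endmember spectrum 2
true_f, noise = 0.3, 0.01
obs = [true_f * a + (1.0 - true_f) * b + random.gauss(0.0, noise)
       for a, b in zip(e1, e2)]

def log_post(f):
    """Log-posterior of the mixing fraction: flat prior on [0, 1] plus
    a Gaussian likelihood for the observed 5-band spectrum."""
    if not 0.0 <= f <= 1.0:
        return -math.inf
    sse = sum((o - (f * a + (1.0 - f) * b)) ** 2
              for o, a, b in zip(obs, e1, e2))
    return -sse / (2.0 * noise ** 2)

f, lp, samples = 0.5, log_post(0.5), []
for _ in range(20000):
    prop = f + random.gauss(0.0, 0.05)        # random-walk proposal
    lp_prop = log_post(prop)
    if random.random() < math.exp(min(0.0, lp_prop - lp)):
        f, lp = prop, lp_prop                 # accept
    samples.append(f)
post = samples[2000:]                         # discard burn-in
post_mean = sum(post) / len(post)
```

    The spread of `post` (not just `post_mean`) is what quantifies the abundance uncertainty the abstract discusses.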

  7. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization, in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  8. Initial Estimates of Optical Constants of Mars Candidate Materials

    NASA Technical Reports Server (NTRS)

    Rousch, Ted L.; Brown, Adrian Jon; Bishop, Janice L.; Blake, David F.; Bristow, Thomas F.

    2013-01-01

    Data obtained at visible and near-infrared wavelengths by OMEGA on Mars Express and CRISM on MRO provide definitive evidence for the presence of phyllosilicates and other hydrated phases on Mars. A diverse range of both Fe/Mg-OH and Al- OH-bearing phyllosilicates were identified including the smectites, nontronite, saponite, and montmorillonite. To constrain the abundances of these phyllosilicates, spectral analyses of mixtures are needed. We report on our effort to enable the quantitative evaluation of the abundance of hydrated-hydroxylated silicates when they are contained in mixtures. We include two component mixtures of hydrated/ hydroxylated silicates with each other and with two analogs for other Martian materials; pyroxene (enstatite) and palagonitic soil (an alteration product of basaltic glass, hereafter referred to as palagonite). For the hydrated-hydroxylated silicates we include saponite and montmorillonite (Mg- and Al-rich smectites). We prepared three size separates of each end-member for study: 20-45, 63-90, and 125-150 micron.

  9. Assessing aquifer vulnerability from lumped parameter modeling of modern water proportions in groundwater mixtures - Application to nitrate pollution in California's South Coast Range

    NASA Astrophysics Data System (ADS)

    Hagedorn, B.; Ruane, M.; Clark, N.

    2017-12-01

    In California, the overuse of synthetic fertilizers and manure in agriculture has caused nitrate (NO3) to become one of the state's most widespread groundwater pollutants. Given that nitrogen fertilizer applications have steadily increased since the 1950s, and that soil percolation and recharge transit times in California can exceed timescales of decades, the nitrate impact on groundwater resources is likely to remain a legacy for years and even decades to come. This study presents a methodology for groundwater vulnerability assessment that operates independently of difficult-to-constrain soil and aquifer property data (i.e., saturated thickness, texture, porosity, conductivity, etc.) and instead utilizes groundwater age and, more importantly, groundwater mixing information to illustrate actual vulnerability at the water table. To accomplish this, the modern (i.e., less than 60-year-old) water proportion (MWP) in groundwater mixtures is computed via lumped parameter modeling of chemical tracer (i.e., 3H, 14C and tritiogenic 3He) data. These MWPs are then linked to groundwater dissolved oxygen (DO) values to describe the risk for soil zone-derived nitrate to accumulate in the saturated zone. Preliminary studies carried out for 71 wells in California's South Coast Range-Coastal (SCRC) study unit reveal MWP values of 3.24% to 21.8%, derived from binary dispersion models. The fact that high MWPs generally coincide with oxic (DO ≥1.5 mg/L) groundwater conditions underscores the risk of increased groundwater NO3 pollution for many of the tested wells. These results support the conclusion that agricultural management and policy objectives should incorporate groundwater vulnerability models developed at the same spatial scale as the decision making.
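
    In its simplest binary form, the modern water proportion follows directly from two-endmember mixing of a tracer. A sketch with hypothetical tritium values (the study itself uses lumped-parameter dispersion models of 3H, 14C and 3He data, not this shortcut):

```python
def modern_water_proportion(c_sample, c_modern, c_old=0.0):
    """Modern water proportion (MWP) from a binary mixing model:
    c_sample = MWP * c_modern + (1 - MWP) * c_old, solved for MWP.
    Concentrations are in tritium units (TU); pre-1950s water is
    taken as essentially tritium-dead (c_old = 0)."""
    return (c_sample - c_old) / (c_modern - c_old)

# hypothetical well: 0.5 TU measured, modern recharge ~ 2.5 TU
mwp = modern_water_proportion(0.5, 2.5)   # -> 0.2, i.e. 20% modern water
```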

  10. Sources and timing of anthropogenic pollution in the Ensenada de San Simon (inner Ria de Vigo), Galicia, NW Spain: an application of mixture-modelling and nonlinear optimization to recent sedimentation.

    PubMed

    Howarth, Richard J; Evans, Graham; Croudace, Ian W; Cundy, Andrew B

    2005-03-20

    The Ensenada de San Simon is the inner part of the Ria de Vigo, one of the major mesotidal rias of the Galician coast, NW Spain. The geochemistry of its bottom sediments can be accounted for in terms of both natural and anthropogenic sources. Mixture-modelling enables much of the Cr, Ni, V, Cu, Pb and Zn concentrations of the bottom and subaqueous sediments to be explained by sediment input from the river systems and faecal matter from manmade mussel rafts. The compositions and relative contributions of additional, unknown, sources of anomalous heavy-metal concentrations are quantified using constrained nonlinear optimization. The pattern of metal enrichment is attributed to: material carried in solution and suspension in marine water entering the Ensenada from the polluted industrial areas of the adjacent Ria de Vigo; wind-borne urban dusts and/or vehicular emissions from the surrounding network of roads and a motorway road-bridge over the Estrecho de Rande; industrial and agricultural pollution from the R. Redondela; and waste from a former ceramics factory near the mouth of the combined R. Oitaben and R. Verdugo. Using (137)Cs dating, it is suggested that heavy metal build-up in the sediments since the late 1970s followed development of inshore fisheries and introduction of the mussel rafts (ca. 1960) and increasing industrialisation.

  11. Shear of ordinary and elongated granular mixtures

    NASA Astrophysics Data System (ADS)

    Hensley, Alexander; Kern, Matthew; Marschall, Theodore; Teitel, Stephen; Franklin, Scott

    2015-03-01

    We present an experimental and computational study of a mixture of discs and moderate aspect-ratio ellipses under two-dimensional annular planar Couette shear. Experimental particles are cut from acrylic sheet, are essentially incompressible, and constrained in the thin gap between two concentric cylinders. The annular radius of curvature is much larger than the particles, and so the experiment is quasi-2d and allows for arbitrarily large pure-shear strains. Synchronized video cameras and software identify all particles and track them as they move from the field of view of one camera to another. We are particularly interested in the global and local properties as the mixture ratio of discs to ellipses varies. Global quantities include average shear rate and distribution of particle species as functions of height, while locally we investigate the orientation of the ellipses and non-affine events that can be characterized as shear transformational zones or possess a quadrupole signature observed previously in systems of purely circular particles. Discrete Element Method simulations on mixtures of circles and spherocylinders extend the study to the dynamics of the force network and energy dissipated as the system evolves. Supported by NSF CBET #1243571 and PRF #51438-UR10.

  12. A New Family of Solvable Pearson-Dirichlet Random Walks

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2011-07-01

    An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma-distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q>0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d_0 and any n ≥ 2 when q is either q = d/2 - 1 (d_0 = 3) or q = d - 1 (d_0 = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, then the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression of the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend both on d and n, and of Bessel numbers independent of d.
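
    A planar Pearson-Dirichlet walk is easy to simulate by drawing the Dirichlet step lengths as normalized gamma variates. A minimal sketch with q = d = 2, the planar case discussed above (only simulation, not the closed-form density):

```python
import math, random

def pearson_dirichlet_walk(n, q, rng):
    """Endpoint distance of one n-step planar Pearson-Dirichlet walk:
    step lengths are Dirichlet(q, ..., q) distributed (they sum to 1),
    directions are independent and uniform on [0, 2*pi)."""
    g = [rng.gammavariate(q, 1.0) for _ in range(n)]
    total = sum(g)
    lengths = [v / total for v in g]   # Dirichlet via normalized gammas
    x = y = 0.0
    for L in lengths:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += L * math.cos(theta)
        y += L * math.sin(theta)
    return math.hypot(x, y)

rng = random.Random(5)
r = [pearson_dirichlet_walk(3, q=2.0, rng=rng) for _ in range(5000)]
# total walk length is 1, so no endpoint can lie outside the unit disc
```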

  13. Biochar particle size, shape, and porosity act together to influence soil water properties

    PubMed Central

    Dugan, Brandon; Masiello, Caroline A.; Gonnermann, Helge M.

    2017-01-01

    Many studies report that, under some circumstances, amending soil with biochar can improve field capacity and plant-available water. However, little is known about the mechanisms that control these improvements, making it challenging to predict when biochar will improve soil water properties. To develop a conceptual model explaining biochar’s effects on soil hydrologic processes, we conducted a series of well constrained laboratory experiments using a sand matrix to test the effects of biochar particle size and porosity on soil water retention curves. We showed that biochar particle size affects soil water storage through changing pore space between particles (interpores) and by adding pores that are part of the biochar (intrapores). We used these experimental results to better understand how biochar intrapores and biochar particle shape control the observed changes in water retention when capillary pressure is the main component of soil water potential. We propose that biochar’s intrapores increase water content of biochar-sand mixtures when soils are drier. When biochar-sand mixtures are wetter, biochar particles’ elongated shape disrupts the packing of grains in the sandy matrix, increasing the volume between grains (interpores) available for water storage. These results imply that biochars with a high intraporosity and irregular shapes will most effectively increase water storage in coarse soils. PMID:28598988

  14. Biochar particle size, shape, and porosity act together to influence soil water properties.

    PubMed

    Liu, Zuolin; Dugan, Brandon; Masiello, Caroline A; Gonnermann, Helge M

    2017-01-01

    Many studies report that, under some circumstances, amending soil with biochar can improve field capacity and plant-available water. However, little is known about the mechanisms that control these improvements, making it challenging to predict when biochar will improve soil water properties. To develop a conceptual model explaining biochar's effects on soil hydrologic processes, we conducted a series of well constrained laboratory experiments using a sand matrix to test the effects of biochar particle size and porosity on soil water retention curves. We showed that biochar particle size affects soil water storage through changing pore space between particles (interpores) and by adding pores that are part of the biochar (intrapores). We used these experimental results to better understand how biochar intrapores and biochar particle shape control the observed changes in water retention when capillary pressure is the main component of soil water potential. We propose that biochar's intrapores increase water content of biochar-sand mixtures when soils are drier. When biochar-sand mixtures are wetter, biochar particles' elongated shape disrupts the packing of grains in the sandy matrix, increasing the volume between grains (interpores) available for water storage. These results imply that biochars with a high intraporosity and irregular shapes will most effectively increase water storage in coarse soils.

  15. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise within the available clusters. It is concluded that heart disease prediction can be done effectively by identifying the major risks componentwise using Poisson mixture regression models.
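
    A two-component Poisson mixture (without the regression part) can be fit with a few lines of EM. A minimal sketch on synthetic counts; the concomitant-variable models in the paper additionally make the rates and weights functions of covariates, which is not reproduced here.

```python
import math, random

def poisson_sample(rng, lam):
    """Knuth's algorithm for Poisson-distributed counts."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_mixture_em(data, lam1, lam2, w, iters=100):
    """EM for a two-component Poisson mixture: alternate computing each
    count's responsibility (E-step) and re-estimating the weight and
    the two rates from the responsibilities (M-step)."""
    for _ in range(iters):
        r = []
        for x in data:
            p1 = w * math.exp(-lam1) * lam1 ** x / math.factorial(x)
            p2 = (1.0 - w) * math.exp(-lam2) * lam2 ** x / math.factorial(x)
            r.append(p1 / (p1 + p2))
        w = sum(r) / len(r)
        lam1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        lam2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / (len(r) - sum(r))
    return lam1, lam2, w

rng = random.Random(2)
data = ([poisson_sample(rng, 2.0) for _ in range(300)] +
        [poisson_sample(rng, 9.0) for _ in range(300)])
lam1, lam2, w = poisson_mixture_em(data, 1.0, 5.0, 0.5)
# the fitted rates should recover ~2 and ~9 with roughly equal weights
```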

  16. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise within the available clusters. It is concluded that heart disease prediction can be done effectively by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611

  17. Constrained reduced-order models based on proper orthogonal decomposition

    DOE PAGES

    Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...

    2017-04-09

    A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
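
    The POD basis underlying such ROMs can be computed by the method of snapshots. The sketch below recovers the dominant mode of a synthetic snapshot set by power iteration on the (small) snapshot correlation matrix; the KKT-based constraint handling of the C-ROM is not reproduced here, and the snapshot data are made up.

```python
import math, random

def dominant_pod_mode(snapshots, iters=500, seed=0):
    """Leading POD mode via the method of snapshots: power iteration
    on the m x m snapshot correlation matrix, then assembly of the
    physical-space mode from the snapshot coefficients."""
    m = len(snapshots)
    C = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j])) / m
          for j in range(m)] for i in range(m)]
    rng = random.Random(seed)
    v = [rng.random() for _ in range(m)]
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    n = len(snapshots[0])
    mode = [sum(v[i] * snapshots[i][k] for i in range(m)) for k in range(n)]
    norm = math.sqrt(sum(x * x for x in mode))
    return [x / norm for x in mode]

# snapshots dominated by a single sine mode plus small noise
random.seed(1)
base = [math.sin(2.0 * math.pi * k / 32) for k in range(32)]
snaps = [[(1.0 + 0.1 * t) * b + 0.01 * random.gauss(0.0, 1.0) for b in base]
         for t in range(8)]
mode = dominant_pod_mode(snaps)
# the recovered mode should align (up to sign) with the sine shape
```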

  18. Digital Image Restoration Under a Regression Model - The Unconstrained, Linear Equality and Inequality Constrained Approaches

    DTIC Science & Technology

    1974-01-01

    Technical Report 520: Digital Image Restoration Under a Regression Model - The Unconstrained, Linear Equality and Inequality Constrained Approaches. Nelson Delfino d'Avila Mascarenhas, January 1974. ... a two-dimensional form adequately describes the linear model. A discretization is performed by using quadrature methods. By trans…

  19. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    PubMed

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

    The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.

  20. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  1. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach was so far not developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and we demonstrate improvements of peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
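
    The partition-then-aggregate idea can be sketched as follows; the threshold value, the synthetic spectrum, and the moment-based single-Gaussian fit per fragment are simplifying assumptions standing in for the paper's full per-fragment Gaussian mixture decomposition.

```python
import math

def partition_signal(intensity, threshold):
    """Split the spectrum into contiguous fragments of above-threshold signal."""
    fragments, current = [], []
    for i, y in enumerate(intensity):
        if y > threshold:
            current.append(i)
        elif current:
            fragments.append(current)
            current = []
    if current:
        fragments.append(current)
    return fragments

def fit_fragment(mz, intensity, idx):
    """Moment-based single-Gaussian fit for one fragment (a stand-in for
    decomposing the fragment into a full Gaussian mixture)."""
    total = sum(intensity[i] for i in idx)
    mean = sum(intensity[i] * mz[i] for i in idx) / total
    var = sum(intensity[i] * (mz[i] - mean) ** 2 for i in idx) / total
    return {"weight": total, "mean": mean, "sigma": math.sqrt(var)}

# Two well-separated synthetic peaks on an m/z axis.
mz = [i * 0.1 for i in range(200)]
spec = [math.exp(-((x - 5.0) ** 2) / 0.02)
        + 2.0 * math.exp(-((x - 15.0) ** 2) / 0.08) for x in mz]

# Partition, fit each fragment separately, then aggregate the per-fragment
# components into a mixture model of the whole spectrum.
model = [fit_fragment(mz, spec, idx) for idx in partition_signal(spec, 0.01)]
print([round(c["mean"], 1) for c in model])  # → [5.0, 15.0]
```

    Because fragments are processed independently, this scheme parallelizes naturally over a whole spectrum, which is what makes automated whole-spectrum analysis tractable.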

  2. Mineralogic constraints on sulfur-rich soils from Pancam spectra at Gusev crater, Mars

    USGS Publications Warehouse

    Johnson, J. R.; Bell, J.F.; Cloutis, E.; Staid, M.; Farrand, W. H.; McCoy, T.; Rice, M.; Wang, A.; Yen, A.

    2007-01-01

    The Mars Exploration Rover (MER) Spirit excavated sulfur-rich soils exhibiting high albedo and relatively white to yellow colors at three main locations on and south of Husband Hill in Gusev crater, Mars. The multispectral visible/near-infrared properties of these disturbed soils revealed by the Pancam stereo color camera vary appreciably over small spatial scales, but exhibit spectral features suggestive of ferric sulfates. Spectral mixture models constrain the mineralogy of these soils to include ferric sulfates in various states of hydration, such as ferricopiapite [Fe²⁺₂/₃Fe³⁺₄(SO₄)₆(OH)₂·20(H₂O)], hydronium jarosite [(H₃O)Fe³⁺₃(SO₄)₂(OH)₆], fibroferrite [Fe³⁺(SO₄)(OH)·5(H₂O)], rhomboclase [HFe³⁺(SO₄)₂·4(H₂O)], and paracoquimbite [Fe³⁺₂(SO₄)₃·9(H₂O)]. Copyright 2007 by the American Geophysical Union.

  3. Identifiability in N-mixture models: a large-scale screening test with bird data.

    PubMed

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models, or the use of external information via informative priors or penalized likelihoods may help. © 2017 by the Ecological Society of America.
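
    Concretely, the binomial N-mixture likelihood for one site marginalizes a Poisson abundance prior over the repeated counts; the truncation bound, parameter grid, and counts below are illustrative assumptions, not values from the screening study.

```python
import math

def site_log_likelihood(counts, lam, p, K=100):
    """Log-likelihood of repeated counts at one site: abundance N ~ Poisson(lam),
    each count y ~ Binomial(N, p), with N marginalized up to truncation K."""
    total = 0.0
    prior = math.exp(-lam)  # P(N = 0)
    for N in range(K + 1):
        if N >= max(counts):
            detect = 1.0
            for y in counts:
                detect *= math.comb(N, y) * p ** y * (1.0 - p) ** (N - y)
            total += prior * detect
        prior *= lam / (N + 1)  # Poisson recurrence: P(N+1) = P(N) * lam / (N+1)
    return math.log(total)

def log_likelihood(data, lam, p):
    return sum(site_log_likelihood(c, lam, p) for c in data)

# Hypothetical counts from 3 sites x 3 visits.
data = [[3, 4, 2], [1, 0, 2], [5, 6, 4]]

# A coarse grid search illustrates how abundance (lam) and detection (p)
# are estimated jointly from the counts alone.
grid = [(lam, p) for lam in (2, 4, 6, 8, 10) for p in (0.3, 0.5, 0.7)]
best = max(grid, key=lambda t: log_likelihood(data, *t))
print(best)
```

    The identifiability issue discussed above shows up in practice as a flat likelihood surface: markedly different (lam, p) pairs can attain nearly equal log-likelihoods, which is why checking (and, if needed, simpler mixtures or informative priors) is recommended.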

  4. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.

  5. Thermal inertia and surface heterogeneity on Mars

    NASA Astrophysics Data System (ADS)

    Putzig, Nathaniel E.

    Thermal inertia derived from temperature observations is critical for understanding surface geology and assessing potential landing sites on Mars. Derivation methods generally assume uniform surface properties for any given observation. Consequently, horizontal heterogeneity and near-surface layering may yield apparent thermal inertia that varies with time of day and season. To evaluate the effects of horizontal heterogeneity, I modeled the thermal behavior of surfaces containing idealized material mixtures (dust, sand, duricrust, and rocks) and differing slope facets. These surfaces exhibit diurnal and seasonal variability in apparent thermal inertia of several hundred thermal inertia units (1 tiu = 1 J m⁻² K⁻¹ s⁻¹/²), even for components with moderately contrasting thermal properties. To isolate surface effects on the derived thermal inertia of Mars, I mapped inter-annual and seasonal changes in albedo and atmospheric dust opacity, accounting for their effects in a modified derivation algorithm. Global analysis of three Mars years of Mars Global Surveyor Thermal Emission Spectrometer (MGS-TES) data reveals diurnal and seasonal variations of ~200 tiu in the mid-latitudes and 600 tiu or greater in the polar regions. Correlation of TES results and modeled apparent thermal inertia of heterogeneous surfaces indicates pervasive surface heterogeneity on Mars. At TES resolution, the near-surface thermal response is broadly dominated by layering and is consistent with the presence of duricrusts over fines in the mid-latitudes and dry soils over ground ice in the polar regions. Horizontal surface mixtures also play a role and may dominate at higher resolution. In general, thermal inertia obtained from single observations or annually averaged maps may misrepresent surface properties. In lieu of a robust heterogeneous-surface derivation technique, repeat coverage can be used together with forward-modeling results to constrain the near-surface heterogeneity of Mars.

  6. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    PubMed

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, based on single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the mode of toxic action may depend on the combinations and concentrations of the tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions were observed for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
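
    The two reference models can be written down directly from single-chemical concentration-response curves; the log-logistic form, EC50s, slopes, and concentrations below are hypothetical values chosen only to illustrate the calculation.

```python
def effect(c, ec50, slope):
    """Fraction affected at concentration c under a log-logistic curve."""
    if c <= 0:
        return 0.0
    return 1.0 / (1.0 + (ec50 / c) ** slope)

def ia_effect(concs, params):
    """Independent action: combine single-chemical effects as independent
    probabilities, i.e. multiply the survival fractions."""
    survival = 1.0
    for c, (ec50, slope) in zip(concs, params):
        survival *= 1.0 - effect(c, ec50, slope)
    return 1.0 - survival

def ca_effect(concs, params, tol=1e-9):
    """Concentration addition: bisect for the effect level x at which the
    toxic units sum(c_i / EC_x,i) equal 1."""
    def toxic_units(x):
        tu = 0.0
        for c, (ec50, slope) in zip(concs, params):
            ec_x = ec50 * (x / (1.0 - x)) ** (1.0 / slope)  # inverse of effect()
            tu += c / ec_x
        return tu
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toxic_units(mid) > 1.0:
            lo = mid  # the mixture produces more than effect level mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

params = [(10.0, 2.0), (50.0, 1.5)]   # (EC50, slope) per metal, hypothetical
concs = [5.0, 25.0]                   # half an EC50 of each metal
print(round(ca_effect(concs, params), 3), round(ia_effect(concs, params), 3))
```

    With half an EC50 of each component the toxic units sum to exactly 1, so CA predicts 50% effect, while IA predicts about 41% here; comparing such predictions against observed mixture effects is the basis of the model comparison described above.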

  7. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. 
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906

  8. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses. The experimental design was completed by a center-point glass, a Vitreous State Laboratory glass, and replicates of the center-point and Vitreous State Laboratory glasses.

  9. Detecting Mixtures from Structural Model Differences Using Latent Variable Mixture Modeling: A Comparison of Relative Model Fit Statistics

    ERIC Educational Resources Information Center

    Henson, James M.; Reise, Steven P.; Kim, Kevin H.

    2007-01-01

    The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several…

  10. Vibration control of multiferroic fibrous composite plates using active constrained layer damping

    NASA Astrophysics Data System (ADS)

    Kattimani, S. C.; Ray, M. C.

    2018-06-01

    Geometrically nonlinear vibration control of fiber reinforced magneto-electro-elastic or multiferroic fibrous composite plates using active constrained layer damping treatment has been investigated. The piezoelectric (BaTiO3) fibers are embedded in the magnetostrictive (CoFe2O4) matrix forming magneto-electro-elastic or multiferroic smart composite. A three-dimensional finite element model of such fiber reinforced magneto-electro-elastic plates integrated with the active constrained layer damping patches is developed. Influence of electro-elastic, magneto-elastic and electromagnetic coupled fields on the vibration has been studied. The Golla-Hughes-McTavish method in time domain is employed for modeling a constrained viscoelastic layer of the active constrained layer damping treatment. The von Kármán type nonlinear strain-displacement relations are incorporated for developing a three-dimensional finite element model. Effect of fiber volume fraction, fiber orientation and boundary conditions on the control of geometrically nonlinear vibration of the fiber reinforced magneto-electro-elastic plates is investigated. The performance of the active constrained layer damping treatment due to the variation of piezoelectric fiber orientation angle in the 1-3 piezoelectric constraining layer has also been examined.

  11. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.

  12. Molecular simulations of Hugoniots of detonation product mixtures at chemical equilibrium: Microscopic calculation of the Chapman-Jouguet state

    NASA Astrophysics Data System (ADS)

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-08-01

    In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. An NPT simulation at P_CJ and T_CJ is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: calorific capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several usual energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.

  13. Molecular simulations of Hugoniots of detonation product mixtures at chemical equilibrium: microscopic calculation of the Chapman-Jouguet state.

    PubMed

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-08-28

    In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. An NPT simulation at P(CJ) and T(CJ) is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: calorific capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several usual energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.

  14. Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.

    PubMed

    Nagai, Takashi

    2017-10-01

    The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.

  15. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks that output the parameters of a Gaussian mixture model. By combining multiple networks as 'committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 Mw 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location, depth, and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.

  16. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm exists which guarantees a global minimum for PDECCO problems. A new mapping has been developed that transforms PDECCO problems, which have only linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.

  17. Coupling Poisson rectangular pulse and multiplicative microcanonical random cascade models to generate sub-daily precipitation timeseries

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph

    2018-07-01

    To simulate the impacts of within-storm rainfall variabilities on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to limited availability of observed data such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale-dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed a pronounced better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events).
The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
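
    The disaggregation step can be sketched as a microcanonical dyadic cascade; the uniform weights, the fixed seed, and the simple wet-endpoint floor below are illustrative assumptions rather than the calibrated, scale-dependent weight functions of the paper.

```python
import random

def disaggregate_event(depth, levels=3, rng=None):
    """Microcanonical dyadic cascade: each interval's depth is split into two
    sub-intervals with multiplicative weights (w, 1 - w), so total depth is
    conserved exactly at every level."""
    rng = rng or random.Random(42)
    series = [depth]
    for _ in range(levels):
        refined = []
        for d in series:
            w = rng.random()
            refined.extend([d * w, d * (1.0 - w)])
        series = refined
    # "Constrained cascade" modification (sketch): keep the first and last
    # sub-intervals of the event wet, so that the event duration generated
    # by the coarse (Poisson rectangular pulse) model is preserved.
    floor = 1e-3 * depth
    series[0] = max(series[0], floor)
    series[-1] = max(series[-1], floor)
    return series

# Disaggregate one 8 mm event into 2**3 finer sub-intervals.
fine = disaggregate_event(8.0)
print(len(fine))  # 8
```

    The multiplicative (w, 1 - w) split is what makes the cascade "microcanonical": rainfall depth is redistributed but never created or destroyed, apart from the tiny illustrative floor applied at the event boundaries.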

  18. The Penetration of Solar Radiation Into Carbon Dioxide Ice

    NASA Astrophysics Data System (ADS)

    Chinnery, H. E.; Hagermann, A.; Kaufmann, E.; Lewis, S. R.

    2018-04-01

    Icy surfaces behave differently to rocky or regolith-covered surfaces in response to irradiation. A key factor is the ability of visible light to penetrate partially into the subsurface. This results in the solid-state greenhouse effect, as ices can be transparent or translucent to visible and shorter wavelengths, while opaque in the infrared. This can lead to significant differences in shallow subsurface temperature profiles when compared to rocky surfaces. Of particular significance for modeling the solid-state greenhouse effect is the e-folding scale, otherwise known as the absorption scale length, or penetration depth, of the ice. While there have been measurements for water ice and snow, pure and with mixtures, to date, there have been no such measurements published for carbon dioxide ice. After an extensive series of measurements we are able to constrain the e-folding scale of CO2 ice for the cumulative wavelength range 300 to 1,100 nm, which is a vital parameter in heat transfer models for the Martian surface, enabling us to better understand surface-atmosphere interactions at Mars' polar caps.

  19. Surface anisotropy of iron oxide nanoparticles and slabs from first principles: Influence of coatings and ligands as a test of the Heisenberg model

    NASA Astrophysics Data System (ADS)

    Brymora, Katarzyna; Calvayrac, Florent

    2017-07-01

    We performed ab initio computations of the magnetic properties of simple iron oxide clusters and slabs. We considered an iron oxide cluster functionalized by a molecule or glued to a gold cluster of the same size. We also considered a magnetite slab coated by cobalt oxide or a mixture of iron oxide and cobalt oxide. The changes in magnetic behavior were explored using constrained magnetic calculations. A possible value for the surface anisotropy was estimated by fitting a classical Heisenberg model to the ab initio results. The value was found to be compatible with estimations obtained by other means or inferred from experimental results. The addition of a ligand, a coating, or a metallic nanoparticle to the systems degraded the quality of the description by the Heisenberg Hamiltonian. By adjusting the anisotropies to account for the proportion of each transition-metal atom, we obtained a much better description of the magnetism of a series of hybrid cobalt and iron oxide systems.

  20. Substructure of fuzzy dark matter haloes

    NASA Astrophysics Data System (ADS)

    Du, Xiaolong; Behrens, Christoph; Niemeyer, Jens C.

    2017-02-01

    We derive the halo mass function (HMF) for fuzzy dark matter (FDM) by solving the excursion set problem explicitly with a mass-dependent barrier function, which has not been done before. We find that compared to the naive approach of the Sheth-Tormen HMF for FDM, our approach has a higher cutoff mass and the cutoff mass changes less strongly with redshifts. Using merger trees constructed with a modified version of the Lacey & Cole formalism that accounts for suppressed small-scale power and the scale-dependent growth of FDM haloes and the semi-analytic GALACTICUS code, we study the statistics of halo substructure including the effects from dynamical friction and tidal stripping. We find that if the dark matter is a mixture of cold dark matter (CDM) and FDM, there will be a suppression on the halo substructure on small scales which may be able to solve the missing satellites problem faced by the pure CDM model. The suppression becomes stronger with increasing FDM fraction or decreasing FDM mass. Thus, it may be used to constrain the FDM model.

  1. Detection and inpainting of facial wrinkles using texture orientation fields and Markov random field modeling.

    PubMed

    Batool, Nazre; Chellappa, Rama

    2014-09-01

    Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with texture orientation fields are used as image features. A bimodal Gaussian mixture model (GMM) represents the distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint the irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.
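    The bimodal-GMM-plus-EM step can be illustrated on one-dimensional features; this is a generic EM sketch on synthetic data, not the paper's implementation:

```python
import numpy as np

def em_gmm_1d(x, iters=200):
    """Minimal EM for a two-component 1-D Gaussian mixture.

    Returns mixing weights, means, variances and posterior
    responsibilities (the per-sample class probabilities used to
    label, e.g., 'normal skin' vs 'imperfection' features).
    """
    mu = np.percentile(x, [25.0, 75.0]).astype(float)   # rough init
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibilities under the current parameters
        d = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pi * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates
        n_k = r.sum(axis=0)
        pi = n_k / x.size
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    return pi, mu, var, r

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(6.0, 1.0, 500)])
pi, mu, var, resp = em_gmm_1d(x)
```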

  2. Construction of ground-state preserving sparse lattice models for predictive materials simulations

    NASA Astrophysics Data System (ADS)

    Huang, Wenxuan; Urban, Alexander; Rong, Ziqin; Ding, Zhiwei; Luo, Chuan; Ceder, Gerbrand

    2017-08-01

    First-principles based cluster expansion models are the dominant approach in ab initio thermodynamics of crystalline mixtures, enabling the prediction of phase diagrams and novel ground states. However, despite recent advances, the construction of accurate models still requires a careful and time-consuming manual parameter tuning process for ground-state preservation, since this property is not guaranteed by default. In this paper, we present a systematic and mathematically sound method to obtain cluster expansion models that are guaranteed to preserve the ground states of their reference data. The method builds on the recently introduced compressive sensing paradigm for cluster expansion and employs quadratic programming to impose constraints on the model parameters. The robustness of our methodology is illustrated for two lithium transition metal oxides with relevance for Li-ion battery cathodes, i.e., Li2xFe2(1-x)O2 and Li2xTi2(1-x)O2, for which the construction of cluster expansion models with compressive sensing alone has proven to be challenging. We demonstrate that our method not only guarantees ground-state preservation on the set of reference structures used for the model construction, but also that out-of-sample ground-state preservation up to relatively large supercell sizes is achievable through a rapidly converging iterative refinement. This method provides a general tool for building robust, compressed and constrained physical models with predictive power.
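    The idea of enforcing ground-state preservation through linear inequality constraints on the fitted coefficients can be sketched as a toy quadratic program; the correlation matrix `X`, energies `E`, and constraint set below are synthetic stand-ins, not a real cluster expansion:

```python
import numpy as np
from scipy.optimize import minimize

# Toy ground-state-preserving fit: choose effective cluster interactions w
# minimizing ||X w - E||^2 subject to linear constraints that the known
# ground state (structure 0) stays at or below every competing structure
# in the *predicted* energies.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # 8 structures x 3 synthetic features
E = rng.normal(size=8)
E[0] = E.min() - 0.5                 # structure 0 is the reference ground state

cons = [{"type": "ineq",
         "fun": lambda w, j=j: (X[j] - X[0]) @ w}   # E_pred[j] >= E_pred[0]
        for j in range(1, 8)]
res = minimize(lambda w: np.sum((X @ w - E) ** 2),
               x0=np.zeros(3), constraints=cons, method="SLSQP")
pred = X @ res.x
```

    The paper's method uses quadratic programming over a compressive-sensing objective; the SLSQP solver here is just a convenient stand-in for illustrating the inequality-constrained least-squares structure.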

  3. Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC

    ERIC Educational Resources Information Center

    Depaoli, Sarah

    2012-01-01

    Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…

  4. Communication: phase transitions, criticality, and three-phase coexistence in constrained cell models.

    PubMed

    Nayhouse, Michael; Kwon, Joseph Sang-Il; Orkoulas, G

    2012-05-28

    In simulation studies of fluid-solid transitions, the solid phase is usually modeled as a constrained system in which each particle is confined to move in a single Wigner-Seitz cell. The constrained cell model has been used in the determination of fluid-solid coexistence via thermodynamic integration and other techniques. In the present work, the phase diagram of such a constrained system of Lennard-Jones particles is determined from constant-pressure simulations. The pressure-density isotherms exhibit inflection points which are interpreted as the mechanical stability limit of the solid phase. The phase diagram of the constrained system contains a critical and a triple point. The temperature and pressure at the critical and the triple point are both higher than those of the unconstrained system due to the reduction in the entropy caused by the single occupancy constraint.

  5. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method provides a means of fitting the mixture model. Bayesian methods are widely used because their asymptotic properties yield remarkable results. In addition, the Bayesian method exhibits consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results showed a negative relationship between rubber prices and stock market prices for all selected countries.

  6. Accounting for non-independent detection when estimating abundance of organisms with a Bayesian approach

    USGS Publications Warehouse

    Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth

    2011-01-01

    Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. 
Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
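    The overdispersion that motivates the beta-binomial extension is easy to quantify: with the same mean detection probability p, grouped (correlated) detections inflate the count variance by a factor 1 + (N-1)ρ. A sketch with illustrative numbers, not manatee-survey estimates:

```python
import numpy as np
from scipy.stats import binom, betabinom

# Parameterize a beta-binomial so its mean detection probability is p and
# its intraclass correlation is rho; compare its variance with the
# independent-detection binomial of the same mean.
N, p, rho = 50, 0.4, 0.3                     # abundance, detection, correlation
a = p * (1 - rho) / rho                      # shape parameters giving
b = (1 - p) * (1 - rho) / rho                # mean p and correlation rho
var_indep = binom(N, p).var()                # N p (1-p)
var_corr = betabinom(N, a, b).var()          # N p (1-p) [1 + (N-1) rho]
```

    Fitting the plain binomial mixture to such data attributes the extra variance to abundance, which is why it overestimates N in the simulations described above.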

  7. A competitive binding model predicts the response of mammalian olfactory receptors to mixtures

    NASA Astrophysics Data System (ADS)

    Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay

    Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system we need methods to predict responses to complex mixtures from single odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions where only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
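    A minimal form of such a competitive-binding response, with a shared denominator enforcing one-molecule-at-a-time occupancy, might look like the following; the dissociation constants and efficacies are illustrative placeholders for fitted dose-response parameters:

```python
import numpy as np

def mixture_response(conc, K, efficacy):
    """Competitive-binding prediction of a receptor's mixture response.

    Only one odorant occupies the binding site at a time, so the
    occupancies compete through a shared denominator.
    """
    conc, K, efficacy = map(np.asarray, (conc, K, efficacy))
    x = conc / K
    return np.sum(efficacy * x) / (1.0 + np.sum(x))

K = np.array([1.0, 10.0])            # half-occupancy concentrations (assumed)
e = np.array([1.0, 0.2])             # per-odorant maximal responses (assumed)
r_strong_alone = mixture_response([5.0, 0.0], K, e)
r_mix = mixture_response([5.0, 50.0], K, e)   # weak agonist dilutes occupancy
```

    Adding a weakly activating odorant lowers the response to the strong one, reproducing the suppression nonlinearity mentioned in the abstract.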

  8. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    Normal mixture distribution models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distribution model. First, we present the application of the model in empirical finance, fitting it to the real data. Second, we present its application in risk analysis, evaluating VaR and CVaR with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the return distribution.
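    Given fitted mixture parameters, VaR follows by numerically inverting the mixture CDF; a sketch with illustrative parameters (not FBMKLCI estimates):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def mixture_var(alpha, w, mu, sigma):
    """Value at risk of a two-component normal mixture of returns.

    Solves F(x) = alpha for the mixture CDF; VaR is reported as the
    loss -x at the alpha quantile.
    """
    cdf = lambda x: w[0] * norm.cdf(x, mu[0], sigma[0]) + \
                    w[1] * norm.cdf(x, mu[1], sigma[1])
    x = brentq(lambda x: cdf(x) - alpha, -10.0, 10.0)
    return -x

# A calm regime plus a volatile regime captures fat-tailed returns.
var_95 = mixture_var(0.05, w=(0.8, 0.2), mu=(0.01, -0.02), sigma=(0.03, 0.10))
```

    The volatile component dominates the left tail, so the mixture 95% VaR is larger than a single normal fitted to the calm regime would suggest.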

  9. Constraints on the frequency-magnitude relation and maximum magnitudes in the UK from observed seismicity and glacio-isostatic recovery rates

    NASA Astrophysics Data System (ADS)

    Main, Ian; Irving, Duncan; Musson, Roger; Reading, Anya

    1999-05-01

    Earthquake populations have recently been shown to have many similarities with critical-point phenomena, with fractal scaling of source sizes (energy or seismic moment) corresponding to the observed Gutenberg-Richter (G-R) frequency-magnitude law holding at low magnitudes. At high magnitudes, the form of the distribution depends on the seismic moment release rate Ṁ and the maximum magnitude m_max. The G-R law requires a sharp truncation at an absolute maximum magnitude for finite Ṁ. In contrast, the gamma distribution has an exponential tail which allows a soft or `credible' maximum to be determined by negligible contribution to the total seismic moment release. Here we apply both distributions to seismic hazard in the mainland UK and its immediate continental shelf, constrained by a mixture of instrumental, historical and neotectonic data. Tectonic moment release rates for the seismogenic part of the lithosphere are calculated from a flexural-plate model for glacio-isostatic recovery, constrained by vertical deformation rates from tide-gauge and geomorphological data. Earthquake focal mechanisms in the UK show near-vertical strike-slip faulting, with implied directions of maximum compressive stress approximately in the NNW-SSE direction, consistent with the tectonic model. Maximum magnitudes are found to be in the range 6.3-7.5 for the G-R law, or 7.0-8.2 m_L for the gamma distribution, which compare with a maximum observed in the time period of interest of 6.1 m_L. The upper bounds are conservative estimates, based on 100 per cent seismic release of the observed vertical neotectonic deformation. Glacio-isostatic recovery is predominantly an elastic rather than a seismic process, so the true value of m_max is likely to be nearer the lower end of the quoted range.
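    The b-value of the G-R law, log10 N(>=m) = a - b·m, is commonly estimated by maximum likelihood (Aki's estimator); a sketch on a synthetic catalogue, not the UK data:

```python
import numpy as np

def aki_b_value(mags, m_min, dm=0.1):
    """Maximum-likelihood b-value of the Gutenberg-Richter law via
    Aki's estimator, b = log10(e) / (mean(m) - (m_min - dm/2)),
    where dm is the magnitude binning width (0 for continuous data)."""
    m = np.asarray(mags)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))

# Synthetic catalogue: exceedance magnitudes above m_min = 3 are
# exponential with scale log10(e)/b under the G-R law.
rng = np.random.default_rng(0)
b_true = 1.0
mags = 3.0 + rng.exponential(np.log10(np.e) / b_true, size=20000)
b_hat = aki_b_value(mags, m_min=3.0, dm=0.0)
```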

  10. Astrophysical Model Selection in Gravitational Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  11. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of chemical kinetics of detonation combustion of (i) one hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, when the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can also be used for calculating the thermodynamic parameters of a mixture in a state of chemical equilibrium.

  12. Dynamics and universal scaling law in geometrically-controlled sessile drop evaporation

    PubMed Central

    Sáenz, P. J.; Wray, A. W.; Che, Z.; Matar, O. K.; Valluri, P.; Kim, J.; Sefiane, K.

    2017-01-01

    The evaporation of a liquid drop on a solid substrate is a remarkably common phenomenon. Yet, the complexity of the underlying mechanisms has constrained previous studies to spherically symmetric configurations. Here we investigate well-defined, non-spherical evaporating drops of pure liquids and binary mixtures. We deduce a universal scaling law for the evaporation rate valid for any shape and demonstrate that more curved regions lead to preferential localized depositions in particle-laden drops. Furthermore, geometry induces well-defined flow structures within the drop that change according to the driving mechanism. In the case of binary mixtures, geometry dictates the spatial segregation of the more volatile component as it is depleted. Our results suggest that the drop geometry can be exploited to prescribe the particle deposition and evaporative dynamics of pure drops and the mixing characteristics of multicomponent drops, which may be of interest to a wide range of industrial and scientific applications. PMID:28294114

  13. Resolving Mixed Algal Species in Hyperspectral Images

    PubMed Central

    Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.

    2014-01-01

    We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two-species) algal cultures containing known algae types and volumetric combinations to characterize the system's performance. The spectral response to volumetric changes in single algal cultures and in combinations with known mixing ratios was tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on the abundances that produced the lowest root mean square (RMS) error. Percent prediction error was computed as the difference between the actual percent volumetric content and the abundances at minimum RMS error. The best prediction errors were computed as 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were found to be 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, Beer-Lambert's law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for optical property measurements. PMID:24451451
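    Constrained linear unmixing of a mixed-pixel spectrum can be sketched as non-negative least squares with a softly enforced sum-to-one constraint; the endmember spectra below are hypothetical, not the measured algal signatures:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(endmembers, pixel, weight=1e3):
    """Fully constrained linear unmixing (non-negative, sum-to-one).

    The sum-to-one constraint is imposed softly by appending a heavily
    weighted row of ones to the non-negative least-squares system.
    """
    E = np.vstack([endmembers.T, weight * np.ones(endmembers.shape[0])])
    y = np.append(pixel, weight)
    abund, _ = nnls(E, y)
    return abund

# Two hypothetical algal endmember spectra (rows) over 4 bands.
endmembers = np.array([[0.9, 0.7, 0.2, 0.1],
                       [0.1, 0.3, 0.8, 0.9]])
mixed = 0.6 * endmembers[0] + 0.4 * endmembers[1]   # known 60/40 mixture
abund_hat = unmix(endmembers, mixed)
```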

  14. Dynamics and universal scaling law in geometrically-controlled sessile drop evaporation.

    PubMed

    Sáenz, P J; Wray, A W; Che, Z; Matar, O K; Valluri, P; Kim, J; Sefiane, K

    2017-03-15

    The evaporation of a liquid drop on a solid substrate is a remarkably common phenomenon. Yet, the complexity of the underlying mechanisms has constrained previous studies to spherically symmetric configurations. Here we investigate well-defined, non-spherical evaporating drops of pure liquids and binary mixtures. We deduce a universal scaling law for the evaporation rate valid for any shape and demonstrate that more curved regions lead to preferential localized depositions in particle-laden drops. Furthermore, geometry induces well-defined flow structures within the drop that change according to the driving mechanism. In the case of binary mixtures, geometry dictates the spatial segregation of the more volatile component as it is depleted. Our results suggest that the drop geometry can be exploited to prescribe the particle deposition and evaporative dynamics of pure drops and the mixing characteristics of multicomponent drops, which may be of interest to a wide range of industrial and scientific applications.

  15. Describing litho-constrained layout by a high-resolution model filter

    NASA Astrophysics Data System (ADS)

    Tsai, Min-Chun

    2008-05-01

    A novel high-resolution model (HRM) filtering technique was proposed to describe litho-constrained layouts. Litho-constrained layouts are layouts that are difficult to pattern or are highly sensitive to process fluctuations under current lithography technologies. HRM applies a short-wavelength (or high-NA) model simulation directly on the pre-OPC, original design layout to filter out low spatial-frequency regions and retain the high spatial-frequency components which are litho-constrained. Since neither OPC nor mask-synthesis steps are involved, this new technique is highly efficient in run time and can be used at the design stage to detect and fix litho-constrained patterns. This method successfully captured all the hot-spots with less than 15% overshoot on a realistic 80 mm2 full-chip M1 layout at the 65nm technology node. A step-by-step derivation of this HRM technique is presented in this paper.

  16. A Climatology of Global Aerosol Mixtures to Support Sentinel-5P and Earthcare Mission Applications

    NASA Astrophysics Data System (ADS)

    Taylor, M.; Kazadzis, S.; Amaridis, V.; Kahn, R. A.

    2015-11-01

    Since constraining aerosol type with satellite remote sensing continues to be a challenge, we present a newly derived global climatology of aerosol mixtures to support atmospheric composition studies that are planned for Sentinel-5P and EarthCARE. The global climatology is obtained via application of iterative cluster analysis to gridded global decadal and seasonal mean values of the aerosol optical depth (AOD) of sulfate, biomass burning, mineral dust and marine aerosol as a proportion of the total AOD at 500nm output from the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model. For both the decadal and seasonal means, the number of aerosol mixtures (clusters) identified is ≈10. Analysis of the percentage contribution of the component aerosol types to each mixture allowed development of a straightforward naming convention and taxonomy, and assignment of primary colours for the generation of true colour-mixing and easy-to-interpret maps of the spatial distribution of clusters across the global grid. To further help characterize the mixtures, aerosol robotic network (AERONET) Level 2.0 Version 2 inversion products were extracted from each cluster's spatial domain and used to estimate climatological values of key optical and microphysical parameters. The aerosol type climatology represents current knowledge that would be enhanced, possibly corrected, and refined by high temporal and spectral resolution, cloud-free observations produced by Sentinel-5P and EarthCARE instruments. The global decadal mean and seasonal gridded partitions comprise a preliminary reference framework and global climatology that can help inform the choice of components and mixtures in aerosol retrieval algorithms used by instruments such as TROPOMI and ATLID, and to test retrieval results.

  17. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of a substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of complex permittivity of mixtures and discuss the use of these models for determining effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through the original measurements), and for a tungsten/Teflon mixture (from literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder in the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
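    Two of the classical rules compared in such studies, Maxwell Garnett and Lichtenecker, are simple closed forms; the permittivity values below are illustrative, not the measured powder data:

```python
import numpy as np

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett effective permittivity for spherical inclusions
    of permittivity eps_i at volume fraction f in a matrix eps_m."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

def lichtenecker(eps_m, eps_i, f):
    """Lichtenecker logarithmic mixing rule; complex-valued inputs work."""
    return np.exp((1 - f) * np.log(eps_m) + f * np.log(eps_i))

# Illustrative lossy inclusion in a low-loss matrix at 20% volume fraction.
eps_mg = maxwell_garnett(2.1 + 0.001j, 15.0 + 3.0j, 0.2)
eps_li = lichtenecker(2.1 + 0.001j, 15.0 + 3.0j, 0.2)
```

    Both rules reduce to the pure-phase permittivities at f = 0 and f = 1, but they diverge at intermediate fractions, which is the spread the abstract's ~10% error comparison probes.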

  18. Modeling and analysis of personal exposures to VOC mixtures using copulas

    PubMed Central

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2014-01-01

    Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction, which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often, a single compound dominated the mixture; however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration.
Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10^-3 for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
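    The copula construction itself (dependent uniforms from correlated normals, then marginal inverse CDFs) can be sketched as follows; the lognormal marginals and the correlation value are illustrative, not fitted RIOPA values:

```python
import numpy as np
from scipy.stats import norm, lognorm

# Gaussian-copula sketch for two dependent VOC concentrations:
# 1) sample correlated standard normals,
# 2) push through the normal CDF to get dependent uniforms (the copula),
# 3) push through lognormal inverse CDFs (the marginals).
rng = np.random.default_rng(0)
rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
u = norm.cdf(z)                                  # copula sample in [0, 1]^2
voc1 = lognorm.ppf(u[:, 0], s=1.0, scale=2.0)    # hypothetical marginal 1
voc2 = lognorm.ppf(u[:, 1], s=0.5, scale=5.0)    # hypothetical marginal 2
r = np.corrcoef(voc1, voc2)[0, 1]
```

    Swapping step 1-2 for a t, Gumbel, Clayton, or Frank copula changes the tail dependence while leaving the marginals untouched, which is exactly the flexibility the study exploits.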

  19. Modeling Explosive Eruptions at Kīlauea, Hawai'i

    NASA Astrophysics Data System (ADS)

    Gonnermann, H. M.; Ferguson, D. J.; Blaser, A. P.; Houghton, B. F.; Plank, T. A.; Hauri, E. H.; Swanson, D. A.

    2014-12-01

    We have modeled eruptive magma ascent during two explosive eruptions of Kīlauea volcano, Hawai'i: the Hawaiian-style Kīlauea Iki eruption of 1959 and the subplinian Keanakāko'i eruption of 1650 CE. We have modeled combined magma ascent in the volcanic conduit and exsolution of H2O and CO2 from the erupting magma. To better assess the relative roles of conduit processes and the magma chamber, we also coupled conduit flow and the magma chamber through mass balance and pressure. We predict magma discharge rates, superficial gas velocities, H2O and CO2 concentrations of the melt, magma chamber pressure, surface deformation, and the height of the volcanic jet. The models are in part constrained by H2O and CO2 measured in olivine-hosted melt inclusions and by decompression rates recorded in melt embayment diffusion profiles. We present a parametric analysis indicating that the pressure within the chamber that fed the subplinian Keanakāko'i eruption was significantly higher than lithostatic pressure. In contrast, the chamber pressure for the Hawaiian Kīlauea Iki eruption was close to lithostatic. In both cases the superficial gas velocity, which affects the geometrical distribution of gas-liquid mixtures during upward flow in conduits, may have exceeded values at which bubble coalescence did not affect the flow.

  20. OCO-2 Column Carbon Dioxide and Biometric Data Jointly Constrain Parameterization and Projection of a Global Land Model

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III

    2015-12-01

    Uncertainty in the predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of model parameter estimation is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of the dry air mole fraction XCO2 and solar induced fluorescence (SIF) to independently constrain estimation of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters for the carbon cycle (e.g., maximum carboxylation rate, turnover times and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration). The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how sampling frequency and record length could affect model constraint and prediction.
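    The MCMC calibration step can be illustrated on a toy one-pool carbon model with a single turnover-time parameter; everything here (model, prior, pseudo-observations) is a synthetic stand-in for the CLM4.5 emulator:

```python
import numpy as np

# Toy Metropolis sampler calibrating the turnover time tau of a one-pool
# carbon model C(t) = exp(-t / tau) against noisy pseudo-observations.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 30)
tau_true, sigma = 8.0, 0.02
obs = np.exp(-t / tau_true) + rng.normal(0.0, sigma, t.size)

def log_post(tau):
    if not 1.0 < tau < 50.0:            # flat prior on (1, 50)
        return -np.inf
    resid = obs - np.exp(-t / tau)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

tau, lp = 15.0, log_post(15.0)          # deliberately poor starting value
chain = []
for _ in range(4000):
    prop = tau + rng.normal(0.0, 0.5)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        tau, lp = prop, lp_prop
    chain.append(tau)
tau_hat = np.mean(chain[1000:])         # posterior mean after burn-in
```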

  1. Constrained and Unconstrained Partial Adjacent Category Logit Models for Ordinal Response Variables

    ERIC Educational Resources Information Center

    Fullerton, Andrew S.; Xu, Jun

    2018-01-01

    Adjacent category logit models are ordered regression models that focus on comparisons of adjacent categories. These models are particularly useful for ordinal response variables with categories that are of substantive interest. In this article, we consider unconstrained and constrained versions of the partial adjacent category logit model, which…

  2. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    ERIC Educational Resources Information Center

    Hsu, Jui-Chen

    2011-01-01

Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to proceed when unobserved population heterogeneity exists in the endogenous latent variables of nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  3. Scale Mixture Models with Applications to Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Qin, Zhaohui S.; Damien, Paul; Walker, Stephen

    2003-11-01

Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
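The construction behind such models can be illustrated with the classical identity that a standard normal is itself a scale mixture of uniforms, with Gamma(shape 3/2, scale 2) mixing; the sketch below checks this empirically (the decomposition is a well-known result, not taken from the abstract above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Standard normal as a scale mixture of uniforms:
#   V ~ Gamma(shape=3/2, scale=2),  X | V ~ Uniform(-sqrt(V), +sqrt(V))
v = rng.gamma(shape=1.5, scale=2.0, size=n)
x = rng.uniform(-np.sqrt(v), np.sqrt(v))

print(round(x.mean(), 2), round(x.var(), 2))  # mean ~ 0, variance ~ 1
kurt = np.mean(x**4) / x.var() ** 2
print(round(kurt, 1))                         # kurtosis ~ 3, as for a normal
```

Replacing the Gamma mixing law with other distributions yields the heteroscedastic and skewed variants the abstract refers to.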

  4. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    PubMed

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models for various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V(E)). In the present study, we use a set of mixture descriptors that we designed earlier to specifically account for intermolecular interactions between the components of a mixture, and which we applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen-bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    NASA Astrophysics Data System (ADS)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

Bayesian mixture modeling requires identifying the most appropriate number of mixture components so that the resulting mixture model fits the data in a data-driven way. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC) and has been used by several researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm must be developed according to the case under observation. The purpose of this study is to assess the performance of an RJMCMC algorithm developed to identify an uncertain number of mixture components in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for Indonesian microarray data in which the number of components is not known with certainty.
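RJMCMC itself is involved; as a simpler, hedged illustration of the same goal — choosing the number of mixture components in a data-driven way — the sketch below fits 1-D Gaussian mixtures by plain EM and scores each candidate k by BIC. This is an alternative technique, not the paper's RJMCMC algorithm, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic 1-D data from a two-component normal mixture
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(6.0, 1.0, 300)])

def loglik(x, mu, var, w):
    d = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var)) + np.log(w)
    m = d.max(axis=1, keepdims=True)
    return float((m[:, 0] + np.log(np.exp(d - m).sum(axis=1))).sum())

def em_gmm(x, k, iters=300):
    """Plain EM for a k-component 1-D Gaussian mixture; returns the log-likelihood."""
    mu = np.quantile(x, np.linspace(0.05, 0.95, k))   # spread initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        d = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var)) + np.log(w)
        m = d.max(axis=1, keepdims=True)
        p = np.exp(d - m)
        r = p / p.sum(axis=1, keepdims=True)          # E-step: responsibilities
        nk = r.sum(axis=0)                            # M-step: weighted updates
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-3
    return loglik(x, mu, var, w)

def bic(x, k):
    n_par = 3 * k - 1        # k means, k variances, k-1 free weights
    return n_par * np.log(x.size) - 2 * em_gmm(x, k)

best_k = min(range(1, 5), key=lambda k: bic(x, k))
print(best_k)                # the two-component model should win
```

Unlike RJMCMC, which jumps between dimensions within one chain, this approach fits each k separately and penalizes complexity after the fact.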

  6. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

Antibiotics and pesticides may co-occur as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half-effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques gave a coefficient of determination of 0.9366 and a root mean square error of 0.1345 for the QSAR model, which predicted the toxicities of the 45 mixtures spanning additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
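The reported validation statistics — the coefficient of determination and the root mean square error — are computed as below. This is a generic sketch on invented data, not the paper's GA-selected descriptors:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical stand-in: 45 "mixtures" x 3 theoretical descriptors,
# with a linear response plus noise (all values invented for illustration)
X = rng.normal(size=(45, 3))
beta = np.array([0.8, -0.5, 0.3])
y = X @ beta + rng.normal(0, 0.1, 45)   # e.g. log(1/EC50)-type response

# Ordinary least squares fit of the 3-descriptor model (plus intercept)
D = np.c_[X, np.ones(45)]
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
pred = D @ coef

rmse = np.sqrt(np.mean((y - pred) ** 2))
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3), round(rmse, 3))
```

In the paper a genetic algorithm selects which three descriptors enter `X`; the scoring of each candidate model is exactly this R²/RMSE computation.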

  7. Optimal vibration control of a rotating plate with self-sensing active constrained layer damping

    NASA Astrophysics Data System (ADS)

    Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng

    2012-04-01

This paper proposes a finite element model for an optimally controlled constrained layer damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties in both the time and frequency domains. Constrained layer damping with viscoelastic material can effectively reduce vibration in rotating structures. However, most existing research models use a complex modulus approach to model the viscoelastic material, and an additional iterative procedure, available only in the frequency domain, has to be used to include the material's frequency dependency. It is therefore meaningful to model the viscoelastic damping layer in the rotating part using anelastic displacement fields (ADF) in order to include the frequency dependency in both the time and frequency domains. Also, unlike previous models, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material to serve as both self-sensing sensor and actuator under a linear quadratic regulator (LQR) controller. After comparison with verified data, this newly proposed finite element model is validated and can be used for future research.
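The LQR controller used for the constraining layer can be sketched in its simplest form: a hedged, minimal discrete-time example on an invented single-mode vibration model (not the paper's finite element model), solving the discrete algebraic Riccati equation by fixed-point iteration.

```python
import numpy as np

# Hypothetical single-mode vibration model (Euler-discretised oscillator);
# x = [displacement, velocity], u = control force per unit mass
dt, wn, zeta = 0.01, 10.0, 0.02
A = np.array([[1.0, dt], [-wn**2 * dt, 1.0 - 2.0 * zeta * wn * dt]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)            # state weighting
R = np.array([[0.1]])    # control weighting

# Solve the discrete algebraic Riccati equation by fixed-point iteration
P = np.eye(2)
for _ in range(5000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # LQR gain
    P_new = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_new - P)) < 1e-12:
        P = P_new
        break
    P = P_new

# The closed-loop eigenvalues of A - B K should lie inside the unit circle
eigs = np.linalg.eigvals(A - B @ K)
print(round(float(np.abs(eigs).max()), 3))
```

In the paper the state vector comes from the finite element model and the sensing path is the piezoelectric layer itself; the Riccati solve and gain computation are the same in structure.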

  8. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  9. Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution

    NASA Astrophysics Data System (ADS)

    Baldacchino, Tara; Worden, Keith; Rowson, Jennifer

    2017-02-01

    A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
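The robustness argument rests on the Student-t's heavier tails. A minimal numeric illustration using the standard density formulas (the degrees of freedom and the outlier value are chosen for illustration):

```python
import math

def log_norm_pdf(x, mu=0.0, s=1.0):
    return -0.5 * math.log(2 * math.pi * s * s) - 0.5 * ((x - mu) / s) ** 2

def log_t_pdf(x, nu=3.0, mu=0.0, s=1.0):
    z = (x - mu) / s
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi) - math.log(s)
            - (nu + 1) / 2 * math.log1p(z * z / nu))

# An extreme outlier is catastrophically improbable under the Gaussian,
# but only mildly surprising under the heavy-tailed Student-t:
print(round(log_norm_pdf(8.0), 1))  # about -32.9
print(round(log_t_pdf(8.0), 1))     # about -7.2 (nu = 3)
```

In a mixture-of-experts fit, this gap is why a single outlier can drag a Gaussian expert's parameters while a t-distributed expert effectively downweights it.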

  10. Development and validation of a metal mixture bioavailability model (MMBM) to predict chronic toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia.

    PubMed

    Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C

    2017-01-01

Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, in the present study we developed a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7 d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; dissolved organic carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e. realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE_Mix = 16; RMSE_Zn-only = 18; RMSE_Ni-only = 17; RMSE_Pb-only = 23).
Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
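The two reference models compared above can be sketched numerically: independent action (IA) combines single-metal effects multiplicatively, while concentration addition (CA) sums toxic units. The concentration-response function and all EC50/concentration values below are invented for illustration, not the study's fitted models:

```python
import numpy as np

# Single-metal concentration-response: logistic in log-concentration,
# E(c) = 1 / (1 + (EC50 / c) ** slope), a fractional effect in [0, 1]
def effect(c, ec50, slope=2.0):
    return 1.0 / (1.0 + (ec50 / c) ** slope)

ec50 = {"Ni": 10.0, "Zn": 50.0, "Pb": 5.0}   # hypothetical EC50s
conc = {"Ni": 5.0, "Zn": 25.0, "Pb": 2.5}    # a hypothetical tested mixture

# Independent action (IA): non-response probabilities multiply
e = [effect(conc[m], ec50[m]) for m in ec50]
ia = 1.0 - np.prod([1.0 - ei for ei in e])

# Concentration addition (CA): toxic units sum; TU = 1 marks the EC50 level
tu = sum(conc[m] / ec50[m] for m in ec50)
print(round(float(ia), 3), round(tu, 2))
```

In the MMBM, each single-metal effect is itself computed from a bioavailability (biotic ligand) model before being combined through IA as above.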

  11. Rasch Mixture Models for DIF Detection

    PubMed Central

    Strobl, Carolin; Zeileis, Achim

    2014-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819

  12. Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data

    ERIC Educational Resources Information Center

    Kim, Su-Young; Kim, Jee-Seon

    2012-01-01

    This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…

  13. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  14. A Multidimensional Item Response Model: Constrained Latent Class Analysis Using the Gibbs Sampler and Posterior Predictive Checks.

    ERIC Educational Resources Information Center

    Hoijtink, Herbert; Molenaar, Ivo W.

    1997-01-01

    This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)

  15. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    NASA Astrophysics Data System (ADS)

    Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.

    2017-06-01

In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ~7 m^3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ~4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10^-11 to 4.0 × 10^-11 cm^3 molec^-1 s^-1. 
The average enthalpy of vaporization of secondary organic aerosol (SOA) surrogates was determined to be between 35 000 and 55 000 J mol^-1, which implies a yield increase of 0.03-0.06 % K^-1 with decreasing temperature. The improved VBS scheme is suitable for implementation into chemical transport models to predict the burden and oxidation state of primary and secondary biomass-burning aerosols.
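The equilibrium partitioning at the heart of any volatility basis set can be sketched as follows: each bin's particle-phase fraction is xi_i = (1 + C*_i / C_OA)^-1, with the organic aerosol mass C_OA solved self-consistently. The bin values below are invented for illustration, not the paper's fitted parameters:

```python
import numpy as np

# Volatility basis set bins: saturation concentrations and hypothetical
# total (gas + particle) organic mass per bin, both in ug/m^3
c_star = np.array([0.1, 1.0, 10.0, 100.0])
c_tot = np.array([0.5, 1.0, 2.0, 4.0])

# Equilibrium partitioning: xi_i = (1 + C*_i / C_OA)^-1, where
# C_OA = sum_i c_tot_i * xi_i must hold self-consistently; iterate to solve.
c_oa = 1.0
for _ in range(200):
    xi = 1.0 / (1.0 + c_star / c_oa)
    c_oa_new = float(np.sum(c_tot * xi))
    if abs(c_oa_new - c_oa) < 1e-10:
        c_oa = c_oa_new
        break
    c_oa = c_oa_new

print(round(c_oa, 2), np.round(xi, 2))  # total OA mass and per-bin fractions
```

Temperature enters through the enthalpy of vaporization, which shifts each C* via Clausius-Clapeyron; that is the dependency the chamber experiments above constrain.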

  16. Local Solutions in the Estimation of Growth Mixture Models

    ERIC Educational Resources Information Center

    Hipp, John R.; Bauer, Daniel J.

    2006-01-01

    Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…

  17. Minimally Informative Prior Distributions for PSA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana L. Kelly; Robert W. Youngblood; Kurt G. Vedros

    2010-06-01

A salient feature of Bayesian inference is its ability to incorporate information from a variety of sources into the inference model, via the prior distribution (hereafter simply "the prior"). However, over-reliance on old information can lead to priors that dominate new data. Some analysts seek to avoid this by trying to work with a minimally informative prior distribution. Another reason for choosing a minimally informative prior is to avoid the often-voiced criticism of subjectivity in the choice of prior. Minimally informative priors fall into two broad classes: 1) so-called noninformative priors, which attempt to be completely objective, in that the posterior distribution is determined as completely as possible by the observed data, the most well known example in this class being the Jeffreys prior, and 2) priors that are diffuse over the region where the likelihood function is nonnegligible, but that incorporate some information about the parameters being estimated, such as a mean value. In this paper, we compare four approaches in the second class, with respect to their practical implications for Bayesian inference in Probabilistic Safety Assessment (PSA). The most commonly used such prior, the so-called constrained noninformative prior, is a special case of the maximum entropy prior. This is formulated as a conjugate distribution for the most commonly encountered aleatory models in PSA, and is correspondingly mathematically convenient; however, it has a relatively light tail and this can cause the posterior mean to be overly influenced by the prior in updates with sparse data. A more informative prior that is capable, in principle, of dealing more effectively with sparse data is a mixture of conjugate priors. A particular diffuse nonconjugate prior, the logistic-normal, is shown to behave similarly for some purposes. Finally, we review the so-called robust prior. Rather than relying on the mathematical abstraction of entropy, as does the constrained noninformative prior, the robust prior places a heavy-tailed Cauchy prior on the canonical parameter of the aleatory model.

  18. Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which accounts for two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market using stochastic optimization. Historical data on day-ahead hourly electric power consumption are used to produce forecasts with forecasting error, which is represented by a chance constraint and converted into a deterministic form via a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the method.
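Converting the chance constraint into a deterministic form via a GMM amounts to evaluating a GMM quantile: P(load + error <= capacity) >= p becomes a fixed margin equal to the p-quantile of the error mixture. A hedged sketch with an invented two-component forecast-error mixture:

```python
import math

# Hypothetical forecast-error model: a 2-component Gaussian mixture
weights = [0.7, 0.3]
means = [0.0, 2.0]   # e.g. MW
sds = [0.5, 1.0]

def gmm_cdf(x):
    return sum(w * 0.5 * (1 + math.erf((x - m) / (s * math.sqrt(2))))
               for w, m, s in zip(weights, means, sds))

def gmm_quantile(p, lo=-20.0, hi=20.0):
    # Bisection on the CDF: the x with P(error <= x) = p is the
    # deterministic reserve margin implied by the chance constraint
    for _ in range(100):
        mid = (lo + hi) / 2
        if gmm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

margin = gmm_quantile(0.95)
print(round(margin, 2))   # reserve margin for a 95% chance constraint
```

With this margin in hand, the first-stage purchase problem becomes an ordinary deterministic optimization.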

  19. Coupling fluid dynamics and host-rock deformation associated with magma intrusion in the crust: Insights from analogue experiments

    NASA Astrophysics Data System (ADS)

    Kavanagh, J. L.; Dennis, D. J.

    2014-12-01

Models of magma ascent in the crust tend to consider either the dynamics of fluid flow within intrusions or the associated host-rock deformation. However, these processes are coupled in nature, and so both need to be taken into account to develop a more complete understanding of magma ascent dynamics in the crust. We present a series of gelatine analogue experiments that use both Particle Image Velocimetry (PIV) and Digital Image Correlation (DIC) techniques to characterise the dynamics of fluid flow within intrusions and to quantify the associated deformation of the intruded media. Experiments are prepared by filling a 40x40x30 cm^3 clear-Perspex tank with a low-concentration gelatine mixture (2-5 wt%) scaled to be of comparable stiffness to crustal strata. Fluorescent seeding particles are added to the gelatine mixture during its preparation and to the magma analogue prior to injection. Two Dantec CCD cameras are positioned outside the tank, and a vertical high-power laser sheet positioned along the centre line is triggered to illuminate the seeding particles with short intense pulses. Dyed water (the magma analogue) injected into the solid gelatine from below causes a vertically propagating penny-shaped crack (dike) to form. Incremental and cumulative displacement vectors are calculated by cross-correlation between successive images at a defined time interval. Spatial derivatives map the fluid flow within the intrusion and the associated strain and stress evolution of the host, both during dike propagation and on to eruption. As the gelatine deforms elastically at the experimental conditions, strain calculations correlate with stress. Models that couple fluid dynamics and host deformation are an important step towards improving our understanding of the dynamics of magma transport through the crust and help constrain the tendency for eruption.

  20. Empirical models of Jupiter's interior from Juno data. Moment of inertia and tidal Love number k2

    NASA Astrophysics Data System (ADS)

    Ni, Dongdong

    2018-05-01

    Context. The Juno spacecraft has significantly improved the accuracy of gravitational harmonic coefficients J4, J6 and J8 during its first two perijoves. However, there are still differences in the interior model predictions of core mass and envelope metallicity because of the uncertainties in the hydrogen-helium equations of state. New theoretical approaches or observational data are hence required in order to further constrain the interior models of Jupiter. A well constrained interior model of Jupiter is helpful for understanding not only the dynamic flows in the interior, but also the formation history of giant planets. Aims: We present the radial density profiles of Jupiter fitted to the Juno gravity field observations. Also, we aim to investigate our ability to constrain the core properties of Jupiter using its moment of inertia and tidal Love number k2 which could be accessible by the Juno spacecraft. Methods: In this work, the radial density profile was constrained by the Juno gravity field data within the empirical two-layer model in which the equations of state are not needed as an input model parameter. Different two-layer models are constructed in terms of core properties. The dependence of the calculated moment of inertia and tidal Love number k2 on the core properties was investigated in order to discern their abilities to further constrain the internal structure of Jupiter. Results: The calculated normalized moment of inertia (NMOI) ranges from 0.2749 to 0.2762, in reasonable agreement with the other predictions. There is a good correlation between the NMOI value and the core properties including masses and radii. Therefore, measurements of NMOI by Juno can be used to constrain both the core mass and size of Jupiter's two-layer interior models. For the tidal Love number k2, the degeneracy of k2 is found and analyzed within the two-layer interior model. 
In spite of this, measurements of k2 can still be used to further constrain the core mass and size of Jupiter's two-layer interior models.

  1. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.

    PubMed

    Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten

    2017-10-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
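For intuition, the vMF density on the unit sphere in R^3 has the closed-form normalizer kappa / (4 pi sinh kappa); the sketch below verifies that it integrates to one over the sphere (a textbook identity, not code from the paper):

```python
import math

# von Mises-Fisher density on the unit sphere in R^3:
#   f(x; mu, kappa) = kappa / (4 pi sinh(kappa)) * exp(kappa * mu . x)
# By symmetry, f depends only on the angle between x and the mean direction mu.
def vmf_pdf_3d(cos_angle, kappa):
    return kappa / (4 * math.pi * math.sinh(kappa)) * math.exp(kappa * cos_angle)

# Sanity check: integrate over the sphere in spherical coordinates,
# summing rings of area 2*pi*sin(theta)*dtheta at polar angle theta from mu
kappa, n = 5.0, 2000
dtheta = math.pi / n
total = 0.0
for i in range(n):
    theta = (i + 0.5) * dtheta
    total += vmf_pdf_3d(math.cos(theta), kappa) * 2 * math.pi * math.sin(theta) * dtheta
print(round(total, 4))   # should be ~1.0
```

In higher dimensions (as for fMRI time series on a hypersphere), the normalizer involves modified Bessel functions, which is part of the computational challenge the abstract mentions.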

  2. Cluster kinetics model for mixtures of glassformers

    NASA Astrophysics Data System (ADS)

    Brenskelle, Lisa A.; McCoy, Benjamin J.

    2007-10-01

    For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.

  3. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated on simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.

  4. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing

    PubMed Central

    Leong, Siow Hoo

    2017-01-01

This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated on simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634

  5. A Multiple Group Measurement Model of Children's Reports of Parental Socioeconomic Status. Discussion Papers No. 531-78.

    ERIC Educational Resources Information Center

    Mare, Robert D.; Mason, William M.

    An important class of applications of measurement error or constrained factor analytic models consists of comparing models for several populations. In such cases, it is appropriate to make explicit statistical tests of model similarity across groups and to constrain some parameters of the models to be equal across groups using a priori substantive…

  6. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
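
    The contrast drawn in this record is between modeling a differential effect with an observed moderator (an interaction term) versus letting latent classes carry the slope differences (a regression mixture). A toy simulation, assuming the class indicator is observable so each class-specific slope can be recovered by ordinary least squares (a regression mixture would instead estimate membership from the data):

```python
import random

def ols(xs, ys):
    """Simple least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

random.seed(1)
# Two latent classes with different x -> y slopes (the "differential effect").
xs, ys, cls = [], [], []
for _ in range(500):
    c = random.random() < 0.5              # class membership
    x = random.gauss(0, 1)
    y = (3.0 if c else 1.0) * x + random.gauss(0, 0.3)
    xs.append(x); ys.append(y); cls.append(c)

# With the class observed, per-class regressions (equivalent to a full
# interaction model) recover both slopes.
slope0, _ = ols([x for x, c in zip(xs, cls) if not c],
                [y for y, c in zip(ys, cls) if not c])
slope1, _ = ols([x for x, c in zip(xs, cls) if c],
                [y for y, c in zip(ys, cls) if c])
```

    When `cls` is unobserved, the same data would be handed to a two-class regression mixture, which is exactly where the identifiability issues discussed in the abstract arise.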

  7. Order-Constrained Bayes Inference for Dichotomous Models of Unidimensional Nonparametric IRT

    ERIC Educational Resources Information Center

    Karabatsos, George; Sheu, Ching-Fan

    2004-01-01

    This study introduces an order-constrained Bayes inference framework useful for analyzing data containing dichotomous scored item responses, under the assumptions of either the monotone homogeneity model or the double monotonicity model of nonparametric item response theory (NIRT). The framework involves the implementation of Gibbs sampling to…

  8. Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  9. The Potential of Growth Mixture Modelling

    ERIC Educational Resources Information Center

    Muthen, Bengt

    2006-01-01

    The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…

  10. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
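
    The equivalence stated in this record can be checked numerically: for a two-point mixing distribution over Poisson means, the truncated mixture equals a mixture of truncated Poisson densities once the mixing weight is re-scaled by each component's non-zero probability. A sketch of that check (parameter values are arbitrary):

```python
import math

def pois(k, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam1, lam2, w = 1.3, 4.0, 0.35

def trunc_mixture(k):
    """Truncated mixture: mix first, then remove the zero class."""
    num = w * pois(k, lam1) + (1 - w) * pois(k, lam2)
    den = 1 - (w * pois(0, lam1) + (1 - w) * pois(0, lam2))
    return num / den

# Re-weighted mixing distribution implied by the equivalence result:
# each weight is scaled by the component's probability of a non-zero count.
q1, q2 = 1 - pois(0, lam1), 1 - pois(0, lam2)
v = w * q1 / (w * q1 + (1 - w) * q2)

def mixture_of_trunc(k):
    """Mixture of zero-truncated Poisson densities with weight v."""
    return v * pois(k, lam1) / q1 + (1 - v) * pois(k, lam2) / q2

diff = max(abs(trunc_mixture(k) - mixture_of_trunc(k)) for k in range(1, 30))
```

    The two densities agree to floating-point precision, which is the practical content of the equivalence: one can fit in the theoretically convenient class and map back.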

  11. Development of PBPK Models for Gasoline in Adult and ...

    EPA Pesticide Factsheets

    Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
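
    The mixture PBPK model above combines hydrocarbon sub-models "assuming competitive metabolic inhibition in the liver". A generic textbook form of that assumption (not the published model's calibrated equations; substrate names and constants below are hypothetical) is competitive-inhibition Michaelis-Menten kinetics:

```python
def metabolism_rates(conc, vmax, km):
    """Competitive-inhibition Michaelis-Menten rates for co-occurring
    substrates sharing one hepatic enzyme: each substrate inflates the
    apparent Km of the others. Generic illustrative form only."""
    rates = {}
    for i in conc:
        inhibition = sum(conc[j] / km[j] for j in conc if j != i)
        rates[i] = vmax[i] * conc[i] / (km[i] * (1 + inhibition) + conc[i])
    return rates

# hypothetical liver concentrations (arbitrary units) and kinetic constants
conc = {"toluene": 2.0, "hexane": 1.0}
vmax = {"toluene": 5.0, "hexane": 3.0}
km = {"toluene": 1.0, "hexane": 0.5}

mix = metabolism_rates(conc, vmax, km)
alone = metabolism_rates({"toluene": 2.0}, vmax, km)
```

    The qualitative behaviour this captures is the mixture-interaction effect the study simulated a priori: each constituent's clearance is slower in the mixture than when dosed alone.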

  12. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    PubMed

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.

  13. Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.

    2017-10-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a one-fuel mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed. The models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant size of the detonation cell, determined in the calculations, is in good agreement with all known experimental data.

  14. Fitting a Mixture Item Response Theory Model to Personality Questionnaire Data: Characterizing Latent Classes and Investigating Possibilities for Improving Prediction

    ERIC Educational Resources Information Center

    Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk

    2008-01-01

    Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…

  15. Characterization of moderate ash-and-gas explosions at Santiaguito volcano, Guatemala, from infrasound waveform inversion and thermal infrared measurements

    NASA Astrophysics Data System (ADS)

    Angelis, S. De; Lamb, O. D.; Lamur, A.; Hornby, A. J.; von Aulock, F. W.; Chigna, G.; Lavallée, Y.; Rietbrock, A.

    2016-06-01

    The rapid discharge of gas and rock fragments during volcanic eruptions generates acoustic infrasound. Here we present results from the inversion of infrasound signals associated with small and moderate gas-and-ash explosions at Santiaguito volcano, Guatemala, to retrieve the time history of mass eruption rate at the vent. Acoustic waveform inversion is complemented by analyses of thermal infrared imagery to constrain the volume and rise dynamics of the eruption plume. Finally, we combine results from the two methods in order to assess the bulk density of the erupted mixture, constrain the timing of the transition from a momentum-driven jet to a buoyant plume, and to evaluate the relative volume fractions of ash and gas during the initial thrust phase. Our results demonstrate that eruptive plumes associated with small-to-moderate size explosions at Santiaguito only carry minor fractions of ash, suggesting that these events may not involve extensive magma fragmentation in the conduit.

  16. Characterization of moderate ash-and-gas explosions at Santiaguito volcano, Guatemala, from infrasound waveform inversion and thermal infrared measurements.

    PubMed

    Angelis, S De; Lamb, O D; Lamur, A; Hornby, A J; von Aulock, F W; Chigna, G; Lavallée, Y; Rietbrock, A

    2016-06-28

    The rapid discharge of gas and rock fragments during volcanic eruptions generates acoustic infrasound. Here we present results from the inversion of infrasound signals associated with small and moderate gas-and-ash explosions at Santiaguito volcano, Guatemala, to retrieve the time history of mass eruption rate at the vent. Acoustic waveform inversion is complemented by analyses of thermal infrared imagery to constrain the volume and rise dynamics of the eruption plume. Finally, we combine results from the two methods in order to assess the bulk density of the erupted mixture, constrain the timing of the transition from a momentum-driven jet to a buoyant plume, and to evaluate the relative volume fractions of ash and gas during the initial thrust phase. Our results demonstrate that eruptive plumes associated with small-to-moderate size explosions at Santiaguito only carry minor fractions of ash, suggesting that these events may not involve extensive magma fragmentation in the conduit.

  17. Characterization of new functionalized calcium carbonate-polycaprolactone composite material for application in geometry-constrained drug release formulation development.

    PubMed

    Wagner-Hattler, Leonie; Schoelkopf, Joachim; Huwyler, Jörg; Puchkov, Maxim

    2017-10-01

    The performance of a new mineral-polymer composite (FCC-PCL) was assessed for producing complex geometries to aid in the development of controlled-release tablet formulations. The mechanical characteristics of the developed material, such as compactibility, compressibility and elastoplastic deformation, were measured. The results and comparative analysis versus other common excipients suggest efficient formation of complex, stable and impermeable geometries for constrained drug release modifications under compression. The performance of the proposed composite material was tested by compacting it into a geometrically altered tablet (Tablet-In-Cup, TIC), and the drug release was compared to a commercially available product. The TIC device exhibited a uniform surface, high physical stability, and an absence of friability. The FCC-PCL composite had good binding properties and good compactibility. It was possible to reveal an enhanced plasticity characteristic of the new material which was not present in the individual components. The presented FCC-PCL composite mixture has the potential to become a successful tool for formulating controlled-release solid dosage forms.

  18. Microstructure and hydrogen bonding in water-acetonitrile mixtures.

    PubMed

    Mountain, Raymond D

    2010-12-16

    The connection of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures for six, rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.

  19. Development of a Regional Structured and Unstructured Grid Methodology for Chemically Reactive Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Stefanski, Douglas Lawrence

    A finite volume method for solving the Reynolds-Averaged Navier-Stokes (RANS) equations on unstructured hybrid grids is presented. Capabilities for handling arbitrary mixtures of reactive gas species within the unstructured framework are developed. The modeling of turbulent effects is carried out via the 1998 Wilcox k-ω model. This unstructured solver is incorporated within VULCAN -- a multi-block structured grid code -- as part of a novel patching procedure in which non-matching interfaces between structured blocks are replaced by transitional unstructured grids. This approach provides a fully-conservative alternative to VULCAN's non-conservative patching methods for handling such interfaces. In addition, the further development of the standalone unstructured solver toward large-eddy simulation (LES) applications is also carried out. Dual time-stepping using a Crank-Nicolson formulation is added to recover time-accuracy, and modeling of sub-grid scale effects is incorporated to provide higher fidelity LES solutions for turbulent flows. A switch based on the work of Ducros et al. is implemented to transition from a monotonicity-preserving flux scheme near shocks to a central-difference method in vorticity-dominated regions in order to better resolve small-scale turbulent structures. The updated unstructured solver is used to carry out large-eddy simulations of a supersonic constrained mixing layer.
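
    The shock/vorticity switch mentioned in this record is typically built on a Ducros-type sensor: the ratio of squared dilatation to squared dilatation plus squared vorticity magnitude, which approaches 1 in compression-dominated (shock) regions and 0 in vorticity-dominated turbulence. A minimal pointwise sketch (scalar inputs for illustration; a solver would evaluate these from the discrete velocity gradients):

```python
def ducros_sensor(div_u, vorticity_mag, eps=1e-30):
    """Ducros-type shock sensor in [0, 1]: near 1 where dilatation
    dominates (use the monotonicity-preserving flux), near 0 where
    vorticity dominates (use the central difference)."""
    d2 = div_u ** 2
    return d2 / (d2 + vorticity_mag ** 2 + eps)

shock_region = ducros_sensor(div_u=10.0, vorticity_mag=0.1)
vortex_region = ducros_sensor(div_u=0.01, vorticity_mag=5.0)
```

    A blended flux would then weight the two schemes by this sensor value cell by cell.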

  20. Vertical dependence of black carbon, sulphate and biomass burning aerosol radiative forcing

    NASA Astrophysics Data System (ADS)

    Samset, Bjørn H.; Myhre, Gunnar

    2011-12-01

    A global radiative transfer model is used to calculate the vertical profile of shortwave radiative forcing from a prescribed amount of aerosols. We study black carbon (BC), sulphate (SO4) and a black and organic carbon mixture typical of biomass burning (BIO), by prescribing aerosol burdens in layers between 1000 hPa and 20 hPa and calculating the resulting direct radiative forcing divided by the burden (NDRF). We find a strong sensitivity in the NDRF for BC with altitude, with a tenfold increase between BC close to the surface and the lower part of the stratosphere. Clouds are a major contributor to this dependence with altitude, but other factors also contribute. We break down and explain the different physical contributors to this strong sensitivity. The results show a modest regional dependence of the altitudinal dependence of BC NDRF between industrial regions, while for regions with properties deviating from the global mean NDRF variability is significant. Variations due to seasons and interannual changes in cloud conditions are found to be small. We explore the effect that large altitudinal variation in NDRF may have on model estimates of BC radiative forcing when vertical aerosol distributions are insufficiently constrained, and discuss possible applications of the present results for reducing inter-model differences.

  1. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    PubMed

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.
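
    One of the two class-specific measurement models in this record is the generalized partial credit model (GPCM). A compact sketch of its category probabilities for a single polytomous item, assuming the common parameterisation with discrimination alpha and step parameters deltas:

```python
import math

def gpcm_probs(theta, alpha, deltas):
    """Generalized partial credit model: probabilities of the m+1 ordered
    response categories for ability theta, discrimination alpha, and m
    step parameters deltas."""
    # cumulative sums of alpha * (theta - delta_k); category 0 has sum 0
    sums = [0.0]
    for d in deltas:
        sums.append(sums[-1] + alpha * (theta - d))
    ex = [math.exp(s) for s in sums]
    z = sum(ex)
    return [e / z for e in ex]

# a 4-category Likert-style item (illustrative parameters)
p = gpcm_probs(theta=0.5, alpha=1.2, deltas=[-1.0, 0.0, 1.0])
```

    In the proposed mixture, persons in one class get these GPCM probabilities while persons in the other class get IRTree probabilities that treat the middle category as a nonresponse choice.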

  2. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
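
    The workflow in this record couples a quadratic response surface with two-factor interactions to Monte Carlo sampling of uncertain coefficients. A minimal sketch of that coupling (Scheffe-type form in three mixture fractions; the coefficients and 5% relative uncertainty below are illustrative, not values from the cited experiments):

```python
import random

def regression_rate(x1, x2, x3, p):
    """Quadratic Scheffe-type response surface in three mixture fractions
    (x1 + x2 + x3 = 1) with two-factor interaction terms."""
    return (p[0] * x1 + p[1] * x2 + p[2] * x3
            + p[3] * x1 * x2 + p[4] * x1 * x3 + p[5] * x2 * x3)

random.seed(2)
p_nom = [1.0, 1.4, 0.8, 0.5, -0.3, 0.2]   # hypothetical fitted coefficients
x = (0.5, 0.3, 0.2)                        # one point in mixture space

# Monte Carlo: propagate coefficient uncertainty to a dispersed response.
draws = []
for _ in range(5000):
    p = [c * random.gauss(1.0, 0.05) for c in p_nom]   # 5% relative sd
    draws.append(regression_rate(*x, p))

mean = sum(draws) / len(draws)
sd = (sum((d - mean) ** 2 for d in draws) / len(draws)) ** 0.5
```

    Repeating this over a grid of mixture points yields the dispersed-regression-rate surfaces that the study visualised with ternary and Expanded-Durov diagrams.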

  3. Applications of the Simple Multi-Fluid Model to Correlations of the Vapor-Liquid Equilibrium of Refrigerant Mixtures Containing Carbon Dioxide

    NASA Astrophysics Data System (ADS)

    Akasaka, Ryo

    This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures where experimental data are limited. Vapor-liquid equilibrium (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2-tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations in calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for the design and simulation of heat pumps and refrigeration systems using these mixtures as working fluids.

  4. Transport of Perfluorocarbon Tracers in the Cranfield Geological Carbon Sequestration Project

    NASA Astrophysics Data System (ADS)

    Moortgat, J.; Soltanian, M. R.; Amooie, M. A.; Cole, D. R.; Graham, D. E.; Pfiffner, S. M.; Phelps, T.

    2017-12-01

    A field-scale carbon dioxide (CO2) injection pilot project was conducted by the Southeast Regional Sequestration Partnership (SECARB) at Cranfield, Mississippi. Two associated campaigns in 2009 and 2010 were carried out to co-inject perfluorocarbon tracers (PFTs) and sulfur hexafluoride (SF6) with CO2. Tracers in gas samples from two observation wells were analyzed to construct breakthrough curves. We present the compiled field data as well as detailed numerical modeling of the flow and transport of CO2, brine, and introduced tracers. A high-resolution static model of the formation geology in the Detailed Area Study (DAS) was used in order to capture the impact of connected flow pathways created by fluvial channels on breakthrough curves and breakthrough times of PFTs and SF6 tracers. We use the cubic-plus-association (CPA) equation of state, which takes into account the polar nature of water molecules, to describe the phase behavior of CO2-brine-tracer mixtures. We show how the combination of multiple tracer injection pulses with detailed numerical simulations provides a powerful tool for constraining both formation properties and how complex flow pathways develop over time.

  5. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  6. A Method to Constrain Mass and Spin of GRB Black Holes within the NDAF Model

    NASA Astrophysics Data System (ADS)

    Liu, Tong; Xue, Li; Zhao, Xiao-Hong; Zhang, Fu-Wen; Zhang, Bing

    2016-04-01

    Black holes (BHs) hide themselves behind various astronomical phenomena, and their properties, i.e., mass and spin, are usually difficult to constrain. One leading candidate for the central engine model of gamma-ray bursts (GRBs) invokes a stellar mass BH and a neutrino-dominated accretion flow (NDAF), with the relativistic jet launched due to neutrino-anti-neutrino annihilations. Such a model gives rise to a matter-dominated fireball and is suitable for interpreting GRBs with a dominant thermal component of photospheric origin. We propose a method to constrain BH mass and spin within the framework of this model and apply the method to the thermally dominant GRB 101219B, whose initial jet launching radius, r0, is constrained from the data. Using our numerical model of NDAF jets, we estimate the following constraints on the central BH: mass MBH ˜ 5-9 M⊙, spin parameter a* ≳ 0.6, and disk mass 3 M⊙ ≲ Mdisk ≲ 4 M⊙. Our results also suggest that the NDAF model is a competitive candidate for the central engine of GRBs with a strong thermal component.

  7. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    PubMed

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.
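
    The abstract's key idea is weighting each powder's contribution by its volumetric occupancy under the applied compaction pressure. A hedged sketch of that idea using stand-in relations (Heckel-style densification and Ryshkewitch-Duckworth strength-porosity decay; the functional forms and all coefficients below are illustrative, not the published model):

```python
import math

def relative_density(pressure, d0, k):
    """Heckel-style densification: relative density under compaction
    pressure (illustrative stand-in for a measured compressibility curve)."""
    return 1 - (1 - d0) * math.exp(-pressure / k)

def tensile_strength(rel_density, sigma0, b):
    """Ryshkewitch-Duckworth: strength falls exponentially with porosity."""
    return sigma0 * math.exp(-b * (1 - rel_density))

def mixture_strength(pressure, fractions, params):
    """Weight each component's strength by its volumetric occupancy at the
    applied pressure (the abstract's central mechanism)."""
    vols = {name: f * relative_density(pressure, *params[name][:2])
            for name, f in fractions.items()}
    total = sum(vols.values())
    return sum((v / total)
               * tensile_strength(relative_density(pressure, *params[n][:2]),
                                  *params[n][2:])
               for n, v in vols.items())

# hypothetical (d0, k, sigma0, b) per excipient
params = {"MCC": (0.3, 80.0, 9.0, 6.0), "mannitol": (0.5, 150.0, 3.0, 6.0)}
blend = mixture_strength(150.0, {"MCC": 0.5, "mannitol": 0.5}, params)
pure_mcc = mixture_strength(150.0, {"MCC": 1.0}, params)
pure_man = mixture_strength(150.0, {"mannitol": 1.0}, params)
```

    Because the weights respond to pressure-dependent occupancy rather than nominal mass fractions, extensions of this scheme can in principle produce the neutral, negative and positive deviations from linear blending that the abstract describes.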

  8. The Mass Distribution of Companions to Low-mass White Dwarfs

    NASA Astrophysics Data System (ADS)

    Andrews, Jeff J.; Price-Whelan, Adrian M.; Agüeros, Marcel A.

    2014-12-01

    Measuring the masses of companions to single-line spectroscopic binary stars is (in general) not possible because of the unknown orbital plane inclination. Even when the mass of the visible star can be measured, only a lower limit can be placed on the mass of the unseen companion. However, since these inclination angles should be isotropically distributed, for a large enough, unbiased sample, the companion mass distribution can be deconvolved from the distribution of observables. In this work, we construct a hierarchical probabilistic model to infer properties of unseen companion stars given observations of the orbital period and projected radial velocity of the primary star. We apply this model to three mock samples of low-mass white dwarfs (LMWDs; M ≲ 0.45 M⊙) and a sample of post-common-envelope binaries. We use a mixture of two Gaussians to model the WD and neutron star (NS) companion mass distributions. Our model successfully recovers the initial parameters of these test data sets. We then apply our model to 55 WDs in the extremely low-mass (ELM) WD Survey. Our maximum a posteriori model for the WD companion population has a mean mass μWD = 0.74 M⊙, with a standard deviation σWD = 0.24 M⊙. Our model constrains the NS companion fraction fNS to be <16% at 68% confidence. We make samples from the posterior distribution publicly available so that future observational efforts may compute the NS probability for newly discovered LMWDs.
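
    The deconvolution in this record rests on the inclination angles being isotropically distributed: for isotropic orientations cos(i) is uniform on [0, 1], so the observable projected quantities are smeared by a sin(i) factor with a known distribution (mean π/4). A quick simulation of that known fact:

```python
import math
import random

random.seed(3)

# Isotropic orbit orientations: cos(i) ~ Uniform(0, 1),
# so sin(i) = sqrt(1 - cos(i)^2).
sins = [math.sqrt(1 - random.random() ** 2) for _ in range(200_000)]

mean_sin_i = sum(sins) / len(sins)   # analytic value is pi/4 ~ 0.785
```

    Because the sin(i) distribution is fully known, a hierarchical model over a large sample can invert this smearing and recover the underlying companion-mass mixture, which is what the paper does with its two-Gaussian WD/NS model.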

  9. A viable dark fluid model

    NASA Astrophysics Data System (ADS)

    Elkhateeb, Esraa

    2018-01-01

    We consider a cosmological model based on a generalization of the equation of state proposed by Nojiri and Odintsov (2004) and Štefančić (2005, 2006). We argue that this model works as a dark fluid model which can interpolate between the dust equation of state and the dark energy equation of state. We show how the asymptotic behavior of the equation of state constrains the parameters of the model. The causality condition for the model is also studied to constrain the parameters, and the fixed points are tested to determine different solution classes. Observations of the Hubble diagram of Type Ia supernovae are used to further constrain the model. We present an exact solution of the model and calculate the luminosity distance and the energy density evolution. We also calculate the deceleration parameter to test the state of the universe's expansion.
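
    The interpolation property claimed for the dark fluid can be illustrated with a toy equation-of-state parameter w = p/ρ that tends to 0 (dust) at high density and to -1 (dark energy) at low density. The functional form and parameters below are illustrative only, not the specific Nojiri-Odintsov generalization studied in the paper:

```python
def eos_w(rho, rho_star=1.0, alpha=2.0):
    """Toy dark-fluid equation-of-state parameter interpolating between
    dust (w -> 0 as rho -> infinity) and dark energy (w -> -1 as
    rho -> 0). rho_star sets the transition density, alpha its sharpness."""
    return -1.0 / (1.0 + (rho / rho_star) ** alpha)

w_early = eos_w(1e4)    # dense, matter-dominated epoch: w ~ 0
w_late = eos_w(1e-4)    # dilute, dark-energy-dominated epoch: w ~ -1
```

    Constraining the asymptotic behavior, as the abstract describes, amounts to pinning down the analogues of `rho_star` and `alpha` so the fluid reproduces both regimes.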

  10. Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors

    ERIC Educational Resources Information Center

    Guerra-Peña, Kiero; Steinley, Douglas

    2016-01-01

    Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…

  11. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth sensor based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Processes is their cubic learning complexity when dealing with a large database, due to the inversion of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high-quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
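The cost argument above can be made concrete in a toy 1-D setting: exact GP regression costs O(n³) in the training size, so the data are split into local regions, one small GP is fitted per region, and predictions are blended by proximity to each region's centre. This is an illustrative reconstruction of the idea only, not the authors' implementation; the kernel, length-scale, jitter, and Gaussian proximity weights are all arbitrary choices:

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # squared-exponential kernel between 1-D point sets a and b
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_fit(x, y, noise=1e-2):
    # alpha = (K + noise*I)^-1 y, the usual GP weight vector
    K = rbf(x, x) + noise * np.eye(len(x))
    return np.linalg.solve(K, y)

def gp_predict(x_train, alpha, x_new):
    # posterior mean of a zero-mean GP
    return rbf(x_new, x_train) @ alpha

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 6, 300))
y = np.sin(x)

# two local regions instead of one 300-point GP: each solve is much cheaper
regions = [x < 3, x >= 3]
centres = np.array([x[m].mean() for m in regions])
models = [(x[m], gp_fit(x[m], y[m])) for m in regions]

def mixture_predict(x_new):
    preds = np.stack([gp_predict(xt, a, x_new) for xt, a in models])
    w = np.exp(-((x_new[None, :] - centres[:, None]) ** 2))  # proximity weights
    return (w * preds).sum(axis=0) / w.sum(axis=0)

x_new = np.linspace(0.5, 5.5, 50)
err = np.abs(mixture_predict(x_new) - np.sin(x_new)).max()
```

Because each query is dominated by the nearest local model, predictions stay accurate while both training and prediction avoid the full covariance inversion.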

  12. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    PubMed

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under the mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability.

  13. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    PubMed Central

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under the mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699

  14. Solubility modeling of refrigerant/lubricant mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels, H.H.; Sienel, T.H.

    1996-12-31

    A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.

  15. On meeting capital requirements with a chance-constrained optimization model.

    PubMed

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures, in order to draw valuable insights from a financial perspective. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
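The key step in any such model, converting a probabilistic constraint into a deterministic convex one, has a standard closed form when the uncertain quantity is Gaussian: P(rᵀx ≥ c) ≥ 0.95 with r ~ N(μ, Σ) becomes μᵀx − z₀.₉₅·sqrt(xᵀΣx) ≥ c. A generic sketch with hypothetical numbers (this is the textbook normal counterpart, not the paper's CreditMetrics-based formulation):

```python
import numpy as np

z95 = 1.645  # standard normal 95% quantile

def chance_constraint_ok(x, mu, Sigma, c, z=z95):
    """Deterministic convex counterpart of P(r @ x >= c) >= 0.95 for r ~ N(mu, Sigma)."""
    return mu @ x - z * np.sqrt(x @ Sigma @ x) >= c

mu = np.array([0.06, 0.03])          # expected asset returns (hypothetical)
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])     # return covariance (hypothetical)
x = np.array([0.3, 0.7])             # portfolio weights

# left-hand side is 0.039 - 1.645*sqrt(0.0176) ~= -0.179, so the 95%
# guarantee holds for any threshold c <= -0.179
print(chance_constraint_ok(x, mu, Sigma, c=-0.25))
```

Because the left-hand side is concave in x, the constraint defines a convex feasible set, which is what makes the optimization tractable.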

  16. Mineralogic and compositional properties of Martian soil and dust: results from Mars Pathfinder

    USGS Publications Warehouse

    Bell, J.F.; McSween, H.Y.; Crisp, J.A.; Morris, R.V.; Murchie, S.L.; Bridges, N.T.; Johnson, J. R.; Britt, D.T.; Golombek, M.P.; Moore, H.J.; Ghosh, A.; Bishop, J.L.; Anderson, R.C.; Brückner, J.; Economou, T.; Greenwood, J.P.; Gunnlaugsson, H.P.; Hargraves, R.M.; Hviid, S.; Knudsen, J.M.; Madsen, M.B.; Reid, R.; Rieder, R.; Soderblom, L.

    2000-01-01

    Mars Pathfinder obtained multispectral, elemental, magnetic, and physical measurements of soil and dust at the Sagan Memorial Station during the course of its 83 sol mission. We describe initial results from these measurements, concentrating on multispectral and elemental data, and use these data, along with previous Viking, SNC meteorite, and telescopic results, to help constrain the origin and evolution of Martian soil and dust. We find that soils and dust can be divided into at least eight distinct spectral units, based on parameterization of Imager for Mars Pathfinder (IMP) 400 to 1000 nm multispectral images. The most distinctive spectral parameters for soils and dust are the reflectivity in the red, the red/blue reflectivity ratio, the near-IR spectral slope, and the strength of the 800 to 1000 nm absorption feature. Most of the Pathfinder spectra are consistent with the presence of poorly crystalline or nanophase ferric oxide(s), sometimes mixed with small but varying degrees of well-crystalline ferric and ferrous phases. Darker soil units appear to be coarser-grained, compacted, and/or mixed with a larger amount of dark ferrous materials relative to bright soils. Nanophase goethite, akaganeite, schwertmannite, and maghemite are leading candidates for the origin of the absorption centered near 900 nm in IMP spectra. The ferrous component in the soil cannot be well-constrained based on IMP data. Alpha proton X-ray spectrometer (APXS) measurements of six soil units show little variability within the landing site and show remarkable overall similarity to the average Viking-derived soil elemental composition. Differences exist between Viking and Pathfinder soils, however, including significantly higher S and Cl abundances and lower Si abundances in Viking soils and the lack of a correlation between Ti and Fe in Pathfinder soils. No significant linear correlations were observed between IMP spectral properties and APXS elemental chemistry. 
Attempts at constraining the mineralogy of soils and dust using normative calculations involving mixtures of smectites and silicate and oxide minerals did not yield physically acceptable solutions. We attempted to use the Pathfinder results to constrain a number of putative soil and dust formation scenarios, including palagonitization and acid-fog weathering. While the Pathfinder soils cannot be chemically linked to the Pathfinder rocks by palagonitization, this study and McSween et al. [1999] suggest that palagonitic alteration of a Martian basaltic rock, plus mixture with a minor component of locally derived andesitic rock fragments, could be consistent with the observed soil APXS and IMP properties.

  17. Constraining new physics models with isotope shift spectroscopy

    NASA Astrophysics Data System (ADS)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B -L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.

  18. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    USGS Publications Warehouse

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
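The N-mixture likelihood being fitted is simple to write down: counts y_it ~ Binomial(N_i, p) at site i on visit t, with latent abundance N_i ~ Poisson(λ) marginalized up to a truncation bound K. A toy maximum-likelihood version is sketched below to illustrate the model structure only (the paper's analysis is Bayesian, and the grid search stands in for a real optimizer):

```python
import numpy as np
from math import lgamma, log

def nmix_loglik(y, lam, p, K=40):
    """Marginal log-likelihood of an N-mixture model; y has shape (sites, visits)."""
    ll = 0.0
    for site in np.atleast_2d(y):
        terms = []
        for N in range(int(site.max()), K + 1):
            lp = N * log(lam) - lam - lgamma(N + 1)   # log Poisson(N | lam)
            for c in site:                            # log Binomial(c | N, p)
                lp += (lgamma(N + 1) - lgamma(c + 1) - lgamma(N - c + 1)
                       + c * log(p) + (N - c) * log(1 - p))
            terms.append(lp)
        ll += np.logaddexp.reduce(np.array(terms))    # marginalize over N
    return ll

rng = np.random.default_rng(1)
N_true = rng.poisson(5.0, size=50)                    # latent site abundances
y = rng.binomial(N_true[:, None], 0.4, size=(50, 3))  # 3 replicate visits

# crude grid search for the maximum-likelihood estimate
grid = [(l, q) for l in np.linspace(2, 9, 15) for q in np.linspace(0.1, 0.9, 17)]
lam_hat, p_hat = max(grid, key=lambda t: nmix_loglik(y, *t))
```

The product λp (the expected count) is sharply identified by the data; separating λ from p is what requires the within-site replication the abstract discusses.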

  19. Process Dissociation and Mixture Signal Detection Theory

    ERIC Educational Resources Information Center

    DeCarlo, Lawrence T.

    2008-01-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…

  20. Investigating Approaches to Estimating Covariate Effects in Growth Mixture Modeling: A Simulation Study

    ERIC Educational Resources Information Center

    Li, Ming; Harring, Jeffrey R.

    2017-01-01

    Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…

  1. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  2. Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.

    PubMed

    Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo

    Unsupervised object discovery and localization aims to discover dominant object classes and localize all object instances in a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge about the given image collection is exploited to facilitate object discovery. Moreover, the topic models used in those methods suffer from the topic coherence issue: some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in the form of so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined so that one must-link constrains only one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes; thus the must-links in our approach are semantic-specific, which allows more efficient exploitation of discriminative prior knowledge from Web images. Extensive experiments validated the effectiveness of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. 
In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.

  3. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. I. Parameter estimation

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions, and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  4. Context-Aware Generative Adversarial Privacy

    NASA Astrophysics Data System (ADS)

    Huang, Chong; Kairouz, Peter; Chen, Xiao; Sankar, Lalitha; Rajagopal, Ram

    2017-12-01

    Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model, and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.

  5. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
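A stripped-down version of such a sampler, without the genetic and permanent environmental effects and with known unit variances, alternates three conditional draws. The sketch below is illustrative only (mock data, not the authors' model or priors):

```python
import numpy as np

rng = np.random.default_rng(2)
# mock somatic-cell-score-like data: "healthy" N(3, 1) vs "diseased" N(6, 1)
x = np.concatenate([rng.normal(3, 1, 700), rng.normal(6, 1, 300)])
n = len(x)

mu = np.quantile(x, [0.25, 0.75])  # initial component means
pm = 0.5                           # probability of "diseased" membership
keep = []
for it in range(600):
    # 1) sample latent health-status indicators given current parameters
    d0 = (1 - pm) * np.exp(-0.5 * (x - mu[0]) ** 2)
    d1 = pm * np.exp(-0.5 * (x - mu[1]) ** 2)
    z = rng.random(n) < d1 / (d0 + d1)
    # 2) sample the mixing proportion from its Beta full conditional (uniform prior)
    pm = rng.beta(1 + z.sum(), 1 + n - z.sum())
    # 3) sample each component mean from its normal full conditional (flat prior)
    for k, mask in enumerate([~z, z]):
        mu[k] = rng.normal(x[mask].mean(), 1 / np.sqrt(mask.sum()))
    if it >= 100:                  # discard burn-in
        keep.append((pm, mu.copy()))

pm_hat = np.mean([s[0] for s in keep])
mu_hat = np.mean([s[1] for s in keep], axis=0)
```

The per-observation membership probabilities computed in step 1 are exactly the "posterior probabilities of putative mastitis" the abstract refers to, here without the random-effect layers that improved classification in the study.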

  6. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    PubMed

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
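The contrast drawn above between mixed models and mixture regression models can be made concrete with a two-cluster mixture of linear trajectories fitted by EM. This is a bare-bones illustration (observation-level responsibilities, a common residual variance, no random effects), not the models used in the study:

```python
import numpy as np

rng = np.random.default_rng(3)
# 100 individuals, 6 ages each; two latent tactics with different growth slopes
age = np.tile(np.arange(6.0), 100)
fast = np.repeat(rng.random(100) < 0.4, 6)           # ~40% follow a "fast" tactic
y = 1.0 + np.where(fast, 2.0, 0.5) * age + rng.normal(0, 1, 600)

X = np.column_stack([np.ones_like(age), age])
betas = np.array([[0.0, 0.0], [0.0, 3.0]])           # init (intercept, slope) per cluster
w = np.array([0.5, 0.5])
sigma = 1.0
for _ in range(100):
    # E-step: responsibility of each cluster for each observation
    resid = y[:, None] - X @ betas.T
    dens = w * np.exp(-0.5 * (resid / sigma) ** 2) / sigma
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per cluster, then mixing weights and sigma
    for k in range(2):
        Wk = r[:, k]
        betas[k] = np.linalg.solve(X.T @ (Wk[:, None] * X), X.T @ (Wk * y))
    w = r.mean(axis=0)
    resid = y[:, None] - X @ betas.T
    sigma = np.sqrt((r * resid ** 2).sum() / len(y))

slopes = np.sort(betas[:, 1])
```

A single mixed model would estimate one population slope with individual deviations around it; the mixture recovers both cluster-specific slopes, which is the practical advantage the authors argue for when clusters are present.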

  7. Constraints on Ceres' internal structure from the Dawn gravity and shape data

    NASA Astrophysics Data System (ADS)

    Ermakov, A.; Zuber, M. T.; Smith, D. E.; Fu, R. R.; Raymond, C. A.; Russell, C. T.; Park, R. S.

    2015-12-01

    Ceres is the largest body in the asteroid belt, with a radius of approximately 470 km. It is large enough to attain a shape much closer to hydrostatic equilibrium than major asteroids. Pre-Dawn shape models of Ceres (e.g. Thomas et al., 2005; Carry et al., 2008) revealed that its shape is consistent with a hydrostatic ellipsoid. After the arrival of the Dawn spacecraft in Ceres orbit in March 2015, Framing Camera images were used to construct shape models of Ceres. Meanwhile, radio-tracking data are being used to develop gravity models. We use the Dawn-derived shape and gravity models to constrain Ceres' internal structure. These data for the first time allow estimation of the degree to which Ceres is hydrostatic. Observed non-hydrostatic effects include a 2.1 km triaxiality (difference between the two equatorial axes) as well as a 660-m offset between the center of mass and the center of figure. The Dawn gravity data from the Survey orbit show that Ceres has a central density concentration. Second-degree sectorial gravity coefficients are negatively correlated with topography, indicating a peculiar interior structure. We compute the relative crustal thickness based on the observed Bouguer anomaly. Hydrostatic models show that Ceres appears more differentiated based on its gravity than on its shape. We expand the Ceres shape in spherical harmonics, observing that the power spectrum of topography deviates from the power law at low degrees (Fig. 1). We interpret the decrease of power at low degrees as due to viscous relaxation. We suggest that relaxation happens on Ceres but, unlike in the model of Bland (2013), is important only at the lowest degrees, corresponding to scales of several hundreds of km. There are only a few features on Ceres of that size, and at least one of them (an impact basin provisionally named Kerwan) appears relaxed. The simplest explanation is that Ceres' outer shell is not pure ice or pure rock but an ice-rock mixture that allows some relaxation at the longest wavelengths. We use the deal.ii finite-element library (Bangerth 2007) to compute relaxed topography spectra. In our future work, we plan to model viscous relaxation to constrain the viscosity profile and thermal evolution.

  8. Importance of Geodetically Controlled Topography to Constrain Rates of Volcanism and Internal Magma Plumbing Systems

    NASA Astrophysics Data System (ADS)

    Glaze, L. S.; Baloga, S. M.; Garvin, J. B.; Quick, L. C.

    2014-05-01

    Lava flows and flow fields on Venus lack sufficient topographic data for any type of quantitative modeling to estimate eruption rates and durations. Such modeling can constrain rates of resurfacing and provide insights into magma plumbing systems.

  9. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Treesearch

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...

  10. A general mixture model and its application to coastal sandbar migration simulation

    NASA Astrophysics Data System (ADS)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence, while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solution. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. A VOF method for capturing the water-air free surface is coupled with a model for topography change. The bed load transport rate and the suspended load entrainment rate are both determined by the seabed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicated that, under small-amplitude regular waves, erosion occurred on the sandbar slope facing against the wave propagation direction, while deposition dominated on the slope facing towards it, indicating an onshore migration tendency.
The computation results also shows that the suspended load will also make great contributions to the topography change in the surf zone, which is usually neglected in some previous researches.

  11. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by the concentration addition (CA) model, whereas the response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture of MMI (TPO inhibitor) and KClO4 (NIS inhibitor) dosed at a fixed ratio of the EC10, which yielded similar CA and RA predictions, so no conclusive result could be drawn. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to better predict the effect of mixtures of goitrogens.
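
The two additivity concepts tested above can be made concrete in code. The sketch below assumes log-logistic (Hill) dose-response curves; the function names and parameter values are illustrative, not taken from the study.

```python
import numpy as np
from scipy.optimize import brentq

def effect(c, ec50, h):
    """Log-logistic dose-response: fractional effect in [0, 1)."""
    return c**h / (ec50**h + c**h)

def inv_effect(E, ec50, h):
    """Concentration producing fractional effect E."""
    return ec50 * (E / (1.0 - E))**(1.0 / h)

def ca_prediction(conc, ec50, h):
    """Concentration addition: solve for E with sum_i c_i / EC_E,i = 1."""
    f = lambda E: sum(c / inv_effect(E, e, hh)
                      for c, e, hh in zip(conc, ec50, h)) - 1.0
    return brentq(f, 1e-9, 1 - 1e-9)

def ia_prediction(conc, ec50, h):
    """Independent action (response addition): E = 1 - prod_i (1 - E_i)."""
    return 1.0 - np.prod([1.0 - effect(c, e, hh)
                          for c, e, hh in zip(conc, ec50, h)])
```

For a binary mixture of two identical chemicals each dosed at half its EC50, CA predicts the 50% effect level, while IA for two chemicals each at its EC50 predicts 75%.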

  12. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits from the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks a sufficient comparison of different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity based on different temporal treatments: (I) a linear time trend; (II) a quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction. The mixture component accommodates the global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for the predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was evidently more complex due to the information borrowed from neighboring years; this addition of parameters, however, yielded a significant advantage in posterior deviance, which in turn benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to their much lower deviance. For cross-validation comparison of predictive accuracy, the linear time trend model was judged the best, as it recorded the highest log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as for model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimates, normal predictions, and model replications. The linear model again performed the best in most scenarios, except one case using model-replicated data and two cases involving prediction without random effects. These results indicate the mediocre performance of the linear trend when random effects are excluded from the evaluation, possibly because the flexible mixture space-time interaction efficiently absorbs the residual variability escaping the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as they generated more precise estimated crash counts across all four models, suggesting that the advantages of the mixture component at the model-fitting stage carry over to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
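
The LPML criterion used for cross-validation above can be computed from conditional predictive ordinates (CPO). A minimal sketch, assuming pointwise likelihoods evaluated over posterior draws; the synthetic draws here are illustrative:

```python
import numpy as np

def lpml(like):
    """like: (n_draws, n_obs) pointwise likelihoods under posterior draws.
    CPO_i is the harmonic mean over draws; LPML = sum_i log CPO_i."""
    cpo = 1.0 / np.mean(1.0 / like, axis=0)
    return float(np.sum(np.log(cpo)))

# synthetic example: likelihoods of 5 observations over 1000 posterior draws
rng = np.random.default_rng(0)
like = rng.uniform(0.2, 1.0, size=(1000, 5))
score = lpml(like)
```

Higher (less negative) LPML indicates better leave-one-out predictive performance, which is how the linear trend model was ranked best above.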

  13. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    PubMed

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
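
The element-wise classification idea can be illustrated with a toy two-process mixture in which the mixing weight is a logistic function of response time. All parameter names below are hypothetical and not the paper's notation:

```python
import math

def process_posterior(rt, p_correct_fast, p_correct_slow,
                      correct, rt_mid, slope):
    """Posterior probability that a single item response came from the
    fast process, given its response time and correctness. The prior
    weight on the fast process decays logistically with response time."""
    w = 1.0 / (1.0 + math.exp(slope * (rt - rt_mid)))  # P(fast | rt)
    lf = p_correct_fast if correct else 1.0 - p_correct_fast
    ls = p_correct_slow if correct else 1.0 - p_correct_slow
    return w * lf / (w * lf + (1.0 - w) * ls)
```

With a low-accuracy fast process and a high-accuracy slow process, an incorrect answer given quickly is attributed to the fast process with much higher probability than the same answer given slowly.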

  14. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    NASA Astrophysics Data System (ADS)

    Volk, Brent L.; Lagoudas, Dimitris C.; Maitland, Duncan J.

    2011-09-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5-4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.
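
The role of the frozen volume fraction can be illustrated with a much simpler phase-fraction sketch than the Chen and Lagoudas model itself; the logistic transition, transition temperature, and modulus below are illustrative placeholders, not calibrated values:

```python
import math

def frozen_fraction(T, T_trans=60.0, width=5.0):
    """Illustrative frozen volume fraction: near 1 below the transition
    temperature, near 0 above it (logistic smoothing)."""
    return 1.0 / (1.0 + math.exp((T - T_trans) / width))

def free_recovery_strain(eps_applied, T):
    """Stored strain remaining as the frozen phase melts on heating;
    full recovery corresponds to this tending to zero at high T."""
    return eps_applied * frozen_fraction(T)

def constrained_recovery_stress(eps_applied, T, E=10.0):
    """Stress (toy modulus E, MPa) developed when the recovered strain
    is blocked during constrained displacement recovery."""
    return E * eps_applied * (1.0 - frozen_fraction(T))
```

Heating drives the frozen fraction down, so free-recovery strain falls toward zero while constrained-recovery stress rises, qualitatively matching the two experiment types described above.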

  15. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
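
One simple way to realize preemptive constraining is to clip the state estimate to physical bounds and zero the covariance entries of any clipped (now certain) components before the measurement correction. This is an illustrative sketch, not the patented scheme:

```python
import numpy as np

def constrain(x, P, lower, upper):
    """Clip state estimates to bounds; zero the covariance rows and
    columns of any clipped component so no violation propagates."""
    x_c = np.clip(x, lower, upper)
    active = x_c != x
    P_c = P.copy()
    P_c[active, :] = 0.0
    P_c[:, active] = 0.0
    return x_c, P_c

def ekf_update(x, P, z, H, R):
    """Standard linearized measurement correction."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P
```

In an IGCC setting, the bounds would encode constraints such as non-negative concentrations or flows; constraining before the correction keeps both the state and its covariance free of violations.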

  16. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
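
The exponential-mixture survival function matched to the query data has a direct closed form. A minimal sketch with illustrative weights and rates (not fitted values from the paper):

```python
import numpy as np

def mixture_survival(t, weights, rates):
    """Survival function of an exponential mixture:
    S(t) = sum_k w_k * exp(-lambda_k * t), with weights summing to 1."""
    t = np.asarray(t, dtype=float)
    return sum(w * np.exp(-lam * t) for w, lam in zip(weights, rates))

# two-component example: a short-lived and a long-lived subpopulation
w, lam = [0.7, 0.3], [1.0, 0.05]
S = mixture_survival([0.0, 1.0, 10.0], w, lam)
```

The heavy tail contributed by the small-rate component is what lets such mixtures capture heterogeneity that a single exponential cannot.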

  17. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    PubMed

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach of reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating product and reactant descriptors or taking the difference between descriptors of products and reactants. This reaction representation does not require explicit labeling of the reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
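
The mixture-based encoding can be sketched with toy fragment counts standing in for SiRMS simplex descriptors; both the concatenation and difference variants appear below:

```python
from collections import Counter

def mixture_descriptor(molecules):
    """Descriptor of a mixture: summed fragment counts of its members.
    Each molecule is a dict of fragment -> count (a toy stand-in for
    SiRMS simplex descriptors)."""
    total = Counter()
    for frags in molecules:
        total.update(frags)
    return total

def reaction_descriptor(reactants, products, mode="difference"):
    """Encode a reaction from its two mixtures, with no reaction-center
    labeling: either subtract the two vectors or concatenate them."""
    r, p = mixture_descriptor(reactants), mixture_descriptor(products)
    keys = sorted(set(r) | set(p))
    if mode == "difference":
        return [p[k] - r[k] for k in keys]
    return [r[k] for k in keys] + [p[k] for k in keys]
```

In the difference variant, unchanged fragments cancel, so the vector is implicitly dominated by the transformed part of the molecules without ever labeling a reaction center explicitly.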

  18. The added value of remote sensing products in constraining hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

    The calibration of hydrological models still depends on the availability of streamflow data, even though additional sources of information (i.e. remotely sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis assessing the models' ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding of, and recommendations for, the use of remotely sensed products in constraining conceptual hydrological models and improving predictive capability, especially for data-sparse regions.
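
The weighting procedure can be sketched as follows, with each parameter set scored by the product of its coefficients of determination across remote-sensing products; this is a simplified reading of the conditional-probability weighting, not the authors' exact implementation:

```python
import numpy as np

def r_squared(sim, obs):
    """Coefficient of determination of the regression between a
    modelled state/flux series and the corresponding product."""
    r = np.corrcoef(np.asarray(sim, float), np.asarray(obs, float))[0, 1]
    return r * r

def weight_parameter_sets(r2_per_product):
    """r2_per_product: (n_sets, n_products) array of R^2 values.
    Each parameter set's weight is the product of its R^2 values
    over all products, normalized to sum to one."""
    w = np.prod(np.asarray(r2_per_product, float), axis=1)
    return w / w.sum()
```

Parameter sets whose simulated snow, soil moisture, or evaporation states track the products across the board thus dominate the a posteriori distribution, with no discharge data involved.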

  19. Radiocarbon (14C) Constraints On The Fraction Of Refractory Dissolved Organic Carbon In Primary Marine Aerosol From The Northwest Atlantic

    NASA Astrophysics Data System (ADS)

    Beaupre, S. R.; Kieber, D. J.; Keene, W. C.; Long, M. S.; Frossard, A. A.; Kinsey, J. D.; Duplessis, P.; Chang, R.; Maben, J. R.; Lu, X.; Zhu, Y.; Bisgrove, J.

    2017-12-01

    Nearly all organic carbon in seawater is dissolved (DOC), with more than 95% considered refractory based on modeled average lifetimes (~16,000 years) and characteristically old bulk radiocarbon (14C) ages (4000-6000 years) that exceed the timescales of overturning circulation. Although this refractory dissolved organic carbon (RDOC) is present throughout the oceans as a major reservoir of the global carbon cycle, its sources and sinks are poorly constrained. Recently, RDOC was proposed to be removed from the oceans through adsorption onto the surfaces of rising bubble plumes produced by breaking waves, ejection into the atmosphere via bubble bursting as a component of primary marine aerosol (PMA), and subsequent oxidation in the atmosphere. To test this mechanism, we used natural abundance 14C (5730 ± 40 yr half-life) to trace the fraction of RDOC in PMA produced in a high capacity generator at two biologically-productive and two oligotrophic hydrographic stations in the Northwest Atlantic Ocean during a research cruise aboard the R/V Endeavor (Sep - Oct 2016). The 14C signatures of PMA separately generated day and night from near-surface (5 m) and deep (2500 m) seawater were compared with corresponding 14C signatures in seawater of near-surface dissolved inorganic carbon (DIC, a proxy for recently produced organic matter), bulk deep DOC (a proxy for RDOC), and near-surface bulk DOC. Results constrain the selectivity of PMA formation from RDOC in natural mixtures of recently produced and refractory DOC. The implications of these results for PMA formation and RDOC biogeochemistry will be discussed.
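
The fraction of RDOC in PMA carbon can be estimated from a two-endmember 14C mass balance between a recently produced endmember and a refractory endmember. A minimal sketch with illustrative Δ14C values, not the cruise data:

```python
def rdoc_fraction(d14c_pma, d14c_modern, d14c_rdoc):
    """Two-endmember mass balance on Delta-14C signatures:
    d14c_pma = f * d14c_rdoc + (1 - f) * d14c_modern,
    solved for f, the fraction of PMA carbon derived from RDOC."""
    return (d14c_pma - d14c_modern) / (d14c_rdoc - d14c_modern)
```

A PMA signature falling halfway between the modern (DIC-like) and refractory (deep-DOC-like) endmembers implies that about half the aerosol carbon came from RDOC.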

  20. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  1. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models

    ERIC Educational Resources Information Center

    Liu, Junhui

    2012-01-01

    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  2. Effects of three veterinary antibiotics and their binary mixtures on two green alga species.

    PubMed

    Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A

    2018-03-01

    The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) have been examined in two green algae representative of the freshwater environment, the international standard strain Pseudokirchneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic in A. fusiformis than in the standard strain. The toxicological interactions of binary mixtures were predicted using the two classical models of additivity, Concentration Addition (CA) and Independent Action (IA), and compared to the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L-1. The CA model predicted the inhibition of algal growth in the three mixtures in P. subcapitata, and in the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained in the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of the toxic units (TU) for the mixtures was calculated. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect, and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas the three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed a similar sensitivity to P. subcapitata when exposed to binary mixtures of antibiotics. Copyright © 2017 Elsevier Ltd. All rights reserved.
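
The toxic-unit bookkeeping behind the additivity calls can be sketched as follows; the classification thresholds here are illustrative, not the study's statistical criteria:

```python
def toxic_units(concs, ec50s):
    """Sum of toxic units: TU_i = c_i / EC50_i, evaluated at the
    mixture composition producing the observed median effect."""
    return sum(c / e for c, e in zip(concs, ec50s))

def classify(tu_sum, tol=0.2):
    """Rough interaction call from the TU sum (illustrative tolerance):
    ~1 additive, well below 1 synergism, well above 1 antagonism."""
    if tu_sum < 1.0 - tol:
        return "synergism"
    if tu_sum > 1.0 + tol:
        return "antagonism"
    return "additivity"
```

For example, a mixture reaching its EC50 with each antibiotic at half its single-compound EC50 sums to one toxic unit and is scored additive.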

  3. Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network

    PubMed Central

    2012-01-01

    Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. 
Conclusions Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
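
Flux balance analysis on a stoichiometrically constrained model reduces to a linear program over the steady-state constraint S·v = 0. A toy three-reaction network (not Yeast 5 itself) makes the idea concrete:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 takes up metabolite A, R2 converts A -> B,
# R3 drains B as "biomass". Rows are metabolites (A, B);
# columns are reactions (R1, R2, R3).
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10
c = [0.0, 0.0, -1.0]                       # maximize v3 = minimize -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
biomass_flux = -res.fun
```

With the uptake bound at 10, mass balance forces every flux through the linear chain to 10, so the optimal biomass flux equals the uptake cap; genome-scale models work the same way with thousands of reactions.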

  4. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
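
Scheffé's quadratic model, one of the forms unified by the proposed class, is linear in its coefficients and can be fit by ordinary least squares. A sketch on synthetic simplex data (the coefficients below are made up for illustration):

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic_design(X):
    """Model matrix for Scheffe's quadratic mixture model (no intercept):
    y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j, where each row of X is
    a composition on the simplex (components summing to 1)."""
    q = X.shape[1]
    cols = [X[:, i] for i in range(q)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(q), 2)]
    return np.column_stack(cols)

# fit by least squares on noiseless synthetic 3-component data
rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(3), size=50)
b_true = np.array([1.0, 2.0, 3.0, 4.0, -2.0, 0.5])
y = scheffe_quadratic_design(X) @ b_true
b_hat, *_ = np.linalg.lstsq(scheffe_quadratic_design(X), y, rcond=None)
```

The cross-product terms capture pairwise blending effects; richer classes like the Scheffé-Becker unification described above add further terms only when the data demand them.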

  5. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for ecosystem carbon cycle studies

    Treesearch

    Y. He; Q. Zhuang; A.D. McGuire; Y. Liu; M. Chen

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the...

  6. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  7. A chance-constrained programming model to allocate wildfire initial attack resources for a fire season

    Treesearch

    Yu Wei; Michael Bevers; Erin Belval; Benjamin Bird

    2015-01-01

    This research developed a chance-constrained two-stage stochastic programming model to support wildfire initial attack resource acquisition and location on a planning unit for a fire season. Fire growth constraints account for the interaction between fire perimeter growth and construction to prevent overestimation of resource requirements. We used this model to examine...

  8. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    PubMed Central

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling are performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigate the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles are performed during each test. The material is observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5 MPa to 4.2 MPa is observed for the constrained displacement recovery experiments. After performing the experiments, the Chen and Lagoudas model is used to simulate and predict the experimental results. The material properties used in the constitutive model – namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction – are calibrated from a single 10% extension free recovery experiment. The model is then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data. PMID:22003272

  9. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375

  10. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
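
The Box-Cox transformation that introduces skewness handling in this model class is simple to state; the sketch below includes its λ → 0 limit:

```python
import math

def box_cox(x, lam):
    """Box-Cox power transformation of a positive value x:
    (x^lam - 1) / lam, with the lam -> 0 limit equal to log(x)."""
    if abs(lam) < 1e-12:
        return math.log(x)
    return (x**lam - 1.0) / lam
```

In the mixture model above, λ is selected per component alongside the t-distribution parameters during EM, so skewed clusters are transformed toward symmetry rather than absorbed by extra components.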

  11. Mixed-up trees: the structure of phylogenetic mixtures.

    PubMed

    Matsen, Frederick A; Mossel, Elchanan; Steel, Mike

    2008-05-01

    In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.

  12. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with the two ten-point designs of Cornell (1986) for the Lack of Fit test by simulations. PMID:29081574
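
    A minimal numerical sketch of the D-criterion under discussion, assuming a three-component Scheffé quadratic model: augmenting the six-point minimal design (vertices plus edge midpoints) with an interior point such as the overall centroid strictly increases det(XᵀX). The design points and model terms are standard, but the code itself is only illustrative.

```python
def det(M):
    # determinant by Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n = len(M)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def scheffe_quadratic(x):
    # Scheffé quadratic model terms for three components (no intercept)
    x1, x2, x3 = x
    return [x1, x2, x3, x1 * x2, x1 * x3, x2 * x3]

def d_criterion(points):
    X = [scheffe_quadratic(p) for p in points]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(6)] for i in range(6)]
    return det(XtX)

minimal = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
           (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
augmented = minimal + [(1 / 3, 1 / 3, 1 / 3)]  # add the overall centroid
```

    The strict increase follows from the matrix determinant lemma: adding a row f multiplies det(XᵀX) by 1 + fᵀ(XᵀX)⁻¹f.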

  13. New approach in direct-simulation of gas mixtures

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren

    1991-01-01

    Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.

  14. Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation

    NASA Astrophysics Data System (ADS)

    Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter

    2016-11-01

    Two common models describing gas mixtures are Dalton's Law and Amagat's Law (also known as the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models for predicting the effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate experimental data, possible sources of uncertainty associated with experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of disparate gases - helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium, and a virial EOS for SF6. The values for the properties provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
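
    A hedged sketch of the two mixture laws for this gas pair: helium treated as ideal, SF6 with a truncated virial EOS (the B coefficient below is an illustrative placeholder, not a measured value). Dalton sums partial pressures at the full volume; Amagat finds the common pressure at which the partial volumes sum to the total, here by bisection.

```python
R = 8.314  # J/(mol K)

def p_he(n, T, V):
    # helium treated as an ideal gas
    return n * R * T / V

def p_sf6(n, T, V, B=-2.75e-4):
    # truncated virial EOS, P = (nRT/V)(1 + B n/V); B (m^3/mol) is illustrative
    return (n * R * T / V) * (1.0 + B * n / V)

def dalton_pressure(n_he, n_sf6, T, V):
    # Dalton: each component fills the whole volume; partial pressures add
    return p_he(n_he, T, V) + p_sf6(n_sf6, T, V)

def amagat_pressure(n_he, n_sf6, T, V):
    # Amagat: components occupy partial volumes at a common pressure P;
    # find P such that V_he(P) + V_sf6(P) = V by bisection
    def total_volume(P):
        v_he = n_he * R * T / P
        lo, hi = 1e-6, 10.0  # invert the SF6 virial EOS for V (inner bisection)
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if p_sf6(n_sf6, T, mid) > P:
                lo = mid
            else:
                hi = mid
        return v_he + 0.5 * (lo + hi)

    lo, hi = 1e3, 1e7
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if total_volume(mid) > V:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    For a nearly ideal state the two laws agree closely; the small spread between them is exactly the effect the experiments are designed to resolve.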

  15. Bayesian 2-Stage Space-Time Mixture Modeling With Spatial Misalignment of the Exposure in Small Area Health Data.

    PubMed

    Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong

    2012-09-01

    We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The two-stage mixture model proposed allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of exposure and health data. A simulation study is conducted to assess the performance of the 2-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.

  16. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.

    PubMed

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-03-13

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
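
    The run layout described above can be sketched in coded units: a half-fraction 2⁵⁻¹ factorial (16 runs), axial runs at ±2 (here assumed to apply to four of the five factors, giving the eight axial mixtures), and four replicated centre runs. The aliasing choice and which factors receive axial runs are assumptions for illustration.

```python
import math
from itertools import product

def design_matrix(k=5, axial_factors=(1, 2, 3, 4), alpha=2.0, n_center=4):
    """Half-fraction 2^(k-1) factorial plus axial and centre runs (coded units)."""
    runs = []
    # 2^(k-1) half fraction: the last factor is aliased with the product of the others
    for levels in product((-1, 1), repeat=k - 1):
        runs.append(list(levels) + [math.prod(levels)])
    # axial runs at +/- alpha, one factor at a time, others held at the centre
    for j in axial_factors:
        for a in (-alpha, alpha):
            run = [0.0] * k
            run[j] = float(a)
            runs.append(run)
    # replicated centre runs
    runs += [[0.0] * k for _ in range(n_center)]
    return runs
```

    The factorial portion is balanced and orthogonal, which is what makes the relative significance of main effects and interactions estimable.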

  17. Some comments on thermodynamic consistency for equilibrium mixture equations of state

    DOE PAGES

    Grove, John W.

    2018-03-28

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  18. Using degrees of rate control to improve selective n-butane oxidation over model MOF-encapsulated catalysts: sterically-constrained Ag3Pd(111)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dix, Sean T.; Scott, Joseph K.; Getman, Rachel B.

    2016-01-01

    Metal nanoparticles encapsulated within metal organic frameworks (MOFs) offer steric restrictions near the catalytic metal that can improve selectivity, much like in enzymes. A microkinetic model is developed for the regio-selective oxidation of n-butane to 1-butanol with O2 over a model for MOF-encapsulated bimetallic nanoparticles. The model consists of a Ag3Pd(111) surface decorated with a 2-atom-thick ring of (immobile) helium atoms, which creates an artificial pore of similar size to that in common MOFs and sterically constrains the adsorbed reaction intermediates. The kinetic parameters are based on energies calculated using density functional theory (DFT). The microkinetic model was analysed at 423 K to determine the dominant pathways and which species (adsorbed intermediates and transition states in the reaction mechanism) have energies that most sensitively affect the reaction rates to the different products, using degree-of-rate-control (DRC) analysis. This analysis revealed that activation of the C–H bond is assisted by adsorbed oxygen atoms, O*. Unfortunately, O* also abstracts H from adsorbed 1-butanol and butoxy, leading to butanal as the only significant product. This suggested (1) adding water to produce more OH*, thus inhibiting these undesired steps, which produce OH*, and (2) eliminating most of the O2 pressure to reduce the O* coverage, thus also inhibiting these steps. Combined with increasing butane pressure, this dramatically improved the 1-butanol selectivity (from 0 to 95%) and the rate (to 2 molecules per site per s). Moreover, 40% less O2 was consumed per oxygen atom in the products. Under these conditions, a terminal H in butane is directly eliminated to the Pd site, and the resulting adsorbed butyl combines with OH* to give the desired 1-butanol. These results demonstrate that DRC analysis provides a powerful approach for optimizing catalytic process conditions, and that highly selective oxidation can sometimes be achieved by using a mixture of O2 and H2O as the oxidant. This was further demonstrated by DRC analysis of a second microkinetic model based on a related but hypothetical catalyst, where the activation energies for two of the steps were modified.
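
    The degree-of-rate-control idea can be illustrated numerically on a toy two-step mechanism (not the paper's butane network): X_i is estimated by a central finite difference of the overall rate with respect to one rate constant. The mechanism, constants, and concentrations below are hypothetical.

```python
def rate(k1, k2, cA=1.0, k1r=0.5):
    # toy mechanism: A <-> I (k1 forward, k1r reverse), then I -> P (k2);
    # steady-state approximation on the intermediate I
    cI = k1 * cA / (k1r + k2)
    return k2 * cI

def drc(f, ks, i, eps=1e-6):
    # degree of rate control X_i = (k_i / r) * dr/dk_i, by central difference
    up = list(ks); up[i] *= 1 + eps
    dn = list(ks); dn[i] *= 1 - eps
    r0 = f(*ks)
    return (f(*up) - f(*dn)) / (2 * eps * r0)
```

    For this toy rate law the analytic values are X1 = 1 (the rate is first order in k1) and X2 = k1r/(k1r + k2), so the numerical estimates can be checked exactly.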

  19. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model with increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with higher confidence.
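
    The heavier tails at the heart of this argument are easy to see numerically. The densities below use the standard Gaussian and location-scale Student-t formulas with an assumed ν = 3; far from the mean the t density dominates by orders of magnitude, which is why outliers do not drag a t mixture the way they drag a Gaussian one.

```python
import math

def gauss_pdf(x, mu=0.0, s=1.0):
    z = (x - mu) / s
    return math.exp(-0.5 * z * z) / (s * math.sqrt(2 * math.pi))

def student_t_pdf(x, nu=3.0, mu=0.0, s=1.0):
    # location-scale Student-t density with nu degrees of freedom
    z = (x - mu) / s
    c = math.gamma((nu + 1) / 2) / (math.gamma(nu / 2) * math.sqrt(nu * math.pi) * s)
    return c * (1 + z * z / nu) ** (-(nu + 1) / 2)
```

    At six standard units from the centre the t density exceeds the Gaussian by more than four orders of magnitude, while near the centre the two are comparable.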

  20. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed Central

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-01-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544

  1. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  2. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with the two ten-point designs of Cornell (1986) for the Lack of Fit test by simulations.

  3. Mixture of autoregressive modeling orders and its implication on single trial EEG classification

    PubMed Central

    Atyabi, Adham; Shic, Frederick; Naples, Adam

    2016-01-01

    Autoregressive (AR) models are among the feature types commonly utilized in Electroencephalogram (EEG) studies because they offer better resolution, smoother spectra, and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge. Lower model orders poorly represent the signal while higher orders increase noise. Conventional methods for estimating modeling order include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond more quickly and correctly to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional methods utilized by the community, including (1) a well-known set of commonly used orders suggested by the literature, (2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and (3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
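
    For context, the single-order baseline the article compares against can be sketched as follows: estimate AR(p) coefficients from the sample autocovariance via the Levinson-Durbin recursion, then select p by AIC. The AIC form used here, n·ln(σ̂²) + 2p, is an assumed common variant, not the article's code.

```python
import math

def autocov(x, max_lag):
    # biased sample autocovariance (divides by n, keeping the sequence PSD)
    n = len(x)
    m = sum(x) / n
    return [sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n
            for k in range(max_lag + 1)]

def levinson_durbin(r, p):
    # solve the Yule-Walker equations for AR(p);
    # returns monic coefficients a (a[0] = 1) and the innovation variance e
    a = [1.0] + [0.0] * p
    e = r[0]
    for k in range(1, p + 1):
        acc = sum(a[j] * r[k - j] for j in range(k))
        ref = -acc / e
        new = a[:]
        for j in range(1, k):
            new[j] = a[j] + ref * a[k - j]
        new[k] = ref
        a = new
        e *= (1.0 - ref * ref)
    return a, e

def aic_order(x, max_p=10):
    # pick the AR order minimizing n*ln(sigma2_hat) + 2p
    n = len(x)
    r = autocov(x, max_p)
    return min(range(1, max_p + 1),
               key=lambda p: n * math.log(levinson_durbin(r, p)[1]) + 2 * p)
```

    The mixture approaches in the article replace the single argmin here with a fusion of features from several orders.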

  4. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms, is that organisms are rarely challenged by one or a few stressors in natural systems. Thus, linking laboratory experiments that are limited by practical considerations to a few stressors and a few levels of these stressors to real world conditions is constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environment conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.

  5. Single- and mixture toxicity of three organic UV-filters, ethylhexyl methoxycinnamate, octocrylene, and avobenzone on Daphnia magna.

    PubMed

    Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun

    2017-03-01

    In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The values of effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; concentration combinations of the three different UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. The interaction between the UV-filters was also assessed by the model deviation ratio (MDR) using observed and predicted toxicity values obtained from mixture-exposure tests and the CA model. The results from this study indicated that observed ECx,mix (e.g., EC10,mix, EC25,mix, or EC50,mix) values obtained from mixture-exposure tests were higher than the corresponding ECx,mix values predicted by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three different UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings will provide important information for hazard or risk assessment of organic UV-filters when they occur together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required.
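
    The CA prediction and the MDR diagnostic used above reduce to two one-line formulas; the sketch below uses hypothetical ECx values and mixture fractions, not the study's data.

```python
def ca_ecx_mix(fractions, ecx):
    # concentration addition: 1 / ECx_mix = sum(p_i / ECx_i)
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / e for p, e in zip(fractions, ecx))

def mdr(predicted_ecx, observed_ecx):
    # model deviation ratio; with this convention MDR < 1 flags antagonism
    # (the observed mixture is less toxic than CA predicts: observed ECx > predicted)
    return predicted_ecx / observed_ecx
```

    For an equal-fraction ternary mixture with hypothetical ECx values of 10, 20, and 40 µg/L, CA predicts ECx,mix = 120/7 ≈ 17.1 µg/L; an observed value above that gives MDR < 1, the antagonistic pattern reported in the abstract.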

  6. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    NASA Astrophysics Data System (ADS)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  7. Cumulative toxicity of neonicotinoid insecticide mixtures to Chironomus dilutus under acute exposure scenarios.

    PubMed

    Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten

    2017-11-01

    Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. 
Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.

  8. A chance-constrained stochastic approach to intermodal container routing problems.

    PubMed

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
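
    The core trick in such chance-constrained formulations is replacing a probabilistic constraint with a deterministic equivalent. Assuming a leg's travel time is normal (an assumption of this sketch, not necessarily the paper's distribution), P(T ≤ deadline) ≥ α becomes μ + z_α·σ ≤ deadline. All names and numbers below are hypothetical.

```python
from statistics import NormalDist

def on_time_ok(mu, sigma, deadline, alpha=0.95):
    # chance constraint P(T <= deadline) >= alpha for T ~ Normal(mu, sigma);
    # deterministic equivalent: mu + z_alpha * sigma <= deadline
    z = NormalDist().inv_cdf(alpha)
    return mu + z * sigma <= deadline

def route_cost(legs, deadline, alpha=0.95):
    # legs: list of (cost, mean_time, std_time); independent normal leg times add
    cost = sum(c for c, m, s in legs)
    mu = sum(m for c, m, s in legs)
    var = sum(s * s for c, m, s in legs)
    return cost if on_time_ok(mu, var ** 0.5, deadline, alpha) else float("inf")
```

    A heuristic like the paper's would evaluate many candidate routes this way and keep the cheapest one whose chance constraints hold.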

  9. A chance-constrained stochastic approach to intermodal container routing problems

    PubMed Central

    Zhao, Yi; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost. PMID:29438389

  10. Hydrologic and hydraulic flood forecasting constrained by remote sensing data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2017-12-01

    Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainties from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture to constrain a hydrologic model, and 2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real time. The hydraulic model is LISFLOOD-FP which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space will be discussed.
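
    Calibration schemes like the ones described typically score each candidate parameter set with an efficiency measure and keep only "behavioural" sets that also pass data-based constraints. The sketch below shows the standard Nash-Sutcliffe efficiency plus a generic constraint check; the threshold and the constraint itself are hypothetical placeholders.

```python
def nse(sim, obs):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations
    n = len(obs)
    mean_obs = sum(obs) / n
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    svar = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / svar

def behavioural(sim, obs, signature_checks, nse_limit=0.5):
    # keep a parameter set only if it passes the efficiency limit AND
    # every signature-based constraint (e.g. a climate-variability band)
    return nse(sim, obs) >= nse_limit and all(chk(sim) for chk in signature_checks)
```

    In the RS-constrained setting, the signature checks would encode soil-moisture or flood-extent agreement instead of, or in addition to, gauged streamflow.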

  11. Tests of chameleon gravity

    NASA Astrophysics Data System (ADS)

    Burrage, Clare; Sakstein, Jeremy

    2018-03-01

    Theories of modified gravity, where light scalars with non-trivial self-interactions and non-minimal couplings to matter—chameleon and symmetron theories—dynamically suppress deviations from general relativity in the solar system. On other scales, the environmental nature of the screening means that such scalars may be relevant. The highly-nonlinear nature of screening mechanisms means that they evade classical fifth-force searches, and there has been an intense effort towards designing new and novel tests to probe them, both in the laboratory and using astrophysical objects, and by reinterpreting existing datasets. The results of these searches are often presented using different parametrizations, which can make it difficult to compare constraints coming from different probes. The purpose of this review is to summarize the present state-of-the-art searches for screened scalars coupled to matter, and to translate the current bounds into a single parametrization to survey the state of the models. Presently, commonly studied chameleon models are well-constrained but less commonly studied models have large regions of parameter space that are still viable. Symmetron models are constrained well by astrophysical and laboratory tests, but there is a desert separating the two scales where the model is unconstrained. The coupling of chameleons to photons is tightly constrained but the symmetron coupling has yet to be explored. We also summarize the current bounds on f( R) models that exhibit the chameleon mechanism (Hu and Sawicki models). The simplest of these are well constrained by astrophysical probes, but there are currently few reported bounds for theories with higher powers of R. The review ends by discussing the future prospects for constraining screened modified gravity models further using upcoming and planned experiments.

  12. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  13. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    PubMed

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  14. Comparisons of a Constrained Least Squares Model versus Human-in-the-Loop for Spectral Unmixing to Determine Material Type of GEO Debris

    NASA Technical Reports Server (NTRS)

    Abercromby, Kira J.; Rapp, Jason; Bedard, Donald; Seitzer, Patrick; Cardona, Tommaso; Cowardin, Heather; Barker, Ed; Lederer, Susan

    2013-01-01

    A constrained linear least-squares model is generally more accurate than the human-in-the-loop approach. However, a human analyst can discard candidate materials that make no physical sense. The speed with which the model produces a first-cut material identification makes it a viable option for spectral unmixing of debris objects.
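
The constrained least-squares step described above can be illustrated with a non-negativity-constrained unmixing sketch; the endmember spectra, wavelengths, and abundances below are entirely made up, and `scipy.optimize.nnls` stands in for whatever implementation the authors used.

```python
# Hypothetical illustration of constrained (non-negative) linear least-squares
# spectral unmixing; endmember spectra and the measured spectrum are invented.
import numpy as np
from scipy.optimize import nnls

# Columns = assumed endmember reflectance spectra sampled at 4 hypothetical
# wavelengths (e.g. aluminum, solar cell, paint).
E = np.array([
    [0.9, 0.2, 0.4],
    [0.8, 0.3, 0.5],
    [0.7, 0.5, 0.6],
    [0.6, 0.7, 0.7],
])

# Synthetic "measured" spectrum: 60% material 1, 40% material 3.
y = E @ np.array([0.6, 0.0, 0.4])

# Solve min ||E x - y|| subject to x >= 0 (abundances cannot be negative).
x, residual = nnls(E, y)
fractions = x / x.sum()  # normalize to unit sum for interpretation
```

The non-negativity constraint is what keeps the solver from returning the physically meaningless negative abundances an unconstrained fit can produce.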

  15. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    NASA Astrophysics Data System (ADS)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, complementary to locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular by rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject.
The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty in a basin treated as ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
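
A minimal sketch of the rejection idea described above, assuming a toy rainfall-runoff "model" and an invented climate-based bound on the runoff ratio; the real study used a conceptual hydrological model and 17 hydrological signatures.

```python
# Sketch of constraining Monte Carlo parameter sets with signature bounds,
# in the spirit of the approach described above. The "model", parameter and
# acceptability bounds are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(k, rain):
    """Hypothetical runoff response: a fixed fraction k of rainfall."""
    return rain * k

rain = np.array([10.0, 5.0, 0.0, 8.0])        # mm/day, made up
candidates = rng.uniform(0.0, 1.0, size=1000)  # sampled runoff coefficient k

# Invented climate-based constraint: long-term runoff ratio in [0.3, 0.6].
lo, hi = 0.3, 0.6
behavioural = []
for k in candidates:
    q = toy_model(k, rain)
    runoff_ratio = q.sum() / rain.sum()
    if lo <= runoff_ratio <= hi:
        behavioural.append(k)
behavioural = np.array(behavioural)
```

Parameter sets whose simulated signature falls outside the acceptability bounds are rejected; the surviving (behavioural) sets define the constrained predictive uncertainty.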

  16. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes in semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled as a quasi-continuum. The computational parameters are averaged data of the components composing the material, i.e. asphalt, aggregate and air voids. The model directly captures the random nature of the material parameters and of the aggregate distribution in the specimens. Initial results of the analysis are presented here.

  17. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  18. Predicting the shock compression response of heterogeneous powder mixtures

    NASA Astrophysics Data System (ADS)

    Fredenburg, D. A.; Thadhani, N. N.

    2013-06-01

    A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is implemented in a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low- and high-stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between the model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate-dependencies of the constituents, and specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete compaction regime.
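
The rate-independent P-α formulation mentioned above relates the distension (ratio of solid to bulk density) to pressure. A common quadratic (Herrmann-type) form is sketched below with illustrative parameters; these are textbook assumptions, not the values fitted in the paper, where the measured densification modulus would set the stress at which compaction begins.

```python
# Sketch of a rate-independent P-alpha distension curve (quadratic Herrmann
# form). All parameter values are illustrative, not taken from the record.
import numpy as np

def distension(P, alpha0=1.4, Pe=0.1, Ps=2.0):
    """Distension alpha(P) for pressure P (GPa): alpha0 below the elastic
    limit Pe, 1.0 (full density) above the crush pressure Ps."""
    P = np.asarray(P, dtype=float)
    frac = np.clip((Ps - P) / (Ps - Pe), 0.0, 1.0)
    alpha = 1.0 + (alpha0 - 1.0) * frac**2
    return np.where(P < Pe, alpha0, alpha)

P = np.linspace(0.0, 2.5, 6)
alpha = distension(P)   # monotonically compacts from alpha0 toward 1.0
```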

  19. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
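
The D-criterion itself is easy to sketch: for a candidate design, build the Jacobian F of the model response with respect to the parameters at a prior guess and compare det(FᵀF) across designs. The hockey-stick threshold model, prior parameter values, and candidate designs below are invented for illustration, not taken from the chlorpyrifos:carbaryl study.

```python
# Hedged sketch of comparing candidate designs by the D-criterion det(F^T F)
# for a simple threshold (hockey-stick) dose-response model:
#   y = b0 + b1 * max(0, d - delta)
# Prior guesses for b1 and delta are needed, as the abstract notes.
import numpy as np

def jacobian(doses, b1=1.0, delta=2.0):
    d = np.asarray(doses, dtype=float)
    active = (d > delta).astype(float)
    # Columns: dy/db0, dy/db1, dy/ddelta, evaluated at the prior guess.
    return np.column_stack([
        np.ones_like(d),
        np.maximum(0.0, d - delta),
        -b1 * active,
    ])

def d_criterion(doses):
    F = jacobian(doses)
    return np.linalg.det(F.T @ F)

design_a = [0, 1, 2, 3, 4, 5]        # equally spaced dose groups
design_b = [0, 2, 2.5, 3.5, 4.5, 5]  # doses concentrated around/above delta
```

Under these invented priors, design_b scores higher, reflecting the abstract's point that equally spaced doses can lose power when the active range around the threshold is sparsely sampled.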

  20. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    NASA Astrophysics Data System (ADS)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e. gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and the rest of the volume as that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle; i.e. only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of a gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosities of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as the average of the two cases weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component.
Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The predictions of this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: a part of this study was accomplished with a subsidy from the River Environment Fund of Japan.
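
The final permeability step described above can be sketched with a Kozeny-Carman relation. The constant 180 and the example porosity and grain-size values are textbook assumptions, not taken from the record; they illustrate how a modest increase in effective grain size (e.g. from a little gravel) raises permeability sharply.

```python
# Sketch of the Kozeny-Carman step: permeability from effective porosity and
# effective grain size. The prefactor 1/180 is a common textbook choice.

def kozeny_carman(phi_e, d_e):
    """Permeability (m^2) from effective porosity (-) and effective grain size (m)."""
    return (d_e**2 / 180.0) * phi_e**3 / (1.0 - phi_e)**2

# Hypothetical values: same effective porosity, coarser effective grain size.
k_sand   = kozeny_carman(0.35, 2.0e-4)
k_gravel = kozeny_carman(0.35, 8.0e-4)  # 4x coarser -> 16x more permeable
```

Because permeability scales with the square of the effective grain size, even a few percent of gravel (which dominates the effective grain size) changes the prediction dramatically, which is the observation motivating the three-component model.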

  1. Improved design of constrained model predictive tracking control for batch processes against unknown uncertainties.

    PubMed

    Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong

    2017-07-01

    In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model that facilitates process state and tracking error augmentation with further additional tuning is first proposed. Then a subsequent controller design is formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method enables the controller design to bear more degrees of tuning so that improved tracking control can be acquired, which is very important since uncertainties exist inevitably in practice and cause model/plant mismatches. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC.

  2. A numerical study of granular dam-break flow

    NASA Astrophysics Data System (ADS)

    Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.

    2017-12-01

    Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows and tailings-dam break flows. So far, most successful models for these types of flows focus on either pure granular flows or flows of saturated grain-fluid mixtures by employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when they are applied to simulate flows of partially saturated mixtures. Therefore, more advanced models are needed. A numerical model was developed for granular flow employing a constant friction and the μ(I) rheology (Jop et al., J. Fluid Mech. 2005), coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model using the Finite Volume Method (FVM). The Volume-of-Fluid (VOF) technique is used to capture the free surface motion. The constant friction and μ(I) rheological models are incorporated in the mixture model. The seepage flow is modeled by solving the Richards equation. A framework is developed to couple these two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters is performed. The simulation results obtained from the constant friction and μ(I) rheological models are compared with laboratory experiments in terms of the granular free surface, front position and velocity field during the flows. The numerical predictions indicate that the proposed model is promising in predicting the dynamics of the flow and the deposition process. The proposed model may provide more reliable insight than previous models that assume a saturated mixture when saturated and partially saturated portions of a granular mixture co-exist.
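
The μ(I) rheology cited above (Jop et al.) has a standard closed form: the bulk friction coefficient interpolates between a quasi-static value μ_s and a limiting value μ_2 as a function of the inertial number I. The parameter values below are typical literature values for glass beads, used purely for illustration.

```python
# Sketch of the mu(I) rheology: mu(I) = mu_s + (mu_2 - mu_s) / (1 + I0 / I),
# with inertial number I = gamma_dot * d / sqrt(P / rho_s).
# Parameter values are illustrative literature values, not from this study.
import math

def inertial_number(shear_rate, d, pressure, rho_s):
    """I = gamma_dot * d / sqrt(P / rho_s)."""
    return shear_rate * d / math.sqrt(pressure / rho_s)

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.279):
    return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

# Example: 1 mm grains, density 2500 kg/m^3, 1 kPa confinement, 10 1/s shear.
I = inertial_number(shear_rate=10.0, d=1e-3, pressure=1000.0, rho_s=2500.0)
mu = mu_of_I(I)   # lies between the quasi-static and dense-flow limits
```

In the quasi-static limit (I → 0) the friction coefficient tends to μ_s, recovering a constant-friction model; at large I it saturates at μ_2, which is what lets a single law span slow and rapid flow regimes.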

  3. Using seismically constrained magnetotelluric inversion to recover velocity structure in the shallow lithosphere

    NASA Astrophysics Data System (ADS)

    Moorkamp, M.; Fishwick, S.; Jones, A. G.

    2015-12-01

    Typical surface wave tomography can recover the velocity structure of the upper mantle well in the depth range between 70 and 200 km. For a successful inversion, we have to constrain the crustal structure and assess its impact on the resulting models. In addition, we often observe potentially interesting features in the uppermost lithosphere which are poorly resolved, and thus their interpretation has to be approached with great care. We are currently developing a seismically constrained magnetotelluric (MT) inversion approach with the aim of better recovering the lithospheric properties (and thus seismic velocities) in these problematic areas. We perform a 3D MT inversion constrained by a fixed seismic velocity model from surface wave tomography. In order to avoid strong bias, we only utilize information on structural boundaries to combine these two methods. Within the region that is well resolved by both methods, we can then extract a velocity-conductivity relationship. By translating the conductivities retrieved from MT into velocities in areas where the velocity model is poorly resolved, we can generate an updated velocity model and test what impact the updated velocities have on the predicted data. We test this new approach using an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons together with tomographic models for the region. Here, both datasets have previously been used to constrain lithospheric structure and show some similarities. We carefully assess the validity of our results by comparing with observations and petrophysical predictions for the conductivity-velocity relationship.

  4. Periodic Forced Response of Structures Having Three-Dimensional Frictional Constraints

    NASA Astrophysics Data System (ADS)

    CHEN, J. J.; YANG, B. D.; MENQ, C. H.

    2000-01-01

    Many mechanical systems have moving components that are mutually constrained through frictional contacts. When subjected to cyclic excitations, a contact interface may undergo constant changes among sticks, slips and separations, which leads to very complex contact kinematics. In this paper, a 3-D friction contact model is employed to predict the periodic forced response of structures having 3-D frictional constraints. Analytical criteria based on this friction contact model are used to determine the transitions among sticks, slips and separations of the friction contact, and subsequently the constrained force which consists of the induced stick-slip friction force on the contact plane and the contact normal load. The resulting constrained force is often a periodic function and can be considered as a feedback force that influences the response of the constrained structures. By using the Multi-Harmonic Balance Method along with Fast Fourier Transform, the constrained force can be integrated with the receptance of the structures so as to calculate the forced response of the constrained structures. It results in a set of non-linear algebraic equations that can be solved iteratively to yield the relative motion as well as the constrained force at the friction contact. This method is used to predict the periodic response of a frictionally constrained 3-d.o.f. oscillator. The predicted results are compared with those of the direct time integration method so as to validate the proposed method. In addition, the effect of super-harmonic components on the resonant response and jump phenomenon is examined.
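
The FFT step of a multi-harmonic balance scheme can be sketched as extracting the harmonic coefficients of a periodic constraint force; the synthetic signal below is invented for illustration, standing in for the stick-slip friction force described above.

```python
# Minimal sketch of the FFT step in a multi-harmonic balance method: decompose
# one period of a (here synthetic) periodic constraint force into harmonic
# coefficients, which can then be coupled with the structure's receptance.
import numpy as np

N = 256                                   # samples per period
t = np.arange(N) / N                      # one period, normalized time
force = 1.0 + 0.5 * np.cos(2 * np.pi * t) + 0.2 * np.sin(6 * np.pi * t)

X = np.fft.rfft(force) / N
a0 = X[0].real                            # mean (0th harmonic)
# For k >= 1, force ~ a0 + sum_k (a_k cos(2 pi k t) + b_k sin(2 pi k t)),
# with a_k = 2 Re(X_k) and b_k = -2 Im(X_k) under this normalization.
a = 2 * X[1:].real
b = -2 * X[1:].imag
```

Balancing these harmonic coefficients of the nonlinear constraint force against the linear structural response at each harmonic yields the set of nonlinear algebraic equations the abstract describes.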

  5. Detecting Darwinism from Molecules in the Enceladus Plumes, Jupiter's Moons, and Other Planetary Water Lagoons.

    PubMed

    Benner, Steven A

    2017-09-01

    To the astrobiologist, Enceladus offers easy access to a potential subsurface biosphere via the intermediacy of a plume of water emerging directly into space. A direct question follows: If we were to collect a sample of this plume, what in that sample, through its presence or its absence, would suggest the presence and/or absence of life in this exotic locale? This question is, of course, relevant for life detection in any aqueous lagoon that we might be able to sample. This manuscript reviews physical chemical constraints that must be met by a genetic polymer for it to support Darwinism, a process believed to be required for a chemical system to generate properties that we value in biology. We propose that the most important of these is a repeating backbone charge; a Darwinian genetic biopolymer must be a "polyelectrolyte." Relevant to mission design, such biopolymers are especially easy to recover and concentrate from aqueous mixtures for detection, simply by washing the aqueous mixtures across a polycharged support. Several device architectures are described to ensure that, once captured, the biopolymer meets two other requirements for Darwinism, homochirality and a small building block "alphabet." This approach is compared and contrasted with alternative biomolecule detection approaches that seek homochirality and constrained alphabets in non-encoded biopolymers. This discussion is set within a model for the history of the terran biosphere, identifying points in that natural history where these alternative approaches would have failed to detect terran life. Key Words: Enceladus-Life detection-Europa-Icy moon-Biosignatures-Polyelectrolyte theory of the gene. Astrobiology 17, 840-851.

  6. The effect of rock particles and D2O replacement on the flow behaviour of ice.

    PubMed

    Middleton, Ceri A; Grindrod, Peter M; Sammonds, Peter R

    2017-02-13

    Ice-rock mixtures are found in a range of natural terrestrial and planetary environments. To understand how flow processes occur in these environments, laboratory-derived properties can be extrapolated to natural conditions through flow laws. Here, deformation experiments have been carried out on polycrystalline samples of pure ice, ice-rock and D2O-ice-rock mixtures at temperatures of 263, 253 and 233 K, confining pressures of 0 and 48 MPa, rock fractions of 0-50 vol.% and strain-rates of 5 × 10−7 to 5 × 10−5 s−1. Both the presence of rock particles and replacement of H2O by D2O increase bulk strength. Calculated flow law parameters for ice and H2O-ice-rock are similar to literature values at equivalent conditions, except for the value of the rock fraction exponent, here found to be 1. D2O samples are 1.8 times stronger than H2O samples, probably due to the higher mass of deuterons when compared with protons. A gradual transition between dislocation creep and grain-size-sensitive deformation at the lowest strain-rates in ice and ice-rock samples is suggested. These results demonstrate that flow laws can be found to describe ice-rock behaviour, and should be used in modelling of natural processes, but that further work is required to constrain parameters and mechanisms for the observed strength enhancement. This article is part of the themed issue 'Microdynamics of ice'.

  7. The effect of rock particles and D2O replacement on the flow behaviour of ice

    PubMed Central

    Grindrod, Peter M.

    2017-01-01

    Ice–rock mixtures are found in a range of natural terrestrial and planetary environments. To understand how flow processes occur in these environments, laboratory-derived properties can be extrapolated to natural conditions through flow laws. Here, deformation experiments have been carried out on polycrystalline samples of pure ice, ice–rock and D2O-ice–rock mixtures at temperatures of 263, 253 and 233 K, confining pressure of 0 and 48 MPa, rock fraction of 0–50 vol.% and strain-rates of 5 × 10−7 to 5 × 10−5 s−1. Both the presence of rock particles and replacement of H2O by D2O increase bulk strength. Calculated flow law parameters for ice and H2O-ice–rock are similar to literature values at equivalent conditions, except for the value of the rock fraction exponent, here found to be 1. D2O samples are 1.8 times stronger than H2O samples, probably due to the higher mass of deuterons when compared with protons. A gradual transition between dislocation creep and grain-size-sensitive deformation at the lowest strain-rates in ice and ice–rock samples is suggested. These results demonstrate that flow laws can be found to describe ice–rock behaviour, and should be used in modelling of natural processes, but that further work is required to constrain parameters and mechanisms for the observed strength enhancement. This article is part of the themed issue ‘Microdynamics of ice’. PMID:28025298

  8. Mixture theory-based poroelasticity as a model of interstitial tissue growth

    PubMed Central

    Cowin, Stephen C.; Cardoso, Luis

    2011-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481

  9. Mixture theory-based poroelasticity as a model of interstitial tissue growth.

    PubMed

    Cowin, Stephen C; Cardoso, Luis

    2012-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues.

  10. Water Budget Estimation by Assimilating Multiple Observations and Hydrological Modeling Using Constrained Ensemble Kalman Filtering

    NASA Astrophysics Data System (ADS)

    Pan, M.; Wood, E. F.

    2004-05-01

    This study explores a method to estimate various components of the water cycle (ET, runoff, land storage, etc.) from a number of different information sources, including both observations and observation-enhanced model simulations. Unlike existing data assimilation schemes, this constrained Kalman filtering approach keeps the water budget perfectly closed while optimally updating the states of the underlying model (the VIC model) using observations. Assimilating different data sources in this way has several advantages: (1) a physical model is included, making the estimated time series smooth, gap-free, and more physically consistent; (2) uncertainties in the model and observations are properly addressed; (3) the model is constrained by observations, which reduces model biases; (4) the water balance is preserved throughout the assimilation. Experiments are carried out in the Southern Great Plains region, where the necessary observations have been collected. This method may also be implemented in other applications with physical constraints (e.g., energy cycles) and at different scales.
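
The budget-closing update can be sketched as the standard minimum-variance projection of a Kalman analysis onto a linear constraint A x = b; the toy state, covariance, and numbers below are invented for illustration and do not reflect the VIC-model setup of the study.

```python
# Hedged sketch of the key idea: after a standard Kalman/EnKF update, project
# the state onto the linear budget constraint A x = b (e.g. ET + runoff +
# storage change = precipitation), which closes the water budget exactly.
import numpy as np

def constrain(x, P, A, b):
    """Minimum-variance projection of state x (covariance P) onto A x = b."""
    S = A @ P @ A.T
    K = P @ A.T @ np.linalg.inv(S)
    return x - K @ (A @ x - b)

# Toy 3-component budget: x = [ET, runoff, dS]; constraint: components sum
# to observed precipitation. All numbers are invented.
x = np.array([2.1, 1.2, 0.9])      # unconstrained analysis (sums to 4.2)
P = np.diag([0.4, 0.2, 0.1])       # analysis error covariance
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])                # observed precipitation

x_c = constrain(x, P, A, b)        # budget residual redistributed by uncertainty
```

The projection removes the 0.2 budget residual by adjusting each component in proportion to its variance, so the most uncertain component absorbs the largest correction.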

  11. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches.

    PubMed

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R scripts, and short Python scripts used to execute this simulation study are uploaded to an open access repository.
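
The log-likelihood difference test underlying this comparison can be sketched as an ordinary likelihood-ratio test (ignoring, for simplicity, the boundary issue that arises when a variance is fixed at zero, which strictly calls for a mixture chi-square reference). The log-likelihood values and degrees of freedom below are invented.

```python
# Sketch of a log-likelihood difference (likelihood-ratio) test: twice the
# log-likelihood difference between the unrestricted and restricted models is
# referred to a chi-square distribution. Values below are invented.
from scipy.stats import chi2

ll_unrestricted = -1540.2   # e.g. between-level residual variance freed
ll_restricted = -1543.9     # variance fixed at zero
df = 1                      # one parameter freed

lr = 2 * (ll_unrestricted - ll_restricted)
p_value = chi2.sf(lr, df)
reject = p_value < 0.05     # evidence of cluster bias for this item
```

As the abstract stresses, the test is only trustworthy when the unrestricted model is correctly specified; misspecification inflates the Type I error rate regardless of the arithmetic above.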

  12. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches

    PubMed Central

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R scripts, and short Python scripts used to execute this simulation study are uploaded to an open access repository. PMID:29551985

  13. Hydration properties of adenosine phosphate series as studied by microwave dielectric spectroscopy.

    PubMed

    Mogami, George; Wazawa, Tetsuichi; Morimoto, Nobuyuki; Kodama, Takao; Suzuki, Makoto

    2011-02-01

    Hydration properties of adenine nucleotides and orthophosphate (Pi) in aqueous solutions adjusted to pH=8 with NaOH were studied by high-resolution microwave dielectric relaxation (DR) spectroscopy at 20 °C. The dielectric spectra were analyzed using a mixture theory combined with a least-squares Debye decomposition method. Solutions of Pi and adenine nucleotides showed qualitatively similar dielectric properties described by two Debye components. One component was characterized by a relaxation frequency (f(c)=18.8-19.7 GHz) significantly higher than that of bulk water (17 GHz) and the other by a much lower f(c) (6.4-7.6 GHz), which are referred to here as hyper-mobile water and constrained water, respectively. By contrast, a hydration shell of only the latter type was found for adenosine (f(c)~6.7 GHz). The present results indicate that phosphoryl groups are mostly responsible for affecting the structure of the water surrounding the adenine nucleotides by forming one constrained water layer and an additional three or four layers of hyper-mobile water.
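
A forward two-Debye model of the kind decomposed above can be sketched as follows. Only the two relaxation frequencies echo the ranges quoted in the abstract; the relaxation amplitudes and the high-frequency limit are invented for illustration.

```python
# Forward model for a two-component Debye decomposition: complex permittivity
# as the sum of a "constrained water" term and a "hyper-mobile" term plus a
# high-frequency limit. Amplitudes and eps_inf are invented; fc1 and fc2 echo
# the relaxation-frequency ranges quoted in the abstract.
import numpy as np

def two_debye(f_ghz, eps_inf=5.0, d_eps1=8.0, fc1=6.7, d_eps2=60.0, fc2=19.0):
    f = np.asarray(f_ghz, dtype=float)
    return (eps_inf
            + d_eps1 / (1 + 1j * f / fc1)     # constrained-water component
            + d_eps2 / (1 + 1j * f / fc2))    # hyper-mobile / bulk-like component

f = np.array([1.0, 6.7, 19.0, 40.0])          # probe frequencies in GHz
eps = two_debye(f)
loss = -eps.imag   # dielectric loss, peaking near the relaxation frequencies
```

In the actual analysis the amplitudes and relaxation frequencies would be the fitted quantities, obtained by least-squares adjustment of a model of this form to the measured spectrum.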

  14. Trajectory optimization and guidance law development for national aerospace plane applications

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1988-01-01

    The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.

  15. Constraints on Dark Energy from Baryon Acoustic Peak and Galaxy Cluster Gas Mass Measurements

    NASA Astrophysics Data System (ADS)

    Samushia, Lado; Ratra, Bharat

    2009-10-01

    We use baryon acoustic peak measurements by Eisenstein et al. and Percival et al., together with the Wilkinson Microwave Anisotropy Probe (WMAP) measurement of the apparent acoustic horizon angle, and galaxy cluster gas mass fraction measurements of Allen et al., to constrain a slowly rolling scalar field dark energy model, phiCDM, in which dark energy's energy density changes in time. We also compare our phiCDM results with those derived for two more common dark energy models: the time-independent cosmological constant model, ΛCDM, and the XCDM parameterization of dark energy's equation of state. For time-independent dark energy, the Percival et al. measurements effectively constrain spatial curvature and favor a model that is close to spatially flat, mostly due to the WMAP cosmic microwave background prior used in the analysis. In a spatially flat model the Percival et al. data less effectively constrain time-varying dark energy. The joint baryon acoustic peak and galaxy cluster gas mass constraints on the phiCDM model are consistent with but tighter than those derived from other data. A time-independent cosmological constant in a spatially flat model provides a good fit to the joint data, while the α parameter in the inverse power-law potential phiCDM model is constrained to be less than about 4 at the 3σ confidence level.

  16. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete

    PubMed Central

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-01-01

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990
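
    The coded fractional factorial core of such a design can be generated mechanically. The sketch below builds a 16-run half fraction of a 2^5 design; the generator E = A*B*C*D is an assumption (the paper does not state its defining relation), and the axial (±2) and center runs would be appended separately.

```python
from itertools import product

def fractional_factorial_5m1():
    """16-run half fraction of a 2^5 design using the generator E = A*B*C*D.

    Factors are coded -1/+1; the axial (-2/+2) and center (0) runs of a
    central composite design would be appended to this core.
    """
    return [(a, b, c, d, a * b * c * d)
            for a, b, c, d in product((-1, 1), repeat=4)]

design = fractional_factorial_5m1()
```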

  17. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
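
    Method (1), merging adjacent state-vector elements, can be sketched as block averaging. This is a minimal illustration of where aggregation error comes from; a real inverse model would also propagate the prior covariance through the aggregation operator.

```python
def coarsen(state, block):
    """Reduce state-vector dimension by averaging consecutive blocks.

    This imposes a fixed relationship (equal contribution) among the
    elements inside each block, which is the source of aggregation error
    when the true elements differ.
    """
    if len(state) % block:
        raise ValueError("state length must be divisible by block size")
    return [sum(state[i:i + block]) / block
            for i in range(0, len(state), block)]

# Six native-resolution elements merged pairwise into three coarse elements:
coarse = coarsen([1.0, 2.0, 3.0, 4.0, 10.0, 10.0], 2)
```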

  18. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  20. Diagenesis and clay mineral formation at Gale Crater, Mars

    PubMed Central

    Bridges, J C; Schwenzer, S P; Leveille, R; Westall, F; Wiens, R C; Mangold, N; Bristow, T; Edwards, P; Berger, G

    2015-01-01

    The Mars Science Laboratory rover Curiosity found host rocks of basaltic composition and alteration assemblages containing clay minerals at Yellowknife Bay, Gale Crater. On the basis of the observed host rock and alteration minerals, we present results of equilibrium thermochemical modeling of the Sheepbed mudstones of Yellowknife Bay in order to constrain the formation conditions of its secondary mineral assemblage. Building on conclusions from sedimentary observations by the Mars Science Laboratory team, we assume diagenetic, in situ alteration. The modeling shows that the mineral assemblage formed by the reaction of a CO2-poor and oxidizing, dilute aqueous solution (Gale Portage Water) in an open system with the Fe-rich basaltic-composition sedimentary rocks at 10–50°C and water/rock ratio (mass of rock reacted with the starting fluid) of 100–1000, pH of ~7.5–12. Model alteration assemblages predominantly contain phyllosilicates (Fe-smectite, chlorite), the bulk composition of a mixture of which is close to that of saponite inferred from Chemistry and Mineralogy data and to that of saponite observed in the nakhlite Martian meteorites and terrestrial analogues. To match the observed clay mineral chemistry, inhomogeneous dissolution dominated by the amorphous phase and olivine is required. We therefore deduce a dissolving composition of approximately 70% amorphous material, with 20% olivine, and 10% whole rock component. PMID:26213668

  1. Diagenesis and clay mineral formation at Gale Crater, Mars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, J. C.; Schwenzer, S. P.; Leveille, R.

    The Mars Science Laboratory rover Curiosity found host rocks of basaltic composition and alteration assemblages containing clay minerals at Yellowknife Bay, Gale Crater. On the basis of the observed host rock and alteration minerals, we present results of equilibrium thermochemical modeling of the Sheepbed mudstones of Yellowknife Bay in order to constrain the formation conditions of its secondary mineral assemblage. Building on conclusions from sedimentary observations by the Mars Science Laboratory team, we assume diagenetic, in situ alteration. The modeling shows that the mineral assemblage formed by the reaction of a CO₂-poor and oxidizing, dilute aqueous solution (Gale Portage Water) in an open system with the Fe-rich basaltic-composition sedimentary rocks at 10–50°C and water/rock ratio (mass of rock reacted with the starting fluid) of 100–1000, pH of ~7.5–12. Model alteration assemblages predominantly contain phyllosilicates (Fe-smectite, chlorite), the bulk composition of a mixture of which is close to that of saponite inferred from Chemistry and Mineralogy data and to that of saponite observed in the nakhlite Martian meteorites and terrestrial analogues. To match the observed clay mineral chemistry, inhomogeneous dissolution dominated by the amorphous phase and olivine is required. We therefore deduce a dissolving composition of approximately 70% amorphous material, with 20% olivine, and 10% whole rock component.

  2. Diagenesis and clay mineral formation at Gale Crater, Mars

    DOE PAGES

    Bridges, J. C.; Schwenzer, S. P.; Leveille, R.; ...

    2015-01-18

    The Mars Science Laboratory rover Curiosity found host rocks of basaltic composition and alteration assemblages containing clay minerals at Yellowknife Bay, Gale Crater. On the basis of the observed host rock and alteration minerals, we present results of equilibrium thermochemical modeling of the Sheepbed mudstones of Yellowknife Bay in order to constrain the formation conditions of its secondary mineral assemblage. Building on conclusions from sedimentary observations by the Mars Science Laboratory team, we assume diagenetic, in situ alteration. The modeling shows that the mineral assemblage formed by the reaction of a CO₂-poor and oxidizing, dilute aqueous solution (Gale Portage Water) in an open system with the Fe-rich basaltic-composition sedimentary rocks at 10–50°C and water/rock ratio (mass of rock reacted with the starting fluid) of 100–1000, pH of ~7.5–12. Model alteration assemblages predominantly contain phyllosilicates (Fe-smectite, chlorite), the bulk composition of a mixture of which is close to that of saponite inferred from Chemistry and Mineralogy data and to that of saponite observed in the nakhlite Martian meteorites and terrestrial analogues. To match the observed clay mineral chemistry, inhomogeneous dissolution dominated by the amorphous phase and olivine is required. We therefore deduce a dissolving composition of approximately 70% amorphous material, with 20% olivine, and 10% whole rock component.

  3. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    PubMed

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. A liquid with dissolved salt is used in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in predicting the flash points of these mixtures. The experimental results confirm marked increases in the liquid flash point upon addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard of solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
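
    A Liaw-type flash-point model locates the temperature at which the components' vapour contributions sum to unity. The sketch below uses the ideal-solution form (activity coefficients set to 1) with hypothetical Antoine constants, so it illustrates the structure of the criterion rather than the salt-modified, non-ideal version proposed in the paper.

```python
def antoine_pressure(t_celsius, a, b, c):
    """Saturation vapour pressure from the Antoine equation (units set by a, b, c)."""
    return 10.0 ** (a - b / (t_celsius + c))

def mixture_flash_point(components, lo=-50.0, hi=150.0, tol=1e-6):
    """Bisection solve of the criterion sum_i x_i * P_i(T) / P_i(T_fp,i) = 1.

    components: list of (x_i, (a, b, c), t_fp_i) with mole fraction x_i,
    Antoine constants, and pure-component flash point t_fp_i. Activity
    coefficients are taken as 1 (ideal solution).
    """
    def residual(t):
        return sum(x * antoine_pressure(t, *abc) / antoine_pressure(t_fp, *abc)
                   for x, abc, t_fp in components) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:   # vapour contribution too high: flash point is lower
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Self-consistency check: a pure component recovers its own flash point
# (Antoine constants here are hypothetical).
fp_pure = mixture_flash_point([(1.0, (7.0, 1650.0, 230.0), 12.0)])
```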

  4. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, S.; Tebby, C.

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no other interactions were observed, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.
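
    The two mixture reference concepts used here can be sketched for a binary mixture with simple Hill concentration-response curves. The parameters are illustrative, not the fitted BK/TD values from the study.

```python
def hill_effect(conc, ec50, slope=1.0):
    """Fraction of maximal effect for a single compound (Hill model)."""
    return conc ** slope / (ec50 ** slope + conc ** slope)

def independent_action(concs, ec50s):
    """IA: combined effect assuming statistically independent modes of action."""
    survive = 1.0
    for c, ec in zip(concs, ec50s):
        survive *= 1.0 - hill_effect(c, ec)
    return 1.0 - survive

def concentration_addition_tu(concs, ec50s):
    """CA expressed as toxic units: sum of c_i / EC50_i.

    A total of 1.0 predicts an EC50-level combined effect under CA.
    """
    return sum(c / ec for c, ec in zip(concs, ec50s))
```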

  5. Critical Robotic Lunar Missions

    NASA Astrophysics Data System (ADS)

    Plescia, J. B.

    2018-04-01

    Perhaps the most critical missions to understanding lunar history are in situ dating and network missions. These would constrain the volcanic and thermal history and interior structure. These data would better constrain lunar evolution models.

  6. Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method

    NASA Astrophysics Data System (ADS)

    Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.

    2018-03-01

    The failure point of the results of fatigue tests of asphalt mixtures performed in controlled stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number Method and to determine the best Flow Number model for the asphalt mixtures tested. In order to achieve these goals, first the best of three asphalt mixtures was selected based on its Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture is an EVA (ethyl vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (optimum bitumen content). Furthermore, the result of this study shows that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These different results show that the bigger the loading, the smaller the number of cycles to failure. However, the best FN results are shown by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.
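
    A common operational definition of the flow number takes the cycle at which the per-cycle strain rate is smallest, marking the onset of the tertiary (accelerating) stage. This is a minimal sketch on synthetic data, not any of the four specific models named above.

```python
def flow_number(strain):
    """Cycle at which the per-cycle strain increment is smallest.

    strain: accumulated permanent strain recorded at each load cycle. The
    minimum of the first difference marks the transition from the secondary
    (steady) to the tertiary (accelerating) stage.
    """
    rates = [strain[i + 1] - strain[i] for i in range(len(strain) - 1)]
    return min(range(len(rates)), key=rates.__getitem__) + 1

# Synthetic curve: rate falls (primary), flattens (secondary), then grows (tertiary).
curve = [0, 10, 17, 22, 26, 30, 35, 43, 56]
fn = flow_number(curve)
```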

  7. Universally Sloppy Parameter Sensitivities in Systems Biology Models

    PubMed Central

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-01-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
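
    The "sloppy" spectrum refers to eigenvalues of the fit Hessian (approximated by JᵀJ) spanning many decades. A minimal two-parameter illustration, y(t) = e^(−θ₁t) + e^(−θ₂t) with nearly equal decay rates, already shows one stiff and one sloppy direction; the model and time grid below are illustrative, not one of the paper's systems-biology models.

```python
import math

def jtj_eigenvalues(theta1, theta2, times):
    """Eigenvalues of J^T J for the model y(t) = exp(-theta1*t) + exp(-theta2*t).

    The Jacobian columns are dy/dtheta1 = -t*exp(-theta1*t) and
    dy/dtheta2 = -t*exp(-theta2*t); for a symmetric 2x2 matrix
    [[a, b], [b, c]] the eigenvalues are (a+c)/2 +/- sqrt(((a-c)/2)**2 + b**2).
    """
    a = sum((t * math.exp(-theta1 * t)) ** 2 for t in times)
    c = sum((t * math.exp(-theta2 * t)) ** 2 for t in times)
    b = sum(t * t * math.exp(-(theta1 + theta2) * t) for t in times)
    mean, half = 0.5 * (a + c), 0.5 * (a - c)
    root = math.sqrt(half * half + b * b)
    return mean + root, mean - root

# Two nearly redundant decay rates give one stiff and one sloppy direction.
times = [0.1 * k for k in range(1, 51)]
lam_stiff, lam_sloppy = jtj_eigenvalues(1.0, 1.1, times)
```

    The near-parallel Jacobian columns make the small eigenvalue orders of magnitude below the large one, which is exactly the poorly constrained direction the abstract describes.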

  8. Universally sloppy parameter sensitivities in systems biology models.

    PubMed

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  9. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
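
    Two of the five indices are simple closed-form penalties on the maximized log-likelihood, and they can disagree; the fit values below are hypothetical.

```python
import math

def aic(loglik, n_params):
    """Akaike information criterion: 2k - 2*logL (lower is better)."""
    return 2.0 * n_params - 2.0 * loglik

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*logL (lower is better)."""
    return n_params * math.log(n_obs) - 2.0 * loglik

# Comparing a 1-class and a 2-class mixture IRT fit (hypothetical values):
one_class = (aic(-5210.0, 20), bic(-5210.0, 20, 1000))
two_class = (aic(-5150.0, 41), bic(-5150.0, 41, 1000))
```

    With these numbers AIC prefers the 2-class solution while BIC, whose per-parameter penalty grows with the sample size, prefers the 1-class solution.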

  10. An interval chance-constrained fuzzy modeling approach for supporting land-use planning and eco-environment planning at a watershed level.

    PubMed

    Ou, Guoliang; Tan, Shukui; Zhou, Min; Lu, Shasha; Tao, Yinghui; Zhang, Zuo; Zhang, Lu; Yan, Danping; Guan, Xingliang; Wu, Gang

    2017-12-15

    An interval chance-constrained fuzzy land-use allocation (ICCF-LUA) model is proposed in this study to support solving land resource management problem associated with various environmental and ecological constraints at a watershed level. The ICCF-LUA model is based on the ICCF (interval chance-constrained fuzzy) model which is coupled with interval mathematical model, chance-constrained programming model and fuzzy linear programming model and can be used to deal with uncertainties expressed as intervals, probabilities and fuzzy sets. Therefore, the ICCF-LUA model can reflect the tradeoff between decision makers and land stakeholders, the tradeoff between the economical benefits and eco-environmental demands. The ICCF-LUA model has been applied to the land-use allocation of Wujiang watershed, Guizhou Province, China. The results indicate that under highly land suitable conditions, optimized area of cultivated land, forest land, grass land, construction land, water land, unused land and landfill in Wujiang watershed will be [5015, 5648] hm², [7841, 7965] hm², [1980, 2056] hm², [914, 1423] hm², [70, 90] hm², [50, 70] hm² and [3.2, 4.3] hm², the corresponding system economic benefit will be between 6831 and 7219 billion yuan. Consequently, the ICCF-LUA model can effectively support optimized land-use allocation problem in various complicated conditions which include uncertainties, risks, economic objective and eco-environmental constraints. Copyright © 2017 Elsevier Ltd. All rights reserved.
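
    The chance-constrained ingredient of such a model replaces a probabilistic resource constraint P(a·x ≤ b) ≥ α, with an uncertain right-hand side b, by a deterministic equivalent. A minimal sketch for normally distributed b (the land-area figures are hypothetical):

```python
from statistics import NormalDist

def deterministic_rhs(mu_b, sigma_b, alpha):
    """Deterministic equivalent of P(a.x <= b) >= alpha for b ~ Normal(mu_b, sigma_b).

    The constraint becomes a.x <= mu_b + sigma_b * Phi^{-1}(1 - alpha);
    higher confidence levels tighten (lower) the usable right-hand side.
    """
    return mu_b + sigma_b * NormalDist().inv_cdf(1.0 - alpha)

# Uncertain available land area: mean 5600 hm^2, standard deviation 200 hm^2.
rhs_90 = deterministic_rhs(5600.0, 200.0, 0.90)
rhs_99 = deterministic_rhs(5600.0, 200.0, 0.99)
```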

  11. Optimization of Modeled Land-Atmosphere Exchanges of Water and Energy in an Isotopically-Enabled Land Surface Model by Bayesian Parameter Calibration

    NASA Astrophysics Data System (ADS)

    Wong, T. E.; Noone, D. C.; Kleiber, W.

    2014-12-01

    The single largest uncertainty in climate model energy balance is the surface latent heating over tropical land. Furthermore, the partitioning of the total latent heat flux into contributions from surface evaporation and plant transpiration is of great importance, but notoriously poorly constrained. Resolving these issues will require better exploiting information which lies at the interface between observations and advanced modeling tools, both of which are imperfect. There are remarkably few observations which can constrain these fluxes, placing strict requirements on developing statistical methods to maximize the use of limited information to best improve models. Previous work has demonstrated the power of incorporating stable water isotopes into land surface models for further constraining ecosystem processes. We present results from a stable water isotopically-enabled land surface model (iCLM4), including model experiments partitioning the latent heat flux into contributions from plant transpiration and surface evaporation. It is shown that the partitioning results are sensitive to the parameterization of kinetic fractionation used. We discuss and demonstrate an approach to calibrating select model parameters to observational data in a Bayesian estimation framework, requiring Markov Chain Monte Carlo sampling of the posterior distribution, which is shown to constrain uncertain parameters as well as inform relevant values for operational use. Finally, we discuss the application of the estimation scheme to iCLM4, including entropy as a measure of information content and specific challenges that arise in calibrating models with a large number of parameters.
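
    The Markov Chain Monte Carlo step of such a calibration can be sketched with a random-walk Metropolis sampler on a toy one-parameter model. This is illustrative only (not iCLM4); the observations and the flat prior are assumptions.

```python
import math
import random

def metropolis(log_post, start, step, n_samples, seed=1):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    rng = random.Random(seed)
    chain = []
    current, lp = start, log_post(start)
    for _ in range(n_samples):
        proposal = current + rng.gauss(0.0, step)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, posterior ratio).
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            current, lp = proposal, lp_prop
        chain.append(current)
    return chain

# Toy calibration target: observations of a flux-like parameter theta with
# unit-variance Gaussian noise and a flat prior (hypothetical data).
obs = [2.1, 1.9, 2.3, 2.0]

def log_post(theta):
    return -0.5 * sum((y - theta) ** 2 for y in obs)

chain = metropolis(log_post, start=0.0, step=0.5, n_samples=5000)
```

    After burn-in the chain concentrates around the sample mean of the observations, which is the posterior mean under this flat-prior Gaussian model.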

  12. Mixture models for estimating the size of a closed population when capture rates vary among individuals

    USGS Publications Warehouse

    Dorazio, R.M.; Royle, J. Andrew

    2003-01-01

    We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
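
    Under a beta-binomial capture model, the probability that an individual is never captured across T occasions has a closed form, which yields a simple abundance estimator. This sketches the model class, not the authors' exact parameterization; the observed count is hypothetical.

```python
import math

def log_beta(a, b):
    """Log of the Beta function via lgamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def prob_never_captured(alpha, beta, t_occasions):
    """P(zero captures in t occasions) when capture probability ~ Beta(alpha, beta).

    E[(1-p)^t] = B(alpha, beta + t) / B(alpha, beta).
    """
    return math.exp(log_beta(alpha, beta + t_occasions) - log_beta(alpha, beta))

def abundance_estimate(n_observed, alpha, beta, t_occasions):
    """Estimate N as n / P(captured at least once)."""
    return n_observed / (1.0 - prob_never_captured(alpha, beta, t_occasions))

# With uniform heterogeneity (alpha = beta = 1) and 4 occasions,
# P(never captured) reduces to 1 / (t + 1).
p0 = prob_never_captured(1.0, 1.0, 4)
n_hat = abundance_estimate(80, 1.0, 1.0, 4)
```

    The sensitivity the abstract describes corresponds to how strongly n_hat depends on the assumed latent distribution (here Beta(α, β)) rather than on the observed counts alone.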

  13. Chemical mixtures in potable water in the U.S.

    USGS Publications Warehouse

    Ryker, Sarah J.

    2014-01-01

    In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.

  14. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    NASA Astrophysics Data System (ADS)

    Gato-Rivera, B.; Semikhatov, A. M.

    1992-08-01

A direct relation between the conformal formalism for 2D quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the W(l)-constrained KP hierarchy to the (p′, p) minimal model, with the tau function given by the correlator of a product of (dressed) (l, 1) [or (1, l)] operators, provided the Miwa parameter n_i and the free parameter (an abstract bc spin) present in the constraint are expressed through the ratio p′/p and the level l.

  15. Combined constraints on the structure and physical properties of the East Antarctic lithosphere from geology and geophysics.

    NASA Astrophysics Data System (ADS)

    Reading, A. M.; Staal, T.; Halpin, J.; Whittaker, J. M.; Morse, P. E.

    2017-12-01

The lithosphere of East Antarctica is one of the least explored regions of the planet, yet it is gaining in importance in global scientific research. Continental heat flux density and 3D glacial isostatic adjustment studies, for example, rely on a good knowledge of the deep structure in constraining model inputs. In this contribution, we use a multidisciplinary approach to constrain lithospheric domains. To seismic tomography models, we add constraints from magnetic studies and also new geological constraints. Geological knowledge exists around the periphery of East Antarctica and is reinforced by knowledge of plate tectonic reconstructions. The subglacial geology of the Antarctic hinterland is largely unknown, but the plate reconstructions allow the well-posed extrapolation of major terranes into the interior of the continent, guided by the seismic tomography and magnetic images. We find that the northern boundary of the lithospheric domain centred on the Gamburtsev Subglacial Mountains has a possible trend that runs south of the Lambert Glacier region, turning coastward through Wilkes Land. Other periphery-to-interior connections are less well constrained, and the possibility of lithospheric domains that are entirely subglacial is high. We develop this framework to include a probabilistic method of handling alternative models and quantifiable uncertainties. We also show first results in using a Bayesian approach to predicting lithospheric boundaries from multivariate data. Within the newly constrained domains, we constrain heat flux (density) as the sum of basal heat flux and upper crustal heat flux. The basal heat flux is constrained by geophysical methods, while the upper crustal heat flux is constrained by geology or predicted geology. In addition to heat flux constraints, we also consider the variations in friction experienced by moving ice sheets due to varying geology.

  16. Testing and Improving Theories of Radiative Transfer for Determining the Mineralogy of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.

    2012-12-01

Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, accurately modeling abundances to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition of mixtures (weight fraction, particle diameter) containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to +/-10% of actual values, but other mixtures showed larger errors (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found that there was a single global minimum for each mixture (rather than local minima). The minimum was sensitive to modeled particle diameter but comparatively insensitive to modeled endmember weight fraction. Derivation of the endmembers' k optical constant spectra using the Hapke model showed differences from the Shkuratov-derived optical constants originally used. Model runs with different sets of optical constants suggest that slight differences in the optical constants used significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with sieved particle sizes and varied greatly for individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials. 
Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing and further studies will investigate the effect of sample hydration, permitted variability in particle size, assumed photometric functions and use of different wavelength ranges on model results. Such studies will advance understanding of how to best apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu

  17. Applying mixture toxicity modelling to predict bacterial bioluminescence inhibition by non-specifically acting pharmaceuticals and specifically acting antibiotics.

    PubMed

    Neale, Peta A; Leusch, Frederic D L; Escher, Beate I

    2017-04-01

Pharmaceuticals and antibiotics co-occur in the aquatic environment but mixture studies to date have mainly focused on pharmaceuticals alone or antibiotics alone, although differences in mode of action may lead to different effects in mixtures. In this study we used the Bacterial Luminescence Toxicity Screen (BLT-Screen) after acute (0.5 h) and chronic (16 h) exposure to evaluate how non-specifically acting pharmaceuticals and specifically acting antibiotics act together in mixtures. Three models were applied to predict mixture toxicity including concentration addition, independent action and the two-step prediction (TSP) model, which groups similarly acting chemicals together using concentration addition, followed by independent action to combine the two groups. All non-antibiotic pharmaceuticals had similar EC50 values at both 0.5 and 16 h, indicating together with a QSAR (Quantitative Structure-Activity Relationship) analysis that they act as baseline toxicants. In contrast, the antibiotics' EC50 values decreased by up to three orders of magnitude after 16 h, which can be explained by their specific effect on bacteria. Equipotent mixtures of non-antibiotic pharmaceuticals only, antibiotics only and both non-antibiotic pharmaceuticals and antibiotics were prepared based on the single chemical results. The mixture toxicity models were all in close agreement with the experimental results, with predicted EC50 values within a factor of two of the experimental results. This suggests that concentration addition can be applied to bacterial assays to model the mixture effects of environmental samples containing both specifically and non-specifically acting chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
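The three prediction models compared in this record reduce to short formulas; the sketch below is a minimal illustration with hypothetical EC50 values and mixture fractions, not the BLT-Screen data.

```python
def ca_ec50(fractions, ec50s):
    # Concentration addition: 1 / EC50_mix = sum_i (p_i / EC50_i)
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_effect(effects):
    # Independent action: E_mix = 1 - prod_i (1 - E_i)
    prod = 1.0
    for e in effects:
        prod *= (1.0 - e)
    return 1.0 - prod

def tsp_effect(group_effects):
    # Two-step prediction: concentration addition within each similarly
    # acting group (folded into each group's effect), then independent
    # action across the groups.
    return ia_effect(group_effects)

# Equipotent binary mixture of two baseline toxicants (hypothetical EC50s, mg/L)
ec50_mix = ca_ec50([0.5, 0.5], [2.0, 4.0])
```

Here `ca_ec50([0.5, 0.5], [2.0, 4.0])` gives 1/(0.25 + 0.125), i.e. an EC50 of about 2.67 mg/L for the mixture.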

  18. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    NASA Astrophysics Data System (ADS)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address the issues in the area of design customization, this paper expressed the specification and application of the constrained surface deformation, and reported the experimental performance comparison of three prevail effective similarity assessment algorithms on constrained surface deformation domain. Constrained surface deformation becomes a promising method that supports for various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of new design via measuring the difference level between the deformed new design and the initial sample model, and indicating whether the difference level is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain, including shape histogram based method, skeleton based method, and U system moment based method. We analyze their basic functions and implementation methodologies in detail, and do a series of experiments on various situations to test their accuracy and efficiency using precision-recall diagram. Shoe model is chosen as an industrial example for the experiments. It shows that shape histogram based method gained an optimal performance in comparison. Based on the result, we proposed a novel approach that integrating surface constrains and shape histogram description with adaptive weighting method, which emphasize the role of constrains during the assessment. The limited initial experimental result demonstrated that our algorithm outperforms other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.
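The pressure/temperature-equilibrium closure described above has a simple closed form when the components are ideal gases; the sketch below is illustrative only (the gas constants are standard values, the 50/50 mixture is an arbitrary example).

```python
def mixture_specific_volume(P, T, mass_fracs, gas_constants):
    # Pressure/temperature equilibrium: each component sees the common (P, T);
    # the mixture specific volume is the mass-fraction average:
    #   v_mix = sum_i lambda_i * v_i(P, T), with v_i = R_i * T / P (ideal gas)
    return sum(lam * R * T / P for lam, R in zip(mass_fracs, gas_constants))

# 50/50 by mass N2 (R = 296.8 J/(kg K)) and O2 (R = 259.8 J/(kg K)) at 1 atm, 300 K
v = mixture_specific_volume(101325.0, 300.0, [0.5, 0.5], [296.8, 259.8])
```

The mixture internal energy and entropy are closed by the same mass-fraction averaging; the thermodynamic-consistency question the report addresses is whether such averages, under a given equilibrium condition, define a legitimate composite equation of state.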

  20. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
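The two model families compared in this record can be written compactly; a minimal sketch assuming an exponential survival distribution for the uncured group and ignoring covariates and background mortality (π denotes the cure fraction).

```python
import math

def mixture_cure_surv(t, pi, rate):
    # Mixture cure model: S(t) = pi + (1 - pi) * S_u(t),
    # with S_u(t) = exp(-rate * t) the survival of the uncured group
    return pi + (1.0 - pi) * math.exp(-rate * t)

def nonmixture_cure_surv(t, pi, rate):
    # Non-mixture cure model: S(t) = pi ** F(t),
    # with F(t) the CDF of a latent (here exponential) distribution
    F = 1.0 - math.exp(-rate * t)
    return pi ** F
```

Both curves start at 1 and flatten at the cure fraction π as t grows; in the population-based extension described in the paper, these would be relative-survival curves with background mortality folded in.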

  1. Process dissociation and mixture signal detection theory.

    PubMed

    DeCarlo, Lawrence T

    2008-11-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.

  2. Statistical-thermodynamic model for light scattering from eye lens protein mixtures

    NASA Astrophysics Data System (ADS)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.

    2017-02-01

    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations where light scattering intensity depends on molecular weights and virial coefficients, to realistically high concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism, that prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. 
The model indicates that increased γ-γ attraction can raise γ-α mixture light scattering far more than it does for solutions of γ-crystallin alone, and can produce marked turbidity tens of degrees Celsius above liquid-liquid separation.

  3. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
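The classical discrete/discrete CLS filter that the paper extends has a compact frequency-domain form; the sketch below assumes a known blur OTF H and a Laplacian smoothness constraint C (the Gaussian blur, constraint spectrum, and γ value are illustrative choices, not the paper's).

```python
import numpy as np

def cls_filter(H, C, gamma):
    # Constrained least-squares filter: R = H* / (|H|^2 + gamma * |C|^2)
    return np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)

# Toy 1-D example: Gaussian blur OTF and a discrete-Laplacian constraint
n = 64
f = np.fft.fftfreq(n)
H = np.exp(-(f * 8) ** 2)                       # hypothetical blur OTF
C = np.abs(2.0 - 2.0 * np.cos(2 * np.pi * f))   # |DFT of [1, -2, 1]| kernel
R = cls_filter(H, C, gamma=0.01)

# Restore a blurred sinusoid by applying the filter in the frequency domain
blurred = np.fft.fft(np.sin(2 * np.pi * 3 * np.arange(n) / n)) * H
restored = np.real(np.fft.ifft(blurred * R))
```

The paper's contribution is to rederive this filter under the end-to-end continuous/discrete/continuous model, which then admits an efficient small-kernel spatial-domain implementation.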

  4. Toxicity interactions between manganese (Mn) and lead (Pb) or cadmium (Cd) in a model organism the nematode C. elegans.

    PubMed

    Lu, Cailing; Svoboda, Kurt R; Lenz, Kade A; Pattison, Claire; Ma, Hongbo

    2018-06-01

Manganese (Mn) is considered an emerging metal contaminant in the environment. However, its potential interactions with co-occurring toxic metals, and the associated mixture effects, are largely unknown. Here, we investigated the toxicity interactions between Mn and two commonly co-occurring toxic metals, Pb and Cd, in the model organism Caenorhabditis elegans. The acute lethal toxicity of mixtures of Mn+Pb and Mn+Cd was first assessed using a toxic unit model. Multiple toxicity endpoints including reproduction, lifespan, stress response, and neurotoxicity were then examined to evaluate the mixture effects at sublethal concentrations. Stress response was assessed using a daf-16::GFP transgenic strain that expresses GFP under the control of the DAF-16 promoter. Neurotoxicity was assessed using a dat-1::GFP transgenic strain that expresses GFP in dopaminergic neurons. The mixture of Mn+Pb induced a more-than-additive (synergistic) lethal toxicity in the worm, whereas the mixture of Mn+Cd induced a less-than-additive (antagonistic) toxicity. Mixture effects on sublethal toxicity showed more complex patterns and were dependent on the toxicity endpoints as well as the modes of toxic action of the metals. The mixture of Mn+Pb induced additive effects on both reproduction and lifespan, whereas the mixture of Mn+Cd induced additive effects on lifespan but not reproduction. Both mixtures seemed to induce additive effects on stress response and neurotoxicity, although a quantitative assessment was not possible due to the single concentrations used in the mixture tests. Our findings demonstrate the complexity of metal interactions and the associated mixture effects. Assessment of metal mixture toxicity should take into consideration the unique properties of individual metals, their potential toxicity mechanisms, and the toxicity endpoints examined.
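The toxic unit model used for the acute lethality assessment can be sketched in a few lines; the LC50s and classification tolerance below are hypothetical, not values from the study.

```python
def toxic_units(concentrations, lc50s):
    # TU_i = c_i / LC50_i; under strict additivity the mixture is lethal
    # to 50% of the population when sum(TU) = 1
    return sum(c / l for c, l in zip(concentrations, lc50s))

def classify(observed_tu_at_lc50, tolerance=0.2):
    # sum(TU) < 1 at the observed mixture LC50 -> synergism (mixture more
    # toxic than additive); > 1 -> antagonism; near 1 -> additivity
    if observed_tu_at_lc50 < 1.0 - tolerance:
        return "synergistic"
    if observed_tu_at_lc50 > 1.0 + tolerance:
        return "antagonistic"
    return "additive"

# Hypothetical single-metal LC50s (mg/L) and a mixture observed to kill 50%
# of worms at the concentrations given
tu = toxic_units([10.0, 2.0], [40.0, 16.0])
interaction = classify(tu)
```

With these toy numbers the summed toxic units at the observed mixture LC50 are 0.375, so the mixture would be classified as synergistic, mirroring the Mn+Pb result in the record.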

  5. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    NASA Astrophysics Data System (ADS)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.
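The abstract does not state the combining rule explicitly; a mole-fraction-weighted form is one plausible minimal sketch, and the linear permittivity decrements below are illustrative fits, not the paper's parameters.

```python
def eps_pure(eps_water, alpha, concentration):
    # Hypothetical linear-decrement fit for a pure electrolyte:
    # eps(c) = eps_water - alpha * c (alpha fitted to experimental data)
    return eps_water - alpha * concentration

def eps_mixture(mole_fracs, eps_values):
    # Assumed combining rule: mole-fraction-weighted permittivity of the
    # pure-electrolyte values at the same total concentration
    return sum(x * e for x, e in zip(mole_fracs, eps_values))

# NaCl-CaCl2 mixture at a common 1 M total concentration (illustrative alphas)
eps_nacl = eps_pure(78.4, 15.5, 1.0)
eps_cacl2 = eps_pure(78.4, 24.0, 1.0)
eps_mix = eps_mixture([0.7, 0.3], [eps_nacl, eps_cacl2])
```

In the implicit-solvent simulations, this concentration-dependent ε_mix would rescale the Coulomb interactions between ions at each state point.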

  6. Mixture IRT Model with a Higher-Order Structure for Latent Traits

    ERIC Educational Resources Information Center

    Huang, Hung-Yu

    2017-01-01

    Mixture item response theory (IRT) models have been suggested as an efficient method of detecting the different response patterns derived from latent classes when developing a test. In testing situations, multiple latent traits measured by a battery of tests can exhibit a higher-order structure, and mixtures of latent classes may occur on…

  7. Identification of different geologic units using fuzzy constrained resistivity tomography

    NASA Astrophysics Data System (ADS)

    Singh, Anand; Sharma, S. P.

    2018-01-01

Different geophysical inversion strategies are utilized as a component of an interpretation process that attempts to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster for which it has the highest membership value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the extent of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with the geologic units interpreted from borehole information.
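The clustering step can be sketched with a basic 1-D fuzzy c-means on log-resistivity values; this is a generic stand-in for the paper's Matlab implementation, with a deterministic initialization and illustrative cell values.

```python
def fuzzy_c_means(values, k, m=2.0, iters=100):
    """Basic 1-D fuzzy c-means: returns cluster centers and the membership
    matrix U. Each model cell is later assigned to its highest-membership
    cluster, as in the fuzzy constrained resistivity model."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]  # deterministic init
    U = []
    for _ in range(iters):
        U = []
        for v in values:
            # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
            d = [abs(v - c) + 1e-12 for c in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(k))
                      for i in range(k)])
        # center update: c_i = sum_n u_ni^m * v_n / sum_n u_ni^m
        centers = [sum(U[n][i] ** m * values[n] for n in range(len(values)))
                   / sum(U[n][i] ** m for n in range(len(values)))
                   for i in range(k)]
    return centers, U

# Log10 resistivities of model cells drawn from two geologic units (illustrative)
cells = [1.0, 1.1, 0.9, 3.0, 3.1, 2.9]
centers, U = fuzzy_c_means(cells, k=2)
labels = [max(range(2), key=lambda i: u[i]) for u in U]
```

In the tomography workflow this hard assignment feeds back into the iterative inversion as a constraint, pulling each cell's resistivity toward its cluster center.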

  8. How well can future CMB missions constrain cosmic inflation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Jérôme; Vennin, Vincent; Ringeval, Christophe, E-mail: jmartin@iap.fr, E-mail: christophe.ringeval@uclouvain.be, E-mail: vennin@iap.fr

    2014-10-01

We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest to disambiguate situation in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from 10^-1 down to 10^-7. We then compute the Bayesian evidences and complexities of all Encyclopædia Inflationaris models in order to assess the constraining power of PRISM alone and LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a violation of slow roll at second order. Finally, our results suggest that describing an inflationary model by its potential shape only, without specifying a reheating temperature, will no longer be possible given the accuracy level reached by the future CMB missions.

  9. Beta Regression Finite Mixture Models of Polarization and Priming

    ERIC Educational Resources Information Center

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  10. Packing optimization for automated generation of complex system's initial configurations for molecular dynamics and docking.

    PubMed

    Martínez, José Mario; Martínez, Leandro

    2003-05-01

Molecular dynamics is a powerful methodology for understanding many chemical and biochemical systems at the molecular level. The theories and techniques developed for structural and thermodynamic analyses are well established, and many software packages are available. However, designing starting configurations for dynamics can be cumbersome. Easily generated regular lattices can be used when simple liquids or mixtures are studied. However, for complex mixtures, polymer solutions, or liquids adsorbed on solids (for example), this approach is inefficient, and it turns out to be very hard to obtain an adequate coordinate file. In this article, the problem of obtaining an adequate initial configuration is treated as a "packing" problem and solved by an optimization procedure. The initial configuration is chosen in such a way that the minimum distance between atoms of different molecules is greater than a fixed tolerance. The optimization uses a well-known algorithm for box-constrained minimization. Applications are given for biomolecule solvation, many-component mixtures, and interfaces. This approach can reduce the work of designing starting configurations from days or weeks to a few minutes or hours, in an automated fashion. Packing optimization is also shown to be a powerful methodology for space search in docking of small ligands to proteins. This is demonstrated by docking of the thyroid hormone to its nuclear receptor. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 819-825, 2003
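The packing objective can be sketched as a pairwise penalty on violations of the minimum-distance tolerance; the coordinates below are toy points, and the all-pairs sum is a simplification of the article's per-molecule formulation (there, only atoms of different molecules are penalized, and the penalty is driven to zero by a box-constrained minimizer).

```python
import itertools
import math

def packing_penalty(coords, tol):
    """Objective for the packing problem: squared violation of the minimum
    inter-atomic distance 'tol', summed over all atom pairs. A value of
    zero means the configuration is an acceptable starting point."""
    total = 0.0
    for a, b in itertools.combinations(coords, 2):
        d = math.dist(a, b)
        total += max(0.0, tol - d) ** 2
    return total

# Two overlapping atoms violate a 2.0 angstrom tolerance; spread atoms do not
clashing = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
spread = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)]
```

Minimizing this penalty over molecular positions and orientations, subject to box bounds on the coordinates, yields a clash-free initial configuration.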

  11. Predicting mixture toxicity of seven phenolic compounds with similar and dissimilar action mechanisms to Vibrio qinghaiensis sp.nov.Q67.

    PubMed

    Huang, Wei Ying; Liu, Fei; Liu, Shu Shen; Ge, Hui Lin; Chen, Hong Han

    2011-09-01

The predictions of mixture toxicity for chemicals are commonly based on two models: concentration addition (CA) and independent action (IA). Whether CA and IA can predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms was studied. The mixture toxicity was predicted on the basis of the concentration-response data of the individual compounds. Test mixtures at different concentration ratios and concentration levels were designed using two methods. The results showed that the Weibull function fit the concentration-response data of all the components and their mixtures well, with all correlation coefficients (R) greater than 0.99 and root mean squared errors (RMSEs) less than 0.04. The predicted values from the CA and IA models conformed to the observed values for the mixtures. Therefore, it can be concluded that both CA and IA can reliably predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
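The Weibull concentration-response function used for the single-compound fits has a closed-form EC50; a minimal sketch with hypothetical fitted parameters (α, β are placeholders, not values from the study).

```python
import math

def weibull_effect(c, alpha, beta):
    # Weibull concentration-response: E = 1 - exp(-exp(alpha + beta*log10(c)))
    return 1.0 - math.exp(-math.exp(alpha + beta * math.log10(c)))

def weibull_ec50(alpha, beta):
    # Solve E = 0.5: alpha + beta*log10(EC50) = ln(ln 2)
    return 10.0 ** ((math.log(math.log(2.0)) - alpha) / beta)

# Hypothetical fitted parameters for one phenolic compound
alpha, beta = 2.0, 1.5
ec50 = weibull_ec50(alpha, beta)
```

These fitted curves are the inputs to the CA and IA predictions: CA sums component concentrations scaled by their effect concentrations, while IA multiplies the component non-effects.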

  12. Modeling slow-slip segmentation in Cascadia subduction zone constrained by tremor locations and gravity anomalies

    NASA Astrophysics Data System (ADS)

    Li, Duo; Liu, Yajing

    2017-04-01

Along-strike segmentation of slow-slip events (SSEs) and nonvolcanic tremors in Cascadia may reflect heterogeneities of the subducting slab or overlying continental lithosphere. However, the nature behind this segmentation is not fully understood. We develop a 3-D model for episodic SSEs in northern and central Cascadia, incorporating both seismological and gravitational observations to constrain the heterogeneities in the megathrust fault properties. Six years of automatically detected tremors are used to constrain the rate-state friction parameters. The effective normal stress at SSE depths is constrained by along-margin free-air and Bouguer gravity anomalies. The along-strike variation in the long-term plate convergence rate is also taken into consideration. Simulation results show that five segments of ~Mw 6.0 SSEs spontaneously appear along strike, correlated with the distribution of tremor epicenters. Modeled SSE recurrence intervals are equally comparable to GPS observations using both types of gravity anomaly constraints. However, the model constrained by the free-air anomaly does a better job of reproducing the cumulative slip, as well as surface displacements more consistent with GPS observations. The modeled along-strike segmentation represents the averaged slip release over many SSE cycles, rather than permanent barriers. Individual slow-slip events can still propagate across the boundaries, which may cause interactions between adjacent SSEs, as observed in time-dependent GPS inversions. In addition, the moment-duration scaling is sensitive to the selection of velocity criteria for determining when SSEs occur. Hence, the detection ability of the current GPS network should be considered in the interpretation of slow earthquake source parameter scaling relations.

  13. Modelling and Vibration Control of Beams with Partially Debonded Active Constrained Layer Damping Patch

    NASA Astrophysics Data System (ADS)

    SUN, D.; TONG, L.

    2002-05-01

A detailed model for beams with partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfect bonding region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonded area, it is assumed that no peel or shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. In active control, positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviors of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is remarkable in the regions near the ends of the ACLD patch, especially for the higher order vibration modes. It is found that a thinner damping layer may lead to larger shear strain and consequently results in larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors in the hybrid control. The numerical results reveal that edge debonding can lead to a reduction of both passive and active damping, and that the hybrid damping may be more sensitive to debonding of the damping layer than the passive damping.

  14. Mixture optimization for mixed gas Joule-Thomson cycle

    NASA Astrophysics Data System (ADS)

    Detlor, J.; Pfotenhauer, J.; Nellis, G.

    2017-12-01

An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters. This study expands on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum value of the isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed gas JT system with load temperatures down to 110 K and supply temperatures above room temperature for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the percentage of the heat exchanger that operates in the two-phase region, in order to begin the process of selecting a mixture for experimental investigation.
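The max-min criterion the model uses can be sketched directly; the enthalpy function below is a toy placeholder (real use would call a mixture property package), and the candidate compositions are illustrative.

```python
def min_isothermal_dh(h, x, T_grid, p_low, p_high):
    """Minimum over the temperature range of the isothermal enthalpy change
    dh_T = h(T, p_low, x) - h(T, p_high, x), which bounds JT cooling power."""
    return min(h(T, p_low, x) - h(T, p_high, x) for T in T_grid)

def best_mixture(h, candidates, T_grid, p_low, p_high):
    # The optimal composition maximizes the minimum dh_T over the range
    return max(candidates,
               key=lambda x: min_isothermal_dh(h, x, T_grid, p_low, p_high))

# Hypothetical toy enthalpy model, J/mol: non-ideality grows with x[0]
def h_toy(T, p, x):
    a = 1.0 + 0.5 * x[0]  # illustrative composition-dependent coefficient
    return 2.0 * T - a * p / T

candidates = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
T_grid = list(range(110, 301, 10))   # 110 K load up to ~300 K supply
best = best_mixture(h_toy, candidates, T_grid, p_low=5.0, p_high=15.0)
```

With the toy model, dh_T = a(p_high - p_low)/T is smallest at the warm end of the range, so the search rewards compositions that keep the warm-end enthalpy difference large, mirroring the pinch-point logic of the real tool.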

  15. Enceladus Plume Structure and Time Variability: Comparison of Cassini Observations

    PubMed Central

    Perry, Mark E.; Hansen, Candice J.; Waite, J. Hunter; Porco, Carolyn C.; Spencer, John R.; Howett, Carly J. A.

    2017-01-01

During three low-altitude (99, 66, 66 km) flybys through the Enceladus plume in 2010 and 2011, Cassini's ion neutral mass spectrometer (INMS) made its first high spatial resolution measurements of the plume's gas density and distribution, detecting in situ the individual gas jets within the broad plume. Since those flybys, more detailed Imaging Science Subsystem (ISS) imaging observations of the plume's icy component have been reported, which constrain the locations and orientations of the numerous gas/grain jets. In the present study, we used these ISS imaging results, together with ultraviolet imaging spectrograph stellar and solar occultation measurements and modeling of the three-dimensional structure of the vapor cloud, to constrain the magnitudes, velocities, and time variability of the plume gas sources from the INMS data. Our results confirm a mixture of both low and high Mach gas emission from Enceladus' surface tiger stripes, with gas accelerated as fast as Mach 10 before escaping the surface. The vapor source fluxes and jet intensities/densities vary dramatically and stochastically, up to a factor of 10, both spatially along the tiger stripes and over time between flyby observations. This complex spatial variability and dynamics may result from time-variable tidal stress fields interacting with subsurface fissure geometry and tortuosity beyond detectability, including changing gas pathways to the surface, and fluid flow and boiling in response to evolving lithostatic stress conditions. The total plume gas source has 30% uncertainty depending on the contributions assumed for adiabatic and nonadiabatic gas expansion/acceleration to the high Mach emission. The overall vapor plume source rate exhibits stochastic time variability up to a factor of ∼5 between observations, reflecting that found in the individual gas sources/jets. Key Words: Cassini at Saturn—Geysers—Enceladus—Gas dynamics—Icy satellites. Astrobiology 17, 926–940. PMID:28872900

  16. The Dos and Don'ts of how to Build a Planet, Using the Moon as an Example

    NASA Technical Reports Server (NTRS)

    Jones, J. H.

    2006-01-01

    The bulk chemical compositions of planets may yield important clues concerning planetary origins. Failing that, bulk compositions are still important, in that they constrain calculation of planetary mineralogies and also constrain the petrogenesis of basaltic magmas. In the case of the Earth, there is little or no debate about the composition of the Earth's upper mantle. This is because our sample collections contain peridotitic xenoliths of that mantle. The most fertile of these are believed to have been little modified from their primary compositions. Using these samples and chondritic meteorites as a starting point, small perturbations on the compositions of existing samples allow useful reconstruction of the bulk silicate Earth (BSE). Elsewhere, I have argued that the next simplest case is the Eucrite Parent Body (EPB). Reconstructions based on Sc partitioning indicate that the EPB can be well approximated by a mixture of 20% eucrite and 80% equilibrium olivine. This leads to a parent body that is similar to CO (or devolatilized CM) chondrites. Partial melting experiments on CM chondrites confirm this model, because the residual solids in these experiments are dominated by olivine with minor pigeonite [3]. The most difficult bodies to reconstruct are those that have undergone the most differentiation. Both the Moon and Mars may have passed through a magma ocean stage. In any event, lunar and martian basalts, unlike eucrites, were not derived from undifferentiated source regions. Reconstructions are primarily based on compositional trends within the basalts themselves with some critical assumptions: (i) refractory lithophile elements (Ca, Al, REE, actinides) are presumed to be in chondritic relative abundances; and (ii) some major element ratio is believed to exist in a chondritic ratio (e.g., Mg/Si, Mg/Al). The most commonly used parameter is Mg/Si.

  17. Understanding Subsurface Geoelectrical and Structural Constraints for Low Frequency Radar Sounding of Jovian Satellites

    NASA Astrophysics Data System (ADS)

    Heggy, Essam; Bruzzone, Lorenzo; Beck, Pierre; Doute, Sylvain; Gim, Youngyu; Herique, Alain; Kofman, Wlodek; Orosei, Roberto; Plaut, Jeffery; Rosen, Paul; Seu, Roberto

    2010-05-01

    Thermally stable ice sheets on Earth are known to be among the most favorable geophysical contexts for deep subsurface sounding radars. Penetrations ranging from a few to several hundred meters have been observed at 10 to 60 MHz when sounding homogeneous, pure ice sheets in Antarctica and in Alaskan glaciers. Unlike the terrestrial case, ice sheets on the Jovian satellites are older formations with a more complex matrix of mineral inclusions, whose uneven three-dimensional distribution in the surface and subsurface is yet to be understood in order to quantify its effect on dielectric attenuation at the experiment sounding frequencies. Moreover, ridges and tectonic and shock features may result in a complex, heterogeneous subsurface structure that can induce scattering attenuation of different amplitudes depending on the subsurface heterogeneity level. Such attenuation phenomena have to be accounted for in the instrument design and future data analysis in order to optimize the science return, reduce mission risk and define proper operation modes. In order to address those challenges in the current performance studies and instrument design of the proposed radar sounding experiments, we present an attempt to quantify both the dielectric and scattering losses on the two icy satellites, Ganymede and Europa, based on experimental dielectric characterization of relevant icy-dust mixture samples, field work in analog environments and radar propagation simulations in parametric subsurface geophysical models representing potential geological scenarios for the two Jovian satellites. Our preliminary results suggest that the use of a dual-band radar overcomes several of these constraints and reduces the ambiguities associated with subsurface interface mapping. Acknowledgement: This research is carried out by the Jet Propulsion Laboratory/Caltech, under a grant from the National Aeronautics and Space Administration.

  18. Existence, uniqueness and positivity of solutions for BGK models for mixtures

    NASA Astrophysics Data System (ADS)

    Klingenberg, C.; Pirner, M.

    2018-01-01

    We consider kinetic models for a multi-component gas mixture without chemical reactions. In the literature, one can find two types of BGK models for describing gas mixtures. One type has a sum of BGK-type interaction terms in the relaxation operator, for example the model described by Klingenberg, Pirner and Puppo [20], which contains well-known models of physicists and engineers, for example Hamel [16] and Gross and Krook [15], as special cases. The other type contains only one collision term on the right-hand side, for example the well-known model of Andries, Aoki and Perthame [1]. For each of these two models [20] and [1], we prove existence, uniqueness and positivity of solutions in the first part of the paper. In the second part, we use the first model [20] to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described by Dellacherie [11].

  19. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    PubMed

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no other interactions between mixture components were observed, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution, thereby avoiding deviations in the solutions caused by unrealistic parameter assumptions. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The application results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
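
The reformulation at the heart of such models can be sketched in a few lines: a chance constraint Pr(a·x ≤ B) ≥ α with log-normal B has the deterministic equivalent a·x ≤ exp(μ + σΦ⁻¹(1 − α)). The pollutant-load framing, coefficient values and parameter values below are hypothetical illustrations, not figures from the Erhai Lake case study:

```python
import math
from statistics import NormalDist

def deterministic_rhs(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint
    Pr(a.x <= B) >= alpha for log-normal B = exp(N(mu, sigma^2)):
    a.x <= F_B^{-1}(1 - alpha) = exp(mu + sigma * Phi^{-1}(1 - alpha))."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(1.0 - alpha))

# Hypothetical pollutant-load constraint: per-hectare loads a, planted areas x.
a = [0.8, 1.2]                      # kg/ha loading coefficients (assumed)
rhs = deterministic_rhs(mu=3.0, sigma=0.4, alpha=0.95)

def feasible(x):
    """Check a candidate production plan x against the hardened constraint."""
    return sum(ai * xi for ai, xi in zip(a, x)) <= rhs
```

Raising the constraint-satisfaction level α shrinks the right-hand side and tightens the feasible region, which is exactly the economy-versus-reliability trade-off the interval solutions explore.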

  1. Application of correlation constrained multivariate curve resolution alternating least-squares methods for determination of compounds of interest in biodiesel blends using NIR and UV-visible spectroscopic data.

    PubMed

    de Oliveira, Rodrigo Rocha; de Lima, Kássio Michell Gomes; Tauler, Romà; de Juan, Anna

    2014-07-01

    This study describes two applications of a variant of the multivariate curve resolution alternating least squares (MCR-ALS) method with a correlation constraint. The first application describes the use of MCR-ALS for the determination of biodiesel concentrations in biodiesel blends using near infrared (NIR) spectroscopic data. In the second application, the proposed method allowed the determination of the synthetic antioxidant N,N'-Di-sec-butyl-p-phenylenediamine (PDA) present in biodiesel mixtures from different vegetable sources using UV-visible spectroscopy. The well-established multivariate regression algorithm partial least squares (PLS) was applied for comparison of quantification performance in both applications. The correlation constraint has been adapted to handle the presence of batch-to-batch matrix effects due to ageing, which can occur when different groups of samples are used to build a calibration model, as in the first application. Different data set configurations and diverse modes of application of the correlation constraint are explored, and guidelines are given to cope with different types of analytical problems, such as the correction of matrix effects among biodiesel samples, where MCR-ALS outperformed PLS, reducing the relative error of prediction (RE) from 9.82% to 4.85% in the first application, or the determination of a minor compound with overlapping weak spectroscopic signals, where MCR-ALS gave a higher error (RE = 3.16%) for prediction of PDA than PLS (RE = 1.99%), but with the advantage of recovering the related pure spectral profiles of the analyte and interferences. The obtained results show the potential of the MCR-ALS method with correlation constraint to be adapted to diverse data set configurations and analytical problems related to the determination of biodiesel mixtures and added compounds therein. Copyright © 2014 Elsevier B.V. All rights reserved.
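
The correlation constraint itself reduces to an internal-calibration regression applied to the resolved concentration profile between ALS iterations. A minimal sketch for a single component, assuming ordinary least squares; the function name and data are illustrative, not from the paper:

```python
def correlation_constraint(c, y_ref, cal_idx):
    """One pass of an MCR-ALS-style correlation constraint (sketch).

    c       : resolved concentration profile for one component (list)
    y_ref   : known reference concentrations of the calibration samples
    cal_idx : indices of the calibration samples within c

    Fits y_ref ~ m*c + b by ordinary least squares on the calibration
    subset, then maps the whole profile through the fitted line so the
    resolved values sit on the reference concentration scale.
    """
    xs = [c[i] for i in cal_idx]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(y_ref) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, y_ref))
    m = sxy / sxx                      # slope
    b = my - m * mx                    # intercept
    return [m * v + b for v in c]
```

In the full algorithm this update is applied inside each ALS cycle, alongside constraints such as non-negativity, so the concentration profile converges on the calibrated scale.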

  2. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Treesearch

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  3. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well adapted to capture the skewness, the long tails and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
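
The transform-smooth-back-transform pipeline can be sketched compactly. The sketch below assumes a log-normal mixture as the parsimonious pilot model and Chen's beta kernel on the unit interval; the back-transform multiplies by the mixture pdf (the Jacobian of the CDF transform). All function names, data and the bandwidth are illustrative:

```python
import math
from statistics import NormalDist

def betapdf(u, a, b):
    """Beta(a, b) density via log-gamma, zero outside (0, 1)."""
    if u <= 0.0 or u >= 1.0:
        return 0.0
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(logc + (a - 1) * math.log(u) + (b - 1) * math.log(1 - u))

def mixture_cdf(x, weights, mus, sigmas):
    """CDF of the log-normal mixture pilot model (assumed form)."""
    nd = NormalDist()
    return sum(w * nd.cdf((math.log(x) - m) / s)
               for w, m, s in zip(weights, mus, sigmas))

def mixture_pdf(x, weights, mus, sigmas):
    return sum(w * math.exp(-0.5 * ((math.log(x) - m) / s) ** 2)
               / (x * s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, mus, sigmas))

def beta_kernel_density(claims, weights, mus, sigmas, bandwidth):
    """Transform claims with the mixture CDF, smooth on [0, 1] with
    Chen's beta kernel, and back-transform via the change of variables."""
    u = [mixture_cdf(c, weights, mus, sigmas) for c in claims]

    def density(x):
        t = mixture_cdf(x, weights, mus, sigmas)
        g = sum(betapdf(ui, t / bandwidth + 1, (1 - t) / bandwidth + 1)
                for ui in u) / len(u)
        return g * mixture_pdf(x, weights, mus, sigmas)

    return density
```

If the pilot mixture were exact, the transformed data would be uniform, the smoothed unit-interval density would be flat, and the back-transform would return the pilot density; the kernel step corrects departures from that ideal.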

  4. Finite mixture modeling for vehicle crash data with application to hotspot identification.

    PubMed

    Park, Byung-Jung; Lord, Dominique; Lee, Chungwon

    2014-10-01

    The application of finite mixture regression models has recently gained an interest from highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference measured by the percentage deviation in ranking orders was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than the NB model because of the better model specification. This finding was also supported by the simulation study which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated that there is a discrepancy between false discovery (increasing) and false negative rates (decreasing). Since the costs associated with false positives and false negatives are different, it is suggested that the selected optimal threshold value should be decided by considering the trade-offs between these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
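
The estimation machinery behind such finite mixture count models can be illustrated with an EM fit of a two-component Poisson mixture, a deliberately simplified stand-in for the negative binomial mixtures used in the study (the crash counts and starting values below are hypothetical):

```python
import math

def em_poisson_mixture(counts, iters=200):
    """EM for a two-component Poisson mixture of crash counts (sketch).
    Returns mixing weights pi and component means lam."""
    lam = [min(counts) + 0.5, max(counts) + 0.5]   # crude initialisation
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each site's count
        resp = []
        for y in counts:
            w = [pi[k] * math.exp(-lam[k]) * lam[k] ** y / math.factorial(y)
                 for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: update weights and component means
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(counts)
            lam[k] = sum(r[k] * y for r, y in zip(resp, counts)) / nk
    return pi, lam
```

With the components fitted, a hotspot ranking can be built from each site's posterior expected mean, which is where the mixture and plain NB rankings can diverge.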

  5. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed and implemented in software a mathematical model of the nonstationary separation processes proceeding in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. With the use of this model, the parameters of the separation of germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters during the separation of multicomponent isotope mixtures.

  6. The in situ transverse lamina strength of composite laminates

    NASA Technical Reports Server (NTRS)

    Flaggs, D. L.

    1983-01-01

    The objective of the work reported in this presentation is to determine the in situ transverse strength of a lamina within a composite laminate. From a fracture mechanics standpoint, in situ strength may be viewed as constrained cracking that has been shown to be a function of both lamina thickness and the stiffness of adjacent plies that serve to constrain the cracking process. From an engineering point of view, however, constrained cracking can be perceived as an apparent increase in lamina strength. With the growing need to design more highly loaded composite structures, the concept of in situ strength may prove to be a viable means of increasing the design allowables of current and future composite material systems. A simplified one dimensional analytical model is presented that is used to predict the strain at onset of transverse cracking. While it is accurate only for the most constrained cases, the model is important in that the predicted failure strain is seen to be a function of a lamina's thickness d and of the extensional stiffness bE theta of the adjacent laminae that constrain crack propagation in the 90 deg laminae.

  7. Closed-form solutions in stress-driven two-phase integral elasticity for bending of functionally graded nano-beams

    NASA Astrophysics Data System (ADS)

    Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de

    2018-03-01

    Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture, defined by a convex combination of local and nonlocal phases, is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in Structural Mechanics since it is ill-posed, and that local-nonlocal mixtures based on the Eringen integral model only partially resolve this ill-posedness. In fact, a singular behaviour of continuous nano-structures appears as the local fraction tends to vanish, so the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions for inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. The effectiveness of the new nonlocal approach is tested by comparing the contributed results with those corresponding to the mixture Eringen theory.
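
The two-phase relation described above can be sketched in symbols. The notation here (local fraction α, averaging kernel φ_λ, local compliance C) follows common usage in the stress-driven nonlocal elasticity literature and is an assumption, not notation taken from this abstract:

```latex
% Stress-driven local-nonlocal mixture (sketch): elastic strain as a
% convex combination of a local term and a convolution on the stress.
\varepsilon(x) \;=\; \alpha\, C\,\sigma(x)
  \;+\; (1-\alpha)\int_{0}^{L} \phi_{\lambda}(x-\xi)\, C\,\sigma(\xi)\, d\xi,
\qquad 0 \le \alpha \le 1 .
```

Setting α = 1 recovers the purely local law, and α → 0 the fully nonlocal limit; in the strain-driven (Eringen) variant the roles of stress and strain in the convolution are exchanged, which is where the ill-posedness discussed above arises.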

  8. A modified procedure for mixture-model clustering of regional geochemical data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David B.; Horton, John D.

    2014-01-01

    A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
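
The isometric log-ratio step applied before the robust principal component transformation has a standard closed form. A minimal sketch for one common ilr basis (the pivot-coordinate variant, chosen here purely for illustration):

```python
import math

def ilr(composition):
    """Isometric log-ratio transform of a D-part composition (sketch).

    Uses pivot coordinates: z_i = sqrt(i/(i+1)) * ln(g(x_1..x_i)/x_{i+1}),
    where g() is the geometric mean. Maps D positive parts to D-1 real
    coordinates, removing the constant-sum constraint before PCA.
    """
    x = composition
    z = []
    for i in range(1, len(x)):
        gm = math.exp(sum(math.log(v) for v in x[:i]) / i)  # geometric mean
        z.append(math.sqrt(i / (i + 1.0)) * math.log(gm / x[i]))
    return z
```

Because the transform depends only on ratios, rescaling all element concentrations by a common factor leaves the coordinates unchanged, which is exactly the compositional invariance the clustering procedure relies on.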

  9. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form by handling the UV spectral data. A 3-factor 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.

  10. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.
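
In the fully miscible limit, models of this type reduce to Le Chatelier's rule solved for the mixture flash point. The sketch below omits the activity coefficients and the liquid-liquid split that the paper's model adds for partially miscible pairs, and the Antoine coefficients and pure flash point shown in the usage note are illustrative values only:

```python
def antoine(a, b, c, t):
    """Antoine vapour pressure, log10(P) = A - B/(T + C); units follow
    whatever coefficient set is supplied (values used here are illustrative)."""
    return 10.0 ** (a - b / (t + c))

def flash_point(xs, antoine_coeffs, pure_fps, lo=-50.0, hi=150.0):
    """Solve Le Chatelier's rule for an ideal miscible mixture:
    sum_i x_i * P_i(T) / P_i(T_fp,i) = 1 at the mixture flash point T.
    Root-finding by bisection; g(T) is monotone increasing in T."""
    def g(t):
        return sum(x * antoine(*co, t) / antoine(*co, tf)
                   for x, co, tf in zip(xs, antoine_coeffs, pure_fps)) - 1.0
    for _ in range(80):                       # bisection on [lo, hi]
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a single component the rule collapses to P(T)/P(T_fp) = 1, so the solver must return the pure-component flash point, which is a convenient sanity check.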

  11. Freezing Transition Studies Through Constrained Cell Model Simulation

    NASA Astrophysics Data System (ADS)

    Nayhouse, Michael; Kwon, Joseph Sang-Il; Heng, Vincent R.; Amlani, Ankur M.; Orkoulas, G.

    2014-10-01

    In the present work, a simulation method based on cell models is used to deduce the fluid-solid transition of a system of particles that interact via a pairwise interaction potential. The simulations are implemented under constant-pressure conditions on a generalized version of the constrained cell model. The constrained cell model is constructed by dividing the volume into Wigner-Seitz cells and confining each particle in a single cell. This model is a special case of a more general cell model, which is formed by introducing an additional field variable that controls the number of particles per cell and, thus, the relative stability of the solid against the fluid phase. High field values force configurations with one particle per cell and thus favor the solid phase. Fluid-solid coexistence on the isotherm that corresponds to a reduced temperature of 2 is determined from constant-pressure simulations of the generalized cell model using tempering and histogram reweighting techniques. The entire fluid-solid phase boundary is determined through a thermodynamic integration technique based on histogram reweighting, using the previous coexistence point as a reference point. The vapor-liquid phase diagram is obtained from constant-pressure simulations of the unconstrained system using tempering and histogram reweighting. The phase diagram of the system is found to contain a stable critical point and a triple point. The phase diagram of the corresponding constrained cell model is also found to contain both a stable critical point and a triple point.

  12. Aftershock distribution as a constraint on the geodetic model of coseismic slip for the 2004 Parkfield earthquake

    USGS Publications Warehouse

    Bennington, Ninfa; Thurber, Clifford; Feigl, Kurt; ,

    2011-01-01

    Several studies of the 2004 Parkfield earthquake have linked the spatial distribution of the event’s aftershocks to the mainshock slip distribution on the fault. Using geodetic data, we find a model of coseismic slip for the 2004 Parkfield earthquake with the constraint that the edges of coseismic slip patches align with aftershocks. The constraint is applied by encouraging the curvature of coseismic slip in each model cell to be equal to the negative of the curvature of seismicity density. The large patch of peak slip about 15 km northwest of the 2004 hypocenter found in the curvature-constrained model is in good agreement in location and amplitude with previous geodetic studies and the majority of strong motion studies. The curvature-constrained solution shows slip primarily between aftershock “streaks” with the continuation of moderate levels of slip to the southeast. These observations are in good agreement with strong motion studies, but inconsistent with the majority of published geodetic slip models. Southeast of the 2004 hypocenter, a patch of peak slip observed in strong motion studies is absent from our curvature-constrained model, but the available GPS data do not resolve slip in this region. We conclude that the geodetic slip model constrained by the aftershock distribution fits the geodetic data quite well and that inconsistencies between models derived from seismic and geodetic data can be attributed largely to resolution issues.

  13. Constraining the dark energy equation of state using Bayes theorem and the Kullback–Leibler divergence

    DOE PAGES

    Hee, S.; Vázquez, J. A.; Handley, W. J.; ...

    2016-12-01

    Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era CMB, BAO, SNIa and Lyman-α data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation of state posterior in the range 1.5 < z < 3. Although the concordance ΛCDM model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other a supernegative equation of state (also known as ‘phantom dark energy’) is identified within the 1.5σ confidence intervals of the posterior distribution. In order to identify the power of different datasets in constraining the dark energy equation of state, we use a novel formulation of the Kullback–Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible dataset combination. The SNIa and BAO datasets are shown to provide much more constraining power than the Lyman-α datasets. Furthermore, SNIa and BAO constrain most strongly over the redshift range 0.1-0.5, whilst the Lyman-α data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular dataset, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested despite the weakly preferred w(z) structure in the data.
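
The information gain that the Kullback-Leibler formalism quantifies can be illustrated in one dimension, where the divergence between two Gaussians is closed-form. The prior and "posterior" parameters below are hypothetical stand-ins, not values from the paper:

```python
import math

def kl_gaussian(mu_p, sig_p, mu_q, sig_q):
    """D_KL(P || Q) for one-dimensional Gaussians P = N(mu_p, sig_p^2)
    and Q = N(mu_q, sig_q^2); here P plays the posterior, Q the prior."""
    return (math.log(sig_q / sig_p)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sig_q ** 2) - 0.5)

# Hypothetical example: prior on w ~ N(-1, 1); two dataset combinations
# shrink the posterior by different amounts.
gain_snia = kl_gaussian(-1.02, 0.08, -1.0, 1.0)   # tight posterior
gain_lya  = kl_gaussian(-1.05, 0.60, -1.0, 1.0)   # weak posterior
```

The dataset combination that shrinks the posterior more yields the larger divergence from the prior, which is how the constraining power of SNIa+BAO versus Lyman-α can be ranked on a common scale.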

  15. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
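
The conversion can be sketched as follows: the first-order necessary conditions for the constrained minimum are turned into a nonnegative residual whose global minimum (zero) a genetic algorithm then searches for without gradients. The toy problem, operator choices and GA settings below are illustrative, not the paper's ascent-trajectory application:

```python
import random

def residual(z):
    """Squared residual of the necessary conditions for the toy problem
    min x1^2 + x2^2  s.t.  x1 + x2 = 1;  z = (x1, x2, lambda).
    Conditions: 2*x1 + lam = 0, 2*x2 + lam = 0, x1 + x2 - 1 = 0."""
    x1, x2, lam = z
    r1 = 2 * x1 + lam
    r2 = 2 * x2 + lam
    r3 = x1 + x2 - 1.0
    return r1 * r1 + r2 * r2 + r3 * r3

def ga_minimize(fitness, dim, pop_size=60, gens=300, seed=1):
    """Bare-bones GA: truncation selection, blend crossover, Gaussian
    mutation, with the elite carried over unchanged each generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.05)
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return min(pop, key=fitness)

best = ga_minimize(residual, dim=3)   # expect (x1, x2) near (0.5, 0.5)
```

Because the residual vanishes only where the necessary conditions hold, driving it to zero recovers both the constrained minimizer and its multiplier in one unconstrained search.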

  16. Forward Modeling of Atmospheric Carbon Dioxide in GEOS-5: Uncertainties Related to Surface Fluxes and Sub-Grid Transport

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Ott, Lesley E.; Zhu, Zhengxin; Bowman, Kevin; Brix, Holger; Collatz, G. James; Dutkiewicz, Stephanie; Fisher, Joshua B.; Gregg, Watson W.; Hill, Chris; et al.

    2011-01-01

    Forward GEOS-5 AGCM simulations of CO2, with transport constrained by analyzed meteorology for 2009-2010, are examined. The CO2 distributions are evaluated using AIRS upper-tropospheric CO2 and ACOS-GOSAT total-column CO2 observations. Different combinations of surface CO2 fluxes are used to generate ensembles of runs that span some of the uncertainty in surface emissions and uptake. The fluxes are specified in GEOS-5 from different inventories (fossil and biofuel), different data-constrained estimates of land biological emissions, and different data-constrained ocean-biology estimates. One set of fluxes is based on the established "Transcom" database and others are constructed using contemporary satellite observations to constrain land and ocean process models. Likewise, different approximations to sub-grid transport are employed to construct an ensemble of CO2 distributions related to transport variability. This work is part of NASA's "Carbon Monitoring System Flux Pilot Project".

  17. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China.

    PubMed

    Ji, Cuicui; Jia, Yonghong; Gao, Zhihai; Wei, Huaidong; Li, Xiaosong

    2017-01-01

    Desert vegetation plays significant roles in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates are investigated through comparing the performance of linear and nonlinear spectral mixture models with different endmembers applied to field spectral measurements of two types of typical desert vegetation, namely, Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates, and in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. In consideration of computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for the Nitraria shrub plots, but this would result in significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects were more obvious for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Therefore, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between the photosynthetic/non-photosynthetic vegetation and the bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models thus comes down to balancing computational complexity against accuracy requirements.
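    The linear spectral mixture model (LSMM) discussed above amounts, per pixel, to a nonnegative least-squares problem with cover fractions summing to one. A minimal sketch on synthetic spectra — the endmember matrix and fractions below are invented for illustration, not the field measurements of the study:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic endmember spectra: rows = spectral bands, columns = endmembers
# (standing in for photosynthetic vegetation, non-photosynthetic
# vegetation, and bare soil -- invented numbers, not field spectra).
rng = np.random.default_rng(42)
E = rng.uniform(0.0, 1.0, size=(10, 3))     # 10 bands, 3 endmembers
f_true = np.array([0.5, 0.3, 0.2])          # true cover fractions
pixel = E @ f_true                          # noise-free mixed spectrum

# Enforce the sum-to-one constraint by appending a heavily weighted
# row of ones to the system (a standard least-squares device), and
# nonnegativity via NNLS.
w = 100.0
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w)
f_est, _ = nnls(A, b)                       # recovered cover fractions
```

    On noise-free linear mixtures this recovers the fractions exactly; the nonlinear (kernel) variants favored for erectophile canopies replace the linear system with a kernelized one.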

  19. Polarization and studies of evolved star mass loss

    NASA Astrophysics Data System (ADS)

    Sargent, Benjamin; Srinivasan, Sundar; Riebel, David; Meixner, Margaret

    2012-05-01

    Polarization studies of astronomical dust have proven very useful in constraining its properties. Such studies are used to constrain the spatial arrangement, shape, composition, and optical properties of astronomical dust grains. Here we explore possible connections between astronomical polarization observations and our studies of mass loss from evolved stars. We are studying evolved star mass loss in the Large Magellanic Cloud (LMC) by using photometry from the Surveying the Agents of a Galaxy's Evolution (SAGE; PI: M. Meixner) Spitzer Space Telescope Legacy program. We use the radiative transfer program 2Dust to create our Grid of Red supergiant and Asymptotic giant branch ModelS (GRAMS), in order to model this mass loss. To model emission of polarized light from evolved stars, however, we appeal to other radiative transfer codes. We probe how polarization observations might be used to constrain the dust shell and dust grain properties of the samples of evolved stars we are studying.

  20. A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions

    NASA Astrophysics Data System (ADS)

    Lienert, Sebastian; Joos, Fortunat

    2018-05-01

    A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs involve parameters that are prone to substantial uncertainty. To cope with these uncertainties, Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC of the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with an appropriate parameter choice.
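    Latin hypercube sampling, used above to build the 1000-member perturbed parameter ensemble, can be sketched in a few lines of NumPy; the parameter ranges below are hypothetical, not the LPX-Bern parameter bounds:

```python
import numpy as np

def latin_hypercube(n, bounds, seed=0):
    """Draw n Latin hypercube samples within (low, high) bounds per parameter.
    Each axis is cut into n equal strata and every stratum receives exactly
    one sample, so even modest ensembles cover the full parameter range."""
    bounds = np.asarray(bounds, dtype=float)         # shape (d, 2)
    d = len(bounds)
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n, d))                     # jitter inside strata
    unit = np.empty((n, d))
    for j in range(d):
        unit[:, j] = (rng.permutation(n) + u[:, j]) / n
    return bounds[:, 0] + unit * (bounds[:, 1] - bounds[:, 0])

# e.g. a 1000-member ensemble over two hypothetical model parameters
ensemble = latin_hypercube(1000, [(0.1, 0.9), (1.0, 5.0)])
```

    The defining property — one sample per stratum along every axis — is what gives LHS better marginal coverage than plain Monte Carlo at the same ensemble size.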

  1. A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.

    PubMed

    Carreau, Julie; Bengio, Yoshua

    2009-07-01

    In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y , with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
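    The hybrid Pareto construction — a Gaussian whose upper tail is replaced by a generalized Pareto tail — can be sketched as an unconditional density. This is a simplified version: the tail scale is fixed and only density continuity is enforced at the junction, whereas the hybrid Pareto of Carreau and Bengio also matches the derivative there and parametrizes the tail heaviness freely.

```python
import numpy as np
from scipy.stats import norm, genpareto

def hybrid_pareto_pdf(y, mu=0.0, sigma=1.0, u=1.5, xi=0.2):
    """Gaussian body with its upper tail (y > u) replaced by a generalized
    Pareto tail, glued continuously at u and renormalized to integrate to 1.
    Simplification: the tail scale is fixed at sigma and only density
    continuity is enforced (the paper's hybrid Pareto also matches the
    derivative at the junction)."""
    beta = sigma                               # assumed tail scale
    phi_u = norm.pdf(u, mu, sigma)
    Z = norm.cdf(u, mu, sigma) + phi_u * beta  # body mass + glued tail mass
    y = np.asarray(y, dtype=float)
    body = norm.pdf(y, mu, sigma)
    tail = phi_u * beta * genpareto.pdf(y - u, xi, scale=beta)
    return np.where(y <= u, body, tail) / Z

# Sanity grid: the glued density should integrate to ~1.
grid = np.linspace(-8.0, 60.0, 200001)
dens = hybrid_pareto_pdf(grid)
```

    In the paper such components are combined in a mixture whose parameters are emitted by a neural network as functions of x, giving the conditional density p(y|x).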

  2. Neurotoxicological and statistical analyses of a mixture of five organophosphorus pesticides using a ray design.

    PubMed

    Moser, V C; Casey, M; Hamm, A; Carter, W H; Simmons, J E; Gennings, C

    2005-07-01

    Environmental exposures generally involve chemical mixtures instead of single chemicals. Statistical models such as the fixed-ratio ray design, wherein the mixing ratio (proportions) of the chemicals is fixed across increasing mixture doses, allow for the detection and characterization of interactions among the chemicals. In this study, we tested for interaction(s) in a mixture of five organophosphorus (OP) pesticides (chlorpyrifos, diazinon, dimethoate, acephate, and malathion). The ratio of the five pesticides (full ray) reflected the relative dietary exposure estimates of the general population as projected by the US EPA Dietary Exposure Evaluation Model (DEEM). A second mixture was tested using the same dose levels of all pesticides, but excluding malathion (reduced ray). The experimental approach first required characterization of dose-response curves for the individual OPs to build a dose-additivity model. A series of behavioral measures were evaluated in adult male Long-Evans rats at the time of peak effect following a single oral dose, and then tissues were collected for measurement of cholinesterase (ChE) activity. Neurochemical (blood and brain ChE activity) and behavioral (motor activity, gait score, tail-pinch response score) endpoints were evaluated statistically for evidence of additivity. The additivity model constructed from the single-chemical data was used to predict the effects of the pesticide mixture along the full ray (10-450 mg/kg) and the reduced ray (1.75-78.8 mg/kg). The experimental mixture data were also modeled and statistically compared to the additivity models. Analysis of the 5-OP mixture (the full ray) revealed significant deviation from additivity for all endpoints except tail-pinch response. Greater-than-additive responses (synergism) were observed at the lower doses of the 5-OP mixture, which contained non-effective dose levels of each of the components. The predicted effective doses (ED20, ED50) were about half those predicted by additivity, and for brain ChE and motor activity, there was a threshold shift in the dose-response curves. For brain ChE and motor activity, there was no difference between the full (5-OP mixture) and reduced (4-OP mixture) rays, indicating that malathion did not influence the non-additivity. While the reduced ray for blood ChE showed greater deviation from additivity without malathion in the mixture, the non-additivity observed for the gait score was reversed when malathion was removed. Thus, greater-than-additive interactions were detected for both the full and reduced ray mixtures, and the role of malathion in the interactions varied depending on the endpoint. In all cases, the deviations from additivity occurred at the lower end of the dose-response curves.
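    The dose-additivity null model used above can be sketched for the simple case of parallel Hill dose-response curves sharing a common maximum, where each component contributes an equivalent dose of a reference chemical through its relative potency. The ED50s and mixing ratio below are hypothetical, not the fitted OP values from the study:

```python
def hill(dose, emax, ed50):
    """Hyperbolic (Hill, slope 1) dose-response curve."""
    return emax * dose / (ed50 + dose)

def dose_additive_prediction(total_dose, proportions, ed50s, emax=100.0):
    """Predicted mixture response under dose addition on a fixed-ratio ray.
    With parallel curves sharing a common maximum, each component dose is
    converted to an equivalent dose of the reference chemical through its
    relative potency (ED50_ref / ED50_i); the summed equivalent dose is
    then pushed through the reference curve."""
    ed50_ref = ed50s[0]
    equivalent = sum(p * total_dose * ed50_ref / ed50
                     for p, ed50 in zip(proportions, ed50s))
    return hill(equivalent, emax, ed50_ref)

# Hypothetical two-component ray: ED50s of 10 and 20, mixed 50:50.
pred = dose_additive_prediction(10.0, [0.5, 0.5], [10.0, 20.0])
```

    Synergism is then declared where the observed mixture responses sit significantly above this additive prediction, as reported for the lower doses of the 5-OP ray.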

  3. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used to give the mixture model a more flexible structure. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
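    The core idea above — representing a non-stationary series by local linear (autoregressive) models fit to segments — can be sketched without the self-organising network. Below, two AR(1) regimes are fit separately and a fresh segment is assigned to the model with the lower one-step prediction error; the coefficients, noise, and segment lengths are illustrative assumptions:

```python
import numpy as np

def simulate_ar1(a, n, rng):
    """Generate n samples of a zero-mean AR(1) process x_t = a*x_{t-1} + e_t."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.normal()
    return x

def fit_ar1(x):
    """Least-squares estimate of the AR(1) coefficient for one segment."""
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

rng = np.random.default_rng(0)
# Piecewise-stationary series: two regimes with different local dynamics.
series = np.concatenate([simulate_ar1(0.9, 500, rng),
                         simulate_ar1(-0.8, 500, rng)])
segments = [series[:500], series[500:]]
coeffs = [fit_ar1(s) for s in segments]          # one local model per regime

# Assign a new segment to the local model with the lowest prediction error,
# much as the mixture network does when clustering segments.
new = simulate_ar1(-0.8, 200, rng)
errs = [np.mean((new[1:] - a * new[:-1]) ** 2) for a in coeffs]
best_model = int(np.argmin(errs))
```

    SOMAR additionally learns the segmentation itself and updates the local models online through the self-organising map dynamics.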

  4. Numerical study of underwater dispersion of dilute and dense sediment-water mixtures

    NASA Astrophysics Data System (ADS)

    Chan, Ziying; Dao, Ho-Minh; Tan, Danielle S.

    2018-05-01

    As part of the nodule-harvesting process, sediment tailings are released underwater. Due to the long period of clouding in the water during the settling process, this presents a significant environmental and ecological concern. One possible solution is to release a mixture of sediment tailings and seawater, with the aim of reducing the settling duration as well as the amount of spreading. In this paper, we present some results of numerical simulations using the smoothed particle hydrodynamics (SPH) method to model the release of a fixed volume of pre-mixed sediment-water mixture into a larger body of quiescent water. Both the sediment-water mixture and the “clean” water are modeled as two different fluids, with concentration-dependent bulk properties of the sediment-water mixture adjusted according to the initial solids concentration. This numerical model was validated in a previous study, which indicated significant differences in the dispersion and settling process between dilute and dense mixtures, and that a dense mixture may be preferable. For this study, we investigate a wider range of volumetric concentration with the aim of determining the optimum volumetric concentration, as well as its overall effectiveness compared to the original process (100% sediment).

  5. Space-time variation of respiratory cancers in South Carolina: a flexible multivariate mixture modeling approach to risk estimation.

    PubMed

    Carroll, Rachel; Lawson, Andrew B; Kirby, Russell S; Faes, Christel; Aregay, Mehreteab; Watjou, Kevin

    2017-01-01

    Many types of cancer have an underlying spatiotemporal distribution. Spatiotemporal mixture modeling can offer a flexible approach to risk estimation via the inclusion of latent variables. In this article, we examine the application and benefits of using four different spatiotemporal mixture modeling methods in the modeling of cancer of the lung and bronchus as well as "other" respiratory cancer incidences in the state of South Carolina. Of the methods tested, no single method outperforms the other methods; which method is best depends on the cancer under consideration. The lung and bronchus cancer incidence outcome is best described by the univariate modeling formulation, whereas the "other" respiratory cancer incidence outcome is best described by the multivariate modeling formulation. Spatiotemporal multivariate mixture methods can aid in the modeling of cancers with small and sparse incidences when including information from a related, more common type of cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Impact of chemical proportions on the acute neurotoxicity of a mixture of seven carbamates in preweanling and adult rats.

    PubMed

    Moser, Virginia C; Padilla, Stephanie; Simmons, Jane Ellen; Haber, Lynne T; Hertzberg, Richard C

    2012-09-01

    Statistical design and environmental relevance are important aspects of studies of chemical mixtures, such as pesticides. We used a dose-additivity model to test experimentally the default assumptions of dose additivity for two mixtures of seven N-methylcarbamates (carbaryl, carbofuran, formetanate, methomyl, methiocarb, oxamyl, and propoxur). The best-fitting models were selected for the single-chemical dose-response data and used to develop a combined prediction model, which was then compared with the experimental mixture data. We evaluated behavioral (motor activity) and cholinesterase (ChE)-inhibitory (brain, red blood cells) outcomes at the time of peak acute effects following oral gavage in adult and preweanling (17 days old) Long-Evans male rats. The mixtures varied only in their mixing ratios. In the relative potency mixture, proportions of each carbamate were set at equitoxic component doses. A California environmental mixture was based on the 2005 sales of each carbamate in California. In adult rats, the relative potency mixture showed dose additivity for red blood cell ChE and motor activity, and brain ChE inhibition showed a modest greater-than additive (synergistic) response, but only at a middle dose. In rat pups, the relative potency mixture was either dose-additive (brain ChE inhibition, motor activity) or slightly less-than additive (red blood cell ChE inhibition). On the other hand, at both ages, the environmental mixture showed greater-than additive responses on all three endpoints, with significant deviations from predicted at most to all doses tested. Thus, we observed different interactive properties for different mixing ratios of these chemicals. These approaches for studying pesticide mixtures can improve evaluations of potential toxicity under varying experimental conditions that may mimic human exposures.

  7. Implementation of remote sensing data for flood forecasting

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2016-12-01

    Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which models riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainty, e.g., forcing data, initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood event investigation and forecasting. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture data to constrain a hydrologic model, and 2) RS-derived flood extent and levels to constrain a hydraulic model. The hydrologic model is based on a semi-distributed system coupling the two-soil-layer rainfall-runoff model GRKAL with a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data. The model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP, which solves the 2D inertial approximation of the Shallow Water Equations. Streamflow data and RS-derived flood extent and levels were used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space was quantified and discussed.

  8. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    ERIC Educational Resources Information Center

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  9. Dielectric relaxation and hydrogen bonding interaction in xylitol-water mixtures using time domain reflectometry

    NASA Astrophysics Data System (ADS)

    Rander, D. N.; Joshi, Y. S.; Kanse, K. S.; Kumbharkhane, A. C.

    2016-01-01

    The measurements of complex dielectric permittivity of xylitol-water mixtures have been carried out in the frequency range of 10 MHz-30 GHz using a time domain reflectometry technique. Measurements have been done at six temperatures from 0 to 25 °C and at different weight fractions of xylitol (0 < W_X ≤ 0.7) in water. There are different models to explain the dielectric relaxation behaviour of binary mixtures, such as the Debye, Cole-Cole, and Cole-Davidson models. We have observed that the dielectric relaxation behaviour of binary xylitol-water mixtures is well described by the Cole-Davidson model, which has an asymmetric distribution of relaxation times. The dielectric parameters, such as the static dielectric constant and relaxation time, have been evaluated for the mixtures. The molecular interaction between xylitol and water molecules is discussed using the Kirkwood correlation factor (g_eff) and a thermodynamic parameter.
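    The Cole-Davidson model mentioned above has a closed form for the complex permittivity, eps*(w) = eps_inf + (eps_s - eps_inf)/(1 + i*w*tau)^beta. A sketch with illustrative (not fitted) parameters spanning roughly the 10 MHz-30 GHz window of the study:

```python
import numpy as np

def cole_davidson(omega, eps_s, eps_inf, tau, beta):
    """Cole-Davidson complex permittivity:
    eps*(w) = eps_inf + (eps_s - eps_inf) / (1 + i*w*tau)**beta.
    beta = 1 recovers the symmetric Debye model; beta < 1 gives the
    asymmetric distribution of relaxation times noted in the abstract."""
    return eps_inf + (eps_s - eps_inf) / (1.0 + 1j * omega * tau) ** beta

# Illustrative (not fitted) parameters over roughly 10 MHz - 30 GHz
omega = 2.0 * np.pi * np.logspace(7, 10.5, 400)
eps = cole_davidson(omega, eps_s=70.0, eps_inf=4.0, tau=20e-12, beta=0.85)
```

    The real part gives the frequency-dependent dielectric constant and the (negative) imaginary part the dielectric loss; fitting this form to the measured spectra yields the reported static permittivity and relaxation time.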

  10. Variation and distribution of metals and metalloids in soil/ash mixtures from Agbogbloshie e-waste recycling site in Accra, Ghana.

    PubMed

    Itai, Takaaki; Otsuka, Masanari; Asante, Kwadwo Ansong; Muto, Mamoru; Opoku-Ankomah, Yaw; Ansa-Asare, Osmund Duodu; Tanabe, Shinsuke

    2014-02-01

    Illegal import and improper recycling of electronic waste (e-waste) are an environmental issue in developing countries around the world. African countries are no exception to this problem, and the Agbogbloshie market in Accra, Ghana is a well-known e-waste recycling site. We studied the levels of metal(loid)s in mixtures of residual ash, formed by the burning of e-waste, and the cover soil, using a portable X-ray fluorescence spectrometer (P-XRF) coupled with determination of the 1 M HCl-extractable fraction by inductively coupled plasma mass spectrometry. The accuracy and precision of the P-XRF measurements were evaluated by measuring 18 standard reference materials; the results indicated acceptable but limited quality of this method as a screening tool. The HCl-extractable levels of Al, Co, Cu, Zn, Cd, In, Sb, Ba, and Pb in 10 soil/ash mixtures varied by more than one order of magnitude. The levels of these metal(loid)s were found to be correlated with the color (i.e., soil/ash ratio), suggesting that they are being released from disposed e-waste via open burning. The sources of the rarer elements could be constrained through their correlation with the predominant metals. Human hazard quotient values based on ingestion of soil/ash mixtures exceeded unity for Pb, As, Sb, and Cu in a high-exposure scenario. This study showed that along with common metals, rare metal(loid)s are also enriched at the e-waste burning site. We suggest that risk assessment considering exposure to multiple metal(loid)s should be addressed in studies of e-waste recycling sites. © 2013. Published by Elsevier B.V. All rights reserved.

  11. Sensory irritating potency of some microbial volatile organic compounds (MVOCs) and a mixture of five MVOCs.

    PubMed

    Korpi, A; Kasanen, J P; Alarie, Y; Kosma, V M; Pasanen, A L

    1999-01-01

    The authors investigated the ability/potencies of 3 microbial volatile organic compounds and a mixture of 5 microbial volatile organic compounds to cause eye and upper respiratory tract irritation (i.e., sensory irritation), with an animal bioassay. The authors estimated potencies by determining the concentration capable of decreasing the respiratory frequency of mice by 50% (i.e., the RD50 value). The RD50 values for 1-octen-3-ol, 3-octanol, and 3-octanone were 182 mg/m3 (35 ppm), 1359 mg/m3 (256 ppm), and 17586 mg/m3 (3360 ppm), respectively. Recommended indoor air levels calculated from the individual RD50 values for 1-octen-3-ol, 3-octanol, and 3-octanone were 100, 1000, and 13000 microg/m3, respectively-values considerably higher than the reported measured indoor air levels for these compounds. The RD50 value for a mixture of 5 microbial volatile organic compounds was also determined and found to be 3.6 times lower than estimated from the fractional concentrations and the respective RD50s of the individual components. The data support the conclusion that a variety of microbial volatile organic compounds may have some synergistic effects for the sensory irritation response, which constrains the interpretation and application of recommended indoor air levels of individual microbial volatile organic compounds. The results also showed that if a particular component of a mixture was much more potent than the other components, it may dominate the sensory irritation effect. With respect to irritation symptoms reported in moldy houses, the results of this study indicate that the contribution of microbial volatile organic compounds to these symptoms seems less than previously supposed.
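    The additivity benchmark used above — the mixture RD50 estimated from the fractional concentrations and individual RD50s — combines reciprocal potencies linearly. A sketch using the three measured RD50s with assumed equal fractions (the study's actual mixture had five components, and its proportions are not given in the abstract):

```python
def additive_rd50(fractions, rd50s):
    """Dose-additive RD50 predicted for a mixture from its components:
    1 / RD50_mix = sum_i f_i / RD50_i, where f_i is the fractional
    concentration of component i in the mixture."""
    return 1.0 / sum(f / r for f, r in zip(fractions, rd50s))

# The three measured RD50s from the abstract (mg/m3):
# 1-octen-3-ol, 3-octanol, 3-octanone.
rd50s = [182.0, 1359.0, 17586.0]
pred = additive_rd50([1 / 3, 1 / 3, 1 / 3], rd50s)  # assumed equal fractions
```

    An observed mixture RD50 well below this prediction — 3.6 times lower in the study — indicates a greater-than-additive (synergistic) sensory irritation response.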

  12. Broad Feshbach resonance in the 6Li-40K mixture.

    PubMed

    Tiecke, T G; Goosen, M R; Ludewig, A; Gensemer, S D; Kraft, S; Kokkelmans, S J J M F; Walraven, J T M

    2010-02-05

    We study the widths of interspecies Feshbach resonances in a mixture of the fermionic quantum gases 6Li and 40K. We develop a model to calculate the width and position of all available Feshbach resonances for a system. Using the model, we select the optimal resonance to study the 6Li/40K mixture. Experimentally, we obtain the asymmetric Fano line shape of the interspecies elastic cross section by measuring the distillation rate of 6Li atoms from a potassium-rich 6Li/40K mixture as a function of magnetic field. This provides us with the first experimental determination of the width of a resonance in this mixture, ΔB = 1.5(5) G. Our results offer good perspectives for the observation of universal crossover physics using this mass-imbalanced fermionic mixture.

  13. Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Li, Jun

    In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model that takes into account three investor criteria: return, risk, and liquidity. The cardinality constraint, the buy-in threshold constraint, and the round-lot constraints are considered in the proposed model. To overcome the difficulty of evaluating a large set of efficient solutions and selecting the best one on the non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromise solution for the proposed constrained multiobjective portfolio selection model.
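    The compromise approach — selecting one solution from the non-dominated surface by its distance to the ideal point — can be sketched independently of the genetic algorithm. The candidate portfolios and equal objective weights below are hypothetical:

```python
import numpy as np

def compromise_select(points, senses, weights=None):
    """Pick the compromise solution on a non-dominated set: normalize each
    objective to [0, 1], flip minimized objectives so larger is better,
    and return the index of the point with the smallest weighted
    Euclidean distance to the ideal point."""
    P = np.asarray(points, dtype=float)
    lo, hi = P.min(axis=0), P.max(axis=0)
    norm = (P - lo) / np.where(hi > lo, hi - lo, 1.0)
    for j, sense in enumerate(senses):
        if sense == "min":                  # e.g. risk
            norm[:, j] = 1.0 - norm[:, j]
    w = np.ones(P.shape[1]) if weights is None else np.asarray(weights)
    ideal = norm.max(axis=0)                # per-objective best
    dist = np.sqrt(((w * (ideal - norm)) ** 2).sum(axis=1))
    return int(np.argmin(dist))

# Hypothetical candidate portfolios: (return, risk, liquidity)
portfolios = [(0.12, 0.30, 0.5),
              (0.09, 0.10, 0.6),
              (0.05, 0.04, 0.9)]
choice = compromise_select(portfolios, ["max", "min", "max"])
```

    In the paper this distance-to-ideal criterion is embedded in the GA itself, so the search is steered toward the compromise region of the constrained non-dominated surface rather than applied as a post-hoc filter.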

  14. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    PubMed

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

    Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species-composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  15. Tropospheric transport differences between models using the same large-scale meteorological fields

    NASA Astrophysics Data System (ADS)

    Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.

    2017-01-01

    The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from the Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.

  16. Fermion masses in SO(10)

    NASA Astrophysics Data System (ADS)

    Jungman, Gerard

    1992-11-01

    Yukawa-coupling-constant unification together with the known fermion masses is used to constrain SO(10) models. We consider the case of one (heavy) generation, with the tree-level relation mb=mτ, calculating the limits on the intermediate scales due to the known limits on fermion masses. This analysis extends previous analyses which addressed only the simplest symmetry-breaking schemes. In the case where the low-energy model is the standard model with one Higgs doublet, there are very strong constraints due to the known limits on the top-quark mass and the τ-neutrino mass. The two-Higgs-doublet case is less constrained, though we can make progress in constraining this model also. We identify those parameters to which the viability of the model is most sensitive. We also discuss the "triviality" bounds on mt obtained from the analysis of the Yukawa renormalization-group equations. Finally we address the role of a speculative constraint on the τ-neutrino mass, arising from the cosmological implications of anomalous B+L violation in the early Universe.

  17. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

    Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing such analyses. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. For the cartilage experiments, FMMs indicate that at least three mixture components are needed for describing the measured histogram. While the mechanical properties of the softer mixture components, often assumed to be associated with glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e., the collagen network) changed considerably depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e., three, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use.
Copyright © 2014 Elsevier B.V. All rights reserved.
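
    A minimal sketch of the selection workflow described above: fit finite Gaussian mixtures for several candidate component counts and pick the count with the best objective criterion (here BIC). The EM routine and synthetic data are illustrative only, not the paper's published programs or measurements.

```python
import numpy as np

def fit_gmm_1d(x, k, iters=300, tol=1e-9):
    """EM for a 1-D Gaussian mixture; returns log-likelihood and parameters."""
    x = np.asarray(x, float)
    n = x.size
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    sigma = np.full(k, x.std() + 1e-6)
    w = np.full(k, 1.0 / k)
    ll_old = -np.inf
    for _ in range(iters):
        # E-step: weighted component densities and responsibilities
        dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2 * np.pi)))
        tot = dens.sum(axis=1)
        ll = np.log(tot).sum()
        r = dens / tot[:, None]
        # M-step: update weights, means, standard deviations
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
        if ll - ll_old < tol:
            break
        ll_old = ll
    return ll, w, mu, sigma

def bic(ll, k, n):
    """Bayesian information criterion; 3k - 1 free parameters, lower is better."""
    return (3 * k - 1) * np.log(n) - 2 * ll
```

    On synthetic data drawn from two well-separated Gaussians, scanning k = 1..3 and taking the smallest BIC recovers two components, mirroring how the optimum number of constituents is chosen from a nanoindentation histogram.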

  18. Assessment of the Risks of Mixtures of Major Use Veterinary Antibiotics in European Surface Waters.

    PubMed

    Guo, Jiahua; Selby, Katherine; Boxall, Alistair B A

    2016-08-02

    Effects of single veterinary antibiotics on a range of aquatic organisms have been explored in many studies. In reality, surface waters will be exposed to mixtures of these substances. In this study, we present an approach for establishing risks of antibiotic mixtures to surface waters and illustrate this by assessing risks of mixtures of three major use antibiotics (trimethoprim, tylosin, and lincomycin) to algal and cyanobacterial species in European surface waters. Ecotoxicity tests were initially performed to assess the combined effects of the antibiotics to the cyanobacteria Anabaena flos-aquae. The results were used to evaluate two mixture prediction models: concentration addition (CA) and independent action (IA). The CA model performed best at predicting the toxicity of the mixture with the experimental 96 h EC50 for the antibiotic mixture being 0.248 μmol/L compared to the CA predicted EC50 of 0.21 μmol/L. The CA model was therefore used alongside predictions of exposure for different European scenarios and estimations of hazards obtained from species sensitivity distributions to estimate risks of mixtures of the three antibiotics. Risk quotients for the different scenarios ranged from 0.066 to 385 indicating that the combination of three substances could be causing adverse impacts on algal communities in European surface waters. This could have important implications for primary production and nutrient cycling. Tylosin contributed most to the risk followed by lincomycin and trimethoprim. While we have explored only three antibiotics, the combined experimental and modeling approach could readily be applied to the wider range of antibiotics that are in use.
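
    The concentration addition (CA) model used above predicts the mixture EC50 from the single-substance EC50s as EC50_mix = 1 / Σ(p_i / EC50_i), where p_i is the molar fraction of substance i in the mixture. A minimal sketch; the EC50 inputs here are placeholder values, not the study's measurements.

```python
def ca_ec50(fractions, ec50s):
    """Concentration-addition EC50 of a mixture:
    EC50_mix = 1 / sum(p_i / EC50_i), where p_i is the molar fraction
    of component i and EC50_i its single-substance EC50."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("mixture fractions must sum to 1")
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))
```

    For an equimolar three-component mixture with placeholder single-substance EC50s of 0.1, 0.5 and 2.0 μmol/L, the CA prediction is 0.24 μmol/L; the mixture EC50 is dominated by the most potent component, which is why one substance can contribute most of the risk.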

  19. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    Gaussian mixture models are often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance but also good adaptability; even in special cases, such as large grayscale changes, it performs well.
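
    A single-Gaussian-per-pixel running background model illustrates the background-difference idea that the improved Gaussian mixture model builds on; a full GMM keeps several such Gaussians per pixel and, as above, would choose their number adaptively. The learning rate, threshold and initial variance here are illustrative choices, not the paper's parameters.

```python
import numpy as np

class GaussianBackground:
    """Per-pixel running Gaussian background model: a one-component
    simplification of GMM background subtraction."""

    def __init__(self, first_frame, alpha=0.05, thresh=2.5):
        self.mu = first_frame.astype(float)        # per-pixel mean
        self.var = np.full_like(self.mu, 15.0)     # per-pixel variance
        self.alpha = alpha                         # learning rate
        self.thresh = thresh                       # Mahalanobis threshold

    def apply(self, frame):
        """Return a boolean foreground mask and update the background."""
        frame = frame.astype(float)
        d2 = (frame - self.mu) ** 2
        fg = d2 > (self.thresh ** 2) * self.var
        # update the model only where the pixel matched the background
        a = np.where(fg, 0.0, self.alpha)
        self.mu += a * (frame - self.mu)
        self.var = (1.0 - a) * self.var + a * d2
        return fg
```

    Feeding a constant background frame and then a frame containing a bright patch flags only the patch as foreground.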

  20. Mesoscale Modeling of LX-17 Under Isentropic Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, H K; Willey, T M; Friedman, G

    Mesoscale simulations of LX-17 incorporating different equilibrium mixture models were used to investigate the unreacted equation-of-state (UEOS) of TATB. Candidate TATB UEOS were calculated using the equilibrium mixture models and benchmarked with mesoscale simulations of isentropic compression experiments (ICE). X-ray computed tomography (XRCT) data provided the basis for initializing the simulations with realistic microstructural details. Three equilibrium mixture models were used in this study. The single constituent with conservation equations (SCCE) model was based on a mass-fraction weighted specific volume and the conservation of mass, momentum, and energy. The single constituent equation-of-state (SCEOS) model was based on a mass-fraction weighted specific volume and the equation-of-state of the constituents. The kinetic energy averaging (KEA) model was based on a mass-fraction weighted particle velocity mixture rule and the conservation equations. The SCEOS model yielded the stiffest TATB EOS (0.121µ + 0.4958µ² + 2.0473µ³) and, when incorporated in mesoscale simulations of the ICE, demonstrated the best agreement with VISAR velocity data for both specimen thicknesses. The SCCE model yielded a relatively more compliant EOS (0.1999µ − 0.6967µ² + 4.9546µ³) and the KEA model yielded the most compliant EOS of all the equilibrium mixture models. Mesoscale simulations with the lower density TATB adiabatic EOS data demonstrated the least agreement with VISAR velocity data.

  1. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  2. Terrestrial Sagnac delay constraining modified gravity models

    NASA Astrophysics Data System (ADS)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disk around constant Ricci curvature Kerr-f(R0) stellar sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams reunite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.

  3. Modeling and simulating networks of interdependent protein interactions.

    PubMed

    Stöcker, Bianca K; Köster, Johannes; Zamir, Eli; Rahmann, Sven

    2018-05-21

    Protein interactions are fundamental building blocks of biochemical reaction systems underlying cellular functions. The complexity and functionality of these systems emerge not only from the protein interactions themselves but also from the dependencies between these interactions, as generated by allosteric effects or mutual exclusion due to steric hindrance. Therefore, formal models for integrating and utilizing information about interaction dependencies are of high interest. Here, we describe an approach for endowing protein networks with interaction dependencies using propositional logic, thereby obtaining constrained protein interaction networks ("constrained networks"). The construction of these networks is based on public interaction databases as well as text-mined information about interaction dependencies. We present an efficient data structure and algorithm to simulate protein complex formation in constrained networks. The efficiency of the model allows fast simulation and facilitates the analysis of many proteins in large networks. In addition, this approach enables the simulation of perturbation effects, such as knockout of single or multiple proteins and changes of protein concentrations. We illustrate how our model can be used to analyze a constrained human adhesome protein network, which is responsible for the formation of diverse and dynamic cell-matrix adhesion sites. By comparing protein complex formation under known interaction dependencies versus without dependencies, we investigate how these dependencies shape the resulting repertoire of protein complexes. Furthermore, our model enables investigating how the interplay of network topology with interaction dependencies influences the propagation of perturbation effects across a large biochemical system. 
Our simulation software CPINSim (for Constrained Protein Interaction Network Simulator) is available under the MIT license at http://github.com/BiancaStoecker/cpinsim and as a Bioconda package (https://bioconda.github.io).

  4. Activities of mixtures of soil-applied herbicides with different molecular targets.

    PubMed

    Kaushik, Shalini; Streibig, Jens Carl; Cedergreen, Nina

    2006-11-01

    The joint action of soil-applied herbicide mixtures with similar or different modes of action has been assessed by using the additive dose model (ADM). The herbicides chlorsulfuron, metsulfuron-methyl, pendimethalin and pretilachlor, applied either singly or in binary mixtures, were used on rice (Oryza sativa L.). The growth (shoot) response curves were described by a logistic dose-response model. The ED50 values and their corresponding standard errors obtained from the response curves were used to test statistically if the shape of the isoboles differed from the reference model (ADM). Results showed that mixtures of herbicides with similar molecular targets, i.e. chlorsulfuron and metsulfuron (acetolactate synthase (ALS) inhibitors), and with different molecular targets, i.e. pendimethalin (microtubule assembly inhibitor) and pretilachlor (very long chain fatty acids (VLCFAs) inhibitor), followed the ADM. Mixing herbicides with different molecular targets gave different results depending on whether pretilachlor or pendimethalin was involved. In general, mixtures of pretilachlor and sulfonylureas showed synergistic interactions, whereas mixtures of pendimethalin and sulfonylureas exhibited either antagonistic or additive activities. Hence, there is a large potential for both increasing the specificity of herbicides by using mixtures and lowering the total dose for weed control, while at the same time delaying the development of herbicide resistance by using mixtures with different molecular targets. Copyright (c) 2006 Society of Chemical Industry.

  5. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.

  6. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    ERIC Educational Resources Information Center

    Lu, Yi

    2016-01-01

    To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…

  7. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  8. Numerical modeling and analytical modeling of cryogenic carbon capture in a de-sublimating heat exchanger

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.

    2017-12-01

    Both a numerical and an analytical model of the heat and mass transfer processes in a CO2/N2 mixture-gas de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimating rate of CO2 in the mixture gas. The mixture gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution and de-sublimating rate of CO2 through the whole heat exchanger were computed using both the numerical and the analytical model. The numerical model is built using EES [1] (Engineering Equation Solver). Based on the simulations, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of the inlet flue gas.

  9. Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake

    NASA Astrophysics Data System (ADS)

    Muller, S. J.; Gerber, S.

    2013-12-01

    The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. 
Though results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to better constrain projections for the land carbon sink.

  10. Structure investigations on assembled astaxanthin molecules

    NASA Astrophysics Data System (ADS)

    Köpsel, Christian; Möltgen, Holger; Schuch, Horst; Auweter, Helmut; Kleinermanns, Karl; Martin, Hans-Dieter; Bettermann, Hans

    2005-08-01

    The carotenoid r,r-astaxanthin (3R,3′R-dihydroxy-4,4′-diketo-β-carotene) forms different types of aggregates in acetone-water mixtures. H-type aggregates were found in mixtures with a high proportion of water (e.g. a 1:9 acetone-water mixture), whereas two different types of J-aggregates were identified in mixtures with a lower proportion of water (a 3:7 acetone-water mixture). These aggregates were characterized by recording UV/vis absorption spectra, CD spectra and fluorescence emissions. The sizes of the molecular assemblies were determined by dynamic light scattering experiments. The hydrodynamic diameter of the assemblies amounts to 40 nm in 1:9 acetone-water mixtures and grows to above 1 μm in 3:7 acetone-water mixtures. Scanning tunneling microscopy monitored astaxanthin aggregates on graphite surfaces. The structure of the H-aggregate was obtained by molecular modeling calculations and confirmed by calculating the electronic absorption spectrum and the CD spectrum with the molecular modeling structure used as input.

  11. Mixture modelling for cluster analysis.

    PubMed

    McLachlan, G J; Chang, S U

    2004-10-01

    Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
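
    The assignment rule described above, allocating each observation to the component with the highest estimated posterior probability, is direct once the mixture parameters are in hand. A sketch for univariate normal components (the parameter values in the usage are illustrative):

```python
import numpy as np

def posterior_assign(x, w, mu, sigma):
    """Cluster by maximum posterior probability under a fitted univariate
    normal mixture with weights w, means mu and standard deviations sigma."""
    x = np.asarray(x, float)[:, None]
    dens = (w * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * np.sqrt(2 * np.pi)))          # weighted densities
    post = dens / dens.sum(axis=1, keepdims=True)    # posterior memberships
    return post.argmax(axis=1), post
```

    With two components centered at 0 and 10, the observations -1 and 0.5 fall in the first cluster and 9 and 11 in the second, exactly the outright clustering the text describes.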

  12. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

    Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, the PDF cannot be well represented by a single-peak function. For the purpose of representing the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen and used as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for the description of the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error and enhances the fitting accuracy, and the load spectra compiled by this method can better reflect the actual load characteristics of the gear component.
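
    The three judging criteria named above can be computed directly from the binned load data and the fitted mixture density; a minimal sketch (the bin densities in the usage are placeholders):

```python
import numpy as np

def fit_criteria(observed, predicted):
    """Coefficient of determination (R^2), mean square error, and maximum
    deviation between histogram densities and a fitted model density."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    resid = observed - predicted
    r2 = 1.0 - (resid ** 2).sum() / ((observed - observed.mean()) ** 2).sum()
    return r2, (resid ** 2).mean(), np.abs(resid).max()
```

    A perfect fit gives R² = 1 with zero mean square error and zero maximum deviation; candidate mixture models can then be ranked on these three numbers.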

  13. Compact determination of hydrogen isotopes

    DOE PAGES

    Robinson, David

    2017-04-06

    Scanning calorimetry of a confined, reversible hydrogen sorbent material has been previously proposed as a method to determine compositions of unknown mixtures of diatomic hydrogen isotopologues and helium. Application of this concept could result in greater process knowledge during the handling of these gases. Previously published studies have focused on mixtures that do not include tritium. This paper focuses on modeling to predict the effect of tritium in mixtures of the isotopologues on a calorimetry scan. Furthermore, the model predicts that tritium can be measured with a sensitivity comparable to that observed for hydrogen-deuterium mixtures, and that under some conditions, it may be possible to determine the atomic fractions of all three isotopes in a gas mixture.

  14. Detecting Darwinism from Molecules in the Enceladus Plumes, Jupiter's Moons, and Other Planetary Water Lagoons

    PubMed Central

    2017-01-01

    Abstract To the astrobiologist, Enceladus offers easy access to a potential subsurface biosphere via the intermediacy of a plume of water emerging directly into space. A direct question follows: If we were to collect a sample of this plume, what in that sample, through its presence or its absence, would suggest the presence and/or absence of life in this exotic locale? This question is, of course, relevant for life detection in any aqueous lagoon that we might be able to sample. This manuscript reviews physical chemical constraints that must be met by a genetic polymer for it to support Darwinism, a process believed to be required for a chemical system to generate properties that we value in biology. We propose that the most important of these is a repeating backbone charge; a Darwinian genetic biopolymer must be a “polyelectrolyte.” Relevant to mission design, such biopolymers are especially easy to recover and concentrate from aqueous mixtures for detection, simply by washing the aqueous mixtures across a polycharged support. Several device architectures are described to ensure that, once captured, the biopolymer meets two other requirements for Darwinism, homochirality and a small building block “alphabet.” This approach is compared and contrasted with alternative biomolecule detection approaches that seek homochirality and constrained alphabets in non-encoded biopolymers. This discussion is set within a model for the history of the terran biosphere, identifying points in that natural history where these alternative approaches would have failed to detect terran life. Key Words: Enceladus—Life detection—Europa—Icy moon—Biosignatures—Polyelectrolyte theory of the gene. Astrobiology 17, 840–851. PMID:28665680

  15. Development of a Front Tracking Method for Two-Phase Micromixing of Incompressible Viscous Fluids with Interfacial Tension in Solvent Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yijie; Lim, Hyun-Kyung; de Almeida, Valmor F

    2012-06-01

    This progress report describes the development of a front tracking method for the solution of the governing equations of motion for two-phase micromixing of incompressible, viscous, liquid-liquid solvent extraction processes. The ability to compute the detailed local interfacial structure of the mixture allows characterization of the statistical properties of the two-phase mixture in terms of droplets, filaments, and other structures which emerge as a dispersed phase embedded into a continuous phase. Such a statistical picture provides the information needed for building a consistent coarsened model applicable to the entire mixing device. Coarsening is an undertaking for a future mathematical development and is outside the scope of the present work. We present here a method for accurate simulation of the micromixing dynamics of an aqueous and an organic phase exposed to intense centrifugal force and shearing stress. The onset of mixing is the result of the combination of the classical Rayleigh-Taylor and Kelvin-Helmholtz instabilities. A mixing environment that emulates a sector of the annular mixing zone of a centrifugal contactor is used for the mathematical domain. The domain is small enough to allow for resolution of the individual interfacial structures and large enough to allow for an analysis of their statistical distribution of sizes and shapes. A set of accurate algorithms for this application requires an advanced front tracking approach constrained by the incompressibility condition. This research is aimed at designing and implementing these algorithms. We demonstrate verification and convergence results for one-phase and unmixed, two-phase flows. In addition we report on preliminary results for mixed, two-phase flow for realistic operating flow parameters.

  16. Bladed Terrain on Pluto: Possible Origins and Evolutions

    NASA Technical Reports Server (NTRS)

    Moore, Jeffrey M.; Howard, Alan D.; Umurhan, Orkan M.; White, Oliver L.; Schenk, Paul; Beyer, Ross A.; McKinnon, William B.; Spencer, John R.; Singer, Kelsi N.; Grundy, William N.; hide

    2016-01-01

    Pluto's Bladed Terrain (centered roughly 20 deg N, 225 deg E) covers the flanks and crests of the informally named Tartarus Dorsa with numerous roughly aligned blade-like ridges oriented approx. North-South; it may also stretch considerably farther east onto the non-close-approach hemisphere, but that inference is tentative. Individual ridges are typically several hundred meters high and are spaced 5 to 10 km crest to crest, separated by V-shaped valleys. Many ridges merge at acute angles to form Y-shaped junctions in plan view. We suspect the principal composition of the blades themselves is methane or a methane-rich mixture. (Methane is spectroscopically strongly observed on the optical surfaces of blades.) Nitrogen ice is very probably too soft to support their topography. Cemented mixtures of volatile and non-volatile ices may also provide a degradable but relief-supporting "bedrock" for the blades, perhaps analogous to Callisto. Currently we are considering several hypotheses for the origin of the deposit from which Bladed Terrain has evolved, including aeolian deposition, atmospheric condensation, updoming and exhumation, volcanic intrusions or extrusions, and crystal growth, among others. We are reviewing several processes as candidate creators or sculptors of the blades. Perhaps they are primary depositional patterns such as dunes, differential condensation patterns (as on Callisto), or fissure extrusions. Alternatively, perhaps they are the consequence of differential erosion (such as sublimation erosion widening and deepening along cracks), variations in substrate properties, or mass wasting into the subsurface, or were sculpted by a combination of directional winds and solar insolation orientation. We will consider the roles of the long-term increasing solar flux and short periods of warm, thick atmospheres. Hypotheses will be ranked based on observational constraints and modeling to be presented at the conference.

  17. Bladed Terrain on Pluto: Possible Origins and Evolutions

    NASA Astrophysics Data System (ADS)

    Moore, J. M.; Howard, A. D.; Umurhan, O. M.; White, O. L.; Schenk, P.; Beyer, R. A.; McKinnon, W. B.; Spencer, J. R.; Singer, K. N.; Grundy, W. M.; Nimmo, F.; Young, L. A.; Stern, A.; Weaver, H. A., Jr.; Olkin, C.; Ennico Smith, K.; Collins, G. C.

    2016-12-01

    Pluto's Bladed Terrain (centered roughly 20°N, 225°E) covers the flanks and crests of the informally named Tartarus Dorsa with numerous roughly aligned blade-like ridges oriented North-South; it may also stretch considerably farther east onto the non-close-approach hemisphere, but that inference is tentative. Individual ridges are typically several hundred meters high and are spaced 5 to 10 km crest to crest, separated by V-shaped valleys. Many ridges merge at acute angles to form Y-shaped junctions in plan view. We suspect the principal composition of the blades themselves is methane or a methane-rich mixture. (Methane is spectroscopically strongly observed on the optical surfaces of blades.) Nitrogen ice is very probably too soft to support their topography. Cemented mixtures of volatile and non-volatile ices may also provide a degradable but relief-supporting "bedrock" for the blades, perhaps analogous to Callisto. Currently we are considering several hypotheses for the origin of the deposit from which Bladed Terrain has evolved, including aeolian deposition, atmospheric condensation, updoming and exhumation, volcanic intrusions or extrusions, and crystal growth, among others. We are reviewing several processes as candidate creators or sculptors of the blades. Perhaps they are primary depositional patterns such as dunes, differential condensation patterns (as on Callisto), or fissure extrusions. Alternatively, perhaps they are the consequence of differential erosion (such as sublimation erosion widening and deepening along cracks), variations in substrate properties, or mass wasting into the subsurface, or were sculpted by a combination of directional winds and solar insolation orientation. We will consider the roles of the long-term increasing solar flux and short periods of warm, thick atmospheres. Hypotheses will be ranked based on observational constraints and modeling to be presented at the conference.

  18. Lattice model for water-solute mixtures.

    PubMed

    Furlan, A P; Almarza, N G; Barbosa, M C

    2016-10-14

A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low-density liquid, and high-density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to obtain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as a function of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.

  19. Highly Constrained Bicyclic Scaffolds for the Discovery of Protease-Stable Peptides via mRNA Display.

    PubMed

    Hacker, David E; Hoinka, Jan; Iqbal, Emil S; Przytycka, Teresa M; Hartman, Matthew C T

    2017-03-17

    Highly constrained peptides such as the knotted peptide natural products are promising medicinal agents because of their impressive biostability and potent activity. Yet, libraries of highly constrained peptides are challenging to prepare. Here, we present a method which utilizes two robust, orthogonal chemical steps to create highly constrained bicyclic peptide libraries. This technology was optimized to be compatible with in vitro selections by mRNA display. We performed side-by-side monocyclic and bicyclic selections against a model protein (streptavidin). Both selections resulted in peptides with mid-nanomolar affinity, and the bicyclic selection yielded a peptide with remarkable protease resistance.

  20. Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation

    NASA Astrophysics Data System (ADS)

    Du, Jiaoman; Yu, Lean; Li, Xiang

    2016-04-01

Hazardous materials transportation is an important and pressing issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time, and fuel consumption. First, we present the risk model, travel time model, and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
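
The chance constraints above rest on credibility theory, where a constraint such as Cr{travel time ≤ t} ≥ α is checked against the credibility measure of a fuzzy variable. As a minimal sketch (not the paper's implementation), the credibility that a triangular fuzzy arc length (a, b, c) stays below a threshold has a simple closed form, the average of possibility and necessity:

```python
def credibility_leq(a, b, c, t):
    """Credibility Cr{xi <= t} for a triangular fuzzy variable (a, b, c),
    computed as the average of possibility and necessity (Liu's credibility theory)."""
    if t < a:
        return 0.0
    if t < b:
        return (t - a) / (2.0 * (b - a))
    if t < c:
        return (t - 2.0 * b + c) / (2.0 * (c - b))
    return 1.0

def satisfies_chance_constraint(a, b, c, t_max, alpha):
    """A chance constraint Cr{T <= t_max} >= alpha becomes a feasibility check."""
    return credibility_leq(a, b, c, t_max) >= alpha
```

In practice a path's fuzzy travel time would be aggregated over its arcs and embedded in the genetic algorithm's fitness evaluation; for general fuzzy variables without a closed form, fuzzy simulation plays the role of this analytic expression.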

  1. Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required

    NASA Astrophysics Data System (ADS)

    Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.

    2017-12-01

A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements when only one measurement technique or phase type is used. However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely constraining the starting model. We also explore what types of datasets are needed to uniquely constrain the orientation(s) of anisotropic symmetry if the mechanism is assumed.

  2. Constraining the dark energy equation of state using Bayes theorem and the Kullback-Leibler divergence

    NASA Astrophysics Data System (ADS)

    Hee, S.; Vázquez, J. A.; Handley, W. J.; Hobson, M. P.; Lasenby, A. N.

    2017-04-01

    Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era cosmic microwave background, baryonic acoustic oscillations (BAO), Type Ia supernova (SNIa) and Lyman α (Lyα) data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation of state posterior in the range 1.5 < z < 3. Although the concordance Λ cold dark matter (ΛCDM) model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other, a supernegative equation of state (also known as 'phantom dark energy') is identified within the 1.5σ confidence intervals of the posterior distribution. To identify the power of different data sets in constraining the dark energy equation of state, we use a novel formulation of the Kullback-Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible data set combination. The SNIa and BAO data sets are shown to provide much more constraining power in comparison to the Lyα data sets. Further, SNIa and BAO constrain most strongly around redshift range 0.1-0.5, whilst the Lyα data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular data set, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested despite the weakly preferred w(z) structure in the data.
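
The Kullback-Leibler divergence used here quantifies how much a data set moves the prior to the posterior. As an illustrative sketch (a Gaussian prior and posterior on a single parameter, not the paper's w(z) reconstruction), the divergence has a closed form, and a strongly constraining data set yields a much larger value than a weakly constraining one:

```python
import math

def kl_gaussian(m1, s1, m0, s0):
    """KL divergence D(N(m1, s1^2) || N(m0, s0^2)) in nats: the information
    gained when a Gaussian prior N(m0, s0^2) is updated to a Gaussian posterior."""
    return math.log(s0 / s1) + (s1**2 + (m1 - m0)**2) / (2.0 * s0**2) - 0.5

# A data set that narrows a broad prior (mean 0, sd 2) to a tight posterior
# (mean -1, sd 0.2) carries far more information than one that barely moves it.
strong = kl_gaussian(-1.0, 0.2, 0.0, 2.0)
weak = kl_gaussian(-0.1, 1.8, 0.0, 2.0)
```

Summed over the reconstruction nodes, this is the kind of number that ranks SNIa and BAO above Lyα in constraining power.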

  3. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. III. Kernel density estimators vs mixture models

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
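
Both descriptions compared above can be sketched in a few lines. The following illustrative example (synthetic dbh data and made-up parameters, not the study's fitted values) evaluates a two-component Weibull mixture density alongside a plain fixed-bandwidth Gaussian kernel density estimator; both are proper densities over the diameter range:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-cohort dbh sample (cm); parameters are illustrative only.
dbh = np.concatenate([rng.weibull(2.0, 300) * 15 + 5,    # younger cohort
                      rng.weibull(3.5, 200) * 20 + 30])  # older cohort

def weibull_pdf(x, shape, scale, loc=0.0):
    """Three-parameter Weibull density (zero below the location parameter)."""
    z = np.clip((x - loc) / scale, 0, None)
    return np.where(x > loc, (shape / scale) * z**(shape - 1) * np.exp(-z**shape), 0.0)

def mixture_pdf(x, w, p1, p2):
    """Two-component Weibull mixture with weight w on the first cohort."""
    return w * weibull_pdf(x, *p1) + (1 - w) * weibull_pdf(x, *p2)

def gaussian_kde(x, sample, bandwidth):
    """Plain fixed-bandwidth Gaussian kernel density estimator."""
    u = (x[:, None] - sample[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(sample) * bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(0, 120, 2000)
f_mix = mixture_pdf(grid, 0.6, (2.0, 15.0, 5.0), (3.5, 20.0, 30.0))
f_kde = gaussian_kde(grid, dbh, bandwidth=2.5)
```

The parametric mixture imposes the two-cohort shape explicitly, while the KDE lets the sample speak for itself; comparing the two on the same grid is essentially the exercise of the paper.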

  4. Constraining ecosystem processes from tower fluxes and atmospheric profiles.

    PubMed

    Hill, T C; Williams, M; Woodward, F I; Moncrieff, J B

    2011-07-01

    The planetary boundary layer (PBL) provides an important link between the scales and processes resolved by global atmospheric sampling/modeling and site-based flux measurements. The PBL is in direct contact with the land surface, both driving and responding to ecosystem processes. Measurements within the PBL (e.g., by radiosondes, aircraft profiles, and flask measurements) have a footprint, and thus an integrating scale, on the order of 1-100 km. We use the coupled atmosphere-biosphere model (CAB) and a Bayesian data assimilation framework to investigate the amount of biosphere process information that can be inferred from PBL measurements. We investigate the information content of PBL measurements in a two-stage study. First, we demonstrate consistency between the coupled model (CAB) and measurements, by comparing the model to eddy covariance flux tower measurements (i.e., water and carbon fluxes) and also PBL scalar profile measurements (i.e., water, carbon dioxide, and temperature) from a Canadian boreal forest. Second, we use the CAB model in a set of Bayesian inversion experiments using synthetic data for a single day. In the synthetic experiment, leaf area and respiration were relatively well constrained, whereas surface albedo and plant hydraulic conductance were only moderately constrained. Finally, the abilities of the PBL profiles and the eddy covariance data to constrain the parameters were largely similar and only slightly lower than the combination of both observations.

  5. Weighting climate model projections using observational constraints.

    PubMed

    Gillett, Nathan P

    2015-11-13

    Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5. © 2015 The Authors.
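
The weighting idea can be sketched simply: each model receives a likelihood weight based on how close its transient climate response lies to the observationally constrained estimate, and projections are averaged with those weights rather than scaled linearly. All numbers below are made up for illustration (they are not CMIP values or the paper's constraint):

```python
import math

# Hypothetical ensemble: each model's transient climate response (TCR, K)
# paired with its projected 2081-2100 warming (K); values are illustrative.
models = [(1.2, 1.1), (1.6, 1.5), (2.0, 1.9), (2.4, 2.3), (2.8, 2.6)]

def weight(tcr_model, tcr_obs=1.6, sigma=0.3):
    """Gaussian likelihood weight of a model given an observationally
    constrained TCR estimate (both numbers assumed for illustration)."""
    return math.exp(-0.5 * ((tcr_model - tcr_obs) / sigma) ** 2)

w = [weight(tcr) for tcr, _ in models]
weighted_warming = sum(wi * dT for wi, (_, dT) in zip(w, models)) / sum(w)
unweighted_warming = sum(dT for _, dT in models) / len(models)
```

Because the assumed observational TCR sits below the ensemble center, the weighted projection comes out cooler than the raw ensemble mean, mirroring the shift from the unweighted to the observationally constrained range reported in the abstract.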

  6. Metal silicate mixtures - Spectral properties and applications to asteroid taxonomy

    NASA Technical Reports Server (NTRS)

    Cloutis, Edward A.; Smith, Dorian G. W.; Lambert, Richard St. J.; Gaffey, Michael J.

    1990-01-01

    The reflectance spectra of combinations of olivine, orthopyroxene, and iron meteorite metal are experimentally studied, and the obtained variations in spectral properties are used to constrain the physical and chemical properties of the assemblages. The presence of metal most noticeably affects band area ratios, peak-to-peak and peak-to-minimum reflectance ratios, and band widths. Band width and band areas are useful for determining metal abundance in olivine and metal and orthopyroxene and metal assemblages, respectively. Mafic silicate grain size variations are best determined using band depth criteria. Band centers are most useful for determining mafic silicate composition. An application of these parameters to the S-class asteroid Flora is presented.

  7. Competitive adsorption in model charged protein mixtures: Equilibrium isotherms and kinetics behavior

    NASA Astrophysics Data System (ADS)

    Fang, F.; Szleifer, I.

    2003-07-01

    The competitive adsorption of proteins of different sizes and charges is studied using a molecular theory. The theory enables the study of charged systems explicitly including the size, shape, and charge distributions of all the molecular species in the mixture. Thus, this approach goes beyond the commonly used Poisson-Boltzmann approximation. The adsorption isotherms are studied for mixtures of two proteins of different size and charge. The amount of protein adsorbed and the fraction of each protein are calculated as a function of the bulk composition of the solution and the amount of salt in the system. It is found that the total amount of protein adsorbed is a monotonically decreasing function of the fraction of large proteins in the bulk solution and, for fixed protein composition, of the salt concentration. However, the composition of the adsorbed layer is a complicated function of the bulk composition and solution ionic strength. The structure of the adsorbed layer depends upon the bulk composition and salt concentration. In general, multilayers adsorb due to the long-range character of the electrostatic interactions. When the large proteins are in very large excess in the bulk, it is found that the structure of the adsorbed multilayer is such that the layer in contact with the surface is composed of a mixture of large and small proteins, whereas the second and third layers are almost exclusively composed of large proteins. The theory is also generalized to study time-dependent adsorption. The approach is based on a separation of time scales into fast modes for the salt ions and the solvent and slow modes for the proteins. The dynamic equations are written for the slow modes, while the fast ones are obtained from the condition of equilibrium constrained to the distribution of proteins given by the slow modes. Two different processes are presented: adsorption from a homogeneous solution onto a charged surface at low salt concentration and with a large excess of the large proteins in the bulk; and the kinetics of structural and adsorption change upon changing the salt concentration of the bulk solution from low to high. The first process shows a large overshoot of the large proteins on the surface due to their excess in solution, followed by surface replacement by the smaller molecules. The second process shows a very fast desorption of the large proteins followed by adsorption at later stages. This process is found to be driven by large electrostatic repulsions induced by the fast salt ions approaching the surface. The relevance of the theoretical predictions to experimental systems and possible directions for improvement of the theory are discussed.

  8. The nonlinear model for emergence of stable conditions in gas mixture in force field

    NASA Astrophysics Data System (ADS)

    Kalutskov, Oleg; Uvarova, Liudmila

    2016-06-01

    The case of M-component liquid evaporation from a straight cylindrical capillary into an N-component gas mixture in the presence of external forces is reviewed. It is assumed that the gas mixture is not ideal. Stable states in the gas phase can form during the evaporation process for certain model parameter values because of the nonlinearity of the initial mass transfer equations. The critical concentrations of the resulting gas mixture components (the critical component concentrations at which stable states occur in the mixture) were determined mathematically for the case of single-component fluid evaporation into a two-component atmosphere. It was concluded that this equilibrium concentration ratio of the mixture components can be achieved by external force influence on the mass transfer processes. This is one of the ways to create sustainable gas clusters that can be used effectively in modern nanotechnology.

  9. A general mixture theory. I. Mixtures of spherical molecules

    NASA Astrophysics Data System (ADS)

    Hamad, Esam Z.

    1996-08-01

    We present a new general theory for obtaining mixture properties from the pure species equations of state. The theory addresses the composition and the unlike-interaction dependence of the mixture equation of state. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike-interaction parameters. In this first part of the work, details are presented for the first and second levels of approximation for spherical molecules. The second-order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar with size ratio of three). For nonadditive hard spheres, comparison with compressibility factor simulation data over a wide range of density, composition, and nonadditivity parameter gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Andersen perturbation theory.

  10. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    PubMed

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null-hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
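
The mixture idea can be sketched with a small EM iteration: significant p values on (0, .05) are uniform under H₀ and follow a decreasing parametric density under H₁ (here an assumed truncated exponential standing in for the paper's parametric model, and EM point estimation standing in for the paper's Bayesian inference). EM estimates the H₀ proportion, and the responsibilities give each p value's probability of originating from H₀:

```python
import random, math

random.seed(1)
ALPHA = 0.05
LAM = 100.0  # rate of the assumed H1 density; an illustrative choice

def f0(p):
    """Density of significant p values under H0: uniform on (0, ALPHA)."""
    return 1.0 / ALPHA

def f1(p):
    """Assumed density under H1: exponential truncated to (0, ALPHA)."""
    return LAM * math.exp(-LAM * p) / (1.0 - math.exp(-LAM * ALPHA))

# Synthetic p-curve: 30% of significant results drawn from H0, 70% from H1.
pvals = []
for _ in range(2000):
    if random.random() < 0.3:
        pvals.append(random.random() * ALPHA)
    else:
        while True:
            p = random.expovariate(LAM)
            if p < ALPHA:
                pvals.append(p)
                break

pi = 0.5  # initial guess for the H0 proportion
for _ in range(200):
    # E-step: responsibility of H0 for each p value; M-step: update proportion.
    resp = [pi * f0(p) / (pi * f0(p) + (1 - pi) * f1(p)) for p in pvals]
    pi = sum(resp) / len(resp)
```

With the components fixed, the estimated proportion lands near the generating value of 0.3, and large p values (close to .05) get high H₀ responsibilities, which is the qualitative pattern the abstract reports.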

  11. Thermodynamics of concentrated electrolyte mixtures and the prediction of mineral solubilities to high temperatures for mixtures in the system Na-K-Mg-Cl-SO4-OH-H2O

    NASA Astrophysics Data System (ADS)

    Pabalan, Roberto T.; Pitzer, Kenneth S.

    1987-09-01

    Mineral solubilities in binary and ternary electrolyte mixtures in the system Na-K-Mg-Cl-SO4-OH-H2O are calculated to high temperatures using available thermodynamic data for solids and for aqueous electrolyte solutions. Activity and osmotic coefficients are derived from the ion-interaction model of Pitzer (1973, 1979) and co-workers, the parameters of which are evaluated from experimentally determined solution properties or from solubility data in binary and ternary mixtures. Excellent to good agreement with experimental solubilities for binary and ternary mixtures indicates that the model can be successfully used to predict mineral-solution equilibria to high temperatures. Although there are currently no theoretical forms for the temperature dependencies of the various model parameters, the solubility data in ternary mixtures can be adequately represented by constant values of the mixing term θij and values of ψijk which are either constant or have a simple temperature dependence. Since no additional parameters are needed to describe the thermodynamic properties of more complex electrolyte mixtures, the calculations can be extended to equilibrium studies relevant to natural systems. Examples of predicted solubilities are given for the quaternary system NaCl-KCl-MgCl2-H2O.

  12. Lattice Boltzmann scheme for mixture modeling: analysis of the continuum diffusion regimes recovering Maxwell-Stefan model and incompressible Navier-Stokes equations.

    PubMed

    Asinari, Pietro

    2009-11-01

    A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model automatically reduces to the Fickian case. Moreover, (2) some tests based on the Stefan diffusion tube are reported to demonstrate the full capabilities of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.

  13. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    NASA Astrophysics Data System (ADS)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

    A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, showing the underlying algorithm for each and comparing the two to indicate their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (genetic algorithm (GA)). To frame the comparison in a sensible way, the methods are used for the stability-indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components), in raw materials and pharmaceutical dosage form, via handling the UV spectral data. For proper analysis, a 6-factor, 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results manifest the problem of nonlinearity and how models like SVR and ANN can handle it. The methods indicate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, yet using cheap and easy-to-handle instruments like the UV spectrophotometer.

  14. Modeling and stabilization results for a charge or current-actuated active constrained layer (ACL) beam model with the electrostatic assumption

    NASA Astrophysics Data System (ADS)

    Özer, Ahmet Özkan

    2016-04-01

    An infinite-dimensional model for a three-layer active constrained layer (ACL) beam, consisting of a piezoelectric elastic layer at the top and an elastic host layer at the bottom constraining a viscoelastic layer in the middle, is obtained for clamped-free boundary conditions by using a thorough variational approach. The Rao-Nakra thin compliant layer approximation is adopted to model the sandwich structure, and the electrostatic approach (magnetic effects are ignored) is assumed for the piezoelectric layer. Instead of voltage actuation of the piezoelectric layer, the piezoelectric layer is proposed to be activated by a charge (or current) source. We show that the closed-loop system with all-mechanical feedback is uniformly exponentially stable. Our result is the outcome of a compact perturbation argument and a unique continuation result for the spectral problem, which relies on the multipliers method. Finally, the modeling methodology of the paper is generalized to multilayer ACL beams, and the uniform exponential stabilizability result is established analogously.

  15. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
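
The reparameterization idea can be sketched with a truncated power basis: writing the piecewise polynomial in terms of 1, x, ..., x^q and (x - k)_+^q makes continuity and smoothness at each knot hold by construction, so an ordinary unconstrained least-squares fit is the implicitly constrained fit. A minimal illustration on synthetic data (not the paper's HIV or blood pressure examples, and plain OLS standing in for the mixed-model machinery):

```python
import numpy as np

def spline_design(x, knots, order=2):
    """Truncated power basis for a regression spline of the given polynomial
    order: columns 1, x, ..., x^order, then (x - k)_+^order for each knot.
    Continuity (with smoothness up to order-1) at each knot holds by
    construction, so unconstrained least squares gives the constrained fit."""
    cols = [x**j for j in range(order + 1)]
    cols += [np.clip(x - k, 0, None)**order for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
truth = 0.2 * x**2 - 0.2 * np.clip(x - 5, 0, None)**2  # C1-smooth piecewise quadratic
y = truth + rng.normal(0, 0.05, x.size)

X = spline_design(x, knots=[5.0], order=2)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta
```

In the mixed-model setting the same basis columns simply enter the fixed- and/or random-effects design matrices, which is why the approach is easy to program in standard software.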

  16. A Mixtures-of-Trees Framework for Multi-Label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the abilities of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods. PMID:25927011

  17. Liquid class predictor for liquid handling of complex mixtures

    DOEpatents

    Segelke, Brent W [San Ramon, CA]; Lekin, Timothy P [Livermore, CA]

    2008-12-09

    A method of establishing liquid classes of complex mixtures for liquid handling equipment. The mixtures are composed of components and the equipment has equipment parameters. The first step comprises preparing a response curve for the components. The next step comprises using the response curve to prepare a response indicator for the mixtures. The next step comprises deriving a model that relates the components and the mixtures to establish the liquid classes.

  18. Regression mixture models: Does modeling the covariance between independent variables and latent classes improve the results?

    PubMed Central

    Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models, i.e., that independent variables in the model are not directly related to latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture models based on these findings are noted. PMID:26881956
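
The basic estimation machinery behind such models can be sketched as a two-class regression mixture fit by EM (an illustrative toy, not the simulation design of the study): each latent class has its own intercept and slope, responsibilities are computed under Gaussian residuals, and each M-step performs weighted least squares per class:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with two latent classes whose regression slopes differ.
n = 400
z = rng.random(n) < 0.5
x = rng.normal(0, 1, n)
y = np.where(z, 1.0 + 2.0 * x, 1.0 - 1.0 * x) + rng.normal(0, 0.3, n)

def em_regression_mixture(x, y, iters=100):
    """EM for a two-class regression mixture y = a_k + b_k * x + noise."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.array([[0.5, 1.0], [0.5, -0.5]])  # initial (intercept, slope) per class
    pi, sigma = np.array([0.5, 0.5]), np.array([1.0, 1.0])
    for _ in range(iters):
        # E-step: responsibilities under Gaussian residual densities.
        dens = np.stack([
            pi[k] * np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k])**2) / sigma[k]
            for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: weighted least squares and variance update for each class.
        for k in range(2):
            w = r[k]
            W = X * w[:, None]
            beta[k] = np.linalg.solve(X.T @ W, X.T @ (w * y))
            sigma[k] = np.sqrt((w * (y - X @ beta[k])**2).sum() / w.sum())
            pi[k] = w.mean()
        pi = pi / pi.sum()
    return beta, pi

beta, pi = em_regression_mixture(x, y)
```

Note that this toy keeps class membership independent of x; modeling the covariance between the predictor and the latent class, as the article recommends, would additionally require a class-membership model such as P(class | x) in the likelihood.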

  19. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for SAT constraint satisfaction problems and for unconstrained minimization of NK functions.

  20. Constraining gross primary production and ecosystem respiration estimates for North America using atmospheric observations of carbonyl sulfide (OCS) and CO2

    NASA Astrophysics Data System (ADS)

    He, W.; Ju, W.; Chen, H.; Peters, W.; van der Velde, I.; Baker, I. T.; Andrews, A. E.; Zhang, Y.; Launois, T.; Campbell, J. E.; Suntharalingam, P.; Montzka, S. A.

    2016-12-01

    Carbonyl sulfide (OCS) is a promising novel atmospheric tracer for studying carbon cycle processes. OCS shares a similar uptake pathway with CO2 during photosynthesis but is not released through a respiration-like process, and thus can be used to partition Gross Primary Production (GPP) from Net Ecosystem-atmosphere CO2 Exchange (NEE). This study uses joint atmospheric observations of OCS and CO2 to constrain GPP and ecosystem respiration (Re). Flask data from tower and aircraft sites over North America are collected. We employ our recently developed CarbonTracker (CT)-Lagrange carbon assimilation system, which is based on the CT framework and the Weather Research and Forecasting - Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) model, and the Simple Biosphere model with simulated OCS (SiB3-OCS), which provides prior GPP, Re and plant OCS uptake fluxes. Plant OCS fluxes derived from both a process model and a GPP-scaled model are tested in our inversion. To investigate the ability of OCS to constrain GPP, and to understand the uncertainty propagated from OCS modeling errors to the constrained fluxes in a dual-tracer system including OCS and CO2, two inversion schemes are implemented and compared: (1) a two-step scheme, which first optimizes GPP using OCS observations and then simultaneously optimizes GPP and Re using CO2 observations, with the OCS-constrained GPP from the first step as prior; and (2) a joint scheme, which simultaneously optimizes GPP and Re using both OCS and CO2 observations. We evaluate the results using GPP estimated from space-borne solar-induced fluorescence observations and a data-driven GPP upscaled from FLUXNET data with a statistical model (Jung et al., 2011). Preliminary results for the year 2010 show that the joint inversion makes simulated mole fractions more consistent with observations for both OCS and CO2; however, the uncertainty of the OCS simulation is larger than that of CO2. The two-step and joint schemes perform similarly in improving consistency with observations for OCS, indicating that OCS can provide an independent constraint in the joint inversion. Optimization yields lower total GPP and Re but higher NEE when tested with prior CO2 fluxes from two biosphere models. This study gives in-depth insight into the role of joint atmospheric OCS and CO2 observations in constraining CO2 fluxes.

  1. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), the mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal not only with nonlinearities in the objective function, but also with uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints, and with fuzziness in the right-hand side constraints. Moreover, this model improves upon conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of the Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions for the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by setting different confidence levels and preference parameters. In addition, the results reflect the interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former can reflect more of the complexities and uncertainties present in practical application. These results can provide a more reliable scientific basis for supporting irrigation water management in arid areas.

  2. A Concentration Addition Model to Assess Activation of the Pregnane X Receptor (PXR) by Pesticide Mixtures Found in the French Diet

    PubMed Central

    de Sousa, Georges; Nawaz, Ahmad; Cravedi, Jean-Pierre; Rahmani, Roger

    2014-01-01

    French consumers are exposed to mixtures of pesticide residues in part through food consumption. As a xenosensor, the pregnane X receptor (hPXR) is activated by numerous pesticides, the combined effect of which is currently unknown. We examined the activation of hPXR by seven pesticide mixtures most likely found in the French diet and by their individual components. The mixtures' effects were estimated using the concentration addition (CA) model. PXR transactivation was measured by monitoring luciferase activity in hPXR/HepG2 cells and CYP3A4 expression in human hepatocytes. The three mixtures with the highest potency were evaluated using the CA model, at equimolar concentrations and at their relative proportions in the diet. The seven mixtures significantly activated hPXR and induced the expression of CYP3A4 in human hepatocytes. Of the 14 pesticides which constitute the three most active mixtures, four were found to be strong hPXR agonists, four medium, and six weak. Depending on the mixture and pesticide proportions, additive, greater-than-additive or less-than-additive effects between compounds were demonstrated. Predictions of the combined effects were obtained with both real-life and equimolar proportions at low concentrations. Pesticides act mostly additively to activate hPXR when present in a mixture. Modulation of hPXR activation and of the induction of its target genes may represent a risk factor contributing to an exacerbated physiological response of the hPXR signaling pathways, and may explain some adverse effects in humans. PMID:25028461
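    The concentration addition (CA) prediction this record relies on has a simple closed form: a mixture in which component i has individual effect concentration EC50_i and molar proportion p_i is predicted to have EC50_mix = 1 / Σ(p_i / EC50_i). A minimal sketch with hypothetical potencies:

```python
# Components with hypothetical individual EC50s (micromolar) and their
# molar proportions in the mixture (values invented for illustration).
ec50 = [2.0, 10.0, 50.0]          # EC50 of each pesticide tested alone
p = [0.2, 0.3, 0.5]               # molar proportion of each in the mixture

# Concentration addition: the mixture is treated as a dilution of a single
# compound, giving a harmonic-type combination of the individual EC50s.
ec50_mix = 1.0 / sum(pi / ei for pi, ei in zip(p, ec50))
print(round(ec50_mix, 2))
```

    Note that the combination is harmonic-like, so a small proportion of a potent agonist lowers the predicted mixture EC50 disproportionately, which is consistent with the strong agonists dominating the mixtures' activity.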

  3. Transient Catalytic Combustor Model With Detailed Gas and Surface Chemistry

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Dietrich, Daniel L.; Mellish, Benjamin P.; Miller, Fletcher J.; Tien, James S.

    2005-01-01

    In this work, we numerically investigate the transient combustion of a premixed gas mixture in a narrow, perfectly-insulated, catalytic channel which can represent an interior channel of a catalytic monolith. The model assumes a quasi-steady gas-phase and a transient, thermally thin solid phase. The gas phase is one-dimensional, but it does account for heat and mass transfer in a direction perpendicular to the flow via appropriate heat and mass transfer coefficients. The model neglects axial conduction in both the gas and in the solid. The model includes both detailed gas-phase reactions and catalytic surface reactions. The reactants modeled so far include lean mixtures of dry CO and CO/H2 mixtures, with pure oxygen as the oxidizer. The results include transient computations of light-off and system response to inlet condition variations. In some cases, the model predicts two different steady-state solutions depending on whether the channel is initially hot or cold. Additionally, the model suggests that the catalytic ignition of CO/O2 mixtures is extremely sensitive to small variations of inlet equivalence ratios and parts per million levels of H2.

  4. Using dynamic N-mixture models to test cavity limitation on northern flying squirrel demographic parameters using experimental nest box supplementation.

    PubMed

    Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica

    2014-06-01

    Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact (BACI) design. Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture models and generalized linear models, and we compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are thus not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparently wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture-mark-recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful for evaluating management effects on animal populations, especially for species that are difficult to detect and in situations where individuals cannot be uniquely identified. They also allow investigation of the effects of site-level covariates when low recapture rates would restrict classic CMR analyses to a subset of sites with the most captures.
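    The core N-mixture idea can be sketched by marginalizing a binomial detection likelihood over a grid of latent Poisson abundances (the static Royle-style model, not the Dail and Madsen dynamic version used in this study; all data and parameter values below are simulated assumptions):

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate repeated counts: R sites, T visits, latent abundance
# N_i ~ Poisson(lam), observed counts y_it ~ Binomial(N_i, p) with
# imperfect per-individual detection probability p.
R, T, lam_true, p_true = 100, 4, 5.0, 0.4
N = rng.poisson(lam_true, R)
y = rng.binomial(N[:, None], p_true, (R, T))

def neg_log_lik(theta, y, K=60):
    """Marginal likelihood, summing the latent abundance out up to K."""
    lam = np.exp(theta[0])                   # log link keeps lam positive
    p = 1.0 / (1.0 + np.exp(-theta[1]))     # logit link keeps p in (0, 1)
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)            # P(N = k) on the grid
    ll = 0.0
    for yi in y:
        # P(all T counts at this site | N = k), for every k on the grid.
        lik_k = np.prod(binom.pmf(yi[:, None], Ns, p), axis=0)
        ll += np.log(np.sum(lik_k * prior))
    return -ll

res = minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(y,))
lam_hat = np.exp(res.x[0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
print(lam_hat, p_hat)   # should land near the true values 5.0 and 0.4
```

    The dynamic extension adds survival and recruitment parameters linking N_i across years, which is what lets the study estimate demographic rates without marked individuals.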

  5. Charge redistribution in QM:QM ONIOM model systems: a constrained density functional theory approach

    NASA Astrophysics Data System (ADS)

    Beckett, Daniel; Krukau, Aliaksandr; Raghavachari, Krishnan

    2017-11-01

    The ONIOM hybrid method has found considerable success in QM:QM studies designed to approximate a high level of theory at a significantly reduced cost. This cost reduction is achieved by treating only a small model system with the target level of theory and the rest of the system with a low, inexpensive level of theory. However, the choice of an appropriate model system is a limiting factor in ONIOM calculations, and effects such as charge redistribution across the model system boundary must be considered as a source of error. In an effort to increase the general applicability of the ONIOM model, a method to treat the charge redistribution effect is developed using constrained density functional theory (CDFT) to constrain the charge experienced by the model system in the full calculation to the link atoms in the truncated model system calculations. Two separate CDFT-ONIOM schemes are developed and tested on a set of 20 reactions with eight combinations of levels of theory. It is shown that a scheme using a scaled Lagrange multiplier term obtained from the low-level CDFT model calculation outperforms ONIOM by 32% to 70% at each combination of levels of theory.
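    For context, the two-level ONIOM extrapolation that the CDFT correction modifies is an energy subtraction scheme. The energies below are hypothetical values in hartrees; only the combination formula is standard:

```python
# Two-level ONIOM energy: combine a cheap calculation on the full (real)
# system with cheap and expensive calculations on the truncated model system:
#   E(ONIOM) = E_low(real) - E_low(model) + E_high(model)
E_low_real = -250.12    # low level, full system    (hypothetical, hartree)
E_low_model = -80.35    # low level, model system   (hypothetical)
E_high_model = -80.90   # high level, model system  (hypothetical)

E_oniom = E_low_real - E_low_model + E_high_model
print(round(E_oniom, 2))
```

    The CDFT schemes in the abstract alter the two model-system terms so that the charge the model fragment experiences in the full calculation is imposed on the link atoms of the truncated calculations.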

  6. Spurious Latent Classes in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Alexeev, Natalia; Templin, Jonathan; Cohen, Allan S.

    2011-01-01

    Mixture Rasch models have been used to study a number of psychometric issues such as goodness of fit, response strategy differences, strategy shifts, and multidimensionality. Although these models offer the potential for improving understanding of the latent variables being measured, under some conditions overextraction of latent classes may…

  7. Individual and binary toxicity of anatase and rutile nanoparticles towards Ceriodaphnia dubia.

    PubMed

    Iswarya, V; Bhuvaneshwari, M; Chandrasekaran, N; Mukherjee, Amitava

    2016-09-01

    Increasing usage of engineered nanoparticles, especially titanium dioxide (TiO2), in various commercial products has necessitated their toxicity evaluation and risk assessment, especially in aquatic ecosystems. In the present study, a comprehensive toxicity assessment of anatase and rutile NPs (individually as well as in a binary mixture) was carried out in a freshwater matrix on Ceriodaphnia dubia under different irradiation conditions, viz. visible and UV-A. Anatase and rutile NPs produced LC50 values of about 37.04 and 48 mg/L, respectively, under visible irradiation, whereas lower LC50 values of about 22.56 (anatase) and 23.76 (rutile) mg/L were noted under UV-A irradiation. A toxic unit (TU) approach was followed to determine the concentrations of the binary mixtures of anatase and rutile. The binary mixture resulted in an antagonistic effect under visible irradiation and an additive effect under UV-A irradiation. Of the two modeling approaches used in the study, the Marking-Dawson model was found to be more appropriate than the Abbott model for the toxicity evaluation of binary mixtures. The agglomeration of NPs played a significant role in the induction of antagonistic and additive effects by the mixture, depending on the irradiation applied. TEM and zeta potential analyses confirmed the surface interactions between anatase and rutile NPs in the mixture. Maximum uptake was noticed at 0.25 total TU of the binary mixture under visible irradiation and at 1 TU of anatase NPs under UV-A irradiation. Individual NPs showed the highest uptake under UV-A rather than visible irradiation; in contrast, the binary mixture showed a different uptake pattern depending on the type of irradiation. Copyright © 2016 Elsevier B.V. All rights reserved.
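    The toxic unit (TU) bookkeeping behind the mixture design is straightforward to reproduce. The LC50s are the visible-light values quoted above; the 0.5 + 0.5 TU design point is an assumed example:

```python
# Toxic units: TU = exposure concentration / single-compound LC50, so
# TU = 1 is the concentration lethal to 50% of test organisms.
lc50 = {"anatase": 37.04, "rutile": 48.0}       # mg/L, visible irradiation

def toxic_units(conc, lc50):
    return {k: conc[k] / lc50[k] for k in conc}

# A 0.5 + 0.5 TU binary mixture uses half of each compound's LC50:
mix = {"anatase": 0.5 * lc50["anatase"], "rutile": 0.5 * lc50["rutile"]}
tu = toxic_units(mix, lc50)
total_tu = sum(tu.values())     # 1 total TU: additivity predicts ~50% mortality
print(total_tu)
```

    Mortality falling short of that 1-TU benchmark indicates antagonism and mortality matching it indicates additivity, which is roughly how the Marking-Dawson-style classification of the mixture effects proceeds.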

  8. Rasch Mixture Models for DIF Detection: A Comparison of Old and New Score Specifications

    ERIC Educational Resources Information Center

    Frick, Hannah; Strobl, Carolin; Zeileis, Achim

    2015-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch…

  9. Modeling biofiltration of VOC mixtures under steady-state conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baltzis, B.C.; Wojdyla, S.M.; Zarook, S.M.

    1997-06-01

    Treatment of air streams contaminated with binary volatile organic compound (VOC) mixtures in classical biofilters under steady-state conditions of operation was described with a general mathematical model. The model accounts for potential kinetic interactions among the pollutants, effects of oxygen availability on biodegradation, and biomass diversification in the filter bed. While the effects of oxygen were always taken into account, two distinct cases were considered for the experimental model validation. The first involves kinetic interactions, but no biomass differentiation, and was used for describing data from biofiltration of benzene/toluene mixtures. The second case assumes that each pollutant is treated by a different type of biomass. Each biomass type is assumed to form separate patches of biofilm on the solid packing material, so kinetic interference does not occur. This model was used for describing biofiltration of ethanol/butanol mixtures. Experiments were performed with classical biofilters packed with mixtures of peat moss and perlite (2:3, volume:volume). The model equations were solved through the use of computer codes based on the fourth-order Runge-Kutta technique for the gas-phase mass balances and the method of orthogonal collocation for the concentration profiles in the biofilms. Good agreement between model predictions and experimental data was found in almost all cases. Oxygen was found to be extremely important in the case of the polar VOCs (ethanol/butanol).

  10. Rational Engineering and Characterization of an mAb that Neutralizes Zika Virus by Targeting a Mutationally Constrained Quaternary Epitope.

    PubMed

    Tharakaraman, Kannan; Watanabe, Satoru; Chan, Kuan Rong; Huan, Jia; Subramanian, Vidya; Chionh, Yok Hian; Raguram, Aditya; Quinlan, Devin; McBee, Megan; Ong, Eugenia Z; Gan, Esther S; Tan, Hwee Cheng; Tyagi, Anu; Bhushan, Shashi; Lescar, Julien; Vasudevan, Subhash G; Ooi, Eng Eong; Sasisekharan, Ram

    2018-05-09

    Following the recent emergence of Zika virus (ZIKV), many murine and human neutralizing anti-ZIKV antibodies have been reported. Given the risk of virus escape mutants, engineering antibodies that target mutationally constrained epitopes with therapeutically relevant potencies can be valuable for combating future outbreaks. Here, we applied computational methods to engineer an antibody, ZAb_FLEP, that targets a highly networked and therefore mutationally constrained surface formed by the envelope protein dimer. ZAb_FLEP neutralized a breadth of ZIKV strains and protected mice in distinct in vivo models, including resolving vertical transmission and fetal mortality in infected pregnant mice. Serial passaging of ZIKV in the presence of ZAb_FLEP failed to generate viral escape mutants, suggesting that its epitope is indeed mutationally constrained. A single-particle cryo-EM reconstruction of the Fab-ZIKV complex validated the structural model and revealed insights into ZAb_FLEP's neutralization mechanism. ZAb_FLEP has potential as a therapeutic in future outbreaks. Copyright © 2018. Published by Elsevier Inc.

  11. Modeling the soil water retention curves of soil-gravel mixtures with regression method on the Loess Plateau of China.

    PubMed

    Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an

    2013-01-01

    Soil water retention parameters are critical to quantify flow and solute transport in vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel contents, and had a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish the water retention curved surface models of the power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable by using either the measured data of separate gravel size group or those of all the three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of the soil-gravel mixtures with two representative gravel contents or bulk densities. Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present.
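    A hedged sketch of the kind of composite curve described here: effective saturation of the fine fraction from a van Genuchten form, scaled linearly in gravel content. The van Genuchten parameters and the linear coefficients a and b below are hypothetical illustrations, not the paper's fitted regression estimates.

```python
import numpy as np

# van Genuchten effective saturation of the fine-soil fraction:
#   Se(h) = (1 + (alpha * |h|)**n) ** (-(1 - 1/n))
# alpha and n are generic clay-loam-like values, not fitted ones.
def van_genuchten_se(h, alpha=0.02, n=1.4):
    return (1.0 + (alpha * np.abs(h)) ** n) ** -(1.0 - 1.0 / n)

# Linear gravel-content correction (hypothetical coefficients a, b): the
# mixture's effective saturation scales linearly with gravel fraction g.
def mixture_se(h, g, a=1.0, b=-0.6):
    return (a + b * g) * van_genuchten_se(h)

h = 1000.0   # pressure head magnitude, cm
for g in (0.0, 0.2, 0.4):
    print(g, round(float(mixture_se(h, g)), 3))
```

    The linear-in-gravel-content form mirrors the reported relationship; the paper's power-linear surfaces additionally let the scaling depend on bulk density.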

  12. Modeling the Soil Water Retention Curves of Soil-Gravel Mixtures with Regression Method on the Loess Plateau of China

    PubMed Central

    Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an

    2013-01-01

    Soil water retention parameters are critical to quantify flow and solute transport in vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel contents, and had a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish the water retention curved surface models of the power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable by using either the measured data of separate gravel size group or those of all the three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of the soil-gravel mixtures with two representative gravel contents or bulk densities. Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present. PMID:23555040

  13. Phenomenological Modeling and Laboratory Simulation of Long-Term Aging of Asphalt Mixtures

    NASA Astrophysics Data System (ADS)

    Elwardany, Michael Dawoud

    The accurate characterization of asphalt mixture properties as a function of pavement service life is becoming more important as more powerful pavement design and performance prediction methods are implemented. Oxidative aging is a major distress mechanism of asphalt pavements. Aging increases the stiffness and brittleness of the material, which leads to a high cracking potential. Thus, an improved understanding of the aging phenomenon and its effect on asphalt binder chemical and rheological properties will allow for the prediction of mixture properties as a function of pavement service life. Many researchers have conducted laboratory binder thin-film aging studies; however, this approach does not allow for studying the physicochemical effects of mineral fillers on age hardening rates in asphalt mixtures. Moreover, the aging phenomenon in the field is governed by the kinetics of binder oxidation, oxygen diffusion through the mastic phase, and oxygen percolation throughout the air-void structure. In this study, laboratory aging trials were conducted on mixtures prepared using component materials from several field projects throughout the USA and Canada. Laboratory-aged materials were compared against field cores sampled at different ages. Results suggested that oven aging of loose mixture at 95°C is the most promising laboratory long-term aging method. Additionally, an empirical model was developed to account for the effect of mineral fillers on age hardening rates in asphalt mixtures. Kinetics modeling was used to predict field aging levels throughout the pavement thickness and to determine the laboratory aging duration required to match field aging. Kinetics model outputs are calibrated using measured data from the field to account for the effects of oxygen diffusion and percolation. Finally, the calibrated model was validated using an independent set of field sections. This work is expected to provide a basis for improved asphalt mixture and pavement design procedures and thereby save taxpayers' money.

  14. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    PubMed

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with the Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
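    The O(h), O(h²), O(h³) residual pattern reflects the standard behaviour of one-sided differentiation formulas of increasing order. The check below contrasts the first-order backward (Euler) difference with a second-order three-point backward formula on a smooth test function; it illustrates the order pattern only and does not reproduce the paper's specific Taylor-type ZNN formula.

```python
import numpy as np

f, x0 = np.sin, 1.0
df_true = np.cos(x0)            # exact derivative, for the error check

errs = []
for h in (0.1, 0.05, 0.025):
    euler = (f(x0) - f(x0 - h)) / h                                   # O(h)
    three_pt = (3 * f(x0) - 4 * f(x0 - h) + f(x0 - 2 * h)) / (2 * h)  # O(h^2)
    errs.append((h, abs(euler - df_true), abs(three_pt - df_true)))
    print(errs[-1])

# Halving h roughly halves the Euler error but quarters the three-point error.
```

    A third-order backward formula would shrink the residual as O(h³), which is the gain the paper's Taylor-type discretization provides over the Euler-type models.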

  15. Kinetics of methane production from the codigestion of switchgrass and Spirulina platensis algae.

    PubMed

    El-Mashad, Hamed M

    2013-03-01

    Anaerobic batch digestion of four feedstocks was conducted at 35 and 50 °C: switchgrass; Spirulina platensis algae; and two mixtures of both switchgrass and S. platensis. Mixture 1 was composed of 87% switchgrass (based on volatile solids) and 13% S. platensis. Mixture 2 was composed of 67% switchgrass and 33% S. platensis. The kinetics of methane production from these feedstocks was studied using four first order models: exponential, Gompertz, Fitzhugh, and Cone. The methane yields after 40 days of digestion at 35 °C were 355, 127, 143 and 198 ml/g VS, respectively for S. platensis, switchgrass, and Mixtures 1 and 2, while the yields at 50 °C were 358, 167, 198, and 236 ml/g VS, respectively. Based on Akaike's information criterion, the Cone model best described the experimental data. The Cone model was validated with experimental data collected from the digestion of a third mixture that was composed of 83% switchgrass and 17% S. platensis. Published by Elsevier Ltd.
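    Of the four candidate models, the first-order exponential is the simplest to fit. The sketch below fits it to hypothetical cumulative-yield data shaped like the S. platensis curve (approaching 355 ml/g VS at 40 days); the data points and starting values are assumptions, and the Cone model preferred by the study would simply swap in a different model function.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order exponential model for cumulative methane yield:
#   B(t) = B0 * (1 - exp(-k * t)),  B0 = ultimate yield, k = rate constant.
def first_order(t, B0, k):
    return B0 * (1.0 - np.exp(-k * t))

# Hypothetical digestion data (day, ml CH4 / g VS), invented to resemble
# a curve saturating near 355 ml/g VS at 40 days.
t = np.array([2, 5, 10, 15, 20, 30, 40], float)
B = np.array([90, 180, 270, 310, 330, 350, 355], float)

(B0, k), _ = curve_fit(first_order, t, B, p0=[350.0, 0.1])
print(round(B0), round(k, 3))
```

    Model comparison as in the abstract would fit each candidate the same way and rank them by AIC computed from the residual sums of squares.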

  16. Advanced stability indicating chemometric methods for quantitation of amlodipine and atorvastatin in their quinary mixture with acidic degradation products

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2016-02-01

    Two advanced, accurate and precise chemometric methods are developed for the simultaneous determination of amlodipine besylate (AML) and atorvastatin calcium (ATV) in the presence of their acidic degradation products in tablet dosage forms. The first method was Partial Least Squares (PLS-1) and the second was Artificial Neural Networks (ANN). PLS was compared to ANN models with and without variable selection procedure (genetic algorithm (GA)). For proper analysis, a 5-factor 5-level experimental design was established resulting in 25 mixtures containing different ratios of the interfering species. Fifteen mixtures were used as calibration set and the other ten mixtures were used as validation set to validate the prediction ability of the suggested models. The proposed methods were successfully applied to the analysis of pharmaceutical tablets containing AML and ATV. The methods indicated the ability of the mentioned models to solve the highly overlapped spectra of the quinary mixture, yet using inexpensive and easy to handle instruments like the UV-VIS spectrophotometer.

  17. Thermal conductivity of disperse insulation materials and their mixtures

    NASA Astrophysics Data System (ADS)

    Geža, V.; Jakovičs, A.; Gendelis, S.; Usiļonoks, I.; Timofejevs, J.

    2017-10-01

    Development of new, more efficient thermal insulation materials is key to reducing heat losses and the associated greenhouse gas emissions. Two innovative materials developed at Thermeko LLC are Izoprok and Izopearl. This research is devoted to an experimental study of the thermal insulation properties of both materials as well as of their mixture. Results show that a mixture of 40% Izoprok and 60% Izopearl has lower thermal conductivity than either pure material. In this work, the temperature dependence of the materials' thermal conductivity is also measured. A novel modelling approach is used to model the spatial distribution of the disperse insulation material. A computational fluid dynamics approach is also used to estimate the role of different heat transfer phenomena in such a porous mixture. Modelling results show that thermal convection plays a small role in heat transfer despite the large fraction of air within the material pores.

  18. A comparative study of mixture cure models with covariate

    NASA Astrophysics Data System (ADS)

    Leng, Oh Yit; Khalid, Zarina Mohd

    2017-05-01

    In survival analysis, the survival time is assumed to follow a non-negative distribution, such as the exponential, Weibull, or log-normal distribution. In some cases, the survival time is influenced by observed factors, and omitting them may cause inaccurate estimation of the survival function; a survival model which incorporates these observed factors as covariates is therefore more appropriate. Besides that, there are cases in which a group of individuals is cured, that is, never experiences the event of interest. Ignoring this cure fraction may lead to overestimation when estimating the survival function. Thus, a mixture cure model is more suitable for modelling survival data in the presence of a cure fraction. In this study, three mixture cure survival models are used to analyse survival data with a covariate and a cure fraction. The first model includes the covariate in the parameterization of the survival function of susceptible individuals, the second model allows the cure fraction to depend on the covariate, and the third model incorporates the covariate in both the cure fraction and the survival function of susceptible individuals. This study aims to compare the performance of these models via a simulation approach. Therefore, survival data with varying sample sizes and cure fractions are simulated, with the survival time assumed to follow the Weibull distribution. The simulated data are then modelled using the three mixture cure survival models. The results show that the three mixture cure models are appropriate for modelling survival data in the presence of a cure fraction and an observed factor.
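    The structural assumption shared by the three models is the mixture cure survival function: a cured fraction pi never experiences the event, and the susceptible remainder follows (here) a Weibull survival curve. The parameter values below are arbitrary illustrations.

```python
import numpy as np

# Mixture cure survival function:
#   S(t) = pi + (1 - pi) * S_u(t),  with Weibull S_u(t) = exp(-(t/scale)**shape)
def mixture_cure_survival(t, pi, shape, scale):
    return pi + (1.0 - pi) * np.exp(-(t / scale) ** shape)

t = np.array([0.0, 1.0, 5.0, 50.0])
S = mixture_cure_survival(t, pi=0.3, shape=1.5, scale=2.0)
# S(0) = 1 and S(t) -> pi = 0.3 as t grows: the plateau is the hallmark
# of a cure fraction that a plain Weibull model cannot reproduce.
print(S.round(3))
```

    Covariates enter by letting pi and/or the Weibull parameters depend on them, which is exactly what distinguishes the three models being compared.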

  19. Sensitivity to gaze-contingent contrast increments in naturalistic movies: An exploratory report and model comparison

    PubMed Central

    Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.

    2015-01-01

    Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546
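    The accelerating contrast-response nonlinearity invoked above is often written in Naka-Rushton form. The sketch below is a generic version of such a transducer, with parameter values that are purely illustrative and not the posterior estimates from this study:

```python
import numpy as np

def transducer_response(c, p=2.4, q=2.0, z=0.1):
    """Generic accelerating contrast-response nonlinearity of the
    Naka-Rushton family, r(c) = c**p / (z**q + c**q).
    Parameter values here are illustrative only."""
    return c ** p / (z ** q + c ** q)

def increment_discriminability(pedestal, delta, **params):
    """Discriminability of a contrast increment as the response
    difference r(c + dc) - r(c); larger means easier to detect."""
    return transducer_response(pedestal + delta, **params) - transducer_response(pedestal, **params)
```

    With p > q the response accelerates at low contrast, which is the property the transducer account uses to explain contrast discrimination performance.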

  20. A BGK model for reactive mixtures of polyatomic gases with continuous internal energy

    NASA Astrophysics Data System (ADS)

    Bisi, M.; Monaco, R.; Soares, A. J.

    2018-03-01

    In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.

  1. A probabilistic assessment of calcium carbonate export and dissolution in the modern ocean

    NASA Astrophysics Data System (ADS)

    Battaglia, Gianna; Steinacher, Marco; Joos, Fortunat

    2016-05-01

    The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 export fluxes and mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Monte Carlo scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly and ocean-sediment fluxes are considered. For local dissolution rates, either a strong or a weak dependency on CaCO3 saturation is assumed. In addition, there is the option to have saturation-independent dissolution above the saturation horizon. The median (and 68 % confidence interval) of the constrained model ensemble for global biogenic CaCO3 export is 0.90 (0.72-1.05) Gt C yr⁻¹, which lies within the lower half of previously published estimates (0.4-1.8 Gt C yr⁻¹). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo-Pacific, the northern Pacific and relatively small in the Atlantic. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport timescales for the different set-ups were further evaluated with CFC-11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements, such that model realisations with and without saturation-dependent dissolution achieve skill.
We suggest applying saturation-independent dissolution rates in Earth system models to minimise computational costs.

  2. Ten years of multiple data stream assimilation with the ORCHIDEE land surface model to improve regional to global simulated carbon budgets: synthesis and perspectives on directions for the future

    NASA Astrophysics Data System (ADS)

    Peylin, P. P.; Bacour, C.; MacBean, N.; Maignan, F.; Bastrikov, V.; Chevallier, F.

    2017-12-01

    Predicting the fate of carbon stocks and their sensitivity to climate change and land use/management strongly relies on our ability to accurately model net and gross carbon fluxes. However, simulated carbon and water fluxes remain subject to large uncertainties, partly because of unknown or poorly calibrated parameters. Over the past ten years, the carbon cycle data assimilation system at the Laboratoire des Sciences du Climat et de l'Environnement has investigated the benefit of assimilating multiple carbon cycle data streams into the ORCHIDEE LSM, the land surface component of the Institut Pierre Simon Laplace Earth System Model. These datasets have included FLUXNET eddy covariance data (net CO2 flux and latent heat flux) to constrain hourly to seasonal time-scale carbon cycle processes, remote sensing of the vegetation activity (MODIS NDVI) to constrain the leaf phenology, biomass data to constrain "slow" (yearly to decadal) processes of carbon allocation, and atmospheric CO2 concentrations to provide overall large scale constraints on the land carbon sink. Furthermore, we have investigated technical issues related to multiple data stream assimilation and choice of optimization algorithm. This has provided a wide-ranging perspective on the challenges we face in constraining model parameters and thus better quantifying, and reducing, model uncertainty in projections of the future global carbon sink. We review our past studies in terms of the impact of the optimization on key characteristics of the carbon cycle, e.g. the partition of the northern latitudes vs tropical land carbon sink, and compare to the classic atmospheric flux inversion approach. 
Throughout, we discuss our work in context of the abovementioned challenges, and propose solutions for the community going forward, including the potential of new observations such as atmospheric COS concentrations and satellite-derived Solar Induced Fluorescence to constrain the gross carbon fluxes of the ORCHIDEE model.

  3. Implications of Binary Black Hole Detections on the Merger Rates of Double Neutron Stars and Neutron Star–Black Holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Anuradha; Arun, K. G.; Sathyaprakash, B. S., E-mail: axg645@psu.edu, E-mail: kgarun@cmi.ac.in, E-mail: bss25@psu.edu

    We show that the inferred merger rate and chirp masses of binary black holes (BBHs) detected by advanced LIGO (aLIGO) can be used to constrain the rate of double neutron star (DNS) and neutron star–black hole (NSBH) mergers in the universe. We explicitly demonstrate this by considering a set of publicly available population synthesis models of Dominik et al. and show that if all the BBH mergers, GW150914, LVT151012, GW151226, and GW170104, observed by aLIGO arise from isolated binary evolution, the predicted DNS merger rate may be constrained to be 2.3–471.0 Gpc⁻³ yr⁻¹ and that of NSBH mergers will be constrained to 0.2–48.5 Gpc⁻³ yr⁻¹. The DNS merger rates are not constrained much, but the NSBH rates are tightened by a factor of ∼4 as compared to their previous rates. Note that these constrained DNS and NSBH rates are extremely model-dependent and are compared to the unconstrained values 2.3–472.5 Gpc⁻³ yr⁻¹ and 0.2–218 Gpc⁻³ yr⁻¹, respectively, using the same models of Dominik et al. (2012a). These rate estimates may have implications for short Gamma Ray Burst progenitor models assuming they are powered (solely) by DNS or NSBH mergers. While these results are based on a set of open access population synthesis models, which may not necessarily be the representative ones, the proposed method is very general and can be applied to any number of models, thereby yielding more realistic constraints on the DNS and NSBH merger rates from the inferred BBH merger rate and chirp mass.

  4. Metal-Polycyclic Aromatic Hydrocarbon Mixture Toxicity in Hyalella azteca. 1. Response Surfaces and Isoboles To Measure Non-additive Mixture Toxicity and Ecological Risk.

    PubMed

    Gauthier, Patrick T; Norwood, Warren P; Prepas, Ellie E; Pyle, Greg G

    2015-10-06

    Mixtures of metals and polycyclic aromatic hydrocarbons (PAHs) occur ubiquitously in aquatic environments, yet relatively little is known regarding their potential to produce non-additive toxicity (i.e., antagonism or potentiation). A review of the lethality of metal-PAH mixtures in aquatic biota revealed that more-than-additive lethality is as common as strictly additive effects. Approaches to ecological risk assessment do not consider non-additive toxicity of metal-PAH mixtures. Forty-eight-hour water-only binary mixture toxicity experiments were conducted to determine the additive toxic nature of mixtures of Cu, Cd, V, or Ni with phenanthrene (PHE) or phenanthrenequinone (PHQ) using the aquatic amphipod Hyalella azteca. In cases where more-than-additive toxicity was observed, we calculated the possible mortality rates at Canada's environmental water quality guideline concentrations. We used a three-dimensional response surface isobole model-based approach to compare the observed co-toxicity in juvenile amphipods to predicted outcomes based on concentration addition or effects addition mixture models. More-than-additive lethality was observed for all Cu-PHE, Cu-PHQ, and several Cd-PHE, Cd-PHQ, and Ni-PHE mixtures. Our analysis predicts Cu-PHE, Cu-PHQ, Cd-PHE, and Cd-PHQ mixtures at the Canadian Water Quality Guideline concentrations would produce 7.5%, 3.7%, 4.4% and 1.4% mortality, respectively.
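    The two additive reference models used for comparison have standard closed forms. A minimal sketch follows, assuming "effects addition" denotes the usual independent-action (response-addition) formula and that effects are expressed as proportions; the function names are illustrative:

```python
def ca_mixture_ec50(fractions, ec50s):
    """Concentration addition: EC50 of a mixture whose components are
    present in proportions p_i and have individual EC50s, via the
    standard formula 1 / sum(p_i / EC50_i)."""
    return 1.0 / sum(p / ec for p, ec in zip(fractions, ec50s))

def ia_mixture_effect(effects):
    """Independent action (response addition): combined fractional effect
    E = 1 - prod(1 - E_i) from individual effects E_i in [0, 1]."""
    out = 1.0
    for e in effects:
        out *= 1.0 - e
    return 1.0 - out
```

    Observed mixture responses falling between these two predictions, as reported above, are therefore bracketed by the additive reference band rather than clearly synergistic or antagonistic relative to both models.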

  5. Effective diffusion coefficients of DNAPL waste components in saturated low permeability soil materials

    NASA Astrophysics Data System (ADS)

    Ayral-Cinar, Derya; Demond, Avery H.

    2017-12-01

    Diffusion is regarded as the dominant transport mechanism into and out of low-permeability lenses and layers in the subsurface. However, some reports of mass storage in such zones are higher than what might be attributable to diffusion, based on estimated diffusion coefficients. Despite the importance of diffusion to efforts to estimate the quantity of residual contamination in the subsurface, relatively few studies present measured diffusion coefficients of organic solutes in saturated low-permeability soils. This study reports the diffusion coefficients of trichloroethylene (TCE) and an anionic surfactant, Aerosol OT (AOT), in water-saturated silt and a silt-montmorillonite (25:75) mixture, obtained using steady-state experiments. The relative diffusivity ranged from 0.11 to 0.17 for these compounds in the silt and in the silt-clay mixture that was allowed to expand. In the case in which the swelling was constrained, the relative diffusivity was about 0.07. In addition, the relative diffusivity of ¹³C-labeled TCE through a water-saturated silt-clay mixture that had contacted a field dense non-aqueous phase liquid (DNAPL) for 18 months was measured and equaled 0.001. These experimental results were compared with estimates generated using common correlations, and it was found that, in all cases, the measured diffusion coefficients were significantly lower than the estimated values. Thus, the discrepancy between mass accumulations observed in the field and the mass storage that can be attributed to diffusion may be greater than previously believed.

  6. The simultaneous mass and energy evaporation (SM2E) model.

    PubMed

    Choudhary, Rehan; Klauda, Jeffery B

    2016-01-01

    In this article, the Simultaneous Mass and Energy Evaporation (SM2E) model is presented. The SM2E model is based on theoretical models for mass and energy transfer that systematically under- or over-predicted at various flow conditions: laminar, transition, and turbulent. These models were harmonized with experimental measurements to eliminate the systematic under- and over-predictions; a total of 113 measured evaporation rates were used. The SM2E model can be used to estimate evaporation rates for pure liquids as well as liquid mixtures at laminar, transition, and turbulent flow conditions. However, due to the limited availability of evaporation data, the model has so far only been tested against data for pure liquids and binary mixtures. The model can take evaporative cooling into account, and when the temperature of the evaporating liquid or liquid mixture is known (e.g., isothermal evaporation), the SM2E model reduces to a mass transfer-only model.

  7. Quantifying How Observations Inform a Numerical Reanalysis of Hawaii

    NASA Astrophysics Data System (ADS)

    Powell, B. S.

    2017-11-01

    When assimilating observations into a model via state-estimation, it is possible to quantify how each observation changes the modeled estimate of a chosen oceanic metric. Using an existing 2 year reanalysis of Hawaii that includes more than 31 million observations from satellites, ships, SeaGliders, and autonomous floats, I assess which observations most improve the estimates of the transport and eddy kinetic energy. When the SeaGliders were in the water, they comprised less than 2.5% of the data, but accounted for 23% of the transport adjustment. Because the model physics constrains advanced state-estimation, the prescribed covariances are propagated in time to identify observation-model covariance. I find that observations that constrain the isopycnal tilt across the transport section provide the greatest impact in the analysis. In the case of eddy kinetic energy, observations that constrain the surface-driven upper ocean have more impact. This information can help to identify optimal sampling strategies to improve both state-estimates and forecasts.

  8. Constraining dark sector perturbations I: cosmic shear and CMB lensing

    NASA Astrophysics Data System (ADS)

    Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.

    2015-04-01

    We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism-invariant ℒ(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz-violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remain compatible with w = -1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focusing on the potential discriminatory power of Euclid on mildly non-linear scales.

  9. PACE: Probabilistic Assessment for Contributor Estimation- A machine learning-based assessment of the number of contributors in DNA mixtures.

    PubMed

    Marciano, Michael A; Adelman, Jonathan D

    2017-03-01

    The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state of the art model versus seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. 
Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    NASA Astrophysics Data System (ADS)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of ²³⁵U/²³⁸U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by:
    • the number of different ratios,
    • the number of points belonging to each ratio-group,
    • the ratios (i.e. slopes) of each group.
    Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined.
    The analyst only influences a few control parameters of the algorithm: the maximum number of ratios and the minimum relative group size of data points belonging to each ratio have to be defined. Computation of the models can be done with statistical software. In this study, Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended for a more objective and straightforward calculation of isotope ratios in geochemistry than is current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3) [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007. (doi: 10.1016/j.csda.2006.08.014)
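    The EM scheme described above, fitting several zero-intercept regression lines whose slopes estimate the isotope ratios, can be sketched in a few lines. This is an illustrative re-implementation (the study itself uses the R package flexmix); the fixed noise scale `sigma` and the quantile-based initialization are assumptions:

```python
import numpy as np

def fit_ratio_mixture(x, y, k, sigma=0.1, n_iter=100):
    """EM for a finite mixture of zero-intercept regression lines
    y = r_j * x, where each slope r_j estimates one isotope ratio and
    w_j the share of points in group j. Illustrative sketch only."""
    r = np.quantile(y / x, np.linspace(0.1, 0.9, k))  # spread initial slopes over the data
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each ratio-group for each point,
        # assuming Gaussian residuals around each candidate line
        resid = y[:, None] - x[:, None] * r[None, :]
        logp = np.log(w)[None, :] - 0.5 * (resid / sigma) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # M-step: weighted least-squares slope and group share per component
        r = (p * x[:, None] * y[:, None]).sum(axis=0) / (p * x[:, None] ** 2).sum(axis=0)
        w = p.mean(axis=0)
    return r, w
```

    The dropping of groups smaller than a minimum relative size, mentioned above, could be added by discarding components whose w_j falls below a threshold between iterations.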

  11. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
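    Wald's classical sequential probability ratio test underlies the decision rule described above. The generic sketch below (not the paper's filter-bank implementation) assumes per-measurement log-likelihood ratios of the alternative (collision) versus the null hypothesis, with thresholds set by the chosen false-alarm and missed-detection risks:

```python
import math

def sprt(loglik_ratios, alpha=0.01, beta=0.01):
    """Wald SPRT: accumulate log-likelihood ratios and compare against
    thresholds derived from the false-alarm risk alpha and the
    missed-detection risk beta. Names and labels are illustrative."""
    upper = math.log((1 - beta) / alpha)   # accept the alternative (collision)
    lower = math.log(beta / (1 - alpha))   # accept the null (no collision)
    s = 0.0
    for llr in loglik_ratios:
        s += llr
        if s >= upper:
            return "maneuver"
        if s <= lower:
            return "no maneuver"
    return "continue tracking"             # evidence still inconclusive
```

    The appeal noted in the abstract is visible here: alpha and beta enter the decision thresholds directly, rather than being implicit in a single probability-of-collision cutoff.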

  12. Constraint-Based Local Search for Constrained Optimum Paths Problems

    NASA Astrophysics Data System (ADS)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  13. Analyzing gene expression time-courses based on multi-resolution shape mixture model.

    PubMed

    Li, Ying; He, Ye; Zhang, Yu

    2016-11-01

    Biological processes are dynamic molecular processes over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of gene expression change over time at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods are evaluated by enrichment analysis of biological pathways and known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed; it provides a new perspective and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. An analysis of lethal and sublethal interactions among type I and type II pyrethroid pesticide mixtures using standard Hyalella azteca water column toxicity tests.

    PubMed

    Hoffmann, Krista Callinan; Deanovic, Linda; Werner, Inge; Stillway, Marie; Fong, Stephanie; Teh, Swee

    2016-10-01

    A novel 2-tiered analytical approach was used to characterize and quantify interactions between type I and type II pyrethroids in Hyalella azteca using standardized water column toxicity tests. Bifenthrin, permethrin, cyfluthrin, and lambda-cyhalothrin were tested in all possible binary combinations across 6 experiments. All mixtures were analyzed for 4-d lethality, and 2 of the 6 mixtures (permethrin-bifenthrin and permethrin-cyfluthrin) were tested for subchronic 10-d lethality and sublethal effects on swimming motility and growth. Mixtures were initially analyzed for interactions using regression analyses, and subsequently compared with the additive models of concentration addition and independent action to further characterize mixture responses. Negative interactions (antagonistic) were significant in 2 of the 6 mixtures tested, including cyfluthrin-bifenthrin and cyfluthrin-permethrin, but only on the acute 4-d lethality endpoint. In both cases mixture responses fell between the additive models of concentration addition and independent action. All other mixtures were additive across 4-d lethality, and bifenthrin-permethrin and cyfluthrin-permethrin were also additive in terms of subchronic 10-d lethality and sublethal responses. Environ Toxicol Chem 2016;35:2542-2549. © 2016 SETAC.

  15. Heat transfer during condensation of steam from steam-gas mixtures in the passive safety systems of nuclear power plants

    NASA Astrophysics Data System (ADS)

    Portnova, N. M.; Smirnov, Yu B.

    2017-11-01

    A theoretical model for the calculation of heat transfer during condensation of multicomponent vapor-gas mixtures on vertical surfaces, based on film theory and the heat and mass transfer analogy, is proposed. Calculations were performed for the conditions implemented in experimental studies of heat transfer during condensation of steam-gas mixtures in the passive safety systems of PWR-type reactors of different designs. Calculated values of heat transfer coefficients were obtained for condensation of steam-air, steam-air-helium and steam-air-hydrogen mixtures at pressures of 0.2 to 0.6 MPa and of a steam-nitrogen mixture at pressures of 0.4 to 2.6 MPa. The composition of the mixtures and the vapor-to-surface temperature difference were varied within wide limits, and tube length ranged from 0.65 to 9.79 m. The condensation of all steam-gas mixtures took place in a laminar-wave flow mode of the condensate film with turbulent free convection in the diffusion boundary layer. The heat transfer coefficients calculated using the proposed model are in good agreement with the considered experimental data for both the binary and ternary mixtures.

  16. Nature and prevalence of non-additive toxic effects in industrially relevant mixtures of organic chemicals.

    PubMed

    Parvez, Shahid; Venkataraman, Chandra; Mukherji, Suparna

    2009-06-01

    The concentration addition (CA) and the independent action (IA) models are widely used for predicting mixture toxicity based on mixture composition and the dose-response profiles of the individual components. However, predictions based on these models may be inaccurate due to interaction among mixture components. In this work, the nature and prevalence of non-additive effects were explored for binary, ternary and quaternary mixtures composed of hydrophobic organic compounds (HOCs). The toxicity of each individual component and mixture was determined using the Vibrio fischeri bioluminescence inhibition assay. For each combination of chemicals specified by the 2^n factorial design, the percent deviation of the predicted toxic effect from the measured value was used to characterize mixtures as synergistic (positive deviation) or antagonistic (negative deviation). An arbitrary classification scheme was proposed based on the magnitude of deviation (d): additive (d ≤ 10%, class I), moderately (10% < d ≤ 30%, class II), highly (30% < d ≤ 50%, class III) and very highly (d > 50%, class IV) antagonistic/synergistic. Naphthalene, n-butanol, o-xylene, catechol and p-cresol led to synergism in mixtures, while 1,2,4-trimethylbenzene and 1,3-dimethylnaphthalene contributed to antagonism. Most of the mixtures depicted additive or antagonistic effects. Synergism was prominent in some of the mixtures, such as pulp and paper, textile dyes, and a mixture composed of polynuclear aromatic hydrocarbons. The organic chemical industry mixture depicted the highest abundance of antagonism and the least synergism. Mixture toxicity was found to depend on the partition coefficient, molecular connectivity index and relative concentration of the components.
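    The four-class deviation scheme above maps directly onto a small helper. This sketch assumes d is the signed percent deviation, with positive values denoting synergism and negative values antagonism; the function name is illustrative:

```python
def classify_interaction(d):
    """Map a signed percent deviation d between predicted and measured
    mixture effect onto the four-class scheme: additive, moderately,
    highly, or very highly antagonistic/synergistic."""
    mag = abs(d)
    if mag <= 10:
        return "additive (class I)"
    kind = "synergistic" if d > 0 else "antagonistic"
    if mag <= 30:
        return f"moderately {kind} (class II)"
    if mag <= 50:
        return f"highly {kind} (class III)"
    return f"very highly {kind} (class IV)"
```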

  17. Synthetic Constraint of Ecosystem C Models Using Radiocarbon and Net Primary Production (NPP) in New Zealand Grazing Land

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.

    2011-12-01

    Time-series radiocarbon measurements have substantial ability to constrain the size and residence time of the soil C pools commonly represented in ecosystem models. Radiocarbon remains unique in its ability to constrain the large stabilized C pool with decadal residence times. Radiocarbon also contributes usefully to constraining the size and turnover rate of the passive pool, but typically struggles to constrain pools with residence times less than a few years. Overall, the number of pools and associated turnover rates that can be constrained depends upon the number of time-series samples available, the appropriateness of chemical or physical fractions to isolate unequivocal pools, and the utility of additional C flux data to provide further constraints. In New Zealand pasture soils, we demonstrate the ability to constrain decadal turnover times to within a few years for the stabilized pool and to reasonably constrain the passive fraction. Good constraint is obtained with two time-series samples spaced 10 or more years apart after 1970. Three or more time-series samples further improve the level of constraint. Work within this context shows that a two-pool model does explain soil radiocarbon data for the most detailed profiles available (11 time-series samples), and identifies clear and consistent differences in rates of C turnover and passive fraction in Andisols vs. non-Andisols. Furthermore, samples from multiple horizons can commonly be combined, yielding consistent residence times and passive fraction estimates that are stable with, or increase with, depth at different sites. Radiocarbon generally fails to quantify rapid C turnover, however. Given that the strength of radiocarbon is estimating the size and turnover of the stabilized (decadal) and passive (millennial) pools, the magnitude of fast-cycling pool(s) can be estimated by subtracting the radiocarbon-based estimates of turnover within stabilized and passive pools from total estimates of NPP. In grazing land, these estimates can be derived primarily from measured aboveground NPP and calculated belowground NPP. Results suggest that only 19-36% of heterotrophic soil respiration is derived from the soil C with rapid turnover times. A final logical step in synthesis is the analysis of temporal variation in NPP, primarily due to climate, as a driver of changes in plant inputs and of resulting dynamic changes in rapid and decadal soil C pools. In sites with good time-series samples from 1959-1975, we examine the apparent impacts of measured or modelled (Biome-BGC) NPP on soil Δ14C. Ultimately, these approaches have the ability to empirically constrain, and provide limited verification of, the soil C cycle as commonly depicted in ecosystem biogeochemistry models.
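
The subtraction step described above can be sketched as a back-of-envelope calculation: at steady state each radiocarbon-constrained pool respires stock divided by turnover time, and the remainder of NPP is attributed to fast-cycling pool(s). The pool sizes, turnover times, and NPP below are hypothetical round numbers, not the New Zealand data.

```python
# Sketch of attributing heterotrophic respiration to fast-cycling pools
# by subtraction. Assumes steady state, so that total heterotrophic
# respiration approximately balances NPP. All numbers are hypothetical.

def fast_pool_respiration_share(npp, slow_pools):
    """npp: total C input (g C m^-2 yr^-1).
    slow_pools: list of (C stock, turnover time in yr) for the
    radiocarbon-constrained stabilized and passive pools.
    Returns the fraction of respiration assigned to rapid turnover."""
    slow_flux = sum(stock / tau for stock, tau in slow_pools)
    fast_flux = npp - slow_flux
    return fast_flux / npp

# e.g. NPP of 1000 g C m^-2 yr^-1, a stabilized pool of 7000 g C m^-2
# with a 10 yr turnover, and a passive pool of 5000 g C m^-2 with a
# 100 yr turnover: slow flux = 700 + 50 = 750, fast share = 0.25.
share = fast_pool_respiration_share(1000.0, [(7000.0, 10.0), (5000.0, 100.0)])
```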

  18. Gaseous emissions from the combustion of a waste mixture containing a high concentration of N2O.

    PubMed

    Dong, Changqing; Yang, Yongping; Zhang, Junjiao; Lu, Xuefeng

    2009-01-01

    This paper is focused on reducing the emissions from the combustion of a waste mixture containing a high concentration of N2O. A rate model and an equilibrium model were used to predict gaseous emissions from the combustion of the mixture. The influences of temperature and methane were considered, and the experimental research was carried out in a tubular reactor and a pilot combustion furnace. The results showed that for the waste mixture, the combustion temperature should be in the range of 950-1100 °C and the gas residence time should be 2 s or longer to reduce emissions.

  19. Mixtures of charged colloid and neutral polymer: Influence of electrostatic interactions on demixing and interfacial tension

    NASA Astrophysics Data System (ADS)

    Denton, Alan R.; Schmidt, Matthias

    2005-06-01

    The equilibrium phase behavior of a binary mixture of charged colloids and neutral, nonadsorbing polymers is studied within free-volume theory. A model mixture of charged hard-sphere macroions and ideal, coarse-grained, effective-sphere polymers is mapped first onto a binary hard-sphere mixture with nonadditive diameters and then onto an effective Asakura-Oosawa model [S. Asakura and F. Oosawa, J. Chem. Phys. 22, 1255 (1954)]. The effective model is defined by a single dimensionless parameter—the ratio of the polymer diameter to the effective colloid diameter. For high salt-to-counterion concentration ratios, a free-volume approximation for the free energy is used to compute the fluid phase diagram, which describes demixing into colloid-rich (liquid) and colloid-poor (vapor) phases. Increasing the range of electrostatic interactions shifts the demixing binodal toward higher polymer concentration, stabilizing the mixture. The enhanced stability is attributed to a weakening of polymer depletion-induced attraction between electrostatically repelling macroions. Comparison with predictions of density-functional theory reveals a corresponding increase in the liquid-vapor interfacial tension. The predicted trends in phase stability are consistent with observed behavior of protein-polysaccharide mixtures in food colloids.

  20. Phase-field model of domain structures in ferroelectric thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y. L.; Hu, S. Y.; Liu, Z. K.

    A phase-field model for predicting the coherent microstructure evolution in constrained thin films is developed. It employs an analytical elastic solution derived for a constrained film with arbitrary eigenstrain distributions. The domain structure evolution during a cubic → tetragonal proper ferroelectric phase transition is studied. It is shown that the model is able to simultaneously predict the effects of substrate constraint and temperature on the volume fractions of domain variants, domain-wall orientations, domain shapes, and their temporal evolution. © 2001 American Institute of Physics.

  1. Planetesimal Break-Up and the Feeding of Solids to the Satellite Disk: Consequences for the Formation Timescale and Composition of the Satellites of Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Mosqueira, I.; Estrada, P. R.

    2003-01-01

    In order to create a coherent scenario of satellite formation, the source of the solids (rock-metal and ice) that will eventually make up the satellites must be considered. While it is customary to use a solar composition mixture with a gas/solid mass ratio of about 100, at the tail end of the formation of the giant planet (when satellite formation is thought to have taken place) the fraction of solids entrained in the gas (particles smaller than the decoupling size, about 1 m for typical nebula parameters) is likely to be significantly lower than cosmic. In particular, in the core accretion model of giant planet formation one expects low dust and rubble content at late times due to particle coagulation leading to a collisional distribution of particle sizes with most of the mass residing in objects 1 km or larger, which are not coupled to the gas and whose dynamics must be followed independently. As a result, the flow of gas into circumplanetary orbits is not sufficient to constrain the mass available to form satellites.

  2. Bearing diagnostics: A method based on differential geometry

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng

    2016-12-01

    The structures around bearings are complex, and the working environment is variable. These conditions cause the collected vibration signals to exhibit nonlinear, non-stationary, and chaotic characteristics that make noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with advantages in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified both by selecting the neighborhood radius based on empirical mode decomposition and by determining the noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, and singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are subsequently extracted to reduce the complexity of the manifold features. For fault diagnosis, an information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information; the Gaussian mixture model is utilized to calculate confidence values, which directly reflect the health status. Case studies on Lorenz signals and vibration datasets of bearings demonstrate the effectiveness of the proposed methods.

  3. On the self-preservation of turbulent jet flows with variable viscosity

    NASA Astrophysics Data System (ADS)

    Danaila, Luminita; Gauding, Michael; Varea, Emilien; Turbulence; mixing Team

    2017-11-01

    The concept of self-preservation has played an important role in shaping the understanding of turbulent flows. The assumption of complete self-preservation imposes certain constraints on the dynamics of the flow, allowing one-point or two-point statistics to be expressed in terms of an appropriate unique length scale. Determining this length scale and its scaling is of high relevance for modeling. In this work, we study turbulent jet flows with variable viscosity from the self-preservation perspective. Turbulent flows encountered in engineering and environmental applications are often characterized by fluctuations of viscosity resulting, for instance, from variations of temperature or species composition. Starting from the transport equation for the moments of the mixture fraction increment, constraints for self-preservation are derived. The analysis is based on direct numerical simulations of turbulent jet flows where the viscosity of the host and jet fluids differs. It is shown that fluctuations of viscosity do not affect the decay exponents of the turbulent energy or the dissipation but modify the scaling of two-point statistics in the dissipative range. Moreover, the analysis reveals that complete self-preservation in turbulent flows with variable viscosity cannot be achieved. Financial support from Labex EMC3 and FEDER is gratefully acknowledged.

  4. Assessment of combined antiandrogenic effects of binary parabens mixtures in a yeast-based reporter assay.

    PubMed

    Ma, Dehua; Chen, Lujun; Zhu, Xiaobiao; Li, Feifei; Liu, Cong; Liu, Rui

    2014-05-01

    To date, toxicological studies of endocrine disrupting chemicals (EDCs) have typically focused on single chemical exposures and associated effects. However, exposure to EDCs mixtures in the environment is common. Antiandrogens represent a group of EDCs, which draw increasing attention due to their resultant demasculinization and sexual disruption of aquatic organisms. Although there are a number of in vivo and in vitro studies investigating the combined effects of antiandrogen mixtures, these studies are mainly on selected model compounds such as flutamide, procymidone, and vinclozolin. The aim of the present study is to investigate the combined antiandrogenic effects of parabens, which are widely used antiandrogens in industrial and domestic commodities. A yeast-based human androgen receptor (hAR) assay (YAS) was applied to assess the antiandrogenic activities of n-propylparaben (nPrP), iso-propylparaben (iPrP), methylparaben (MeP), and 4-n-pentylphenol (PeP), as well as the binary mixtures of nPrP with each of the other three antiandrogens. All of the four compounds could exhibit antiandrogenic activity via the hAR. A linear interaction model was applied to quantitatively analyze the interaction between nPrP and each of the other three antiandrogens. The isoboles method was modified to show the variation of combined effects as the concentrations of mixed antiandrogens were changed. Graphs were constructed to show isoeffective curves of three binary mixtures based on the fitted linear interaction model and to evaluate the interaction of the mixed antiandrogens (synergism or antagonism). The combined effect of equimolar combinations of the three mixtures was also considered with the nonlinear isoboles method. The main effect parameters and interaction effect parameters in the linear interaction models of the three mixtures were different from zero. 
    The results showed that any two antiandrogens in their binary mixtures tended to exert equal antiandrogenic activity in the linear concentration ranges. The antiandrogenicity of the binary mixture and the concentration of nPrP were fitted to a sigmoidal model if the concentrations of the other antiandrogens (iPrP, MeP, and PeP) in the mixture were lower than the AR saturation concentrations. Some concave isoboles above the additivity line appeared in all three mixtures. There were some synergistic effects of the binary mixture of nPrP and MeP at low concentrations in the linear concentration ranges. Interestingly, when the antiandrogen concentrations approached saturation, the interactions between chemicals were antagonistic for all three mixtures tested. When the toxicity of the three mixtures was assessed using nonlinear isoboles, only antagonism was observed for equimolar combinations of nPrP and iPrP as the concentrations were increased from the no-observed-effect concentration (NOEC) to the 80% effective concentration. In addition, the interactions changed from synergistic to antagonistic as effective concentrations increased in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP. The combined effects of the three binary antiandrogen mixtures in the linear ranges were successfully evaluated by curve fitting and isoboles. The combined effects of specific binary mixtures varied depending on the concentrations of the chemicals in the mixtures. At low concentrations in the linear concentration ranges, synergistic interaction existed in the binary mixture of nPrP and MeP. The interaction tended to be antagonistic as the antiandrogens approached saturation concentrations in mixtures of nPrP with each of the other three antiandrogens. Synergistic interaction was also found in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP, at low concentrations with the nonlinear isoboles method. Overall, the mixture activities of binary antiandrogens tended towards antagonism at high concentrations and synergism at low concentrations.
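
A linear interaction model of the general form used above can be sketched as main effects plus an interaction term, where the sign of the interaction coefficient distinguishes synergism from antagonism relative to additivity. The coefficients below are hypothetical placeholders, not fitted values from the study.

```python
# Sketch of a linear interaction model for a binary mixture: the effect
# is modelled as main effects plus an interaction term, valid only in
# the linear concentration ranges. Coefficients here are hypothetical.

def linear_interaction_effect(c1, c2, b1, b2, b12):
    """E(c1, c2) = b1*c1 + b2*c2 + b12*c1*c2."""
    return b1 * c1 + b2 * c2 + b12 * c1 * c2

def interaction_type(b12):
    """Sign of the interaction coefficient relative to additivity."""
    if b12 > 0:
        return "synergistic"
    if b12 < 0:
        return "antagonistic"
    return "additive"
```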

  5. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities are quite different, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Computational predictions of damage propagation preceding dissection of ascending thoracic aortic aneurysms.

    PubMed

    Mousavi, S Jamaleddin; Farzaneh, Solmaz; Avril, Stéphane

    2018-04-01

    Dissections of ascending thoracic aortic aneurysms (ATAAs) cause significant morbidity and mortality worldwide. They occur when a tear in the intima-media of the aorta permits the penetration of blood and the subsequent delamination and separation of the wall into two layers, forming a false channel. To predict computationally the risk of tear formation, stress analyses should be performed layer-specifically and should consider the internal or residual stresses that exist in the tissue. In the present paper, we propose a novel layer-specific damage model based on the constrained mixture theory, which intrinsically takes these internal stresses into account and can appropriately predict tear formation. The model is implemented in the commercial finite-element software Abaqus, coupled with a user material subroutine. Its capability is tested by applying it to the simulation of different exemplary situations, ranging from in vitro bulge inflation experiments on aortic samples to in vivo overpressurizing of patient-specific ATAAs. The simulations reveal that damage correctly starts from the intimal layer (luminal side) and propagates across the media as a tear but never reaches the adventitia. This scenario is typically the first stage of development of an acute dissection, which the model predicts at pressures of about 2.5 times the diastolic pressure after calibration of the parameters against experimental data from collected ATAA samples. Further validations on a larger cohort of patients should hopefully confirm the potential of the model in predicting patient-specific damage evolution and possible risk of dissection during aneurysm growth for clinical applications. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Flexible Energy Scheduling Tool for Integrating Variable Generation | Grid

    Science.gov Websites

    FESTIV combines security-constrained unit commitment, security-constrained economic dispatch, and automatic generation control sub-models, so that different resolutions and operating strategies can be explored. FESTIV produces not only economic metrics but also…

  8. Modeling Grade IV Gas Emboli using a Limited Failure Population Model with Random Effects

    NASA Technical Reports Server (NTRS)

    Thompson, Laura A.; Conkin, Johnny; Chhikara, Raj S.; Powell, Michael R.

    2002-01-01

    Venous gas emboli (VGE) (gas bubbles in venous blood) are associated with an increased risk of decompression sickness (DCS) in hypobaric environments. A high grade of VGE can be a precursor to serious DCS. In this paper, we model time to Grade IV VGE considering a subset of individuals assumed to be immune from experiencing VGE. Our data contain monitoring test results from subjects undergoing up to 13 denitrogenation test procedures prior to exposure to a hypobaric environment. The onset time of Grade IV VGE is recorded as contained within certain time intervals. We fit a parametric (lognormal) mixture survival model to the interval- and right-censored data to account for the possibility of a subset of "cured" individuals who are immune to the event. Our model contains random subject effects to account for correlations between repeated measurements on a single individual. Model assessments and cross-validation indicate that this limited failure population mixture model is an improvement over a model that does not account for the potential of a fraction of cured individuals. We also evaluated some alternative mixture models. Predictions from the best fitted mixture model indicate that the actual process is reasonably approximated by a limited failure population model.
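
The cure-fraction idea above can be sketched as a mixture survival function: a fraction pi of subjects is immune and never experiences the event, while the rest follow a lognormal time-to-event distribution. This is a minimal single-subject sketch (the random subject effects of the fitted model are omitted), with illustrative parameter values.

```python
import math

# Sketch of a limited-failure-population (cure-rate) survival function:
# S(t) = pi + (1 - pi) * (1 - Phi((ln t - mu) / sigma)),
# where pi is the immune ("cured") fraction and the susceptible group
# follows a lognormal(mu, sigma) time-to-event distribution.
# Parameter values used below are illustrative only.

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cure_mixture_survival(t, pi, mu, sigma):
    """Probability of remaining event-free at time t > 0."""
    s_lognormal = 1.0 - normal_cdf((math.log(t) - mu) / sigma)
    return pi + (1.0 - pi) * s_lognormal
```

Note that as t grows, S(t) approaches pi rather than zero; that plateau at the cured fraction is what distinguishes this model from an ordinary survival model.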

  9. A globally accurate theory for a class of binary mixture models

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  10. Engineering Behavior and Characteristics of Wood Ash and Sugarcane Bagasse Ash

    PubMed Central

    Grau, Francisco; Choo, Hyunwook; Hu, Jong Wan; Jung, Jongwon

    2015-01-01

    Biomasses are organic materials derived from any living or recently living structure, and large quantities are produced nationwide. Biomasses are mostly combusted, and the resulting biomass ashes, which include wood and sugarcane bagasse ashes, are usually discarded or disposed of without treatment. Thus, recycling or treating biomass ashes allows these natural materials to be utilized as an economical and environmentally sound alternative. This study is intended to provide an environmental solution for uncontrolled disposal of biomass ashes by recycling the biomass ash to replace soils in geotechnical engineering projects. Therefore, in this study, characterization tests of wood and sugarcane bagasse ashes, which are considered the most common biomass ashes, are conducted. The chemical compositions of the biomass ashes are determined using energy-dispersive X-ray spectroscopy (EDS) and scanning electron microscopy (SEM), and heavy metal analysis is also conducted. Engineering behaviors including hydraulic conductivity, constrained modulus and shear modulus are examined. Coal fly ash Class C is used in this study for comparison with the biomass ashes, and Ottawa 20/30 sands containing biomass ashes are examined to identify the soil replacement effect of the biomass ashes. The results show that the particle sizes of the biomass ashes are halfway between coal fly ash Class C and Ottawa 20/30 sand, and that the biomass ashes consist of a heterogeneous mixture of different particle sizes and shapes. All heavy metal concentrations were found to be below the US Environmental Protection Agency (EPA) maximum limit. Hydraulic conductivity values of Ottawa 20/30 sand decrease significantly when only 1%–2% of the sand is replaced with biomass ashes. While both the constrained modulus and shear modulus of the biomass ashes are lower than those of Ottawa 20/30 sand, the moduli of mixtures containing up to 10% biomass ashes are little affected by the replacement. PMID:28793611

  11. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts

    USGS Publications Warehouse

    Dorazio, Robert M.; Martin, Julien; Edwards, Holly H.

    2013-01-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.

  12. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts.

    PubMed

    Dorazio, Robert M; Martin, Julien; Edwards, Holly H

    2013-07-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
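
The basic N-mixture likelihood that the extension above builds on can be sketched by marginalizing over the latent abundance N at one site: a Poisson prior on N and binomial detection for each repeated count. The hurdle and beta-binomial components of the extended model are omitted here, and the parameter values in the example are hypothetical.

```python
import math

# Sketch of the marginal likelihood of repeated counts y_1..y_T at one
# site under a basic N-mixture model: latent abundance N ~ Poisson(lam),
# counts y_t ~ Binomial(N, p) independently given N. The hurdle and
# beta-binomial extensions described in the abstract are not included.

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(n, lam):
    return math.exp(-lam) * lam**n / math.factorial(n)

def nmixture_site_likelihood(counts, lam, p, n_max=100):
    """Sum over latent N >= max(counts) of
    Poisson(N | lam) * prod_t Binomial(y_t | N, p),
    truncating the infinite sum at n_max."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        like = poisson_pmf(n, lam)
        for y in counts:
            like *= binom_pmf(y, n, p)
        total += like
    return total
```

With perfect detection (p = 1) the latent N is pinned to the observed count, so the likelihood collapses to a single Poisson term; that limiting case is a handy sanity check for the marginalization.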

  13. Bayesian Finite Mixtures for Nonlinear Modeling of Educational Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; And Others

    A Bayesian approach for finding latent classes in data is discussed. The approach uses finite mixture models to describe the underlying structure in the data and demonstrate that the possibility of using full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated…

  14. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    ERIC Educational Resources Information Center

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  15. Mixture Distribution Latent State-Trait Analysis: Basic Ideas and Applications

    ERIC Educational Resources Information Center

    Courvoisier, Delphine S.; Eid, Michael; Nussbeck, Fridtjof W.

    2007-01-01

    Extensions of latent state-trait models for continuous observed variables to mixture latent state-trait models with and without covariates of change are presented that can separate individuals differing in their occasion-specific variability. An empirical application to the repeated measurement of mood states (N = 501) revealed that a model with 2…

  16. Kinetic model for the vibrational energy exchange in flowing molecular gas mixtures. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Offenhaeuser, F.

    1987-01-01

    The present study is concerned with the development of a computational model for the description of the vibrational energy exchange in flowing gas mixtures, taking into account a given number of energy levels for each vibrational degree of freedom. It is possible to select an arbitrary number of energy levels; the presented model uses values in the range from 10 to approximately 40. The distribution of energy with respect to these levels can differ from the equilibrium distribution. The kinetic model developed can be employed for arbitrary gaseous mixtures with an arbitrary number of vibrational degrees of freedom for each type of gas. The application of the model to CO2-H2O-N2-O2-He mixtures is discussed. The obtained relations can be utilized in a study of the suitability of radiative transitions involving the CO2 molecule for laser applications. It is found that the computational results provided by the model agree very well with experimental data obtained for a CO2 laser. Possibilities for the activation of a 16-micron and 14-micron laser are considered.

  17. MODEL OF ADDITIVE EFFECTS OF MIXTURES OF NARCOTIC CHEMICALS

    EPA Science Inventory

    Biological effects data with single chemicals are far more abundant than with mixtures. Yet, environmental exposures to chemical mixtures, for example near hazardous waste sites or nonpoint sources, are very common and using test data from single chemicals to approximate effects o...

  18. Thermodynamic properties of model CdTe/CdSe mixtures

    DOE PAGES

    van Swol, Frank; Zhou, Xiaowang W.; Challa, Sivakumar R.; ...

    2015-02-20

    We report on the thermodynamic properties of binary compound mixtures of model group II–VI semiconductors. We use the recently introduced Stillinger–Weber Hamiltonian to model binary mixtures of CdTe and CdSe. We use molecular dynamics simulations to calculate the volume and enthalpy of mixing as a function of mole fraction. The lattice parameter of the mixture closely follows Vegard's law: a linear relation. This implies that the excess volume is a cubic function of mole fraction. A connection is made with hard sphere models of mixed fcc and zincblende structures. We found that the potential energy exhibits a positive deviation from ideal solution behaviour; the excess enthalpy is nearly independent of the temperatures studied (300 and 533 K) and is well described by a simple cubic function of the mole fraction. Using a regular solution approach (combining non-ideal behaviour for the enthalpy with ideal solution behaviour for the entropy of mixing), we arrive at the Gibbs free energy of the mixture. The Gibbs free energy results indicate that CdTe and CdSe mixtures exhibit phase separation. The upper consolute temperature is found to be 335 K. Finally, we provide the surface energy as a function of composition. It roughly follows ideal solution theory, but with a negative deviation (negative excess surface energy). This indicates that alloying increases the stability, even for nano-particles.
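
The regular-solution construction described above, a composition-dependent excess enthalpy combined with the ideal entropy of mixing, can be sketched as follows. The cubic excess-enthalpy coefficients are hypothetical placeholders, not the fitted CdTe/CdSe values.

```python
import math

# Regular-solution sketch of the mixing thermodynamics: Gibbs free
# energy of mixing = excess enthalpy (cubic in mole fraction, as in the
# abstract) + ideal entropy of mixing. Coefficients a, b below are
# hypothetical, not the CdTe/CdSe fit.

R = 8.314  # gas constant, J mol^-1 K^-1

def excess_enthalpy(x, a, b):
    """Cubic excess enthalpy H_ex = x*(1-x)*(a + b*x), in J mol^-1;
    vanishes at the pure endpoints x = 0 and x = 1."""
    return x * (1.0 - x) * (a + b * x)

def gibbs_mixing(x, temperature, a, b):
    """G_mix = H_ex + R*T*(x ln x + (1-x) ln(1-x)), for 0 < x < 1."""
    s_ideal = R * temperature * (x * math.log(x) + (1.0 - x) * math.log(1.0 - x))
    return excess_enthalpy(x, a, b) + s_ideal
```

A positive excess enthalpy competing with the always-negative entropy term is what produces a miscibility gap below an upper consolute temperature, consistent with the phase separation reported in the abstract.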

  19. Second law of thermodynamics in volume diffusion hydrodynamics in multicomponent gas mixtures

    NASA Astrophysics Data System (ADS)

    Dadzie, S. Kokou

    2012-10-01

    We present the thermodynamic structure of a new continuum flow model for multicomponent gas mixtures. The continuum model is based on a volume diffusion concept involving specific species. It is independent of the observer's reference frame and enables a straightforward tracking of a selected species within a mixture composed of a large number of constituents. A method to derive the second law and constitutive equations accompanying the model is presented. Using the configuration of a rotating fluid, we illustrate an example of non-classical flow physics predicted by the new contributions in the entropy and constitutive equations.

  20. Estimation of the performance of a J-T refrigerator operating with nitrogen-hydrocarbon mixtures and a coiled tubes-in-tube heat exchanger

    NASA Astrophysics Data System (ADS)

    Satya Meher, R.; Venkatarathnam, G.

    2018-06-01

    The exergy efficiency of Joule-Thomson (J-T) refrigerators operating with mixtures (MRC systems) strongly depends on the choice of refrigerant mixture and the performance of the heat exchanger used. Helically coiled, multiple tubes-in-tube heat exchangers with an effectiveness of over 96% are widely used in these types of systems. Current studies focus only on the different heat transfer correlations and the uncertainty in predicting the performance of the heat exchanger alone. The main focus of this work is to estimate the uncertainty in cooling capacity when the homogeneous model is used, by comparing theoretical and experimental studies. The comparisons have been extended to some two-phase models present in the literature as well. Experiments have been carried out on a J-T refrigerator at a fixed heat load of 10 W with different nitrogen-hydrocarbon mixtures in the evaporator temperature range of 100-120 K. Different heat transfer models have been used to predict the temperature profiles as well as the cooling capacity of the refrigerator. The results show that the homogeneous two-phase flow model is probably the most suitable model for rating the cooling capacity of a J-T refrigerator operating with nitrogen-hydrocarbon mixtures.

  1. Interactions and Toxicity of Cu-Zn mixtures to Hordeum vulgare in Different Soils Can Be Rationalized with Bioavailability-Based Prediction Models.

    PubMed

    Qiu, Hao; Versieren, Liske; Rangel, Georgina Guzman; Smolders, Erik

    2016-01-19

    Soil contamination with copper (Cu) is often associated with zinc (Zn), and the biological response to such mixed contamination is complex. Here, we investigated Cu and Zn mixture toxicity to Hordeum vulgare in three different soils, the premise being that the observed interactions are mainly due to effects on bioavailability. The toxic effect of Cu and Zn mixtures on seedling root elongation was more than additive (i.e., synergism) in soils with high and medium cation-exchange capacity (CEC) but less than additive (antagonism) in a low-CEC soil. This was found when we expressed the dose as the conventional total soil concentration. In contrast, antagonism was found in all soils when we expressed the dose as free-ion activities in soil solution, indicating that there is metal-ion competition for binding to the plant roots. Neither a concentration addition nor an independent action model explained mixture effects, irrespective of the dose expressions. In contrast, a multimetal BLM model and a WHAM-Ftox model successfully explained the mixture effects across all soils and showed that bioavailability factors mainly explain the interactions in soils. The WHAM-Ftox model is a promising tool for the risk assessment of mixed-metal contamination in soils.

  2. Estimating Lion Abundance using N-mixture Models for Social Species

    PubMed Central

    Belant, Jerrold L.; Bled, Florent; Wilton, Clay M.; Fyumagwa, Robert; Mwampeta, Stanslaus B.; Beyer, Dean E.

    2016-01-01

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170–551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species. PMID:27786283
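    Models like the one above extend the basic Poisson N-mixture likelihood, in which a latent site abundance is marginalized out of repeated counts. The sketch below shows only that core likelihood; the paper's hierarchical group-response layer is omitted, and the counts and parameter values are invented for illustration.

```python
import math

def nmixture_loglik(counts_by_site, lam, p, n_max=200):
    """Log-likelihood of a basic Poisson N-mixture model: latent abundance
    N_i ~ Poisson(lam) at each site i, repeated counts y_ij ~ Binomial(N_i, p);
    the unobserved N_i is summed out up to a truncation point n_max."""
    def log_pois(n):  # Poisson log-pmf, computed in log space for stability
        return n * math.log(lam) - lam - math.lgamma(n + 1)
    ll = 0.0
    for ys in counts_by_site:
        site_lik = sum(
            math.exp(log_pois(n)) *
            math.prod(math.comb(n, y) * p**y * (1 - p)**(n - y) for y in ys)
            for n in range(max(ys), n_max + 1))
        ll += math.log(site_lik)
    return ll

# Illustrative repeated counts at three sites (e.g., three survey occasions)
counts = [[3, 2, 3], [1, 2, 2], [4, 3, 5]]
print(nmixture_loglik(counts, lam=5.0, p=0.5))   # plausible parameters
print(nmixture_loglik(counts, lam=1.0, p=0.05))  # implausible parameters
```

    Maximizing this function over (lam, p) gives the usual N-mixture estimates; the plausible parameter pair scores far higher on these counts than the implausible one.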

  3. Estimating Lion Abundance using N-mixture Models for Social Species.

    PubMed

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.

  4. A comparison of direct and indirect methods for the estimation of health utilities from clinical outcomes.

    PubMed

    Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb

    2014-10-01

    Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.
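    The "limited dependent variable" problem described above, a point mass at full health plus a bounded range, is what makes standard linear models inappropriate for EQ-5D. The toy generative model below illustrates the idea with a two-component mixture; all coefficients are hypothetical and are not the fitted model from the study.

```python
import math
import random

def sample_eq5d(haq, rng):
    """Toy limited-dependent-variable mixture for EQ-5D given a HAQ score
    (all coefficients hypothetical): with logistic probability the subject
    belongs to a class massed at full health (utility exactly 1.0);
    otherwise the utility is normal in HAQ, clipped to the feasible
    UK tariff range [-0.594, 1.0]."""
    p_full = 1.0 / (1.0 + math.exp(-(1.5 - 2.0 * haq)))  # class membership
    if rng.random() < p_full:
        return 1.0
    return max(-0.594, min(1.0, rng.gauss(0.9 - 0.25 * haq, 0.15)))

rng = random.Random(0)
mild = [sample_eq5d(0.0, rng) for _ in range(2000)]    # HAQ = 0
severe = [sample_eq5d(2.5, rng) for _ in range(2000)]  # HAQ = 2.5
print(sum(mild) / len(mild), sum(severe) / len(severe))
```

    Unlike a linear model, every simulated utility stays inside the feasible range, and the spike at 1.0 is reproduced for mild disease.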

  5. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    NASA Astrophysics Data System (ADS)

    Gulliver, Eric A.

    The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process and for comparing interrogation results between mixtures. A drop and roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion free cross-sections of fine scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross sections and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent shaped pores below large particles but otherwise produced realistic looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations.
Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent quality high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.

  6. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    NASA Astrophysics Data System (ADS)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Monte Carlo Markov chain approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ∼ 5. We then use this constrained model to perform 21 cm forecasting for Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.

  7. A framework for the use of single-chemical transcriptomics data in predicting the hazards associated with complex mixtures of polycyclic aromatic hydrocarbons.

    PubMed

    Labib, Sarah; Williams, Andrew; Kuo, Byron; Yauk, Carole L; White, Paul A; Halappanavar, Sabina

    2017-07-01

    The assumption of additivity applied in the risk assessment of environmental mixtures containing carcinogenic polycyclic aromatic hydrocarbons (PAHs) was investigated using transcriptomics. Muta™Mouse mice were gavaged for 28 days with three doses of eight individual PAHs, two defined mixtures of PAHs, or coal tar, an environmentally ubiquitous complex mixture of PAHs. Microarrays were used to identify differentially expressed genes (DEGs) in lung tissue collected 3 days post-exposure. Cancer-related pathways perturbed by the individual or mixtures of PAHs were identified, and dose-response modeling of the DEGs was conducted to calculate gene/pathway benchmark doses (BMDs). Individual PAH-induced pathway perturbations (the median gene expression changes for all genes in a pathway relative to controls) and pathway BMDs were applied to models of additivity [i.e., concentration addition (CA), generalized concentration addition (GCA), and independent action (IA)] to generate predicted pathway-specific dose-response curves for each PAH mixture. The predicted and observed pathway dose-response curves were compared to assess the sensitivity of different additivity models. Transcriptomics-based additivity calculation showed that IA accurately predicted the pathway perturbations induced by all mixtures of PAHs. CA did not support the additivity assumption for the defined mixtures; however, GCA improved the CA predictions. Moreover, pathway BMDs derived for coal tar were comparable to BMDs derived from previously published coal tar-induced mouse lung tumor incidence data. These results suggest that in the absence of tumor incidence data, individual chemical-induced transcriptomics changes associated with cancer can be used to investigate the assumption of additivity and to predict the carcinogenic potential of a mixture.
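    The two simplest additivity baselines compared above have standard textbook forms: independent action multiplies the fractions unaffected by each component, while concentration addition sums doses expressed in toxic units and evaluates one shared curve. The sketch below uses a generic Hill-shaped effect curve with an illustrative slope; the generalized concentration addition variant for partial agonists is omitted.

```python
def independent_action(effects):
    """Independent action (response addition): components act through
    dissimilar mechanisms, so the combined fractional effect is
    1 - prod(1 - e_i) over component effects e_i in [0, 1]."""
    prod = 1.0
    for e in effects:
        prod *= 1.0 - e
    return 1.0 - prod

def concentration_addition(doses, ec50s, hill=1.0):
    """Concentration addition for components assumed to share one
    Hill-shaped dose-response curve: sum the doses in toxic units
    (dose / EC50), then evaluate the common curve at that total."""
    tu = sum(d / ec for d, ec in zip(doses, ec50s))
    return tu**hill / (1.0 + tu**hill)

print(independent_action([0.2, 0.3]))                   # 1 - 0.8*0.7 = 0.44
print(concentration_addition([1.0, 1.0], [1.0, 1.0]))   # two EC50 doses -> 2/3
```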

  8. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    USGS Publications Warehouse

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely remain largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution.
Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.

  9. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    NASA Astrophysics Data System (ADS)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

    Zeotropic mixtures never have the same liquid and vapor composition in the liquid-vapor equilibrium. Also, the bubble and the dew point are separated; this gap is called the glide temperature (Tglide). Those characteristics have made these mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles. Zeotropic mixtures as working fluids improve the performance of JT cycles by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in those cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate the local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed for, and locally applied to, fully developed two-phase annular flow in a duct with a constant wall temperature. Numerical results have been obtained using this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016), and they show good agreement.

  10. Structure-related aspects on water diffusivity in fatty acid-soap and skin lipid model systems.

    PubMed

    Norlén, L; Engblom, J

    2000-01-03

    Simplified skin barrier models are necessary to get a first-hand understanding of the very complex morphology and physical properties of the human skin barrier. In addition, it is of great importance to construct relevant models that will allow for rational testing of barrier perturbing/occlusive effects of a large variety of substances. The primary objective of this work was to study the effect of lipid morphology on water permeation through various lipid mixtures (i.e., partly neutralised free fatty acids, as well as a skin lipid model mixture). In addition, the effects of incorporating Azone® (1-dodecyl-azacycloheptan-2-one) into the skin lipid model mixture was studied. Small- and wide-angle X-ray diffraction was used for structure determinations. It is concluded that: (a) the water flux through a crystalline fatty acid-sodium soap-water mixture (s) is statistically significantly higher than the water flux through the corresponding lamellar (L(alpha)) and reversed hexagonal (H(II)) liquid crystalline phases, which do not differ between themselves; (b) the water flux through mixtures of L(alpha)/s decreases statistically significantly with increasing relative amounts of lamellar (L(alpha)) liquid crystalline phase; (c) the addition of Azone® to a skin lipid model system induces a reduction in water flux. However, further studies are needed to more closely characterise the structural basis for the occlusive effects of Azone® on water flux.

  11. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    PubMed

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolve around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
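    The MRL function discussed above is m(t) = E[T - t | T > t] = (integral of S(u) from t to infinity) / S(t). The sketch below evaluates it in closed form for a mixture with exponential kernels; the paper's kernel is a gamma, and exponentials are substituted here only to keep the tail integral elementary. The weights and rates are made up.

```python
import math

def mrl_exp_mixture(t, comps):
    """Mean residual life m(t) = (integral_t^inf S(u) du) / S(t) for a
    mixture with exponential kernels; comps is a list of (weight, rate)
    pairs, so S(t) = sum_i w_i * exp(-r_i * t) and each tail integral is
    w_i * exp(-r_i * t) / r_i."""
    surv = sum(w * math.exp(-r * t) for w, r in comps)
    tail = sum(w * math.exp(-r * t) / r for w, r in comps)
    return tail / surv

comps = [(0.6, 2.0), (0.4, 0.5)]  # hypothetical weights and rates
print(mrl_exp_mixture(0.0, comps))   # overall mean: 0.6/2 + 0.4/0.5 = 1.1
print(mrl_exp_mixture(10.0, comps))  # approaches 1/0.5 = 2 at late times
```

    The increasing shape is typical of mixtures: as t grows, the slow (low-rate) component dominates the conditional distribution, so the expected remaining lifetime rises toward that component's mean.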

  12. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    PubMed

    Yu, Kezi; Quirk, J Gerald; Djurić, Petar M

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.

  13. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models

    PubMed Central

    Yu, Kezi; Quirk, J. Gerald

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting. PMID:28953927
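    The classification step described above, scoring a new recording under two fitted mixture models and reporting the probability that the healthy model generated it, can be sketched with fixed one-dimensional Gaussian mixtures standing in for the inferred HDP models. The mixture parameters and FHR values below are invented stand-ins, not the models from the paper.

```python
import math

def gauss_mix_loglik(xs, comps):
    """Log-likelihood of a recording under a 1-D Gaussian mixture;
    comps is a list of (weight, mean, sd) triples."""
    ll = 0.0
    for x in xs:
        dens = sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
                   for w, m, s in comps)
        ll += math.log(dens)
    return ll

def prob_healthy(xs, healthy, unhealthy, prior=0.5):
    """Posterior probability that the recording came from the 'healthy'
    model, computed stably from the log-likelihood difference."""
    diff = gauss_mix_loglik(xs, unhealthy) - gauss_mix_loglik(xs, healthy)
    return prior / (prior + (1.0 - prior) * math.exp(min(700.0, diff)))

# Invented stand-ins for the two inferred models (FHR in beats per minute)
healthy = [(0.7, 140.0, 10.0), (0.3, 155.0, 8.0)]
unhealthy = [(0.5, 100.0, 15.0), (0.5, 170.0, 15.0)]
print(prob_healthy([138, 142, 150, 145], healthy, unhealthy))  # near 1
print(prob_healthy([95, 100, 92], healthy, unhealthy))         # near 0
```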

  14. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  15. An introduction to mixture item response theory models.

    PubMed

    De Ayala, R J; Santiago, S Y

    2017-02-01

    Mixture item response theory (IRT) allows one to address situations that involve a mixture of latent subpopulations that are qualitatively different but within which a measurement model based on a continuous latent variable holds. In this modeling framework, one can characterize students by both their location on a continuous latent variable as well as by their latent class membership. For example, in a study of risky youth behavior this approach would make it possible to estimate an individual's propensity to engage in risky youth behavior (i.e., on a continuous scale) and to use these estimates to identify youth who might be at the greatest risk given their class membership. Mixture IRT can be used with binary response data (e.g., true/false, agree/disagree, endorsement/not endorsement, correct/incorrect, presence/absence of a behavior), Likert response scales, partial correct scoring, nominal scales, or rating scales. In the following, we present mixture IRT modeling and two examples of its use. Data needed to reproduce analyses in this article are available as supplemental online materials at http://dx.doi.org/10.1016/j.jsp.2016.01.002. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
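    A minimal instance of the mixture IRT idea above is a two-class mixture Rasch model for binary responses: the classes share the continuous latent trait, but each carries its own item difficulties, and the observed-data likelihood averages over class membership. All weights and difficulties below are hypothetical.

```python
import math

def rasch_p(theta, b):
    """Rasch probability of a correct/endorsed response for ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mixture_rasch_lik(responses, theta, class_difficulties, weights):
    """Marginal likelihood of a binary response pattern under a mixture
    Rasch model: classes share theta but have class-specific item
    difficulties, and the likelihood is a weighted sum over classes."""
    lik = 0.0
    for w, bs in zip(weights, class_difficulties):
        contrib = 1.0
        for y, b in zip(responses, bs):
            p = rasch_p(theta, b)
            contrib *= p if y == 1 else 1.0 - p
        lik += w * contrib
    return lik

# Hypothetical two-class structure: the item difficulty ordering flips by class
weights = [0.6, 0.4]
class_difficulties = [[-1.0, 0.0, 1.0],
                      [1.0, 0.0, -1.0]]
print(mixture_rasch_lik([1, 1, 0], 0.0, class_difficulties, weights))
```

    Fitting such a model estimates the weights, difficulties, and each respondent's theta jointly; the per-pattern likelihood above is the building block of that estimation.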

  16. Population heterogeneity in the salience of multiple risk factors for adolescent delinquency.

    PubMed

    Lanza, Stephanie T; Cooper, Brittany R; Bray, Bethany C

    2014-03-01

    To present mixture regression analysis as an alternative to more standard regression analysis for predicting adolescent delinquency. We demonstrate how mixture regression analysis allows for the identification of population subgroups defined by the salience of multiple risk factors. We identified population subgroups (i.e., latent classes) of individuals based on their coefficients in a regression model predicting adolescent delinquency from eight previously established risk indices drawn from the community, school, family, peer, and individual levels. The study included N = 37,763 10th-grade adolescents who participated in the Communities That Care Youth Survey. Standard, zero-inflated, and mixture Poisson and negative binomial regression models were considered. Standard and mixture negative binomial regression models were selected as optimal. The five-class regression model was interpreted based on the class-specific regression coefficients, indicating that risk factors had varying salience across classes of adolescents. Standard regression showed that all risk factors were significantly associated with delinquency. Mixture regression provided more nuanced information, suggesting a unique set of risk factors that were salient for different subgroups of adolescents. Implications for the design of subgroup-specific interventions are discussed. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
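    The class-specific coefficients described above can be made concrete with a toy mixture Poisson regression: each latent class has its own regression of the delinquency count on a risk score, and an individual's posterior class membership follows from Bayes' rule. All coefficients below are invented for illustration.

```python
import math

def poisson_pmf(y, lam):
    """Poisson pmf evaluated in log space for numerical stability."""
    return math.exp(-lam + y * math.log(lam) - math.lgamma(y + 1))

def posterior_class(y, x, classes):
    """Posterior class membership in a mixture Poisson regression.
    classes: list of (mixing weight, intercept, slope); class c has
    rate lam_c = exp(b0_c + b1_c * x) for risk score x."""
    joint = [w * poisson_pmf(y, math.exp(b0 + b1 * x)) for w, b0, b1 in classes]
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical classes: risk is salient in class 0, nearly irrelevant in class 1
classes = [(0.5, 0.0, 1.5), (0.5, 0.0, 0.1)]
post = posterior_class(15, 2.0, classes)  # 15 delinquent acts, risk score 2
print(post)
```

    A high count paired with a high risk score is far more probable under the risk-salient class, which is the kind of subgroup-specific salience the study reports.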

  17. Evaluation and improvement of micro-surfacing mix design method and modelling of asphalt emulsion mastic in terms of filler-emulsion interaction

    NASA Astrophysics Data System (ADS)

    Robati, Masoud

    This doctoral research focuses on evaluating and improving the rutting resistance of micro-surfacing mixtures, a topic with many open research problems. The main objective of this Ph.D. program is to experimentally and analytically study and improve the rutting resistance of micro-surfacing mixtures. The major aspects investigated during this Ph.D. program are as follows: 1) evaluation and modification of current micro-surfacing mix design procedures: on this basis, a new mix design procedure is proposed for type III micro-surfacing mixtures as rut-fill materials on the road surface; unlike current mix design guidelines and specifications, the new mix design is capable of selecting the optimum mix proportions for micro-surfacing mixtures; 2) evaluation of test methods and selection of aggregate grading for type III application of micro-surfacing: within this study, a new specification for selecting aggregate grading for type III application of micro-surfacing is proposed; 3) evaluation of repeatability and reproducibility of micro-surfacing mix design tests: limits for repeatability and reproducibility of micro-surfacing mix design tests are presented; 4) a new conceptual model for the filler stiffening effect on asphalt mastic of micro-surfacing: a new model is proposed that is able to establish limits for minimum and maximum filler concentrations in the micro-surfacing mixture based only on the filler's key physical and chemical properties; 5) incorporation of reclaimed asphalt pavement (RAP) and post-fabrication asphalt shingles (RAS) in micro-surfacing mixtures: the effectiveness of the newly developed mix design procedure is further validated using recycled materials.
The results establish limits on the amounts of RAP and RAS that can be used in micro-surfacing mixtures; 6) new colored micro-surfacing formulations with improved durability and performance: a significant improvement of around 45% in the rutting resistance of colored and conventional micro-surfacing mixtures is achieved by employing a low-penetration-grade-bitumen, polymer-modified asphalt emulsion stabilized using nanoparticles.

  18. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Treesearch

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  19. Cure modeling in real-time prediction: How much does it help?

    PubMed

    Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F

    2017-08-01

    Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intended to cover situations where a non-negligible fraction of subjects appears to be cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model did, but the difference was unremarkable until late in the trial, when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals.
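    The cure-mixture idea the abstract describes can be sketched with its survival function: a cured fraction never experiences the event, so survival plateaus at the cure fraction rather than decaying to zero. The sketch below assumes illustrative parameter values (a 30% cure fraction, Weibull shape 1.2, scale 24), not estimates from RTOG 0129.

    ```python
    import math

    def weibull_cure_survival(t, cure_frac, shape, scale):
        """Survival function of a Weibull cure-mixture model:
        S(t) = pi + (1 - pi) * exp(-(t / scale) ** shape),
        where pi is the cured (long-term survivor) fraction."""
        return cure_frac + (1.0 - cure_frac) * math.exp(-((t / scale) ** shape))

    # Hypothetical parameters: 30% cured, shape 1.2, scale 24 (e.g. months).
    # Unlike a standard Weibull model, S(t) approaches the cure fraction
    # (0.30 here) as t grows, instead of approaching zero.
    S_at_48 = weibull_cure_survival(48.0, 0.30, 1.2, 24.0)
    ```

    This plateau is what makes event-time predictions from the cure-mixture model later than those from a standard Weibull fit once a cured component becomes apparent in the data.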

  20. LES/PDF studies of joint statistics of mixture fraction and progress variable in piloted methane jet flames with inhomogeneous inlet flows

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng

    2016-11-01

    The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.
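    As a minimal sketch of the kind of joint statistics the abstract examines, the joint PDF of mixture fraction and progress variable can be estimated from particle samples with a normalized 2-D histogram, and the marginal PDFs recovered by integrating out the other variable. The synthetic samples and bin counts below are illustrative stand-ins, not data from the LES/PDF simulations or the piloted jet flame measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical samples standing in for Monte Carlo PDF-method particles:
    # mixture fraction Z and progress variable c, both in [0, 1], loosely correlated.
    Z = np.clip(rng.beta(2.0, 5.0, size=20000), 0.0, 1.0)
    c = np.clip(Z + 0.1 * rng.standard_normal(20000), 0.0, 1.0)

    # Joint PDF via a density-normalized 2-D histogram over (Z, c).
    pdf, z_edges, c_edges = np.histogram2d(
        Z, c, bins=40, range=[[0.0, 1.0], [0.0, 1.0]], density=True
    )

    # Marginal PDFs follow by integrating out the other variable.
    dz = np.diff(z_edges)
    dc = np.diff(c_edges)
    pdf_Z = pdf @ dc  # integrate over c -> marginal PDF of mixture fraction
    pdf_c = dz @ pdf  # integrate over Z -> marginal PDF of progress variable
    ```

    Comparing such sample-based joint PDFs against presumed analytical shapes (e.g. products of marginals, or copula-type forms) is one way the adequacy of a presumed-PDF closure can be judged.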
