Estimate of influenza cases using generalized linear, additive and mixed models.
Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M
2015-01-01
We investigated the relationship between reported cases of influenza in Catalonia (Spain) and a set of covariates. Covariates analyzed were: population, age, date of report of influenza, and health region during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were first related to the covariates using a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were then used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can account for data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated as the incidence per 100 000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models were better adapted to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
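The per-100 000 incidence rate used throughout this abstract is a simple ratio. A minimal Python sketch (the case and population numbers below are hypothetical illustrations, not values from the study):

```python
def incidence_rate(cases, population, per=100_000):
    """Incidence rate per `per` people, as in the abstract's per-100 000 rates."""
    return cases * per / population

# Hypothetical example: 11 reported cases in a health region of 80,000 people.
rate = incidence_rate(11, 80_000)
print(rate)  # 13.75, on the same scale as the reported winter mean rate
```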
Regression analysis of mixed recurrent-event and panel-count data with additive rate models.
Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L
2015-03-01
Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007, The Statistical Analysis of Recurrent Events. New York: Springer-Verlag; Zhao et al., 2011, Test 20, 1-42). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013, Statistics in Medicine 32, 1954-1963). In this article, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study.
Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry
2016-01-01
Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels; yet these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond the simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured on continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies.
Resources allocation in healthcare for cancer: a case study using generalised additive mixed models.
Musio, Monica; Sauleau, Erik A; Augustin, Nicole H
2012-11-01
Our aim is to develop a method to help re-allocate healthcare resources linked to cancer, in order to replan the allocation of providers. Ageing of the population has a considerable impact on the use of health resources, because aged people require more specialised medical care, notably due to cancer. We propose a method for monitoring changes in cancer incidence in space and time, taking into account two age categories that reflect the general organisation of healthcare. We use generalised additive mixed models with a Poisson response, following the methodology presented in Wood, Generalised Additive Models: An Introduction with R, Chapman and Hall/CRC, 2006. Besides one-dimensional smooth functions accounting for non-linear effects of covariates, the space-time interaction can be modelled using scale-invariant smoothers. Incidence data collected by a general cancer registry between 1992 and 2007 in a specific area of France are studied. Our best model exhibits a strong increase in cancer incidence over time and a clear spatial pattern for people aged over 70 years, with a higher incidence in the central band of the region. This is a strong argument for re-allocating resources for cancer care of the elderly in this sub-region.
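A Poisson model with a log link, as used here, predicts expected counts as population times the exponentiated linear predictor (i.e., log(population) enters as an offset). A minimal stdlib-Python sketch, not the authors' R/mgcv implementation; the population size and linear-predictor value are hypothetical:

```python
import math

def expected_cases(population, eta):
    """Poisson GAM-style mean: counts = population * exp(linear predictor),
    i.e. a log link with log(population) as an offset."""
    return population * math.exp(eta)

# Hypothetical total of smooth-term contributions for one area-year cell
# of 20,000 people.
print(round(expected_cases(20_000, -6.0), 1))  # 49.6 expected cases
```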
Baqué, Michèle; Amendt, Jens
2013-01-01
Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model that describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data, to assure their quality and to show the importance of checking them carefully before conducting the statistical tests, and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets, using generalised additive mixed models for the first time.
Yin, Junming; Chen, Xi; Xing, Eric P.
2016-01-01
We consider the problem of sparse variable selection in nonparametric additive models, with the prior knowledge of the structure among the covariates to encourage those variables within a group to be selected jointly. Previous works either study the group sparsity in the parametric setting (e.g., group lasso), or address the problem in the nonparametric setting without exploiting the structural information (e.g., sparse additive models). In this paper, we present a new method, called group sparse additive models (GroupSpAM), which can handle group sparsity in additive models. We generalize the ℓ1/ℓ2 norm to Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we derive a novel thresholding condition for identifying the functional sparsity at the group level, and propose an efficient block coordinate descent algorithm for constructing the estimate. We demonstrate by simulation that GroupSpAM substantially outperforms the competing methods in terms of support recovery and prediction accuracy in additive models, and also conduct a comparative experiment on a real breast cancer dataset.
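In the parametric (group lasso) special case of the ℓ1/ℓ2 penalty that GroupSpAM generalizes, the block coordinate descent update applies a group soft-thresholding operator: a whole group of coefficients is zeroed when its Euclidean norm falls below the penalty level. A minimal sketch of that operator (illustrative only; the paper's version operates on functions in Hilbert spaces, not finite vectors):

```python
import math

def group_soft_threshold(v, lam):
    """Shrink the group coefficient vector v toward zero; the entire group
    is set to zero when its Euclidean norm is at most lam."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)
    scale = 1.0 - lam / norm
    return [scale * x for x in v]

print(group_soft_threshold([3.0, 4.0], 5.0))  # norm 5 <= lam: [0.0, 0.0]
print(group_soft_threshold([3.0, 4.0], 2.5))  # shrunk by 0.5: [1.5, 2.0]
```

This all-or-nothing behavior at the group level is what produces joint selection of variables within a group.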
Functional Generalized Additive Models.
McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David
2014-01-01
We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
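The FGAM linear predictor is an integral of F{X(t), t} over t. On an observed grid, this can be approximated with a quadrature rule; the sketch below uses the trapezoid rule with a hypothetical surface F and a stand-in curve X(t), purely to illustrate the structure of the model (the paper estimates F with penalized tensor-product B-splines, which is not reproduced here):

```python
def fgam_linear_predictor(x_vals, t_vals, F):
    """Trapezoid-rule approximation of the FGAM integral of F(X(t), t) over t."""
    total = 0.0
    for i in range(len(t_vals) - 1):
        h = t_vals[i + 1] - t_vals[i]
        total += 0.5 * h * (F(x_vals[i], t_vals[i]) + F(x_vals[i + 1], t_vals[i + 1]))
    return total

# Hypothetical example: X(t) = t^2 observed on a grid, F(x, t) = x * t,
# so the exact integral of t^3 over [0, 1] is 1/4.
t = [i / 10 for i in range(11)]
x = [ti ** 2 for ti in t]
eta = fgam_linear_predictor(x, t, lambda x_, t_: x_ * t_)
print(eta)  # close to 0.25
```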
Lee, S; Richard Dimenna, R; David Tamburello, D
2008-11-13
The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the time duration of operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This results in high operational costs from the long mixing times and, for the same reason, in high maintenance and repair costs. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and
Petersen, Ashley; Witten, Daniela; Simon, Noah
2016-01-01
We consider the problem of predicting an outcome variable using p covariates that are measured on n independent observations, in a setting in which additive, flexible, and interpretable fits are desired. We propose the fused lasso additive model (FLAM), in which each additive function is estimated to be piecewise constant with a small number of adaptively-chosen knots. FLAM is the solution to a convex optimization problem, for which a simple algorithm with guaranteed convergence to a global optimum is provided. FLAM is shown to be consistent in high dimensions, and an unbiased estimator of its degrees of freedom is proposed. We evaluate the performance of FLAM in a simulation study and on two data sets. Supplemental materials are available online, and the R package flam is available on CRAN. PMID:28239246
Lee, S; Dimenna, R; Tamburello, D
2011-02-14
height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. One of the main objectives in the waste processing is to provide feed of a uniform slurry composition at a certain weight percentage (e.g. typically {approx}13 wt% at SRS) over an extended period of time. In preparation of the sludge for slurrying, several important questions have been raised with regard to sludge suspension and mixing of the solid suspension in the bulk of the tank: (1) How much time is required to prepare a slurry with a uniform solid composition? (2) How long will it take to suspend and mix the sludge for uniform composition in any particular waste tank? (3) What are good mixing indicators to answer the questions concerning sludge mixing stated above in a general fashion applicable to any waste tank/slurry pump geometry and fluid/sludge combination?
Weltz, Kay; Kock, Alison A; Winker, Henning; Attwood, Colin; Sikweyiya, Monwabisi
2013-01-01
Shark attacks on humans are high profile events which can significantly influence policies related to the coastal zone. A shark warning system in South Africa, Shark Spotters, recorded 378 white shark (Carcharodon carcharias) sightings at two popular beaches, Fish Hoek and Muizenberg, over 3690 six-hour spotting shifts during the months September to May, 2006 to 2011. The probabilities of shark sightings were related to environmental variables using Binomial Generalized Additive Mixed Models (GAMMs). Sea surface temperature was significant, with the probability of shark sightings increasing rapidly as SST exceeded 14 °C and approaching a maximum at 18 °C, after which it remained high. An 8 times (Muizenberg) and 5 times (Fish Hoek) greater likelihood of sighting a shark was predicted at 18 °C than at 14 °C. Lunar phase was also significant, with a prediction of 1.5 times (Muizenberg) and 4 times (Fish Hoek) greater likelihood of a shark sighting at new moon than at full moon. At Fish Hoek, the probability of sighting a shark was 1.6 times higher during the afternoon shift compared to the morning shift, but no diel effect was found at Muizenberg. A significant increase in the number of shark sightings was identified over the last three years, highlighting the need for ongoing research into shark attack mitigation. These patterns will be incorporated into shark awareness and bather safety campaigns in Cape Town.
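The "X times greater likelihood" statements come from converting the binomial GAMM's fitted log-odds back to probabilities and taking their ratio. A sketch of that back-transformation, with entirely hypothetical log-odds values chosen only to reproduce an approximately 8-fold ratio like the Muizenberg result:

```python
import math

def inv_logit(eta):
    """Inverse link for a binomial GAMM: log-odds -> probability."""
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical fitted log-odds of a sighting at two sea surface temperatures.
p_cold = inv_logit(-4.0)   # e.g. at 14 degC
p_warm = inv_logit(-1.8)   # e.g. at 18 degC
print(round(p_warm / p_cold, 1))  # 7.9, roughly an 8-fold increase
```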
NASA Astrophysics Data System (ADS)
Mellor, Andrea F. P.; Cey, Edwin E.
2015-11-01
The Abbotsford-Sumas aquifer (ASA) has a history of nitrate contamination from agricultural land use and manure application to soils, yet little is known about its microbial groundwater quality. The goal of this study was to investigate the spatiotemporal distribution of pathogen indicators (Escherichia coli [E. coli] and total coliform [TC]) and nitrate in groundwater, and their potential relation to hydrologic drivers. Sampling of 46 wells over an 11-month period confirmed elevated nitrate concentrations, with more than 50% of samples exceeding 10 mg-N/L. E. coli detections in groundwater were infrequent (4 of 385 total samples) and attributed mainly to surface water-groundwater connections along Fishtrap Creek, which tested positive for E. coli in every sampling event. TC was detected frequently in groundwater (70% of samples) across the ASA. Generalized additive mixed models (GAMMs) yielded valuable insights into relationships between TC or nitrate and a range of spatial, temporal, and hydrologic explanatory variables. Increased TC values over the wetter fall and winter period were most strongly related to groundwater temperatures and levels, while precipitation and well location were weaker (but still significant) predictors. In contrast, the moderate temporal variability in nitrate concentrations was not significantly related to hydrologic forcings. TC was relatively widespread across the ASA and spatial patterns could not be attributed solely to surface water connectivity. Varying nitrate concentrations across the ASA were significantly related to both well location and depth, likely due to spatially variable nitrogen loading and localized geochemical attenuation (i.e., denitrification). Vulnerability of the ASA to bacteria was clearly linked to hydrologic conditions, and was distinct from nitrate, such that a groundwater management strategy specifically for bacterial contaminants is warranted.
Optimization of soil mixing technology through metallic iron addition.
Moos, L. P.
1999-01-15
Enhanced soil mixing is a process used to remove volatile organic compounds (VOCs) from soil. In this process, also known as soil mixing with thermally enhanced soil vapor extraction, or SM/TESVE, a soil mixing apparatus breaks up and mixes a column of soil up to 9 m (30 ft) deep; simultaneously, hot air is blown through the soil. The hot air carries the VOCs to the surface where they are collected and safely disposed of. This technology is cost effective at high VOC concentrations, but it becomes cost prohibitive at low concentrations. Argonne National Laboratory-East conducted a project to evaluate ways of improving the effectiveness of this system. The project investigated the feasibility of integrating the SM/TESVE process with three soil treatment processes--soil vapor extraction, augmented indigenous biodegradation, and zero-valent iron addition. Each of these technologies was considered a polishing treatment designed to remove the contaminants left behind by enhanced soil mixing. The experiment was designed to determine if the overall VOC removal effectiveness and cost-effectiveness of the SM/TESVE process could be improved by integrating this approach with one of the polishing treatment systems.
VISUAL PLUMES MIXING ZONE MODELING SOFTWARE
The U.S. Environmental Protection Agency has a long history of both supporting plume model development and providing mixing zone modeling software. The Visual Plumes model is the most recent addition to the suite of public-domain models available through the EPA-Athens Center f...
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ^{15}N and δ^{18}O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the
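For context, the simplest deterministic case that these probabilistic models generalize is the two-source, one-tracer mixing model, where the source fraction follows directly from a mass balance on the tracer. A sketch with hypothetical δ^{15}N values (the study's six-source problems cannot be solved this way, which is precisely why SIAR/PMC/SIRS are needed):

```python
def two_source_fraction(d_mix, d_a, d_b):
    """Fraction of source A in a mixture, from one tracer and two sources:
    d_mix = f * d_a + (1 - f) * d_b, solved for f."""
    return (d_mix - d_b) / (d_a - d_b)

# Hypothetical delta-15N values (permil): source A ~ 0, source B ~ 10,
# sample measured at 4.
f_a = two_source_fraction(4.0, 0.0, 10.0)
print(f_a)  # 0.6
```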
MixSIAR: advanced stable isotope mixing models in R
Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...
Percolation phenomenon in mixed reverse micelles: the effect of additives.
Paul, Bidyut K; Mitra, Rajib K
2006-03-01
The conductivity of AOT/IPM/water reverse micellar systems as a function of temperature has been found to be non-percolating at three different concentrations (100, 175 and 250 mM), whereas the addition of nonionic surfactants [polyoxyethylene(10) cetyl ether (Brij-56) and polyoxyethylene(20) cetyl ether (Brij-58)] induces temperature-driven percolation in conductance in the otherwise non-percolating AOT/isopropyl myristate (IPM)/water system at constant compositions (i.e., at fixed total surfactant concentration, omega and X(nonionic)). The influence of total surfactant (micellar) concentration on the temperature-induced percolation behaviour of these systems has been investigated. The effect of Brij-58 is more pronounced than that of Brij-56 in inducing percolation. The threshold percolation temperature, Tp, has been determined for these systems in the presence of additives of different molecular structures, physical parameters and/or interfacial properties. The additives have shown both assisting and resisting effects on the percolation threshold. Bile salt (sodium cholate), urea, formamide, cholesteryl acetate, cholesteryl benzoate, toluene, a triblock copolymer [(EO)13(PO)30(EO)13, Pluronic, PL64], polybutadiene and sucrose esters (sucrose dodecanoate L-1695 and sucrose monostearate S-1670) distinctly fall in the former category, whereas sodium chloride, cholesteryl palmitate, crown ether and ethylene glycol constitute the latter for both systems. Sucrose dodecanoate (L-595) had an almost marginal effect on the process. The observed behaviour of these additives on the percolation phenomenon has been explained in terms of the critical packing parameter and/or other factors that influence the texture of the interface and the solution properties of the mixed reverse micellar systems. The activation energy, Ep, for the percolation process has been evaluated. Ep values for the AOT/Brij-56 systems have been found to be lower than those of
Modeling Mix in ICF Implosions
NASA Astrophysics Data System (ADS)
Weber, C. R.; Clark, D. S.; Chang, B.; Eder, D. C.; Haan, S. W.; Jones, O. S.; Marinak, M. M.; Peterson, J. L.; Robey, H. F.
2014-10-01
The observation of ablator material mixing into the hot spot of ICF implosions correlates with reduced yield in National Ignition Campaign (NIC) experiments. Higher-Z ablator material radiatively cools the central hot spot, inhibiting thermonuclear burn. This talk focuses on modeling a "high-mix" implosion from the NIC, where greater than 1000 ng of ablator material was inferred to have mixed into the hot spot. Standard post-shot modeling of this implosion does not predict the large amounts of ablator mix necessary to explain the data. Other issues are explored in this talk, and sensitivity to the method of radiation transport is found. Compared with radiation diffusion, Sn transport can increase ablation-front growth and alter the blow-off dynamics of capsule dust. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Lartillot, Nicolas; Phillips, Matthew J; Ronquist, Fredrik
2016-07-19
Over recent years, several alternative relaxed clock models have been proposed in the context of Bayesian dating. These models fall in two distinct categories: uncorrelated and autocorrelated across branches. The choice between these two classes of relaxed clocks is still an open question. More fundamentally, the true process of rate variation may have both long-term trends and short-term fluctuations, suggesting that more sophisticated clock models unfolding over multiple time scales should ultimately be developed. Here, a mixed relaxed clock model is introduced, which can be mechanistically interpreted as a rate variation process undergoing short-term fluctuations on top of Brownian long-term trends. Statistically, this mixed clock represents an alternative solution to the problem of choosing between autocorrelated and uncorrelated relaxed clocks, by proposing instead to combine their respective merits. Fitting this model on a dataset of 105 placental mammals, using both node-dating and tip-dating approaches, suggests that the two pure clocks, Brownian and white noise, are rejected in favour of a mixed model with approximately equal contributions for its uncorrelated and autocorrelated components. The tip-dating analysis is particularly sensitive to the choice of the relaxed clock model. In this context, the classical pure Brownian relaxed clock appears to be overly rigid, leading to biases in divergence time estimation. By contrast, the use of a mixed clock leads to more recent and more reasonable estimates for the crown ages of placental orders and superorders. Altogether, the mixed clock introduced here represents a first step towards empirically more adequate models of the patterns of rate variation across phylogenetic trees. This article is part of the themed issue 'Dating species divergences using rocks and clocks'. PMID:27325829
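A minimal simulation of such a mixed clock, a long-term Brownian trend on log-rate plus branch-specific white noise, can be sketched as follows; the tree encoding and parameter names are invented for illustration, not taken from the paper:

```python
import math, random

def simulate_mixed_clock(parent, brlen, sigma_b, sigma_w, seed=0):
    """Simulate per-branch substitution rates under a mixed relaxed clock:
    a Brownian trend on log-rate along the tree (autocorrelated) plus an
    independent white-noise deviation on each branch (uncorrelated)."""
    rng = random.Random(seed)
    n = len(parent)
    trend = [0.0] * n   # log-rate trend at each node; root (node 0) trend = 0
    rates = [1.0] * n
    for i in range(1, n):  # nodes ordered so that parent[i] < i
        t = brlen[i]
        trend[i] = trend[parent[i]] + rng.gauss(0.0, sigma_b * math.sqrt(t))
        # branch rate mixes the long-term trend with a short-term fluctuation
        rates[i] = math.exp(trend[i] + rng.gauss(0.0, sigma_w))
    return rates

# 5-node toy tree: node 0 is the root, with parent-array encoding
parent = [0, 0, 0, 1, 1]
brlen = [0.0, 1.0, 1.0, 0.5, 0.5]
print(simulate_mixed_clock(parent, brlen, sigma_b=0.3, sigma_w=0.1))
```

Setting sigma_w = 0 recovers a pure Brownian (autocorrelated) clock and sigma_b = 0 a pure white-noise (uncorrelated) clock, which is what makes the mixture a natural compromise between the two classes.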
Overview of Neutrino Mixing Models and Their Mixing Angle Predictions
Albright, Carl H.
2009-11-01
An overview of neutrino-mixing models is presented with emphasis on the types of horizontal flavor and vertical family symmetries that have been invoked. Distributions for the mixing angles of many models are displayed. Ways to differentiate among the models and to narrow the list of viable models are discussed.
Bayesian stable isotope mixing models
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
Intuitionistic fuzzy stability of a general mixed additive-cubic equation
NASA Astrophysics Data System (ADS)
Xu, Tian Zhou; Rassias, John Michael; Xu, Wan Xin
2010-06-01
We establish some stability results concerning the general mixed additive-cubic functional equation, f(kx+y)+f(kx-y) = kf(x+y)+kf(x-y)+2f(kx)-2kf(x), in intuitionistic fuzzy normed spaces. In addition, we show under some suitable conditions that an approximately mixed additive-cubic function can be approximated by a mixed additive and cubic mapping.
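The functional equation above is satisfied exactly by any mixed additive-cubic map f(x) = ax + bx^3, which a quick numerical check confirms (illustrative sketch; the coefficients a and b are arbitrary choices, not from the paper):

```python
import random

def f(x, a=2.0, b=3.0):
    """A mixed additive-cubic map: additive part a*x plus cubic part b*x**3."""
    return a * x + b * x ** 3

def residual(x, y, k):
    """Left minus right side of f(kx+y)+f(kx-y) = kf(x+y)+kf(x-y)+2f(kx)-2kf(x)."""
    lhs = f(k * x + y) + f(k * x - y)
    rhs = k * f(x + y) + k * f(x - y) + 2 * f(k * x) - 2 * k * f(x)
    return lhs - rhs

rng = random.Random(1)
for _ in range(100):
    x, y = rng.uniform(-5, 5), rng.uniform(-5, 5)
    k = rng.randint(2, 6)
    # the identity holds exactly; only floating-point error remains
    assert abs(residual(x, y, k)) < 1e-6 * max(1.0, abs(f(k * x + y)))
print("identity holds for f(x) = a*x + b*x^3")
```

The stability question the paper addresses is the converse: if a map satisfies the equation only approximately, how close must it be to an exact mixed additive and cubic mapping.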
Model Verification of Mixed Dynamic Systems
NASA Technical Reports Server (NTRS)
Evensen, D. A.; Chrostowski, J. D.; Hasselman, T. K.
1982-01-01
MOVER uses experimental data to verify mathematical models of "mixed" dynamic systems. The term "mixed" refers to interactive mechanical, hydraulic, electrical, and other components. Program compares analytical transfer functions with experiment.
Effect of mixed additives on lead-acid battery electrolyte
NASA Astrophysics Data System (ADS)
Bhattacharya, Arup; Basumallick, Indra Narayan
This paper describes the corrosion behaviour of the positive and negative electrodes of a lead-acid battery in 5 M H2SO4 with binary additives such as mixtures of phosphoric acid and boric acid, phosphoric acid and tin sulphate, and phosphoric acid and picric acid. The effect of these additives is examined from the Tafel polarisation curves, double-layer capacitance and percentage inhibition efficiency. A lead salt battery has been fabricated replacing the binary mixture with an alternative electrolyte, and the above electrochemical parameters have been evaluated for this lead salt battery. The results are explained in terms of H+ ion transport and the morphological change of the PbSO4 layer.
Chen, Xuechu; He, Shengbing; Zhang, Yueping; Huang, Xiaobo; Huang, Yingying; Chen, Danyue; Huang, Xiaochen; Tang, Jianwu
2015-10-01
Wetlands and ponds are frequently used to remove nitrate from effluents or runoff. However, the efficiency of this approach is limited. Based on the assumption that introducing vertical mixing to the water column, together with carbon addition, would benefit diffusion across the sediment-water interface, we conducted simulation experiments to identify a method for enhancing nitrate removal. The results suggested that the sediment-water interface has a great potential for nitrate removal, and this potential can be activated after several days of acclimation. Adding additional carbon plus mixing significantly increases the nitrate removal capacity, and the removal of total nitrogen (TN) and nitrate-nitrogen (NO3(-)-N) is well fitted by a first-order reaction model. Adding Hydrilla verticillata debris as a carbon source increased nitrate removal, whereas adding Eichhornia crassipes decreased it. Adding ethanol plus mixing greatly improved the removal performance, with the removal rates of NO3(-)-N and TN reaching 15.0-16.5 g m(-2) d(-1). The feasibility of this enhancement method was further confirmed with a wetland microcosm, in which the NO3(-)-N removal rate was maintained at 10.0-12.0 g m(-2) d(-1) at a hydraulic loading rate of 0.5 m d(-1).
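The first-order model mentioned in the abstract, C(t) = C0·exp(-kt), can be fitted by a log-linear regression; the data below are synthetic for illustration, not the study's measurements:

```python
import math

def fit_first_order(times_d, conc):
    """Fit C(t) = C0 * exp(-k t) by linear regression of ln(C) on t;
    returns (C0, k)."""
    ys = [math.log(c) for c in conc]
    n = len(times_d)
    tbar = sum(times_d) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times_d, ys)) / \
            sum((t - tbar) ** 2 for t in times_d)
    return math.exp(ybar - slope * tbar), -slope

# Synthetic nitrate decay: C0 = 20 mg/L, k = 0.8 d^-1
times = [0.0, 0.5, 1.0, 1.5, 2.0]
conc = [20.0 * math.exp(-0.8 * t) for t in times]
C0, k = fit_first_order(times, conc)
print(C0, k)  # ~20.0, ~0.8
```

The fitted rate constant k is what would then be scaled by depth or areal loading to express removal in g m(-2) d(-1), as the abstract does.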
Use and abuse of mixing models (MixSIAR)
Background/Question/Methods: Characterizing trophic links in food webs is a fundamental ecological question. In our efforts to quantify energy flow through food webs, ecologists have increasingly used mixing models to analyze biological tracer data, often from stable isotopes. Whil...
Gray component replacement using color mixing models
NASA Astrophysics Data System (ADS)
Kang, Henry R.
1994-05-01
A new approach to gray component replacement (GCR) has been developed. It employs color mixing theory for modeling the spectral fit between 3-color and 4-color prints. To achieve this goal, we first examine the accuracy of the models with respect to experimental results by applying them to prints made by a Canon Color Laser Copier-500 (CLC-500). An empirical halftone correction factor is used to improve the data fitting. Among the models tested, the halftone-corrected Kubelka-Munk theory gives the closest fit, followed by the halftone-corrected Beer-Bouguer law and the Yule-Nielsen approach. We then apply the halftone-corrected Beer-Bouguer law to GCR. The main feature of this GCR approach is that it is based on spectral measurements of the primary color step wedges and a software package implementing the color mixing model. The software determines the amount of the gray component to be removed, then adjusts each primary color until a good match of the peak wavelengths between the 3-color and 4-color spectra is obtained. Results indicate that the average ΔEab between cmy and cmyk renditions of 64 color patches is 3.11; eighty-seven percent of the patches have ΔEab less than 5 units. The advantage of this approach is its simplicity; there is no need for the black printer and under-color addition. Because this approach is based on spectral reproduction, it minimizes metamerism.
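A hedged sketch of Beer-Bouguer-style spectral mixing of the kind used in such GCR work: the reflectance of an overprint is the paper reflectance scaled, per band, by each ink's transmittance raised to its fractional coverage, optionally with a halftone-correction exponent. The toy spectra and the exponent n below are invented, not measured data from the paper:

```python
def beer_bouguer_mix(paper_refl, ink_trans, coverages, n=1.0):
    """Predict overprint spectral reflectance with the Beer-Bouguer law:
    each ink at fractional coverage c scales reflectance by T**(c*n) per band.
    The exponent n is a hypothetical halftone-correction factor (n=1: none)."""
    out = list(paper_refl)
    for trans, c in zip(ink_trans, coverages):
        out = [r * (t ** (c * n)) for r, t in zip(out, trans)]
    return out

# Toy 4-band spectra (invented values)
paper = [0.9, 0.9, 0.9, 0.9]
cyan = [0.2, 0.6, 0.9, 0.9]
magenta = [0.9, 0.3, 0.8, 0.9]
yellow = [0.9, 0.9, 0.3, 0.9]
black = [0.1, 0.1, 0.1, 0.1]

# A 3-color (cmy) print versus a 4-color (cmyk) print with reduced cmy coverage
cmy = beer_bouguer_mix(paper, [cyan, magenta, yellow], [0.8, 0.5, 0.4])
cmyk = beer_bouguer_mix(paper, [cyan, magenta, yellow, black],
                        [0.5, 0.2, 0.1, 0.3])
print(cmy, cmyk)
```

GCR then amounts to searching for the black coverage and reduced cmy coverages that make the two predicted spectra match.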
Toward Better Modeling of Supercritical Turbulent Mixing
NASA Technical Reports Server (NTRS)
Selle, Laurent; Okongo'o, Nora; Bellan, Josette; Harstad, Kenneth
2008-01-01
This study was done as part of an effort to develop computational models representing turbulent mixing under thermodynamically supercritical (here, high-pressure) conditions. The question was whether the large-eddy simulation (LES) approach, developed previously for atmospheric-pressure compressible-perfect-gas and incompressible flows, can be extended to real-gas non-ideal (including supercritical) fluid mixtures. [In LES, the governing equations are approximated such that the flow field is spatially filtered and subgrid-scale (SGS) phenomena are represented by models.] The study included analyses of results from direct numerical simulation (DNS) of several such mixing layers based on the Navier-Stokes, total-energy, and conservation-of-chemical-species governing equations. Comparison of LES and DNS results revealed the need to augment the atmospheric-pressure LES equations with additional SGS momentum and energy terms. These new terms are the direct result of regions of high density-gradient magnitude found in the DNS and observed experimentally under fully turbulent flow conditions. A model has been derived for the new term in the momentum equation and was found to perform well at small filter size but to deteriorate with increasing filter size. Several alternative models were derived for the new SGS term in the energy equation; further investigation would be needed to determine whether they are too computationally intensive for LES.
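The filtering idea in brackets can be illustrated directly: applying a filter to a product of fields does not equal the product of the filtered fields, and the difference is precisely the SGS term that LES must model. A toy 1-D sketch with a synthetic field and an assumed box-filter width:

```python
import math

def box_filter(u, w):
    """Top-hat (box) filter of width w points, periodic boundaries."""
    n = len(u)
    half = w // 2
    return [sum(u[(i + j) % n] for j in range(-half, half + 1)) / (2 * half + 1)
            for i in range(n)]

# A 1-D "velocity" field: a resolved wave plus fine-scale fluctuations
n = 64
u = [math.sin(2 * math.pi * i / n) + 0.3 * math.sin(2 * math.pi * 12 * i / n)
     for i in range(n)]

uu_bar = box_filter([x * x for x in u], 5)       # filter of the product
ubar = box_filter(u, 5)                          # product of the filtered field
tau = [a - b * b for a, b in zip(uu_bar, ubar)]  # SGS term: bar(uu) - bar(u)^2
print(max(abs(t) for t in tau))  # nonzero: this is what SGS models close
```

The high-density-gradient regions described in the abstract make the analogous terms in the real-gas momentum and energy equations large enough that they can no longer be neglected.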
Transition mixing study empirical model report
NASA Technical Reports Server (NTRS)
Srinivasan, R.; White, C.
1988-01-01
The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated, although the empirical model shows faster mixing rates than the numerical model. Both models show drift of the jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer-wall jets or for jets injected in rectangular ducts.
Computational Process Modeling for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2014-01-01
Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.
Modeling populations of rotationally mixed massive stars
NASA Astrophysics Data System (ADS)
Brott, I.
2011-02-01
Massive stars can be considered cosmic engines. With their high luminosities, strong stellar winds and violent deaths they drive the evolution of galaxies throughout the history of the universe. Despite the importance of massive stars, their evolution is still poorly understood. Two major issues have plagued evolutionary models of massive stars until today: mixing and mass loss. Since the effects of mass loss remain limited on the main sequence in the considered mass and metallicity range, this thesis concentrates on the role of mixing in massive stars, approaching the problem at the crossroads between observations and simulations. The main question: do evolutionary models of single stars, accounting for the effects of rotation, reproduce the observed properties of real stars? In particular, we are interested in whether the evolutionary models can reproduce the surface abundance changes during the main-sequence phase. To constrain our models we build a population synthesis model for the sample of the VLT-FLAMES Survey of Massive Stars, for which the star-formation history and rotational velocity distribution are well constrained. We consider the four main regions of the Hunter diagram: nitrogen-unenriched slow rotators and nitrogen-enriched fast rotators, which are predicted by theory, and nitrogen-enriched slow rotators and nitrogen-unenriched fast rotators, which are not predicted by our model. We conclude that currently these comparisons are not sufficient to verify the theory of rotational mixing; physical processes in addition to rotational mixing appear necessary to explain the stars in the latter two regions. The chapters of this thesis have been published in the following journals: Ch. 2: "Rotating Massive Main-Sequence Stars I: Grids of Evolutionary Models and Isochrones", I. Brott, S. E. de Mink, M. Cantiello, N. Langer, A. de Koter, C. J. Evans, I. Hunter, C. Trundle, J. S. Vink, submitted to Astronomy & Astrophysics. Ch. 3: "The VLT-FLAMES Survey of Massive
Extended Generalized Linear Latent and Mixed Model
ERIC Educational Resources Information Center
Segawa, Eisuke; Emery, Sherry; Curry, Susan J.
2008-01-01
The generalized linear latent and mixed modeling (GLLAMM framework) includes many models such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome the limitation by adding a submodel that specifies a…
NASA Astrophysics Data System (ADS)
Jeong, Hee-Hoon; Jin, Hyun-Ho; Ha, Sung-Ho; Jang, Suk-Hee; Kang, Yong-Gu; Han, Min-Hyun
2016-03-01
A series of experiments was performed to determine an optimum balance between processability and performance of a highly loaded silica compound. The experiments evaluated four different silane injection times. All mixing related to silane addition was conducted with a scaled-up "Tandem" mixer line. With the exception of silane addition timing, almost all operating conditions were held constant between experiments. It was found that when the silane was introduced earlier in the mixing cycle, the reaction was more complete and the bound rubber content was higher, but processability indicators such as sheet forming and Mooney plasticity were negatively impacted. On the other hand, as silane injection was delayed to later in the mixing process, filler dispersion and sheet forming improved, while both the bound rubber content and the completeness of the silane reaction decreased. With changes in silane addition time, the processability and properties of a silica compound can thus be controlled.
Evaluation of Warm Mix Asphalt Additives for Use in Modified Asphalt Mixtures
NASA Astrophysics Data System (ADS)
Chamoun, Zahi
The objective of this research effort is to evaluate the use of warm-mix additives with polymer-modified and terminal-blend tire rubber asphalt mixtures from Nevada and California. The research was completed in two stages: the first stage evaluated two WMA technologies, Sasobit and Advera, and the second stage evaluated one additional WMA technology, Evotherm. The experimental program covered the evaluation of the resistance of the mixtures to moisture damage, the performance characteristics of the mixtures, and mechanistic analysis of the mixtures in simulated pavements. In both stages, resistance to moisture damage was evaluated using the indirect tensile test and the dynamic modulus at multiple freeze-thaw cycles, and the resistance of the various asphalt mixtures to permanent deformation was evaluated using the Asphalt Mixture Performance Tester (AMPT). Resistance of the untreated mixes to fatigue cracking using the flexural beam fatigue test was completed only for the first stage. One source of aggregates sampled in two different batches, three warm-mix asphalt technologies (Advera, Sasobit and Evotherm) and three asphalt binder types (neat, polymer-modified, and terminal-blend tire rubber modified asphalt binders) typically used in Nevada and California were evaluated in this study. This thesis presents the resistance of the first-stage mixtures to permanent deformation and fatigue cracking using two warm-mix additives, Advera and Sasobit, and the resistance to moisture damage and permanent deformation of the second-stage mixtures with one warm-mix additive, Evotherm.
Cylindrical Mixing Layer Model in Stellar Jet
NASA Astrophysics Data System (ADS)
Choe, Seung-Urn; Yu, Kyoung Hee
1994-12-01
We have developed a cylindrical mixing layer model of a stellar jet, including the cooling effect, in order to understand the optical emission mechanism along collimated high-velocity stellar jets associated with young stellar objects. The cylindrical results are the same as the 2D ones presented by Canto & Raga (1991) because the entrainment efficiency in our cylindrical model takes the same value as in the 2D model. We discuss the morphological and physical characteristics of the mixing layers under the cooling effect. As the jet Mach number increases, the initial temperature of the mixing layer rises because the kinetic energy of the jet partly converts to thermal energy of the mixing layer. The initial cooling of the mixing layer is very severe, changing its outer boundary radius; the subsequent change becomes adiabatic. The number of Mach disks in the stellar jet and the total radiative luminosity of the mixing layer, based on our cylindrical calculation, agree well with observations.
Effects of two warm-mix additives on aging, rheological and failure properties of asphalt cements
NASA Astrophysics Data System (ADS)
Omari, Isaac Obeng
Sustainable road construction and maintenance can be supported when excellent warm-mix additives are employed in the modification of asphalt. These warm-mix additives provide remedies for today's requirements such as fatigue cracking resistance, durability, thermal cracking resistance, rutting resistance and resistance to moisture damage. Warm-mix additives are based on waxes and surfactants, which reduce energy consumption and carbon dioxide emissions significantly during the construction phase of the pavement. In this study, the effects of two warm-mix additives, siloxane and oxidised polyethylene wax, on roofing asphalt flux (RAF) and asphalt modified with waste engine oil (655-7) were investigated to evaluate the rheological, aging and failure properties of the asphalt binders. In terms of the properties of these two different asphalts, RAF has proved to be a superior-quality asphalt whereas 655-7 is a poor-quality asphalt. The properties of the modified asphalt samples were measured by Superpave(TM) tests such as the Dynamic Shear Rheometer (DSR) test and the Bending Beam Rheometer (BBR) test, as well as modified protocols such as the extended BBR (eBBR) test (LS-308) and the Double-Edge-Notched Tension (DENT) test (LS-299), after laboratory aging. In addition, the Avrami theory was used to gain insight into the crystallization of asphalt, or of the waxes within the asphalt binder. This study has shown that the eBBR and DENT tests are better tools for providing accurate specification tests to curb thermal and fatigue cracking in contemporary asphalt pavements.
The Mixed Effects Trend Vector Model
ERIC Educational Resources Information Center
de Rooij, Mark; Schouteden, Martijn
2012-01-01
Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…
Wavelet-based functional mixed models
Morris, Jeffrey S.; Carroll, Raymond J.
2009-01-01
Summary Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects’ wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. PMID:19759841
Simplified models of mixed dark matter
Cheung, Clifford; Sanford, David E-mail: dsanford@caltech.edu
2014-02-01
We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify ''blind spots'' at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors.
Performance testing of asphalt concrete containing crumb rubber modifier and warm mix additives
NASA Astrophysics Data System (ADS)
Ikpugha, Omo John
Utilisation of scrap tires has been achieved through the production of crumb rubber modified binders and rubberised asphalt concrete. Terminal- and field-blended asphalt rubbers have been developed through the wet process to incorporate crumb rubber into the asphalt binder. Warm-mix asphalt technologies have been developed to curb the problems associated with the processing and production of such crumb rubber modified binders. However, the lowered production and compaction temperatures associated with warm-mix additives suggest the possibility of moisture retention in the mix, which can lead to moisture damage. Conventional moisture sensitivity tests have not effectively discriminated between good and poor mixes, owing to the difficulty of simulating field moisture damage mechanisms. This study was carried out to investigate the performance properties of crumb rubber modified asphalt concrete using commercial warm-mix asphalt technology. Asphalt mixtures commonly utilised in North America, such as dense-graded and stone mastic asphalt, were used in this study. Uniaxial Cyclic Compression Testing (UCCT) was used to measure permanent deformation at high temperatures. Indirect Tensile Testing (IDT) was used to investigate low-temperature performance. Moisture Induced Sensitivity Testing (MiST) was proposed as an effective method for detecting the susceptibility of asphalt mixtures to moisture damage, as it incorporates major field stripping mechanisms. Sonnewarm(TM), Sasobit(TM) and Evotherm(TM) additives improved the resistance to permanent deformation of dense-graded mixes at a loading rate of 0.5 percent by weight of the binder. Polymer-modified mixtures showed superior resistance to permanent deformation compared to asphalt rubber in all mix types. Rediset(TM) WMX improves the low-temperature properties of dense-graded mixes at 0.5 percent loading on the asphalt cement. Rediset LQ and Rediset WMX showed good anti-stripping properties at 0.5 percent loading on the asphalt cement. The
NASA Astrophysics Data System (ADS)
Gallego, Juan; Rodríguez-Alloza, Ana María; Giuliani, Felice
2016-08-01
Warm mix asphalt (WMA) is a new research topic in the field of road pavement materials. This technology allows lower energy consumption and greenhouse gas (GHG) emissions by reducing compaction and placement temperatures of the asphalt mixtures. However, this technology is still under study, and the influence of the WMA additives has yet to be investigated thoroughly and clearly identified, especially in the case of crumb rubber modified (CRM) binders.
VISUAL PLUMES MIXING ZONE MODELING SOFTWARE
The US Environmental Protection Agency has a history of developing plume models and providing technical assistance. The Visual Plumes model (VP) is a recent addition to the public-domain models available on the EPA Center for Exposure Assessment Modeling (CEAM) web page. The Wind...
Configuration mixing calculations in soluble models
NASA Astrophysics Data System (ADS)
Cambiaggio, M. C.; Plastino, A.; Szybisz, L.; Miller, H. G.
1983-07-01
Configuration mixing calculations have been performed in two quasi-spin models using basis states which are solutions of a particular set of Hartree-Fock equations. Each of these solutions, even those which do not correspond to the global minimum, is found to contain interesting physical information. Relatively good agreement with the exact lowest-lying states has been obtained. In particular, one obtains a better approximation to the ground state than that provided by Hartree-Fock.
Multikernel linear mixed models for complex phenotype prediction
Weissbrod, Omer; Geiger, Dan; Rosset, Saharon
2016-01-01
Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
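A minimal sketch of kernel-based mixed-model prediction in the spirit of a single-kernel LMM (not the MKLMM code): predictions take the form K_test (K + λI)^(-1) y. The toy data, the linear kernel, and λ below are all assumptions for illustration:

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting
    (adequate for small systems)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def kernel_mixed_model_predict(K, y, K_test, lam=0.1):
    """Prediction under a kernel variance-component model, y ~ N(0, K + lam*I):
    predictions are K_test @ (K + lam*I)^-1 @ y."""
    n = len(y)
    A = [[K[i][j] + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    alpha = solve(A, y)
    return [sum(k * a for k, a in zip(row, alpha)) for row in K_test]

# Toy linear kernel from 1-D "genotypes" x; summing several kernels
# (linear, interaction, region-specific) is the multikernel extension
x_train = [0.0, 1.0, 2.0, 3.0]
y_train = [0.1, 1.1, 1.9, 3.2]          # roughly y = x
K = [[a * b for b in x_train] for a in x_train]
K_test = [[1.5 * b for b in x_train]]   # one test point at x = 1.5
print(kernel_mixed_model_predict(K, y_train, K_test))  # ~[1.54]
```

Replacing the single linear kernel with a weighted sum of kernels over genomic regions gives the multikernel structure described in the abstract, without changing the prediction formula.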
A random distribution reacting mixing layer model
NASA Technical Reports Server (NTRS)
Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.
1994-01-01
A methodology for simulation of molecular mixing, and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.
CFD Modeling of Mixed-Phase Icing
NASA Astrophysics Data System (ADS)
Zhang, Lifen; Liu, Zhenxia; Zhang, Fei
2016-12-01
Ice crystal ingestion at high altitude has recently been reported as a threat to the safe operation of aero-engines. Ice crystals do not accrete on external surfaces because of the cold environment, but when they enter the core flow of the engine they partially melt into droplets at the higher temperatures there. The resulting air-droplet-ice crystal mixed phase gives rise to ice accretion on static and rotating components in the compressor; compressor surge and engine shutdown may follow. To provide a numerical tool for analyzing this in detail, a numerical method was developed in this study. The mixed-phase flow was solved using an Eulerian-Lagrangian method, with the dispersed phase represented by one-way coupling. A thermodynamic model that accounts for mass and energy balance with ice crystals and droplets is presented as well. The icing code was implemented through user-defined functions in Fluent. The method for ice accretion under mixed-phase conditions was validated by comparing results simulated on a cylinder with experimental data from the literature. The predicted ice shape and mass agree with these data, confirming the validity of the numerical method developed in this research for mixed-phase conditions.
Jackson, Andrew L; Inger, Richard; Bearhop, Stuart; Parnell, Andrew
2009-03-01
The application of Bayesian methods to stable isotope mixing problems, including inference of diet, has the potential to revolutionise ecological research. Using simulated data we show that a recently published model, MixSIR, fails to correctly identify the true underlying dietary proportions more than 50% of the time, and fails with increasing frequency as additional unquantified error is added. While the source of the fundamental failure remains elusive, mitigating solutions are suggested for dealing with additional unquantified variation. Moreover, MixSIR uses a formulation for the prior distribution that results in an opaque and unintuitive covariance structure.
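The core of such an isotope mixing model is a mixture likelihood over dietary proportions. The sketch below, with made-up source signatures, shows the Gaussian mixture likelihood that models of this family evaluate; it is not MixSIR's actual formulation or priors:

```python
import numpy as np

src_mean = np.array([-24.0, -18.0, -10.0])   # source delta13C means (made up)
src_sd = np.array([1.0, 1.0, 1.0])           # source delta13C SDs (made up)

def mixture_loglik(p, consumers):
    """Log-likelihood of dietary proportions p (summing to 1): the mixture mean
    is sum_i p_i * mu_i; the mixture variance weights source variances by p_i^2."""
    mu = p @ src_mean
    var = (p ** 2) @ (src_sd ** 2)
    r = consumers - mu
    return -0.5 * np.sum(r ** 2 / var + np.log(2 * np.pi * var))

consumers = np.array([-18.2, -17.9, -18.4])  # consumer delta13C values (made up)

# A diet concentrated on the middle source explains these consumers better
# than an even mixture of all three sources.
ll_mid = mixture_loglik(np.array([0.05, 0.90, 0.05]), consumers)
ll_even = mixture_loglik(np.array([1 / 3, 1 / 3, 1 / 3]), consumers)
```

A Bayesian model places a prior on p and explores this likelihood by sampling; the paper's criticism concerns exactly how that prior and the extra unquantified error are specified.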
Retarding viscous Rayleigh-Taylor mixing by an optimized additional mode
NASA Astrophysics Data System (ADS)
Xie, C. Y.; Tao, J. J.; Sun, Z. L.; Li, J.
2017-02-01
The Rayleigh-Taylor (RT) mixing induced by random interface disturbances between two incompressible viscous fluids is simulated numerically. The ensemble averaged spike velocity is found to be remarkably retarded when the random interface disturbances are superimposed with an optimized additional mode. The mode's wavenumber is selected to be large enough to avoid enhancing the dominance of long-wavelength modes, but not so large that its saturated spike and bubble velocities are too small to stimulate a growing effective density-gradient layer suppressing the long-wavelength modes. Such an optimized suppressing mode is expected to be found in the RT mixing including other diffusion processes, e.g., concentration diffusion and thermal diffusion.
BDA special care case mix model.
Bateman, P; Arnold, C; Brown, R; Foster, L V; Greening, S; Monaghan, N; Zoitopoulos, L
2010-04-10
Routine dental care provided in special care dentistry is complicated by patient-specific factors which increase the time taken and cost of treatment. The BDA has developed and field-trialled a case mix tool to measure this complexity. For each episode of care the tool assesses the following on a four-point scale: 'ability to communicate', 'ability to cooperate', 'medical status', 'oral risk factors', 'access to oral care' and 'legal and ethical barriers to care'. The tool is reported to be easy to use and captures sufficient detail to discriminate between types of service and special care dentistry provided. It offers potential as a simple-to-use and clinically relevant source of performance management and commissioning data. This paper describes the model, demonstrates how it is currently being used, and considers future developments in its use.
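As an illustration only, a scoring function of the kind such a tool implies might sum the six four-point grades per episode; the 0-3 coding and unweighted sum below are hypothetical, not the BDA tool's actual banding or weighting:

```python
# Hypothetical episode-level case mix score: grade each of the six criteria
# on a four-point scale (coded 0-3 here, an assumption) and sum the grades.
CRITERIA = ["ability to communicate", "ability to cooperate", "medical status",
            "oral risk factors", "access to oral care",
            "legal and ethical barriers to care"]

def case_mix_score(grades: dict) -> int:
    """Sum the four-point-scale grades; higher totals indicate more complex care."""
    if set(grades) != set(CRITERIA):
        raise ValueError("expected a grade for each of the six criteria")
    if any(g not in (0, 1, 2, 3) for g in grades.values()):
        raise ValueError("grades must be on the four-point scale 0-3")
    return sum(grades.values())

episode = {c: 1 for c in CRITERIA}
episode["medical status"] = 3     # e.g. a medically complex patient
score = case_mix_score(episode)
```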
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described, and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions; we therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
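A minimal simulation shows the structure of a logit-normal mixed model: a station-level random effect on the logit scale, with binomial rain occurrence given the station probability. All parameter values are illustrative, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

beta0, sigma = -1.0, 0.8        # fixed intercept and random-effect SD (assumed)
n_stations, n_days = 200, 365

# Station random effects on the logit scale: logit(p_i) = beta0 + u_i.
u = rng.normal(0.0, sigma, size=n_stations)
p = 1.0 / (1.0 + np.exp(-(beta0 + u)))
rain = rng.binomial(1, p[:, None], size=(n_stations, n_days))

# Averaging the logit-normal probabilities over u shifts the marginal rain
# frequency away from the plug-in value logistic(beta0), which is one reason
# estimation methods for this model differ in their approximations.
marginal = rain.mean()
plug_in = 1.0 / (1.0 + np.exp(-beta0))
```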
Network Reconstruction Using Nonparametric Additive ODE Models
Henderson, James; Michailidis, George
2014-01-01
Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative
Computational Process Modeling for Additive Manufacturing (OSU)
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2015-01-01
Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.
Effect of Crumb Rubber and Warm Mix Additives on Asphalt Aging, Rheological, and Failure Properties
NASA Astrophysics Data System (ADS)
Agrawal, Prashant
Asphalt-rubber mixtures have been shown to have useful properties with respect to distresses observed in asphalt concrete pavements. The most notable changes in properties are a large increase in viscosity and improved low-temperature cracking resistance. Warm mix additives can lower production and compaction temperatures. Lower temperatures reduce harmful emissions and energy consumption, and thus provide environmental benefits and cut costs. In this study, the effects of crumb rubber modification on various asphalts, such as California Valley, Boscan, Alaska North Slope, Laguna and Cold Lake, were also studied. The materials used for warm mix modification were obtained from various commercial sources. The RAF binder was produced by Imperial Oil in their Nanticoke, Ontario, refinery on Lake Erie. A second commercial PG 52-34 (hereafter denoted as NER) was obtained during the construction of a northern Ontario MTO contract. Regular tests such as the Dynamic Shear Rheometer (DSR), Bending Beam Rheometer (BBR) and Multiple Stress Creep Recovery (MSCR), and modified new protocols such as the extended BBR test (LS-308) and the Double-Edge Notched Tension (DENT) test (LS-299), were used to study the effect of warm mix and a host of other additives on rheological, aging and failure properties. A comparison of the properties of the RAF and NER asphalts has also been made, as RAF is a good-quality asphalt and NER a poor-quality one. From these studies the effect of additives on chemical and physical hardening tendencies was found to be significant. The asphalt samples tested in this study showed a range of tendencies for chemical and physical hardening.
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
CREATION OF THE MODEL ADDITIONAL PROTOCOL
Houck, F.; Rosenthal, M.; Wulf, N.
2010-05-25
In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.
Was the Hadean Earth stagnant? Constraints from dynamic mixing models
NASA Astrophysics Data System (ADS)
O'Neill, C.; Debaille, V.; Griffin, W. L.
2013-12-01
As a result of high internal heat production, high rates of impact bombardment, and primordial heat from accretion, a strong case is made for extremely high internal temperatures, low internal viscosities, and extremely vigorous mantle convection in the Hadean mantle. Previous studies of mixing in high-Rayleigh-number convection indicate that chemically heterogeneous mantle anomalies should have been remixed into the mantle efficiently, on timescales of less than 100 Myr. However, 142Nd and 182W isotope studies indicate that heterogeneous mantle domains survived, without mixing, for over 2 Gyr, at odds with the expected mixing rates. Similarly, platinum group element concentrations in Archaean komatiites, attributed to the late veneer of meteoritic addition to the Earth, only reach current levels at 2.7 Ga, indicating a lag of 1-2 Gyr in mixing this material thoroughly into the mantle. While previous studies have sought to explain slow Archaean mantle mixing via mantle layering due to endothermic phase changes, or via anomalously viscous blobs of material, these mechanisms have demonstrated limited efficacy. Here we pursue another explanation for inefficient mantle mixing in the Hadean: tectonic regime. A number of lines of evidence suggest that resurfacing in the Archaean was episodic, and extending these models to Hadean times implies that the Hadean was characterized by long periods of tectonic quiescence. We explore mixing times in 3D spherical-cap models of mantle convection which incorporate vertically stratified and temperature-dependent viscosities. At an extreme, we show that mixing in stagnant-lid regimes is over an order of magnitude less efficient than mobile-lid mixing, and that for plausible Rayleigh numbers and internal heat production the lag in Hadean convective recycling can be explained. The attractiveness of this explanation is that it not only explains the long-lived 142Nd and 182W mantle anomalies, but also 1) posits an explanation for the delay
Inference of ICF implosion core mix using experimental data and theoretical mix modeling
Sherrill, Leslie Welser; Haynes, Donald A; Cooley, James H; Sherrill, Manolo E; Mancini, Roberto C; Tommasini, Riccardo; Golovkin, Igor E; Haan, Steven W
2009-01-01
The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model predicted trends in the width of the mix layer as a function of initial shell thickness. These results contribute to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increasing confidence in the methods used to extract mixing information from experimental data.
NASA Technical Reports Server (NTRS)
Menon, Suresh
1992-01-01
An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on the pollutant emissions from these engines. A combustor design with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary zone product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the second stage. Numerical predictions using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used to study multispecies mixing as well as propane-air and hydrogen-air jet nonpremixed flames, and has been used to predict NO(x) production in the mixing region. Comparison with available experimental data shows good agreement, thereby providing validation of the mixing model. With this demonstration, the mixing model is ready to be implemented in conjunction with steady-state prediction methods and provide an improved engineering design analysis tool.
Mixing parametrizations for ocean climate modelling
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir
2016-04-01
An algorithm is presented for splitting the total evolutionary equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, three schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations to model Arctic and Atlantic climate decadal variability with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model yield a more adequate temperature and salinity structure at decadal timescales than the simpler Pacanowski-Philander (PP) parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step leads to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational cost remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model yields realistic density and circulation but violates T,S-relationships; this error is largely avoided with the proposed parameterizations containing the split turbulence model.
Reactive Additive Stabilization Process (RASP) for hazardous and mixed waste vitrification
Jantzen, C.M.; Pickett, J.B.; Ramsey, W.G.
1993-07-01
Solidification of hazardous/mixed wastes into glass is being examined at the Savannah River Site (SRS) for (1) nickel plating line (F006) sludges and (2) incinerator wastes. Vitrification of these wastes using high surface area additives, the Reactive Additive Stabilization Process (RASP), has been determined to greatly enhance the dissolution and retention of hazardous, mixed, and heavy metal species in glass. RASP lowers melt temperatures (typically 1050-1150 °C), thereby minimizing volatility concerns during vitrification. RASP maximizes waste loading (typically 50-75 wt% on a dry oxide basis) by taking advantage of the glass forming potential of the waste. RASP vitrification thereby minimizes waste disposal volume (typically 86-97 vol%), and maximizes cost savings. Solidification of the F006 plating line sludges containing depleted uranium has been achieved in both soda-lime-silica (SLS) and borosilicate glasses at 1150 °C up to waste loadings of 75 wt%. Solidification of incinerator blowdown and mixtures of incinerator blowdown and bottom kiln ash has been achieved in SLS glass at 1150 °C up to waste loadings of 50% using RASP. These waste loadings correspond to volume reductions of 86 and 94 volume %, respectively, with large associated savings in storage costs.
Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage
ERIC Educational Resources Information Center
Galyardt, April
2012-01-01
This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…
Kizilkaya, Kadir; Tempelman, Robert J
2005-01-01
We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which linked functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data was generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first parity dams in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values. PMID:15588567
Hume, M E; Clemente-Hernández, S; Oviedo-Rondón, E O
2006-12-01
Evaluation of digestive microbial ecology is necessary to understand the effects of growth-promoting feed additives. In the current study, the dynamics of intestinal microbial communities (MC) were examined in broilers fed diets supplemented with a combination of antibiotic (bacitracin methylene disalicylate) and ionophore (Coban 60), or diets containing 1 of 2 essential oil (EO) blends, Crina Poultry (CP) and Crina Alternate (CA). Five treatments were analyzed: 1) unmedicated uninfected control; 2) unmedicated infected control; 3) feed additives bacitracin methylene disalicylate + monensin (Coban 60; AI); 4) EO blend CP; and 5) EO blend CA. Additives were mixed into a basal feed mixture, and EO were adjusted to 100 ppm. Chicks were infected by oral gavage at 19 d of age with Eimeria acervulina, Eimeria maxima, and Eimeria tenella. Duodenal, ileal, and cecal samples were taken from 12 birds per treatment just before and 7 d after challenge; 2 samples each were pooled to give a final number of 6 samples total, and all pooled samples were frozen until used for DNA extraction. Denaturing gradient gel electrophoresis was used to examine PCR-amplified fragments of the bacterial 16S ribosomal DNA variable region. Results are presented as percentages of similarity coefficients (SC). Dendrograms of PCR amplicon or band patterns indicated MC differences due to intestinal location, feed additives, and cocci challenge. Essential oil blends CP and CA affected MC in all gut sections. Each EO had different effects on MC, and they differed in most instances from the AI group. The cocci challenge caused drastic MC population shifts in the duodenal, ileal, and cecal sections (36.7, 55.4, and 36.2% SC, respectively). Diets supplemented with CP supported higher SC between pre- and postchallenge MC (89.9, 83.3, and 76.4%) than AI (81.8, 57.4, and 60.0%). We concluded that mixed coccidia challenge caused drastic shifts in MC. These EO blends modulated MC better than AI, avoiding drastic
Stricker, C.; Fernando, R. L.; Elston, R. C.
1995-01-01
This paper presents an extension of the finite polygenic mixed model of FERNANDO et al. (1994) to linkage analysis. The finite polygenic mixed model, extended for linkage analysis, leads to a likelihood that can be calculated using efficient algorithms developed for oligogenic models. For comparison, linkage analysis of 5 simulated 4021-member pedigrees was performed using the usual mixed model of inheritance, approximated by HASSTEDT (1982), and the finite polygenic mixed model extended for linkage analysis presented here. Maximum likelihood estimates from the finite polygenic mixed model were inferred to be closer to the simulated values in these pedigrees. PMID:8601502
Nonequilibrium antiferromagnetic mixed-spin Ising model.
Godoy, Mauricio; Figueiredo, Wagner
2002-09-01
We studied an antiferromagnetic mixed-spin Ising model on the square lattice subject to two competing stochastic processes. The model system consists of two interpenetrating sublattices of spins sigma=1/2 and S=1, and we take only nearest neighbor interactions between pairs of spins. The system is in contact with a heat bath at temperature T, and the exchange of energy with the heat bath occurs via one-spin flip (Glauber dynamics). Besides, the system interacts with an external agency of energy, which supplies energy to it whenever two nearest neighboring spins are simultaneously flipped. By employing Monte Carlo simulations and a dynamical pair approximation, we found the phase diagram for the stationary states of the model in the plane temperature T versus the competition parameter between one- and two-spin flips p. We observed the appearance of three distinct phases, that are separated by continuous transition lines. We also determined the static critical exponents along these lines and we showed that this nonequilibrium model belongs to the universality class of the two-dimensional equilibrium Ising model.
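A minimal Monte Carlo sketch of the Glauber (one-spin flip) part of this dynamics, i.e. the p = 0 limit without the competing two-spin flips, on the two interpenetrating sublattices; lattice size, coupling, temperature, and the number of sweeps are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

L, J, T = 16, 1.0, 1.0   # lattice size, antiferromagnetic coupling, temperature
# Interpenetrating sublattices: sigma = +/-1/2 on even sites, S in {-1, 0, 1} on odd.
spins = np.empty((L, L))
for i in range(L):
    for j in range(L):
        spins[i, j] = rng.choice([-0.5, 0.5]) if (i + j) % 2 == 0 else rng.choice([-1, 0, 1])

def glauber_sweep(s):
    """One sweep of single-spin-flip Glauber dynamics, E = +J * sum over nn pairs."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        new = rng.choice([-0.5, 0.5]) if (i + j) % 2 == 0 else rng.choice([-1, 0, 1])
        dE = J * nn * (new - s[i, j])
        if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):   # Glauber acceptance rate
            s[i, j] = new

for _ in range(50):
    glauber_sweep(spins)

# Staggered magnetization signals antiferromagnetic ordering.
sign = np.fromfunction(lambda i, j: (-1.0) ** (i + j), (L, L))
m_stag = abs((sign * spins).mean())
```

The full model of the paper adds the energy-supplying two-spin flip with probability p; that competing process is what generates the nonequilibrium phase diagram.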
Modeling of Low Feed-Through CD Mix Implosions
NASA Astrophysics Data System (ADS)
Pino, Jesse; MacLaren, Steven; Greenough, Jeff; Casey, Daniel; Dittrich, Tom; Kahn, Shahab; Kyrala, George; Ma, Tammy; Salmonson, Jay; Smalyuk, Vladimir; Tipton, Robert
2015-11-01
The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the National Ignition Facility. However, the previous implosions suffered from large instability growth seeded from perturbations on the outside of the capsule. Recently, the separated reactants technique has been applied to two platforms designed to minimize this feed-through and isolate local mix at the gas-ablator interface: the Two Shock (TS) and Adiabat-Shaped (AS) Platforms. Additionally, the background contamination of Deuterium in the gas has been greatly reduced, allowing for simultaneous observation of TT, DT, and DD neutrons, which respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations with both a Reynolds-Averaged Navier Stokes method and an enhanced diffusivity model. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-674867.
Extended model for Richtmyer-Meshkov mix
Mikaelian, K O
2009-11-18
We examine four Richtmyer-Meshkov (RM) experiments on shock-generated turbulent mix and find them to be in good agreement with our earlier simple model, in which the growth rate of the mixing layer h following a shock or reshock is constant and given by dh/dt = 2αAΔv, independent of the initial conditions h_0. Here A is the Atwood number (ρ_B − ρ_A)/(ρ_B + ρ_A), ρ_A and ρ_B are the densities of the two fluids, Δv is the jump in velocity induced by the shock or reshock, and α is the constant measured in Rayleigh-Taylor (RT) experiments: α^bubble ≈ 0.05-0.07, α^spike ≈ (1.8-2.5)α^bubble for A ≈ 0.7-1.0. In the extended model the growth rate begins to decay after a time t*, when h = h*, slowing down from h = h_0 + 2αAΔv t to h ~ t^θ behavior, with θ^bubble ≈ 0.25 and θ^spike ≈ 0.36 for A ≈ 0.7. We ascribe this change-over to loss of memory of the direction of the shock or reshock, signaling the transition from highly directional to isotropic turbulence. In the simplest extension of the model h*/h_0 is independent of Δv and depends only on A. We find that h*/h_0 ≈ 2.5-3.5 for A ≈ 0.7-1.0.
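The model reduces to a short computation: linear growth of the layer width up to h*, then power-law behavior. The shock parameters below are assumed for illustration, not taken from the experiments:

```python
alpha_b = 0.06      # bubble growth constant from RT experiments
A = 0.85            # Atwood number
dv = 100.0          # velocity jump from the shock, m/s (assumed)
h0 = 0.001          # initial mixing-layer width, m (assumed)
theta_b = 0.25      # late-time power-law exponent for bubbles

h_star = 3.0 * h0   # change-over width, h*/h0 ~ 2.5-3.5 for A ~ 0.7-1.0
t_star = (h_star - h0) / (2 * alpha_b * A * dv)   # time at which h reaches h*

def h(t):
    """Layer width: constant growth rate 2*alpha*A*dv up to t*, then t^theta."""
    if t <= t_star:
        return h0 + 2 * alpha_b * A * dv * t
    return h_star * (t / t_star) ** theta_b
```

The width is continuous at t* and grows more slowly than the linear extrapolation afterwards, which is the "extended" part of the model.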
Measurements and Models for Hazardous chemical and Mixed Wastes
Laurel A. Watts; Cynthia D. Holcomb; Stephanie L. Outcalt; Beverly Louie; Michael E. Mullins; Tony N. Rogers
2002-08-21
Mixed solvent aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the United States. Not only does the chemical process industry create large quantities of aqueous waste, but the majority of the waste inventory at the DOE sites previously used for nuclear weapons production is mixed solvent aqueous waste. In addition, large quantities of waste are expected to be generated in the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is essential. The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that (a) would require a minimum of adjustable parameters and (b) could be obtained from available data or data that are easily measured. A model was developed to predict vapor composition and pressure given the liquid composition and temperature. It is based on the Peng-Robinson equation of state, adapted to include non-volatile and salt components. The model itself is capable of predicting the vapor-liquid equilibria of a wide variety of systems composed of water, organic solvents, salts, nonvolatile solutes, and acids or bases. The representative system of water + acetone + 2-propanol + NaNO3 was selected to test and verify the model. Vapor-liquid equilibrium and phase density measurements were performed for this system and its constituent binaries.
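For reference, the pure-component Peng-Robinson parameters that such a model starts from can be computed directly. The sketch below uses the standard PR correlations and water's critical constants; it is the unmodified textbook form, not the authors' salt-adapted extension:

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def pr_params(Tc, Pc, omega, T):
    """Peng-Robinson a(T) and b for a pure component from its critical
    temperature Tc (K), critical pressure Pc (Pa), and acentric factor omega."""
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    a = 0.45724 * (R * Tc) ** 2 / Pc * alpha
    return a, b

# Water: Tc = 647.1 K, Pc = 22.064 MPa, omega = 0.344; evaluate at 373.15 K.
a, b = pr_params(647.1, 22.064e6, 0.344, 373.15)
```

Mixtures then combine the pure-component a and b through mixing rules, which is the point where the paper introduces its treatment of salts and non-volatile solutes.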
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
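A small simulation illustrates why the random effect matters: giving each bird its own mature weight makes between-bird variance fan out with age, the kind of heterogeneous variation the mixed Gompertz model partitions. Parameter values are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Gompertz curve W(t) = Wm * exp(-exp(-b * (t - t_infl))) with a bird-specific
# random effect on the mature weight Wm (assumed values throughout).
Wm, b, t_infl = 4000.0, 0.04, 40.0   # mature BW (g), rate (1/d), inflection age (d)
sigma_u = 300.0                      # SD of bird-specific mature weight (g)

def gompertz(t, Wm):
    return Wm * np.exp(-np.exp(-b * (t - t_infl)))

ages = np.arange(0, 71, 7, dtype=float)                  # weekly weighings
birds_Wm = Wm + rng.normal(0.0, sigma_u, size=50)        # 50 birds
curves = np.array([gompertz(ages, w) for w in birds_Wm])

# Between-bird variance grows with age as the curves fan out toward maturity,
# so a fixed-effects fit with constant residual variance is misspecified.
var_by_age = curves.var(axis=0)
```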
MixSIAR: A Bayesian stable isotope mixing model for characterizing intrapopulation niche variation
Background/Question/Methods The science of stable isotope mixing models has tended towards the development of modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances or syntheses of the current state of the art are published in parity with software packa...
Poston, James A.
1997-01-01
Mixed metal oxide pellets for removing hydrogen sulfide from fuel gas mixes derived from coal are stabilized for operation over repeated cycles of desulfurization and regeneration reactions by addition of a large promoter metal oxide such as lanthanum trioxide. The pellets, which may be principally made up of a mixed metal oxide such as zinc titanate, exhibit physical stability and lack of spalling or decrepitation over repeated cycles without loss of reactivity. The lanthanum oxide is mixed with pellet-forming components in an amount of 1 to 10 weight percent.
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
Box-Cox Mixed Logit Model for Travel Behavior Analysis
NASA Astrophysics Data System (ADS)
Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.
2010-09-01
To represent the behavior of travelers when they are deciding how they are going to get to their destination, discrete choice models, based on the random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s, and, after a period of apparent little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors, which includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficients distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models whose specifications are inconsistent with real behavior have been studied with simulation experiments.
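The Box-Cox transform at the heart of such a specification is simple; the sketch below (an illustration, not the authors' estimation code) shows the transform and its two classical limits, which is what lets the model nest the linear and logarithmic utility forms.

```python
import math

def box_cox(x, lam):
    """Box-Cox transform of a positive attribute (e.g. travel time):
    (x**lam - 1)/lam for lam != 0, and log(x) in the lam -> 0 limit."""
    if abs(lam) < 1e-12:
        return math.log(x)
    return (x ** lam - 1.0) / lam

# Utility term nonlinear in the parameter lam: V = beta * box_cox(time, lam).
# In a mixed logit, beta would additionally be a random draw per traveler.
print(box_cox(30.0, 1.0))   # lam = 1 recovers x - 1 (linear specification)
print(box_cox(30.0, 0.0))   # lam = 0 recovers log(x)
```

Estimating lam from data, rather than fixing it at 1 or 0, is what distinguishes the Box-Cox specification from the usual linear-in-parameters utility.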
Radiolysis Model Formulation for Integration with the Mixed Potential Model
Buck, Edgar C.; Wittman, Richard S.
2014-07-10
The U.S. Department of Energy Office of Nuclear Energy (DOE-NE), Office of Fuel Cycle Technology has established the Used Fuel Disposition Campaign (UFDC) to conduct the research and development activities related to storage, transportation, and disposal of used nuclear fuel (UNF) and high-level radioactive waste. Within the UFDC, the components for a general system model of the degradation and subsequent transport of UNF are being developed to analyze the performance of disposal options [Sassani et al., 2012]. Two model components of the near-field part of the problem are the ANL Mixed Potential Model and the PNNL Radiolysis Model. This report is in response to the desire to integrate the two models as outlined in Buck, E.C., J.L. Jerden, W.L. Ebert, and R.S. Wittman (2013), "Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation," FCRD-UFD-2013-000290, M3FT-PN0806058.
Measurement and Model for Hazardous Chemical and Mixed Waste
Michael E. Mullins; Tony N. Rogers; Stephanie L. Outcalt; Beverly Louie; Laurel A. Watts; Cynthia D. Holcomb
2002-07-30
Mixed solvent aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the United States. Not only does the chemical process industry create large quantities of aqueous waste, but the majority of the waste inventory at the Department of Energy (DOE) sites previously used for nuclear weapons production is mixed solvent aqueous waste. In addition, large quantities of waste are expected to be generated in the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is essential. The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that (a) would require a minimum of adjustable parameters and (b) could be obtained from available data or data that were easily measured. A model was developed to predict vapor composition and pressure given the liquid composition and temperature. It is based on the Peng-Robinson equation of state, adapted to include non-volatile and salt components. The model itself is capable of predicting the vapor-liquid equilibria of a wide variety of systems composed of water, organic solvents, salts, nonvolatile solutes, and acids or bases. The representative system of water + acetone + 2-propanol + NaNO3 was selected to test and verify the model. Vapor-liquid equilibrium and phase density measurements were performed for this system and its constituent binaries.
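For reference, the pure-component Peng-Robinson parameters that such a model starts from follow directly from critical constants via the standard correlation; the sketch below uses water's tabulated constants as inputs and is not the report's adapted (salt-extended) model.

```python
R = 8.314  # gas constant, J/(mol K)

def pr_parameters(T, Tc, Pc, omega):
    """Standard pure-component Peng-Robinson a(T) and b.
    Tc (K), Pc (Pa), omega: acentric factor."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - (T / Tc) ** 0.5)) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

# Water: Tc = 647.1 K, Pc = 22.064 MPa, omega = 0.3443
a, b = pr_parameters(373.15, 647.1, 22.064e6, 0.3443)
```

Extending this to mixtures (the report's actual task) additionally requires mixing rules for a and b and a treatment of the non-volatile salt components.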
Mathematical Modelling of Mixed-Model Assembly Line Balancing Problem with Resources Constraints
NASA Astrophysics Data System (ADS)
Magffierah Razali, Muhamad; Rashid, Mohd Fadzil Faisae Ab.; Razif Abdullah Make, Muhammad
2016-11-01
Modern manufacturing industries encounter the challenge of providing product variety in their production at a cheaper cost. This situation calls for a system that is flexible and cost-competitive, such as the Mixed-Model Assembly Line. This paper develops a mathematical model for the Mixed-Model Assembly Line Balancing Problem (MMALBP). In addition to the existing works that consider minimizing cycle time, workstations, and product rate variation, this paper also considers resource constraints in the problem modelling. Based on the findings, the modelling results achieved by the computational method were in line with the manual calculation for the evaluated objective functions, providing evidence to verify the developed mathematical model for MMALBP. Implications of the results and future research directions are also presented in this paper.
Models of neutrino mass, mixing and CP violation
NASA Astrophysics Data System (ADS)
King, Stephen F.
2015-12-01
In this topical review we argue that neutrino mass and mixing data motivate extending the Standard Model (SM) to include a non-Abelian discrete flavour symmetry in order to accurately predict the large leptonic mixing angles and CP violation. We begin with an overview of the SM puzzles, followed by a description of some classic lepton mixing patterns. Lepton mixing may be regarded as a deviation from tri-bimaximal mixing, with charged lepton corrections leading to solar mixing sum rules, or tri-maximal lepton mixing leading to atmospheric mixing sum rules. We survey neutrino mass models, using a roadmap based on the open questions in neutrino physics. We then focus on the seesaw mechanism with right-handed neutrinos, where sequential dominance (SD) can account for large lepton mixing angles and CP violation, with precise predictions emerging from constrained SD (CSD). We define the flavour problem and discuss progress towards a theory of flavour using GUTs and discrete family symmetry. We classify models as direct, semidirect or indirect, according to the relation between the Klein symmetry of the mass matrices and the discrete family symmetry, in all cases focussing on spontaneous CP violation. Finally we give two examples of realistic and highly predictive indirect models with CSD, namely an A to Z of flavour with Pati-Salam and a fairly complete A4 × SU(5) SUSY GUT of flavour, where both models have interesting implications for leptogenesis.
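For concreteness, the tri-bimaximal pattern referred to above is conventionally written (up to sign and phase conventions) as

```latex
U_{\mathrm{TB}} =
\begin{pmatrix}
 \sqrt{\tfrac{2}{3}} & \tfrac{1}{\sqrt{3}} & 0 \\[2pt]
 -\tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{3}} & -\tfrac{1}{\sqrt{2}} \\[2pt]
 -\tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{3}} & \tfrac{1}{\sqrt{2}}
\end{pmatrix}
```

which corresponds to sin²θ₁₂ = 1/3, sin²θ₂₃ = 1/2 and θ₁₃ = 0; the measured nonzero θ₁₃ is what motivates treating tri-bimaximal mixing as a leading-order pattern subject to the corrections and sum rules discussed in the review.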
A multifluid mix model with material strength effects
Chang, C. H.; Scannapieco, A. J.
2012-04-23
We present a new multifluid mix model. Its features include material strength effects and pressure and temperature nonequilibrium between mixing materials. It is applicable to both interpenetration and demixing of immiscible fluids and diffusion of miscible fluids. The presented model exhibits the appropriate smooth transition in mathematical form as the mixture evolves from multiphase to molecular mixing, extending its applicability to the intermediate stages in which both types of mixing are present. Virtual mass force and momentum exchange have been generalized for heterogeneous multimaterial mixtures. The compression work has been extended so that the resulting species energy equations are consistent with the pressure force and material strength.
Linear mixing model applied to coarse resolution satellite data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1992-01-01
A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of the unmixing techniques when applied to coarse resolution data for global studies.
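A constrained least-squares unmixing step of this kind can be sketched in a few lines. The endmember reflectances below are invented for illustration, and the sum-to-one constraint is imposed with a heavily weighted extra row; operational implementations typically also enforce nonnegativity of the fractions.

```python
import numpy as np

# Endmember reflectances (rows: 3 spectral bands; columns: e.g. vegetation,
# soil, shade). Values are purely illustrative, not AVHRR calibration data.
A = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.10, 0.20, 0.01]])

def unmix(pixel, endmembers, w=1e3):
    """Least-squares endmember fractions with a sum-to-one constraint
    enforced by a weighted extra row (nonnegativity omitted here)."""
    A_aug = np.vstack([endmembers, w * np.ones(endmembers.shape[1])])
    r_aug = np.append(pixel, w * 1.0)
    f, *_ = np.linalg.lstsq(A_aug, r_aug, rcond=None)
    return f

true_f = np.array([0.6, 0.3, 0.1])
pixel = A @ true_f           # synthesize a mixed pixel
f = unmix(pixel, A)
print(np.round(f, 3))        # recovers ~[0.6, 0.3, 0.1]
```

Applying `unmix` per pixel over the three AVHRR channels yields one fraction image per endmember, which is the product compared against the classification and TM-derived fractions in the abstract.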
Modeling a Rain-Induced Mixed Layer
1990-06-01
Using the exponential relations with trigonometry, equation (7) is rewritten in terms of cosine functions of kΔz. ...completely unknown because there are no prior studies which predict what portion of total energy may go into subsurface mixing. The biggest obstacle
ERIC Educational Resources Information Center
Holladay, Jennifer
2009-01-01
Since 2002, Teaching Tolerance's Mix It Up at Lunch Day program has helped millions of students cross social boundaries and create more inclusive school communities. Its goal is to create a safe, purposeful opportunity for students to break down the patterns of social self-segregation that too often plague schools. Research conducted in 2006 by…
Imaging and quantifying mixing in a model droplet micromixer
NASA Astrophysics Data System (ADS)
Stone, Z. B.; Stone, H. A.
2005-06-01
Rapid mixing is essential in a variety of microfluidic applications but is often difficult to achieve at low Reynolds numbers. Inspired by a recently developed microdevice that mixes reagents in droplets, which simply flow along a periodic serpentine channel [H. Song, J. D. Tice, and R. F. Ismagilov, "A microfluidic system for controlling reaction networks in time," Angew. Chem. Int. Ed. 42, 767 (2003)], we investigate a model "droplet mixer." The model consists of a spherical droplet immersed in a periodic sequence of distinct external flows, which are superpositions of uniform and shear flows. We label the fluid inside the droplet with two colors and visualize mixing with a method we call "backtrace imaging," which allows us to render cross sections of the droplet at arbitrary times during the mixing cycle. To analyze our results, we present a novel scalar measure of mixing that permits us to locate sets of parameters that optimize mixing over a small number of flow cycles.
Analysis and modeling of subgrid scalar mixing using numerical data
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.
Estimating anatomical trajectories with Bayesian mixed-effects modeling.
Ziegler, G; Penny, W D; Ridgway, G R; Ourselin, S; Friston, K J
2015-11-01
We introduce a mass-univariate framework for the analysis of whole-brain structural trajectories using longitudinal Voxel-Based Morphometry data and Bayesian inference. Our approach to developmental and aging longitudinal studies characterizes heterogeneous structural growth/decline between and within groups. In particular, we propose a probabilistic generative model that parameterizes individual and ensemble average changes in brain structure using linear mixed-effects models of age and subject-specific covariates. Model inversion uses Expectation Maximization (EM), while voxelwise (empirical) priors on the size of individual differences are estimated from the data. Bayesian inference on individual and group trajectories is realized using Posterior Probability Maps (PPM). In addition to parameter inference, the framework affords comparisons of models with varying combinations of model order for fixed and random effects using model evidence. We validate the model in simulations and real MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. We further demonstrate how subject specific characteristics contribute to individual differences in longitudinal volume changes in healthy subjects, Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD).
An Investigation of Item Fit Statistics for Mixed IRT Models
ERIC Educational Resources Information Center
Chon, Kyong Hee
2009-01-01
The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…
On the coalescence-dispersion modeling of turbulent molecular mixing
NASA Technical Reports Server (NTRS)
Givi, Peyman; Kosaly, George
1987-01-01
The general coalescence-dispersion (C/D) closure provides phenomenological modeling of turbulent molecular mixing. The models of Curl and Dopazo and O'Brien appear as two limiting C/D models that bracket the range of results one can obtain by various models. This finding is used to investigate the sensitivity of the results to the choice of the model. Inert scalar mixing is found to be less model-sensitive than mixing accompanied by chemical reaction. The infinitely fast chemistry approximation is used to relate the C/D approach to Toor's earlier results. Pure mixing and infinite rate chemistry calculations are compared to study further a recent result of Hsieh and O'Brien who found that higher concentration moments are not sensitive to chemistry.
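Curl's limiting C/D model is easy to state: randomly chosen particle pairs coalesce and redisperse at their mean composition. The toy sketch below (not tied to any particular flow) shows the characteristic behavior, mean conserved while scalar variance decays.

```python
import random

def curl_mixing_step(phi, n_pairs):
    """One coalescence-dispersion step (Curl's model): randomly chosen
    particle pairs coalesce and both leave with the pair-mean scalar."""
    n = len(phi)
    for _ in range(n_pairs):
        i, j = random.sample(range(n), 2)
        m = 0.5 * (phi[i] + phi[j])
        phi[i] = phi[j] = m
    return phi

random.seed(0)
# Unmixed initial condition: half the particles at 0, half at 1.
phi = [0.0] * 50 + [1.0] * 50
mean0 = sum(phi) / len(phi)
for _ in range(200):
    curl_mixing_step(phi, 10)
variance = sum((p - mean0) ** 2 for p in phi) / len(phi)
# The mean is conserved exactly; the variance decays toward zero.
```

Dopazo and O'Brien's limit instead relaxes every particle continuously toward the mean; real C/D variants interpolate between these two behaviors, which is the bracketing the abstract exploits.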
A Non-Fickian Mixing Model for Stratified Turbulent Flows
2011-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. ...would be to improve the predictive skill of the Navy numerical models for submesoscale transport in the ocean. OBJECTIVES: My main objective...
Simulation model for urban ternary mix-traffic flow
NASA Astrophysics Data System (ADS)
Deo, Lalit; Akkawi, Faisal; Deo, Puspita
2007-12-01
A two-lane two-way traffic-light-controlled X-intersection for ternary mixed traffic (cars + buses (equivalent vehicles) + very large trucks/buses) is developed based on a cellular automata model. This model can provide different metrics such as throughput, queue length, and delay time. This paper describes how the model works and how the composition of the traffic mix affects the throughput (the number of vehicles that navigate through the intersection per unit of time (vph)), and also compares the results with the homogeneous counterpart.
Development of a Medicaid Behavioral Health Case-Mix Model
ERIC Educational Resources Information Center
Robst, John
2009-01-01
Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…
Diagnostic tools for mixing models of stream water chemistry
Hooper, R.P.
2003-01-01
Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end-members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end-members, an extension of the mathematics of mixing models is presented that assesses the "fit" of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end-members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end-members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
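The rank diagnostic can be illustrated with synthetic data: conservative mixing of m end-members with fixed concentrations confines mean-centered samples to an (m − 1)-dimensional subspace, which an SVD exposes. The data below are simulated for illustration, not Panola Mountain chemistry.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stream chemistry: 200 samples x 4 solutes generated as
# conservative mixtures of 3 end-members, plus small analytical noise.
endmembers = rng.uniform(0.1, 1.0, size=(3, 4))
fractions = rng.dirichlet(np.ones(3), size=200)   # mixing fractions sum to 1
X = fractions @ endmembers + rng.normal(0, 1e-3, size=(200, 4))

# Mean-center and inspect the singular value spectrum: the data should fit
# a 2-dimensional mixing subspace (n_endmembers - 1), with the remaining
# dimensions at the noise floor.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
print(np.round(explained, 4))
```

Lack of fit (structured residuals off the subspace) is what signals processes violating the conservative-mixing assumptions; projecting one site's chemistry into another site's subspace follows the same algebra.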
Influence of dispersing additive on asphaltenes aggregation in model system
NASA Astrophysics Data System (ADS)
Gorshkov, A. M.; Shishmina, L. V.; Tukhvatullina, A. Z.; Ismailov, Yu R.; Ges, G. A.
2016-09-01
The work is devoted to investigation of the dispersing additive influence on asphaltenes aggregation in the asphaltenes-toluene-heptane model system by photon correlation spectroscopy method. The experimental relationship between the onset point of asphaltenes and their concentration in toluene has been obtained. The influence of model system composition on asphaltenes aggregation has been researched. The estimation of aggregative and sedimentation stability of asphaltenes in model system and system with addition of dispersing additive has been given.
Mixed waste treatment model: Basis and analysis
Palmer, B.A.
1995-09-01
The Department of Energy's Programmatic Environmental Impact Statement (PEIS) required treatment system capacities for risk and cost calculation. Los Alamos was tasked with providing these capacities to the PEIS team. This involved understanding the Department of Energy (DOE) Complex waste, making the necessary changes to correct for problems, categorizing the waste for treatment, and determining the treatment system requirements. The treatment system requirements depended on the incoming waste, which varied for each PEIS case. The treatment system requirements also depended on the type of treatment that was desired. Because different groups contributing to the PEIS needed specific types of results, we provided the treatment system requirements in a variety of forms. In total, some 40 data files were created for the TRU cases, and for the MLLW case, there were 105 separate data files. Each data file represents one treatment case consisting of the selected waste from various sites, a selected treatment system, and the reporting requirements for such a case. The treatment system requirements in their most basic form are the treatment process rates for unit operations in the desired treatment system, based on a 10-year working life and 20-year accumulation of the waste. These results were reported in cubic meters and for the MLLW case, in kilograms as well. The treatment system model consisted of unit operations that are linked together. Each unit operation's function depended on the input waste streams, waste matrix, and contaminants. Each unit operation outputs one or more waste streams whose matrix, contaminants, and volume/mass may have changed as a result of the treatment. These output streams are then routed to the appropriate unit operation for additional treatment until the output waste stream meets the treatment requirements for disposal. The total waste for each unit operation was calculated as well as the waste for each matrix treated by the unit.
Shell model of optimal passive-scalar mixing
NASA Astrophysics Data System (ADS)
Miles, Christopher; Doering, Charles
2015-11-01
Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H-1 mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfect-mixed state for the enstrophy-constrained case. Although we only enforce that the time-average energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.
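The H-1 mix-norm used here weights each Fourier mode by the inverse of its wavenumber, so transferring scalar variance to small scales lowers the norm even though the L2 norm is unchanged. A minimal 1-D sketch (illustrative, not the paper's shell model):

```python
import numpy as np

def mix_norm(theta, L=2 * np.pi):
    """H^{-1} mix-norm of a zero-mean periodic scalar sampled on [0, L)."""
    n = theta.size
    theta_hat = np.fft.fft(theta) / n               # Fourier coefficients
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # wavenumbers
    k[0] = np.inf                                   # drop the mean mode
    return np.sqrt(np.sum(np.abs(theta_hat) ** 2 / k ** 2))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
coarse, fine = np.cos(x), np.cos(8 * x)
# Same L2 norm, but the fine-scale field is "more mixed": smaller mix-norm.
print(mix_norm(coarse) > mix_norm(fine))  # True
```

This wavenumber weighting is what makes the norm a sensible mixing objective: stirring that filaments the scalar drives it down without any diffusion.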
Hutchinson, D.P.
1995-07-01
This document provides physical, chemical, and radiological descriptive information for a portion of mixed waste that is potentially available for private sector treatment. The format and contents are designed to provide treatment vendors with preliminary information on the characteristics and properties for additional candidate portions of the Idaho National Engineering Laboratory (INEL) and offsite mixed wastes not covered in the two previous characterization reports for the INEL-stored low-level alpha-contaminated and transuranic wastes. This report defines the waste, provides background information, briefly reviews the requirements of the Federal Facility Compliance Act (P.L. 102-386), and relates the Site Treatment Plans developed under the Federal Facility Compliance Act to the waste streams described herein. Each waste is summarized in a Waste Profile Sheet with text, charts, and tables of waste descriptive information for a particular waste stream. A discussion of the availability and uncertainty of data for these waste streams precedes the characterization descriptions.
Scaled tests and modeling of effluent stack sampling location mixing.
Recknagle, Kurtis P; Yokuda, Satoru T; Ballinger, Marcel Y; Barnett, J Matthew
2009-02-01
A three-dimensional computational fluid dynamics computer model was used to evaluate the mixing at a sampling system for radioactive air emissions. Researchers sought to determine whether the location would meet the criteria for uniform air velocity and contaminant concentration as prescribed in the American National Standards Institute standard, Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stacks and Ducts of Nuclear Facilities. This standard requires that the sampling location be well-mixed and stipulates specific tests to verify the extent of mixing. The exhaust system for the Radiochemical Processing Laboratory was modeled with a computational fluid dynamics code to better understand the flow and contaminant mixing and to predict mixing test results. The modeled results were compared to actual measurements made at a scale-model stack and to the limited data set for the full-scale facility stack. Results indicated that the computational fluid dynamics code provides reasonable predictions for velocity, cyclonic flow, gas, and aerosol uniformity, although the code predicts greater improvement in mixing as the injection point is moved farther away from the sampling location than is actually observed by measurements. In expanding from small to full scale, the modeled predictions for full-scale measurements show similar uniformity values as in the scale model. This work indicated that a computational fluid dynamics code can be a cost-effective aid in designing or retrofitting a facility's stack sampling location that will be required to meet standard ANSI/HPS N13.1-1999.
Weakly nonlinear models for turbulent mixing in a plane mixing layer
NASA Technical Reports Server (NTRS)
Liou, William W.; Morris, Philip J.
1992-01-01
New closure models for turbulent free shear flows are presented in this paper. They are based on a weakly nonlinear theory with a description of the dominant large-scale structures as instability waves. Two models are presented that describe the evolution of the free shear flows in terms of the time-averaged mean flow and the dominant large-scale turbulent structure. The local characteristics of the large-scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models have been applied to the study of an incompressible mixing layer. For both models, predictions of the mean flow developed are made. In the second model, predictions of the time-dependent motion of the large-scale structures in the mixing layer are made. The predictions show good agreement with experimental observations.
Mixing Model Performance in Non-Premixed Turbulent Combustion
NASA Astrophysics Data System (ADS)
Pope, Stephen B.; Ren, Zhuyin
2002-11-01
In order to shed light on their qualitative and quantitative performance, three different turbulent mixing models are studied in application to non-premixed turbulent combustion. In previous works, PDF model calculations with detailed kinetics have been shown to agree well with experimental data for non-premixed piloted jet flames. The calculations from two different groups using different descriptions of the chemistry and turbulent mixing are capable of producing the correct levels of local extinction and reignition. The success of these calculations raises several questions, since it is not clear that the mixing models used contain an adequate description of the processes involved. To address these questions, three mixing models (IEM, modified Curl and EMST) are applied to a partially-stirred reactor burning hydrogen in air. The parameters varied are the residence time and the mixing time scale. For small relative values of the mixing time scale (approaching the perfectly-stirred limit) the models yield the same extinction behavior. But for larger values, the behavior is distinctly different, with EMST being most resistant to extinction.
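The simplest of the three, IEM (interaction by exchange with the mean), relaxes every notional particle toward the ensemble mean at a rate set by the mixing time scale. A minimal sketch, with illustrative parameters:

```python
import math

def iem_step(phi, dt, tau):
    """One IEM step: every particle's scalar relaxes toward the ensemble
    mean with time constant tau (deterministic per particle)."""
    mean = sum(phi) / len(phi)
    decay = math.exp(-dt / tau)
    return [mean + (p - mean) * decay for p in phi]

phi = [0.0] * 4 + [1.0] * 4   # unmixed two-stream initial condition
for _ in range(50):
    phi = iem_step(phi, dt=0.1, tau=1.0)
# The mean stays 0.5 and fluctuations decay as exp(-t/tau), but the
# two-valued "shape" of the scalar distribution never changes -- a known
# limitation of IEM that modified Curl and EMST address differently.
```

That frozen distribution shape, versus the pairwise interactions of modified Curl and the locality-in-composition-space of EMST, is precisely the kind of difference probed by the partially-stirred reactor tests in the abstract.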
A Comparison of Item Fit Statistics for Mixed IRT Models
ERIC Educational Resources Information Center
Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.
2010-01-01
In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…
A Bayesian Semiparametric Latent Variable Model for Mixed Responses
ERIC Educational Resources Information Center
Fahrmeir, Ludwig; Raach, Alexander
2007-01-01
In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…
Regional Conference on the Analysis of the Unbalanced Mixed Model.
1987-12-31
this complicated problem. Paper titles: The Present Status of Confidence Interval Estimation on Variance Components in Balanced and Unbalanced Random...Models; Prediction-Interval Procedures and (Fixed Effects) Confidence - Interval Procedures for Mixed Linear Models; The Use of Equivalent linear Models
Bayes factor between Student t and Gaussian mixed models within an animal breeding context
Casellas, Joaquim; Ibáñez-Escriche, Noelia; García-Cortés, Luis Alberto; Varona, Luis
2008-01-01
The implementation of Student t mixed models in animal breeding has been suggested as a useful statistical tool to effectively mute the impact of preferential treatment or other sources of outliers in field data. Nevertheless, these additional sources of variation are undeclared and we do not know whether a Student t mixed model is required or if a standard, and less parameterized, Gaussian mixed model would be sufficient to serve the intended purpose. Within this context, our aim was to develop the Bayes factor between two nested models that only differ in a bounded variable in order to easily compare a Student t and a Gaussian mixed model. It is important to highlight that the Student t density converges to a Gaussian process when the degrees of freedom tend to infinity. The two models can then be viewed as nested models that differ in terms of degrees of freedom. The Bayes factor can be easily calculated from the output of a Markov chain Monte Carlo sampling of the complex model (the Student t mixed model). The performance of this Bayes factor was tested under simulation and on a real dataset, using the deviance information criterion (DIC) as the standard reference criterion. The two statistical tools showed similar trends along the parameter space, although the Bayes factor appeared to be the more conservative. There was considerable evidence favoring the Student t mixed model for datasets simulated under Student t processes with limited degrees of freedom, and moderate advantages associated with using the Gaussian mixed model when working with datasets simulated with 50 or more degrees of freedom. For the analysis of real data (weight of Pietrain pigs at six months), both the Bayes factor and DIC slightly favored the Student t mixed model, there being a reduced incidence of outlier individuals in this population. PMID:18558073
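The abstract states that the Bayes factor is obtained from MCMC output of the Student t model alone. One common way to realize this for nested models differing in a bounded parameter is a Savage-Dickey density ratio at the boundary; the sketch below assumes a reparameterization η = 1/ν (η → 0 recovers the Gaussian model) and a uniform prior on η, neither of which is specified in the abstract.

```python
import numpy as np

def savage_dickey_bf(samples, prior_density_at_0, eps=0.02):
    """Savage-Dickey style Bayes factor for a bounded nuisance parameter.

    For two nested models differing only in eta = 1/nu (eta -> 0 recovers
    the Gaussian mixed model), BF(Gaussian vs Student t) is the ratio of
    posterior to prior density of eta at the boundary 0. The posterior
    density near 0 is estimated from MCMC samples of eta by a simple
    histogram count over [0, eps)."""
    post_density_at_0 = np.mean(samples < eps) / eps
    return post_density_at_0 / prior_density_at_0

# Illustrative: posterior mass concentrated away from 0 (heavy-tailed
# data, small nu) gives a small BF, i.e. evidence for the Student t model.
rng = np.random.default_rng(1)
eta_samples = rng.beta(5.0, 2.0, size=20_000)               # fake MCMC draws
bf = savage_dickey_bf(eta_samples, prior_density_at_0=1.0)  # uniform prior on (0, 1)
print(bf < 1.0)
```

The appeal of this construction, as the abstract notes, is that only the complex model needs to be sampled; no second MCMC run for the Gaussian model is required.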
Mixing by barotropic instability in a nonlinear model
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Chen, Ping
1994-01-01
A global, nonlinear, equivalent barotropic model is used to study the isentropic mixing of passive tracers by barotropic instability. Basic states are analytical zonal-mean jets representative of the zonal-mean flow in the upper stratosphere, where the observed 4-day wave is thought to be a result of barotropic, and possibly baroclinic, instability. As is known from previous studies, the phase speed and growth rate of the unstable waves are fairly sensitive to the shape of the zonal-mean jet, and the dominant wave mode at saturation is not necessarily the fastest growing mode; but the unstable modes share many features of the observed 4-day wave. Lagrangian trajectories computed from model winds are used to characterize the mixing by the flow. For profiles with both midlatitude and polar modes, mixing is stronger in midlatitudes than inside the vortex, but there is little exchange of air across the vortex boundary. There is a minimum in the Lyapunov exponents of the flow and the particle dispersion at the jet maximum. For profiles with only polar unstable modes, there is weak mixing inside the vortex, no mixing outside the vortex, and no exchange of air across the vortex boundary. These results support the theoretical arguments that, whether wave disturbances are generated by local instability or propagate from other regions, the mixing properties of the total flow are determined by the locations of the wave critical lines and that strong gradients of potential vorticity are very resistant to mixing.
New mixing angles in the left-right symmetric model
NASA Astrophysics Data System (ADS)
Kokado, Akira; Saito, Takesi
2015-12-01
In the left-right symmetric model neutral gauge fields are characterized by three mixing angles θ_12, θ_23, θ_13 between the three gauge fields B_μ, W³_Lμ, W³_Rμ, which produce the mass eigenstates A_μ, Z_μ, Z′_μ when G = SU(2)_L × SU(2)_R × U(1)_(B−L) × D is spontaneously broken down to U(1)_em. We find a new mixing angle θ′, which corresponds to the Weinberg angle θ_W in the standard model with the SU(2)_L × U(1)_Y gauge symmetry, from these mixing angles. It is then shown that any mixing angle θ_ij can be expressed by ε and θ′, where ε = g_L/g_R is the ratio of the running left-right gauge coupling strengths. We observe that the light gauge bosons are described by θ′ only, whereas the heavy gauge bosons are described by the two parameters ε and θ′.
Vašíčková, Jana; Maňáková, Blanka; Šudoma, Marek; Hofman, Jakub
2016-11-05
Sludge from remediation of groundwater contaminated by industry is usually managed as hazardous waste, although it might be considered for further processing as a source of nutrients. The ecotoxicity of phosphorus-rich sludge contaminated with arsenic was evaluated after mixing with soil and cultivation with Sinapis alba, and after supplementation into composting and vermicomposting processes. The Enchytraeus crypticus and Folsomia candida reproduction tests and the Lactuca sativa root growth test were used. Invertebrate bioassays reacted sensitively to the presence of arsenic in soil-sludge mixtures. The root elongation of L. sativa was not sensitive and showed variable results. In general, a relationship between the invertebrate test results and the mobile arsenic concentration was indicated for the majority of endpoints. Nevertheless, a significant portion of the results still cannot be satisfactorily explained by the As chemistry data. Composted and vermicomposted sludge mixtures showed surprisingly high toxicity to all three tested organisms despite the decrease in arsenic mobility, probably due to toxic metabolites of bacteria and earthworms produced during these processes. The results of the study indicate the inability of chemical methods, in contrast to ecotoxicity bioassays, to predict the effects of complex mixtures on living organisms.
A Mixed Effects Randomized Item Response Model
ERIC Educational Resources Information Center
Fox, J.-P.; Wyrick, Cheryl
2008-01-01
The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…
Regression models for mixed Poisson and continuous longitudinal data.
Yang, Ying; Kang, Jian; Mao, Kai; Zhang, Jie
2007-09-10
In this article we develop regression models that are flexible in two respects: they evaluate the influence of covariates on mixed Poisson and continuous responses, and they evaluate how the correlation between the Poisson and continuous responses changes over time. A scenario is proposed for handling regression models of mixed continuous and Poisson responses when heterogeneous variance and time-varying correlation are present. Our general approach is first to build a joint marginal model and to check whether the variance and correlation change over time via a likelihood ratio test. If they do, a suitable data transformation is applied to properly evaluate the influence of the covariates on the mixed responses. The proposed methods are applied to the Interstitial Cystitis Data Base (ICDB) cohort study, where we find that the positive correlations change significantly over time, suggesting that heterogeneous variances should not be ignored in modelling and inference.
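The likelihood ratio test at the heart of the proposed procedure can be illustrated on a toy problem; the normal model and simulated data below are illustrative stand-ins, not the paper's joint marginal model.

```python
import numpy as np
from scipy.stats import chi2, norm

def lrt(loglik_null, loglik_alt, df_diff):
    """Likelihood ratio statistic and asymptotic p-value for nested models."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return stat, chi2.sf(stat, df_diff)

# Toy version of the paper's check: does a variance parameter change
# between two time periods, or is it constant?
rng = np.random.default_rng(2)
y1 = rng.normal(0.0, 1.0, 400)   # early period
y2 = rng.normal(0.0, 2.0, 400)   # late period: variance has changed
y = np.concatenate([y1, y2])

ll_null = norm.logpdf(y, 0.0, y.std()).sum()        # one common variance (MLE)
ll_alt = (norm.logpdf(y1, 0.0, y1.std()).sum()
          + norm.logpdf(y2, 0.0, y2.std()).sum())   # period-specific variances
stat, p = lrt(ll_null, ll_alt, df_diff=1)
print(p < 0.05)   # heterogeneity detected
```

In the paper's setting the null and alternative would be joint marginal models with constant versus time-varying variance and correlation, but the test mechanics are the same.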
Generalized Dynamic Factor Models for Mixed-Measurement Time Series
Cui, Kai; Dunson, David B.
2013-01-01
In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation (GDKA) and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody’s rated firms from 1982–2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplemental materials available online. PMID:24791133
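A minimal sketch of a random-walk Metropolis-Hastings sampler with adaptive proposals, the generic ingredient named in the abstract. The Robbins-Monro style scale adaptation and the toy standard-normal target are illustrative assumptions; the paper's GDKA-based proposals are more elaborate.

```python
import numpy as np

def adaptive_mh(logpost, x0, n_iter=20_000, target_acc=0.44, seed=3):
    """Random-walk Metropolis-Hastings with Robbins-Monro adaptation of
    the proposal scale toward a target acceptance rate (diminishing
    adaptation, so ergodicity is preserved)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    scale = 1.0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + scale * rng.standard_normal()
        lp_prop = logpost(prop)
        accept = np.log(rng.random()) < lp_prop - lp
        if accept:
            x, lp = prop, lp_prop
        # grow/shrink the proposal scale toward the target acceptance rate
        scale *= np.exp((float(accept) - target_acc) / (i + 1) ** 0.6)
        samples[i] = x
    return samples

# Toy target: a standard normal, started far from the mode.
draws = adaptive_mh(lambda x: -0.5 * x * x, x0=5.0)
post = draws[5000:]   # discard burn-in
print(abs(post.mean()) < 0.1, abs(post.std() - 1.0) < 0.1)
```

The diminishing step size `(i + 1) ** 0.6` makes the adaptation vanish asymptotically, which is the standard condition for adaptive MCMC to target the correct posterior.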
Additive and subtractive scrambling in optional randomized response modeling.
Hussain, Zawar; Al-Sobhi, Mashail M; Al-Zahrani, Bander
2014-01-01
This article considers unbiased estimation of mean, variance and sensitivity level of a sensitive variable via scrambled response modeling. In particular, we focus on estimation of the mean. The idea of using additive and subtractive scrambling has been suggested under a recent scrambled response model. Whether it is estimation of mean, variance or sensitivity level, the proposed scheme of estimation is shown relatively more efficient than that recent model. As far as the estimation of mean is concerned, the proposed estimators perform relatively better than the estimators based on recent additive scrambling models. Relative efficiency comparisons are also made in order to highlight the performance of proposed estimators under suggested scrambling technique.
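The additive/subtractive scrambling idea admits a short simulation. Under the simple design sketched below (report X + S with probability p, X − S otherwise, with the distribution of the scrambling variable S known), the observed mean satisfies E[Y] = μ_X + (2p − 1)μ_S, which yields an unbiased moment estimator. All distributions and parameters here are illustrative, not the article's exact model.

```python
import numpy as np

def estimate_mean(y, p, mu_s):
    """Unbiased estimator of the sensitive mean under additive/subtractive
    scrambling: each respondent reports X + S with probability p and
    X - S with probability 1 - p, so E[Y] = mu_X + (2p - 1)*mu_S."""
    return y.mean() - (2.0 * p - 1.0) * mu_s

# Simulated survey (illustrative numbers).
rng = np.random.default_rng(4)
n, p, mu_s = 100_000, 0.7, 3.0
x = rng.gamma(2.0, 5.0, n)                      # sensitive variable, true mean 10
s = rng.normal(mu_s, 1.0, n)                    # scrambling variable, known mean
sign = np.where(rng.random(n) < p, 1.0, -1.0)   # add or subtract the scramble
y = x + sign * s                                # observed scrambled responses

print(round(estimate_mean(y, p, mu_s), 1))
```

Because each respondent reveals only a scrambled value, privacy is protected, yet the aggregate correction recovers the sensitive mean.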
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes and allows a global test of the impact of ...
Teaching Service Modelling to a Mixed Class: An Integrated Approach
ERIC Educational Resources Information Center
Deng, Jeremiah D.; Purvis, Martin K.
2015-01-01
Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…
Teaching the Mixed Model Design: A Flowchart to Facilitate Understanding.
ERIC Educational Resources Information Center
Mills, Jamie D.
2005-01-01
The Mixed Model (MM) design, sometimes known as a Split-Plot design, is very popular in educational research. This model can be used to examine the effects of several independent variables on a dependent variable and it offers a more powerful alternative to the completely randomized design. The MM design considers both a between-subjects factor,…
Complex Modelling Scheme Of An Additive Manufacturing Centre
NASA Astrophysics Data System (ADS)
Popescu, Liliana Georgeta
2015-09-01
This paper presents a modelling scheme sustaining the development of an additive manufacturing research centre model and its processes. The modelling is performed using IDEF0, with the resulting process model representing the basic processes required to develop such a centre in any university. While the activities presented in this study are those recommended in general, changes may occur in specific existing situations in a research centre.
Validation of hydrogen gas stratification and mixing models
Wu, Hsingtzu; Zhao, Haihua
2015-05-26
Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreement is observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.
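The scaling-based one-dimensional treatment of a buoyant jet can be illustrated with the classical Morton-Taylor-Turner entrainment equations, in which the entrainment coefficient plays the same role as the 0.08-0.09 values fitted above. This is a generic textbook sketch, not the BMIX++ formulation, and the inlet fluxes are illustrative.

```python
import numpy as np

def integrate_plume(q0, m0, f0, alpha, dz, n_steps):
    """Explicit-Euler integration of the Morton-Taylor-Turner top-hat
    plume equations in an unstratified ambient:
        dQ/dz = 2*alpha*sqrt(M),  dM/dz = F*Q/M,  dF/dz = 0,
    with Q, M, F the kinematic volume, momentum and buoyancy fluxes and
    alpha the empirical entrainment coefficient."""
    q, m, f = q0, m0, f0
    for _ in range(n_steps):
        q += 2.0 * alpha * np.sqrt(m) * dz
        m += f * q / m * dz
    return q, m, f

# A buoyant hydrogen-like release, integrated 5 m upward.
q1, _, _ = integrate_plume(q0=1e-4, m0=1e-4, f0=1e-3, alpha=0.09, dz=0.01, n_steps=500)
q2, _, _ = integrate_plume(q0=1e-4, m0=1e-4, f0=1e-3, alpha=0.08, dz=0.01, n_steps=500)
print(q1 > q2 > 1e-4)   # larger entrainment coefficient -> faster dilution
```

The monotone growth of Q with height is the dilution that such one-dimensional models capture at a tiny fraction of the cost of a 3-D CFD run.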
Temperature Chaos in Some Spherical Mixed p-Spin Models
NASA Astrophysics Data System (ADS)
Chen, Wei-Kuo; Panchenko, Dmitry
2017-03-01
We give two types of examples of the spherical mixed even- p-spin models for which chaos in temperature holds. These complement some known results for the spherical pure p-spin models and for models with Ising spins. For example, in contrast to a recent result of Subag who showed absence of chaos in temperature in the spherical pure p-spin models for p≥3, we show that even a smaller order perturbation induces temperature chaos.
Modeling the iron cycling in the mixed layer
NASA Astrophysics Data System (ADS)
Weber, L.; Voelker, C.; Schartau, M.; Wolf-Gladrow, D.
2003-04-01
We present a comprehensive model of the iron cycling within the mixed layer of the ocean, which predicts the time course of iron concentration and speciation. The speciation of iron within the mixed layer is heavily influenced by photochemistry, organic complexation, colloid formation and aggregation, as well as uptake and release by marine biota. The model is driven by mixed layer dynamics, dust deposition and insolation, coupled to a simple ecosystem model (based on Schartau et al. 2001: Deep-Sea Res. II 48, 1769-1800), and applied to the site of the Bermuda Atlantic Time-series Study (BATS). Parameters in the model were chosen to reproduce the small number of available speciation measurements resolving a daily cycle. The model clearly reproduces the available Fe concentration at the BATS station, but the annual balance of Fe fluxes at BATS is less constrained, due to uncertainties in the model parameters. Hence we discuss the model's sensitivity to parameter uncertainties, how the most important model parameters are constrained by the data, and which observations might help to constrain them better. The mixed layer cycle in the model strongly influences the seasonality of primary production as well as the light dependency of photoreductive processes, and therefore controls iron speciation. Furthermore, short events within a day (e.g. heavy rain, changes of irradiance, intense dust deposition and temporary deepening of the mixed layer) may drive processes such as colloidal aggregation. For this reason we compare two versions of the model: the first is forced by monthly averaged climatological variables, the second by daily climatological variability.
Hybrid configuration mixing model for odd nuclei
NASA Astrophysics Data System (ADS)
Colò, G.; Bortignon, P. F.; Bocchi, G.
2017-03-01
In this work, we introduce a new approach which is meant to be a first step towards complete self-consistent low-lying spectroscopy of odd nuclei. So far, we essentially limit ourselves to the description of a double-magic core plus an extra nucleon. The model does not contain any free adjustable parameter and is instead based on a Hartree-Fock (HF) description of the particle states in the core, together with self-consistent random-phase approximation (RPA) calculations for the core excitations. We include both collective and noncollective excitations, with proper care of the corrections due to the overlap between them (i.e., due to the nonorthonormality of the basis). As a consequence, with respect to traditional particle-vibration coupling calculations in which one can only address single-nucleon states and particle-vibration multiplets, we can also describe states of shell-model types like 2 particle-1 hole. We will report results for 49Ca and 133Sb and discuss future perspectives.
Mix Model Comparison of Low Feed-Through Implosions
NASA Astrophysics Data System (ADS)
Pino, Jesse; MacLaren, S.; Greenough, J.; Casey, D.; Dewald, E.; Dittrich, T.; Khan, S.; Ma, T.; Sacks, R.; Salmonson, J.; Smalyuk, V.; Tipton, R.; Kyrala, G.
2016-10-01
The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the NIF. Recently, the separated reactants technique has been applied to the Two Shock (TS) implosion platform, which is designed to minimize this feed-through and isolate local mix at the gas-ablator interface, and which produces core yields in good agreement with 1D clean simulations. The effects of both inner surface roughness and convergence ratio have been probed. The TT, DT, and DD neutron signals respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations. Various methods of interfacial mix will be considered, including the Reynolds-Averaged Navier-Stokes (RANS) KL method as well as a multicomponent enhanced-diffusivity model with species, thermal, and pressure gradient terms. We also give predictions for an upcoming campaign to investigate mid-Z mixing by adding a Ge dopant to the CD layer. LLNL-ABS-697251 This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The salinity effect in a mixed layer ocean model
NASA Technical Reports Server (NTRS)
Miller, J. R.
1976-01-01
A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.
A 3D Bubble Merger Model for RTI Mixing
NASA Astrophysics Data System (ADS)
Cheng, Baolian
2015-11-01
In this work we present a model for the merger processes of bubbles at the edge of an unstable acceleration driven mixing layer. Steady acceleration defines a self-similar mixing process, with a time-dependent inverse cascade of structures of increasing size. The time evolution is itself a renormalization group evolution. The model predicts the growth rate of a Rayleigh-Taylor chaotic fluid-mixing layer. The 3-D model differs from the 2-D merger model in several important ways. Beyond the extension of the model to three dimensions, the model contains one phenomenological parameter, the variance of the bubble radii at fixed time. The model also predicts several experimental numbers: the bubble mixing rate, the mean bubble radius, and the bubble height separation at the time of merger. From these we also obtain the bubble height to the radius aspect ratio, which is in good agreement with experiments. Applications to recent NIF and Omega experiments will be discussed. This work was performed under the auspices of the U.S. Department of Energy by the Los Alamos National Laboratory under Contract No. W-7405-ENG-36.
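The self-similar growth underlying such merger models is commonly summarized by the bubble-front penetration law h = α A g t², with α the growth-rate coefficient the model predicts. A trivial numerical sketch, with illustrative numbers only:

```python
def rt_bubble_height(alpha_b, atwood, g, t):
    """Self-similar Rayleigh-Taylor bubble-front penetration
    h = alpha_b * A * g * t**2, where alpha_b is the growth-rate
    coefficient (experiments give roughly 0.05-0.07 for the bubble
    side) and A is the Atwood number."""
    return alpha_b * atwood * g * t * t

# Illustrative: A = 0.5, g = 9.8 m/s^2, after 2 s of steady acceleration.
h = rt_bubble_height(alpha_b=0.06, atwood=0.5, g=9.8, t=2.0)
print(round(h, 3))   # -> 1.176 (metres)
```

The merger model's content is precisely the prediction of α (together with the mean bubble radius and height separation at merger) from the inverse-cascade dynamics, rather than taking α from experiment.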
Modeling and Analysis of Mixed Synchronous/Asynchronous Systems
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan
2012-01-01
Practical safety-critical distributed systems must integrate safety critical and non-critical data in a common platform. Safety critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted to capture mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.
Comprehensive European dietary exposure model (CEDEM) for food additives.
Tennant, David R
2016-05-01
European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
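The deterministic algorithm described, summing consumption times permitted use level over food categories, can be sketched as follows. The category names and numbers are illustrative placeholders, not EFSA values.

```python
def dietary_exposure(consumption_g_per_kg_bw, max_use_level_mg_per_kg):
    """Deterministic additive-exposure screen in the spirit of the EFSA
    food-additive intake algorithm: sum, over food categories, mean
    consumption (g of food per kg body weight per day) times the maximum
    permitted use level (mg additive per kg food). The /1000 converts
    g of food to kg, giving mg additive per kg bw per day."""
    return sum(consumption_g_per_kg_bw[food] * max_use_level_mg_per_kg[food] / 1000.0
               for food in consumption_g_per_kg_bw)

# Hypothetical inputs for a single population group.
consumption = {"soft drinks": 10.0, "confectionery": 1.5, "sauces": 0.8}
use_levels = {"soft drinks": 300.0, "confectionery": 1000.0, "sauces": 500.0}
print(round(dietary_exposure(consumption, use_levels), 2))   # mg/kg bw/day
```

The attraction of such a model, as the abstract argues, is that it needs only published summary consumption statistics rather than full per-subject food consumption records for every member state.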
Computer modeling of ORNL storage tank sludge mobilization and mixing
Terrones, G.; Eyler, L.L.
1993-09-01
This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate the mixing times required to approach homogeneity of the contents of the tanks.
Spread in model climate sensitivity traced to atmospheric convective mixing.
Sherwood, Steven C; Bony, Sandrine; Dufresne, Jean-Louis
2014-01-02
Equilibrium climate sensitivity refers to the ultimate change in global mean temperature in response to a change in external forcing. Despite decades of research attempting to narrow uncertainties, equilibrium climate sensitivity estimates from climate models still span roughly 1.5 to 5 degrees Celsius for a doubling of atmospheric carbon dioxide concentration, precluding accurate projections of future climate. The spread arises largely from differences in the feedback from low clouds, for reasons not yet understood. Here we show that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half of the variance in climate sensitivity estimated by 43 climate models. The apparent mechanism is that such mixing dehydrates the low-cloud layer at a rate that increases as the climate warms, and this rate of increase depends on the initial mixing strength, linking the mixing to cloud feedback. The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide. This is significantly higher than the currently accepted lower bound of 1.5 degrees, thereby constraining model projections towards relatively severe future warming.
Merino, Ignacio; Arévalo, Luis F; Romero, Fernando
2007-01-01
The study of the ceramic characteristics of sludge ashes, alone or mixed with additives (kaolin, montmorillonite, illitic clay, powdered flat glass), includes characterization of the additives, preparation of probes (dry or wet mixed), thermal treatment (up to 1200 degrees C, short of melting or deformation) and control measurements (densities, compressive strengths and water absorption). Thermal treatment increases the density and compressive strength of the probes (both parameters pass through maxima, with later decreases) and decreases water absorption. The densification is also revealed by the evolution of the ratio of volume decrease to mass loss. The maximum values of compressive strength were obtained for 25% of illitic clay, montmorillonite and glass powder. The densification observed for probes of sludge ashes alone does not occur with kaolin. Experimental data were fitted to exponential relationships between compressive strength and density for every composition, and also to a general equation for all probes. The apparent density was fitted to a non-linear dependence on temperature, leading to a maximum in density and permitting calculation of the temperature at which this maximum occurs. The fit was not possible for probes containing kaolin, which presumably require higher temperatures to densify. Water absorption has low values for ash or kaolin probes, intermediate values for illite and powdered flat glass probes, and high values for montmorillonite probes. Except with kaolin, ceramic materials with better characteristics than sludge ashes without additives were obtained at lower treatment temperatures.
Forbes, Scott C; McCargar, Linda; Jelen, Paul; Bell, Gordon J
2014-04-01
The purpose was to investigate the effects of a controlled typical 1-day diet supplemented with two different doses of whey protein isolate on blood amino acid profiles and hormonal concentrations following the final meal. Nine males (age: 29.6 ± 6.3 yrs) completed four conditions in random order: a control (C) condition of a typical mixed diet containing ~10% protein (0.8 g·kg⁻¹), 65% carbohydrate, and 25% fat; a placebo (P) condition calorically matched with carbohydrate to the whey protein conditions; a low-dose condition of 0.8 grams of whey protein isolate per kilogram body mass per day (g·kg⁻¹·d⁻¹; W1) in addition to the typical mixed diet; or a high-dose condition of 1.6 g·kg⁻¹·d⁻¹ (W2) of supplemental whey protein in addition to the typical mixed diet. Following the final meal, significant (p < .05) increases in total amino acids, essential amino acids (EAA), branched-chain amino acids (BCAA), and leucine were observed in plasma with whey protein supplementation, while no changes were observed in the control and placebo conditions. There was no significant group difference for glucose, insulin, testosterone, cortisol, or growth hormone. In conclusion, supplementing a typical daily food intake consisting of 0.8 g of protein·kg⁻¹·d⁻¹ with a whey protein isolate (an additional 0.8 or 1.6 g·kg⁻¹·d⁻¹) significantly elevated total amino acids, EAA, BCAA, and leucine but had no effect on glucose, insulin, testosterone, cortisol, or growth hormone following the final meal. Future acute and chronic supplementation research examining the physiological and health outcomes associated with elevated amino acid profiles is warranted.
Sensitivity of fine sediment source apportionment to mixing model assumptions
NASA Astrophysics Data System (ADS)
Cooper, Richard; Krueger, Tobias; Hiscock, Kevin; Rawlins, Barry
2015-04-01
Mixing models have become increasingly common tools for quantifying fine sediment redistribution in river catchments. The associated uncertainties may be modelled coherently and flexibly within a Bayesian statistical framework (Cooper et al., 2015). However, there is more than one way to represent these uncertainties because the modeller has considerable leeway in making error assumptions and model structural choices. In this presentation, we demonstrate how different mixing model setups can impact upon fine sediment source apportionment estimates via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges and subsurface material) under base flow conditions between August 2012 and August 2013 (Cooper et al., 2014). Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ~76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing prior parameter distributions, inclusion of covariance terms, incorporation of time-variant distributions and methods of proportion characterisation. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup and between a Bayesian and a popular Least Squares optimisation approach. Our OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon fine sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model setup prior to conducting fine sediment source apportionment investigations.
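The deterministic forward model that every such mixing model shares can be sketched as a constrained least-squares un-mixing problem; the Bayesian and Least Squares setups compared in the abstract differ in how they wrap this core with error models and priors. The tracer values below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def unmix(source_tracers, mixture):
    """Deterministic core of a tracer mixing model: find source
    proportions p (p >= 0, sum p = 1) minimising the misfit between the
    predicted mixture signature source_tracers.T @ p and the observed
    mixture signature."""
    A = np.asarray(source_tracers, dtype=float)   # (n_sources, n_tracers)
    y = np.asarray(mixture, dtype=float)
    n = A.shape[0]
    res = minimize(
        lambda p: np.sum((A.T @ p - y) ** 2),
        x0=np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# Three sources, four geochemical tracers (synthetic numbers).
sources = np.array([[10.0, 2.0, 5.0, 1.0],    # arable topsoil
                    [4.0, 8.0, 1.0, 3.0],     # road verge
                    [1.0, 1.0, 9.0, 6.0]])    # subsurface
true_p = np.array([0.2, 0.1, 0.7])
mixture = sources.T @ true_p                  # noise-free mixture signature
p_hat = unmix(sources, mixture)
print(np.round(p_hat, 3))
```

In the noise-free case the proportions are recovered exactly; the sensitivity the abstract documents arises once measurement error, tracer covariance and time variation are layered on top of this core.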
Observing and Modelling Upper Ocean Mixing by Near-Inertial Oscillations
NASA Astrophysics Data System (ADS)
Pillar, Helen; Jochum, Markus; Nuterman, Roman; Bentsen, Mats
2016-04-01
Near-inertial oscillations (NIOs) have been observed to drive substantial ocean mixing during the passage of atmospheric storms. This mixing is poorly resolved in climate models due to coarse spatial and temporal resolution of the atmospheric forcing and missing ocean physics. A new parameterisation is developed in the Norwegian Earth System Model (NorESM) to account for enhancement of both mixed layer turbulent kinetic energy and interior diapycnal diffusivity by locally forced NIOs. This parameterisation is based on the inclusion of a simple slab model in the NorESM coupler, receiving high frequency wind forcing and generating near-inertial current distributions consistent with available observations from surface drifters. Our results suggest that NIOs are unimportant for mixing at depth, but act to deepen the ocean mixed layer and significantly impact air-sea buoyancy fluxes, contributing to the reduction of large model biases in tropical SST. Additional analysis of mooring data from the PIRATA observational array reveals that a large fraction of the near-inertial energy injected at the surface is realised through a few extreme storms rather than a continuum of events. Further improvements to the ocean mixing parameterisation may thus require the resolution dependence of the simulated storm activity to be explored in more detail.
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with Student-type t-tests, regression, or standard AN(C)OVA, in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the…
COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS
Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
The Worm Process for the Ising Model is Rapidly Mixing
NASA Astrophysics Data System (ADS)
Collevecchio, Andrea; Garoni, Timothy M.; Hyndman, Timothy; Tokarev, Daniel
2016-09-01
We prove rapid mixing of the worm process for the zero-field ferromagnetic Ising model, on all finite connected graphs, and at all temperatures. As a corollary, we obtain a fully-polynomial randomized approximation scheme for the Ising susceptibility, and for a certain restriction of the two-point correlation function.
Development of stable isotope mixing models in ecology - Sydney
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Perth
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
ERIC Educational Resources Information Center
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Historical development of stable isotope mixing models in ecology
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Dynamics and Modeling of Turbulent Mixing in Oceanic Flows
2010-09-30
channel flow (for a nice theoretical discussion, see Armenio and Sarkar 2002), the mixing properties of each of the Prt formulations might not be...to incorporate effects of inhomogeneity into turbulence models. REFERENCES Armenio, V. and Sarkar, S. 2002. An investigation of stably stratified
A Nonlinear Mixed Effects Model for Latent Variables
ERIC Educational Resources Information Center
Harring, Jeffrey R.
2009-01-01
The nonlinear mixed effects model for continuous repeated measures data has become an increasingly popular and versatile tool for investigating nonlinear longitudinal change in observed variables. In practice, for each individual subject, multiple measurements are obtained on a single response variable over time or condition. This structure can be…
Development of stable isotope mixing models in ecology - Fremantle
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Dublin
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh
2009-05-01
Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP), or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being modelled implicitly as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔGobs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to become increasingly inaccurate the more explicit water molecules are placed in the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.
Bowers, J.S.; Anson, S.M.; Painter, S.M.
1995-09-01
Stabilization is a best demonstrated available technology, or BDAT, as defined by the U.S. Environmental Protection Agency (EPA) in Title 40, part 268, of the Code of Federal Regulations (40 CFR 268). This technology traps toxic contaminants (usually both chemically and physically) in a matrix so that they do not leach into the environment. Typical contaminants that are trapped by stabilization are metals (mostly transition metals) that exhibit the characteristic of toxicity as defined by 40 CFR part 261. The stabilization process routinely uses pozzolanic materials. Portland cement, fly ash-lime mixes, gypsum cements, and clays are some of the most common materials. They are inexpensive, easy to use, and effective for wastes containing low concentrations of toxic materials. At the Lawrence Livermore National Laboratory (LLNL), additives such as dithiocarbamates and thiocarbonates, which are pH-insensitive and provide resistance to ligand formation, are used in the waste stabilization process. Attapulgite, montmorillonite, and sepiolite clays are used because they are forgiving (the recipe can be adjusted before the matrix hardens) when formulating a stabilization matrix, and they have a neutral pH. By using these clays and additives, LLNL's highly concentrated wastewater treatment sludges have passed the TCLP and STLC tests. The most frequently used stabilization process consists of a customized recipe involving waste sludge, clay and dithiocarbamate salt, mixed with a double planetary mixer into a pasty consistency. TCLP and STLC data on this waste matrix have shown that the process matrix meets land disposal requirements.
Cross-Validation for Nonlinear Mixed Effects Models
Colby, Emily; Bair, Eric
2013-01-01
Cross-validation is frequently used for model selection in a variety of applications. However, it is difficult to apply cross-validation to mixed effects models (including nonlinear mixed effects models or NLME models) due to the fact that cross-validation requires “out-of-sample” predictions of the outcome variable, which cannot be easily calculated when random effects are present. We describe two novel variants of cross-validation that can be applied to nonlinear mixed effects models. One variant, where out-of-sample predictions are based on post hoc estimates of the random effects, can be used to select the overall structural model. Another variant, where cross-validation seeks to minimize the estimated random effects rather than the estimated residuals, can be used to select covariates to include in the model. We show that these methods produce accurate results in a variety of simulated data sets and apply them to two publicly available population pharmacokinetic data sets. PMID:23532511
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
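The distinction between the two error models can be made concrete with a small synthetic experiment. This is a sketch with an invented rain distribution and error magnitude, not the letter's data: under a multiplicative error, residuals are heteroscedastic on the raw scale but homoscedastic after a log transform.

```python
import numpy as np

# Additive model:        y = t + eps
# Multiplicative model:  y = t * exp(eps)  =>  log y = log t + eps
rng = np.random.default_rng(0)
truth = rng.gamma(shape=2.0, scale=5.0, size=10_000)   # synthetic "true" daily totals
eps = rng.normal(0.0, 0.3, size=truth.size)
obs = truth * np.exp(eps)                              # data with multiplicative error

add_resid = obs - truth                                # residuals under the additive model
mul_resid = np.log(obs) - np.log(truth)                # residuals under the multiplicative model

# Split events at the median intensity: under the additive model the residual
# spread grows with intensity (systematic error leaking into random error);
# in log space the spread is the same (~0.3) for small and large events.
lo, hi = truth < np.median(truth), truth >= np.median(truth)
print(add_resid[lo].std(), add_resid[hi].std())
print(mul_resid[lo].std(), mul_resid[hi].std())
```

This mirrors the letter's first criterion: the multiplicative form separates a stable random component from an intensity-dependent systematic one.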
Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes
NASA Astrophysics Data System (ADS)
Hehr, Adam; Dapino, Marcelo J.
2016-04-01
Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high-power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, to embed temperature-sensitive components, sensors, and materials, and to net-shape parts. Structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant (LTI) model is used to relate the system inputs of shear force and electric current to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency and performance, and guide the development of improved quality monitoring and control strategies.
An epidemic model to evaluate the homogeneous mixing assumption
NASA Astrophysics Data System (ADS)
Turnes, P. P.; Monteiro, L. H. A.
2014-11-01
Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant for predicting the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
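A minimal sketch of an ODE system with the abstract's three state variables (susceptibles, infectious, empty space) is below. The functional forms and all rate constants are illustrative placeholders, not the paper's equations, and this is a plain forward-Euler integration, not a bifurcation analysis.

```python
# Fractions of susceptibles s, infectious i, and empty space e (s + i + e = 1),
# with an SIS-type disease conferring no immunity. Placeholder rate constants:
k, beta, gamma, mu_s, mu_i = 0.8, 2.0, 0.4, 0.1, 0.3

def rhs(s, i, e):
    ds = k*e*s - beta*s*i + gamma*i - mu_s*s   # reproduction into empty space, infection, recovery, death
    di = beta*s*i - (gamma + mu_i)*i           # recovery returns individuals to s (no immunity)
    de = mu_s*s + mu_i*i - k*e*s               # deaths free space; births occupy it
    return ds, di, de

s, i, e = 0.6, 0.05, 0.35
dt = 0.01
for _ in range(100_000):                       # forward-Euler integration to t = 1000
    ds, di, de = rhs(s, i, e)
    s, i, e = s + dt*ds, i + dt*di, e + dt*de

print(s + i + e)       # the derivatives sum to zero, so the fractions still sum to 1
print(0.0 < i < 1.0)   # the infection persists rather than dying out
```

Because the three right-hand sides sum to zero by construction, homogeneous mixing conserves total space exactly, which is a useful sanity check on any variant of such a model.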
Low-order models of biogenic ocean mixing
NASA Astrophysics Data System (ADS)
Dabiri, J. O.; Rosinelli, D.; Koumoutsakos, P.
2009-12-01
Biogenic ocean mixing, the process whereby swimming animals may affect ocean circulation, has primarily been studied using order-of-magnitude theoretical estimates and a small number of field observations. We describe numerical simulations of arrays of simplified animal shapes migrating in inviscid fluid and at finite Reynolds numbers. The effect of density stratification is modeled in the fluid dynamic equations of motion by a buoyancy acceleration term, which arises due to perturbations to the density field by the migrating bodies. The effects of fluid viscosity, body spacing, and array configuration are investigated to identify scenarios in which a meaningful contribution to ocean mixing by swimming animals is plausible.
An explicit mixed numerical method for mesoscale model
NASA Technical Reports Server (NTRS)
Hsu, H.-M.
1981-01-01
A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for the time tendency terms, an upstream scheme for the advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating the system of either the shallow-water equations in one dimension or the primitive equations in three dimensions. Since the technique is explicit and two-time-level, it conserves computer and programming resources.
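The scheme can be sketched on a 1-D advection-diffusion surrogate, u_t + a u_x = ν u_xx (our stand-in problem, not the mesoscale system itself): forward difference for the time tendency, upstream (upwind) differencing for the advective term, and central differencing for the remaining diffusive term.

```python
import numpy as np

nx, L = 200, 1.0
dx = L / nx
a, nu = 1.0, 1e-3
dt = 0.4 * min(dx / a, dx**2 / (2 * nu))       # conditional stability: CFL + diffusion limits

x = np.arange(nx) * dx
u = np.exp(-((x - 0.3) ** 2) / 0.005)          # Gaussian pulse initial condition
mass0 = u.sum()

for _ in range(500):
    um, up = np.roll(u, 1), np.roll(u, -1)     # periodic neighbours u[j-1], u[j+1]
    adv = a * (u - um) / dx                    # upstream scheme (valid for a > 0)
    diff = nu * (up - 2*u + um) / dx**2        # central scheme
    u = u + dt * (diff - adv)                  # forward (explicit, two-time-level) step

print(u.max() <= 1.0 + 1e-12)                  # monotone here: no new maxima created
print(abs(u.sum() - mass0) < 1e-8)             # mass conserved on the periodic domain
```

With this time step the update is a convex combination of neighbouring values, which is one way to see the conditional stability the abstract refers to.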
An Additional Symmetry in the Weinberg-Salam Model
Bakker, B.L.G.; Veselov, A.I.; Zubkov, M.A.
2005-06-01
An additional Z{sub 6} symmetry hidden in the fermion and Higgs sectors of the Standard Model has been found recently. It has a singular nature and is connected to the centers of the SU(3) and SU(2) subgroups of the gauge group. A lattice regularization of the Standard Model was constructed that possesses this symmetry. In this paper, we report our results on the numerical simulation of its electroweak sector.
Quasi 1D Modeling of Mixed Compression Supersonic Inlets
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.
2012-01-01
The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.
Application of large eddy interaction model to a mixing layer
NASA Technical Reports Server (NTRS)
Murthy, S. N. B.
1989-01-01
The large eddy interaction model (LEIM) is a statistical model of turbulence based on the interaction of selected eddies with the mean flow and all of the eddies in a turbulent shear flow. It can be utilized as the starting point for obtaining physical structures in the flow. The possible application of the LEIM to a mixing layer formed between two parallel, incompressible flows with a small temperature difference is developed by invoking a detailed similarity between the spectra of velocity and temperature.
Modeling uranium transport in acidic contaminated groundwater with base addition.
Zhang, Fan; Luo, Wensui; Parker, Jack C; Brooks, Scott C; Watson, David B; Jardine, Philip M; Gu, Baohua
2011-06-15
This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH and high concentrations of NO₃⁻, SO₄²⁻, U and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes, which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition, due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful for predicting field-scale U(VI) sequestration and remediation effectiveness.
Seismic tests for solar models with tachocline mixing
NASA Astrophysics Data System (ADS)
Brun, A. S.; Antia, H. M.; Chitre, S. M.; Zahn, J.-P.
2002-08-01
We have computed accurate 1-D solar models including both a macroscopic mixing process in the solar tachocline and up-to-date microscopic physical ingredients. Using sound speed and density profiles inferred through primary inversion of the solar oscillation frequencies, coupled with the equation of thermal equilibrium, we have extracted the temperature and hydrogen abundance profiles. These inferred quantities place strong constraints on our theoretical models in terms of the extent and strength of our macroscopic mixing, the photospheric heavy element abundance, the nuclear reaction rates such as S11 and S34, and the efficiency of the microscopic diffusion. We find a good overall agreement between the seismic Sun and our models if we introduce a macroscopic mixing in the tachocline and allow for variation within their uncertainties of the main physical ingredients. From our study we deduce that the solar hydrogen abundance at the solar age is X_inv = 0.732 ± 0.001 and that, based on the ⁹Be photospheric depletion, the maximum extent of mixing in the tachocline is 5% of the solar radius. The nuclear reaction rate for the fundamental pp reaction is found to be S11(0) = (4.06 ± 0.07) × 10⁻²⁵ MeV barns, i.e., 1.5% higher than the present theoretical determination. The predicted solar neutrino fluxes are discussed in the light of the new SNO/SuperKamiokande results.
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking. PMID:27853440
Upscaling of Mixing Processes using a Spatial Markov Model
NASA Astrophysics Data System (ADS)
Bolster, Diogo; Sund, Nicole; Porta, Giovanni
2016-11-01
The Spatial Markov model has been used to successfully upscale transport behavior across a broad range of spatially heterogeneous flows, with most examples to date coming from applications relating to porous media. In its most common current forms, the model predicts spatially averaged concentrations. However, many processes, including for example chemical reactions, require an adequate understanding of mixing below the averaging scale, which means that knowledge of subscale fluctuations, or closures that adequately describe them, is needed. Here we present a framework, consistent with the Spatial Markov modeling framework, that enables us to do this. We present its application to a simple example, a spatially periodic flow at low Reynolds number. We demonstrate that our upscaled model can successfully predict mixing by comparing results from direct numerical simulations to predictions with our upscaled model. To this end we focus on predicting two common metrics of mixing: the dilution index and the scalar dissipation. For both metrics our upscaled predictions very closely match observed values from the DNS. This material is based upon work supported by NSF Grants EAR-1351625 and EAR-1417264.
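One of the mixing metrics named here, the dilution index E = exp(−∫ p ln p dx), can be illustrated on a toy 1-D Gaussian plume; for a Gaussian of width σ it has the closed form E = σ√(2πe). The grid and width below are our choices, not values from the work described.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20_001)
dx = x[1] - x[0]
sigma = 1.5
c = np.exp(-x**2 / (2 * sigma**2))      # concentration profile of the plume

p = c / (c.sum() * dx)                  # normalise concentration to a pdf
entropy = -np.sum(p * np.log(p)) * dx   # differential entropy of the plume
E = np.exp(entropy)                     # dilution index

print(E)                                # numerical value
print(sigma * np.sqrt(2 * np.pi * np.e))  # analytic value for comparison
```

As the plume spreads (σ grows), E grows, which is why the dilution index is a natural target for an upscaled mixing closure to reproduce.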
Generalised additive modelling approach to the fermentation process of glutamate.
Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping
2011-03-01
In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters, fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR), could capture 97% of the variance in the production of Glu during the fermentation process through a GAM model calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu. Conditions to optimize the fermentation process were proposed based on a simulation study with this model. The results suggested that the production of Glu can reach a high level by controlling the concentration levels of DO and OUR at the proposed optimization conditions during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu.
Abaházi, Emese; Boros, Zoltán; Poppe, László
2014-07-08
Effects of various additives on the lipase from Burkholderia cepacia (BcL), immobilized on a mixed-function-grafted mesoporous silica gel support by hydrophobic adsorption and covalent attachment, were investigated. Catalytic properties of the immobilized biocatalysts were characterized in the kinetic resolution of racemic 1-phenylethanol (rac-1a) and 1-(thiophen-2-yl)ethan-1-ol (rac-1b). Screening of more than 40 additives showed significantly enhanced productivity of immobilized BcL with several additives such as PEGs, oleic acid and polyvinyl alcohol. Effects of substrate concentration and of temperature between 0 and 100 °C on the kinetic resolution of rac-1a were studied with the best adsorbed BcLs, containing PEG 20k or PVA 18-88 additives, in a continuous-flow packed-bed reactor. The optimum temperature of lipase activity, determined in the continuous-flow system, was around 30 °C for BcL co-immobilized with PEG 20k and increased remarkably to around 80 °C for BcL co-immobilized with PVA 18-88.
Logit-normal mixed model for Indian Monsoon rainfall extremes
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-03-01
Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
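The logit-normal mixed model named in the abstract can be written in a generic form (notation ours; the abstract's fixed covariates are latitude, longitude, elevation, and daily minimum and maximum temperature, with a random intercept per weather station):

```latex
\log\frac{p_{ij}}{1 - p_{ij}} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + b_i,
\qquad b_i \sim \mathcal{N}\!\left(0, \sigma_b^{2}\right),
```

where p_ij is the probability that observation j at station i is an extreme-rainfall event, x_ij collects the fixed covariates with coefficients β, and b_i is the station-specific random intercept. The normally distributed random effect on the logit scale is what makes the GLMM "logit-normal".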
Uncertainty in mixing models: a blessing in disguise?
NASA Astrophysics Data System (ADS)
Delsman, J. R.; Oude Essink, G. H. P.
2012-04-01
Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few have addressed the associated uncertainty in much detail. This uncertainty stems from analytical error, spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing model analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km² agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice is aimed at improving water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for a more sustainable water management practice and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. The use of a GLUE-like framework for applying the end-member mixing analysis not only quantified the uncertainty associated with the analysis, the analysis of the posterior parameter set also identified the existence of catchment processes otherwise overlooked.
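A GLUE-like end-member mixing analysis of the sort described can be sketched as follows: end-member tracer signatures are perturbed within an assumed spread, mixing fractions are solved for each draw, and only "behavioural" models that reproduce the sample within a tolerance are retained. All tracer signatures, spreads and the acceptance tolerance below are invented for illustration, not the catchment's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mean signatures of 3 tracers (rows) for 3 hypothetical end-members (columns),
# e.g. rain, fresh groundwater, brackish seepage.
EM = np.array([[10.0,  30.0,  400.0],
               [50.0, 300.0, 2000.0],
               [ 5.0,  40.0,  120.0]])
EM_sd = 0.05 * EM                        # assumed end-member variability
sample = np.array([98.0, 565.0, 45.5])   # observed mixed-sample concentrations

accepted = []
for _ in range(20_000):
    M = rng.normal(EM, EM_sd)                  # perturbed end-member matrix
    A = np.vstack([M, np.ones(3)])             # mixing equations + mass balance
    b = np.append(sample, 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares mixing fractions
    resid = np.abs(M @ f - sample) / sample
    if (f >= 0).all() and (f <= 1).all() and resid.max() < 0.10:
        accepted.append(f)                     # behavioural model retained

post = np.array(accepted)
print(len(post))                 # size of the behavioural (posterior) set
if len(post):
    print(post.mean(axis=0))     # posterior mean source fractions
```

The spread of the retained fractions, not just their mean, is the point of the exercise: it is the uncertainty estimate that standard EMMA does not provide.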
Validation of transport models using additive flux minimization technique
NASA Astrophysics Data System (ADS)
Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.
2013-10-01
A new additive flux minimization technique is proposed for carrying out the verification and validation (V&V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V&V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V&V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
Fermion masses and mixing in general warped extra dimensional models
NASA Astrophysics Data System (ADS)
Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel
2015-06-01
We analyze fermion masses and mixing in a general warped extra dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution is different from five-dimensional anti-de Sitter (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector become available.
Tokarz-Deptuła, B; Niedźwiedzka-Rystwej, P; Adamiak, M; Hukowska-Szematowicz, B; Trzeciak-Ryczek, A; Deptuła, W
2015-01-01
In this paper we studied haematological values, namely haemoglobin concentration, haematocrit value, thrombocytes, and leucocytes (lymphocytes, neutrophils, basophils, eosinophils and monocytes), in the peripheral blood of Polish mixed-breed rabbits with an addition of meat-breed blood, in order to obtain reference values, which have until now not been available for these animals. In studying these indices we took into consideration the impact of the season (spring, summer, autumn, winter) and the sex of the animals. The studies showed a strong impact of the season of the year in these rabbits, but only in spring and summer. Moreover, we observed that sex had only a moderate impact on the studied haematological parameters in these rabbits. To our knowledge, this is the first paper on haematological values in this widely used group of rabbits, so they may serve as reference values.
Sun, Xiao-Lu; Zhao, Jing; You, Ye-Ming; Jianxin Sun, Osbert
2016-01-14
Changes in litterfall dynamics and soil properties due to anthropogenic or natural perturbations have important implications for soil carbon (C) and nutrient cycling via the microbial pathway. Here we determine soil microbial responses to contrasting types of litter inputs (leaf vs. fine woody litter) and nitrogen (N) deposition by conducting a multi-year litter manipulation and N addition experiment in a mixed-wood forest. We found significantly higher soil organic C, total N, microbial biomass C (MBC) and N (MBN), microbial activity (MR), and activities of four soil extracellular enzymes, including β-glucosidase (BG), N-acetyl-β-glucosaminidase (NAG), phenol oxidase (PO), and peroxidase (PER), as well as greater total bacterial biomass and relative abundance of the gram-negative bacterial (G-) community, in the top soil of plots with leaf litter present than in those without litter or with only fine woody litter. No apparent additive or interactive effects of N addition were observed in this study. The occurrence of more labile leaf litter stimulated the G- community, which may facilitate microbial community growth and soil C stabilization, as inferred from findings in the literature. A continued treatment with contrasting types of litter inputs is likely to result in divergence in soil microbial community structure and function. PMID:26762490
A mixed model reduction method for preserving selected physical information
NASA Astrophysics Data System (ADS)
Zhang, Jing; Zheng, Gangtie
2017-03-01
A new model reduction method in the frequency domain is presented. By combining model reduction techniques from both the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of slave degrees of freedom is taken as a modification to the model in the form of the effective modal mass of virtually constrained modes. The reduced model preserves the physical information related to the selected physical coordinates, such as physical parameters and the physical space positions of the corresponding structure components. For cases of non-classical damping, the method is extended to model reduction in the state space, while still containing only the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.
Extension of the stochastic mixing model to cumulonimbus clouds
Raymond, D.J.; Blyth, A.M.
1992-11-01
The stochastic mixing model of cumulus clouds is extended to the case in which ice and precipitation form. A simple cloud microphysical model is adopted in which ice crystals and aggregates are carried along with the updraft, whereas raindrops, graupel, and hail are assumed to immediately fall out. The model is then applied to the 2 August 1984 case study of convection over the Magdalena Mountains of central New Mexico, with excellent results. The formation of ice and precipitation can explain the transition of this system from a cumulus congestus cloud to a thunderstorm. 28 refs.
Bowers, J.S.; Anson, J.R.; Painter, S.M.
1995-12-31
Stabilization is a best demonstrated available technology, or BDAT. This technology traps toxic contaminants in a matrix so that they do not leach into the environment. The stabilization process routinely uses pozzolanic materials. Portland cement, fly ash-lime mixes, gypsum cements, and clays are some of the most common materials. In many instances, materials that can pass the Toxicity Characteristic Leaching Procedure (TCLP, the federal leach test) or the Soluble Threshold Leachate Concentration (STLC, the California leach test) must have high concentrations of lime or other caustic material because of the low pH of the leaching media. Both leaching media, California's and EPA's, have a pH of 5.0. California uses citric acid and sodium citrate while EPA uses acetic acid and sodium acetate. The concentration in the leachate is approximately ten times higher for the STLC procedure than the TCLP. These media can form ligands that provide excellent metal leaching. Because of the aggressive nature of the leaching medium, stabilized wastes in many cases will not pass the leaching tests. At the Lawrence Livermore National Laboratory (LLNL), additives such as dithiocarbamates and thiocarbonates, which are pH-insensitive and provide resistance to ligand formation, are used in the waste stabilization process. Attapulgite, montmorillonite, and sepiolite clays are used because they are forgiving (recipe can be adjusted before the matrix hardens) when formulating a stabilization matrix, and they have a neutral pH. By using these clays and additives, LLNL's highly concentrated wastewater treatment sludges have passed the TCLP and STLC tests. The most frequently used stabilization process consists of a customized recipe involving waste sludge, clay and dithiocarbamate salt, mixed with a double planetary mixer into a pasty consistency. TCLP and STLC data on this waste matrix have shown that the process matrix meets land disposal requirements.
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
The traditional methods of mechanical gear driving simulation include the gear pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy of results; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Currently, most research is focused on the description of geometric models and the definition of boundary conditions; however, none of it solves these problems fundamentally. To improve simulation efficiency while ensuring high accuracy of results, a mixed model method, which uses gear tooth profiles in place of the solid gear to simulate gear movement, is presented under these circumstances. In the modeling process, first build the solid models of the mechanism in SolidWorks; then collect the point coordinates of the outline curves of the gear using the SolidWorks API and create fitted curves in Adams based on the point coordinates; next, adjust the position of those fitted curves according to the position of the contact area; finally, define the loading conditions, boundary conditions and simulation parameters. The method provides gear shape information via tooth profile curves, simulates the meshing process through tooth-profile curve-to-curve contact, and offers mass as well as inertia data via solid gear models. This simulation process combines the two models to complete the gear driving analysis. In order to verify the validity of the method presented, both theoretical derivation and numerical simulation on a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional method with solid-to-solid contact, while the simulation results are closer to theoretical calculations. Consequently, the mixed model method has high application value for the study of the dynamics of gear mechanisms.
A Non-Fickian Mixing Model for Stratified Turbulent Flows
2012-09-30
The goal has been to model upper ocean mixed layer instabilities, investigate their behavior, and try to develop sampling strategies using synthetic drifters. Drifter trajectories were integrated with a center particle at each grid point and 4 satellite ones, displaced by 500 m along the four cardinal directions. Open questions include what type of submesoscale instabilities exist, how they are connected to both larger scale and smaller scale motions, and to what extent they influence …
Continuum Modeling of Mixed Conductors: a Study of Ceria
NASA Astrophysics Data System (ADS)
Ciucci, Francesco
In this thesis we derive a new way to analyze the impedance response of mixed conducting materials for use in solid oxide fuel cells (SOFCs), with the main focus on anodic materials, in particular cerium oxides. First we analyze the impact of mixed conductivity coupled to electrocatalytic behavior in the linear time-independent domain for a thick ceria sample. We find that, for a promising fuel cell material, samarium-doped ceria, chemical reactions are the determining component of the polarization resistance. As a second step we extend the previous model to the time-dependent case, focusing on single-harmonic excitation, i.e., impedance spectroscopy conditions. We extend the model to the case where some input diffusivities are spatially nonuniform; for instance, we consider the case where diffusivities change significantly in the vicinity of the electrocatalytic region. As a third and final step we use the model to capture the two-dimensional behavior of mixed conducting thin films, where the electronic motion from one side of the sample to the other is impeded. Such conditions are similar to those encountered in fuel cells where an electrolyte conducting exclusively oxygen ions is placed between the anode and the cathode. The framework developed was also extended to study a popular cathodic material, lanthanum manganite. The model gives unprecedented insight into SOFC polarization resistance analysis of mixed conductors: it helps rigorously elucidate rate-determining steps and address the interplay of diffusion with other loss mechanisms. Electrochemical surface losses dominate for most experimental conditions of samarium-doped ceria, and they are shown to be strongly dependent on geometry.
Shell Model Depiction of Isospin Mixing in sd Shell
Lam, Yi Hua; Smirnova, Nadya A.; Caurier, Etienne
2011-11-30
We constructed a new empirical isospin-symmetry breaking (ISB) Hamiltonian in the sd (1s1/2, 0d5/2 and 0d3/2) shell-model space. In this contribution, we present its application to two important case studies: (i) β-delayed proton emission from ²²Al and (ii) the isospin-mixing correction to superallowed 0⁺ → 0⁺ β-decay ft-values.
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log₁₀ transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
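The zero-inflated Beta distribution at the heart of the model above is a mixture: a point mass π at zero plus a Beta(a, b) density on (0, 1). A minimal sketch of its log-density, evaluated on a small sample; the parameter values and sample data are illustrative assumptions, not quantities from the paper.

```python
import math

def zib_logpdf(y, pi, a, b):
    """Zero-inflated Beta log-density: point mass pi at 0, Beta(a, b) on (0, 1)."""
    if y == 0.0:
        return math.log(pi)
    # log of the Beta function B(a, b) via log-gamma
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (math.log(1.0 - pi)
            + (a - 1.0) * math.log(y)
            + (b - 1.0) * math.log(1.0 - y)
            - log_beta)

# Illustrative scaled -log10(Pc) values in [0, 1), zero-inflated at the floor
sample = [0.0, 0.0, 0.42, 0.63, 0.55]
ll = sum(zib_logpdf(y, pi=0.3, a=2.0, b=2.0) for y in sample)
print(round(ll, 3))
```

In the full model, π and the Beta mean would themselves be regression functions with event-level random effects, fitted in a Bayesian framework.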
Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models
Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.
2013-01-01
Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. Spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of both combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921
Effects of mixing in threshold models of social behavior
NASA Astrophysics Data System (ADS)
Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan
2013-07-01
We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors’ behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the “ground state.” Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
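The asynchronous update-or-mix dynamics described above can be sketched under simplified assumptions: a square lattice with periodic boundaries, uniform random thresholds, random initial states, and a fixed probability that a step is a mixing (swap) event rather than a state update. Lattice size, step count, and the mixing rate below are illustrative choices.

```python
import random

random.seed(42)
N = 30  # 30 x 30 square lattice with periodic boundaries
thresholds = [[random.random() for _ in range(N)] for _ in range(N)]
state = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

def frac_active_neighbors(i, j):
    """Fraction of the four von Neumann neighbors currently in state 1."""
    nbrs = [((i - 1) % N, j), ((i + 1) % N, j), (i, (j - 1) % N), (i, (j + 1) % N)]
    return sum(state[a][b] for a, b in nbrs) / 4.0

mix_rate = 0.1  # probability that a step is a mixing (exchange) event
for _ in range(100000):  # asynchronous updates: pick a random individual
    i, j = random.randrange(N), random.randrange(N)
    if random.random() < mix_rate:
        # mixing: exchange position (state and threshold) with a random individual
        k, l = random.randrange(N), random.randrange(N)
        state[i][j], state[k][l] = state[k][l], state[i][j]
        thresholds[i][j], thresholds[k][l] = thresholds[k][l], thresholds[i][j]
    else:
        # update: adopt iff enough neighbors are active (Granovetter rule)
        state[i][j] = 1 if frac_active_neighbors(i, j) >= thresholds[i][j] else 0

active = sum(map(sum, state)) / float(N * N)
print(round(active, 2))
```

Sweeping `mix_rate` toward large values approximates the fast-mixing (mean-field) limit analyzed in the paper; small values expose the finite-neighborhood and finite-mixing-rate effects.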
Modeling and diagnosing interface mix in layered ICF implosions
NASA Astrophysics Data System (ADS)
Weber, C. R.; Berzak Hopkins, L. F.; Clark, D. S.; Haan, S. W.; Ho, D. D.; Meezan, N. B.; Milovich, J. L.; Robey, H. F.; Smalyuk, V. A.; Thomas, C. A.
2015-11-01
Mixing at the fuel-ablator interface of an inertial confinement fusion (ICF) implosion can arise from an unfavorable in-flight Atwood number between the cryogenic DT fuel and the ablator. High-Z dopant is typically added to the ablator to control the Atwood number, but recent high-density carbon (HDC) capsules have been shot at the National Ignition Facility (NIF) without this added dopant. Highly resolved post-shot modeling of these implosions shows that there was significant mixing of ablator material into the dense DT fuel. This mix lowers the fuel density and results in less overall compression, helping to explain the measured ratio of down scattered-to-primary neutrons. Future experimental designs will seek to improve this issue through adding dopant and changing the x-ray spectra with a different hohlraum wall material. To test these changes, we are designing an experimental platform to look at the growth of this mixing layer. This technique uses side-on radiography to measure the spatial extent of an embedded high-Z tracer layer near the interface. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
Study on system dynamics of evolutionary mix-game models
NASA Astrophysics Data System (ADS)
Gou, Chengling; Guo, Xiaoqian; Chen, Fang
2008-11-01
The mix-game model is derived from an agent-based minority game (MG) model and is used to simulate real financial markets. Unlike in the MG, there are two groups of agents in the mix-game: Group 1 plays a majority game and Group 2 plays a minority game. These two groups of agents have different bounded abilities to deal with historical information and to track their own performance. In this paper, we modify the mix-game model by giving agents the ability to evolve: if the winning rate of an agent is smaller than a threshold, it copies the best strategies another agent has, and agents repeat such evolution at certain time intervals. Through simulations this paper finds: (1) the average winning rates of agents in Group 1 and the mean volatilities increase with increases in the thresholds of Group 1; (2) the average winning rates of both groups decrease but the mean volatilities of the system increase with increases in the thresholds of Group 2; (3) the thresholds of Group 2 have a greater impact on system dynamics than the thresholds of Group 1; (4) the characteristics of system dynamics under different time intervals of strategy change are qualitatively similar to each other, but differ quantitatively; (5) as the time interval of strategy change increases from 1 to 20, the system behaves more and more stably, and the performance of agents in both groups also improves.
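A minimal sketch of the evolutionary mix-game dynamics described above, with the full history-based strategy tables replaced by a single probabilistic strategy per agent; group sizes, thresholds, round count, and the evolution interval are illustrative assumptions, not the paper's settings.

```python
import random

random.seed(7)
N1, N2 = 72, 72              # Group 1: majority game; Group 2: minority game
THRESH1, THRESH2 = 0.4, 0.4  # evolution thresholds for each group
EVOLVE_EVERY = 50            # time interval between strategy-copying rounds

class Agent:
    """Strategy reduced to one probability p of choosing +1 (stand-in for strategy tables)."""
    def __init__(self):
        self.p = random.random()
        self.wins = 0
        self.games = 0
    def rate(self):
        return self.wins / self.games if self.games else 0.0

g1 = [Agent() for _ in range(N1)]
g2 = [Agent() for _ in range(N2)]
attendance = []

for t in range(1, 2001):
    acts = {a: (1 if random.random() < a.p else -1) for a in g1 + g2}
    A = sum(acts.values())   # aggregate attendance (drives the "price")
    for a in g1:             # majority players win by siding with the crowd
        a.games += 1
        a.wins += (acts[a] * A > 0)
    for a in g2:             # minority players win by opposing it
        a.games += 1
        a.wins += (acts[a] * A < 0)
    attendance.append(A)
    if t % EVOLVE_EVERY == 0:  # evolution: low performers copy the best agent
        for grp, th in ((g1, THRESH1), (g2, THRESH2)):
            best = max(grp, key=lambda a: a.rate())
            for a in grp:
                if a.rate() < th:
                    a.p = best.p

mean_rate1 = sum(a.rate() for a in g1) / N1
mean_rate2 = sum(a.rate() for a in g2) / N2
vol = sum(x * x for x in attendance) / len(attendance)  # mean volatility <A^2>
print(round(mean_rate1, 2), round(mean_rate2, 2), round(vol, 1))
```

Varying `THRESH1`, `THRESH2`, and `EVOLVE_EVERY` reproduces the kind of parameter sweeps summarized in findings (1)-(5), albeit in this heavily simplified setting.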
Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials
NASA Technical Reports Server (NTRS)
Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar
2015-01-01
The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The targets for the processing are multiple and at different spatial scales, and the physical phenomena associated occur in multiphysics and multiscale. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of excessive computing time needed, a parallel computing approach was also tested. In addition
A Mixed Exponential Time Series Model. NMEARMA(p,q).
1980-03-01
A Mixed Exponential Time Series Model, NMEARMA(p,q), by A. J. Lawrance (University of Birmingham, Birmingham, England) and P. A. W. Lewis (Naval Postgraduate School, Monterey, California, USA). Naval Postgraduate School report AD-A085 316; reviewed by Michael G. Sovereign, Chairman.
Bowers, J.S.; Anson, J.R.; Painter, S.M.; Maitino, R.E.
1995-03-01
Stabilization traps toxic contaminants (usually both chemically and physically) in a matrix so that they do not leach into the environment. Typical contaminants are metals (mostly transition metals) that exhibit the characteristic of toxicity. The stabilization process routinely uses pozzolanic materials. Portland cement, fly ash-lime mixes, gypsum cements, and clays are some of the most common materials. In many instances, materials that can pass the Toxicity Characteristic Leaching Procedure (TCLP, the federal leach test) or the Soluble Threshold Leachate Concentration (STLC, the California leach test) must have high concentrations of lime or other caustic material because of the low pH of the leaching media. Both leaching media, California's and EPA's, have a pH of 5.0. California uses citric acid and sodium citrate while EPA uses acetic acid and sodium acetate. These media can form ligands that provide excellent metal leaching. Because of the aggressive nature of the leaching medium, stabilized wastes in many cases will not pass the leaching tests. At the Lawrence Livermore National Laboratory, additives such as dithiocarbamates and thiocarbonates, which are pH-insensitive and provide resistance to ligand formation, are used in the waste stabilization process. Attapulgite, montmorillonite, and sepiolite clays are used because they are forgiving (recipe can be adjusted before the matrix hardens). The most frequently used stabilization process consists of a customized recipe involving waste sludge, clay and dithiocarbamate salt, mixed with a double planetary mixer into a pasty consistency. TCLP and STLC data on this waste matrix have shown that the process matrix meets land disposal requirements.
Lee, Yi Feng; Graalfs, Heiner; Frech, Christian
2016-09-16
An extended model is developed to describe protein retention in mixed-mode chromatography based on thermodynamic principles. Special features are the incorporation of the pH dependence of the ionic interaction on a mixed-mode resin and the addition of a water term into the model, which enables one to describe the total number of water molecules released at the hydrophobic interfaces upon protein-ligand binding. Examples are presented on how to determine the model parameters using isocratic elution chromatography. Four mixed-mode anion-exchanger prototype resins with different surface chemistries and ligand densities were tested using isocratic elution of two monoclonal antibodies at different pH values (7-10), encompassing a wide range of NaCl concentrations (0-5 M). U-shaped mixed-mode retention curves were observed for all four resins. By taking into account the deprotonation and protonation of the weak cationic functional groups in these mixed-mode anion-exchanger prototype resins, conditions which favor protein-ligand binding via mixed-mode strong cationic ligands, as well as conditions which favor protein-ligand binding via both mixed-mode strong cationic ligands and non-hydrophobic weak cationic ligands, were identified. The changes in the retention curves with pH, salt, protein, and ligand can be described very well by the extended model using meaningful thermodynamic parameters such as the Gibbs energy, the number of ionic and hydrophobic interactions, the total number of released water molecules, and the modulator interaction constant. Furthermore, the fitted model parameters based on isocratic elution data can also be used to predict protein retention in dual salt-pH gradient elution chromatography.
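The U-shaped retention behavior described above can be illustrated with a toy retention model combining a stoichiometric-displacement electrostatic term (retention falls as salt rises) and a hydrophobic/water-release term (retention rises as salt rises); all parameter values are illustrative assumptions, not the paper's fitted thermodynamic parameters.

```python
import math

def log_k(c_salt, n_ionic=3.0, K_e=2.0, n_hydro=4.0, K_h=0.25):
    """Toy mixed-mode retention: ln k of the sum of an electrostatic and a
    hydrophobic contribution. Parameter values are purely illustrative."""
    electrostatic = math.log(K_e) - n_ionic * math.log(c_salt)  # falls with salt
    hydrophobic = math.log(K_h) + n_hydro * c_salt              # rises with salt
    return math.log(math.exp(electrostatic) + math.exp(hydrophobic))

salts = [0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0]  # mol/L NaCl grid
ks = [log_k(c) for c in salts]
i_min = ks.index(min(ks))
print(round(salts[i_min], 2))  # salt concentration at the retention minimum
```

The interior minimum of `log_k` over the salt grid is the signature U-shape: ionic binding dominates at low salt, hydrophobic binding at high salt.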
Photonic states mixing beyond the plasmon hybridization model
NASA Astrophysics Data System (ADS)
Suryadharma, Radius N. S.; Iskandar, Alexander A.; Tjia, May-On
2016-07-01
A study is performed on a photonic-state mixing pattern in an insulator-metal-insulator cylindrical silver nanoshell and its rich variations induced by changes in the geometry and dielectric media of the system, representing the combined influences of plasmon coupling strength and cavity effects. This study is performed in terms of the photonic local density of states (LDOS) calculated using the Green tensor method, in order to elucidate those combined effects. The energy profiles of the LDOS inside the dielectric core are shown to exhibit a consistently growing number of redshifted photonic states due to enhanced plasmon-coupling-induced state mixing arising from decreased shell thickness, an increased cavity size effect, and a larger symmetry breaking effect induced by an increased permittivity difference between the core and the background media. Further, an increase in cavity size leads to additional peaks that spread out toward the lower energy regime. A systematic analysis of those variations for a silver nanoshell with a fixed inner radius in a vacuum background reveals a certain pattern in the growing number of redshifted states, with an analytic expression for the corresponding energy downshifts, signifying a photonic state mixing scheme beyond the commonly adopted plasmon hybridization scheme. Finally, a remarkable correlation is demonstrated between the LDOS energy profiles outside the shell and the corresponding scattering efficiencies.
The dependence of global ocean modeling on background diapycnal mixing.
Deng, Zengan
2014-01-01
The Argo-derived background diapycnal mixing (BDM) proposed by Deng et al. (in press) is introduced into and applied in the Hybrid Coordinate Ocean Model (HYCOM). Sensitivity experiments are carried out using HYCOM to detect the responses of ocean surface temperature and the Meridional Overturning Circulation (MOC) to the BDM in a global context. Preliminary results show that utilizing a constant BDM with the same order of magnitude as the realistic one may cause significant deviations in temperature and the MOC. The dependence of surface temperature and the MOC on the BDM is found to be prominent. Surface temperature decreases as the BDM increases, because diapycnal mixing promotes the return of deep cold water to the upper ocean. Compared to the control run, more striking MOC changes can be caused by larger variations in the BDM.
Nonlinear spectral mixing theory to model multispectral signatures
Borel, C.C.
1996-02-01
Nonlinear spectral mixing occurs due to multiple reflections and transmissions between discrete surfaces, e.g. leaves or facets of a rough surface. The radiosity method is an energy-conserving computational method used in thermal engineering, and it models nonlinear spectral mixing realistically and accurately. In contrast to the radiative transfer method, the radiosity method takes into account the discreteness of the scattering surfaces (e.g. exact location, orientation and shape), such as leaves, and includes mutual shading between them. An analytic radiosity-based scattering model for vegetation was developed and used to compute vegetation indices for various configurations. The leaf reflectance and transmittance were modeled using the PROSPECT model for various amounts of water and chlorophyll and variable leaf structure. The soil background was modeled using SOILSPEC with a linear mixture of reflectances of sand, clay and peat. A neural network and a geometry-based retrieval scheme were used to retrieve leaf area index and chlorophyll concentration for dense canopies. Only simulated canopy reflectances in the six visible through shortwave-IR Landsat TM channels were used. The authors used an empirical function to compute the signal-to-noise ratio of a retrieved quantity.
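The nonlinear radiosity approach is best appreciated against the linear mixing baseline it generalizes. A minimal sketch of linear spectral mixing follows; the end-member spectra, band count, and areal fractions are invented illustration values, not data from the study:

```python
import numpy as np

# Linear spectral mixing: the observed spectrum is an area-weighted sum of
# end-member spectra. The radiosity model discussed above adds the nonlinear
# terms from multiple reflections that this baseline ignores.
endmembers = np.array([
    [0.05, 0.08, 0.04, 0.45, 0.25, 0.15],  # vegetation-like spectrum (6 TM-like bands)
    [0.15, 0.20, 0.25, 0.30, 0.35, 0.38],  # soil-like spectrum
])
fractions = np.array([0.7, 0.3])           # areal fractions, sum to 1

mixed = fractions @ endmembers             # mixed reflectance per band
print(mixed)
```

In the linear model the mixed spectrum never exceeds the convex hull of the end-members; multiple scattering between leaves breaks exactly this property.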
Intercomparison of garnet barometers and implications for garnet mixing models
Anovitz, L.M.; Essene, E.J.
1985-01-01
Several well-calibrated barometers are available in the system Ca-Fe-Ti-Al-Si-O, including: Alm+3Ru=3Ilm+Sil+2Qtz (GRAIL); 2Alm+Gr+6Ru=6Ilm+3An+3Qtz (GRIPS); 2Alm+Gr=3Fa+3An (FAGS); 3An=Gr+2Ky+Qtz (GASP); 2Fs=Fa+Qtz (FFQ); and Gr+Qtz=An+2Wo (WAGS). GRIPS, GRAIL and GASP form a linearly dependent set such that any two should yield the third given an a/X model for the grossular/almandine solid solution. Application to barometry of garnet granulite assemblages from the Grenville in Ontario yields average pressures 0.1 kb lower for GRIPS and 0.4 kb higher for FAGS using our mixing model. Results from Parry Island, Ontario, yield 8.7 kb from GRAIL as opposed to 9.1 kb using Ganguly and Saxena's model. For GASP, Parry Island assemblages yield 8.4 kb with the authors' calibration. Ganguly and Saxena's model gives 5.4 kb using Gasparik's reversals and 8.1 kb using the position of GASP calculated from GRIPS and GRAIL. These corrections allow GRIPS, GRAIL, GASP and FAGS to yield consistent pressures to +/- 0.5 kb in regional metamorphic terranes. Application of their mixing model outside of the fitted range 700-1000 K is not encouraged, as extrapolation may yield erroneous results.
NASA Astrophysics Data System (ADS)
Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.
2015-05-01
The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow for uncertainty in mixing end-members and provide a methodology for systems with multicomponent mixing. This study presents an open-source multiple-isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed the expected seasonal melt-evolution trends, and rigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we extend it to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples, and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
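A greatly simplified Monte Carlo version of a two-end-member mixing inversion can illustrate how end-member uncertainty propagates into the fraction estimate. The δ18O means, spreads, and observed value below are invented, and the actual BMC model is a fuller Bayesian treatment with priors over all end-members:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two end-members with hypothetical mean delta-18O (permil) and 1-sigma spread.
snow = (-22.0, 0.8)
ice = (-18.0, 0.6)
observed = -19.5            # mixed meltwater sample (also invented)

# Draw end-member realizations and solve observed = f*snow + (1 - f)*ice for f.
s = rng.normal(snow[0], snow[1], 100_000)
i = rng.normal(ice[0], ice[1], 100_000)
f = (observed - i) / (s - i)
f = f[(f >= 0.0) & (f <= 1.0)]   # keep physically meaningful fractions

# The spread of f quantifies the error induced by end-member variability.
print(f.mean(), f.std())
```

With fixed end-member values the answer would be a single number; sampling the end-members turns it into a distribution, which is the core advantage claimed above.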
ERIC Educational Resources Information Center
Mota, A. R.; Lopes dos Santos, J. M. B.
2014-01-01
Students' misconceptions concerning colour phenomena and the apparent complexity of the underlying concepts--due to the different domains of knowledge involved--make its teaching very difficult. We have developed and tested a teaching device, the addition table of colours (ATC), that encompasses additive and subtractive mixtures in a single…
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.
Application of a mixing-ratios based formulation to model mixing-driven dissolution experiments
NASA Astrophysics Data System (ADS)
Guadagnini, Alberto; Sanchez-Vila, Xavier; Saaltink, Maarten W.; Bussini, Michele; Berkowitz, Brian
2009-05-01
We address the question of how one can combine theoretical and numerical modeling approaches with limited measurements from laboratory flow cell experiments to realistically quantify salient features of complex mixing-driven multicomponent reactive transport problems in porous media. Flow cells are commonly used to examine processes affecting reactive transport through porous media, under controlled conditions. An advantage of flow cells is their suitability for relatively fast and reliable experiments, although measuring spatial distributions of a state variable within the cell is often difficult. In general, fluid is sampled only at the flow cell outlet, and concentration measurements are usually interpreted in terms of integrated reaction rates. In reactive transport problems, however, the spatial distribution of the reaction rates within the cell might be more important than the bulk integrated value. Recent advances in theoretical and numerical modeling of complex reactive transport problems [De Simoni M, Carrera J, Sanchez-Vila X, Guadagnini A. A procedure for the solution of multicomponent reactive transport problems. Water Resour Res 2005;41:W11410. doi: 10.1029/2005WR004056, De Simoni M, Sanchez-Vila X, Carrera J, Saaltink MW. A mixing ratios-based formulation for multicomponent reactive transport. Water Resour Res 2007;43:W07419. doi: 10.1029/2006WR005256] result in a methodology conducive to a simple exact expression for the space-time distribution of reaction rates in the presence of homogeneous or heterogeneous reactions in chemical equilibrium. The key points of the methodology are that a general reactive transport problem, involving a relatively high number of chemical species, can be formulated in terms of a set of decoupled partial differential equations, and the amount of reactants evolving into products depends on the rate at which solutions mix. The main objective of the current study is to show how this methodology can be used in conjunction
Additive Manufacturing of Medical Models--Applications in Rhinology.
Raos, Pero; Klapan, Ivica; Galeta, Tomislav
2015-09-01
In this paper we introduce guidelines and suggestions for the use of 3D image-processing software in head pathology diagnostics, and procedures for obtaining physical medical models by additive manufacturing/rapid prototyping techniques, bearing in mind improved surgical performance, maximum safety and faster postoperative recovery of patients. This approach was verified in two case reports. In the treatment we used intelligent classifier schemes for abnormal patterns using a computer-based system for 3D virtual and endoscopic assistance in rhinology, with appropriate visualization of anatomy and pathology within the nose, paranasal sinuses, and skull base area.
Modeling of Transient Flow Mixing of Streams Injected into a Mixing Chamber
NASA Technical Reports Server (NTRS)
Voytovych, Dmytro M.; Merkle, Charles L.; Lucht, Robert P.; Hulka, James R.; Jones, Gregg W.
2006-01-01
Ignition is recognized as one of the critical drivers of the reliability of multiple-start rocket engines. Residual combustion products from previous engine operation can condense on valves and related structures, thereby creating difficulties for subsequent starting procedures. Alternative ignition methods that require fewer valves can mitigate the valve reliability problem, but require improved understanding of the spatial and temporal propellant distribution in the pre-ignition chamber. Current design tools, based mainly on one-dimensional analysis and empirical models, cannot predict local details of the injection and ignition processes. The goal of this work is to evaluate the capability of modern computational fluid dynamics (CFD) tools to predict transient flow mixing in the pre-ignition environment by comparing the results with experimental data. This study is part of a program to improve analytical methods and methodologies for analyzing the reliability and durability of combustion devices. In the present paper we describe a series of detailed computational simulations of the unsteady mixing events as the cold propellants are first introduced into the chamber, as a first step in providing this necessary environmental description. The present computational modeling complements parallel experimental simulations and includes comparisons with experimental results from that effort. A large number of rocket engine ignition studies have been previously reported. Here we limit our discussion to the work discussed in Refs. 2, 3 and 4, which is both similar to and different from the present approach. The similarities arise from the fact that both efforts involve detailed experimental/computational simulations of the ignition problem. The differences arise from the underlying philosophy of the two endeavors. The approach in Refs. 2 to 4 is a classical ignition study in which the focus is on the response of a propellant mixture to an ignition source, with
Mixing behavior of a model cellulosic biomass slurry during settling and resuspension
Crawford, Nathan C.; Sprague, Michael A.; Stickel, Jonathan J.
2016-01-29
Thorough mixing during biochemical deconstruction of biomass is crucial for achieving maximum process yields and economic success. However, due to the complex morphology and surface chemistry of biomass particles, biomass mixing is challenging and currently not well understood. This study investigates the bulk rheology of negatively buoyant, non-Brownian α-cellulose particles during settling and resuspension. The torque signal of a vane mixer across two distinct experimental setups (vane-in-cup and vane-in-beaker) was used to understand how mixing conditions affect the distribution of biomass particles. During experimentation, a bifurcated torque response as a function of vane speed was observed, indicating that the slurry transitions from a “settling-dominant” regime to a “suspension-dominant” regime. The torque responses of well-characterized fluids (i.e., DI water) were then used to empirically identify when sufficient mixing turbulence was established in each experimental setup. The predicted critical mixing speeds were in agreement with measured values, suggesting that secondary flows are required to keep the cellulose particles fully suspended. In addition, a simple scaling relationship was developed to model the entire torque signal of the slurry throughout settling and resuspension. Qualitative and semi-quantitative agreement between the model and experimental results was observed.
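Whether vane agitation is turbulent is conventionally judged from the impeller Reynolds number Re = ρND²/μ. The sketch below uses illustrative geometry and water properties, not the paper's measured critical speeds or its empirical torque criterion:

```python
# Impeller Reynolds number Re = rho * N * D^2 / mu, a standard stirred-tank
# correlation (not necessarily the exact turbulence criterion used above).
def impeller_reynolds(rho, n_rps, d, mu):
    """rho [kg/m^3], n_rps [rev/s], d = impeller diameter [m], mu [Pa s]."""
    return rho * n_rps * d ** 2 / mu

# Illustrative case: DI water with a 25 mm vane spinning at 10 rev/s.
re = impeller_reynolds(rho=998.0, n_rps=10.0, d=0.025, mu=1.0e-3)
print(re)  # fully turbulent stirred-tank mixing is often taken as Re > ~1e4
```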
Multiscale Modeling of Powder Bed-Based Additive Manufacturing
NASA Astrophysics Data System (ADS)
Markl, Matthias; Körner, Carolin
2016-07-01
Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas, according to sliced 3D model data. This technique allows for manufacturing of highly complex geometries hardly machinable with conventional technologies. However, the underlying physical phenomena are sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales to describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the process understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.
MIXING MODELING ANALYSIS FOR SRS SALT WASTE DISPOSITION
Lee, S.
2011-01-18
Nuclear waste in Savannah River Site (SRS) waste tanks consists of three different waste forms: the lighter salt solutions referred to as supernate, the precipitated salts known as salt cake, and heavier fine solids known as sludge. The sludge settles on the tank floor. About half of the residual waste radioactivity is contained in the sludge, which makes up only about 8 percent of the total waste volume. The mixing study evaluated here for the Salt Disposition Integration (SDI) project focuses on supernate preparations in waste tanks prior to transfer to the Salt Waste Processing Facility (SWPF) feed tank. The methods to mix and blend the contents of the SRS blend tanks were evaluated to ensure that the contents are properly blended before they are transferred from a blend tank such as Tank 50H to the SWPF feed tank. The work has two principal objectives, investigating two different pumps. One objective is to identify a suitable pumping arrangement that will adequately blend/mix two miscible liquids to obtain a uniform composition in the tank with a minimum level of sludge solid particulate in suspension. The other is to estimate the elevation in the tank at which the transfer pump inlet should be located so that the solid concentration of the entrained fluid remains below the acceptance criterion (0.09 wt% or 1200 mg/liter) during transfer operation to the SWPF. Tank 50H is a waste tank that will be used to prepare batches of salt feed for the SWPF. The salt feed must be a homogeneous solution satisfying the acceptance criterion for solids entrainment during transfer operation. The work described here covers two modeling areas: the mixing modeling analysis during the miscible liquid blending operation, and the flow pattern analysis during transfer operation of the blended
Mixing and shocks in geophysical shallow water models
NASA Astrophysics Data System (ADS)
Jacobson, Tivon
In the first section, a reduced two-layer shallow water model for fluid mixing is described. The model is a nonlinear hyperbolic quasilinear system of partial differential equations, derived by taking the limit as the upper layer becomes infinitely deep. It resembles the shallow water equations, but with an active buoyancy. Fluid entrainment is supposed to occur from the upper layer to the lower. Several physically motivated closures are proposed, including a robust closure based on maximizing a mixing entropy (also defined and derived) at shocks. The structure of shock solutions is examined. The Riemann problem is solved by setting the shock speed to maximize the production of mixing entropy. Shock-resolving finite-volume numerical models are presented with and without topographic forcing. Explicit shock tracking is required for strong shocks. The constraint that turbulent energy production be positive is considered. The model has geophysical applications in studying the dynamics of dense sill overflows in the ocean. The second section discusses stationary shocks of the shallow water equations in a reentrant rotating channel with wind stress and topography. Asymptotic predictions for the shock location, strength, and associated energy dissipation are developed by taking the topographic perturbation to be small. The scaling arguments for the asymptotics are developed by demanding integrated energy and momentum balance, with the result that the free surface perturbation is of the order of the square root of the topographic perturbation. Shock formation requires that linear waves be nondispersive, which sets a solvability condition on the mean flow and which leads to a class of generalized Kelvin waves. Two-dimensional shock-resolving numerical simulations validate the asymptotic expressions and demonstrate the presence of stationary separated flow shocks in some cases. Geophysical applications are considered. Overview sections on shock-resolving numerical methods
Subgrid models for mass and thermal diffusion in turbulent mixing
Sharp, David H; Lim, Hyunkyung; Li, Xiao-Lin; Glimm, James G
2008-01-01
We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion to define large eddy simulations (LES) that replicate the micro features observed in direct numerical simulation (DNS). The Schmidt numbers and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re = 300, 6000, and 600K. Methodologically, the results are also new. In common with the shock-capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of front tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without resolving the
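The Schmidt and Prandtl numbers that parameterize these subgrid models are simple property ratios. A sketch with typical textbook property values (assumed here, not taken from the simulations):

```python
# Schmidt number Sc = nu / D (momentum vs. mass diffusion) and Prandtl
# number Pr = mu * c_p / k (momentum vs. thermal diffusion), the parameters
# varied by three orders of magnitude in the study above.
def schmidt(nu, d_mass):
    return nu / d_mass

def prandtl(mu, c_p, k):
    return mu * c_p / k

# Liquid water at ~20 C: nu ~ 1e-6 m^2/s, solute diffusivity ~ 1e-9 m^2/s.
print(schmidt(1.0e-6, 1.0e-9))         # O(1000), a typical liquid value
# Air at ~20 C: mu ~ 1.8e-5 Pa s, c_p ~ 1005 J/(kg K), k ~ 0.026 W/(m K).
print(prandtl(1.8e-5, 1005.0, 0.026))  # ~0.7, a typical gas value
```

The three-orders-of-magnitude spread between liquid and gas values is exactly why the study sweeps these numbers rather than fixing them.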
Generalized linear mixed model for segregation distortion analysis
2011-01-01
Background: Segregation distortion is a phenomenon in which the observed genotypic frequencies at a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. Results: We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle a number of loci several times larger than the sample size. We used a dataset from an F2 mouse family derived from the cross of two inbred lines to test the model, and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Conclusions: The method can be used not only to detect segregation distortion loci, but also to map quantitative trait loci of disease traits using case-only data in humans and selected populations in plants and animals. PMID:22078575
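The simple marker-by-marker precursor of this joint GLMM mapping is a chi-square test of observed F2 genotype counts against the Mendelian 1:2:1 expectation. The counts below are invented illustration data:

```python
# Chi-square statistic of observed F2 genotype counts against the
# Mendelian 1:2:1 expectation; a distorted locus gives a large statistic
# (compare with the chi-square critical value at df = 2, 5.99 at alpha = 0.05).
def chisq_1_2_1(n_aa, n_ab, n_bb):
    total = n_aa + n_ab + n_bb
    expected = [total / 4.0, total / 2.0, total / 4.0]
    observed = [n_aa, n_ab, n_bb]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(chisq_1_2_1(30, 100, 70))   # distorted toward one homozygote class
print(chisq_1_2_1(50, 100, 50))   # a perfect 1:2:1 ratio gives 0
```

Such single-locus tests ignore linkage between markers, which is the gap the joint GLMM approach above is designed to close.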
Additive Functions in Boolean Models of Gene Regulatory Network Modules
Darabos, Christian; Di Cunto, Ferdinando; Tomassini, Marco; Moore, Jason H.; Provero, Paolo; Giacobini, Mario
2011-01-01
Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual interactions of each gene are believed to play a key role in the stability of the structure. With advances in biology, some effort was deployed to develop update functions in Boolean models that include recent knowledge. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we substitute the original random update functions with a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions, to validate the proposed update function. Results of this validation hint at increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos into the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, that also allows discrimination between regimes in a quantitative way. Simulations on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful to guide experimental research. The update function confers additional realism to the model, while reducing the complexity
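A hedged sketch of a threshold-based Boolean update of the kind described, in which a gene switches on when its active promoting inputs outnumber its active repressing inputs. The tie-breaking rule (off on equality) and the toy wiring are assumptions for illustration, not the paper's yeast or stem-cell networks:

```python
# Threshold-based Boolean update: a gene is on at the next step when its
# active promoting inputs strictly outnumber its active repressing inputs.
def threshold_update(state, promoters, repressors):
    """state: dict gene -> 0/1; promoters/repressors: dict gene -> input genes."""
    new_state = {}
    for gene in state:
        act = sum(state[g] for g in promoters.get(gene, []))
        rep = sum(state[g] for g in repressors.get(gene, []))
        new_state[gene] = 1 if act > rep else 0
    return new_state

state = {"a": 1, "b": 0, "c": 1}
promoters = {"b": ["a", "c"], "c": ["a"]}
repressors = {"b": ["a"], "a": ["c"]}
print(threshold_update(state, promoters, repressors))  # {'a': 0, 'b': 1, 'c': 1}
```

Iterating this map from every initial state reveals the attractor structure whose order/chaos transition the Derrida plots characterize.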
WATEQ3 geochemical model: thermodynamic data for several additional solids
Krupka, K.M.; Jenne, E.A.
1982-09-01
Geochemical models such as WATEQ3 can be used to model the concentrations of water-soluble pollutants that may result from the disposal of nuclear waste and retorted oil shale. However, for a model to competently deal with these water-soluble pollutants, an adequate thermodynamic data base must be provided that includes elements identified as important in modeling these pollutants. To this end, several minerals and related solid phases were identified that were absent from the thermodynamic data base of WATEQ3. In this study, the thermodynamic data for the identified solids were compiled and selected from several published tabulations of thermodynamic data. For these solids, an accepted Gibbs free energy of formation, ΔG°f,298, was selected for each solid phase based on the recentness of the tabulated data and on considerations of internal consistency with respect to both the published tabulations and the existing data in WATEQ3. For those solids not included in these published tabulations, Gibbs free energies of formation were calculated from published solubility data (e.g., lepidocrocite), or were estimated (e.g., nontronite) using a free-energy summation method described by Mattigod and Sposito (1978). The accepted or estimated free energies were then combined with internally consistent, ancillary thermodynamic data to calculate equilibrium constants for the hydrolysis reactions of these minerals and related solid phases. Including these values in the WATEQ3 data base increased the competency of this geochemical model in applications associated with the disposal of nuclear waste and retorted oil shale. Additional minerals and related solid phases that need to be added to the solubility submodel will be identified as modeling applications continue in these two programs.
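The equilibrium constants mentioned follow from the accepted free energies of reaction via log10 K = -ΔG°r / (ln 10 · RT). A sketch with an illustrative ΔG value, not an actual WATEQ3 data-base entry:

```python
import math

# Equilibrium constant from a Gibbs free energy of reaction at 25 C:
#   log10 K = -dG / (ln(10) * R * T)
R = 8.314      # J/(mol K)
T = 298.15     # K

def log10_K(delta_g_j_per_mol):
    return -delta_g_j_per_mol / (math.log(10.0) * R * T)

print(log10_K(-20000.0))   # dG = -20 kJ/mol gives log K of about +3.5
```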
Scherer, Pia I; Raeder, Uta; Geist, Juergen; Zwirglmaier, Katrin
2017-02-01
Cyanobacteria, such as the toxin producer Microcystis aeruginosa, are predicted to be favored by global warming both directly, through elevated water temperatures, and indirectly, through factors such as prolonged stratification of waterbodies. M. aeruginosa is able to produce the hepatotoxin microcystin, which causes great concern in freshwater management worldwide. However, little is known about the expression of microcystin synthesis genes in response to climate change-related factors. In this study, a new RT-qPCR assay employing four reference genes (GAPDH, gltA, rpoC1, and rpoD) was developed to assess the expression of two target genes (the microcystin synthesis genes mcyB and mcyD). This assay was used to investigate changes in mcyB and mcyD expression in response to selected environmental factors associated with global warming. A 10°C rise in temperature significantly increased mcyB expression, but not mcyD expression. Neither mixing nor the addition of microcystin-LR (10 μg L⁻¹ or 60 μg L⁻¹) significantly altered mcyB and mcyD expression. The expression levels of mcyB and mcyD were correlated but not identical.
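Relative expression from RT-qPCR data of this kind is commonly quantified with the 2^-ΔΔCt (Livak) method. The sketch below normalizes against a single reference gene with invented Ct values, whereas the study's assay uses four reference genes, so this is a simplified illustration rather than the study's exact normalization:

```python
# Relative expression by the 2^(-ddCt) Livak method against one reference gene.
def rel_expression(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    d_ct_treat = ct_target_treat - ct_ref_treat   # target vs. reference, treated
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # target vs. reference, control
    return 2.0 ** -(d_ct_treat - d_ct_ctrl)

# e.g. a hypothetical mcyB measurement at elevated temperature vs. control
print(rel_expression(22.0, 18.0, 24.5, 18.5))  # 4.0, i.e. ~4-fold up-regulation
```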
NASA Astrophysics Data System (ADS)
Xu, Yonggang; Yuan, Liming; Zhang, Deyuan
2016-04-01
A silicone rubber composite filled with carbonyl iron particles and four different carbonaceous materials (carbon black, graphite, carbon fiber or multi-walled carbon nanotubes) was prepared using a two-roll mixer. The complex permittivity and permeability were measured using a vector network analyzer over the frequency range 2-18 GHz. A type-based mixing rule distinguishing dielectric and magnetic absorbents was then proposed to reveal the mechanism by which the permittivity and permeability are enhanced. The enhancement lies in the decreased percolation threshold and the change of the mixing-rule parameter as the carbonaceous materials were added. The reflection loss (RL) results showed that the added carbonaceous materials enhanced absorption in the lower frequency range, the RL decreasing by about 2 dB at 4-5 GHz for a thickness of 1 mm. All the added carbonaceous materials reinforced the shielding effectiveness (SE) of the composites; the maximum SE increase was about 3.23 dB at 0.5 mm and 4.65 dB at 1 mm. The added carbonaceous materials are thus effective additives for enhancing the absorption and shielding properties of the absorbers.
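The abstract does not give the form of the proposed type-based mixing rule; for orientation, the classic Maxwell-Garnett effective-medium rule for inclusions in a host matrix can be sketched as follows (permittivity values and volume fraction are illustrative, not from the paper):

```python
def maxwell_garnett(eps_m, eps_i, f):
    """Classic Maxwell-Garnett effective permittivity for inclusions of
    permittivity eps_i at volume fraction f in a host matrix eps_m."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Sanity limits: f = 0 recovers the matrix, f = 1 recovers the inclusion.
print(maxwell_garnett(2.0, 10.0, 0.0))   # 2.0
print(maxwell_garnett(2.0, 10.0, 1.0))   # 10.0
```

The same functional form applies to complex permittivity, which is how measured 2-18 GHz data would normally enter.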
Lin, Bin; He, Xibing; MacKerell, Alexander D.
2013-01-01
A comparative study on aqueous methanol solutions modeled by the CHARMM additive and Drude polarizable force fields was carried out by employing Kirkwood-Buff analysis. It was shown that both models reproduced the experimental Kirkwood-Buff integrals and excess coordination numbers adequately well over the entire concentration range. The Drude model showed significant improvement over the additive model in solution densities, partial molar volumes, excess molar volumes, concentration-dependent diffusion constants, and dielectric constants. However, the additive model performed somewhat better than the Drude model in reproducing the activity derivative, excess molar Gibbs energy and excess molar enthalpy of mixing. This is due to the additive model achieving a better balance among solute-solute, solute-solvent, and solvent-solvent interactions, indicating the potential for improvements in the Drude polarizable alcohol model. PMID:23947568
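The Kirkwood-Buff integrals compared here are defined from the pair radial distribution function as G_ij = 4π ∫ (g_ij(r) − 1) r² dr. A sketch with a synthetic g(r) (the RDF shape is invented purely for illustration):

```python
import numpy as np

r = np.linspace(0.01, 3.0, 3000)             # pair distance, nm
dr = r[1] - r[0]
g = 1 + np.exp(-((r - 0.35) / 0.05) ** 2)    # toy RDF: one solvation peak
g[r < 0.3] = 0.0                             # excluded-volume core

# Kirkwood-Buff integral G_ij = 4*pi * integral of (g(r) - 1) * r^2 dr
G = 4 * np.pi * np.sum((g - 1) * r**2) * dr  # nm^3
```

In practice g(r) comes from simulation trajectories, and truncation/convergence of the integral is the main technical difficulty.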
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean: it converges to the mean value μ, while the variance σ²c(t) decays approximately as t⁻¹. Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model that is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γn, which we model in a first step as a deterministic function. In a second step, we generalize γn as a stochastic variable, taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
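The sample-mean analogy can be checked numerically: the variance of a mean of n i.i.d. draws decays as 1/n, which is the behavior the model likens to the scalar variance decaying roughly as t⁻¹.

```python
import numpy as np

rng = np.random.default_rng(0)

# Variance of the sample mean of n standard-normal draws, estimated from
# 5000 independent replications; should track 1/n (law of large numbers).
ns = [10, 100, 1000]
var = [rng.normal(size=(5000, n)).mean(axis=1).var() for n in ns]
# var[k] is close to 1/ns[k], i.e. var[k] * ns[k] is close to 1
```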
Generalized linear mixed models for meta-analysis.
Platt, R W; Leroux, B G; Breslow, N
1999-03-30
We examine two strategies for meta-analysis of a series of 2 x 2 tables with the odds ratio modelled as a linear combination of study level covariates and random effects representing between-study variation. Penalized quasi-likelihood (PQL), an approximate inference technique for generalized linear mixed models, and a linear model fitted by weighted least squares to the observed log-odds ratios are used to estimate regression coefficients and dispersion parameters. Simulation results demonstrate that both methods perform adequate approximate inference under many conditions, but that neither method works well in the presence of highly sparse data. Under certain conditions with small cell frequencies the PQL method provides better inference.
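The second strategy, weighted least squares on observed log-odds ratios, reduces in the intercept-only case to the familiar inverse-variance pooled estimate. A sketch with hypothetical 2 x 2 tables and Woolf variance approximations:

```python
import numpy as np

# Hypothetical study-level 2x2 tables.
# Columns: events_trt, n_trt, events_ctl, n_ctl
tables = np.array([
    [12, 100, 20, 100],
    [ 8, 150, 15, 150],
    [30, 200, 25, 200],
], dtype=float)

a, n1, c, n2 = tables.T
b, d = n1 - a, n2 - c

log_or = np.log(a * d / (b * c))      # study-level log-odds ratios
var = 1/a + 1/b + 1/c + 1/d           # Woolf variance approximation

# Inverse-variance weighted least squares (fixed-effect pooled estimate)
w = 1.0 / var
pooled = np.sum(w * log_or) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
```

Adding study-level covariates turns this into the weighted regression of the abstract; the PQL route instead fits a logistic mixed model to the cell counts directly.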
NASA Technical Reports Server (NTRS)
Noor, A. K.; Peters, J. M.
1981-01-01
Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using the Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.
Estimation of propensity scores using generalized additive models.
Woo, Mi-Ja; Reiter, Jerome P; Karr, Alan F
2008-08-30
Propensity score matching is often used in observational studies to create treatment and control groups with similar distributions of observed covariates. Typically, propensity scores are estimated using logistic regressions that assume linearity between the logistic link and the predictors. We evaluate the use of generalized additive models (GAMs) for estimating propensity scores. We compare logistic regressions and GAMs in terms of balancing covariates using simulation studies with artificial and genuine data. We find that, when the distributions of covariates in the treatment and control groups overlap sufficiently, using GAMs can improve overall covariate balance, especially for higher-order moments of distributions. When the distributions in the two groups overlap insufficiently, GAM more clearly reveals this fact than logistic regression does. We also demonstrate via simulation that matching with GAMs can result in larger reductions in bias when estimating treatment effects than matching with logistic regression.
[Critique of the additive model of the randomized controlled trial].
Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine
2008-01-01
Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.
Ayala, Raul E.
1993-01-01
This invention relates to additives to mixed-metal oxides that act simultaneously as sorbents and catalysts in cleanup systems for hot coal gases. Such additives of this type, generally, act as a sorbent to remove sulfur from the coal gases while substantially simultaneously, catalytically decomposing appreciable amounts of ammonia from the coal gases.
Instantiated mixed effects modeling of Alzheimer's disease markers.
Guerrero, R; Schmidt-Richberg, A; Ledig, C; Tong, T; Wolz, R; Rueckert, D
2016-11-15
The assessment and prediction of a subject's current and future risk of developing neurodegenerative diseases like Alzheimer's disease are of great interest in both the design of clinical trials as well as in clinical decision making. Exploring the longitudinal trajectory of markers related to neurodegeneration is an important task when selecting subjects for treatment in trials and the clinic, in the evaluation of early disease indicators and the monitoring of disease progression. Given that there is substantial intersubject variability, models that attempt to describe marker trajectories for a whole population will likely lack specificity for the representation of individual patients. Therefore, we argue here that individualized models provide a more accurate alternative that can be used for tasks such as population stratification and a subject-specific prognosis. In the work presented here, mixed effects modeling is used to derive global and individual marker trajectories for a training population. Test subject (new patient) specific models are then instantiated using a stratified "marker signature" that defines a subpopulation of similar cases within the training database. From this subpopulation, personalized models of the expected trajectory of several markers are subsequently estimated for unseen patients. These patient specific models of markers are shown to provide better predictions of time-to-conversion to Alzheimer's disease than population based models.
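The global-plus-individual trajectory idea can be sketched with a random-intercept linear mixed model in statsmodels; the marker data below are simulated for illustration, not the study's cohort data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_visits = 40, 5
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits, dtype=float), n_subj)

u = rng.normal(0.0, 1.0, n_subj)                 # subject-specific offsets
marker = 2.0 + 0.5 * time + u[subj] + rng.normal(0, 0.3, subj.size)
df = pd.DataFrame({"subj": subj, "time": time, "marker": marker})

# Fixed effects give the global trajectory; the random intercept captures
# each subject's deviation from it.
fit = smf.mixedlm("marker ~ time", df, groups=df["subj"]).fit()
slope = fit.params["time"]        # recovers the simulated slope of 0.5
```

Instantiating a subject-specific model, as the paper proposes, would amount to refitting or conditioning on a "marker signature"-matched subpopulation rather than the full training set.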
Chen, Bao-Ming; Peng, Shao-Lin; D’Antonio, Carla M.; Li, Dai-Jiang; Ren, Wen-Tao
2013-01-01
A common hypothesis to explain the effect of litter mixing is based on the difference in litter N content between mixed species. Although many studies have shown that litter of invasive non-native plants typically has higher N content than that of native plants in the communities they invade, there has been surprisingly little study of mixing effects during plant invasions. We address this question in south China, where Mikania micrantha H.B.K., a non-native vine with high litter N content, has invaded many forested ecosystems. We were specifically interested in whether this invader accelerated decomposition and how the strength of the litter mixing effect changes with the degree of invasion and over time during litter decomposition. Using litterbags, we evaluated the effect of mixing litter of M. micrantha with the litter of seven native resident plants at three ratios: M1 (1:4 exotic:native litter), M2 (1:1) and M3 (4:1 exotic:native litter), over three incubation periods. We compared mixed litter with unmixed litter of the native species to identify whether a non-additive effect of mixing litter existed. We found significant positive non-additive effects of litter mixing on both mass loss and nutrient release. These effects changed with native species identity, mixture ratio and decay time. Overall, the greatest accelerations of mixture decay and N release tended to occur at the highest degree of invasion (mix ratio M3) and during the middle and final measured stages of decomposition. Contrary to expectations, the initial difference in litter N did not explain species differences in the effect of mixing, but overall it appears that invasion by M. micrantha is accelerating the decomposition of native species litter. This effect on a fundamental ecosystem process could contribute to higher rates of nutrient turnover in invaded ecosystems. PMID:23840435
Clarke, David C; Morris, Melody K; Lauffenburger, Douglas A
2013-01-01
Multiplexed bead-based flow cytometric immunoassays are a powerful experimental tool for investigating cellular communication networks, yet their widespread adoption is limited in part by challenges in robust quantitative analysis of the measurements. Here we report our application of mixed-effects modeling for the normalization and statistical analysis of bead-based immunoassay data. Our data set consisted of bead-based immunoassay measurements of 16 phospho-proteins in lysates of HepG2 cells treated with ligands that regulate acute-phase protein secretion. Mixed-effects modeling provided estimates for the effects of both the technical and biological sources of variance, and normalization was achieved by subtracting the technical effects from the measured values. This approach allowed us to detect ligand effects on signaling with greater precision and sensitivity and to more accurately characterize the HepG2 cell signaling network using constrained fuzzy logic. Mixed-effects modeling analysis of our data was vital for ascertaining that IL-1α and TGF-α treatment increased the activities of more pathways than IL-6 and TNF-α and that TGF-α and TNF-α increased p38 MAPK and c-Jun N-terminal kinase (JNK) phospho-protein levels in a synergistic manner. Moreover, we used mixed-effects modeling-based technical effect estimates to reveal the substantial variance contributed by batch effects along with the absence of loading order and assay plate position effects. We conclude that mixed-effects modeling enabled additional insights to be gained from our data than would otherwise be possible and we discuss how this methodology can play an important role in enhancing the value of experiments employing multiplexed bead-based immunoassays.
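The normalization step described here, estimating technical effects with a mixed model and subtracting them from the measurements, can be sketched as follows, with batch as the only (simulated) technical factor:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_batch, n_per = 6, 30
batch = np.repeat(np.arange(n_batch), n_per)

tech = rng.normal(0.0, 1.0, n_batch)             # technical batch effects
y = rng.normal(5.0, 0.5, batch.size) + tech[batch]

df = pd.DataFrame({"y": y, "batch": batch})
# Random-intercept model: the batch effect is a random effect to be estimated
fit = smf.mixedlm("y ~ 1", df, groups=df["batch"]).fit()

# Normalize by subtracting each observation's estimated batch effect
re = np.array([fit.random_effects[b].iloc[0] for b in batch])
y_norm = y - re
```

After subtraction the between-batch (technical) variance is largely removed, which is what sharpened the ligand-effect comparisons in the study.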
Maximal atmospheric neutrino mixing in an SU(5) model
NASA Astrophysics Data System (ADS)
Grimus, W.; Lavoura, L.
2003-05-01
We show that maximal atmospheric and large solar neutrino mixing can be implemented in SU(5) gauge theories, by making use of the U(1)F symmetry associated with a suitably defined family number F, together with a Z2 symmetry which does not commute with F. U(1)F is softly broken by the mass terms of the right-handed neutrino singlets, which are responsible for the seesaw mechanism; in addition, U(1)F is also spontaneously broken at the electroweak scale. In our scenario, lepton mixing stems exclusively from the right-handed-neutrino Majorana mass matrix, whereas the CKM matrix originates solely in the up-type-quark sector. We show that, despite the non-supersymmetric character of our model, unification of the gauge couplings can be achieved at a scale 10^16 GeV < mU < 10^19 GeV; indeed, we have found a particular solution to this problem which yields results almost identical to the ones of the minimal supersymmetric standard model.
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2016-11-01
The mixing volume model (MVM), a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid-scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated with the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.
A mixing evolution model for bidirectional microblog user networks
NASA Astrophysics Data System (ADS)
Yuan, Wei-Guo; Liu, Yun
2015-08-01
Microblogs have been widely used as a new form of online social networking. Based on the user profile data collected from Sina Weibo, we find that the number of microblog user bidirectional friends approximately follows a lognormal distribution. We then build two microblog user networks with real bidirectional relationships, both of which exhibit not only small-world and scale-free properties but also some special features, such as a double power-law degree distribution, disassortativity, and hierarchical and rich-club structure. Moreover, by detecting the community structures of the two real networks, we find that both of their community sizes follow an exponential distribution. Based on this empirical analysis, we present a novel evolving network model with mixed connection rules, including lognormal-fitness preferential attachment and random attachment, nearest-neighbor interconnection within the same community, and global random associations across different communities. The simulation results show that our model is consistent with the real networks in many topological features.
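One ingredient of the proposed evolution model, fitness-driven preferential attachment with lognormal fitness, can be sketched as follows (a minimal sketch only; the community-level connection rules of the full model are omitted):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Lognormal fitness, as the empirical bidirectional-friend counts suggest
fitness = rng.lognormal(0.0, 1.0, n)
degree = np.zeros(n)
degree[:2] = 1                     # seed edge between nodes 0 and 1

# Each arriving node attaches to one existing node with probability
# proportional to degree * fitness (fitness-biased preferential attachment).
for v in range(2, n):
    w = degree[:v] * fitness[:v]
    u = rng.choice(v, p=w / w.sum())
    degree[u] += 1
    degree[v] += 1
```

High-fitness early nodes accumulate disproportionately many links, producing the heavy-tailed degree distributions the paper reports.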
Study of a mixed dispersal population dynamics model
Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu-Yen; Klymko, Christine F.; Thomas, Evelyn; Zhao, Bingyu
2016-08-27
In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how long time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and with a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.
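A mixed local/nonlocal dispersal equation of the kind described can be sketched with an explicit time step on a periodic 1-D grid; the kernel, rates, and resource function r(x) below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# u_t = dl*u_xx + dn*(K*u - u) + r(x)*u : local diffusion plus a nonlocal
# convolution term and a spatially varying (favorable/unfavorable) resource.
n, dx, dt = 200, 0.1, 0.001
L = n * dx
x = np.arange(n) * dx
u = 1.0 + 0.1 * np.sin(2 * np.pi * x / L)        # initial population density
r = np.where(np.sin(2 * np.pi * x / L) > 0, 0.5, -0.5)

dist = np.minimum(x, L - x)                      # periodic distance from 0
K = np.exp(-dist**2)
K /= K.sum()                                     # normalized dispersal kernel
K_hat = np.fft.fft(K)

dl, dn = 0.1, 0.2
for _ in range(1000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    conv = np.real(np.fft.ifft(K_hat * np.fft.fft(u)))   # circular K*u
    u = u + dt * (dl * lap + dn * (conv - u) + r * u)
```

The long-time sign of growth in such a scheme is governed by the principal eigenvalue of the linearized operator, which is the quantity the paper minimizes over resource arrangements.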
Gennings, Chris; Wagner, Elizabeth D.; Simmons, Jane Ellen; Plewa, Michael J.
2010-01-01
For mixtures of many chemicals, a ray design based on a relevant, fixed mixing ratio is useful for detecting departure from additivity. Methods for detecting departure involve modeling the response as a function of total dose along the ray. For mixtures with many components, the interaction may be dose dependent. Therefore, we have developed the use of a three-segment model containing both a dose threshold and an interaction threshold. Prior to the dose threshold, the response is that of background; between the dose threshold and the interaction threshold, an additive relationship exists; the model allows for departure from additivity beyond the interaction threshold. With such a model, we can conduct a hypothesis test of additivity, as well as a test for a region of additivity. The methods are illustrated with cytotoxicity data that arise when Chinese hamster ovary cells are exposed to a mixture of nine haloacetic acids. PMID:21359103
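A three-segment model with a dose threshold t0 and an interaction threshold t1 can be sketched as a piecewise-linear fit by nonlinear least squares; the functional form and data below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_segment(d, b, s_add, s_extra, t0, t1):
    """Background b below t0, additive slope s_add between t0 and t1,
    and an extra (interaction) slope s_extra beyond t1."""
    beyond_t0 = np.clip(d - t0, 0.0, None)
    beyond_t1 = np.clip(d - t1, 0.0, None)
    add = beyond_t0 - beyond_t1          # portion between the two thresholds
    return b + s_add * add + (s_add + s_extra) * beyond_t1

# Simulated responses along the mixture ray
d = np.linspace(0, 10, 200)
rng = np.random.default_rng(3)
y = three_segment(d, 1.0, 0.5, 1.5, 2.0, 6.0) + rng.normal(0, 0.05, d.size)

p, _ = curve_fit(three_segment, d, y, p0=[1.0, 0.4, 1.0, 1.5, 5.0])
```

A test of additivity then amounts to testing s_extra = 0, and a test for a region of additivity to inference on t1.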
Linear models for sound from supersonic reacting mixing layers
NASA Astrophysics Data System (ADS)
Chary, P. Shivakanth; Samanta, Arnab
2016-12-01
We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in near-field mixing, how they radiate to the far field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show significantly alters the growth of instability waves by saturating them earlier, as in nonlinear calculations; this is achieved here by solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with less spreading of the mixing layer compared to the corresponding fast modes. In contrast, the radiated sound is relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to have a pronounced effect on the slow-mode radiation by reducing its modal growth.
Mesoscale Modeling During Mixed-Phase Arctic Cloud Experiment
Avramov, A.; Harrington, J.Y.; Verlinde, J.
2005-03-18
Mixed-phase arctic stratus clouds are the predominant cloud type in the Arctic (Curry et al. 2000) and through various feedback mechanisms exert a strong influence on the Arctic climate. Perhaps one of the most intriguing of their features is that they tend to have liquid tops that precipitate ice. Despite the fact that this situation is colloidally unstable, these cloud systems are quite long lived - from a few days to over a couple of weeks. It has been hypothesized that mixed-phase clouds are maintained through a balance between liquid water condensation resulting from cloud-top radiative cooling and ice removal by precipitation (Pinto 1998; Harrington et al. 1999). In their modeling study, Harrington et al. (1999) found that the maintenance of this balance depends strongly on the ambient concentration of ice-forming nuclei (IFN). In a follow-up study, Jiang et al. (2002), using only 30% of the IFN concentration predicted by the Meyers et al. (1992) IFN parameterization, were able to obtain results similar to the observations reported by Pinto (1998). The IFN concentration measurements collected during the Mixed-Phase Arctic Cloud Experiment (M-PACE), conducted in October 2004 over the North Slope of Alaska and the Beaufort Sea (Verlinde et al. 2005), also showed much lower values than those predicted (Prenne, pers. comm.) by currently accepted ice nucleation parameterizations (e.g. Meyers et al. 1992). The goal of this study is to use the extensive IFN data taken during M-PACE to examine what effects low IFN concentrations have on mesoscale cloud structure and coastal dynamics.
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…
Xu, Songchen; Manna, Kuntal; Ellern, Arkady; Sadow, Aaron D
2014-12-08
In order to facilitate oxidative addition chemistry of fac-coordinated rhodium(I) and iridium(I) compounds, carbene–bis(oxazolinyl)phenylborate proligands have been synthesized and reacted with organometallic precursors. Two proligands, PhB(OxMe2)2(ImtBuH) (H[1]; OxMe2 = 4,4-dimethyl-2-oxazoline; ImtBuH = 1-tert-butylimidazole) and PhB(OxMe2)2(ImMesH) (H[2]; ImMesH = 1-mesitylimidazole), are deprotonated with potassium benzyl to generate K[1] and K[2], and these potassium compounds serve as reagents for the synthesis of a series of rhodium and iridium complexes. Cyclooctadiene and dicarbonyl compounds {PhB(OxMe2)2ImtBu}Rh(η4-C8H12) (3), {PhB(OxMe2)2ImMes}Rh(η4-C8H12) (4), {PhB(OxMe2)2ImMes}Rh(CO)2 (5), {PhB(OxMe2)2ImMes}Ir(η4-C8H12) (6), and {PhB(OxMe2)2ImMes}Ir(CO)2 (7) are synthesized along with ToMM(η4-C8H12) (M = Rh (8); M = Ir (9); ToM = tris(4,4-dimethyl-2-oxazolinyl)phenylborate). The spectroscopic and structural properties and reactivity of this series of compounds show electronic and steric effects of substituents on the imidazole (tert-butyl vs mesityl), effects of replacing an oxazoline in ToM with a carbene donor, and the influence of the donor ligand (CO vs C8H12). The reactions of K[2] and [M(μ-Cl)(η2-C8H14)2]2 (M = Rh, Ir) provide {κ4-PhB(OxMe2)2ImMes'CH2}Rh(μ-H)(μ-Cl)Rh(η2-C8H14)2 (10) and {PhB(OxMe2)2ImMes}IrH(η3-C8H13) (11). In the former compound, a spontaneous oxidative addition of a mesityl ortho-methyl to give a mixed-valent dirhodium species is observed, while the iridium compound forms a monometallic allyl hydride. Photochemical reactions of dicarbonyl compounds 5 and 7 result in C–H bond oxidative addition providing the compounds {κ4-PhB(OxMe2)2ImMes'CH2}RhH(CO) (12) and {PhB(OxMe2)2ImMes}IrH(Ph)CO (13). In 12, oxidative addition results in cyclometalation of the mesityl ortho-methyl similar to 10, whereas the iridium compound reacts with the benzene solvent to give a rare crystallographically characterized cis
A New Model for the Solubility of Water+Carbon Dioxide Mixed Fluids in Magmatic Systems
NASA Astrophysics Data System (ADS)
Ghiorso, M. S.; Gualda, G. A.
2012-12-01
A model is calibrated that permits estimation of the thermodynamic properties of dissolved H2O and CO2 components in silicate liquids of magmatic composition. The model is internally consistent with thermodynamic data/model collections in both MELTS (CMP 119; 197-212) and rhyolite-MELTS (JP 53, 875-890). It is calibrated from extensive literature data collected over a broad range of melt compositions (mafic to silicic) on the solubility of water (>1225 experiments, 700°-1600°C, 0-3 GPa), carbon dioxide (>450 experiments, 1150°-1800°C, 0-3.5 GPa), and mixed H2O-CO2 fluids (>140 experiments, 950°-1650°C, 0-3 GPa) in silicate liquids. The model reproduces these solubility data without bias over the entire range of temperature, pressure and composition. At lower pressures (<1 GPa) model residuals are within experimental uncertainty, but residuals are systematically larger at more elevated pressures. The model formulation relies on the EOS of Duan and Zhang (GCA 70, 2311-2324) for estimation of thermodynamic properties of fluid end members and of the mixed fluid. Melt properties are modeled under the simplifying assumption that water disassociates to hydroxyl species in the melt and that carbon dioxide dissolves as a molecular species. Both of these assumptions have been tested against more refined approximations involving speciation, with insufficient improvement of model recovery for solubility data to warrant the additional complexity. The calibrated mixed fluid model is an extension of and is backward compatible with the thermodynamic model for dissolved water in MELTS and rhyolite-MELTS. Additional calibration parameters for the mixed fluid include the enthalpy, entropy and volume of the CO2-melt component as well as regular solution-type interaction parameters between CO2 and "anhydrous" melt components (after MELTS); a total of 12 parameters in all. We find no compelling experimental evidence to justify a CO2-H2O interaction term in the melt. In addition to
Modeling of mixed-mode chromatography of peptides.
Bernardi, Susanna; Gétaz, David; Forrer, Nicola; Morbidelli, Massimo
2013-03-29
Mixed-mode chromatographic materials are more and more often used for the purification of biomolecules, such as peptides and proteins. In many instances they in fact exhibit better selectivity values and therefore improve the purification efficiency compared to classical materials. In this work, a model to describe biomolecule retention in cation-exchange/reversed-phase (CIEX-RP) mixed-mode columns under dilute conditions has been developed. The model accounts for the effect of the salt and organic modifier concentration on the biomolecule Henry coefficient through three parameters: α, β and γ. The α parameter is related to the adsorption strength and ligand density, β represents the number of organic modifier molecules necessary to displace one adsorbed biomolecule and γ represents the number of salt molecules necessary to desorb one biomolecule. The latter parameter is strictly related to the number of charges on the biomolecule surface interacting with the ion-exchange ligands, and it is shown experimentally that its value is close to the biomolecule net charge. The model reliability has been validated by a large set of experimental data including retention times of two different peptides (goserelin and insulin) on five columns: a reversed-phase C8 column and four CIEX-RP columns with different percentages of sulfonic groups and various concentration values of the salt and organic modifier. It has been found that the percentage of sulfonic groups on the surface strongly affects the peptide adsorption strength; in particular, in the cases investigated, a CIEX ligand density around 0.04 μmol/m² leads to optimal retention values.
THE USE OF DI WATER TO MITIGATE DUSTING FOR ADDITION OF DWPF FRIT TO THE SLURRY MIX EVAPORATOR
Hansen, E.
2010-07-21
The Defense Waste Processing Facility (DWPF) is currently seeking means to reduce water utilization in the Slurry Mix Evaporator (SME) process, thus reducing effluent volume and processing times. The frit slurry addition system mixes dry frit with water, yielding a slurry of approximately 50 weight percent frit in water. This slurry is discharged into the SME and the excess water is removed by boiling. To reduce this water load to the SME, DWPF has proposed using a pneumatic system to convey the frit to the SME, in essence a dry delivery system. The problem with utilizing a dry delivery system with the existing frit is the generation of dust when it is discharged into the SME. The use of water has been shown to be effective in the mining industry, as well as in the DOE complex, for mitigating dusting. The method employed by SRNL to determine the quantity of water needed to mitigate dusting in dry powders proved effective in both lab- and bench-scale tests. In those tests, it was shown that as much as five weight percent (wt%) water addition was required to mitigate dust from batches of glass-forming minerals used by the Waste Treatment Plant at Hanford, Washington. The same method was used in this task to determine the quantity of water needed to mitigate dusting of as-received frit. The ability of water to mitigate dusting is due to its adhesive properties, as shown in Figure 1-1. Wetting the frit particles allows the smaller frit particles (including dust) to adhere to the larger frit particles or to agglomerate into larger particles. Fluids other than water can also be used, but their adhesive properties differ from those of water and the quantity required to mitigate dusting differs accordingly, as was observed in reference 1. Excessive water, a few weight percent more than that required to mitigate dusting, can cause the resulting material not to flow. The primary
Efficient material flow in mixed model assembly lines.
Alnahhal, Mohammed; Noche, Bernd
2013-01-01
In this study, material flow from decentralized supermarkets to stations in mixed model assembly lines using tow (tugger) trains is investigated. Train routing, scheduling, and loading problems are investigated in parallel to minimize the number of trains, the variability in loading and in route lengths, and line-side inventory holding costs. The general framework for solving these problems in parallel combines analytical equations, Dynamic Programming (DP), and Mixed Integer Programming (MIP). Matlab in conjunction with the LP-solve software was used to formulate the problem. An example is presented to explain the idea. Results, obtained in very short CPU time, showed the effect of using a time buffer among routes on the feasible space and on the optimal solution. Results also showed the effect of the objective of reducing variability in loading on the results of routing, scheduling, and loading. Moreover, results showed the importance of considering the maximum line-side inventory alongside the capacity of the train when finding the optimal solution.
Mixed-Effects Modeling with Crossed Random Effects for Subjects and Items
ERIC Educational Resources Information Center
Baayen, R. H.; Davidson, D. J.; Bates, D. M.
2008-01-01
This paper provides an introduction to mixed-effects models for the analysis of repeated measurement data with subjects and items as crossed random effects. A worked-out example of how to use recent software for mixed-effects modeling is provided. Simulation studies illustrate the advantages offered by mixed-effects analyses compared to…
Transitional Boundary-Layer Solutions Using a Mixing-Length and a Two-Equation Turbulence Model
NASA Technical Reports Server (NTRS)
Anderson, E. C.; Wilcox, D. C.
1978-01-01
Boundary-layer solutions were obtained using the conventional two-layer mixing-length turbulence model and the Wilcox-Traci two-equation model of turbulence. Both flat-plate and blunt-body geometries were considered. The most significant result of the study is the development of approximations for the two-equation model which permit streamwise step sizes comparable to those used in mixing-length computations. Additionally, a set of model-equation boundary conditions was derived that applies equally well to both flat-plate and blunt-body geometries. Solutions obtained with the two-equation turbulence model are compared with experimental data and/or corresponding solutions obtained using the mixing-length model. Agreement is satisfactory for flat-plate boundary layers but not for blunt-body boundary layers.
JACKSON VL
2011-08-31
The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.
Metabolic modeling of mixed substrate uptake for polyhydroxyalkanoate (PHA) production.
Jiang, Yang; Hebly, Marit; Kleerebezem, Robbert; Muyzer, Gerard; van Loosdrecht, Mark C M
2011-01-01
Polyhydroxyalkanoate (PHA) production by mixed microbial communities can be established in a two-stage process, consisting of a microbial enrichment step and a PHA accumulation step. In this study, a mathematical model was constructed for evaluating the influence of the carbon substrate composition on both steps of the PHA production process. Experiments were conducted with acetate, propionate, and acetate propionate mixtures. Microbial community analysis demonstrated that despite the changes in substrate composition the dominant microorganism was Plasticicumulans acidivorans in all experiments. A metabolic network model was established to investigate the processes observed. The model based analysis indicated that adaptation of the acetate and propionate uptake rate as a function of acetate and propionate concentrations in the substrate during cultivation occurred. The monomer composition of the PHA produced was found to be directly related to the composition of the substrate. Propionate induced mainly polyhydroxyvalerate (PHV) production whereas only polyhydroxybutyrate (PHB) was produced on acetate. Accumulation experiments with acetate-propionate mixtures yielded PHB/PHV mixtures in ratios directly related to the acetate and propionate uptake rate. The model developed can be used as a useful tool to predict the PHA composition as a function of the substrate composition for acetate-propionate mixtures.
Neutrino mixing in a left-right model
NASA Astrophysics Data System (ADS)
Martins Simões, J. A.; Ponciano, J. A.
We study the mixing among different generations of massive neutrino fields in a model that can accommodate a consistent pattern for neutral fermion masses as well as neutrino oscillations. The left and right sectors can be connected by a new neutral current. PACS: 12.60.-i, 14.60.St, 14.60.Pq
Longitudinal Mixed Membership Trajectory Models for Disability Survey Data.
Manrique-Vallier, Daniel
2014-12-01
We develop new methods for analyzing discrete multivariate longitudinal data and apply them to functional disability data on U.S. elderly population from the National Long Term Care Survey (NLTCS), 1982-2004. Our models build on a mixed membership framework, in which individuals are allowed multiple membership on a set of extreme profiles characterized by time-dependent trajectories of progression into disability. We also develop an extension that allows us to incorporate birth-cohort effects, in order to assess inter-generational changes. Applying these methods we find that most individuals follow trajectories that imply a late onset of disability, and that younger cohorts tend to develop disabilities at a later stage in life compared to their elders.
Chemical geothermometers and mixing models for geothermal systems
Fournier, R.O.
1977-01-01
Qualitative chemical geothermometers utilize anomalous concentrations of various "indicator" elements in groundwaters, streams, soils, and soil gases to outline favorable places to explore for geothermal energy. Some of the qualitative methods, such as the delineation of mercury and helium anomalies in soil gases, do not require the presence of hot springs or fumaroles. However, these techniques may also outline fossil thermal areas that are now cold. Quantitative chemical geothermometers and mixing models can provide information about present probable minimum subsurface temperatures. Interpretation is easiest where several hot or warm springs are present in a given area. At this time the most widely used quantitative chemical geothermometers are silica, Na/K, and Na-K-Ca. © 1976.
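As a concrete instance of a quantitative chemical geothermometer, the classical quartz (no steam loss) silica calibration can be sketched as below; the coefficients follow the standard Fournier formulation, and the spring chemistry used in the example is hypothetical:

```python
import math

def quartz_geothermometer(sio2_mg_per_kg):
    """Quartz (no steam loss) silica geothermometer in its classical form:
    T(degC) = 1309 / (5.19 - log10(SiO2)) - 273.15, with SiO2 in mg/kg.
    The calibration is intended for roughly 0-250 degC reservoir waters."""
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15

# A spring water carrying ~100 mg/kg dissolved silica implies a minimum
# subsurface temperature of roughly 137 degC under this calibration.
t_low = quartz_geothermometer(100.0)
t_high = quartz_geothermometer(300.0)  # more silica -> hotter inferred reservoir
```

Mixing models then correct such estimates when hot water is diluted by cold shallow groundwater before emerging at a spring, which is why interpretation is easiest where several springs sample the same system.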
Computation of Supersonic Jet Mixing Noise Using PARC Code With a kappa-epsilon Turbulence Model
NASA Technical Reports Server (NTRS)
Khavaran, A.; Kim, C. M.
1999-01-01
A number of modifications have been proposed in order to improve the jet noise prediction capabilities of the MGB code. This code, which was developed at General Electric, employs the concept of acoustic analogy for the prediction of turbulent mixing noise. The source convection and also the refraction of sound due to the shrouding effect of the mean flow are accounted for by incorporating the high-frequency solution to Lilley's equation for cylindrical jets (Balsa and Mani). The broadband shock-associated noise is estimated using Harper-Bourne and Fisher's shock noise theory. The proposed modifications are aimed at improving the aerodynamic predictions (source/spectrum computations) and allowing for non-axisymmetric effects in the jet plume and nozzle geometry (sound/flow interaction). In addition, recent advances in shock noise prediction as proposed by Tam can be employed to predict the shock-associated noise as an addition to the jet mixing noise when the flow is not perfectly expanded. Here we concentrate on the aerodynamic predictions using the PARC code with a k-ε turbulence model and the ensuing turbulent mixing noise. The geometry under consideration is an axisymmetric convergent-divergent nozzle at its design operating conditions. Aerodynamic and acoustic computations are compared with data as well as with predictions from the original MGB model using Reichardt's aerodynamic theory.
Wave-turbulence interaction-induced vertical mixing and its effects in ocean and climate models.
Qiao, Fangli; Yuan, Yeli; Deng, Jia; Dai, Dejun; Song, Zhenya
2016-04-13
Heated from above, the oceans are stably stratified. Therefore, the performance of general ocean circulation models and climate studies through coupled atmosphere-ocean models depends critically on vertical mixing of energy and momentum in the water column. Many of the traditional general circulation models are based on total kinetic energy (TKE), in which the roles of waves are averaged out. Although theoretical calculations suggest that waves could greatly enhance coexisting turbulence, no field measurements on turbulence have ever validated this mechanism directly. To address this problem, a specially designed field experiment has been conducted. The experimental results indicate that the wave-turbulence interaction-induced enhancement of the background turbulence is indeed the predominant mechanism for turbulence generation and enhancement. Based on this understanding, we propose a new parametrization for vertical mixing as an additive part to the traditional TKE approach. This new result reconfirmed the past theoretical model that had been tested and validated in numerical model experiments and field observations. It firmly establishes the critical role of wave-turbulence interaction effects in both general ocean circulation models and atmosphere-ocean coupled models, which could greatly improve the understanding of the sea surface temperature and water column properties distributions, and hence model-based climate forecasting capability.
Percolation model with an additional source of disorder
NASA Astrophysics Data System (ADS)
Kundu, Sumanta; Manna, S. S.
2016-06-01
The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by temperature fluctuations in the air, obstruction by solid objects, even humidity differences in the environment, etc. How the varying transmission range of the individual active elements affects the global connectivity of the network is an important practical question. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random radius R. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at its ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve; a point within one region represents an occupied bond, otherwise the bond is vacant. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values: one is pc(sq), the percolation threshold for ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.
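The construction above can be sketched with a toy Monte Carlo implementation. The overlap condition R1 + R2 ≥ 1 (in units of the lattice spacing), the two-valued radii, and all parameter values below are illustrative assumptions only, since the paper studies several such bond rules:

```python
import random

def percolates(L, p, r0, rng):
    """Toy disk-disorder percolation: sites of an LxL square lattice are
    occupied with probability p; each occupied site gets a radius drawn
    uniformly from {0, r0}.  A nearest-neighbour bond is occupied when the
    disks at its ends overlap, R1 + R2 >= 1.  Returns True if occupied bonds
    connect the top row to the bottom row (checked with union-find)."""
    radius = [[(r0 if rng.random() < 0.5 else 0.0) if rng.random() < p else None
               for _ in range(L)] for _ in range(L)]
    parent = list(range(L * L + 2))      # two virtual nodes: top, bottom
    TOP, BOT = L * L, L * L + 1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(L):
        for j in range(L):
            if radius[i][j] is None:
                continue
            if i == 0:
                union(i * L + j, TOP)
            if i == L - 1:
                union(i * L + j, BOT)
            for di, dj in ((1, 0), (0, 1)):   # bonds to right and down neighbours
                ni, nj = i + di, j + dj
                if ni < L and nj < L and radius[ni][nj] is not None:
                    if radius[i][j] + radius[ni][nj] >= 1.0:
                        union(i * L + j, ni * L + nj)
    return find(TOP) == find(BOT)

# With no occupied sites the system can never span; with every site occupied
# and r0 = 1 a bond fails only when both endpoint radii are 0, so spanning
# is (statistically) near-certain.
never = percolates(30, 0.0, 1.0, random.Random(0))
spans = sum(percolates(30, 1.0, 1.0, random.Random(s)) for s in range(20))
```

Sweeping p (or r0) and recording the spanning fraction over many realizations is how the continuously varying threshold described in the abstract would be located numerically.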
Research on mixed network architecture collaborative application model
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Zhao, Xi'an; Liang, Song
2009-10-01
When facing the complex requirements of city development, ever-growing spatial data, rapid development of geographical business, and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (Client/Server or Browser/Server models) does not support this well. Collaborative applications are one good resolution. Collaborative applications have four main problems to resolve: consistency and co-edit conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, proactive and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and they bring new methods for cooperation and for access to spatial data. A multi-level cache holds part of the full data; it reduces the network load and improves the access to and handling of spatial data, especially when editing. With agent technology, we make full use of its intelligent characteristics for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.
Cruise observation and numerical modeling of turbulent mixing in the Pearl River estuary in summer
NASA Astrophysics Data System (ADS)
Pan, Jiayi; Gu, Yanzhen
2016-06-01
The turbulent mixing in the Pearl River estuary and plume area is analyzed using cruise data and simulation results from the Regional Ocean Model System (ROMS). The cruise observations reveal that strong mixing appeared in the bottom layer on the larger ebb tide in the estuary. Model simulations are consistent with the observations and suggest that inside the estuary and in the near-shore water, mixing is stronger on ebb than on flood. Analysis of the mixing generation mechanisms based on the model data reveals that bottom stress is responsible for the generation of turbulence in the estuary; in the re-circulating plume area, internal shear instability plays an important role in the mixing; and wind may induce surface mixing in the plume far-field. The estuary mixing is controlled by the tidal strength, and in the re-circulating plume bulge, wind stirring may reinforce the internal shear instability mixing.
Hyperbolic value addition and general models of animal choice.
Mazur, J E
2001-01-01
Three mathematical models of choice--the contextual-choice model (R. Grace, 1994), delay-reduction theory (N. Squires & E. Fantino, 1971), and a new model called the hyperbolic value-added model--were compared in their ability to predict the results from a wide variety of experiments with animal subjects. When supplied with 2 or 3 free parameters, all 3 models made fairly accurate predictions for a large set of experiments that used concurrent-chain procedures. One advantage of the hyperbolic value-added model is that it is derived from a simpler model that makes accurate predictions for many experiments using discrete-trial adjusting-delay procedures. Some results favor the hyperbolic value-added model and delay-reduction theory over the contextual-choice model, but more data are needed from choice situations for which the models make distinctly different predictions.
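The "simpler model" underlying the hyperbolic value-added model is Mazur's hyperbolic delay-discounting equation, V = A/(1 + KD). A minimal sketch (the reward amounts and delays below are illustrative, not taken from the experiments reviewed):

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Mazur's hyperbolic delay-discounting equation: V = A / (1 + K*D),
    where A is reward amount, D is delay, and K is a sensitivity parameter.
    The hyperbolic value-added choice model is derived from this simpler
    discrete-trial equation."""
    return amount / (1.0 + k * delay)

# An immediate reward keeps its full value; value then falls hyperbolically,
# so adding a fixed delay hurts a short-delay option far more than a long one.
v0 = hyperbolic_value(10.0, 0.0)
v5 = hyperbolic_value(10.0, 5.0)
v10 = hyperbolic_value(10.0, 10.0)
```

This shallow-tailed decay is what produces preference reversals in adjusting-delay procedures, the class of experiments the abstract says the derived model predicts well.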
Using Bayesian Stable Isotope Mixing Models to Enhance Marine Ecosystem Models
The use of stable isotopes in food web studies has proven to be a valuable tool for ecologists. We investigated the use of Bayesian stable isotope mixing models as constraints for an ecosystem model of a temperate seagrass system on the Atlantic coast of France. δ13C and δ15N i...
Covariate-Adjusted Linear Mixed Effects Model with an Application to Longitudinal Data
Nguyen, Danh V.; Şentürk, Damla; Carroll, Raymond J.
2009-01-01
Linear mixed effects (LME) models are useful for longitudinal data/repeated measurements. We propose a new class of covariate-adjusted LME models for longitudinal data that nonparametrically adjusts for a normalizing covariate. The proposed approach involves fitting a parametric LME model to the data after adjusting for the nonparametric effects of a baseline confounding covariate. In particular, the effect of the observable covariate on the response and predictors of the LME model is modeled nonparametrically via smooth unknown functions. In addition to covariate-adjusted estimation of fixed/population parameters and random effects, an estimation procedure for the variance components is also developed. Numerical properties of the proposed estimators are investigated with simulation studies. The consistency and convergence rates of the proposed estimators are also established. An application to a longitudinal data set on calcium absorption, accounting for baseline distortion from body mass index, illustrates the proposed methodology. PMID:19266053
NASA Astrophysics Data System (ADS)
Oh, Min Wook; Kang, Jae Won; Yeo, Dong Hun; Shin, Hyo Soon; Jeong, Dae Yong
2015-04-01
Recently, the use of small BaTiO3 particles in ultra-thin MLCC research has increased as a method for minimizing the dielectric layer thickness in the thick-film process. However, when particles smaller than 100 nm are used, the reduced particle size leads to a reduced dielectric constant. The use of nanoparticles therefore requires an increase in the amount of additive used, owing to the increase in specific surface area, thus increasing the production cost. In this study, a novel method of coating 150-nm and 80-nm BaTiO3 powders with additives and mixing them together was employed, taking advantage of the effect obtained through the use of BaTiO3 particles smaller than 100 nm, to conveniently obtain the desired dielectric constant and thermal characteristics. The microstructure and the dielectric properties were also evaluated. The additives Dy, Mn, Mg, Si, and Cr were coated on the 150-nm powder, and the additives Dy, Mn, Mg, and Si were coated on the 80-nm powder, followed by mixing at a ratio of 1:1. The resulting microstructure revealed grain formation governed by the liquid-phase additive Si, and densification was well realized. However, non-reducibility was not obtained, and the material became a semiconductor. When the amount of Mn added to the 150-nm powder was increased to 0.2 and 0.3 mol%, insignificant changes in the microstructure were observed, and the bulk density after mixing increased drastically in comparison to that before mixing. Non-reducibility was also obtained under certain conditions. The dielectric properties were found to be consistent with the densification and the grain size. The mixed composition #1-0.3 had a dielectric constant over 2000, and the result somewhat satisfied the dielectric constant temperature dependency for X6S.
Extended Mixed-Effects Item Response Models with the MH-RM Algorithm
ERIC Educational Resources Information Center
Chalmers, R. Philip
2015-01-01
A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…
Chandra Observations and Models of the Mixed Morphology Supernova Remnant W44: Global Trends
NASA Technical Reports Server (NTRS)
Shelton, R. L.; Kuntz, K. D.; Petre, R.
2004-01-01
We report on the Chandra observations of the archetypical mixed morphology (or thermal composite) supernova remnant, W44. As with other mixed morphology remnants, W44's projected center is bright in thermal X-rays. It has an obvious radio shell, but no discernible X-ray shell. In addition, X-ray bright knots dot W44's image. The spectral analysis of the Chandra data shows that the remnant's hot, bright projected center is metal-rich and that the bright knots are regions of comparatively elevated elemental abundances. Neon is among the affected elements, suggesting that ejecta contribute to the abundance trends. Furthermore, some of the emitting iron atoms appear to be underionized with respect to the other ions, providing the first potential X-ray evidence for dust destruction in a supernova remnant. We use the Chandra data to test the following explanations for W44's X-ray bright center: (1) entropy mixing due to bulk mixing or thermal conduction, (2) evaporation of swept-up clouds, and (3) a metallicity gradient, possibly due to dust destruction and ejecta enrichment. In these tests, we assume that the remnant has evolved beyond the adiabatic evolutionary stage, which explains the X-ray dimness of the shell. The entropy-mixed model spectrum was tested against the Chandra spectrum for the remnant's projected center and found to be a good match. The evaporating clouds model was constrained by the finding that the ionization parameters of the bright knots are similar to those of the surrounding regions. While both the entropy-mixed and the evaporating-clouds models are known to predict centrally bright X-ray morphologies, their predictions fall short of the observed brightness gradient. The resulting brightness gap can be largely filled in by emission from the extra metals in and near the remnant's projected center. The preponderance of evidence (including that drawn from other studies) suggests that W44's remarkable morphology can be attributed to dust destruction.
Mixed dark matter in left-right symmetric models
Berlin, Asher; Fox, Patrick J.; Hooper, Dan; ...
2016-06-08
Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W' boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario, gR = gL. Furthermore, this region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.
Numerical Modeling of Mixing and Venting from Explosions in Bunkers
NASA Astrophysics Data System (ADS)
Liu, Benjamin
2005-07-01
2D and 3D numerical simulations were performed to study the dynamic interaction of explosion products in a concrete bunker with ambient air, stored chemical or biological warfare (CBW) agent simulant, and the surrounding walls and structure. The simulations were carried out with GEODYN, a multi-material, Godunov-based Eulerian code, that employs adaptive mesh refinement and runs efficiently on massively parallel computer platforms. Tabular equations of state were used for all materials with the exception of any high explosives employed, which were characterized with conventional JWL models. An appropriate constitutive model was used to describe the concrete. Interfaces between materials were either tracked with a volume-of-fluid method that used high-order reconstruction to specify the interface location and orientation, or a capturing approach was employed with the assumption of local thermal and mechanical equilibrium. A major focus of the study was to estimate the extent of agent heating that could be obtained prior to venting of the bunker and resultant agent dispersal. Parameters investigated included the bunker construction, agent layout, energy density in the bunker and the yield-to-agent mass ratio. Turbulent mixing was found to be the dominant heat transfer mechanism for heating the agent.
Analytical model for heterogeneous reactions in mixed porous media
Hatfield, K.; Burris, D.R.; Wolfe, N.L.
1996-08-01
The funnel/gate system is a developing technology for passive ground-water plume management and treatment. This technology uses sheet pilings as a funnel to force polluted ground water through a highly permeable zone of reactive porous media (the gate) where contaminants are degraded by biotic or abiotic heterogeneous reactions. This paper presents a new analytical nonequilibrium model for solute transport in saturated, nonhomogeneous or mixed porous media that could assist efforts to design funnel/gate systems and predict their performance. The model incorporates convective/dispersion transport, dissolved constituent decay, surface-mediated degradation, and time-dependent mass transfer between phases. Simulation studies of equilibrium and nonequilibrium transport conditions reveal manifestations of rate-limited degradation when mass-transfer times are longer than system hydraulic residence times, or when surface-mediated reaction rates are faster than solute mass-transfer processes (i.e., sorption, film diffusion, or intraparticle diffusion). For example, steady-state contaminant concentrations will be higher under a nonequilibrium transport scenario than would otherwise be expected when assuming equilibrium conditions. Thus, a funnel/gate system may fail to achieve desired ground-water treatment if the possibility of mass-transfer-limited degradation is not considered.
Linear mixed effects models under inequality constraints with applications.
Farnan, Laura; Ivanova, Anastasia; Peddada, Shyamal D
2014-01-01
Constraints arise naturally in many scientific experiments/studies, such as in epidemiology, biology, and toxicology, and researchers often ignore such information when analyzing their data, using standard methods such as the analysis of variance (ANOVA). Such methods may not only result in a loss of power and efficiency, and thus wasted experimental cost, but may also result in poor interpretation of the data. In this paper we discuss constrained statistical inference in the context of linear mixed effects models, which arise naturally in many applications such as repeated measurements designs, familial studies and others. We introduce a novel methodology that is broadly applicable for a variety of constraints on the parameters. Since in many applications sample sizes are small and/or the data are not necessarily normally distributed, and furthermore error variances need not be homoscedastic (i.e. there is heterogeneity in the data), we use an empirical best linear unbiased predictor (EBLUP)-type residual-based bootstrap methodology for deriving critical values of the proposed test. Our simulation studies suggest that the proposed procedure maintains the desired nominal Type I error while competing well with other tests in terms of power. We illustrate the proposed methodology by re-analyzing clinical trial data on blood mercury levels. The methodology introduced in this paper can be easily extended to other settings such as nonlinear and generalized regression models.
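The simple-order (monotone) constraints discussed above are typically enforced with the pool-adjacent-violators algorithm (PAVA). The sketch below, with hypothetical dose-group means, illustrates only this constrained-estimation step, not the authors' EBLUP residual bootstrap test:

```python
# Sketch: enforcing a non-decreasing (simple-order) constraint on group means
# via the pool-adjacent-violators algorithm (PAVA). The group means below are
# hypothetical and purely illustrative.

def pava(means, weights):
    """Weighted isotonic (non-decreasing) fit to a sequence of group means."""
    # Each block holds [pooled mean, pooled weight, number of groups pooled].
    blocks = []
    for m, w in zip(means, weights):
        blocks.append([m, w, 1])
        # Pool adjacent blocks while the ordering is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            pooled_w = w1 + w2
            pooled_m = (w1 * m1 + w2 * m2) / pooled_w
            blocks.append([pooled_m, pooled_w, n1 + n2])
    # Expand the pooled blocks back to one fitted value per group.
    fitted = []
    for m, w, n in blocks:
        fitted.extend([m] * n)
    return fitted

# Hypothetical unconstrained dose-group means that violate monotonicity.
raw = [1.0, 3.0, 2.0, 5.0]
fit = pava(raw, [1.0, 1.0, 1.0, 1.0])
print(fit)  # the violating pair (3.0, 2.0) is pooled to 2.5
```

In the constrained-inference setting, fitted values like these replace the unconstrained estimates inside the test statistic, whose null distribution is then approximated by the residual bootstrap.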
SU(4) chiral quark model with configuration mixing
NASA Astrophysics Data System (ADS)
Dahiya, Harleen; Gupta, Manmohan
2003-04-01
The chiral quark model with configuration mixing and broken SU(3)×U(1) symmetry is extended to include the contribution from cc¯ fluctuations by considering broken SU(4) instead of SU(3). The implications of such a model are studied for quark flavor and spin distribution functions corresponding to E866 and the NMC data. The predicted parameters regarding the charm spin distribution functions, for example, Δc, Δc/ΔΣ, Δc/c as well as the charm quark distribution functions, for example, c¯, 2c¯/(ū+d¯), 2c¯/(u+d) and (c+c¯)/∑(q+q¯) are in agreement with other similar calculations. Specifically, we find Δc=-0.009, Δc/ΔΣ=-0.02, c¯=0.03 and (c+c¯)/∑(q+q¯)=0.02 for the χQM parameters a=0.1, α=0.4, β=0.7, ζE866=-1-2β, ζNMC=-2-2β and γ=0.3; the latter appears due to the extension of SU(3) to SU(4).
Prediction of stock markets by the evolutionary mix-game model
NASA Astrophysics Data System (ADS)
Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping
2008-06-01
This paper presents our efforts to use the evolutionary mix-game model, a modified form of the agent-based mix-game model, to predict financial time series. We apply three methods to improve the original mix-game model by endowing agents with the ability to evolve their strategies, and then apply the resulting model, referred to as the evolutionary mix-game model, to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve the accuracy of prediction when proper parameters are chosen.
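A minimal sketch of one mix-game step may help fix ideas: one agent group is rewarded for joining the minority choice, the other for joining the majority, and each agent plays its currently best-scoring lookup-table strategy. All parameter values and the scoring rule here are illustrative assumptions, not the paper's calibration, and no strategy evolution is included:

```python
# Sketch of a (non-evolutionary) mix-game: minority-seeking and
# majority-seeking agents share one public history of outcomes.
import random

random.seed(1)

MEMORY = 3                       # bits of history a strategy conditions on
N_MINORITY, N_MAJORITY = 21, 10  # group sizes (odd total avoids ties)
N_STRATEGIES = 2                 # strategies held per agent

def random_strategy():
    # A strategy maps each possible history (2**MEMORY of them) to 0 or 1.
    return [random.randint(0, 1) for _ in range(2 ** MEMORY)]

class Agent:
    def __init__(self, wants_minority):
        self.wants_minority = wants_minority
        self.strategies = [random_strategy() for _ in range(N_STRATEGIES)]
        self.scores = [0] * N_STRATEGIES

    def act(self, history):
        best = self.scores.index(max(self.scores))
        return self.strategies[best][history]

    def update(self, history, minority_side):
        # Score every held strategy against this round's winning side.
        for i, s in enumerate(self.strategies):
            target = minority_side if self.wants_minority else 1 - minority_side
            self.scores[i] += 1 if s[history] == target else -1

agents = [Agent(True) for _ in range(N_MINORITY)] + \
         [Agent(False) for _ in range(N_MAJORITY)]
history = 0
series = []
for t in range(200):
    votes = sum(a.act(history) for a in agents)      # how many chose side 1
    minority_side = 1 if votes < (len(agents) - votes) else 0
    for a in agents:
        a.update(history, minority_side)
    series.append(votes)
    history = ((history << 1) | minority_side) % (2 ** MEMORY)

print(len(series), min(series) >= 0, max(series) <= len(agents))
```

The evolutionary variant in the paper additionally lets poorly performing agents replace their strategy sets over time; the attendance series `series` is the quantity matched against the financial time series.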
A mixing timescale model for TPDF simulations of turbulent premixed flames
Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...
2017-02-06
Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive
NASA Astrophysics Data System (ADS)
Chen, Wei-Kuo; Sen, Arnab
2016-12-01
We show that the limiting ground state energy of the spherical mixed p-spin model can be identified as the infimum of a certain variational problem. This complements the well-known Parisi formula for the limiting free energy in the spherical model. As an application, we obtain explicit formulas for the limiting ground state energy in the replica symmetric, one-step replica symmetry breaking and full replica symmetry breaking phases at zero temperature. In addition, our approach leads to new results on disorder chaos in spherical mixed even p-spin models. In particular, we prove that when there is no external field, the location of the ground state energy is chaotic under small perturbations of the disorder. We also establish that in the spherical mixed even p-spin model, the ground state energy superconcentrates in the absence of an external field, while it obeys a central limit theorem if the external field is present.
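For reference, the spherical mixed p-spin Hamiltonian in question is conventionally written as follows (a sketch of the standard definition and normalization, not necessarily the authors' exact conventions):

```latex
% State space: the sphere S_N = { \sigma \in \mathbb{R}^N : \|\sigma\|^2 = N }
H_N(\sigma) \;=\; \sum_{p \ge 2} \frac{\beta_p}{N^{(p-1)/2}}
  \sum_{i_1,\dots,i_p = 1}^{N} g_{i_1 \cdots i_p}\,
  \sigma_{i_1} \cdots \sigma_{i_p},
\qquad g_{i_1 \cdots i_p} \overset{\text{iid}}{\sim} \mathcal{N}(0,1),
```

so that the limiting ground state energy studied is \(\lim_{N\to\infty} N^{-1} \max_{\sigma \in S_N} H_N(\sigma)\), with the mixture coefficients \(\beta_p\) assumed to decay fast enough for the series to converge; "even" models keep only even p.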
Slama, G; Haardt, M J; Jean-Joseph, P; Costagliola, D; Goicolea, I; Bornet, F; Elgrably, F; Tchobroutsky, G
1984-07-21
The hyperglycaemic effect of 20 g sucrose taken at the end of a regular mixed meal by diabetic patients was measured in six adult type 1 diabetics, C-peptide negative, controlled by the artificial pancreas, and twelve adult type 2 diabetics, with fasting plasma glucose levels below 7.2 mmol/l (130 mg/100 ml) and post-prandial plasma glucose levels below 10.0 mmol/l (180 mg/100 ml), treated by diet alone or with glibenclamide and/or metformin. All the patients were given on consecutive days, in random order, two mixed meals of grilled meat, green beans, and cheese, as well as a cake made either of rice, skimmed milk, and saccharin (meal A) or rice, skimmed milk, and 20 g sucrose (meal B). The meals contained equal amounts of calories and of carbohydrate. There was no difference between the meals in plasma glucose curves and plasma insulin or insulin infusion rate variations, whether in peak values, peaking times, or areas under the curves, in either group of patients. Sparing use of sucrose taken during mixed meals might help well-controlled diabetic patients to comply with their daily dietary prescription while maintaining good blood glucose control.
Using Generalized Additive Models to Analyze Single-Case Designs
ERIC Educational Resources Information Center
Shadish, William; Sullivan, Kristynn
2013-01-01
Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators--currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…
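The point about higher-order trends can be made concrete with a small least-squares comparison: on hypothetical session data carrying a purely quadratic trend, a linear-only model leaves large residuals that a quadratic fit removes. The fitting code below uses plain normal equations and is illustrative only, not the GAM machinery of the paper:

```python
# Sketch: why a linear-trend-only analysis can miss curvature in single-case
# data. Fit linear and quadratic least squares to hypothetical session data
# and compare residual sums of squares.

def fit_polynomial(x, y, degree):
    """Least-squares polynomial fit via normal equations + Gaussian elimination."""
    n = degree + 1
    # Build the normal-equations system A c = b.
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in reversed(range(n)):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef  # coef[i] multiplies x**i

def rss(x, y, coef):
    return sum((yi - sum(c * xi ** i for i, c in enumerate(coef))) ** 2
               for xi, yi in zip(x, y))

sessions = list(range(10))
outcome = [0.1 * t * t for t in sessions]  # purely quadratic trend
lin = fit_polynomial(sessions, outcome, 1)
quad = fit_polynomial(sessions, outcome, 2)
print(rss(sessions, outcome, lin) > 1.0, rss(sessions, outcome, quad) < 1e-6)
```

A GAM replaces the fixed polynomial basis with penalized smooths, letting the data decide how much curvature to admit instead of pre-committing to a degree.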
Benchmark studies of thermal jet mixing in SFRs using a two-jet model
Omotowa, O. A.; Skifton, R.; Tokuhiro, A.
2012-07-01
To guide the modeling, simulations and design of Sodium Fast Reactors (SFRs), we explore and compare the predictive capabilities of two numerical solvers, COMSOL and OpenFOAM, for the thermal jet mixing of two buoyant jets typical of the outlet flow from an SFR tube bundle. This process will help optimize on-going experimental efforts at obtaining high resolution data for verification and validation (V&V) of CFD codes as anticipated in next generation nuclear systems. Using the k-ε turbulence models of both codes as reference, their ability to simulate the turbulence behavior in similar environments was first validated against single jet experimental data reported in the literature. This study investigates the thermal mixing of two parallel jets having a temperature difference (hot-to-cold) ΔT_hc = 5 °C and 10 °C and velocity ratios U_c/U_h = 0.5 and 1. Results of the computed turbulent quantities due to convective mixing and the variations in the flow field along the axial position are presented. In addition, this study also evaluates the effect of the spacing ratio between jets in predicting the flow field and jet behavior in the near and far fields.
Mixing effects in postdischarge modeling of electric discharge oxygen-iodine laser experiments
NASA Astrophysics Data System (ADS)
Palla, Andrew D.; Carroll, David L.; Verdeyen, Joseph T.; Solomon, Wayne C.
2006-07-01
In an electric discharge oxygen-iodine laser, laser action at 1315 nm on the I(²P1/2) → I(²P3/2) transition of atomic iodine is obtained by a near-resonant energy transfer from O2(a¹Δ), which is produced using a low-pressure electric discharge. The discharge production of atomic oxygen, ozone, and other excited species adds higher levels of complexity to the postdischarge kinetics which are not encountered in a classic purely chemical O2(a¹Δ) generation system. Mixing effects are also present. In this paper we present postdischarge modeling results obtained using a modified version of the BLAZE-II gas laser code. A 28-species, 105-reaction chemical kinetic reaction set for the postdischarge kinetics is presented. Calculations were performed to ascertain the impact of a two-stream mixing mechanism on the numerical model and to study gain as a function of reactant mass flow rates. The calculations were compared with experimental data. Agreement with experimental data was improved with the addition of new kinetics and the mixing mechanism.
Effects of mixing on post-discharge modeling of ElectricOIL experiments
NASA Astrophysics Data System (ADS)
Palla, Andrew D.; Carroll, David L.; Verdeyen, Joseph T.; Solomon, Wayne C.
2006-02-01
In an electric discharge oxygen-iodine laser (ElectricOIL), the desired O2(a¹Δ) is produced using a low-to-medium pressure electric discharge. The discharge production of atomic oxygen, ozone, and other excited species adds higher levels of complexity to the post-discharge kinetics which are not encountered in a classic purely chemical O2(a¹Δ) generation system. Mixing effects are also present. In this paper we present post-discharge modeling results obtained using a modified version of the Blaze-II gas laser code. A 28-species, 105-reaction chemical kinetic reaction set for the post-discharge kinetics is presented. Calculations were performed to ascertain the impact of a two-stream mixing mechanism on the numerical model and to study gain as a function of reactant mass flow rates. The calculations were compared with experimental data. Agreement with experimental data was improved with the addition of new kinetics and the mixing mechanism.
Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.
Gür, Y
2014-12-01
The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data were considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, showing the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models.
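The STL format that the DICOM data are converted into has a simple binary layout: an 80-byte header, a 4-byte triangle count, then 50 bytes per facet (normal vector, three vertices, and an attribute word). The sketch below writes one hypothetical facet just to illustrate the layout; real anatomical meshes come from segmenting the CT volume:

```python
# Sketch: the binary STL layout targeted by the DICOM -> STL conversion step.
# The single facet is a made-up triangle, purely illustrative.
import os
import struct

def write_binary_stl(path, facets):
    """facets: list of (normal, v1, v2, v3), each a 3-tuple of floats."""
    with open(path, "wb") as f:
        f.write(b"\x00" * 80)                    # 80-byte header (unused)
        f.write(struct.pack("<I", len(facets)))  # little-endian facet count
        for normal, v1, v2, v3 in facets:
            for vec in (normal, v1, v2, v3):
                f.write(struct.pack("<3f", *vec))  # 12 bytes per vector
            f.write(struct.pack("<H", 0))          # attribute byte count

facet = ((0.0, 0.0, 1.0),            # facet normal
         (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))  # vertices
write_binary_stl("triangle.stl", [facet])
# 80 (header) + 4 (count) + 50 (one facet) bytes:
print(os.path.getsize("triangle.stl"))  # → 134
```

Once such a file exists, the slicer of any FDM printer can consume it directly.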
A Mathematical Modeling Study of Tracer Mixing in a Continuous Casting Tundish
NASA Astrophysics Data System (ADS)
Chen, Chao; Jonsson, Lage Tord Ingemar; Tilliander, Anders; Cheng, Guoguang; Jönsson, Pär Göran
2015-02-01
A mathematical model based on a water model was developed to study the tracer mixing in a single strand tundish. The mixing behavior of black ink and KCl solution was simulated by a mixed composition fluid model, and the data were validated by water modeling results. In addition, a model that solves the scalar transport equation (STE) without any physical properties of the tracer was studied and the results were compared to predictions using the density-coupled model. Furthermore, the mixing behaviors of different amounts of KCl tracers were investigated. Before the model was established, KCl tracer properties such as the KCl molecule diffusion (KMD), the water molecule self-diffusion (WSD) in KCl solution, and the KCl solution viscosity (KV) were evaluated. The RTD curve of 250 mL KCl for the KMD case was closer to the water modeling results than that of the case implemented with only density. Moreover, the ensemble average deviation of the RTD curves of the cases implemented with KMD + WSD, KMD + KV, and KMD + WSD + KV to the KMD case is less than 0.7 pct. Thus, the water self-diffusion and KV were neglected, while the KCl density and KMD were implemented in the current study. The flow pattern of black ink was similar to the STE result, i.e., the fluid flowed upwards toward the top surface and formed a large circulating flow at the outlet nozzle. The flow behavior of the 100, 150, and 250 mL KCl cases exhibited a strong tendency to sink to the tundish bottom, and subsequently flow through the holes in the dam. Thereafter, it propagated toward the outlet nozzle. Regarding the KCl tracer amount, the tracer concentration propagated to the outlet nozzle much faster for the larger amount case than for the smaller amount cases. However, the flow pattern for the 50 mL KCl case was somewhat different. The fluid propagated to the top surface, behaving like the black ink during the initial injection, and subsequently the fluid flowed throughout the holes at a much slower pace
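The residence-time-distribution (RTD) curves compared above are usually reduced to a few scalar summaries. The sketch below, with hypothetical outlet concentrations, shows the standard reduction: normalize the concentration curve to E(t) and take its first moment as the mean residence time, using trapezoidal integration:

```python
# Sketch: mean residence time from an RTD curve. The sampled tracer
# concentrations are hypothetical, not the tundish measurements.

def trapz(y, x):
    """Trapezoidal-rule integral of sampled y(x)."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

def mean_residence_time(t, conc):
    area = trapz(conc, t)                 # normalizing constant
    e = [c / area for c in conc]          # E(t), integrates to 1
    return trapz([ti * ei for ti, ei in zip(t, e)], t)  # first moment

# Hypothetical outlet tracer concentrations sampled every 10 s.
t = [0, 10, 20, 30, 40, 50, 60]
c = [0.0, 0.8, 1.6, 1.2, 0.6, 0.2, 0.0]
print(round(mean_residence_time(t, c), 2))  # → 25.0
```

Deviations between curves, like the ensemble average deviation quoted in the abstract, are computed on the normalized E(t) curves rather than on raw concentrations.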
From linear to generalized linear mixed models: A case study in repeated measures
Technology Transfer Automated Retrieval System (TEKTRAN)
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
NASA Astrophysics Data System (ADS)
Jiang, Hao; Panagiotopoulos, Athanassios Z.; Economou, Ioannis G.
2016-03-01
Statistical associating fluid theory (SAFT) is used to model CO2 solubilities in single and mixed electrolyte solutions. The proposed SAFT model implements an improved mean spherical approximation in the primitive model to represent the electrostatic interactions between ions, using a parameter K to correct the excess energies ("KMSA" for short). With the KMSA formalism, the proposed model is able to accurately describe the mean ionic activity coefficients and liquid densities of electrolyte solutions including Na+, K+, Ca2+, Mg2+, Cl-, Br- and SO42- from 298.15 K to 473.15 K using mostly temperature-independent parameters, the sole exception being the anion volumes. CO2 is modeled as a non-associating molecule, and temperature-dependent CO2-H2O and CO2-ion cross interactions are used to obtain CO2 solubilities in H2O and in single-electrolyte solutions. Without any additional fitting parameters, CO2 solubilities in mixed electrolyte solutions and synthetic brines are predicted, in good agreement with experimental measurements.
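The salting-out trend such a model must capture can be illustrated with a much simpler empirical relation than SAFT: the Setschenow equation, log10(S0/S) = ks·c, where S0 is the pure-water solubility, ks a salt-specific coefficient, and c the salt molality. The numbers below are illustrative placeholders, not fitted SAFT or literature parameters:

```python
# Sketch: empirical Setschenow salting-out relation as a simple stand-in for
# the trend the SAFT model describes. s0 and ks values are hypothetical.

def co2_solubility(s0, ks, c_salt):
    """Solubility in brine from pure-water solubility s0 (mol/kg),
    Setschenow coefficient ks (kg/mol), and salt molality c_salt (mol/kg)."""
    return s0 * 10.0 ** (-ks * c_salt)

s0 = 0.03   # hypothetical CO2 solubility in pure water, mol/kg
ks = 0.12   # hypothetical Setschenow coefficient for NaCl, kg/mol
for molality in (0.0, 1.0, 4.0):
    print(molality, round(co2_solubility(s0, ks, molality), 4))
```

The advantage of SAFT over such correlations is that, once the ion parameters are fixed on activity and density data, mixed-brine solubilities follow predictively, as the abstract emphasizes.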
On the application of mixed hidden Markov models to multiple behavioural time series
Schliehe-Diecks, S.; Kappeler, P. M.; Langrock, R.
2012-01-01
Analysing behavioural sequences and quantifying the likelihood of occurrences of different behaviours is a difficult task as motivational states are not observable. Furthermore, it is ecologically highly relevant and yet more complicated to scale an appropriate model for one individual up to the population level. In this manuscript (mixed) hidden Markov models (HMMs) are used to model the feeding behaviour of 54 subadult grey mouse lemurs (Microcebus murinus), small nocturnal primates endemic to Madagascar that forage solitarily. Our primary aim is to introduce ecologists and other users to various HMM methods, many of which have been developed only recently, and which in this form have not previously been synthesized in the ecological literature. Our specific application of mixed HMMs aims at gaining a better understanding of mouse lemur behaviour, in particular concerning sex-specific differences. The model we consider incorporates random effects for accommodating heterogeneity across animals, i.e. accounts for different personalities of the animals. Additional subject- and time-specific covariates in the model describe the influence of sex, body mass and time of night. PMID:23565332
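The likelihood computation underlying all the HMM variants discussed is the forward algorithm; mixed HMMs wrap it with animal-level random effects. The sketch below implements the scaled forward recursion for a two-state, discrete-output HMM with illustrative probabilities, not the fitted mouse-lemur values:

```python
# Sketch: scaled forward algorithm for a discrete-output HMM, the likelihood
# building block that mixed HMMs extend with random effects.
import math

def forward_loglik(obs, init, trans, emit):
    """log P(obs) under an HMM with initial, transition, emission probs."""
    n = len(init)
    alpha = list(init)
    loglik = 0.0
    for t, o in enumerate(obs):
        if t > 0:  # propagate one step through the transition matrix
            alpha = [sum(alpha[r] * trans[r][s] for r in range(n))
                     for s in range(n)]
        alpha = [alpha[s] * emit[s][o] for s in range(n)]  # emit observation
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]  # rescale to avoid underflow
    return loglik

# Two hidden states (e.g. "feeding", "resting"), two observable symbols.
init = [0.5, 0.5]
trans = [[0.9, 0.1],
         [0.2, 0.8]]
emit = [[0.8, 0.2],   # P(symbol | state 0)
        [0.3, 0.7]]   # P(symbol | state 1)
print(round(forward_loglik([0, 0, 1, 1, 0], init, trans, emit), 4))
```

In a mixed HMM, entries of `trans` (or the state-dependent distributions) become functions of covariates such as sex or body mass plus a random intercept per animal, and the marginal likelihood integrates over that random effect.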
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase
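The core idea of D-optimality can be shown on a far simpler model than the Markov-chain sleep model: for a two-parameter logistic dose-response with binary outcomes, the per-dose Fisher information is p(1-p)·xxᵀ with x = (1, dose), and a D-optimal design maximizes det(FIM). The parameter values below are illustrative assumptions; only the candidate doses echo the reference designs:

```python
# Sketch: D-optimal choice of two dose levels for a two-parameter logistic
# model p(dose) = 1 / (1 + exp(-(a + b*dose))). Parameters a, b are
# hypothetical, not the sleep model's estimates.
import math
from itertools import combinations

def fim(doses, a, b):
    """2x2 Fisher information matrix for one binary observation per dose."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for d in doses:
        p = 1.0 / (1.0 + math.exp(-(a + b * d)))
        w = p * (1.0 - p)            # variance of a Bernoulli observation
        x = (1.0, d)
        for i in range(2):
            for j in range(2):
                m[i][j] += w * x[i] * x[j]
    return m

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

a, b = -2.0, 0.4
candidates = [0.1, 1.0, 6.0, 10.0, 20.0]
best = max(combinations(candidates, 2),
           key=lambda pair: det2(fim(pair, a, b)))
print(best)
```

The full problem in the paper replaces this scalar Bernoulli information with the weighted sum of the two Markov-component FIMs and optimizes over doses and group sizes jointly.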
Non-additive model for specific heat of electrons
NASA Astrophysics Data System (ADS)
Anselmo, D. H. A. L.; Vasconcelos, M. S.; Silva, R.; Mello, V. D.
2016-10-01
By using the non-additive Tsallis entropy Sq, we demonstrate numerically that one-dimensional quasicrystals, whose energy spectra are multifractal Cantor sets, are characterized by an entropic parameter, and we calculate the electronic specific heat. In our method we consider an energy spectrum calculated using the one-dimensional tight-binding Schrödinger equation, whose bands (or levels) are scaled onto the [0, 1] interval. The Tsallis formalism is applied to the energy spectra of Fibonacci and double-period one-dimensional quasiperiodic lattices. We analytically obtain an expression for the specific heat that we consider more appropriate for calculating this quantity in these quasiperiodic structures.
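The Tsallis entropy referred to above is S_q = (1 - Σ_i p_i^q)/(q - 1), which recovers the Boltzmann-Gibbs-Shannon entropy in the limit q → 1. A minimal numerical sketch (the probability distribution is illustrative, not a quasicrystal spectrum):

```python
# Sketch: non-additive Tsallis entropy S_q of a discrete distribution,
# with the q -> 1 Shannon limit handled explicitly.
import math

def tsallis_entropy(p, q):
    if abs(q - 1.0) < 1e-12:  # q -> 1 limit: Shannon entropy
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
print(round(tsallis_entropy(p, 1.0), 4))  # → 1.0397 (Shannon limit)
print(round(tsallis_entropy(p, 2.0), 4))  # → 0.625
```

The non-additivity shows up for independent systems A and B: S_q(A+B) = S_q(A) + S_q(B) + (1-q)·S_q(A)·S_q(B), which is what lets the entropic index q encode the multifractal character of the spectrum.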
Modeling of additive manufacturing processes for metals: Challenges and opportunities
Francois, Marianne M.; Sun, Amy; King, Wayne E.; ...
2017-01-09
With the technology being developed to manufacture metallic parts using increasingly advanced additive manufacturing processes, a new era has opened up for designing novel structural materials, from designing shapes and complex geometries to controlling the microstructure (alloy composition and morphology). The material properties used within specific structural components are also designable in order to meet specific performance requirements that are not imaginable with traditional metal forming and machining (subtractive) techniques.
Guarana provides additional stimulation over caffeine alone in the planarian model.
Moustakas, Dimitrios; Mezzio, Michael; Rodriguez, Branden R; Constable, Mic Andre; Mulligan, Margaret E; Voura, Evelyn B
2015-01-01
The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians with substance exposure has been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose.
Analysis and Modeling of soil hydrology under different soil additives in artificial runoff plots
NASA Astrophysics Data System (ADS)
Ruidisch, M.; Arnhold, S.; Kettering, J.; Huwe, B.; Kuzyakov, Y.; Ok, Y.; Tenhunen, J. D.
2009-12-01
Monsoon events during June and July in the Korean project region, the Haean Basin in the northeastern part of South Korea, play a key role in erosion, leaching and the risk of groundwater pollution by agrochemicals. The project therefore investigates the main hydrological processes in agricultural soils under field and laboratory conditions on different scales (plot, hillslope and catchment). Soil hydrological parameters were analysed depending on different soil additives, which are known to prevent soil erosion and nutrient loss as well as to increase water infiltration, aggregate stability and soil fertility. Hence, synthetic water-soluble polyacrylamide (PAM), biochar (black carbon mixed with organic fertilizer), and both PAM and biochar together were applied in runoff plots at three agricultural field sites. Additionally, a subplot was set up without any additives as a control. The field sites were selected in areas with similar hillslope gradients and with emphasis on the dominant land management form of dryland farming in Haean, which is characterised by row planting and row covering by foil. Hydrological parameters such as saturated hydraulic conductivity, matric potential and water content were analysed by infiltration experiments, continuous tensiometer measurements and time domain reflectometry, as well as pressure plates to identify the characteristic water retention curve of each horizon. Weather data were observed by three weather stations next to the runoff plots. The measured data also provide the input data for modeling water transport in the unsaturated zone in the runoff plots with HYDRUS 1D/2D/3D and SWAT (Soil & Water Assessment Tool).
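Water retention curves of the kind derived from the pressure-plate and tensiometer data above are commonly parameterized with the van Genuchten model. The sketch below uses roughly loam-like shape parameters as illustrative assumptions, not the values fitted for the Haean plots:

```python
# Sketch: van Genuchten water-retention curve theta(h). Parameter values are
# illustrative (roughly loam-like), not site-specific fits.

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h (cm, positive = drier)."""
    if h <= 0:
        return theta_s                      # saturated
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * h) ** n) ** (-m)   # effective saturation, 0..1
    return theta_r + (theta_s - theta_r) * se

params = dict(theta_r=0.05, theta_s=0.43, alpha=0.036, n=1.56)
for suction in (0, 10, 100, 1000, 15000):
    print(suction, round(van_genuchten(suction, **params), 3))
```

Curves like this, one per soil horizon, are exactly the input HYDRUS needs to solve the Richards equation for the unsaturated-zone transport described above.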
Additional Research Needs to Support the GENII Biosphere Models
Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen
2013-11-30
In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved both to accommodate the locally significant pathways identified and also to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models. It is recommended that priorities be set by NRC staff to guide selection of the most useful improvements in a cost-effective manner. Suggestions are made based on relatively easy and inexpensive changes, and longer-term more costly studies. In the short term, there are several improved model formulations that could be applied to the GENII suite of codes to make them more generally useful:
• Implementation of the separation of the translocation and weathering processes
• Implementation of an improved model for carbon-14 from non-atmospheric sources
• Implementation of radon exposure pathway models
• Development of a KML processor for the output report generator module, so that data calculated on a grid could be superimposed upon digital maps for easier presentation and display
• Implementation of marine mammal models (manatees, seals, walrus, whales, etc.)
Data needs in the longer term require extensive (and potentially expensive) research. Before picking any one radionuclide or food type, NRC staff should perform an in-house review of current and anticipated environmental analyses to select “dominant” radionuclides of interest to allow setting of cost-effective priorities for radionuclide- and pathway-specific research. These include:
• soil-to-plant uptake studies for oranges and other citrus fruits, and
• development of models for evaluation of radionuclide concentrations in highly-processed foods such as oils and sugars.
Finally, renewed
Addition of a Hydrological Cycle to the EPIC Jupiter Model
NASA Astrophysics Data System (ADS)
Dowling, T. E.; Palotai, C. J.
2002-09-01
We present a progress report on the development of the EPIC atmospheric model to include clouds, moist convection, and precipitation. Two major goals are: i) to study the influence that convective water clouds have on Jupiter's jets and vortices, such as those to the northwest of the Great Red Spot, and ii) to predict ammonia-cloud evolution for direct comparison to visual images (instead of relying on surrogates for clouds like potential vorticity). Data structures in the model are now set up to handle the vapor, liquid, and solid phases of the most common chemical species in planetary atmospheres. We have adapted the Prather conservation of second-order moments advection scheme to the model, which yields high accuracy for dealing with cloud edges. In collaboration with computer scientists H. Dietz and T. Mattox at the U. Kentucky, we have built a dedicated 40-node parallel computer that achieves 34 Gflops (double precision) at 74 cents per Mflop, and have updated the EPIC-model code to use cache-aware memory layouts and other modern optimizations. The latest test-case results of cloud evolution in the model will be presented. This research is funded by NASA's Planetary Atmospheres and EPSCoR programs.
Swinarski, M; Makinia, J; Stensel, H D; Czerwionka, K; Drewnowski, J
2012-08-01
The aim of this study was to expand the International Water Association Activated Sludge Model No. 2d (ASM2d) to account for a newly defined readily biodegradable substrate that can be consumed by polyphosphate-accumulating organisms (PAOs) under anoxic and aerobic conditions, but not under anaerobic conditions. The model change was to add a new substrate component and process terms for its use by PAOs and other heterotrophic bacteria under anoxic and aerobic conditions. The Gdansk (Poland) wastewater treatment plant (WWTP), which has a modified University of Cape Town (MUCT) process for nutrient removal, provided field data and mixed liquor for batch tests for model evaluation. The original ASM2d was first calibrated under dynamic conditions with the results of batch tests with settled wastewater and mixed liquor, in which nitrate-uptake rates, phosphorus-release rates, and anoxic phosphorus uptake rates were followed. Model validation was conducted with data from a 96-hour measurement campaign in the full-scale WWTP. The results of similar batch tests with ethanol and fusel oil as the external carbon sources were used to adjust kinetic and stoichiometric coefficients in the expanded ASM2d. Both models were compared based on their predictions of the effect of adding supplemental carbon to the anoxic zone of an MUCT process. In comparison with the ASM2d, the new model better predicted the anoxic behaviors of carbonaceous oxygen demand, nitrate-nitrogen (NO3-N), and phosphorous (PO4-P) in batch experiments with ethanol and fusel oil. However, when simulating ethanol addition to the anoxic zone of a full-scale biological nutrient removal facility, both models predicted similar effluent NO3-N concentrations (6.6 to 6.9 g N/m3). For the particular application, effective enhanced biological phosphorus removal was predicted by both models with external carbon addition but, for the new model, the effluent PO4-P concentration was approximately one-half of that found from
Age of stratospheric air and aging by mixing in global models
NASA Astrophysics Data System (ADS)
Garny, Hella; Dietmüller, Simone; Plöger, Felix; Birner, Thomas; Bönisch, Harald; Jöckel, Patrick
2016-04-01
The Brewer-Dobson circulation is often quantified by the integrated transport measure age of air (AoA). AoA is affected by all transport processes, including transport along the residual mean mass circulation and two-way mixing. A large spread in the simulation of AoA by current global models exists. Using CCMVal-2 and CCMI-1 global model data, we show that this spread can only in small parts be attributed to differences in the simulated residual circulation. Instead, large differences in the "mixing efficiency" strongly contribute to the differences in the simulated AoA. The "mixing efficiency" is defined as the ratio of the two-way mixing mass flux across the subtropical barrier to the net (residual) mass flux, and this mixing efficiency controls the relative increase in AoA by mixing. We derive the mixing efficiency from global model data using the analytical solution of a simplified version of the tropical leaky pipe (TLP) model, in which vertical diffusion is neglected. Thus, it is assumed that only residual mean transport and horizontal two-way mixing across the subtropical barrier controls AoA. However, in global models vertical mixing and numerical diffusion modify AoA, and these processes likely contribute to the differences in the mixing efficiency between models. We explore the contributions of diffusion and mixing on mean AoA by a) using simulations with the tropical leaky pipe model including vertical diffusion and b) explicit calculations of aging by mixing on resolved scales. Using the TLP model, we show that vertical diffusion leads to a decrease in tropical AoA, i.e. counteracts the increase in tropical mean AoA due to horizontal mixing. Thus, neglecting vertical diffusion leads to an underestimation of the mixing efficiency. With explicit calculations of aging by mixing via integration of daily local mixing tendencies along residual circulation trajectories, we explore the contributions of vertical and horizontal mixing for aging by mixing. The
Generalized Additive Models, Cubic Splines and Penalized Likelihood.
1987-05-22
in case-control studies). All models in the table include dummy variables to account for the matching. The first 3 lines of the table indicate that … Breslow, N. and Day, N. (1980). Statistical Methods in Cancer Research, Volume 1: The Analysis of Case-Control Studies. International Agency
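The fragment above concerns smoothing by penalized likelihood. As a minimal illustration, the discrete Whittaker-style smoother below stands in for penalized cubic-spline fitting; the penalty weight and gradient-descent settings are illustrative choices, not from the report.

```python
def penalized_smooth(y, lam=5.0, lr=0.005, iters=3000):
    """Minimize sum (y_i - z_i)^2 + lam * sum (second difference of z)^2
    by plain gradient descent -- a discrete penalized-likelihood smoother."""
    z = list(y)
    n = len(y)
    for _ in range(iters):
        # gradient of the data-fidelity term
        g = [2.0 * (z[i] - y[i]) for i in range(n)]
        # gradient of the roughness penalty on second differences
        for i in range(1, n - 1):
            d2 = z[i - 1] - 2.0 * z[i] + z[i + 1]
            g[i - 1] += 2.0 * lam * d2
            g[i] -= 4.0 * lam * d2
            g[i + 1] += 2.0 * lam * d2
        z = [z[i] - lr * g[i] for i in range(n)]
    return z
```

Larger `lam` trades fidelity to the data for smoothness, the same trade-off the penalized-likelihood framework formalizes.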
Modeling of mixing processes: Fluids, particulates, and powders
Ottino, J.M.; Hansen, S.
1995-12-31
Work under this grant involves two main areas: (1) Mixing of Viscous Liquids, this first area comprising aggregation, fragmentation and dispersion, and (2) Mixing of Powders. In order to produce a coherent self-contained picture, we report primarily on results obtained under (1), and within this area, mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius is independent of mass, the polydispersity is constant at long times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters is dependent upon the mixing.
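The well-mixed, mass-independent capture case can be sketched with a toy aggregation model (the uniform-random merge rule is an illustrative stand-in for the flow simulations described above):

```python
import random

def aggregate(n_monomers, n_merges, seed=1):
    """Well-mixed aggregation with a mass-independent capture rate:
    repeatedly merge two clusters chosen uniformly at random."""
    random.seed(seed)
    clusters = [1] * n_monomers           # cluster masses, all monomers
    mean_sizes = []
    for _ in range(n_merges):
        i, j = random.sample(range(len(clusters)), 2)
        clusters[i] += clusters[j]        # merge cluster j into cluster i
        clusters.pop(j)
        mean_sizes.append(sum(clusters) / len(clusters))
    return clusters, mean_sizes
```

In this toy the mean cluster size is exactly N0 / (N0 - k) after k merges, i.e. it grows algebraically, echoing the compact-cluster result quoted in the abstract.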
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie
2013-01-01
Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be
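The CA and IA predictions named above can be sketched for full agonists under a Hill dose-response model (the Hill parameterization and bisection tolerance are illustrative assumptions; GCA, which the paper needed for partial agonists, is not implemented here):

```python
def hill_effect(c, ec50, h):
    # Fractional effect (0..1) of a single agonist under a Hill model
    return c**h / (c**h + ec50**h)

def ec_x(x, ec50, h):
    # Concentration producing fractional effect x under the same Hill model
    return ec50 * (x / (1.0 - x)) ** (1.0 / h)

def ca_mixture_effect(concs, ec50s, hs, tol=1e-9):
    """Concentration-addition prediction: find the effect level x with
    sum_i c_i / EC_{x,i} = 1, by bisection on x."""
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        s = sum(c / ec_x(mid, e, h) for c, e, h in zip(concs, ec50s, hs))
        if s > 1.0:          # effect level set too low -> raise it
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ia_mixture_effect(concs, ec50s, hs):
    """Independent-action prediction: E = 1 - prod_i (1 - E_i)."""
    p = 1.0
    for c, e, h in zip(concs, ec50s, hs):
        p *= 1.0 - hill_effect(c, e, h)
    return 1.0 - p
```

A useful sanity check is the "sham mixture": splitting one chemical into two halves must leave the CA prediction unchanged.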
Modeling of surface temperature effects on mixed material migration in NSTX-U
NASA Astrophysics Data System (ADS)
Nichols, J. H.; Jaworski, M. A.; Schmid, K.
2016-10-01
NSTX-U will initially operate with graphite walls, periodically coated with thin lithium films to improve plasma performance. However, the spatial and temporal evolution of these films during and after plasma exposure is poorly understood. The WallDYN global mixed-material surface evolution model has recently been applied to the NSTX-U geometry to simulate the evolution of poloidally inhomogenous mixed C/Li/O plasma-facing surfaces. The WallDYN model couples local erosion and deposition processes with plasma impurity transport in a non-iterative, self-consistent manner that maintains overall material balance. Temperature-dependent sputtering of lithium has been added to WallDYN, utilizing an adatom sputtering model developed from test stand experimental data. Additionally, a simplified temperature-dependent diffusion model has been added to WallDYN so as to capture the intercalation of lithium into a graphite bulk matrix. The sensitivity of global lithium migration patterns to changes in surface temperature magnitude and distribution will be examined. The effect of intra-discharge increases in surface temperature due to plasma heating, such as those observed during NSTX Liquid Lithium Divertor experiments, will also be examined. Work supported by US DOE contract DE-AC02-09CH11466.
Technical Work Plan for: Additional Multiscale Thermohydrologic Modeling
B. Kirstein
2006-08-24
The primary objective of Revision 04 of the MSTHM report is to provide TSPA with revised repository-wide MSTHM analyses that incorporate updated percolation flux distributions, revised hydrologic properties, updated IEDs, and information pertaining to the emplacement of transport, aging, and disposal (TAD) canisters. The updated design information is primarily related to the incorporation of TAD canisters, but also includes updates related to superseded IEDs describing emplacement drift cross-sectional geometry and layout. The intended use of the results of Revision 04 of the MSTHM report, as described in this TWP, is to predict the evolution of TH conditions (temperature, relative humidity, liquid-phase saturation, and liquid-phase flux) at specified locations within emplacement drifts and in the adjoining near-field host rock along all emplacement drifts throughout the repository. This information directly supports the TSPA for the nominal and seismic scenarios. The revised repository-wide analyses are required to incorporate updated parameters and design information and to extend those analyses out to 1,000,000 years. Note that the previous MSTHM analyses reported in Revision 03 of Multiscale Thermohydrologic Model (BSC 2005 [DIRS 173944]) only extend out to 20,000 years. The updated parameters are the percolation flux distributions, including incorporation of post-10,000-year distributions, and updated calibrated hydrologic property values for the host-rock units. The applied calibrated hydrologic properties will be an updated version of those available in Calibrated Properties Model (BSC 2004 [DIRS 169857]). These updated properties will be documented in an Appendix of Revision 03 of UZ Flow Models and Submodels (BSC 2004 [DIRS 169861]). The updated calibrated properties are applied because they represent the latest available information. The reasonableness of applying the updated calibrated properties to the prediction of near-field in-drift TH conditions
Commute Maps: Separating Slowly Mixing Molecular Configurations for Kinetic Modeling.
Noé, Frank; Banisch, Ralf; Clementi, Cecilia
2016-11-08
Identification of the main reaction coordinates and building of kinetic models of macromolecular systems require a way to measure distances between molecular configurations that can distinguish slowly interconverting states. Here we define the commute distance that can be shown to be closely related to the expected commute time needed to go from one configuration to the other, and back. A practical merit of this quantity is that it can be easily approximated from molecular dynamics data sets when an approximation of the Markov operator eigenfunctions is available, which can be achieved by the variational approach to approximate eigenfunctions of Markov operators, also called variational approach of conformation dynamics (VAC) or the time-lagged independent component analysis (TICA). The VAC or TICA components can be scaled such that a so-called commute map is obtained in which Euclidean distance corresponds to the commute distance, and thus kinetic models such as Markov state models can be computed based on Euclidean operations, such as standard clustering. In addition, the distance metric gives rise to a quantity we call total kinetic content, which is an excellent score to rank input feature sets and kinetic model quality.
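The commute-map construction described above can be sketched with a minimal TICA implementation; the sqrt(t_i / 2) scaling follows the paper's construction, while the whitening details and the autoregressive test data are illustrative assumptions.

```python
import numpy as np

def commute_map(X, lag):
    """Commute-map sketch: TICA components scaled by sqrt(t_i / 2) so that
    Euclidean distances in the map approximate commute distances."""
    X = X - X.mean(axis=0)
    n = len(X) - lag
    C0 = X[:-lag].T @ X[:-lag] / n             # instantaneous covariance
    Ct = X[:-lag].T @ X[lag:] / n              # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                     # symmetrize (reversibility)
    evals0, U = np.linalg.eigh(C0)
    W = U / np.sqrt(evals0)                    # whitening transform
    lam, V = np.linalg.eigh(W.T @ Ct @ W)      # TICA eigenvalues
    order = np.argsort(lam)[::-1]
    lam, V = lam[order], V[:, order]
    keep = (lam > 1e-12) & (lam < 1.0)         # valid autocorrelations only
    t = -lag / np.log(lam[keep])               # implied timescales
    tics = X @ (W @ V[:, keep])
    return tics * np.sqrt(t / 2.0), t
```

Standard Euclidean clustering on the returned coordinates then respects kinetic (commute) distances, which is the point of the construction.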
Comparison of Mixed-Model Approaches for Association Mapping
Stich, Benjamin; Möhring, Jens; Piepho, Hans-Peter; Heckenberger, Martin; Buckler, Edward S.; Melchinger, Albrecht E.
2008-01-01
Association-mapping methods promise to overcome the limitations of linkage-mapping methods. The main objectives of this study were to (i) evaluate various methods for association mapping in the autogamous species wheat using an empirical data set, (ii) determine a marker-based kinship matrix using a restricted maximum-likelihood (REML) estimate of the probability of two alleles at the same locus being identical in state but not identical by descent, and (iii) compare the results of association-mapping approaches based on adjusted entry means (two-step approaches) with the results of approaches in which the phenotypic data analysis and the association analysis were performed in one step (one-step approaches). On the basis of the phenotypic and genotypic data of 303 soft winter wheat (Triticum aestivum L.) inbreds, various association-mapping methods were evaluated. Spearman's rank correlation between P-values calculated on the basis of one- and two-stage association-mapping methods ranged from 0.63 to 0.93. The mixed-model association-mapping approaches using a kinship matrix estimated by REML are more appropriate for association mapping than the recently proposed QK method with respect to (i) the adherence to the nominal α-level and (ii) the adjusted power for detection of quantitative trait loci. Furthermore, we showed that our data set could be analyzed by using two-step approaches of the proposed association-mapping method without substantially increasing the empirical type I error rate in comparison to the corresponding one-step approaches. PMID:18245847
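A marker-based kinship matrix is the key ingredient of the mixed-model approaches above. The paper estimates kinship by REML; the VanRaden-style centered cross-product below is a simpler, common surrogate, with 0/1/2 marker coding assumed.

```python
import numpy as np

def kinship_matrix(M):
    """Marker-based kinship sketch (VanRaden-style). Rows of M are
    individuals, columns are markers coded 0/1/2 (minor-allele counts)."""
    p = M.mean(axis=0) / 2.0                   # estimated allele frequencies
    Z = M - 2.0 * p                            # center each marker by 2p
    denom = 2.0 * np.sum(p * (1.0 - p))        # expected variance scaling
    return Z @ Z.T / denom
```

The resulting matrix enters the mixed model as the covariance structure of the random polygenic effect.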
NASA Astrophysics Data System (ADS)
Gamil, Y. M. R.; Bakar, I. H.
2016-07-01
Resilient modulus (Mr) is considered one of the most important parameters in the design of road structure. This paper describes the development of a mathematical model to predict the resilient modulus of organic soil stabilized by a mix of Palm Oil Fuel Ash - Ordinary Portland Cement (POFA-OPC) soil stabilization additives. It aims to optimize the use of POFA in soil stabilization. The optimization model eliminates the arbitrary selection of the optimum additive proportion and its associated disadvantages. The model was developed based on Scheffe regression theory. The mix proportions of the samples in the experiment were adopted from similar studies reported in the literature. Twenty-five samples were designed, prepared and then characterized for each mix proportion based on Mr after 28 days of curing. The results were used to develop the mathematical prediction model. The model was statistically analyzed and verified for its adequacy and validity using an F-test.
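A Scheffé mixture regression of the kind named above can be sketched for a three-component mixture (three components and the simplex-centroid design points are illustrative assumptions; the paper's own design uses twenty-five samples):

```python
import numpy as np

def scheffe_design(X):
    # Rows of X are mixture proportions (x1, x2, x3) summing to 1
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

def fit_scheffe(X, y):
    """Least-squares fit of the Scheffe {3,2} mixture polynomial:
    y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3."""
    A = scheffe_design(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta
```

Note the Scheffé form has no intercept or pure quadratic terms; the sum-to-one constraint absorbs them.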
An uncertainty inclusive un-mixing model to identify tracer non-conservativeness
NASA Astrophysics Data System (ADS)
Sherriff, Sophie; Rowan, John; Franks, Stewart; Fenton, Owen; Jordan, Phil; hUallacháin, Daire Ó.
2015-04-01
Sediment fingerprinting is being increasingly recognised as an essential tool for catchment soil and water management. Selected physico-chemical properties (tracers) of soils and river sediments are used in a statistically-based 'un-mixing' model to apportion sediment delivered to the catchment outlet (target) to its upstream sediment sources. Development of uncertainty-inclusive approaches, taking into account uncertainties in the sampling, measurement and statistical un-mixing, is improving the robustness of results. However, methodological challenges remain including issues of particle size and organic matter selectivity and non-conservative behaviour of tracers - relating to biogeochemical transformations along the transport pathway. This study builds on our earlier uncertainty-inclusive approach (FR2000) to detect and assess the impact of tracer non-conservativeness using synthetic data before applying these lessons to new field data from Ireland. Un-mixing was conducted on 'pristine' and 'corrupted' synthetic datasets containing three to fifty tracers (in the corrupted dataset one target tracer value was manually corrupted to replicate non-conservative behaviour). Additionally, a smaller corrupted dataset was un-mixed using a permutation version of the algorithm. Field data were collected in an 11 km2 river catchment in Ireland. Source samples were collected from topsoils, subsoils, channel banks, open field drains, damaged road verges and farm tracks. Target samples were collected using time integrated suspended sediment samplers at the catchment outlet at 6-12 week intervals from July 2012 to June 2013. Samples were dried (<40°C), sieved (125 µm) and analysed for mineral magnetic susceptibility, anhysteretic remanence and iso-thermal remanence, and geochemical elements Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn (following microwave-assisted acid digestion). Discriminant analysis was used to reduce the number of tracers before un-mixing. Tracer non
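The core un-mixing step can be sketched as a constrained least-squares search; the brute-force simplex grid below is an illustrative stand-in for the FR2000 algorithm, with three sources assumed.

```python
def unmix(target, sources, step=0.01):
    """Brute-force three-source un-mixing: search the simplex of source
    proportions (p1 + p2 + p3 = 1, p_i >= 0) minimizing the summed
    squared tracer mismatch between mixture and target."""
    best_p, best_err = None, float("inf")
    n = int(round(1.0 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            p = (i * step, j * step, 1.0 - (i + j) * step)
            err = sum((p[0] * s1 + p[1] * s2 + p[2] * s3 - t) ** 2
                      for s1, s2, s3, t in zip(*sources, target))
            if err < best_err:
                best_p, best_err = p, err
    return best_p, best_err
```

Corrupting one target tracer inflates the minimum achievable residual, which is the kind of signal the study exploits to flag non-conservative tracers.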
Mixed domain models for the distribution of aluminum in high silica zeolite SSZ-13.
Prasad, Subramanian; Petrov, Maria
2013-01-01
High silica zeolite SSZ-13 with Si/Al ratios varying from 11 to 17 was characterized by aluminum-27 and silicon-29 NMR spectroscopy. Aluminum-27 MAS and MQMAS NMR data indicated that in addition to tetrahedral aluminum sites, a fraction of aluminum sites are present in distorted tetrahedral environments. Although in samples of SSZ-13 having high Si/Al ratios all aluminum atoms are expected to be isolated, silicon-29 NMR spectra revealed that in addition to isolated aluminum atoms (Si(1Al)), non-isolated aluminum atoms (Si(2Al)) exist in the crystals. To model these contributions of the various aluminum atoms, a mixed-domain distribution was developed, using double-six membered rings (D6R) as the basic building units of SSZ-13. A combination of different ideal domains, one containing isolated and the other with non-isolated aluminum sites, has been found to describe the experimental silicon-29 NMR data.
Best practices for use of stable isotope mixing models in food-web studies
Stable isotope mixing models are increasingly used to quantify contributions of resources to consumers. While potentially powerful tools, these mixing models have the potential to be misused, abused, and misinterpreted. Here we draw on our collective experiences to address the qu...
The Analysis of Repeated Measurements with Mixed-Model Adjusted "F" Tests
ERIC Educational Resources Information Center
Kowalchuk, Rhonda K.; Keselman, H. J.; Algina, James; Wolfinger, Russell D.
2004-01-01
One approach to the analysis of repeated measures data allows researchers to model the covariance structure of their data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach, available through SAS PROC MIXED, was compared to a Welch-James type statistic.…
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS
Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...
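The classic two-isotope, three-source mass-balance mixing model mentioned above reduces to a 3x3 linear system; the sketch below solves it exactly by Cramer's rule (source signatures in the test are invented for illustration, and concentration-dependence and fractionation corrections are ignored).

```python
def det3(m):
    # Determinant of a 3x3 matrix given as nested lists
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def diet_proportions(mix, sources):
    """Solve: sum p_i = 1, sum p_i * d13C_i = mixture d13C,
    sum p_i * d15N_i = mixture d15N, for three (d13C, d15N) sources."""
    A = [[1.0, 1.0, 1.0],
         [sources[0][0], sources[1][0], sources[2][0]],
         [sources[0][1], sources[1][1], sources[2][1]]]
    b = [1.0, mix[0], mix[1]]
    d = det3(A)
    props = []
    for j in range(3):                   # Cramer's rule, column by column
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        props.append(det3(Aj) / d)
    return props
```

With more sources than isotopes the system is underdetermined, which is exactly where the Bayesian mixing models discussed in this literature come in.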
A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design
ERIC Educational Resources Information Center
Palladino, John M.
2009-01-01
Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.
Li, H W; Vishwasrao, P; Hölzl, M A; Chen, S; Choi, G; Zhao, G; Sykes, M
2017-02-01
Mixed chimerism is a promising approach to inducing allograft and xenograft tolerance. Mixed allogeneic and xenogeneic chimerism in mouse models induced specific tolerance and global hyporesponsiveness, respectively, of host mouse natural killer (NK) cells. In this study, we investigated whether pig/human mixed chimerism could tolerize human NK cells in a humanized mouse model. Our results showed no impact of induced human NK cell reconstitution on porcine chimerism. NK cells from most pig/human mixed chimeric mice showed either specifically decreased cytotoxicity to pig cells or global hyporesponsiveness in an in vitro cytotoxicity assay. Mixed xenogeneic chimerism did not hamper the maturation of human NK cells but was associated with an alteration in NK cell subset distribution and interferon gamma (IFN-γ) production in the bone marrow. In summary, we demonstrate that mixed xenogeneic chimerism induces human NK cell hyporesponsiveness to pig cells. Our results support the use of this approach to inducing xenogeneic tolerance in the clinical setting. However, additional approaches are required to improve the efficacy of tolerance induction while ensuring adequate NK cell functions.
Application of mixing-controlled combustion models to gas turbine combustors
NASA Technical Reports Server (NTRS)
Nguyen, Hung Lee
1990-01-01
Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.
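The mixing-controlled closure described above can be sketched as "the slower of chemistry and mixing limits the mean rate"; all rate constants below are illustrative placeholders, not the paper's propane mechanism.

```python
import math

def arrhenius_rate(y_f, y_o, rho, T, A_k=1.0e9, E=1.2e5):
    # Quasi-global kinetic rate (constants are illustrative, not propane data)
    return A_k * rho**2 * y_f * y_o * math.exp(-E / (8.314 * T))

def mixing_rate(y_f, y_o, k, eps, s, rho, A=4.0):
    # Eddy-breakup rate: turbulence frequency eps/k times limiting reactant
    return A * rho * (eps / k) * min(y_f, y_o / s)

def mean_reaction_rate(y_f, y_o, rho, T, k, eps, s):
    """Mixing-controlled closure sketch: the slower of finite-rate
    chemistry and turbulent mixing governs the mean reaction rate."""
    return min(arrhenius_rate(y_f, y_o, rho, T),
               mixing_rate(y_f, y_o, k, eps, s, rho))
```

At flame temperatures the Arrhenius term dwarfs the eddy-breakup term, recovering the paper's assumption that chemistry is much faster than mixing.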
Additional Developments in Atmosphere Revitalization Modeling and Simulation
NASA Technical Reports Server (NTRS)
Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.; Gomez, Carlos
2013-01-01
NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.
A time-dependent Mixing Model for PDF Methods in Heterogeneous Aquifers
NASA Astrophysics Data System (ADS)
Schüler, Lennart; Suciu, Nicolae; Knabner, Peter; Attinger, Sabine
2016-04-01
Predicting the transport of groundwater contaminants remains a demanding task, especially with respect to the heterogeneity of the subsurface and the large measurement uncertainties. A risk analysis also includes the quantification of the uncertainty in order to evaluate how accurate the predictions are. Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, which can be used as a first measure of uncertainty. A mixing model, also known as a dissipation model, is essential for both methods. Finding a satisfactory mixing model is still an open question and, due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling. The implications of the new mixing model for different kinds of flow conditions are discussed and some comments are made on efficiently handling spatially resolved higher moments.
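The role a mixing (dissipation) model plays can be illustrated with the standard IEM closure, a simpler baseline than the time-dependent model proposed above; the constant c_phi = 2 and the particle values are illustrative assumptions.

```python
def iem_mix(phis, omega, c_phi=2.0, dt=0.01, steps=500):
    """IEM (interaction by exchange with the mean) mixing-model sketch:
    each notional particle relaxes toward the ensemble mean at rate
    (c_phi / 2) * omega, preserving the mean while the variance decays."""
    phis = list(phis)
    factor = 1.0 - 0.5 * c_phi * omega * dt   # per-step relaxation factor
    for _ in range(steps):
        mean = sum(phis) / len(phis)
        phis = [mean + factor * (p - mean) for p in phis]
    return phis
```

Because deviations are scaled by a constant factor each step, the concentration variance decays geometrically, the discrete analogue of the exponential scalar-dissipation decay the PDF and variance equations share.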
Prediction of microbial growth in mixed culture with a competition model.
Fujikawa, Hiroshi; Sakha, Mohammad Z
2014-01-01
Prediction of microbial growth in mixed culture was studied with a competition model that we had developed recently. The model, which is composed of the new logistic model and the Lotka-Volterra model, is shown to successfully describe the microbial growth of two species in mixed culture using Staphylococcus aureus, Escherichia coli, and Salmonella. With the parameter values of the model obtained from the experimental data on monoculture and mixed culture with two species, it then succeeded in predicting the simultaneous growth of the three species in mixed culture inoculated with various cell concentrations. To our knowledge, this is the first reported prediction model for multiple (three) microbial species. The model, which is not built on any premise for specific microorganisms, may become a basic competition model for microorganisms in food and food materials.
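The coupled logistic/Lotka-Volterra structure can be sketched for two species with a forward-Euler integration; classic logistic terms are used here as a stand-in for the paper's new logistic model, and all parameter values are illustrative.

```python
def lv_competition(n1, n2, r1, r2, K1, K2, a12, a21, dt=0.01, steps=10000):
    """Forward-Euler integration of two-species Lotka-Volterra competition:
    each species grows logistically, reduced by the competitor's density."""
    for _ in range(steps):
        d1 = r1 * n1 * (1.0 - (n1 + a12 * n2) / K1)
        d2 = r2 * n2 * (1.0 - (n2 + a21 * n1) / K2)
        n1 += d1 * dt
        n2 += d2 * dt
    return n1, n2
```

Fitting r, K and the competition coefficients a12, a21 from monoculture and two-species data, then simulating all pairs jointly, mirrors the paper's two-step route to three-species prediction.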
Peace, Gerald L.; Goering, Timothy James; Miller, Mark Laverne; Ho, Clifford Kuofei
2007-01-01
A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.
Mixed Models: Combining incompatible scalar models in any space-time dimension
NASA Astrophysics Data System (ADS)
Klauder, John R.
2017-01-01
Traditionally, covariant scalar field theory models are either super renormalizable, strictly renormalizable, or nonrenormalizable. The goal of “Mixed Models” is to make sense of sums of these distinct examples, e.g. gφ_3^4 + g′φ_3^6 + g″φ_3^8, which includes an example of each kind for space-time dimension n = 3. We show how the several interactions such mixed models have may be turned on and off in any order without any difficulties. Analogous results are shown for gφ_n^4 + g′φ_n^{138}, etc. for all n ≥ 3. Different categories hold for n = 2 such as, e.g. gP(φ)_2 + g′NP(φ)_2, which involve polynomial (P) and suitable nonpolynomial (NP) interactions, etc. Analogous situations for n = 1 (time alone) offer simple “toy” examples of how such mixed models may be constructed. As a general rule, if the introduction of a specific interaction term reduces the domain of the free classical action, we invariably find that the introduction of the associated quantum interaction leads, effectively, to a “nonrenormalizable” quantum theory. However, in special cases, a classical interaction that does not reduce the domain of the classical free action may generate an “unsatisfactory” quantum theory, which generally requires a model-specific, different approach to become “satisfactory.” We will encounter both situations in our analysis.
Bjornsson, H.; Mysak, L.A.; Schmidt, G.A.
1997-10-01
The Wright and Stocker oceanic thermohaline circulation model is coupled to a recently developed zonally averaged energy moisture balance model for the atmosphere. The results obtained with this coupled model are compared with those from an ocean-only model that employs mixed boundary conditions. The ocean model geometry uses either one zonally averaged interhemispheric basin (the “Atlantic”) or two zonally averaged basins (roughly approximating the Atlantic and the Pacific Oceans) connected by a parameterized Antarctic Circumpolar Current. The differences in the steady states and their linear stability are examined over a wide range of parameters. The presence of additional feedbacks between the ocean circulation and the atmosphere and hydrological cycle in the coupled model produces significant differences between the latter and the ocean-only model, in both the one-basin and two-basin geometries. The authors conclude that due to the effects produced by the feedbacks in the coupled model, they must have serious reservations about the results concerning long-term climate variability obtained from ocean-only models. Thus, to investigate long-term climatic variability a coupled model is necessary. 31 refs., 15 figs., 7 tabs.
Madrasi, Kumpal; Chaturvedula, Ayyappa; Haberer, Jessica E; Sale, Mark; Fossler, Michael J; Bangsberg, David; Baeten, Jared M; Celum, Connie; Hendrix, Craig W
2016-12-06
Adherence is a major factor in the effectiveness of preexposure prophylaxis (PrEP) for HIV prevention. Modeling patterns of adherence helps to identify influential covariates of different types of adherence as well as to enable clinical trial simulation so that appropriate interventions can be developed. We developed a Markov mixed-effects model to understand the covariates influencing adherence patterns to daily oral PrEP. Electronic adherence records (date and time of medication bottle cap opening) from the Partners PrEP ancillary adherence study with a total of 1147 subjects were used. This study included once-daily dosing regimens of placebo, oral tenofovir disoproxil fumarate (TDF), and TDF in combination with emtricitabine (FTC), administered to HIV-uninfected members of serodiscordant couples. One-coin and first- to third-order Markov models were fit to the data using NONMEM® 7.2. Model selection criteria included objective function value (OFV), Akaike information criterion (AIC), visual predictive checks, and posterior predictive checks. Covariates were included based on forward addition (α = 0.05) and backward elimination (α = 0.001). Markov models described the data better than one-coin models. A third-order Markov model gave the lowest OFV and AIC, but the simpler first-order model was used for covariate model building because no additional benefit on prediction of target measures was observed for higher-order models. Female sex and older age had a positive impact on adherence, whereas Sundays, sexual abstinence, and sex with a partner other than the study partner had a negative impact on adherence. Our findings suggest adherence interventions should consider the role of these factors.
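A first-order Markov adherence model of the kind described can be sketched as a two-state chain in which today's dose depends on whether yesterday's was taken; the transition probabilities below are illustrative assumptions, not the Partners PrEP estimates:

```python
# First-order Markov model of daily dosing:
#   p_tt = P(take today | took yesterday)
#   p_mt = P(take today | missed yesterday)
# Values are illustrative assumptions, not estimates from the study.
import random

random.seed(7)
p_tt, p_mt = 0.95, 0.60

def simulate(days, start_taken=True):
    taken, history = start_taken, []
    for _ in range(days):
        p = p_tt if taken else p_mt
        taken = random.random() < p
        history.append(taken)
    return history

hist = simulate(100000)
observed = sum(hist) / len(hist)
# Stationary adherence of the two-state chain:
#   pi_take = p_mt / (1 - p_tt + p_mt)
pi_take = p_mt / (1 - p_tt + p_mt)
print(observed, pi_take)
```

The first-order structure captures the "streakiness" of adherence that a one-coin (independent daily flip) model misses; covariates such as sex, age, or weekday would enter by shifting the transition probabilities.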
NASA Astrophysics Data System (ADS)
Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.
2016-07-01
To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. The resulting susceptibility maps were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two sites: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim) with areas under the receiver operating curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and both validated with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
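The GLM branch of this approach amounts to logistic regression on terrain predictors, scored with AUROC. A minimal sketch on synthetic data (slope as the single predictor, coefficients learned by plain gradient ascent; none of the numbers come from the study):

```python
# Logistic-regression (GLM) susceptibility sketch:
#   P(disturbed) = 1 / (1 + exp(-(b0 + b1 * slope)))
# fitted on synthetic data and scored with AUROC.
import math
import random

random.seed(3)
# Synthetic "terrain": disturbed sites tend to have steeper slopes.
data = [(random.gauss(12, 4), 1) for _ in range(200)] + \
       [(random.gauss(6, 4), 0) for _ in range(200)]

b0, b1, lr = 0.0, 0.0, 0.01
for _ in range(1000):                      # gradient ascent on log-likelihood
    g0 = g1 = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p
        g1 += (y - p) * x
    b0 += lr * g0 / len(data)
    b1 += lr * g1 / len(data)

def auroc(scored):
    # Probability a random disturbed site outscores a random undisturbed one.
    pos = [s for s, y in scored if y == 1]
    neg = [s for s, y in scored if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [(b0 + b1 * x, y) for x, y in data]
print(b1, auroc(scores))
```

Transferability testing, as in the study, would apply the frozen coefficients to a data set from a site not used in calibration and recompute the AUROC there.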
Comparative quantification of physically and numerically induced mixing in ocean models
NASA Astrophysics Data System (ADS)
Burchard, Hans; Rennau, Hannes
A diagnostic method for calculating physical and numerical mixing of tracers in ocean models is presented. The physical mixing is defined as the turbulent mean tracer variance decay rate. The numerical mixing due to discretisation errors of tracer advection schemes is shown to be the decay rate between the advected square of the tracer and the square of the advected tracer, and can be easily implemented in any ocean model. The applicability of the method is demonstrated for four test cases: (i) a one-dimensional linear advection equation with periodic boundary conditions, (ii) a two-dimensional flat-bottom lock exchange test case without mixing, (iii) a two-dimensional marginal sea overflow study with mixing and entrainment and (iv) the DOME test case with a dense bottom current propagating down a broad linear slope. The method has a number of advantages over previously introduced estimates for numerical mixing.
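The diagnostic can be sketched in one dimension: advect both the tracer and its square with the same scheme, and read the numerical mixing off the mismatch. The grid, scheme (first-order upwind), and initial profile below are illustrative choices, not the paper's test cases:

```python
# Numerical-mixing diagnostic sketch: advect both the tracer phi and its
# square with the same scheme; the per-cell decay rate
#   D_num = (A(phi^2) - A(phi)^2) / dt
# measures spurious mixing caused by advection discretisation errors.
# First-order upwind on a periodic 1-D grid (illustrative setup).
import math

n, c = 100, 0.5                          # cells, Courant number
phi = [math.sin(2.0 * math.pi * i / n) for i in range(n)]

def upwind(f):
    # conservative first-order upwind step for a positive velocity
    return [f[i] + c * (f[i - 1] - f[i]) for i in range(n)]

dt = 1.0                                 # time step absorbed into c
phi_new = upwind(phi)
phi2_new = upwind([v * v for v in phi])
d_num = [(phi2_new[i] - phi_new[i] ** 2) / dt for i in range(n)]
print(sum(d_num))                        # total spurious variance decay
```

For a diffusive scheme like upwind the diagnostic is non-negative cell by cell (each updated value is a convex combination, so Jensen's inequality applies), while the tracer total is conserved exactly; in an ocean model the same bookkeeping is done per grid cell and time step.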
Segregation parameters and pair-exchange mixing models for turbulent nonpremixed flames
NASA Technical Reports Server (NTRS)
Chen, J.-Y.; Kollman, W.
1991-01-01
The progress of chemical reactions in nonpremixed turbulent flows depends on the coexistence of reactants, which are brought together by mixing. The degree of mixing can strongly influence the chemical reactions and it can be quantified by segregation parameters. In this paper, the relevance of segregation parameters to turbulent mixing and chemical reactions is explored. An analysis of the pair-exchange mixing models is performed and an explanation is given for the peculiar behavior of such models in homogeneous turbulence. The nature of segregation parameters in a H2/Ar-air nonpremixed jet flame is investigated. The results show that Monte Carlo simulation with the modified Curl's mixing model predicts segregation parameters in close agreement with the experimental values, providing an indirect validation for the theoretical model.
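A modified Curl's model of the kind validated here mixes random particle pairs partway toward their common mean; a minimal sketch with a uniform random mixing extent and a toy initial field (the distribution of the extent and all parameters are illustrative assumptions):

```python
# Modified Curl's pair-exchange mixing model: random particle pairs move
# partway toward their common mean, with a random mixing extent (the
# "modified" ingredient versus complete pair mixing).
import random

random.seed(5)
phi = [random.choice([0.0, 1.0]) for _ in range(10000)]   # unmixed scalar

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

mean0, var0 = sum(phi) / len(phi), variance(phi)
for _ in range(20000):                 # pair-exchange mixing events
    i, j = random.randrange(len(phi)), random.randrange(len(phi))
    a = random.random()                # random mixing extent in [0, 1]
    m = 0.5 * (phi[i] + phi[j])
    phi[i] += a * (m - phi[i])
    phi[j] += a * (m - phi[j])

print(variance(phi) / var0)            # segregation decays, mean is conserved
```

Each event conserves the pair sum exactly, so the mean scalar is preserved while the variance (and hence the segregation parameter, a normalized covariance of reactant fluctuations) decays as mixing proceeds.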
Mixing in the Extratropical Stratosphere: Model-measurements Comparisons using MLM Diagnostics
NASA Technical Reports Server (NTRS)
Ma, Jun; Waugh, Darryn W.; Douglass, Anne R.; Kawa, Stephan R.; Bhartia, P. K. (Technical Monitor)
2001-01-01
We evaluate transport processes in the extratropical lower stratosphere for both models and measurements with the help of the equivalent length diagnostic from the modified Lagrangian-mean (MLM) analysis. This diagnostic is used to compare measurements of long-lived tracers made by the Cryogenic Limb Array Etalon Spectrometer (CLAES) on the Upper Atmosphere Research Satellite (UARS) with simulated tracers. Simulations are produced in Chemical and Transport Models (CTMs), in which meteorological fields are taken from the Goddard Earth Observing System Data Assimilation System (GEOS DAS), the Middle Atmosphere Community Climate Model (MACCM2), and the Geophysical Fluid Dynamics Laboratory (GFDL) "SKYHI" model, respectively. Time series of isentropic equivalent length show that these models are able to capture major mixing and transport properties observed by CLAES, such as the formation and destruction of polar barriers and the presence of surf zones in both hemispheres. Differences between each model simulation and the observations are examined in light of model performance. Among these differences, only the simulation driven by GEOS DAS shows one case of the "top-down" destruction of the Antarctic polar vortex, as observed in the CLAES data. Additional experiments of isentropic advection of an artificial tracer by GEOS DAS winds suggest that diabatic motion might contribute considerably to the equivalent length field in the 3D CTM diagnostics.
Model analysis of influences of aerosol mixing state upon its optical properties in East Asia
NASA Astrophysics Data System (ADS)
Han, Xiao; Zhang, Meigen; Zhu, Lingyun; Xu, Liren
2013-07-01
The air quality model system RAMS (Regional Atmospheric Modeling System)-CMAQ (Models-3 Community Multi-scale Air Quality) coupled with an aerosol optical/radiative module was applied to investigate the impact of different aerosol mixing states (i.e., externally mixed, half externally and half internally mixed, and internally mixed) on radiative forcing in East Asia. The simulation results show that the aerosol optical depth (AOD) generally increased when the aerosol mixing state changed from externally mixed to internally mixed, while the single scattering albedo (SSA) decreased. The scattering and absorption properties of aerosols can therefore be significantly affected by the choice of aerosol mixing state. Comparison of simulated and observed SSAs at five AERONET (Aerosol Robotic Network) sites suggests that SSA could be better estimated by considering aerosol particles to be internally mixed. Model analysis indicates that the impact of aerosol mixing state upon aerosol direct radiative forcing (DRF) is complex. Generally, internal mixing enhances the cooling effect of aerosols in the northern part of East Asia (Northern China, the Korean peninsula, and the area surrounding Japan) and reduces it in the southern part (Sichuan Basin and Southeast China), with variations reaching ±5 W m-2. The analysis shows that internal mixing between inorganic salt and dust is likely the main reason the cooling effect strengthens. Conversely, the internal mixture of anthropogenic aerosols, including sulfate, nitrate, ammonium, black carbon, and organic carbon, could noticeably weaken the cooling effect.
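For the internally mixed case, a common simplification is the volume-mixing rule for the effective complex refractive index; whether RAMS-CMAQ's optical module uses exactly this rule is an assumption here, and the component values are rough literature-style numbers for illustration only:

```python
# Volume-mixing rule sketch for internally mixed aerosol: the effective
# complex refractive index is the volume-weighted average of the component
# indices. An external mixture would instead keep the components separate
# and average their optical properties after the scattering calculation.
def volume_mix(components):
    """components: list of (volume_fraction, complex_refractive_index)."""
    total = sum(f for f, _ in components)
    return sum(f * m for f, m in components) / total

sulfate = (0.7, 1.53 + 1e-7j)        # weakly absorbing (illustrative values)
black_carbon = (0.3, 1.95 + 0.79j)   # strongly absorbing (illustrative)
m_internal = volume_mix([sulfate, black_carbon])
print(m_internal)
```

The internally mixed particle acquires a substantial imaginary (absorbing) part from the black-carbon fraction, which is the mechanism behind the lower SSA of internal mixtures noted in the abstract.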
Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model
Rossi, R; Gallagher, B; Neville, J; Henderson, K
2011-11-11
Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable and computationally efficient, and the model natively supports attributes. We applied our model to (a) identifying patterns and trends of nodes and network states based on temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
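One building block of such a model, estimating role-transition patterns, can be sketched by counting transitions in per-node role sequences and row-normalising; the role labels below are toy data, not from the paper's datasets:

```python
# Transition-matrix sketch for a DBMM-style analysis: given each node's
# role label over time, estimate P(role_{t+1} | role_t) by counting
# observed transitions and row-normalising the count matrix.
def transition_matrix(sequences, n_roles):
    counts = [[0.0] * n_roles for _ in range(n_roles)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    for row in counts:
        s = sum(row)
        if s > 0:
            for j in range(n_roles):
                row[j] /= s
    return counts

# Toy role sequences for three nodes over five snapshots:
roles = [[0, 0, 1, 1, 2], [0, 1, 1, 1, 2], [2, 2, 0, 0, 1]]
T = transition_matrix(roles, 3)
print(T)
```

In the full model each node has a mixed membership over roles rather than a single label, but the estimated transition structure plays the same part: predicting future behavior and flagging transitions with low probability as unusual.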
Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements
Wang, Zhien
2016-12-13
Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainty in overall cloud feedback. Improving mixed-phase cloud parameterizations in climate models is therefore critical to reducing climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) characterizing Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the information needed to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Because of their different dynamics, the temperature dependencies of liquid mass partition differ significantly, reflecting the much higher ice concentrations in convective mixed-phase clouds. 6) Systematic evaluations
A weighted dictionary learning model for denoising images corrupted by mixed noise.
Liu, Jun; Tai, Xue-Cheng; Huang, Haiyang; Huan, Zhongdan
2013-03-01
This paper proposes a general weighted ℓ2-ℓ0 norm energy minimization model to remove mixed noise such as a Gaussian-Gaussian mixture, impulse noise, and Gaussian-impulse noise from images. The approach is built upon a maximum likelihood estimation framework and sparse representations over a trained dictionary. Rather than optimizing the likelihood functional derived from a mixture distribution, we present a new weighted data fidelity function, which has the same minimizer as the original likelihood functional but is much easier to optimize. The weighting function in the model can be determined by the algorithm itself, and it plays the role of noise detection in terms of the different estimated noise parameters. By incorporating the sparse regularization of small image patches, the proposed method can efficiently remove a variety of mixed or single noise while preserving the image textures well. In addition, a modified K-SVD algorithm is designed to address the weighted rank-one approximation. The experimental results demonstrate its better performance compared with some existing methods.
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartment model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
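The filtering machinery behind such models can be illustrated with a scalar linear Kalman filter, which separates system noise (uncertainty in the dynamics) from measurement noise. This is a toy linear analogue of the extended Kalman filter used in the paper, with assumed parameters, not the NiAc model:

```python
# Scalar Kalman filter on a linear toy model:
#   x_{t+1} = a * x_t + w,  w ~ N(0, q)   (system noise)
#   y_t     = x_t + v,      v ~ N(0, r)   (measurement noise)
# The filter's one-step predictions are what enter the FOCE-type
# population likelihood in SDE mixed-effects estimation.
import random

random.seed(11)
a, q, r = 0.9, 0.1, 0.5
x, xs, ys = 1.0, [], []
for _ in range(500):                      # simulate states and noisy data
    x = a * x + random.gauss(0, q ** 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0, r ** 0.5))

xhat, p = 0.0, 1.0                        # initial state estimate, variance
err_filter = err_raw = 0.0
for xt, yt in zip(xs, ys):
    xhat, p = a * xhat, a * a * p + q     # one-step prediction
    k = p / (p + r)                       # Kalman gain
    xhat, p = xhat + k * (yt - xhat), (1 - k) * p
    err_filter += (xhat - xt) ** 2
    err_raw += (yt - xt) ** 2
print(err_filter / err_raw)               # < 1: filtering beats raw data
```

Because the gain depends on the ratio of q to r, fitting both variances to data is what lets the approach attribute residual variability to model dynamics rather than lumping it all into measurement noise.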
Narrowing the Search for Sources of Fecal Indicator Bacteria with a Simple Salinity Mixing Model
NASA Astrophysics Data System (ADS)
McLaughlin, K.; Ahn, J.; Litton, R.; Grant, S. B.
2006-12-01
Newport Bay, the second largest estuarine embayment in Southern California, provides critical natural habitat for terrestrial and aquatic species and is a regionally important recreational area. Unfortunately, the beneficial uses of Newport Bay are threatened by numerous sources of pollutant loading, either through direct discharge into the bay or through its tributaries. Fecal indicator bacteria (FIB) are associated with human pathogens and are present in high concentrations in sewage and urban runoff. Standardized and inexpensive assays for the detection of FIB have allowed their concentrations to be used as a common test of water quality. To assess FIB impairment in Newport Bay, weekly transects were conducted from the upper reaches of the estuary to an offshore control site, measuring FIB concentrations (specifically total coliform, Escherichia coli, and Enterococcus spp.) as well as salinity, temperature, and transmissivity. Using salinity as a conservative tracer for water mass mixing and determining the end-member values of FIB and transmissivity at both the creek sites and the offshore control site, we created a simple, two end-member mixing model of FIB and transmissivity within Newport Bay. Deviations from the mixing model would suggest either an additional source of FIB to the bay (e.g. bird feces) or regrowth of FIB within the bay. Our results indicate that, with a few notable exceptions, salinity is a good tracer for FIB concentrations along the transect, but is not particularly effective for transmissivity. This suggests that the largest contribution of FIB loading to Newport Bay comes from the discharge of creeks into the upper reaches of the estuary.
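The two end-member mixing model is simple enough to state directly: salinity fixes the creek-water fraction of a sample, and conservative mixing then predicts the FIB level. The end-member values below are illustrative assumptions, not the Newport Bay measurements:

```python
# Two end-member salinity mixing model sketch. End-member salinities and
# FIB concentrations are illustrative assumptions.
def creek_fraction(s_sample, s_creek=0.5, s_ocean=33.5):
    # Fraction of creek water implied by conservative salinity mixing.
    return (s_ocean - s_sample) / (s_ocean - s_creek)

def predicted_fib(s_sample, fib_creek=4000.0, fib_ocean=10.0,
                  s_creek=0.5, s_ocean=33.5):
    f = creek_fraction(s_sample, s_creek, s_ocean)
    return f * fib_creek + (1.0 - f) * fib_ocean

# A mid-bay sample at salinity 17 is exactly half creek water here:
print(predicted_fib(17.0))   # -> 2005.0 with these assumed end members
```

Measured FIB concentrations lying above this conservative-mixing prediction are the "deviations" the abstract refers to, pointing to an in-bay source (e.g. bird feces) or regrowth rather than creek discharge alone.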
NASA Astrophysics Data System (ADS)
Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.
2008-07-01
The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally yields new prognostic information about the mechanisms behind regulation in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration using nonparametrically fitted nonlinear additive autoregressive models with external inputs. To this end, we consider measurements of healthy persons and of patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models describe short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinearly controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by higher levels of noise and nonlinearity than in patients suffering from OSAS. The residue analysis points to a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts, which could lead to a stratification of hypertension risk in OSAS patients.
Experimental constraints on the neutrino oscillations and a simple model of three-flavor mixing
Raczka, P.A.; Szymacha, A.; Tatur, S.
1994-02-01
A simple model of neutrino mixing is considered which contains only one right-handed neutrino field coupled, via the mass term, to the three usual left-handed fields. This is the simplest model that allows for three-flavor neutrino oscillations. The existing experimental limits on neutrino oscillations are used to obtain constraints on the two free mixing parameters of the model. A specific sum rule relating the oscillation probabilities of different flavors is derived.
NASA Astrophysics Data System (ADS)
Lansdown, Katrina; Heppell, Kate; Ullah, Sami; Heathwaite, A. Louise; Trimmer, Mark; Binley, Andrew; Heaton, Tim; Zhang, Hao
2010-05-01
The dynamics of groundwater and surface water mixing and associated nitrogen transformations in the hyporheic zone have been investigated within a gaining reach of a groundwater-fed river (River Leith, Cumbria, UK). The regional aquifer consists of Permo-Triassic sandstone, which is overlain by varying depths of glaciofluvial sediments (~15 to 50 cm) to form the river bed. The reach investigated (~250 m long) consists of a series of riffle and pool sequences (Käser et al. 2009), with other geomorphic features such as vegetated islands and marginal bars also present. A network of 17 piezometers, each with six depth-distributed pore water samplers based on the design of Rivett et al. (2008), was installed in the river bed in June 2009. An additional 18 piezometers with a single pore water sampler were installed in the riparian zone along the study reach. Water samples were collected from the pore water samplers on three occasions during summer 2009, a period of low flow. The zone of groundwater-surface water mixing within the river bed sediments was inferred from depth profiles (0 to 100 cm) of conservative chemical species and isotopes of water in the collected samples. Sediment cores collected during piezometer installation also enabled characterisation of grain size within the hyporheic zone. A multi-component mixing model was developed to quantify the relative contributions of different water sources (surface water, groundwater and bank exfiltration) to the hyporheic zone. Depth profiles of 'predicted' nitrate concentration were constructed using the relative contribution of each water source to the hyporheic zone and the nitrate concentration of the end members. This approach assumes that the mixing of different sources of water is the only factor controlling the nitrate concentration of pore water in the river bed sediments. Comparison of predicted nitrate concentrations (which assume only mixing of waters with different nitrate concentrations) with actual
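A multi-component mixing model of this kind solves for the end-member fractions from conservative tracers plus a mass-balance constraint. A sketch with two tracers and three sources; the tracer choices (chloride, δ18O) and all end-member chemistry are invented for illustration:

```python
# Three end-member mixing sketch: two conservative tracers plus mass
# balance give three equations A f = b for the fractions of surface water,
# groundwater and bank exfiltration in a pore-water sample.
def solve3(A, b):
    # Cramer's rule for a 3x3 linear system A f = b.
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for j in range(3):
        m = [[A[i][k] if k != j else b[i] for k in range(3)] for i in range(3)]
        out.append(det(m) / d)
    return out

# Rows: chloride (mg/l), delta-18O (permil), mass balance.
# Columns: surface water, groundwater, bank exfiltration (invented values).
A = [[12.0, 30.0, 20.0],
     [-7.5, -9.0, -8.0],
     [1.0, 1.0, 1.0]]
sample = [19.0, -8.05, 1.0]
f_surface, f_ground, f_bank = solve3(A, sample)
print(f_surface, f_ground, f_bank)   # recovers fractions (0.5, 0.3, 0.2)
```

With the fractions in hand, the 'predicted' nitrate profile follows by weighting each end member's nitrate concentration by its fraction, which is exactly the conservative-mixing assumption the abstract describes.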
Mathematical, physical and numerical principles essential for models of turbulent mixing
Sharp, David Howland; Lim, Hyunkyung; Yu, Yan; Glimm, James G
2009-01-01
We propose mathematical, physical and numerical principles which are important for the modeling of turbulent mixing, especially the classical and well-studied Rayleigh-Taylor and Richtmyer-Meshkov instabilities, which involve acceleration-driven mixing of a fluid discontinuity layer by a steady acceleration or an impulsive force.
Pricing European option under the time-changed mixed Brownian-fractional Brownian model
NASA Astrophysics Data System (ADS)
Guo, Zhidong; Yuan, Hongjun
2014-07-01
This paper deals with the problem of discrete-time option pricing by a mixed Brownian-fractional subdiffusive Black-Scholes model. Under the assumption that the price of the underlying stock follows a time-changed mixed Brownian-fractional Brownian motion, we derive a pricing formula for the European call option in a discrete-time setting.
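The abstract does not give the pricing formula, but a closely related textbook result is useful context: when the log-price is driven by independent Brownian and fractional Brownian components, the terminal distribution is Gaussian, so a Black-Scholes-type formula applies with σ²T replaced by the total variance σ²(T + T^{2H}). This is a sketch of that standard mixed-fBm result, not the paper's subdiffusive time-changed formula:

```python
# Black-Scholes-type call price for a mixed Brownian / fractional Brownian
# log-price: total log-price variance V = sigma^2 * (T + T**(2*H)), with
# the risk-neutral drift chosen so that E[S_T] = S0 * exp(r*T).
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mixed_bs_call(s0, k, r, t, sigma, hurst):
    v = sigma ** 2 * (t + t ** (2 * hurst))   # total log-price variance
    sv = math.sqrt(v)
    d1 = (math.log(s0 / k) + r * t + 0.5 * v) / sv
    d2 = d1 - sv
    return s0 * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

price = mixed_bs_call(s0=100.0, k=100.0, r=0.03, t=2.0, sigma=0.2, hurst=0.7)
print(price)
```

For H > 1/2 the fractional component adds long-memory variance (T^{2H} grows faster than T), so at-the-money prices exceed the classical Black-Scholes value for the same σ.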
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although they violate the hypotheses of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
A Simple Scheme to Implement a Nonlocal Turbulent Convection Model for Convective Overshoot Mixing
NASA Astrophysics Data System (ADS)
Zhang, Q. S.
2016-02-01
Classical “ballistic” overshoot models show some contradictions and are not consistent with numerical simulations and asteroseismic studies. Asteroseismic studies imply that overshoot is a weak mixing process, so a diffusion model is suitable for describing it. The form of the diffusion coefficient in such a model is crucial. Because overshoot mixing is related to convective heat transport (i.e., entropy mixing), there should be some similarity between the two. A recent overshoot mixing model shows consistency between composition mixing and entropy mixing in the overshoot region. A prerequisite to applying the model is knowing the dissipation rate of turbulent kinetic energy. The dissipation rate can be worked out by solving turbulent convection models (TCMs), but TCMs are difficult to apply because of numerical problems and their enormous time cost. To find a convenient alternative, we have used the asymptotic solution and simplified the TCM to a single linear equation for turbulent kinetic energy. This linear model is easy to implement in calculations of stellar evolution with negligible extra time cost. We have tested the linear model in stellar evolution and found that it reproduces the turbulent kinetic energy profile of the full TCM well, as well as the diffusion coefficient, abundance profile, and stellar evolutionary tracks. We have also studied the effects of different values of the model parameters and found that the effect of modifying the temperature gradient in the overshoot region is slight.
Kale, A.; Bazzanella, N.; Checchetto, R.; Miotello, A.
2009-05-18
Mg films with mixed Fe and Zr metallic additives were prepared by rf magnetron sputtering, keeping the total metal content constant at about 7 at. % and changing the [Fe]/[Zr] ratio. Isothermal hydrogen desorption curves showed that the kinetics depend on the [Fe]/[Zr] ratio and are fastest when the ratio is ≈1.8. X-ray diffraction analysis revealed formation of Fe nanoclusters and Mg grain refinement. The improvement in hydrogen desorption kinetics can be explained by the presence of atomically dispersed Zr and Fe nanoclusters acting as nucleation centers, as well as by the Mg grain refinement.
NASA Technical Reports Server (NTRS)
Nguyen, H. Lee; Wey, Ming-Jyh
1990-01-01
Two-dimensional calculations were made of spark-ignited premixed-charge combustion and direct-injection stratified-charge combustion in gasoline-fueled piston engines. Results are obtained using either a kinetics-controlled combustion submodel governed by a four-step global chemical reaction or a hybrid laminar-kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained using the KIVA-2 computer code, which uses a kinetics-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is smaller than the chemistry time). A hybrid laminar/mixing-controlled combustion submodel was implemented in KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that combines the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.
Malloy, Elizabeth J.; Morris, Jeffrey S.; Adar, Sara D.; Suh, Helen; Gold, Diane R.; Coull, Brent A.
2010-01-01
Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1–7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants. PMID:20156988
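The core idea of regularizing a distributed lag coefficient can be sketched with a simple ridge-penalized regression. This is a minimal stand-in for the paper's wavelet-based Bayesian shrinkage, not the authors' implementation; the data, the penalty form, and `fit_dlm_ridge` are illustrative assumptions:

```python
import numpy as np

def fit_dlm_ridge(X_lags, y, lam=10.0):
    """Ridge-penalized distributed lag regression.

    X_lags: (n, L) matrix of lagged exposures (e.g. hourly PM averages);
    y: (n,) scalar responses. The ridge penalty shrinks the lag
    coefficients, a crude stand-in for wavelet shrinkage.
    """
    n, L = X_lags.shape
    X = np.column_stack([np.ones(n), X_lags])  # intercept + lag columns
    P = np.eye(L + 1)
    P[0, 0] = 0.0                              # do not penalize the intercept
    beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
    return beta[0], beta[1:]                   # intercept, lag coefficient curve

rng = np.random.default_rng(0)
n, L = 200, 24
X = rng.normal(size=(n, L))
true_lag = np.exp(-np.arange(L) / 6.0)         # smoothly decaying lag effect
y = X @ true_lag + rng.normal(scale=0.5, size=n)
b0, lag_curve = fit_dlm_ridge(X, y)
```

The fitted `lag_curve` recovers the decaying shape of the true lag effect; in the paper's setting, the wavelet basis additionally adapts to both smooth and fine-scale features.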
Yuan, Xianzheng; Shi, Xiaoshuang; Zhang, Peidong; Wei, Yueli; Guo, Rongbo; Wang, Lisheng
2011-10-01
This study investigated the influence of particle size on anaerobic biohydrogen production from wheat stalk by mixed microflora. In addition, kinetic models for the formation of the main products were examined. The results demonstrated that the cumulative productions of hydrogen, acetate and butyrate all decreased as the particle size increased from 1 to 10 mm at constant TS values of 2%, 5% and 8%, respectively. However, this effect was less pronounced for the aqueous products than for hydrogen. A modified Gompertz equation was able to adequately describe the cumulative production of hydrogen, acetate and butyrate (R² higher than 0.989). The results also indicated that the formation of the main products was associated with the degradation of cellulose and hemicellulose (R² higher than 0.855).
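The modified Gompertz equation used for cumulative product curves can be fitted with standard nonlinear least squares. The sketch below uses synthetic data with hypothetical parameter values (potential P, maximum rate Rm, lag time λ), not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rm, lam):
    """Modified Gompertz: P = production potential, Rm = maximum
    production rate, lam = lag time."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# Synthetic cumulative H2 data (illustrative units, e.g. mL per g TS)
t = np.linspace(0, 60, 30)
data = gompertz(t, 120.0, 8.0, 6.0) + np.random.default_rng(1).normal(0, 1.0, t.size)

popt, _ = curve_fit(gompertz, t, data, p0=[100.0, 5.0, 5.0])
P_fit, Rm_fit, lam_fit = popt
```

Reporting R² between `data` and `gompertz(t, *popt)` then mirrors the goodness-of-fit values quoted in the abstract.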
Drikvandi, Reza
2017-02-13
Nonlinear mixed-effects models are frequently used for pharmacokinetic data analysis, and they account for inter-subject variability in pharmacokinetic parameters by incorporating subject-specific random effects into the model. The random effects are often assumed to follow a (multivariate) normal distribution. However, many articles have shown that misspecifying the random-effects distribution can introduce bias in the estimates of parameters and affect inferences about the random effects themselves, such as estimation of the inter-subject variability. Because random effects are unobservable latent variables, it is difficult to assess their distribution. In a recent paper we developed a diagnostic tool based on the so-called gradient function to assess the random-effects distribution in mixed models. There we evaluated the gradient function for generalized linear mixed models and in the presence of a single random effect. However, assessing the random-effects distribution in nonlinear mixed-effects models is more challenging, especially when multiple random effects are present, and therefore the results from linear and generalized linear mixed models may not be valid for such nonlinear models. In this paper, we further investigate the gradient function and evaluate its performance for such nonlinear mixed-effects models which are common in pharmacokinetics and pharmacodynamics. We use simulations as well as real data from an intensive pharmacokinetic study to illustrate the proposed diagnostic tool.
Representation and evaluation of aerosol mixing state in a climate model
NASA Astrophysics Data System (ADS)
Bauer, S. E.; Prather, K. A.; Ault, A. P.
2011-12-01
Aerosol particles in the atmosphere are composed of multiple chemical species. The aerosol mixing state is an important aerosol property that determines the interaction of aerosols with the climate system via radiative forcing and cloud activation. Through the introduction of aerosol microphysics into climate models, mixing state is by now taken into account to a certain extent, and evaluation of mixing state is the next challenge. Here we use data from the Aerosol Time-of-Flight Mass Spectrometer (ATOFMS) and compare the results to the GISS-modelE-MATRIX model, a global climate model including a detailed aerosol microphysical scheme. We use data from various field campaigns probing urban, rural and maritime air masses and compare those to climatological and nudged simulations for the years 2005 to 2009. ATOFMS provides information about the size distributions of several mixing-state classes, including the chemical components of black and organic carbon, sulfates, dust and salts. MATRIX simulates 16 aerosol populations whose definitions are based on mixing state. We have grouped ATOFMS and MATRIX data into similar mixing-state classes and compare the size-resolved number concentrations against each other. As a first result, we find that climatological simulations are rather difficult to evaluate with field data and that nudged simulations give much better agreement. However, this is not just caused by the better fit of natural, meteorologically driven aerosol components, but also by the interaction between meteorology and aerosol formation. The model seems to get the right amount of mixing of black carbon material with sulfate and organic components, but seems to always overestimate the fraction of black carbon that is externally mixed. In order to understand this bias between the model and the ATOFMS data, we will look into microphysical processes near emission sources and investigate the climate relevance of these sub
Unit physics performance of a mix model in Eulerian fluid computations
Vold, Erik; Douglass, Rod
2011-01-25
In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.
Analytical models for well-mixed populations of cooperators and defectors under limiting resources
NASA Astrophysics Data System (ADS)
Requejo, R. J.; Camacho, J.
2012-06-01
In the study of the evolution of cooperation, resource limitations are usually assumed just to provide a finite population size. Recently, however, agent-based models have pointed out that resource limitation may modify the original structure of the interactions and allow for the survival of unconditional cooperators in well-mixed populations. Here, we present analytical simplified versions of two types of agent-based models recently published: one in which the limiting resource constrains the ability of reproduction of individuals but not their survival, and a second one where the limiting resource is necessary for both reproduction and survival. One finds that the analytical models display, with a few differences, the same qualitative behavior of the more complex agent-based models. In addition, the analytical models allow us to expand the study and identify the dimensionless parameters governing the final fate of the system, such as coexistence of cooperators and defectors, or dominance of defectors or of cooperators. We provide a detailed analysis of the occurring phase transitions as these parameters are varied.
The Brown Muck of $B^0$ and $B^0_s$ Mixing: Beyond the Standard Model
Bouchard, Christopher Michael
2011-01-01
Standard Model contributions to neutral $B$ meson mixing begin at the one loop level where they are further suppressed by a combination of the GIM mechanism and Cabibbo suppression. This combination makes $B$ meson mixing a promising probe of new physics, where as yet undiscovered particles and/or interactions can participate in the virtual loops. Relating underlying interactions of the mixing process to experimental observation requires a precise calculation of the non-perturbative process of hadronization, characterized by hadronic mixing matrix elements. This thesis describes a calculation of the hadronic mixing matrix elements relevant to a large class of new physics models. The calculation is performed via lattice QCD using the MILC collaboration's gauge configurations with $2+1$ dynamical sea quarks.
The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection
ERIC Educational Resources Information Center
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.
2013-01-01
Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…
An R2 statistic for fixed effects in the linear mixed model.
Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver
2008-12-20
Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
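The one-to-one mapping from an F statistic to a bounded R²-type measure can be sketched as follows. The exact formula is an assumption based on the abstract's description (numerator df q, denominator df ν), not a verbatim transcription of the paper:

```python
def r2_beta(F, q, nu):
    """Map an F statistic for testing q fixed effects (denominator df nu)
    to a model R2 in [0, 1). Assumed form: R2 = (q/nu)F / (1 + (q/nu)F)."""
    ratio = (q / nu) * F
    return ratio / (1.0 + ratio)

# A large F yields an R2 near 1; F = 0 yields R2 = 0.
r2_small = r2_beta(0.5, 3, 100)
r2_large = r2_beta(50.0, 3, 100)
```

The mapping is monotone in F, so hypothesis-test ordering and R² ordering agree, which is the sense in which the statistic is a one-to-one function of F.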
NASA Astrophysics Data System (ADS)
Kanemura, Shinya; Kikuchi, Mariko; Yagyu, Kei
2016-06-01
We calculate renormalized Higgs boson couplings with gauge bosons and fermions at the one-loop level in the model with an additional isospin singlet real scalar field. These coupling constants can deviate from the predictions in the standard model due to tree-level mixing effects and one-loop contributions of the extra neutral scalar boson. We investigate how they can be significant under the theoretical constraints from perturbative unitarity and vacuum stability and also the condition of avoiding the wrong vacuum. Furthermore, comparing with the predictions in the Type I two Higgs doublet model, we numerically demonstrate how the singlet extension model can be distinguished and identified by using precision measurements of the Higgs boson couplings at future collider experiments.
Neutrino mixing model based on an A4×Z3×Z4 flavor symmetry
NASA Astrophysics Data System (ADS)
Ky, Nguyen Anh; Quang Văn, Phi; Hồng Vân, Nguyen Thi
2016-11-01
A model of neutrino mixing with an A4×Z3×Z4 flavor symmetry is suggested. In addition to the standard model fields, the present model contains six new fields that transform under different representations of A4×Z3×Z4. The model is constructed to deviate slightly from a tribimaximal model in agreement with the current experimental data; thus, all analysis can be done using the perturbation method. Within this model, as an application, a relation between the mixing angles (θ12, θ23, θ13) and the Dirac CP-violation phase (δCP) is established. This relation allows a prediction of δCP and the Jarlskog parameter (JCP). The predicted value of δCP is in the 1σ region of the global fit for both the normal and inverted neutrino mass orderings and gives JCP within the bound |JCP| ≤ 0.04. For an illustration, the model is checked numerically and gives values of the neutrino masses (of the order of 0.1 eV) and the mixing angle θ13 (about 9°) very close to the current experimental data.
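The tribimaximal (TBM) pattern that the model perturbs, and the extraction of mixing angles from a PMNS-like matrix in the standard parameterization, can be written down directly (the angle-extraction helper is illustrative, not the authors' code):

```python
import numpy as np

# Tribimaximal mixing matrix: the leading-order pattern the model deviates from.
U = np.array([
    [ np.sqrt(2/3),  1/np.sqrt(3),  0.0          ],
    [-1/np.sqrt(6),  1/np.sqrt(3), -1/np.sqrt(2) ],
    [-1/np.sqrt(6),  1/np.sqrt(3),  1/np.sqrt(2) ],
])

def mixing_angles(U):
    """(theta12, theta23, theta13) in degrees from a real PMNS-like matrix
    in the standard parameterization."""
    th13 = np.degrees(np.arcsin(abs(U[0, 2])))
    th12 = np.degrees(np.arctan2(abs(U[0, 1]), abs(U[0, 0])))
    th23 = np.degrees(np.arctan2(abs(U[1, 2]), abs(U[2, 2])))
    return th12, th23, th13

th12, th23, th13 = mixing_angles(U)
```

For exact TBM this yields θ12 ≈ 35.3°, θ23 = 45°, θ13 = 0°; the model's perturbation is what lifts θ13 to the observed ≈9°.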
Carbonell-Capella, Juana M; Buniowska, Magdalena; Esteve, María J; Frígola, Ana
2015-10-01
In order to determine the impact of Stevia rebaudiana (SR) addition on the bioaccessibility of bioactive compounds in a newly developed functional beverage based on exotic fruits (mango juice, papaya juice and açaí) mixed with orange juice and oat, an in vitro gastrointestinal digestion was performed. Ascorbic acid, total carotenoids, total phenolics, total anthocyanins, total antioxidant capacity and steviol glycosides were evaluated before and after a simulated gastrointestinal digestion. Salivary and gastric digestion had no substantial effect on any of the major phenolic compounds, ascorbic acid, total antioxidant capacity or steviol glycosides, whereas carotenoids and anthocyanins diminished significantly during the gastric step. All analysed compounds were significantly altered during the pancreatic-bile digestion, and this effect was more marked for carotenoids and total anthocyanins. However, the bioaccessibility of phenolic compounds, anthocyanins, total antioxidant capacity and steviol glycosides increased with SR concentration. Ascorbic acid bioaccessibility was negatively affected by SR addition.
A quantitative approach to combine sources in stable isotope mixing models
Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
Phillips & Koch (2002) outlined a new stable isotope mixing model which incorporates differences in elemental concentrations in the determinations of source proportions in a mixture. They illustrated their method with sensitivity analyses and two examples from the wildlife ecolog...
A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine mechanism of the metabolic interactions occurring during simultaneous inhalation exposures to the organic solvents chloroform and trichloroethylene (TCE).
V...
Nguyen, Nam-Trung; Huang, Xiaoyang
2005-11-01
This paper theoretically and experimentally investigates a micromixer based on combined hydrodynamic focusing and time-interleaved segmentation. Both hydrodynamic focusing and time-interleaved segmentation are used in the present study to reduce mixing path, to shorten mixing time, and to enhance mixing quality. While hydrodynamic focusing reduces the transversal mixing path, time-interleaved sequential segmentation shortens the axial mixing path. With the same viscosity in the different streams, the focused width can be adjusted by the flow rate ratio. The axial mixing path or the segment length can be controlled by the switching frequency and the mean velocity of the flow. Mixing ratio can be controlled by both flow rate ratio and pulse width modulation of the switching signal. This paper first presents a time-dependent two-dimensional analytical model for the mixing concept. The model considers an arbitrary mixing ratio between solute and solvent as well as the axial Taylor-Aris dispersion. A micromixer was designed and fabricated based on lamination of four polymer layers. The layers were machined using a CO2 laser. Time-interleaved segmentation was realized by two piezoelectric valves. The sheath streams for hydrodynamic focusing are introduced through the other two inlets. A special measurement set-up was designed with synchronization of the mixer's switching signal and the camera's trigger signal. The set-up allows a relatively slow and low-resolution CCD camera to freeze and to capture a large transient concentration field. The concentration profile along the mixing channel agrees qualitatively well with the analytical model. The analytical model and the device promise to be suitable tools for studying Taylor-Aris dispersion near the entrance of a flat microchannel.
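The two geometric controls described in the abstract can be captured by first-order estimates: the focused width scales with the sample stream's share of the total flow rate (for equal viscosities), and the segment length is set by mean velocity, switching frequency and duty cycle. These closed forms are simplifying assumptions for illustration, not the paper's full time-dependent 2-D model:

```python
def focused_width(w_channel, q_sample, q_sheath):
    """Focused-stream width for equal-viscosity streams: the sample occupies
    a share of the channel width equal to its share of the total flow rate
    (sheath flow q_sheath entering from both sides). First-order estimate."""
    return w_channel * q_sample / (q_sample + 2.0 * q_sheath)

def segment_length(u_mean, f_switch, duty=0.5):
    """Axial segment length set by the mean velocity, the valve switching
    frequency, and the pulse-width-modulation duty cycle."""
    return u_mean * duty / f_switch

w = focused_width(100e-6, 1.0, 4.0)  # 100 um channel, 1:4 sample:sheath ratio
L = segment_length(0.01, 10.0)       # 10 mm/s mean velocity, 10 Hz switching
```

Here a flow rate ratio of 1:4 per sheath narrows a 100 µm channel to roughly an 11 µm focused stream, and 10 Hz switching at 10 mm/s produces 0.5 mm segments, illustrating how both mixing paths are shortened.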
Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.
Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng
2014-06-01
Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network.
2014-09-30
Submesoscale Flows and Mixing in the Oceanic Surface Layer Using the Regional Oceanic Modeling System (ROMS). M. Jeroen Molemaker (PI), James C… The long-term goals of this project are to further the insight into the dynamics of submesoscale flow in the oceanic surface layer. Using the Regional Oceanic Modeling System (ROMS), we aim to understand the impact of submesoscale processes on tracer mixing at small scales and the transfer of energy
An explicit SU(12) family and flavor unification model with natural fermion masses and mixings
Albright, Carl H.; Feger, Robert P.; Kephart, Thomas W.
2012-07-01
We present an SU(12) unification model with three light chiral families, avoiding any external flavor symmetries. The hierarchy of quark and lepton masses and mixings is explained by higher dimensional Yukawa interactions involving Higgs bosons that contain SU(5) singlet fields with VEVs about 50 times smaller than the SU(12) unification scale. The presented model has been analyzed in detail and found to be in very good agreement with the observed quark and lepton masses and mixings.
Efficient multivariate linear mixed model algorithms for genome-wide association studies.
Zhou, Xiang; Stephens, Matthew
2014-04-01
Multivariate linear mixed models (mvLMMs) are powerful tools for testing associations between single-nucleotide polymorphisms and multiple correlated phenotypes while controlling for population stratification in genome-wide association studies. We present efficient algorithms in the genome-wide efficient mixed model association (GEMMA) software for fitting mvLMMs and computing likelihood ratio tests. These algorithms offer improved computation speed, power and P-value calibration over existing methods, and can deal with more than two phenotypes.
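The likelihood ratio test at the heart of such association scans follows a generic recipe: twice the log-likelihood gap, referred to a chi-square null. This sketch shows only that final step, not GEMMA's efficient mvLMM fitting itself; the degrees of freedom assumed here correspond to one SNP effect per phenotype:

```python
from scipy.stats import chi2

def lrt_pvalue(loglik_null, loglik_alt, df):
    """Likelihood ratio test for a SNP effect: compare twice the
    log-likelihood difference to a chi-square with df degrees of freedom
    (df = number of phenotypes for a single SNP tested jointly)."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return stat, chi2.sf(stat, df)

# Hypothetical REML/ML log-likelihoods for a two-phenotype model
stat, p = lrt_pvalue(-100.0, -95.0, df=2)
```

Repeating this per SNP across the genome, with the mixed-model covariance controlling population stratification, gives the genome-wide scan; the algorithmic contribution of GEMMA is making the per-SNP likelihood evaluations fast.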
Fermion masses and mixing in SU(5)×D4 × U(1) model
NASA Astrophysics Data System (ADS)
Ahl Laamara, R.; Loualidi, M. A.; Miskaoui, M.; Saidi, E. H.
2017-03-01
We propose a supersymmetric SU(5)×Gf GUT model with flavor symmetry Gf = D4×U(1) providing a good description of fermion masses and mixing. The model has twenty-eight free parameters, eighteen of which are fixed to reproduce approximate experimental values of the physical parameters in the quark and charged-lepton sectors. In the neutrino sector, the TBM matrix is generated at leading order through the type I seesaw mechanism, and the deviation from TBM is studied to reconcile the model with the phenomenological values of the mixing angles. Other features in the charged sector, such as the Georgi-Jarlskog relations and the CKM mixing matrix, are also studied.
Eliciting mixed emotions: a meta-analysis comparing models, types, and measures
Berrios, Raul; Totterdell, Peter; Kellett, Stephen
2015-01-01
The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805
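The "minimum index" moderator tested in the meta-analysis scores mixed emotion as the smaller of two opposite-valence ratings, so a subject counts as "mixed" only to the extent both emotions are present. A minimal computation (the ratings below are hypothetical):

```python
import numpy as np

def minimum_index(pos, neg):
    """Per-subject mixed-emotion score: the minimum of two opposite-valence
    ratings. A high score requires BOTH emotions to be rated high."""
    return np.minimum(np.asarray(pos), np.asarray(neg))

happy = [5, 1, 3]  # hypothetical ratings after a bittersweet film clip
sad = [4, 2, 3]
mixed = minimum_index(happy, sad)
```

Because the index is bounded by the weaker of the two ratings, it is a conservative measure, which is consistent with the finding that studies using it yielded smaller effect sizes than subjective mixed-emotion reports.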
Data on copula modeling of mixed discrete and continuous neural time series.
Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou
2016-06-01
Copula is an important tool for modeling neural dependence. Recent work on copula has been expanded to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for joint analysis of spike and local field potential (LFP) with copula modeling. In particular, the details of different model orders and the influence of possible spike contamination in LFP data from the same and different electrode recordings are presented. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data.
Thorvaldsen, Tom; Osnes, Harald; Sundnes, Joakim
2005-12-01
In this paper we present a mixed finite element method for modeling the passive properties of the myocardium. The passive properties are described by a non-linear, transversely isotropic, hyperelastic material model, and the myocardium is assumed to be almost incompressible. Single-field, pure displacement-based formulations are known to cause numerical difficulties when applied to incompressible or slightly compressible material cases. This paper presents an alternative approach in the form of a mixed formulation, where a separately interpolated pressure field is introduced as a primary unknown in addition to the displacement field. Moreover, a constraint term is included in the formulation to enforce (almost) incompressibility. Numerical results presented in the paper demonstrate the difficulties related to employing a pure displacement-based method, applying a set of physically relevant material parameter values for the cardiac tissue. The same problems are not experienced for the proposed mixed method. We show that the mixed formulation provides reasonable numerical results for compressible as well as nearly incompressible cases, also in situations of large fiber stretches. There is good agreement between the numerical results and the underlying analytical models.
Optimal design of mixed-effects PK/PD models based on differential equations.
Wang, Yi; Eskridge, Kent M; Nadarajah, Saralees
2012-01-01
There is a vast literature on the analysis of optimal design of nonlinear mixed-effects models (NLMMs) described by ordinary differential equations (ODEs) with analytic solutions. However, much less has been published on the design of trials to fit such models with nonanalytic solutions. In this article, we use the "direct" method to find parameter sensitivities, which are required during the optimization of models defined as ODEs, and apply them to find D-optimal designs for various specific situations relevant to population pharmacokinetic studies using a particular model with first-order absorption and elimination. In addition, we perform two simulation studies. The first aims to show that the criterion computed from the development of the Fisher information matrix expression is a good measure to compare and optimize population designs, thus avoiding a large number of simulations; in the second, a sensitivity analysis with respect to parameter misspecification allows us to compare the robustness of different population designs constructed in this article.
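The "direct" method augments the model ODEs with sensitivity ODEs and integrates both together; the sensitivities then feed the Fisher information whose determinant is the D-optimality criterion. The sketch below does this for a one-compartment model with first-order absorption (ka) and elimination (ke); the parameter values and sampling times are illustrative, not the paper's designs:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, ka, ke):
    # States: gut amount Ag, central amount Ac, then their sensitivities
    # with respect to ka and ke ("direct" method: sensitivity ODEs are
    # integrated alongside the model ODEs).
    Ag, Ac, Sg_ka, Sc_ka, Sg_ke, Sc_ke = y
    return [
        -ka * Ag,
        ka * Ag - ke * Ac,
        -Ag - ka * Sg_ka,              # d(dAg/dt)/dka
        Ag + ka * Sg_ka - ke * Sc_ka,  # d(dAc/dt)/dka
        -ka * Sg_ke,                   # d(dAg/dt)/dke
        ka * Sg_ke - Ac - ke * Sc_ke,  # d(dAc/dt)/dke
    ]

ka, ke, dose = 1.0, 0.2, 100.0
times = np.array([0.5, 1.0, 2.0, 6.0, 12.0])   # candidate sampling design
sol = solve_ivp(rhs, (0.0, times[-1]), [dose, 0, 0, 0, 0, 0],
                t_eval=times, args=(ka, ke), rtol=1e-8, atol=1e-10)

S = sol.y[[3, 5], :].T          # sensitivities of the observed amount Ac
M = S.T @ S                     # Fisher information (unit error variance)
d_criterion = np.linalg.det(M)  # D-optimality score for this design
```

Comparing `d_criterion` across candidate `times` vectors, and maximizing it, is the design-optimization step; the full population version adds the random-effects structure to the information matrix.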
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
NASA Astrophysics Data System (ADS)
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
parameters of the algorithm, i.e. the maximum number of ratios and the minimum relative group size of data points belonging to each ratio, have to be defined. Computation of the models can be done with statistical software. In this study, Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended for more objective and straightforward calculation of isotope ratios in geochemistry than is current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3) [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007. (doi: 10.1016/j.csda.2006.08.014)
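The study uses R's flexmix; an analogous finite-mixture computation can be sketched in Python with scikit-learn as a substitute. The two underlying ratios and group sizes below are hypothetical simulated values, not the paper's particle data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Simulated per-data-point isotope ratios from a mixed sample with two
# underlying ratios (hypothetical values and measurement scatter).
ratios = np.concatenate([
    rng.normal(0.0072, 0.0001, 150),  # component 1, e.g. near-natural
    rng.normal(0.0300, 0.0005, 80),   # component 2, e.g. enriched
]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(ratios)
means = np.sort(gm.means_.ravel())    # estimated isotope ratios
weights = gm.weights_                 # relative group sizes
```

As in the paper, the mixture recovers each component's ratio and relative group size directly from the pooled data; the small-group-size caveat corresponds to components whose estimated `weights` fall below the minimum the analyst has set.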
Development of a competition model for microbial growth in mixed culture.
Fujikawa, Hiroshi; Munakata, Kanako; Sakha, Mohammad Z
2014-01-01
A novel competition model for describing bacterial growth in mixed culture was developed in this study. Several model candidates were built from our logistic growth model, which precisely describes the growth of a monoculture of bacteria. These candidates were then evaluated for their usefulness in describing the growth of two competing species in mixed culture using Staphylococcus aureus, Escherichia coli, and Salmonella. Bacterial cells of two species grew at initial doses of 10³, 10⁴, and 10⁵ CFU/g at 28°C. Among the candidates, the best at describing all types of growth of two competitors in mixed culture was a model in which the Lotka-Volterra model, a general competition model in ecology, was incorporated as a new term in our growth model. Moreover, the values of the competition coefficient in this model were stable across various combinations of the initial populations of the species. The Baranyi model could also successfully describe the above types of growth in mixed culture when it was coupled with the Gimenez and Dalgaard model; however, the values of its competition coefficients varied with the conditions. The present study suggests that our model could serve as a basic model for describing microbial competition.
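A generic version of this approach, logistic growth for each species with a Lotka-Volterra competition term, can be sketched as below. The growth rates, carrying capacities, and competition coefficients are illustrative assumptions, not the fitted values from the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

def competition(t, N, r, K, a12, a21):
    """Logistic growth for each species with Lotka-Volterra competition terms."""
    N1, N2 = N
    dN1 = r[0] * N1 * (1.0 - (N1 + a12 * N2) / K[0])
    dN2 = r[1] * N2 * (1.0 - (N2 + a21 * N1) / K[1])
    return [dN1, dN2]

r, K = (0.7, 0.7), (1e9, 1e9)            # equal growth rates and capacities
a12, a21 = 1.2, 0.9                      # species 1 feels competition more strongly
sol = solve_ivp(competition, (0.0, 48.0), [1e4, 1e4], args=(r, K, a12, a21))
N1_end, N2_end = sol.y[:, -1]            # species 2 dominates when a12 > 1 > a21
```

With symmetric rates and capacities, classical theory predicts competitive exclusion of the more strongly inhibited species, which the integration reproduces.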
A mixed contact model for an immersed collision between two solid surfaces.
Yang, Fu-Ling; Hunt, Melany L
2008-06-28
Experimental evidence shows that the presence of an ambient liquid can greatly modify the collision process between two solid surfaces. Interactions between the solid surfaces and the surrounding liquid result in energy dissipation at the particle level, which leads to solid-liquid mixture rheology deviating from dry granular flow behaviour. The present work investigates how the surrounding liquid modifies the impact and rebound of solid spheres. Existing collision models use elastohydrodynamic lubrication (EHL) theory to address the surface deformation under the developing lubrication pressure, thereby coupling the motion of the liquid and solid. With EHL theory, idealized smooth particles are made to rebound from a lubrication film. Modified EHL models, however, allow particles to rebound from mutual contacts of surface asperities, assuming negligible liquid effects. In this work, a new contact mechanism, 'mixed contact', is formulated, which considers the interplay between the asperities and the interstitial liquid as part of a hybrid rebound scheme. A recovery factor is further proposed to characterize the additional energy loss due to asperity-liquid interactions. The resulting collision model is evaluated through comparisons with experimental data, exhibiting a better performance than the existing models. In addition to the three non-dimensional numbers that result from the EHL analysis (the wet coefficient of restitution, the particle Stokes number and the elasticity parameter), a fourth parameter is introduced to correlate particle impact momentum to the EHL deformation impulse. This generalized collision model covers a wide range of impact conditions and could be employed in numerical codes to simulate the bulk motion of solid particles with non-negligible liquid effects.
A refined and dynamic cellular automaton model for pedestrian-vehicle mixed traffic flow
NASA Astrophysics Data System (ADS)
Liu, Mianfang; Xiong, Shengwu
2016-12-01
Mixed traffic flow sharing the “same lane” with no lane discipline is a common phenomenon on roads in developing countries. For example, motorized vehicles (m-vehicles) and nonmotorized vehicles (nm-vehicles) may share the m-vehicle lane or nm-vehicle lane, and pedestrians may share the nm-vehicle lane. Simulating pedestrian-vehicle mixed traffic flow consisting of three kinds of traffic objects, m-vehicles, nm-vehicles and pedestrians, can be a challenge because some erratic drivers or pedestrians fail to follow lane discipline. In this paper, we investigate the various moving and interactive behaviours associated with mixed traffic flow, such as lateral drift (including illegal lane-changing and transverse crossing of different lanes), overtaking and forward movement, and propose new moving and interactive rules for pedestrian-vehicle mixed traffic flow based on a refined and dynamic cellular automaton (CA) model. Simulation results indicate that the proposed model can be used to investigate the characteristics of a mixed traffic flow system and corresponding complicated traffic problems, such as the moving characteristics of different traffic objects, interaction phenomena between different traffic objects, traffic jams and traffic conflicts, which are consistent with the actual mixed traffic system. Therefore, the proposed model provides a solid foundation for the management, planning and evacuation of mixed traffic flow.
A mixed time series model of binomial counts
NASA Astrophysics Data System (ADS)
Khoo, Wooi Chen; Ong, Seng Huat
2015-10-01
Continuous time series modelling has been an active research area over the past few decades. However, time series data in the form of correlated counts appear in many situations, such as counts of rainy days and download counts. The study of count data has therefore become popular in time series modelling recently. This article introduces a new mixture model, a univariate non-negative stationary time series model with binomial marginal distribution, arising from the combination of the well-known binomial thinning and Pegram's operators. A brief review of important properties is carried out, and the EM algorithm is applied for parameter estimation. A numerical study is presented to show the performance of the model. Finally, a potential real application is presented to illustrate the advantage of the new mixture model.
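The Pegram operator component of such a model is easy to simulate: with probability phi the process repeats the previous value, otherwise it draws afresh from the Binomial(n, p) marginal; this preserves the marginal exactly and gives lag-k autocorrelation phi**k. The sketch below shows only the Pegram part, not the combined thinning-plus-Pegram model of the article:

```python
import numpy as np

rng = np.random.default_rng(7)

def pegram_binomial_ar1(T, n, p, phi):
    """Pegram-type mixture: with probability phi repeat the previous value,
    otherwise draw a fresh Binomial(n, p) innovation. The Binomial(n, p)
    marginal is preserved and the lag-k autocorrelation is phi**k."""
    x = np.empty(T, dtype=np.int64)
    x[0] = rng.binomial(n, p)
    for t in range(1, T):
        x[t] = x[t - 1] if rng.random() < phi else rng.binomial(n, p)
    return x

x = pegram_binomial_ar1(20_000, n=10, p=0.3, phi=0.5)
```

The simulated series has mean near n*p and lag-1 autocorrelation near phi, matching the stated stationary properties.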
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
Colborne, Scott F.; Rush, Scott A.; Paterson, Gordon; Johnson, Timothy B.; Lantry, Brian F.; Fisk, Aaron T.
2016-01-01
Recent development of multi-dimensional stable isotope models for estimating both foraging patterns and niches has provided the analytical tools to further assess the food webs of freshwater populations. One approach to refine predictions from these analyses is to add a third isotope to the more common two-isotope carbon and nitrogen mixing models, increasing the power to resolve different prey sources. We compared predictions made with two-isotope carbon and nitrogen mixing models and three-isotope models that also included sulphur (δ34S) for the diets of Lake Ontario lake trout (Salvelinus namaycush). We determined the isotopic compositions of lake trout and potential prey fishes sampled from Lake Ontario and then used quantitative estimates of resource use generated by two- and three-isotope Bayesian mixing models (SIAR) to infer feeding patterns of lake trout. Both two- and three-isotope models indicated that alewife (Alosa pseudoharengus) and round goby (Neogobius melanostomus) were the primary prey items, but the three-isotope models were more consistent with recent measures of prey fish abundances and lake trout diets. The lake trout sampled directly from the hatcheries had isotopic compositions derived from the hatchery food, which were distinctively different from those derived from the natural prey sources. Those hatchery signals were retained for months after release, raising the possibility of distinguishing hatchery-reared yearlings from similarly sized naturally reproduced lake trout based on isotopic compositions. Addition of a third isotope resulted in mixing model results that confirmed round goby have become an important component of lake trout diet and may be overtaking alewife as a prey resource.
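The core of an isotope mixing model is a linear mass balance: the mixture signature is a proportion-weighted average of the source signatures, with proportions summing to one. With three tracers and three sources the system can be solved directly, as in this sketch; the end-member values are purely hypothetical, and real SIAR analyses are Bayesian and include trophic enrichment factors and uncertainty:

```python
import numpy as np

# Hypothetical end-member signatures (per mil); after transposing, columns are
# sources and rows are the three tracers delta13C, delta15N, delta34S.
sources = np.array([
    [-21.0, 14.0,  2.0],   # "alewife" (illustrative numbers only)
    [-18.0, 12.0,  6.0],   # "round goby"
    [-24.0, 10.0, -1.0],   # "rainbow smelt"
]).T

mixture = np.array([-20.7, 12.6, 2.6])   # consumer signature (also illustrative)

# Mass balance: sources @ p = mixture, with the proportions p summing to one.
A = np.vstack([sources, np.ones(3)])
b = np.append(mixture, 1.0)
p, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With two tracers this system would be underdetermined for three or more sources, which is exactly why the third isotope sharpens the diet estimates.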
Water model experiments of multiphase mixing in the top-blown smelting process of copper concentrate
NASA Astrophysics Data System (ADS)
Zhao, Hong-liang; Yin, Pan; Zhang, Li-feng; Wang, Sen
2016-12-01
We constructed a 1:10 cold water experimental model by geometrically scaling down an Isa smelting furnace. The mixing processes at different liquid heights, lance diameters, lance submersion depths, and gas flow rates were subsequently measured using the conductivity method. A new criterion was proposed to determine the mixing time. On this basis, the quasi-equations of the mixing time as a function of different parameters were established. The parameters of the top-blown smelting process were optimized using high-speed photography. An excessively high gas flow rate or excessively low liquid height would enhance the fluctuation and splashing of liquid in the bath, which is unfavorable for material mixing. Simultaneously increasing the lance diameter and the lance submersion depth would promote the mixing in the bath, thereby improving the smelting efficiency.
Su, Li; Farewell, Vernon T
2013-01-01
For semi-continuous data, which are a mixture of true zeros and continuously distributed positive values, two-part mixed models provide a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model, but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. PMID:24201470
Response to selection in finite locus models with non-additive effects.
Esfandyari, Hadi; Henryon, Mark; Berg, Peer; Thomasen, Jorn Rind; Bijma, Piter; Sørensen, Anders Christian
2017-01-12
Under the finite-locus model in the absence of mutation, additive genetic variation is expected to decrease when directional selection acts on a population, according to quantitative-genetic theory. However, some theoretical studies of selection suggest that the level of additive variance can be sustained or even increased when non-additive genetic effects are present. We tested the hypothesis that finite-locus models with both additive and non-additive genetic effects maintain more additive genetic variance (V_A) and realize larger medium-to-long-term genetic gains than models with only additive effects when the trait under selection is subject to truncation selection. Four genetic models that included additive, dominance, and additive-by-additive epistatic effects were simulated. The simulated genome for individuals consisted of 25 chromosomes, each with a length of 1 Morgan. One hundred bi-allelic QTL, four on each chromosome, were considered. In each generation, 100 sires and 100 dams were mated, producing five progeny per mating. The population was selected for a single trait (h² = 0.1) for 100 discrete generations with selection on phenotype or BLUP-EBV. V_A decreased under directional truncation selection even in the presence of non-additive genetic effects. Non-additive effects influenced long-term response to selection, and among the genetic models, additive gene action gave the highest response to selection. In addition, in all genetic models, BLUP-EBV resulted in greater fixation of favourable and unfavourable alleles and a higher response than phenotypic selection. In conclusion, for the schemes we simulated, the presence of non-additive genetic effects had little effect on changes in additive variance, and V_A decreased under directional selection.
Demeter, Marc A.; Lemire, Joseph A.; Yue, Gordon; Ceri, Howard; Turner, Raymond J.
2015-01-01
Oil sands surface mining for bitumen results in the formation of oil sands process water (OSPW), containing acutely toxic naphthenic acids (NAs). Potential exists for OSPW toxicity to be mitigated by aerobic degradation of the NAs by microorganisms indigenous to the oil sands tailings ponds, the success of which is dependent on the methods used to exploit the metabolisms of the environmental microbial community. Having hypothesized that the xenobiotic-tolerant biofilm mode-of-life may represent a feasible way to harness environmental microbes for ex situ treatment of OSPW NAs, we aerobically grew OSPW microbes as single and mixed species biofilm and planktonic cultures under various conditions for the purpose of assaying their ability to tolerate and degrade NAs. The NAs evaluated were a diverse mixture of eight commercially available model compounds. Confocal microscopy confirmed the ability of mixed and single species OSPW cultures to grow as biofilms in the presence of the NAs evaluated. qPCR enumeration demonstrated that the addition of supplemental nutrients at a concentration of 1 g L⁻¹ resulted in a population approximately one order of magnitude more numerous than 0.001 g L⁻¹ supplementation. GC-FID analysis revealed that mixed species cultures (regardless of the mode of growth) were the most effective at degrading the NAs tested. All constituent NAs evaluated were degraded below detectable limits with the exception of 1-adamantane carboxylic acid (ACA); subsequent experimentation with ACA as the sole NA also failed to exhibit degradation of this compound. Single species cultures degraded only a select few NA compounds. The degradation trends highlighted many structure-persistence relationships among the eight NAs tested, demonstrating the effect of side chain configuration and alkyl branching on compound recalcitrance. Of all the isolates, the Rhodococcus spp. degraded the greatest number of NA compounds, although still fewer than the mixed species cultures.
Mixed Poisson distributions in exact solutions of stochastic autoregulation models.
Iyer-Biswas, Srividya; Jayaprakash, C
2014-11-01
In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of autoactivation and autoinhibition. Using the Poisson representation, a technique whose particular usefulness in the context of nonlinear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter space qualitatively different behaviors arise. These behaviors include power-law-tailed distributions, bimodal distributions, and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the autoinhibition and autoactivation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.
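A standard mixed Poisson construction, useful for intuition although unrelated to the exact autoregulation solutions in the paper, is the gamma-mixed Poisson, whose marginal is negative binomial and therefore super-Poisson (variance exceeding the mean):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gamma-mixed Poisson: lambda ~ Gamma(shape=r, scale=theta), X | lambda ~ Poisson(lambda).
# The marginal of X is negative binomial: E[X] = r*theta, Var[X] = r*theta*(1 + theta).
r_shape, theta = 2.0, 3.0
lam = rng.gamma(r_shape, theta, size=200_000)
x = rng.poisson(lam)

mean, var = x.mean(), x.var()            # roughly 6 and 24: super-Poisson
```

Sub-Poisson behavior, by contrast, cannot arise from any Poisson mixture, which is part of what makes the exact feedback-model distributions in the paper interesting.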
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies.
Milford, A.; Devaud, C.B.
2010-08-15
The present paper examines the case of autoignition of high pressure methane jets in a shock tube over a range of pre-heated air temperatures in engine-relevant conditions. The two objectives of the present paper are: (i) to examine the effect of the inhomogeneous mixing model on the autoignition predictions relative to the results obtained using homogeneous mixing models and (ii) to see if the magnitude of the change can explain the discrepancy between the predictions of ignition delay previously obtained with homogeneous mixing models and the experimental data. The governing equation of the scalar dissipation rate is solved for transient conditions and two different formulations of the same model are tested and compared: one using the linear model for the conditional velocity and one including the gradient diffusion model. The predicted ignition kernel location and time delay over a range of pre-combustion air temperatures are compared with results obtained using two homogeneous turbulent mixing models and available experimental data. The profiles of conditional velocity and the conditional scalar dissipation rate are examined. Issues related to the conditional velocity model are discussed. It is found that the differences in the predictions are due to the mixing model only. The inhomogeneous model using the gradient conditional velocity model produces much larger ignition delays compared to the other models, whereas the inhomogeneous form including the linear model does not produce any significant differences. The effect of the turbulent inhomogeneous model is larger at high air temperatures and decreases with decreasing air temperatures. In comparison with the measured ignition delays, the inhomogeneous-Gradient model brings a small improvement at high air temperatures over the results from the turbulent homogeneous models. At low air temperatures, other parameters need to be investigated in order to bring the predicted ignition delays and locations within the
Molecular Biomarker-Based Biokinetic Modeling of a PCE-Dechlorinating and Methanogenic Mixed Culture
Heavner, Gretchen L. W.; Rowe, Annette R.; Mansfeldt, Cresten B.; Pan, Ju Khuan; Gossett, James M.; Richardson, Ruth E.
2013-04-16
Bioremediation of chlorinated ethenes via anaerobic reductive dechlorination relies upon the activity of specific microbial populations, most notably Dehalococcoides (DHC) strains. In the lab and the field, Dehalococcoides grow most robustly in mixed communities, which usually contain both fermenters and methanogens. Recently, researchers have been developing quantitative molecular biomarkers to aid in field site diagnostics, and it is hoped that these biomarkers could aid in the modeling of anaerobic reductive dechlorination. A comprehensive biokinetic model of a community containing Dehalococcoides mccartyi (formerly D. ethenogenes) was updated to describe continuously fed reactors with specific biomass levels based on quantitative PCR (qPCR)-based population data (DNA and RNA). The model was calibrated and validated with subsets of chemical and molecular biological data from various continuous feed experiments (n = 24) with different loading rates of the electron acceptor (1.5 to 482 μeeq/L-h), types of electron acceptor (PCE, TCE, cis-DCE) and electron donor to electron acceptor ratios. The resulting model predicted the sum of the dechlorination products vinyl chloride (VC) and ethene (ETH) well. However, VC alone was underpredicted and ETH was overpredicted. Consequently, competitive inhibition among chlorinated ethenes was examined and then added to the model. Additionally, as 16S rRNA gene copy numbers did not provide accurate model fits in all cases, we examined whether an improved fit could be obtained if mRNA levels for key functional enzymes could be used to infer respiration rates. The resulting empirically derived mRNA “adjustment factors” were added to the model for both DHC and the main methanogen in the culture (a Methanosaeta species) to provide a more nuanced prediction of activity. Results of this study suggest that at higher feeding rates competitive inhibition is important and mRNA provides a more accurate indicator of a population’s instantaneous
NASA Technical Reports Server (NTRS)
Seufzer, William J.
2014-01-01
Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.
A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.
Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin
2017-02-01
The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed.
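The mixed MNL choice probability has no closed form, so it is typically approximated by averaging standard logit probabilities over draws of the random coefficients. A minimal simulated-probability sketch with hypothetical covariates and independent normal random coefficients (not the article's generalized nonlinear predictors):

```python
import numpy as np

rng = np.random.default_rng(5)

def mixed_mnl_prob(X, beta_mean, beta_sd, n_draws=2000):
    """Simulated choice probabilities for a mixed MNL with independent normal
    random coefficients and linear-in-parameters utility (illustrative only)."""
    draws = rng.normal(beta_mean, beta_sd, size=(n_draws, len(beta_mean)))
    util = X @ draws.T                        # (n_alternatives, n_draws)
    expu = np.exp(util - util.max(axis=0))    # stabilized softmax per draw
    probs = expu / expu.sum(axis=0)
    return probs.mean(axis=1)                 # average over coefficient draws

X = np.array([[1.0, 0.2], [0.5, 1.0], [0.0, 0.0]])   # hypothetical severity covariates
p = mixed_mnl_prob(X, beta_mean=np.array([0.8, -0.5]), beta_sd=np.array([0.3, 0.3]))
```

The article's generalization replaces the linear utility `X @ beta` with nonlinear predictors of the contributing factors, while the simulation-over-draws machinery stays the same.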
Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction
Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.; Gleichauf, Karla T.; Fong, Derek A.; Monismith, Stephen G.
2016-08-01
In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing on intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.
An a priori DNS study of the shadow-position mixing model
Zhao, Xin -Yu; Bhagatwala, Ankit; Chen, Jacqueline H.; Haworth, Daniel C.; Pope, Stephen B.
2016-01-15
The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the predicted locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and
Fitting milk production curves through nonlinear mixed models.
Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica
2017-03-28
The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) for lactation curves from two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making on a dairy farm. Knowledge of how milk production progresses along each lactation is necessary not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The models were fitted to data from a group of high-production, high-reproduction dairy farms, for first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remainder third lactations (19 382 milk yield tests). PROC NLMIXED in SAS was used to fit the models and estimate their parameters. The diphasic model proved computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggest the selection of MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect, which is related to the magnitude of production, was considered. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
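The Wood curve itself is y(t) = a·t^b·exp(-c·t), with peak yield at t = b/c days in milk. A fixed-effects-only sketch (no cow random effect, so not the mixed-model fit of the article) on synthetic test-day data with assumed parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: y(t) = a * t**b * exp(-c*t); peak at t = b/c."""
    return a * t**b * np.exp(-c * t)

# Synthetic test-day records around assumed parameters (not real herd data).
rng = np.random.default_rng(1)
t = np.linspace(5.0, 305.0, 30)
y = wood(t, 20.0, 0.25, 0.004) * rng.normal(1.0, 0.02, t.size)

popt, _ = curve_fit(wood, t, y, p0=(15.0, 0.2, 0.003))
peak_dim = popt[1] / popt[2]             # estimated days in milk at peak yield
```

Adding the cow random effect, as the article does with PROC NLMIXED, amounts to letting one or more of a, b, c vary per cow around these population values.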
Koolivand, Ali; Rajaei, Mohammad Sadegh; Ghanadzadeh, Mohammad Javad; Saeedi, Reza; Abtahi, Hamid; Godini, Kazem
2017-03-21
The effect of mixing ratio and nutrient addition on the efficiency of a two-stage composting system in removal of total petroleum hydrocarbons (TPH) from storage tank bottom sludge (STBS) was investigated. The system consisted of ten windrow piles as primary composting (PC) followed by four in-vessel reactors as secondary composting (SC). Various initial C/N/P and mixing ratios of STBS to immature compost (IC) were examined in the PC and SC for 12 and 6 weeks, respectively. The removal rates of TPH in the two-stage system (93.72-95.24%) were higher than those in the single-stage one. Depending on the experiments, TPH biodegradation fitted first- and second-order kinetics with rate constants of 0.051-0.334 d(-1) and 0.002-0.165 g kg(-1) d(-1), respectively. The bacteria identified were Pseudomonas sp., Bacillus sp., Klebsiella sp., Staphylococcus sp., and Proteus sp. The study verified that a two-stage composting system is effective in treating the STBS.
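The first- and second-order kinetic fits reported above have closed-form rate constants when evaluated between two concentration points; a minimal sketch (the concentrations below are hypothetical, not the study's data):

```python
import math

def first_order_k(c0, ct, t):
    """First-order kinetics C(t) = C0*exp(-k*t)  ->  k = ln(C0/Ct)/t, in 1/day."""
    return math.log(c0 / ct) / t

def second_order_k(c0, ct, t):
    """Second-order kinetics 1/C(t) - 1/C0 = k*t  ->  k in conc^-1 day^-1."""
    return (1.0 / ct - 1.0 / c0) / t

# Hypothetical example: 94% TPH removal over 12 weeks (84 days)
k1 = first_order_k(c0=100.0, ct=6.0, t=84.0)
```

A full kinetic fit would regress over every sampling time rather than two endpoints, but the same rate expressions apply.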
Johnson, Anthony N; Hromadka, T V
2015-01-01
The Laplace equation that results from specifying either the normal or tangential force equilibrium equation in terms of the warping function or its conjugate can be modeled as a complex variable boundary element method (CVBEM) mixed boundary problem. The CVBEM is a well-known numerical technique that can provide solutions to potential value problems in two or more dimensions by the use of an approximation function derived from the Cauchy Integral in complex analysis. This paper highlights three customizations to the technique.
• A least squares approach to modeling the complex-valued approximation function will be compared and analyzed to determine whether modeling error on the boundary can be reduced without the need to find and evaluate additional linearly independent complex functions.
• The nodal point locations will be moved outside the problem domain.
• Contour and streamline plots representing the warping function and its complementary conjugate are generated simultaneously from the complex-valued approximating function.
PMID:26151000
Modeling the adsorption of mixed gases based on pure gas adsorption properties
NASA Astrophysics Data System (ADS)
Tzabar, N.; Holland, H. J.; Vermeer, C. H.; ter Brake, H. J. M.
2015-12-01
Sorption-based Joule-Thomson (JT) cryocoolers usually operate with pure gases. A sorption-based compressor has many benefits; however, it is limited by the pressure ratios it can provide. Using a mixed refrigerant (MR) instead of a pure refrigerant in JT cryocoolers allows working at much lower pressure ratios. Therefore, it is attractive to use MRs in sorption-based cryocoolers in order to reduce one of their main limitations. The adsorption of mixed gases is usually investigated under steady-state conditions, mainly for storage and separation processes. However, the process in a sorption compressor goes through various temperatures, pressures and adsorption concentrations; therefore, it differs from the common mixed-gas adsorption applications. In order to simulate the sorption process in a compressor, a numerical analysis for mixed gases is developed, based on pure gas adsorption characteristics. The pure gas adsorption properties have been measured for four gases (nitrogen, methane, ethane, and propane) with Norit-RB2 activated carbon. A single adsorption model is desired to describe the adsorption of all four gases. This model is further developed into a mixed-gas adsorption model. In future work more adsorbents will be tested using these four gases, and the adsorption model will be verified against experimental results of mixed-gas adsorption measurements.
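One common way to build a mixed-gas model from pure-gas parameters — not necessarily the model developed in the paper — is the extended Langmuir isotherm; a sketch with hypothetical coefficients:

```python
def extended_langmuir(partial_pressures, qmax, b):
    """Loading of each component from pure-gas Langmuir parameters.

    q_i = qmax_i * b_i * p_i / (1 + sum_j b_j * p_j)
    Reduces to the pure-gas Langmuir isotherm when only one component is present.
    """
    denom = 1.0 + sum(bj * pj for bj, pj in zip(b, partial_pressures))
    return [qi * bi * pi / denom
            for qi, bi, pi in zip(qmax, b, partial_pressures)]

# Hypothetical nitrogen/methane mixture on an activated carbon:
# qmax in mol/kg, b in 1/bar, partial pressures in bar (all illustrative)
loadings = extended_langmuir(partial_pressures=[2.0, 1.0],
                             qmax=[3.0, 5.0], b=[0.1, 0.4])
```

The shared denominator captures competition for sites: each component's loading in the mixture is lower than its pure-gas loading at the same partial pressure.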
An Investigation of a Hybrid Mixing Timescale Model for PDF Simulations of Turbulent Premixed Flames
NASA Astrophysics Data System (ADS)
Zhou, Hua; Kuron, Mike; Ren, Zhuyin; Lu, Tianfeng; Chen, Jacqueline H.
2016-11-01
The transported probability density function (TPDF) method is applicable to all combustion regimes, which makes it attractive for turbulent combustion simulations. However, the modeling of micromixing due to molecular diffusion is still considered a primary challenge for the TPDF method, especially in turbulent premixed flames. Recently, a hybrid mixing rate model for TPDF simulations of turbulent premixed flames has been proposed, which recovers the correct mixing rates in the limits of the flamelet regime and the broken reaction zone regime, while aiming to properly account for the transition in between. In this work, this model is employed in TPDF simulations of turbulent premixed methane-air slot burner flames. The model performance is assessed by comparing the results with those from direct numerical simulation (DNS) and from the conventional constant mechanical-to-scalar mixing rate model. This work is supported by NSFC Grants 51476087 and 91441202.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; D'Costa, Joseph F.
1991-01-01
This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involve time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O
2017-02-01
One important aim in population pharmacokinetics (PK) and pharmacodynamics is the identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess the oracle property, under which an estimator asymptotically performs as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which uses the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance, at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
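The weighting scheme is the essence of AALasso: each coefficient's L1 penalty is scaled by SE(β̂)/|β̂| from an initial ML fit (plain ALasso would use 1/|β̂|). A sketch of the weights and the per-coefficient soft-thresholding they induce in the orthogonal-design case — an illustrative toy, not the paper's nonlinear mixed-effect implementation:

```python
def aalasso_weights(beta_ml, se_ml):
    """AALasso initial weights w_j = SE(beta_j)/|beta_j| (ALasso uses 1/|beta_j|)."""
    return [se / abs(b) for b, se in zip(beta_ml, se_ml)]

def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0), the lasso shrinkage operator."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def adaptive_estimates(beta_ml, se_ml, lam):
    """Orthogonal-design shorthand: shrink each ML estimate by lam * w_j."""
    w = aalasso_weights(beta_ml, se_ml)
    return [soft_threshold(b, lam * wj) for b, wj in zip(beta_ml, w)]
```

Precisely estimated coefficients (small SE relative to |β̂|) get small weights and survive nearly unshrunk; noisy ones are shrunk to exactly zero, which is how covariate selection happens.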
The Closure of the Ocean Mixed Layer Temperature Budget using Level-Coordinate Model Fields
NASA Technical Reports Server (NTRS)
Kim, Seung-Bum; Fukumori, Ichiro; Lee, Tong
2005-01-01
Entrainment is an important element of the mixed layer mass, heat, and temperature budgets. Conventional procedures to estimate entrainment heat advection often do not permit the closure of heat and temperature budgets because of inaccuracies in its formulation. In this study a rigorous approach to evaluate the effect of entrainment using the output of a general circulation model (GCM) that does not have an explicit prognostic mixed layer model is described. The integral elements of the evaluation are 1) the rigorous estimates of the temperature difference between mixed layer water and entrained water at each horizontal grid point, 2) the formulation of the temperature difference such that the budget closes over a volume greater than one horizontal grid point, and 3) the apparent warming of the mixed layer during the mixed layer shoaling to account for the weak vertical temperature gradient within the mixed layer. This evaluation of entrainment heat advection is compared with the estimates by other commonly used ad hoc formulations by applying them in three regions: the north-central Pacific, the Kuroshio Extension, and the Nino-3 areas in the tropical Pacific. In all three areas the imbalance in the mixed layer temperature budget by the ad hoc estimates is significant, reaching a maximum of about 4 K yr(exp -1).
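The entrainment term in the mixed layer temperature budget has the generic form sketched below; the paper's contribution is a more careful estimate of the mixed-layer/entrained-water temperature difference ΔT than ad hoc formulations provide. All numerical values here are hypothetical:

```python
RHO = 1025.0  # seawater density, kg/m^3
CP = 3990.0   # specific heat of seawater, J/(kg K)

def mixed_layer_tendency(q_net, w_e, delta_t, h):
    """Mixed layer temperature tendency, K/s.

    dT/dt = Q_net/(rho*cp*h) - w_e*delta_t/h
    q_net:   net surface heat flux into the layer, W/m^2
    w_e:     entrainment velocity (positive when the layer deepens), m/s
    delta_t: T_mixed_layer - T_entrained, K
    h:       mixed layer depth, m
    (Horizontal advection and diffusion terms of the full budget are omitted.)
    """
    return q_net / (RHO * CP * h) - w_e * delta_t / h

# Hypothetical values: 100 W/m^2 warming vs. entrainment of colder water
dTdt = mixed_layer_tendency(q_net=100.0, w_e=1e-5, delta_t=0.5, h=50.0)
```

Multiplying the tendency by seconds per year puts it in the K yr(exp -1) units quoted for the budget imbalance above.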
Application of fall-line mix models to understand degraded yield
Welser-Sherrill, L; Cooley, J H; Haynes, D A; Wilson, D C; Sherrill, M E; Mancini, R C; Tommasini, R
2008-02-28
Mixing between fuel and shell material is an important topic in the inertial confinement fusion community, and is commonly accepted as the primary mechanism for neutron yield degradation. Typically, radiation hydrodynamic simulations that lack mixing (clean simulations) tend to considerably overestimate the neutron yield. We present here a series of yield calculations based on a variety of fall-line inspired mix models. The results are compared to a series of OMEGA experiments which provide total neutron yields and time-dependent yield rates.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Wolf, D. E.
1984-01-01
The interactive phenomena that occur in supersonic jet mixing flowfields, and numerical modeling techniques developed to analyze such phenomena, are discussed. A spatial marching procedure based on solving the parabolized Navier-Stokes jet mixing equations is presented. This procedure combines shock-capturing methodology for the analysis of supersonic mixing regions with pressure-split methodology for the analysis of subsonic mixing regions. The two regions are coupled at viscous sonic lines utilizing a viscous-characteristic coupling procedure. Specialized techniques for the treatment of jet boundary growth, strong discontinuities (Mach disks), and small embedded subsonic zones (behind Mach disks) are presented. Turbulent processes are represented by two-equation turbulence model formulations. In Part II of this article, numerical studies are presented for a variety of supersonic jet interactive phenomena.
Estimating the numerical diapycnal mixing in the GO5.0 ocean model
NASA Astrophysics Data System (ADS)
Megann, Alex; Nurser, George
2014-05-01
Constant-depth (or "z-coordinate") ocean models such as MOM and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes (e.g. Hofmann and Maqueda, 2006), and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al., 2013). It uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. Two approaches to quantifying the numerical diapycnal mixing in this model are described: the first is based on the isopycnal watermass analysis of Lee et al. (2002), while the second uses a passive tracer to diagnose mixing across density surfaces. Results from these two methods will be compared and contrasted. Hofmann, M. and Maqueda, M. A. M., 2006. Performance of a second-order moments advection scheme in an ocean general circulation model. JGR-Oceans, 111(C5). Lee, M.-M., Coward, A. C., Nurser, A. G., 2002. Spurious diapycnal mixing of deep waters in an eddy-permitting global ocean model. JPO 32, 1522-1535. Megann, A., Storkey, D., Aksenov, Y., Alderson, S., Calvert, D., Graham, T., Hyder, P., Siddorn, J., and Sinha, B., 2013. GO5.0: The joint NERC-Met Office NEMO global ocean model for use in coupled and forced applications, Geosci. Model Dev. Discuss., 6, 5747-5799.
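The passive-tracer diagnosis mentioned above can be reduced to a one-dimensional identity: for a tracer spreading diffusively across density surfaces, its second moment grows as σ² = σ₀² + 2κt, so an effective diffusivity falls out of two snapshots. An idealized sketch, not the GO5.0 diagnostics themselves:

```python
def second_moment(z, c):
    """Variance of tracer position, weighting each level z[i] by concentration c[i]."""
    total = sum(c)
    mean = sum(zi * ci for zi, ci in zip(z, c)) / total
    return sum(ci * (zi - mean) ** 2 for zi, ci in zip(z, c)) / total

def effective_diffusivity(sigma2_early, sigma2_late, dt):
    """kappa = d(sigma^2)/dt / 2 for Fickian spreading, in m^2/s."""
    return (sigma2_late - sigma2_early) / (2.0 * dt)
```

Applying this in density coordinates (rather than depth) is what isolates the diapycnal component; the spurious numerical part is whatever exceeds the model's explicitly parameterised diffusivity.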
A Mixed Approach for Modeling Blood Flow in Brain Microcirculation
NASA Astrophysics Data System (ADS)
Peyrounette, M.; Sylvie, L.; Davit, Y.; Quintard, M.
2014-12-01
We have previously demonstrated [1] that the vascular system of the healthy human brain cortex is a superposition of two structural components, each corresponding to a different spatial scale. At small scale, the vascular network has a capillary structure, which is homogeneous and space-filling over a cut-off length. At larger scale, veins and arteries conform to a quasi-fractal branched structure. This structural duality is consistent with the functional duality of the vasculature, i.e. distribution and exchange. From a modeling perspective, this can be viewed as the superposition of: (a) a continuum model describing slow transport in the small-scale capillary network, characterized by a representative elementary volume and effective properties; and (b) a discrete network approach [2] describing fast transport in the arterial and venous network, which cannot be homogenized because of its fractal nature. This problem is analogous to modeling problems encountered in geological media, e.g., in petroleum engineering, where fast conducting channels (wells or fractures) are embedded in a porous medium (reservoir rock). An efficient method to reduce the computational cost of fractures/continuum simulations is to use relatively large grid blocks for the continuum model. However, this also makes it difficult to accurately couple both structural components. In this work, we solve this issue by adapting the "well model" concept used in petroleum engineering [3] to brain-specific 3-D situations. We obtain a unique linear system of equations describing the discrete network, the continuum and the well model coupling. Results are presented for realistic geometries and compared with a non-homogenized small-scale network model of an idealized periodic capillary network of known permeability. [1] Lorthois & Cassot, J. Theor. Biol. 262, 614-633, 2010. [2] Lorthois et al., Neuroimage 54: 1031-1042, 2011. [3] Peaceman, SPE J. 18, 183-194, 1978.
Liao, Chenyi; Zhao, Xiaochuan; Liu, Jiyuan; Schneebeli, Severin T; Shelley, John C; Li, Jianing
2017-03-20
The structures and dynamics of protein complexes are often challenging to model in heterogeneous environments such as biological membranes. Herein, we meet this fundamental challenge at attainable cost with all-atom, mixed-resolution, and coarse-grained models of vital membrane proteins. We systematically simulated five complex models formed by two distinct G protein-coupled receptors (GPCRs) in the lipid-bilayer membrane on ns-to-μs timescales. These models, which suggest a swinging motion of an intracellular loop, provide for the first time the molecular details of the regulatory role of such a loop. For the models at different resolutions, we observed consistent structural stability but various levels of speed-up in protein dynamics. The mixed-resolution and coarse-grained models show two and four times faster protein diffusion than the all-atom models, in addition to 4- and 400-fold speed-ups in simulation performance. Furthermore, by elucidating the strengths and challenges of combining all-atom models with reduced-resolution models, this study can serve as a guide to efficiently simulating other complex systems in heterogeneous environments.
Statistical tests with accurate size and power for balanced linear mixed models.
Muller, Keith E; Edwards, Lloyd J; Simpson, Sean L; Taylor, Douglas J
2007-08-30
The convenience of linear mixed models for Gaussian data has led to their widespread use. Unfortunately, standard mixed model tests often have greatly inflated test size in small samples. Many applications with correlated outcomes in medical imaging and other fields have simple properties which do not require the generality of a mixed model. Alternately, stating the special cases as a general linear multivariate model allows analysing them with either the univariate or multivariate approach to repeated measures (UNIREP, MULTIREP). Even in small samples, an appropriate UNIREP or MULTIREP test always controls test size and has a good power approximation, in sharp contrast to mixed model tests. Hence, mixed model tests should never be used when one of the UNIREP tests (uncorrected, Huynh-Feldt, Geisser-Greenhouse, Box conservative) or MULTIREP tests (Wilks, Hotelling-Lawley, Roy's, Pillai-Bartlett) apply. Convenient methods give exact power for the uncorrected and Box conservative tests. Simulations demonstrate that new power approximations for all four UNIREP tests eliminate most inaccuracy in existing methods. In turn, free software implements the approximations to give a better choice of sample size. Two repeated measures power analyses illustrate the methods. The examples highlight the advantages of examining the entire response surface of power as a function of sample size, mean differences, and variability.
Computational modeling of jet induced mixing of cryogenic propellants in low-G
NASA Technical Reports Server (NTRS)
Hochstein, J. I.; Gerhart, P. M.; Aydelot, J. C.
1984-01-01
The SOLA-ECLIPSE Code is being developed to enable computational prediction of jet induced mixing in cryogenic propellant tanks in a low-gravity environment. Velocity fields, predicted for scale model tanks, are presented which compare favorably with the available experimental data. A full scale liquid hydrogen tank for a typical Orbit Transfer Vehicle is analyzed with the conclusion that coupling an axial mixing jet with a thermodynamic vent system appears to be a viable concept for the control of tank pressure.
2013-09-30
DISTRIBUTION A: Approved for public release; distribution unlimited. Submesoscale Flows and Mixing in the Ocean Surface Layer Using the Regional Oceanic Modeling System. The long-term goals of this project are to further the insight into the dynamics of submesoscale flow in the oceanic surface layer. Using the regional oceanic modeling system (ROMS) we aim to understand the impact of submesoscale processes on the mixing at small scales of tracers and the transfer of
Sales, Pablo S; Fernández, Mariana A
2016-05-01
This study investigates the effect of a mixed surfactant system on the desorption of polycyclic aromatic hydrocarbons (PAHs) from soil model systems. The interaction of a non-ionic surfactant, Tween 80, and an anionic one, sodium laurate, forming mixed micelles, produces several beneficial effects, including reduction of adsorption onto solid of the non-ionic surfactant, decrease in the precipitation of the fatty acid salt, and synergism to solubilize PAHs from solids compared with individual surfactants.
Wave-induced upper-ocean mixing in a climate model of intermediate complexity
NASA Astrophysics Data System (ADS)
Babanin, Alexander V.; Ganopolski, Andrey; Phillips, William R. C.
Climate modelling, to a great extent, is based on simulating air-sea interactions at larger scales. Small-scale interactions and related phenomena, such as wind-generated waves and wave-induced turbulence, are sub-grid processes for such models and therefore cannot be simulated explicitly. Meanwhile, the waves play the principal role in upper-ocean mixing. This role is usually parameterized, mostly to account for the wave-breaking turbulence and to describe downward diffusion of such turbulence. The main purpose of the paper is to demonstrate that an important physical mechanism, the ocean mixing due to waves, is presently missing in the climate models, whereas the effect of this mixing is significant. It is argued that the mixing role of the surface waves is not limited to the mere transfer of the wind stress and energy across the ocean interface by means of breaking and surface currents. The waves facilitate two processes in the upper ocean which can deliver turbulence to depths of the order of 100 m directly, rather than diffusing it from the surface. The first process is due to the capacity of the waves to generate turbulence, unrelated to wave breaking, at all depths where the wave orbital motion is significant. The second process is Langmuir circulation, triggered by the waves. Such wave-controlled mixing should cause seasonal variations of the mixed-layer depth, which regulates the thermodynamic balance between the ocean and atmosphere. In the present paper, these variations are parameterized in terms of the global winds. The variable mixed-layer depth is then introduced in the climate model of intermediate complexity CLIMBER-2 with the purpose of reproducing the pre-industrial climate. Comparisons are conducted with the NRL global atlas of the mixed layer, and the performance of the wave-mixing parameterisations was found to be satisfactory in circumstances where the mixing is expected to be dominated by the wind-generated waves. It is shown that
Secondary flows enhance mixing in a model of vibration-assisted dialysis
NASA Astrophysics Data System (ADS)
Pitre, John; Mueller, Bruce; Lewis, Susan; Bull, Joseph
2014-11-01
Hemodialysis is an integral part of treatment for patients with end stage renal disease. While hemodialysis has traditionally been described as a diffusion-dominated process, recent in vitro work has shown that vibration of the dialyzer can enhance the clearance of certain solutes during treatment. We hypothesize that the addition of vibration generates secondary flows in the dialysate compartment. These flows, perpendicular to the longitudinal axis of the dialysis fibers, advect solute away from the fiber walls, thus maintaining a larger concentration gradient and enhancing diffusion. Using the finite element method, we simulated the flow of dialysate through a hexagonally-packed array of cylinders and the transport of solute away from the cylinder walls. The addition of vibration was modeled using sinusoidal body forces of various frequencies and amplitudes. Using the variance of the concentration field as a metric, we found that vibration improves mixing according to a power law dependency on frequency. We will discuss the implications of these computational results on our understanding of the in vitro experiments and propose optimal vibration patterns for improving clearance in dialysis treatments. This work was supported by the Michigan Institute for Clinical and Health Research and NIH Grant UL1TR000433.
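Two pieces of the analysis above are easy to make concrete: the variance-of-concentration mixing metric, and recovery of a power-law exponent from paired (frequency, mixing) observations via a log-log fit. A sketch with synthetic data; the actual study evaluated these on finite element concentration fields:

```python
import math

def field_variance(c):
    """Variance of the concentration field: 0 means perfectly mixed."""
    mean = sum(c) / len(c)
    return sum((x - mean) ** 2 for x in c) / len(c)

def power_law_exponent(xs, ys):
    """Least-squares slope of log(y) vs log(x), i.e. b in y = a * x^b."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
    den = sum((u - mx) ** 2 for u in lx)
    return num / den

# Synthetic example: a mixing measure falling as frequency^-0.5 (hypothetical)
freqs = [1.0, 2.0, 4.0, 8.0]
measure = [f ** -0.5 for f in freqs]
b = power_law_exponent(freqs, measure)  # -> -0.5
```

In the study's setting, `measure` would be the residual field variance after a fixed simulation time at each vibration frequency.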
Mixed-effects state-space models for analysis of longitudinal dynamic systems.
Liu, Dacheng; Lu, Tao; Niu, Xu-Feng; Wu, Hulin
2011-06-01
The rapid development of new biotechnologies allows us to deeply understand biomedical dynamic systems in more detail and at a cellular level. Many of the subject-specific biomedical systems can be described by a set of differential or difference equations that are similar to engineering dynamic systems. In this article, motivated by HIV dynamic studies, we propose a class of mixed-effects state-space models based on the longitudinal feature of dynamic systems. State-space models with mixed-effects components are very flexible in modeling the serial correlation of within-subject observations and between-subject variations. The Bayesian approach and the maximum likelihood method for standard mixed-effects models and state-space models are modified and investigated for estimating unknown parameters in the proposed models. In the Bayesian approach, full conditional distributions are derived and the Gibbs sampler is constructed to explore the posterior distributions. For the maximum likelihood method, we develop a Monte Carlo EM algorithm with a Gibbs sampler step to approximate the conditional expectations in the E-step. Simulation studies are conducted to compare the two proposed methods. We apply the mixed-effects state-space model to a data set from an AIDS clinical trial to illustrate the proposed methodologies. The proposed models and methods may also have potential applications in other biomedical system analyses such as tumor dynamics in cancer research and genetic regulatory network modeling.
Mixing and Transport in the Small Intestine: A Lattice-Boltzmann Model
NASA Astrophysics Data System (ADS)
Banco, Gino; Brasseur, James; Wang, Yanxing; Aliani, Amit; Webb, Andrew
2007-11-01
The two primary functions of the small intestine are absorption of nutrients into the blood stream and transport of material along the gut for eventual evacuation. The primary transport mechanism is peristalsis. The time scales for absorption, however, rely on mixing and transport of molecules between the bulk flow and epithelial surface. Two basic motions contribute to mixing: peristalsis and repetitive segmental contraction of short segments of the gut. In this study we evaluate the relative roles of peristalsis vs. segmental contraction on the degree of mixing and time scales of nutrient transport to the epithelium using a two-dimensional model of flow and mixing in the small intestine. The model uses the lattice-Boltzmann framework with second-order moving boundary conditions and passive scalar (Sc = 10). Segmental and peristaltic contractions were parameterized using magnetic resonance imaging data from rat models. The Reynolds numbers (1.9), segment lengths (33 mm), max radii (2.75 mm) and occlusion ratios (0.33) were matched for direct comparison. Mixing is quantified by the rate of dispersion of scalar from an initial concentration in the center of the segment. We find that radial mixing is more rapid with segmental than peristaltic motion, that radial dispersion is much more rapid than axial, and that axial is comparable between the motions.
Xu, Xu Steven; Samtani, Mahesh N; Dunne, Adrian; Nandy, Partha; Vermeulen, An; De Ridder, Filip
2013-08-01
Beta regression models have been recommended for continuous bounded outcome scores that are often collected in clinical studies. Implementing beta regression in NONMEM presents difficulties since it does not provide gamma functions required by the beta distribution density function. The objective of the study was to implement mixed-effects beta regression models in NONMEM using Nemes' approximation to the gamma function and to evaluate the performance of the NONMEM implementation of mixed-effects beta regression in comparison to the commonly used SAS approach. Monte Carlo simulations were conducted to simulate continuous outcomes within an interval of (0, 70) based on a beta regression model in the context of Alzheimer's disease. Six samples per subject over a 3 years period were simulated at 0, 0.5, 1, 1.5, 2, and 3 years. One thousand trials were simulated and each trial had 250 subjects. The simulation-reestimation exercise indicated that the NONMEM implementation using Laplace and Nemes' approximations provided only slightly higher bias and relative RMSE (RRMSE) compared to the commonly used SAS approach with adaptive Gaussian quadrature and built-in gamma functions, i.e., the difference in bias and RRMSE for fixed-effect parameters, random effects on intercept, and the precision parameter were <1-3 %, while the difference in the random effects on the slope was <3-7 % under the studied simulation conditions. The mixed-effect beta regression model described the disease progression for the cognitive component of the Alzheimer's disease assessment scale from the Alzheimer's Disease Neuroimaging Initiative study. In conclusion, with Nemes' approximation of the gamma function, NONMEM provided comparable estimates to those from SAS for both fixed and random-effect parameters. In addition, the NONMEM run time for the mixed beta regression models appeared to be much shorter compared to SAS, i.e., 1-2 versus 20-40 s for the model and data used in the manuscript.
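The workaround described above hinges on Nemes' closed-form approximation to the gamma function, which uses elementary operations only (as required inside NONMEM's abbreviated code). A sketch of the log-gamma version and the mean/precision beta log-density built from it:

```python
import math

def lgamma_nemes(z):
    """Nemes' approximation to ln Gamma(z), elementary operations only."""
    return (z * math.log(z + 1.0 / (12.0 * z - 0.1 / z))
            - z + 0.5 * math.log(2.0 * math.pi / z))

def beta_logpdf(y, mu, phi):
    """Log-density at y in (0,1) of a Beta with mean mu and precision phi.

    Shape parameters: a = mu*phi, b = (1 - mu)*phi.
    """
    a = mu * phi
    b = (1.0 - mu) * phi
    return (lgamma_nemes(a + b) - lgamma_nemes(a) - lgamma_nemes(b)
            + (a - 1.0) * math.log(y) + (b - 1.0) * math.log(1.0 - y))
```

As in the abstract's Alzheimer's example, a score bounded on (0, 70) is first rescaled to the open unit interval, e.g. y = score/70 (with endpoint adjustments), before applying the density.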
Conflicts Management Model in School: A Mixed Design Study
ERIC Educational Resources Information Center
Dogan, Soner
2016-01-01
The object of this study is to evaluate the reasons for conflicts occurring in school according to perceptions and views of teachers and resolution strategies used for conflicts and to build a model based on the results obtained. In the research, explanatory design including quantitative and qualitative methods has been used. The quantitative part…
Bayesian generalized linear mixed modeling of Tuberculosis using informative priors
Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa
2017-01-01
TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th out of the 22 countries hit hardest by TB. Although many pieces of research have been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach to statistics is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under the Bayesian approach with both non-informative and informative priors, using the South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to set up the priors for the 2014 model. PMID:28257437
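The prior-construction idea — fit earlier survey waves, use the result as the prior for the current wave — has a minimal conjugate analogue for a single proportion, sketched below with hypothetical counts (the paper fits a full generalized linear mixed model, not this toy):

```python
def beta_posterior(prior_a, prior_b, cases, non_cases):
    """Beta(prior_a, prior_b) prior + binomial likelihood -> Beta posterior."""
    return prior_a + cases, prior_b + non_cases

def beta_mean(a, b):
    return a / (a + b)

# Informative prior from (hypothetical) pooled 2011-2013 waves:
# 30 TB cases out of 1000 respondents, starting from a flat Beta(1, 1)
a0, b0 = beta_posterior(1.0, 1.0, 30, 970)
# Update with a (hypothetical) 2014 wave: 12 cases out of 400 respondents
a1, b1 = beta_posterior(a0, b0, 12, 388)
prevalence = beta_mean(a1, b1)
```

The informative prior pulls the 2014 estimate toward the earlier waves; with a non-informative prior the estimate would rest on the 2014 counts alone.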
Multitrait mixed modeling and categorical data analyses of phenotypic variances
Technology Transfer Automated Retrieval System (TEKTRAN)
Quantitative and categorical data were digitally recorded, measured or scored on whole canopies; single plants, leaves, and siliques; and on random seed samples of 224 genotypes in a phenotyping nursery of Brassica napus. They were used to (1) develop a pyramiding phenotyping model based on multitra...
Effective Genetic-Risk Prediction Using Mixed Models
Golan, David; Rosset, Saharon
2014-01-01
For predicting genetic risk, we propose a statistical approach that is specifically adapted to dealing with the challenges imposed by disease phenotypes and case-control sampling. Our approach (termed Genetic Risk Scores Inference [GeRSI]), combines the power of fixed-effects models (which estimate and aggregate the effects of single SNPs) and random-effects models (which rely primarily on whole-genome similarities between individuals) within the framework of the widely used liability-threshold model. We demonstrate in extensive simulation that GeRSI produces predictions that are consistently superior to current state-of-the-art approaches. When applying GeRSI to seven phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) study, we confirm that the use of random effects is most beneficial for diseases that are known to be highly polygenic: hypertension (HT) and bipolar disorder (BD). For HT, there are no significant associations in the WTCCC data. The fixed-effects model yields an area under the ROC curve (AUC) of 54%, whereas GeRSI improves it to 59%. For BD, using GeRSI improves the AUC from 55% to 62%. For individuals ranked at the top 10% of BD risk predictions, using GeRSI substantially increases the BD relative risk from 1.4 to 2.5. PMID:25279982
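The AUC figures quoted above can be computed with the Mann-Whitney formulation of the AUC. A minimal sketch on simulated risk scores follows; the 0.5 shift between cases and controls is an arbitrary choice, not a WTCCC quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(case_scores, control_scores):
    """AUC as the Mann-Whitney probability that a random case outranks a
    random control (ties count one half)."""
    ca = np.asarray(case_scores)[:, None]
    co = np.asarray(control_scores)[None, :]
    return float((ca > co).mean() + 0.5 * (ca == co).mean())

# Toy liability-style risk scores: cases shifted upward by 0.5 (arbitrary).
controls = rng.normal(0.0, 1.0, 5_000)
cases = rng.normal(0.5, 1.0, 5_000)
print(f"AUC = {auc(cases, controls):.3f}")
```

Identical score distributions give AUC 0.5; better-separated scores push it toward 1, which is the scale on which the 54% to 59% improvements above are reported.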
NASA Astrophysics Data System (ADS)
Rupšys, P.
2015-10-01
A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method was used, and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
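The mixed-effects SDE idea can be sketched with an Euler-Maruyama simulation of a mean-reverting height equation whose growth-rate parameter carries a tree-level random effect. This is a simplified stand-in: the paper's SDE system, copula step, and two-step maximum likelihood fit are not reproduced, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Mean-reverting growth SDE with a tree-level random effect on the rate:
#   dH_i = beta_i * (alpha - H_i) dt + sigma dW_i,   beta_i ~ N(b, tau^2)
# (simplified stand-in; parameter values are invented)
alpha, b, tau, sigma = 25.0, 0.08, 0.02, 0.3
n_trees, n_steps, dt = 200, 600, 0.1

beta = rng.normal(b, tau, n_trees)    # per-tree random effects
H = np.full(n_trees, 1.5)             # initial height, m
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_trees)
    H += beta * (alpha - H) * dt + sigma * dW

print(f"mean simulated height after {n_steps * dt:.0f} yr: {H.mean():.1f} m")
```

The random effect makes each tree approach the asymptote at its own rate, which is the between-tree variability the mixed-effects formulation is meant to capture.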
Mechanisms and modeling of the effects of additives on the nitrogen oxides emission
NASA Technical Reports Server (NTRS)
Kundu, Krishna P.; Nguyen, Hung Lee; Kang, M. Paul
1991-01-01
A theoretical study of the emission of oxides of nitrogen in the combustion of hydrocarbons is presented. The current understanding of the mechanisms and rate parameters for gas-phase reactions was used to calculate the NO(x) emission. The possible effects of different chemical species on thermal NO(x) over long time scales are discussed. The mixing of these additives at various stages of combustion was considered and NO(x) concentrations were calculated; effects of temperature were also considered. Chemicals such as hydrocarbons, H2, CH3OH, NH3, and other nitrogen species were chosen as additives in this discussion. Results of these calculations can be used to evaluate the effects of these additives on NO(x) emission in industrial combustion systems.
Experience with mixed MPI/threaded programming models
May, J M; Supinski, B R
1999-04-01
A shared memory cluster is a parallel computer that consists of multiple nodes connected through an interconnection network. Each node is a symmetric multiprocessor (SMP) unit in which multiple CPUs share uniform access to a pool of main memory. The SGI Origin 2000, Compaq (formerly DEC) AlphaServer Cluster, and recent IBM RS6000/SP systems are all variants of this architecture. The SGI Origin 2000 has hardware that allows tasks running on any processor to access any main memory location in the system, so all the memory in the nodes forms a single shared address space. This is called a nonuniform memory access (NUMA) architecture because it gives programs a single shared address space, but the access time to different memory locations varies. In the IBM and Compaq systems, each node's memory forms a separate address space, and tasks communicate between nodes by passing messages or using other explicit mechanisms. Many large parallel codes use standard MPI calls to exchange data between tasks in a parallel job, and this is a natural programming model for distributed memory architectures. On a shared memory architecture, message passing is unnecessary if the code is written to use multithreading: threads run in parallel on different processors, and they exchange data simply by reading and writing shared memory locations. Shared memory clusters combine architectural elements of both distributed memory and shared memory systems, and they support both message passing and multithreaded programming models. Application developers are now trying to determine which programming model is best for these machines. This paper presents initial results of a study aimed at answering that question. We interviewed developers representing nine scientific code groups at Lawrence Livermore National Laboratory (LLNL). All of these groups are attempting to optimize their codes to run on shared memory clusters, specifically the IBM and DEC platforms at LLNL. This paper will focus on ease
Behavior changes in SIS STD models with selective mixing
Hyman, J.M.; Li, J.
1997-08-01
The authors propose and analyze a heterogeneous, multigroup, susceptible-infective-susceptible (SIS) sexually transmitted disease (STD) model where the desirability and acceptability in partnership formations are functions of the infected individuals. They derive explicit formulas for the epidemic thresholds, prove the existence and uniqueness of the equilibrium states for the two-group model and provide a complete analysis of their local and global stability. The authors then investigate the effects of behavior changes on the transmission dynamics and analyze the sensitivity of the epidemic to the magnitude of the behavior changes. They verify that if people modify their behavior to reduce the probability of infection with individuals in highly infected groups, through either reduced contacts, reduced partner formations, or using safe sex, the infection level may be decreased. However, if people continue to have intragroup and intergroup partnerships, then changing the desirability and acceptability formation cannot eradicate the epidemic once it exceeds the epidemic threshold.
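A minimal numerical sketch of a two-group SIS model shows the kind of behavior-change experiment described: lowering intragroup transmission in the high-activity group reduces, but here does not eliminate, the endemic level. The transmission matrix and rates below are illustrative, not the paper's.

```python
import numpy as np

# Two-group SIS sketch with intergroup mixing (illustrative parameters):
#   dI_k/dt = (N_k - I_k) * sum_j beta[k, j] * I_j / N_j - gamma * I_k
beta = np.array([[0.30, 0.05],
                 [0.05, 0.10]])       # within/between-group transmission rates
gamma = 0.2                           # recovery rate
N = np.array([1000.0, 1000.0])

def endemic_level(beta, steps=20_000, dt=0.05):
    """Forward-Euler integration to (near) the endemic equilibrium."""
    I = np.array([10.0, 10.0])
    for _ in range(steps):
        I += dt * ((N - I) * (beta @ (I / N)) - gamma * I)
    return I / N

prevalence = endemic_level(beta)
print("equilibrium prevalence:", np.round(prevalence, 3))

# Behavior change: reduced intragroup transmission in the high-activity group
# lowers, but (with intergroup contact kept) does not eliminate, the epidemic.
beta2 = beta.copy()
beta2[0, 0] = 0.20
prevalence2 = endemic_level(beta2)
print("after behavior change:", np.round(prevalence2, 3))
```

The second run stays above the epidemic threshold, echoing the abstract's point that intergroup partnerships can sustain the epidemic after a partial behavior change.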
Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M
2015-03-01
Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs.
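The core GAM idea, letting the data choose the functional form instead of imposing linearity, can be sketched with a penalized truncated-power spline basis fitted by ridge-regularized least squares. This is a hand-rolled stand-in for a GAM smoother; the data and penalty are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Short series with a nonlinear trend: a straight line (parametric choice)
# misses the curvature that a penalized spline smoother recovers.
t = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)

# Truncated-power cubic spline basis with a ridge penalty on the spline part.
knots = np.linspace(0.1, 0.9, 8)
X = np.column_stack([np.ones_like(t), t, t**2, t**3] +
                    [np.clip(t - k, 0.0, None) ** 3 for k in knots])
P = np.eye(X.shape[1])
P[:4, :4] = 0.0                       # leave the polynomial part unpenalized
beta = np.linalg.solve(X.T @ X + 1e-4 * P, X.T @ y)
rss_spline = float(np.sum((y - X @ beta) ** 2))

# Straight-line fit for comparison.
A = np.column_stack([np.ones_like(t), t])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rss_line = float(np.sum((y - A @ coef) ** 2))

print(f"RSS linear: {rss_line:.2f}, RSS spline: {rss_spline:.2f}")
```

The spline's residual sum of squares drops to roughly the noise floor, illustrating why a GAM-style smoother is useful when trend in an SCD is not linear.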
Examples of Mixed-Effects Modeling with Crossed Random Effects and with Binomial Data
ERIC Educational Resources Information Center
Quene, Hugo; van den Bergh, Huub
2008-01-01
Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not nested. Traditional ANOVAs are compared against…
Marketing for a Web-Based Master's Degree Program in Light of Marketing Mix Model
ERIC Educational Resources Information Center
Pan, Cheng-Chang
2012-01-01
The marketing mix model was applied with a focus on Web media to re-strategize a Web-based Master's program in a southern state university in U.S. The program's existing marketing strategy was examined using the four components of the model: product, price, place, and promotion, in hopes to repackage the program (product) to prospective students…
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Taking Advantage of Model-Driven Engineering Foundations for Mixed Interaction Design
NASA Astrophysics Data System (ADS)
Gauffre, Guillaume; Dubois, Emmanuel
New forms of interactive systems, hereafter referred to as Mixed Interactive Systems (MIS), are based on the use of physical artefacts present in the environment. Mixing the digital and physical worlds affects the development of interactive systems, especially from the point of view of the design resources, which need to express new dimensions. Consequently, there is a crucial need to clearly describe the content and utility of the recent models associated with these new interaction forms. Based on existing initiatives in the field of HCI, this chapter first highlights the interest of using a Model-Driven Engineering (MDE) approach for the design of MIS. Then, this chapter retraces the application of an MDE approach to a specific Mixed Interaction design resource. The resulting contribution is a motivated, explicit, complete and standardized definition of the ASUR model, a model for mixed interaction design. This definition constitutes a basis for promoting the use of this model, supporting its diffusion and deriving design tools from it. The model-driven development of a flexible ASUR editor is finally introduced, facilitating the insertion of model extensions and articulations.
NASA Astrophysics Data System (ADS)
Golbon, Reza; Ogutu, Joseph Ochieng; Cotter, Marc; Sauerborn, Joachim
2015-12-01
Linear mixed models were developed and used to predict rubber (Hevea brasiliensis) yield based on meteorological conditions to which rubber trees had been exposed for periods ranging from 1 day to 2 months prior to tapping events. Predictors included a range of moving averages of meteorological covariates spanning different windows of time before the date of the tapping events. Serial autocorrelation in the latex yield measurements was accounted for using random effects and a spatial generalization of the autoregressive error covariance structure suited to data sampled at irregular time intervals. Information theoretics, specifically the Akaike information criterion (AIC), AIC corrected for small sample size (AICc), and Akaike weights, was used to select models with the greatest strength of support in the data from a set of competing candidate models. The predictive performance of the selected best model was evaluated using both leave-one-out cross-validation (LOOCV) and an independent test set. Moving averages of precipitation, minimum and maximum temperature, and maximum relative humidity with a 30-day lead period were identified as the best yield predictors. Prediction accuracy, expressed as the percentage of predictions within a measurement error of 5 g for both cross-validation and the test dataset, was above 99%.
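The AIC-based selection among moving-average windows in the yield model above can be sketched as follows. The weather series, the 30-day generating window, and the Gaussian least-squares AIC formula are illustrative assumptions, not the paper's data or mixed-model likelihood.

```python
import numpy as np

rng = np.random.default_rng(7)

# Daily weather driver; the response is generated from its 30-day trailing
# moving average, mimicking a lagged-climate effect on yield.
n = 400
rain = rng.gamma(2.0, 5.0, n)

def movavg(x, w):
    """Trailing moving average (with a ramp-up at the series start)."""
    return np.convolve(x, np.ones(w) / w, mode="full")[:len(x)]

yield_g = 20.0 + 0.8 * movavg(rain, 30) + rng.normal(0.0, 1.0, n)

def aic(y, X):
    """Gaussian least-squares AIC: n*log(RSS/n) + 2k."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ b) ** 2)
    k = X.shape[1] + 1                 # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

# Candidate models: one moving-average window each; pick the lowest AIC.
scores = {w: aic(yield_g, np.column_stack([np.ones(n), movavg(rain, w)]))
          for w in (5, 15, 30, 60)}
best = min(scores, key=scores.get)
print("best window by AIC:", best)
```

AIC correctly recovers the generating window, mirroring how the paper ranks candidate lag structures by strength of support in the data.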
Mixing Phenomena in a Bottom Blown Copper Smelter: A Water Model Study
NASA Astrophysics Data System (ADS)
Shui, Lang; Cui, Zhixiang; Ma, Xiaodong; Akbar Rhamdhani, M.; Nguyen, Anh; Zhao, Baojun
2015-03-01
The first commercial bottom blown oxygen copper smelting furnace has been installed and operated at Dongying Fangyuan Nonferrous Metals since 2008. Significant advantages have been demonstrated for this technology, mainly due to its bottom blown oxygen-enriched gas. In this study, a 1:12 scaled-down model was set up to simulate the flow behavior and understand the mixing phenomena in the furnace. A single lance was used for gas blowing to establish a reliable research technique and quantitative characterisation of the mixing behavior. Operating parameters such as horizontal distance from the blowing lance, detector depth, bath height, and gas flow rate were adjusted to investigate the mixing time under different conditions. It was found that when the horizontal distance between the lance and detector is within an effective stirring range, the mixing time decreases slightly with increasing horizontal distance. Outside this range, the mixing time increases with increasing horizontal distance, and the effect is more significant at the surface. The mixing time always decreases with increasing gas flow rate and bath height. An empirical relationship giving mixing time as a function of gas flow rate and bath height has been established for the first time for a horizontal bottom blown furnace.
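An empirical relationship of the kind reported, mixing time as a power law in gas flow rate and bath height, is commonly fitted by log-log least squares. The sketch below uses invented exponents and synthetic measurements; the paper's actual relationship is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed power-law form tau = k * Q^a * H^b for mixing time versus gas flow
# rate Q and bath height H; the exponents here are invented for illustration.
k_true, a_true, b_true = 120.0, -0.35, -0.6
Q = rng.uniform(5, 50, 80)       # gas flow rate (arbitrary units)
H = rng.uniform(0.1, 0.4, 80)    # bath height (m)
tau = k_true * Q**a_true * H**b_true * np.exp(rng.normal(0, 0.05, 80))

# Log-log least squares recovers the exponents from noisy measurements.
X = np.column_stack([np.ones_like(Q), np.log(Q), np.log(H)])
coef, *_ = np.linalg.lstsq(X, np.log(tau), rcond=None)
k_hat, a_hat, b_hat = np.exp(coef[0]), coef[1], coef[2]
print(f"fitted: k={k_hat:.1f}, a={a_hat:.2f}, b={b_hat:.2f}")
```

Negative exponents encode the observed trend that mixing time falls as gas flow rate and bath height increase.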
Bayesian inferences for beta semiparametric-mixed models to analyze longitudinal neuroimaging data.
Wang, Xiao-Feng; Li, Yingxing
2014-07-01
Diffusion tensor imaging (DTI) is a quantitative magnetic resonance imaging technique that measures the three-dimensional diffusion of water molecules within tissue through the application of multiple diffusion gradients. This technique is rapidly increasing in popularity for studying white matter properties and structural connectivity in the living human brain. One of the major outcomes derived from the DTI process is known as fractional anisotropy, a continuous measure restricted on the interval (0,1). Motivated from a longitudinal DTI study of multiple sclerosis, we use a beta semiparametric-mixed regression model for the neuroimaging data. This work extends the generalized additive model methodology with beta distribution family and random effects. We describe two estimation methods with penalized splines, which are formalized under a Bayesian inferential perspective. The first one is carried out by Markov chain Monte Carlo (MCMC) simulations while the second one uses a relatively new technique called integrated nested Laplace approximation (INLA). Simulations and the neuroimaging data analysis show that the estimates obtained from both approaches are stable and similar, while the INLA method provides an efficient alternative to the computationally expensive MCMC method.
Börner, Jan; Marinho, Eduardo; Wunder, Sven
2015-01-01
Annual forest loss in the Brazilian Amazon declined to less than 5,000 km² in 2012, from over 27,000 km² in 2004. Mounting empirical evidence suggests that changes in Brazilian law enforcement strategy and the related governance system may account for a large share of the overall success in curbing deforestation rates. At the same time, Brazil is experimenting with alternative approaches to compensate farmers for conservation actions through economic incentives, such as payments for environmental services, at various administrative levels. We develop a spatially explicit simulation model for deforestation decisions in response to policy incentives and disincentives. The model builds on elements of optimal enforcement theory and introduces the notion of imperfect payment contract enforcement in the context of avoided deforestation. We implement the simulations using official deforestation statistics and data collected from field-based forest law enforcement operations in the Amazon region. We show that a large-scale integration of payments with the existing regulatory enforcement strategy involves a tradeoff between the cost-effectiveness of forest conservation and landholder incomes. Introducing payments as a complementary policy measure increases policy implementation cost, reduces income losses for those hit hardest by law enforcement, and can provide additional income to some land users. The magnitude of the tradeoff varies in space, depending on deforestation patterns, conservation opportunity costs, and enforcement costs. Enforcement effectiveness becomes a key determinant of efficiency in the overall policy mix. PMID:25650966
Modeling seasonal circulation, upwelling and tidal mixing in the Arafura and Timor Seas
NASA Astrophysics Data System (ADS)
Condie, Scott A.
2011-09-01
The extensive shallow tropical seas off northern Australia, encompassing the Arafura and Timor Seas, have been identified as one of the most pristine marine environments on the planet. However, the remoteness and the absence of major industrial development that has contributed to this status have the additional consequence that relatively little is known about these systems. This study is the first to model oceanographic conditions across the tidally dominated Arafura and Timor Seas, and their seasonal variability. The results are based on a high-resolution (0.05°) ocean circulation model forced by realistic winds, waves and tides. The main focus of the study is on physical processes that influence the distributions of sediments and primary productivity across the system. Regions of high bottom stress and tidal mixing have been identified, including a large offshore area around Van Diemen Rise (Timor Sea). Lagrangian particle tracks have revealed a seasonal overturning cell that stretches across the Gulf of Carpentaria (Arafura Sea) with upwelling and downwelling on either side of the Gulf. The presence of coastal upwelling and downwelling is shown to provide a dynamically consistent explanation for the persistent turbid boundary layer observed around the shallow coastal waters of the Gulf.
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, existing literature only deals with a simplified channel model that considers the effects of Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
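The GAT step can be sketched directly: for unit-gain Poisson counts plus N(0, σ²) thermal noise, the transform 2√(x + 3/8 + σ²) brings the mixed noise to approximately unit variance, which is what licenses the equivalent-AWGN treatment. This is the standard textbook form of the transform, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def gat(x, sigma):
    """Generalized Anscombe transform for unit-gain Poisson + N(0, sigma^2)
    noise: output noise is approximately unit-variance Gaussian."""
    return 2.0 * np.sqrt(np.maximum(x + 3.0 / 8.0 + sigma**2, 0.0))

sigma = 2.0
for lam in (5.0, 20.0, 80.0):
    x = rng.poisson(lam, 100_000) + rng.normal(0.0, sigma, 100_000)
    print(f"signal level {lam:4.0f}: variance after GAT = {gat(x, sigma).var():.3f}")
```

Because the post-transform variance is nearly constant across signal levels, a hard-decision detector designed for AWGN can be applied to the transformed samples.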
NASA Astrophysics Data System (ADS)
Magic, Z.; Weiss, A.; Asplund, M.
2015-01-01
Aims: We investigate the relation between 1D atmosphere models that rely on the mixing length theory and models based on full 3D radiative hydrodynamic (RHD) calculations to describe convection in the envelopes of late-type stars. Methods: The adiabatic entropy value of the deep convection zone, s_bot, and the entropy jump, Δs, determined from the 3D RHD models, were matched with the mixing length parameter, α_MLT, from 1D hydrostatic atmosphere models with identical microphysics (opacities and equation of state). We also derived the mass mixing length parameter, α_m, and the vertical correlation length of the vertical velocity, C[v_z, v_z], directly from the 3D hydrodynamical simulations of stellar subsurface convection. Results: The calibrated mixing length parameter for the Sun is α_MLT^⊙(s_bot) = 1.98. For different stellar parameters, α_MLT varies systematically in the range 1.7-2.4. In particular, α_MLT decreases towards higher effective temperature, lower surface gravity and higher metallicity. We find equivalent results for α_MLT^⊙(Δs). In addition, we find a tight correlation between the mixing length parameter and the inverse entropy jump. We derive an analytical expression from the hydrodynamic mean-field equations that motivates the relation to the mass mixing length parameter, α_m, and find that it qualitatively shows a similar variation with stellar parameters (between 1.6 and 2.4), with a solar value of α_m^⊙ = 1.83. The vertical correlation length scaled with the pressure scale height yields 1.71 for the Sun, but only displays a small systematic variation with stellar parameters; the correlation length slightly increases with T_eff. Conclusions: We derive mixing length parameters for various stellar parameters that can be used to replace a constant value. Within any convective envelope, α_m and related quantities vary strongly. Our results will help to replace a constant α_MLT.
Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain
Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises
2015-01-01
Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156
Mixed Model with Correction for Case-Control Ascertainment Increases Association Power
Hayeck, Tristan J.; Zaitlen, Noah A.; Loh, Po-Ru; Vilhjalmsson, Bjarni; Pollack, Samuela; Gusev, Alexander; Yang, Jian; Chen, Guo-Bo; Goddard, Michael E.; Visscher, Peter M.; Patterson, Nick; Price, Alkes L.
2015-01-01
We introduce a liability-threshold mixed linear model (LTMLM) association statistic for case-control studies and show that it has a well-controlled false-positive rate and more power than existing mixed-model methods for diseases with low prevalence. Existing mixed-model methods suffer a loss in power under case-control ascertainment, but no solution has been proposed. Here, we solve this problem by using a χ2 score statistic computed from posterior mean liabilities (PMLs) under the liability-threshold model. Each individual’s PML is conditional not only on that individual’s case-control status but also on every individual’s case-control status and the genetic relationship matrix (GRM) obtained from the data. The PMLs are estimated with a multivariate Gibbs sampler; the liability-scale phenotypic covariance matrix is based on the GRM, and a heritability parameter is estimated via Haseman-Elston regression on case-control phenotypes and then transformed to the liability scale. In simulations of unrelated individuals, the LTMLM statistic was correctly calibrated and achieved higher power than existing mixed-model methods for diseases with low prevalence, and the magnitude of the improvement depended on sample size and severity of case-control ascertainment. In a Wellcome Trust Case Control Consortium 2 multiple sclerosis dataset with >10,000 samples, LTMLM was correctly calibrated and attained a 4.3% improvement (p = 0.005) in χ2 statistics over existing mixed-model methods at 75 known associated SNPs, consistent with simulations. Larger increases in power are expected at larger sample sizes. In conclusion, case-control studies of diseases with low prevalence can achieve power higher than that in existing mixed-model methods. PMID:25892111
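The Haseman-Elston step mentioned above, regressing phenotype cross-products on GRM entries to estimate heritability, can be sketched on simulated genotypes. The dimensions and heritability are invented; this is not the LTMLM pipeline itself.

```python
import numpy as np

rng = np.random.default_rng(5)

# Haseman-Elston regression sketch: regress phenotype cross-products on the
# genetic relationship matrix (GRM) to estimate heritability (simulated data).
n, m, h2 = 500, 2_000, 0.5
G = rng.binomial(2, 0.5, (n, m)).astype(float)
Z = (G - G.mean(0)) / G.std(0)            # standardized genotypes
grm = Z @ Z.T / m

beta = rng.normal(0, np.sqrt(h2 / m), m)  # small additive SNP effects
y = Z @ beta + rng.normal(0, np.sqrt(1 - h2), n)
y = (y - y.mean()) / y.std()

iu = np.triu_indices(n, k=1)              # off-diagonal (pairwise) entries
prod = (y[:, None] * y[None, :])[iu]
A = grm[iu]
h2_hat = (A * prod).sum() / (A * A).sum() # OLS slope through the origin
print(f"Haseman-Elston heritability estimate: {h2_hat:.2f}")
```

Since E[y_i y_j] = h² GRM_ij for i ≠ j, the regression slope estimates the heritability that LTMLM then transforms to the liability scale.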
Mixed model with correction for case-control ascertainment increases association power.
Hayeck, Tristan J; Zaitlen, Noah A; Loh, Po-Ru; Vilhjalmsson, Bjarni; Pollack, Samuela; Gusev, Alexander; Yang, Jian; Chen, Guo-Bo; Goddard, Michael E; Visscher, Peter M; Patterson, Nick; Price, Alkes L
2015-05-07
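The liability-scale step described in the abstract above (an observed-scale heritability estimate transformed to the liability scale under case-control ascertainment) can be sketched with the standard correction formula. This is an illustrative stdlib implementation of that textbook transformation, not the authors' code, and the function name and example prevalence values are our own assumptions.

```python
from statistics import NormalDist

def obs_to_liability_h2(h2_obs, K, P):
    """Convert an observed-scale (0/1 case-control) heritability estimate to
    the liability scale. K is the population prevalence, P the case
    proportion in the sample; the factor corrects for ascertainment."""
    nd = NormalDist()
    t = nd.inv_cdf(1.0 - K)   # liability threshold for prevalence K
    z = nd.pdf(t)             # standard normal density at the threshold
    return h2_obs * (K * (1.0 - K)) ** 2 / (z ** 2 * P * (1.0 - P))
```

For a low-prevalence disease (K = 0.01) studied in a balanced sample (P = 0.5), an observed-scale estimate of 0.10 maps to roughly 0.055 on the liability scale, illustrating how strongly ascertainment distorts the observed scale.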
NASA Astrophysics Data System (ADS)
Kjellsson, Joakim; Holland, Paul R.; Marshall, Gareth J.; Mathiot, Pierre; Aksenov, Yevgeny; Coward, Andrew C.; Bacon, Sheldon; Megann, Alex P.; Ridley, Jeff
2015-10-01
We examine the sensitivity of the Weddell and Ross seas to vertical mixing and surface freshwater forcing using an ocean-sea ice model. The high latitude Southern Ocean is very weakly stratified, with a winter salinity difference across the pycnocline of only ∼0.2 PSU. We find that insufficient vertical mixing, freshwater supply from the Antarctic Ice Sheet, or initial sea ice causes a high salinity bias in the mixed layer which erodes the stratification and causes excessive deep convection. This leads to vertical homogenisation of the Weddell and Ross seas, opening of polynyas in the sea ice and unrealistic spin-up of the subpolar gyres and Antarctic Circumpolar Current. The model freshwater budget shows that a ∼30% error in any component can destratify the ocean in about a decade. We find that freshwater forcing in the model should be sufficient along the Antarctic coastline to balance a salinity bias caused by dense coastal water that is unable to sink to the deep ocean. We also show that a low initial sea ice area introduces a salinity bias in the marginal ice zone. We demonstrate that vertical mixing, freshwater forcing and initial sea ice conditions need to be constrained simultaneously to reproduce the Southern Ocean hydrography, circulation and sea ice in a model. As an example, insufficient vertical mixing will cause excessive convection in the Weddell and Ross seas even in the presence of large surface freshwater forcing and initial sea ice cover.
Analytical and numerical modeling of non-collinear shear wave mixing at an imperfect interface
NASA Astrophysics Data System (ADS)
Zhang, Ziyin; Nagy, Peter B.; Hassan, Waled
2016-02-01
Non-collinear shear wave mixing at an imperfect interface between two solids can be exploited for nonlinear ultrasonic assessment of bond quality. In this study we developed two analytical models for nonlinear imperfect interfaces. The first model uses a finite nonlinear interfacial stiffness representation of an imperfect interface of vanishing thickness, while the second model relies on a thin nonlinear interphase layer to represent an imperfect interface region. The second model is actually a derivative of the first model obtained by calculating the equivalent interfacial stiffness of a thin isotropic nonlinear interphase layer in the quasi-static approximation. The predictions of both analytical models were numerically verified by comparison to COMSOL finite element simulations. These models can accurately predict the excess nonlinearity caused by interface imperfections based on the strength of the reflected and transmitted mixed longitudinal waves produced by them under non-collinear shear wave interrogation.
Comparing Bayesian stable isotope mixing models: Which tools are best for sediments?
NASA Astrophysics Data System (ADS)
Morris, David; Macko, Stephen
2016-04-01
Bayesian stable isotope mixing models have received much attention as a means of coping with multiple sources and uncertainty in isotope ecology (e.g. Phillips et al., 2014), enabling the probabilistic determination of the contributions made by each food source to the total diet of the organism in question. We have applied these techniques to marine sediments for the first time. The sediments of the Chukchi Sea and Beaufort Sea offer an opportunity to utilize these models for organic geochemistry, as there are three likely sources of organic carbon; pelagic phytoplankton, sea ice algae and terrestrial material from rivers and coastal erosion, as well as considerable variation in the marine δ13C values. Bayesian mixing models using bulk δ13C and δ15N data from Shelf Basin Interaction samples allow for the probabilistic determination of the contributions made by each of the sources to the organic carbon budget, and can be compared with existing source contribution estimates based upon biomarker models (e.g. Belicka & Harvey, 2009, Faux, Belicka, & Rodger Harvey, 2011). The δ13C of this preserved material varied from -22.1 to -16.7‰ (mean -19.4±1.3‰), while δ15N varied from 4.1 to 7.6‰ (mean 5.7±1.1‰). Using the SIAR model, we found that water column productivity was the source of between 50 and 70% of the organic carbon buried in this portion of the western Arctic with the remainder mainly supplied by sea ice algal productivity (25-35%) and terrestrial inputs (15%). With many mixing models now available, this study will compare SIAR with MixSIAR and the new FRUITS model. Monte Carlo modeling of the mixing polygon will be used to validate the models, and hierarchical models will be utilised to glean more information from the data set.
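A deterministic counterpart of these Bayesian mixing models is the exact solution of the two-tracer, three-source mass balance. The sketch below (pure Python, Cramer's rule) uses made-up end-member signatures; only the mixture values are the means reported above, so the resulting fractions are illustrative, not the study's estimates.

```python
# Three-source, two-tracer mixing: the point-estimate problem that SIAR,
# MixSIAR and FRUITS generalize with uncertainty. End-member (d13C, d15N)
# values below are illustrative assumptions, not data from the study.
sources = {
    "phytoplankton": (-21.0, 7.0),
    "ice_algae":     (-16.0, 5.0),
    "terrestrial":   (-27.0, 1.0),
}
mixture = (-19.4, 5.7)  # mean sediment d13C, d15N reported in the abstract

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        xs.append(det(Aj) / d)
    return xs

names = list(sources)
# Rows: fractions sum to 1; d13C balance; d15N balance.
A = [[1.0] * 3,
     [sources[n][0] for n in names],
     [sources[n][1] for n in names]]
b = [1.0, mixture[0], mixture[1]]
fractions = dict(zip(names, solve3(A, b)))
```

With these assumed end members the solution lands inside the mixing polygon (all fractions between 0 and 1); Bayesian tools replace this single point with a posterior distribution over the fractions.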
NASA Astrophysics Data System (ADS)
Sanford, Lawrence P.
2008-10-01
Erosion and deposition of bottom sediments reflect a continual, dynamic adjustment between the fluid forces applied to a sediment bed and the condition of the bed itself. Erosion of fine and mixed sediment beds depends on their composition, their vertical structure, their disturbance/recovery history, and the biota that inhabit them. This paper presents a new one-dimensional (1D), multi-layer sediment bed model for simulating erosion and deposition of fine and mixed sediments subject to consolidation, armoring, and bioturbation. The distinguishing characteristics of this model are a greatly simplified first-order relaxation treatment for consolidation, a mud erosion formulation that adapts to both Type I and II erosion behavior and is based directly on observations, a continuous deposition formulation for mud that can mimic exclusive erosion and deposition behavior, and straightforward inclusion of bioturbation effects. Very good agreement with two laboratory data sets on consolidation effects is achieved by adjusting only the first-order consolidation rate r_c. Full model simulations of three idealized cases based on upper Chesapeake Bay, USA observations are presented. In the mud only case, fluid stresses match mud critical stresses at maximum erosion. A consolidation lag results in higher suspended sediment concentrations after erosional events. Erosion occurs only during accelerating currents and deposition does not occur until just before slack water. In the mixed mud and sand case without bioturbation, distinct layers of high and low sand content form and mud suspension is strongly limited by sand armoring. In the mixed mud and sand case with bioturbation, suspended mud concentrations are greater than or equal to either of the other cases. Low surface critical stresses are mixed down into the bed, constrained by the tendency to return towards equilibrium. Sand layers and the potential for armoring of the bed develop briefly, but mix rapidly. This model offers
Genome-wide efficient mixed-model analysis for association studies.
Zhou, Xiang; Stephens, Matthew
2012-06-17
Linear mixed models have attracted considerable attention recently as a powerful and effective tool for accounting for population stratification and relatedness in genetic association tests. However, existing methods for exact computation of standard test statistics are computationally impractical for even moderate-sized genome-wide association studies. To address this issue, several approximate methods have been proposed. Here, we present an efficient exact method, which we refer to as genome-wide efficient mixed-model association (GEMMA), that makes approximations unnecessary in many contexts. This method is approximately n times faster than the widely used exact method known as efficient mixed-model association (EMMA), where n is the sample size, making exact genome-wide association analysis computationally practical for large numbers of individuals.
Klein, Stephen A.; McCoy, Renata; Morrison, H.; Ackerman, Andrew; Avramov, Alexander; DeBoer, GIJS; Chen, Mingxuan; Cole, Jason N.; DelGenio, Anthony D.; Falk, Michael; Foster, Mike; Fridlind, Ann; Golaz, Jean-Christophe; Hashino, Tempei; Harrington, Jerry Y.; Hoose, Corinna; Khairoutdinov, Marat; Larson, Vince; Liu, Xiaohong; Luo, Yali; McFarquhar, Greg; Menon, Surabi; Neggers, Roel; Park, Sungsu; Poellot, M. R.; Schmidt, Jerome M.; Sednev, Igor; Shipway, Ben; Shupe, Matthew D.; Spangenberg, D.; Sud, Yogesh; Turner, David D.; Veron, Dana; Von Salzen, Knut; Walker, Gregory K.; Wang, Zhien; Wolf, Audrey; Xie, Shaocheng; Xu, Kuan-Man; Yang, Fanglin; Zhang, G.
2009-05-21
Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the ARM Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of –15°C. While the cloud was water dominated, ice precipitation appears to have lowered the liquid water path to about 2/3 of the adiabatic value. The simulations, which were performed by seventeen single-column and nine cloud-resolving models, generally underestimate the liquid water path, with the median single-column and cloud-resolving model liquid water path a factor of 3 smaller than observed. While the simulated ice water path is in general agreement with the observed values, results from a sensitivity study in which models removed ice microphysics indicate that in many models the interaction between liquid and ice phase microphysics is responsible for the strong model underestimate of liquid water path. Although no single factor is found to lead to a good simulation, these results emphasize the need for care in the model treatment of mixed-phase microphysics. This case study, which has been well observed from both aircraft and ground-based remote sensors, could be a benchmark for model simulations of mixed-phase clouds.
A model for halo formation with axion mixed dark matter
NASA Astrophysics Data System (ADS)
Marsh, David J. E.; Silk, Joseph
2014-01-01
There are several issues to do with dwarf galaxy predictions in the standard Λ cold dark matter (ΛCDM) cosmology that have prompted much recent debate about the possible modification of the nature of dark matter as providing a solution. We explore a novel solution involving ultralight axions that can potentially resolve the missing satellites problem, the cusp-core problem and the `too big to fail' problem. We discuss approximations to non-linear structure formation in dark matter models containing a component of ultralight axions across four orders of magnitude in mass, 10^-24 eV ≲ m_a ≲ 10^-20 eV, a range too heavy to be well constrained by linear cosmological probes such as the cosmic microwave background and matter power spectrum, and too light/non-interacting for other astrophysical or terrestrial axion searches. We find that an axion of mass m_a ≈ 10^-21 eV contributing approximately 85 per cent of the total dark matter can introduce a significant kpc scale core in a typical Milky Way satellite galaxy, in sharp contrast to a thermal relic with a transfer function cut off at the same scale, while still allowing such galaxies to form in significant number. Therefore, ultralight axions do not suffer from the Catch-22 that applies to using warm dark matter as a solution to the small-scale problems of CDM. Our model simultaneously allows formation of enough high-redshift galaxies to allow reconciliation with observational constraints, and also reduces the maximum circular velocities of massive dwarfs so that baryonic feedback may more plausibly resolve the predicted overproduction of massive Milky Way Galaxy dwarf satellites.
Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.
1997-01-01
The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from
NASA Astrophysics Data System (ADS)
Zamani, Hossein; Faroughi, Pouya; Ismail, Noriszura
2014-06-01
This study relates the Poisson, mixed Poisson (MP), generalized Poisson (GP) and finite Poisson mixture (FPM) regression models through the mean-variance relationship, and suggests the application of these models to overdispersed count data. As an illustration, the regression models are fitted to the US skin care count data. The results indicate that the FPM regression model is the best model, since it provides the largest log-likelihood and the smallest AIC, followed by the Poisson-inverse Gaussian (PIG), GP and negative binomial (NB) regression models. The results also show that the NB, PIG and GP regression models provide similar results.
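The overdispersion that motivates these mixture models, and the AIC used to rank them, can be illustrated with a minimal stdlib sketch (intercept-only Poisson model; the simulated data and function names below are ours, not the study's):

```python
import math
import random

def poisson_aic(y):
    """AIC of an intercept-only Poisson model whose MLE rate is the sample
    mean: AIC = 2k - 2 logL with k = 1 estimated parameter."""
    lam = sum(y) / len(y)
    loglik = sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in y)
    return 2 * 1 - 2 * loglik

def dispersion_index(y):
    """Sample variance-to-mean ratio; values well above 1 signal
    overdispersion, motivating NB/GP/mixture alternatives to Poisson."""
    m = sum(y) / len(y)
    v = sum((k - m) ** 2 for k in y) / (len(y) - 1)
    return v / m

# Simulate overdispersed counts: a binomial whose trial count itself varies,
# so the variance exceeds the mean (unlike a plain Poisson).
random.seed(0)
y = [sum(random.random() < 0.5 for _ in range(random.randint(0, 20)))
     for _ in range(500)]
```

Here `dispersion_index(y)` comes out well above 1, and a model whose AIC beats `poisson_aic(y)` would be preferred, exactly the comparison logic the abstract reports.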
Application of the mixing-reaction in series model to NOx-O3 plume chemistry
NASA Technical Reports Server (NTRS)
Carmichael, G. R.; Peters, L. K.
1981-01-01
The mixing-reaction in series model developed by Ghodsizadem (1978) is successfully applied to the study of NO oxidation in the near-source portion of the Potomac Electric Company's Morgantown, Maryland power plant plume. The model employs a single parameter. With initial conditions consistent with the plume data measured by Davis et al. (1974) and utilizing the mixing parameter estimated from the study by Shu et al. (1978), the predicted temporal profiles of the NO2/NO and NO2/(NO + NO2) concentration ratios, the in-plume concentrations of NO and O3, and the fraction of NO remaining are consistent with field study data. In addition, the model predicts large deviations from the photostationary state in the near-source portion of the plume, also consistent with field study data. In the far-field region of this plume (t greater than approximately 20 min), the mixing processes are essentially complete over much of the plume cross-section.
Modeling a mixed SEP event with the PATH model: December 13, 2006
NASA Astrophysics Data System (ADS)
Verkhoglyadova, Olga P.; Li, Gang; Zank, Gary P.; Hu, Qiang
2008-08-01
There are often two particle components which form a major SEP event, one originating from a solar flare and the other from solar wind particles accelerated at a traveling CME-driven shock [1]. If a CME and a flare are part of the same process, then the interplay between corresponding energetic particle components may yield temporal, spectral, and compositional differences in observations. Depending on spacecraft location and magnetic connection to either a flare site or a CME-driven shock (or both), we expect to observe distinct signatures in the time intensity profiles. Following an approach by Li and Zank [2], we apply the Particle Acceleration and Transport in the Heliosphere (PATH) one-dimensional numerical code developed at University of California in Riverside to model the mixed SEP event of December 13, 2006. We initiate the code by modeling a quiet-time solar wind. Observed shock parameters at 1 AU and flare characteristics then are used as input into the code. We model energetic particle acceleration at a traveling quasi-parallel CME-driven shock and subsequent transport throughout the interplanetary medium to 1 AU. Time-intensity profiles and spectra of proton and heavy ions are presented and compared with in situ measurements by ACE. Contributions from the solar wind suprathermal and flare particles to the resultant SEP event are discussed.
Guangyi, Mei; Yujun, Sun; Hao, Xu; de-Miguel, Sergio
2015-01-01
A systematic evaluation of nonlinear mixed-effects taper models for volume prediction was performed. From 21 taper equations with fewer than 5 parameters each, the best 4-parameter fixed-effects model according to the fitting statistics was selected and then modified by relating its parameters to total height (H), diameter at breast height (DBH), and aboveground height (h) in the modeling data. Seven alternative prediction strategies were compared using the best new equation in the absence of calibration data, which are often unavailable in forestry practice. The results suggest that, because calibration may sometimes be a realistic option even though it is rarely used in practical applications, one of the best strategies for improving the accuracy of volume prediction is the strategy using calculated total heights of 3, 6 and 9 trees in the largest, smallest and medium-size categories, respectively. Average or dominant trees alone cannot be used to calculate the random parameters for further predictions. The method described here allows the user to choose the best taper model and the best random-effects calibration strategy for each practical application and situation at the tree level. PMID:26445505
$B^0_{(s)}$-mixing matrix elements from lattice QCD for the Standard Model and beyond
Bazavov, A.; Bernard, C.; Bouchard, C. M.; Chang, C. C.; DeTar, C.; Du, Daping; El-Khadra, A. X.; Freeland, E. D.; Gamiz, E.; Gottlieb, Steven; Heller, U. M.; Kronfeld, A. S.; Laiho, J.; Mackenzie, P. B.; Neil, E. T.; Simone, J.; Sugar, R.; Toussaint, D.; Van de Water, R. S.; Zhou, Ran
2016-06-28
We calculate—for the first time in three-flavor lattice QCD—the hadronic matrix elements of all five local operators that contribute to neutral B^{0}- and B_{s}-meson mixing in and beyond the Standard Model. We present a complete error budget for each matrix element and also provide the full set of correlations among the matrix elements. We also present the corresponding bag parameters and their correlations, as well as specific combinations of the mixing matrix elements that enter the expression for the neutral B-meson width difference. We obtain the most precise determination to date of the SU(3)-breaking ratio ξ=1.206(18)(6), where the second error stems from the omission of charm-sea quarks, while the first encompasses all other uncertainties. The threefold reduction in total uncertainty, relative to the 2013 Flavor Lattice Averaging Group results, tightens the constraint from B mixing on the Cabibbo-Kobayashi-Maskawa (CKM) unitarity triangle. Our calculation employs gauge-field ensembles generated by the MILC Collaboration with four lattice spacings and pion masses close to the physical value. We use the asqtad-improved staggered action for the light-valence quarks and the Fermilab method for the bottom quark. We use heavy-light meson chiral perturbation theory modified to include lattice-spacing effects to extrapolate the five matrix elements to the physical point. We combine our results with experimental measurements of the neutral B-meson oscillation frequencies to determine the CKM matrix elements |V_{td}| = 8.00(34)(8)×10^{-3}, |V_{ts}| = 39.0(1.2)(0.4)×10^{-3}, and |V_{td}/V_{ts}| = 0.2052(31)(10), which differ from CKM-unitarity expectations by about 2σ. In addition, these results and others from flavor-changing-neutral currents point towards an emerging tension between weak processes that are mediated at the loop and tree levels.
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
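As a minimal sketch of one of the candidate curves, Wood's model y = a·t^b·exp(-c·t) can be fitted by ordinary least squares after log-linearization. The study itself fits nonlinear mixed models in SAS (PROC NLMIXED), so this stdlib sketch with assumed "true" parameter values only illustrates the curve and its parameters.

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_wood(ts, ys):
    """Least-squares fit of Wood's curve y = a * t**b * exp(-c*t) on the
    log-linearized form ln y = ln a + b*ln t - c*t (normal equations)."""
    X = [[1.0, math.log(t), -t] for t in ts]
    z = [math.log(y) for y in ys]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xtz = [sum(r[i] * zi for r, zi in zip(X, z)) for i in range(3)]
    b0, b1, b2 = gauss_solve(XtX, Xtz)
    return math.exp(b0), b1, b2   # (a, b, c)

# Noiseless illustrative data from assumed parameters a=10, b=0.3, c=0.05.
ts = [float(t) for t in range(1, 11)]
ys = [10.0 * t ** 0.3 * math.exp(-0.05 * t) for t in ts]
a, b, c = fit_wood(ts, ys)
```

On noiseless data the log-linear fit recovers the parameters exactly; with real test-day records one would fit the nonlinear form directly, with random effects per animal, as the study does.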
Product versus additive threshold models for analysis of reproduction outcomes in animal genetics.
David, I; Bodin, L; Gianola, D; Legarra, A; Manfredi, E; Robert-Granié, C
2009-08-01
The phenotypic observation of some reproduction traits (e.g., insemination success, interval from lambing to insemination) is the result of environmental and genetic factors acting on 2 individuals: the male and female involved in a mating couple. In animal genetics, the main approach (called additive model) proposed for studying such traits assumes that the phenotype is linked to a purely additive combination, either on the observed scale for continuous traits or on some underlying scale for discrete traits, of environmental and genetic effects affecting the 2 individuals. Statistical models proposed for studying human fecundability generally consider reproduction outcomes as the product of hypothetical unobservable variables. Taking inspiration from these works, we propose a model (product threshold model) for studying a binary reproduction trait that supposes that the observed phenotype is the product of 2 unobserved phenotypes, 1 for each individual. We developed a Gibbs sampling algorithm for fitting a Bayesian product threshold model including additive genetic effects and showed by simulation that it is feasible and that it provides good estimates of the parameters. We showed that fitting an additive threshold model to data that are simulated under a product threshold model provides biased estimates, especially for individuals with high breeding values. A main advantage of the product threshold model is that, in contrast to the additive model, it provides distinct estimates of fixed effects affecting each of the 2 unobserved phenotypes.
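The product threshold model's central assumption, that the observed binary outcome is the product of two unobserved 0/1 phenotypes, one per mate, can be checked with a small simulation. Liabilities here are independent standard normals, a deliberately simplified stand-in for the paper's model with genetic effects:

```python
import random
from statistics import NormalDist

random.seed(1)

def product_threshold_sample(n, mu_male=0.0, mu_female=0.0):
    """Simulate a binary reproduction outcome as the product of two
    unobserved threshold phenotypes, one for each mate. Liabilities are
    independent N(mu, 1) draws; a phenotype is 1 iff its liability > 0."""
    out = []
    for _ in range(n):
        male_ok = random.gauss(mu_male, 1.0) > 0.0     # unobserved
        female_ok = random.gauss(mu_female, 1.0) > 0.0  # unobserved
        out.append(int(male_ok and female_ok))          # observed = product
    return out

y = product_threshold_sample(50_000)
p_hat = sum(y) / len(y)
# Under independence, P(y = 1) = Phi(mu_male) * Phi(mu_female) = 0.25 here.
p_theory = NormalDist().cdf(0.0) ** 2
```

The empirical success rate matches Phi(mu_male)·Phi(mu_female), which is exactly why fitting a single additive liability to such data, as the abstract notes, biases the estimates.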
NASA Astrophysics Data System (ADS)
Hussey, Dennis Frank
2000-10-01
Scope and method of study. The purpose of this study was to develop a generalized rate model to handle multicomponent mixed-bed ion exchange (MBIE) with multivalent dissociative species and variable influent conditions. To achieve this goal, the mass transfer mechanisms of weak electrolytes in ion exchange columns were studied, and on this basis rate expressions for weak electrolyte transfer were proposed. In addition, the column material balance was derived in terms of the constituent species concentrations only. Finally, generalized dissociation equilibrium equations for several types of weak electrolyte constituents were implemented, and the effluent concentrations were determined by solving the column material balance equations along with the rate expressions. Findings and conclusions. The mixed-bed ion exchange column model has been successfully implemented as a computer program and is capable of predicting effluent concentration histories, dynamic resin loadings, and solution and rate profiles. The column material balance is satisfied to within 1% for all chemistries studied. The model can simulate variable influent contaminant concentrations and flow rates by sequentially using the loading profiles of previous simulations, and it maintains electroneutrality at all times. The treatment of dissociative species transfer is adequate for many systems, but additional work is required to incorporate molecular constituent mass transfer.
Mixed-symmetry 2+ state of 56Fe in realistic shell model
Nakada, H.; Otsuka, T.; Sebe, T.
1991-08-26
The mixed-symmetry 2+ state of 56Fe is investigated by a large-scale shell-model calculation. We can reproduce the experimental energy levels with the Kuo-Brown interaction, as well as the E2 and M1 transition probabilities. The (e,e') form factors are also reproduced by including the core-polarization effect. By inspecting the shell-model wave functions thus tested, it is found that the second and fourth 2+ states share a large fraction of the mixed-symmetry component.
NASA Astrophysics Data System (ADS)
England, Matthew H.; Hirst, Anthony C.
1997-07-01
, in addition to deep convective mixing of CFC, there is also mixing into the ocean interior along isopycnal surfaces having an unrealistic orientation. The Southern Ocean CFC uptake in case 5, using the mixing scheme of Gent and McWilliams [1990], is dramatically reduced over that in the other runs. Only in this run do deep densities approach the observed values, and wintertime convection is largely suppressed south of the Antarctic Circumpolar Current. Deep penetration of CFC-rich water occurs only in the western Weddell and Ross Seas. This run yields CFC sections in the Southern Ocean which compare most favourably with observations, although substantial differences still exist between observed and simulated CFC. The simulation of NADW production is problematic in all runs, with the CFC signature indicating primary source regions in the Labrador Sea and immediately to the southeast of Greenland, while the Norwegian-Greenland Sea overflow water (which is dominant in reality) plays only a minor role. Lower NADW is insufficiently dense in all runs. Only in the run with surface forcing designed to enhance NADW production does the CFC signal penetrate down the western Atlantic boundary in a realistic manner. However, this case exhibits an unrealistic net ocean surface heat loss adjacent to Greenland and so cannot be advocated as a technique to improve model NADW production. Conventional depth sections and volumetric maps of CFC concentration indicate that on the decadal timescales resolved by CFC uptake the dominant determining factor in overall model ventilation is the choice of subsurface mixing scheme. The surface thermohaline forcing only determines more subtle aspects of the subsurface CFC content. This means that the choice of subgridscale mixing scheme plays a key role in determining ocean model ventilation over decadal to centennial timescales. This has important implications for climate model studies.
Mixed Phase Modeling in GlennICE with Application to Engine Icing
NASA Technical Reports Server (NTRS)
Wright, William B.; Jorgenson, Philip C. E.; Veres, Joseph P.
2011-01-01
A capability for modeling ice crystals and mixed phase icing has been added to GlennICE. Modifications have been made to the particle trajectory algorithm and energy balance to model this behavior. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to four mixed phase ice accretions performed in the Cox icing tunnel in order to calibrate an ice erosion model. A sample ice ingestion case was performed using the Energy Efficient Engine (E3) model in order to illustrate current capabilities. Engine performance characteristics were supplied using the Numerical Propulsion System Simulation (NPSS) model for this test case.
Shi, J Q; Wang, B; Will, E J; West, R M
2012-11-20
We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime.
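The two-stage structure described in this abstract (a parametric trend plus a nonparametric Gaussian process correction) can be sketched in Python. This is a toy illustration with synthetic data and a hand-rolled RBF kernel, not the authors' mixed-effects implementation; all values are assumptions:

```python
import numpy as np

def rbf_kernel(x1, x2, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Toy dose-response data: a linear trend plus a smooth nonlinear departure and noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 0.5 * x + np.sin(x) + 0.1 * rng.standard_normal(x.size)

# Stage 1: parametric component, here ordinary least squares on [1, x]
# (the paper uses a richer mixed-effects structure across subjects).
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Stage 2: nonparametric component, a GP fitted to the residuals.
noise = 0.1 ** 2
K = rbf_kernel(x, x) + noise * np.eye(x.size)
alpha = np.linalg.solve(K, resid)

# Combined prediction on a finer dose grid: parametric trend + GP correction.
xs = np.linspace(0, 10, 100)
pred = np.column_stack([np.ones_like(xs), xs]) @ beta + rbf_kernel(xs, x) @ alpha
```

The parametric part carries the interpretable explanatory information while the GP absorbs the nonlinearity, mirroring the division of labor the abstract describes.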
New Theory of Stellar Convection without the mixing-length parameter: new stellar atmosphere models
NASA Astrophysics Data System (ADS)
Pasetto, Stefano; Chiosi, Cesare; Cropper, Mark; Grebel, Eva K.
2015-08-01
Stellar convection is customarily described by the mixing-length theory, which makes use of the mixing-length scale to express the convective flux, velocity, and temperature gradients of the convective elements and stellar medium. The mixing-length scale is taken to be proportional to the local pressure scale height, and the proportionality factor (the mixing-length parameter) must be determined by comparing the stellar models to some calibrator, usually the Sun. No strong arguments exist to claim that the mixing-length parameter is the same in all stars and all evolutionary phases. Because of this, all stellar models in the literature are hampered by this basic uncertainty. In a recent paper (Pasetto et al. 2014) we presented a new theory of stellar convection that does not require the mixing-length parameter. Our self-consistent analytical formulation of stellar convection determines all the properties of stellar convection as a function of the physical behaviour of the convective elements themselves and the surrounding medium. The new theory of stellar convection is formulated starting from a conventional solution of the Navier-Stokes/Euler equations, i.e. the Bernoulli equation for a perfect fluid, but expressed in a non-inertial reference frame co-moving with the convective elements. In our formalism, the motion of stellar convective cells inside convective-unstable layers is fully determined by a new system of equations for convection in a non-local and time-dependent formalism. We obtained an analytical, non-local, time-dependent solution for the convective energy transport that does not depend on any free parameter. The predictions of the new theory are now compared with those from the standard mixing-length paradigm, with very satisfactory results for atmosphere models of the Sun and all the stars around the Hertzsprung-Russell diagram.
Modelling sensitivities to mixing and advection in a sill-basin estuarine system
NASA Astrophysics Data System (ADS)
Soontiens, Nancy; Allen, Susan E.
2017-04-01
This study investigates the sensitivity of a high resolution regional ocean model to several choices in mixing and advection. The oceanographic process examined is a deep water renewal event in the Juan de Fuca Strait-Strait of Georgia sill-basin estuarine system located on the west coast of North America. Previous observational work has shown that the timing of the renewal events is linked to the spring/neap tidal cycle, and in turn, is sensitive to the amount of vertical mixing induced by tidal currents interacting with sills and complicated bathymetry. It is found that the model's representation of deep water renewal is relatively insensitive to several mixing choices, including the vertical turbulence closure and direction of lateral mixing. No significant difference in deep or intermediate salinity was found between cases that used k - ɛ versus k - ω closures and isoneutral versus horizontal lateral mixing. Modifications that had a stronger effect included those that involved advection such as modifying the salinity of the open boundary conditions which supply the source waters for the renewal event. The strongest impact came from the removal of the Hollingsworth instability, a kinetic energy sink in the energy-enstrophy discretization of the momentum equations. A marked improvement to the salinity of the deep water renewal suggests that the removal of the Hollingsworth instability will correct a fresh drift in the deep and intermediate waters in an operational version of this model.
NASA Astrophysics Data System (ADS)
Kettle, H.
2009-08-01
Biogeochemical models of the ocean carbon cycle are frequently validated by, or tuned to, satellite chlorophyll data. However, ocean carbon cycle models are required to accurately model the movement of carbon, not chlorophyll, and due to the high variability of the carbon-to-chlorophyll ratio in phytoplankton, chlorophyll is not a robust proxy for carbon. Using inherent optical property (IOP) inversion algorithms it is now possible to also derive the amount of light backscattered by the upper ocean (bb), which is related to the amount of particulate organic carbon (POC) present. Using empirical relationships between POC and bb, a 1-D marine biogeochemical model is used to simulate bb at 490 nm, thereby allowing the model to be compared with both remotely sensed chlorophyll and bb data. Here I investigate the possibility of using bb in conjunction with chlorophyll data to help constrain the parameters in a simple 1-D NPZD model. The parameters of the biogeochemical model are tuned with a genetic algorithm, so that the model is fitted either to chlorophyll data alone or to both chlorophyll and bb data at three sites in the Atlantic with very different characteristics. Several IOP inversion algorithms are available for estimating bb, three of which are used here. The effect of the different bb datasets on the behaviour of the tuned model is examined to ascertain whether the uncertainty in bb is significant. The results show that the addition of bb data does not consistently alter the same model parameters at each site and in fact can lead to some parameters becoming less well constrained, implying there is still much work to be done on the mechanisms relating chlorophyll to POC and bb within the model. However, this study does indicate that including bb data has the potential to significantly affect the modelled mixed-layer detritus, and that uncertainties in bb due to the different IOP algorithms are not particularly significant.
A model of microbial activity in lake sediments in response to periodic water-column mixing.
Gantzer, Charles J; Stefan, Heinz G
2003-07-01
Under stagnant conditions, the mass transport of a soluble substrate from a lake's water column to the sediment/water interface is limited by molecular diffusion. Stagnant conditions coupled with a continuing sediment biological demand create a substrate depletion zone above the sediment/water interface. The frequency at which the substrate depletion zone is destroyed by internal seiches and other intermittent flow phenomena influences the time-averaged substrate concentration at the sediment/water interface. A more frequent mixing results in a greater time-averaged interface concentration and consequently affects the amount of microbial biomass that can be supported in the lake sediments and the flux of the substrate into the sediment. A one-dimensional, two-substrate model is used to examine the impact of mixing frequency on the activity of sulfate-reducing bacteria (SRB) in lake sediments. In the model, sulfate is supplied from the water column, while acetate is generated within the sediments. Mass transport to and within the sediments is by molecular diffusion except for instantaneous mixing events. Between mixing events, sulfate concentration gradients form above the sediment/water interface in the diffusive boundary layer. Sulfate depletion zones can be centimeters thick. When typical biological rate and diffusion coefficients for sulfate and acetate are used as inputs, the model indicates that a more frequent water-column mixing results in greater SRB concentrations. For an assumed bulk water-column sulfate concentration of 4.8 mg x l(-1), the sediment SRB concentrations for the modeled hourly, 6-hourly, daily, and weekly mixing frequencies were 175, 136, 91, and 30 mg x m(-2), respectively. The model also predicts higher time-averaged sulfate flux rates at more frequent water-column mixing. The time-averaged sulfate flux rates for the hourly, 6-hourly, daily, and weekly mixing frequencies were 1.26, 1.13, 0.78, and 0.30 mg x m(-2)h(-1), respectively. Thus
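The qualitative mechanism described (diffusive depletion above the interface, periodically erased by instantaneous mixing) can be reproduced with a toy 1-D finite-difference column in Python. Grid size, uptake fraction, and diffusion number are illustrative assumptions, not the paper's calibrated inputs:

```python
import numpy as np

def interface_conc(mix_interval, n_steps=20000):
    """Time-averaged substrate concentration at the sediment/water interface
    for a given mixing interval (in time steps)."""
    nz = 50                    # cells in the near-bottom water column
    r = 0.4                    # diffusion number D*dt/dz^2 (< 0.5 for stability)
    uptake = 0.2               # fraction of interface substrate consumed per step
    bulk = 4.8                 # bulk water-column concentration (mg/l, as in the paper)
    c = np.full(nz, bulk)
    total = 0.0
    for step in range(n_steps):
        if step % mix_interval == 0:
            c[:] = bulk        # instantaneous mixing event erases the depletion zone
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])   # molecular diffusion
        c[0] = bulk            # top boundary: well-mixed water column
        c[-1] *= 1 - uptake    # bottom boundary: sediment biological demand
        total += c[-1]
    return total / n_steps

freq = interface_conc(mix_interval=200)    # frequent mixing
rare = interface_conc(mix_interval=5000)   # rare mixing
```

Consistent with the paper's qualitative result, the time-averaged interface concentration comes out higher under frequent mixing (freq > rare).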
Understanding Flow Pathways, Mixing and Transit Times for Water Quality Modelling
NASA Astrophysics Data System (ADS)
Dunn, S. M.; Bacon, J. R.; Soulsby, C.; Tetzlaff, D.
2007-12-01
Water quality modelling requires representation of the physical processes controlling the movement of solutes and particulates at an appropriate level of detail to address the objective of the model simulations. To understand and develop mitigation strategies for diffuse pollution at catchment scales, it is necessary for models to be able to represent the sources and age of water reaching rivers at different times. Experimental and modelling studies undertaken on several catchments in the northeast of Scotland have used natural hydrochemical and isotopic tracers as a means of obtaining spatially integrated information about mixing processes. Methods for obtaining and integrating appropriate data are considered, together with the implications of neglecting them. The tracer data have been incorporated in a conceptual hydrological model to study the sensitivity of the modelled tracer response to factors that may not affect runoff simulations but do affect mixing and transit times of the water. Results from the studies have shown how model structural and parameter uncertainties can lead to errors in the representation of: the flow pathways of water; the degree to which these flow pathways have mixed; and the length of time for which water has been stored within the soil / groundwater system. It has been found to be difficult to eliminate structural uncertainty regarding the mechanisms of mixing, and parameter uncertainty regarding the role of groundwater. Simulations of nitrate pollution, resulting from the application of agricultural fertilisers, have been undertaken to demonstrate the sensitivity of water quality simulations to the potential errors in physical transport mechanisms, inherent in models that fail to account correctly for flow pathways, mixing and transit times.
Modelling transverse turbulent mixing in a shallow flow by using an eddy viscosity approach
NASA Astrophysics Data System (ADS)
Gualtieri, C.
2009-04-01
The mixing of contaminants in streams and rivers is a significant problem in environmental fluid mechanics and river engineering, since understanding the impact and the fate of pollutants in these water bodies is a primary goal of water quality management. Since most rivers have a high aspect ratio, that is, width-to-depth ratio, discharged pollutants become vertically mixed within a short distance from the source, and vertical mixing is only important in the so-called near-field. As a rule of thumb, a neutrally buoyant solute becomes fully mixed vertically within 50-75 depths from the source. Notably, vertical mixing analysis relies on a well-known theoretical basis, the Prandtl mixing-length model, which assumes the hypothesis of plane turbulent shear flow and provides theoretical predictions of the vertical turbulent diffusivity that closely match experimental results. In the mid-field, the vertical concentration gradients are negligible and both the subsequent transverse and longitudinal changes of the depth-averaged concentrations of the pollutants should be addressed. In the literature, for the application of one-dimensional water quality models, the majority of research efforts were devoted to estimating the rate of longitudinal mixing of a contaminant, that is, the development of a plume resulting from a temporally varying pollutant source once it has become cross-sectionally well mixed, in the far-field. Although transverse mixing is a significant process in river engineering when dealing with the discharge of pollutants from point sources or the mixing of tributary inflows, no theoretical basis exists for the prediction of its rate, which is instead based upon the results of experimental work carried out in laboratory channels or in streams and rivers. Turbulence models based on the eddy viscosity approach, such as the k-ε and k-ω models and their variants, are the most widely used turbulence models, and this is largely due to their ease of implementation.
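The contrast drawn here between vertical and transverse mixing can be made concrete with textbook scaling estimates: the Prandtl mixing-length profile integrates to a depth-averaged vertical diffusivity of about 0.067 d u*, while transverse mixing must fall back on empirical coefficients. A short Python sketch with illustrative depth and shear velocity (the 0.15 d u* transverse value is a textbook rule of thumb for straight channels, not a result of this paper):

```python
# Depth-averaged vertical diffusivity from the Prandtl mixing-length profile
# eps(z) = kappa * u_star * z * (1 - z/d), which integrates over the depth to
# eps_v = (kappa/6) * d * u_star ≈ 0.067 d u*. For transverse mixing no such
# theory exists; eps_t ≈ 0.15 d u* is a common empirical value for straight
# laboratory channels (assumed here for illustration).
kappa = 0.41     # von Karman constant
d = 2.0          # flow depth in m (illustrative)
u_star = 0.05    # shear velocity in m/s (illustrative)

eps_v = kappa / 6 * d * u_star   # vertical turbulent diffusivity, m^2/s
eps_t = 0.15 * d * u_star        # empirical transverse mixing coefficient, m^2/s
```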
Effects of additional food in a delayed predator-prey model.
Sahoo, Banshidhar; Poria, Swarup
2015-03-01
We examine the effects of supplying additional food to the predator in a gestation-delay-induced predator-prey system with habitat complexity. Additional food works in favor of predator growth in our model, and its presence reduces the predatory attack rate on prey. By supplying additional food we can control the predator population. Taking the time delay as the bifurcation parameter, the stability of the coexisting equilibrium point is analyzed. Hopf bifurcation analysis is done with respect to the time delay in the presence of additional food. The direction of the Hopf bifurcations and the stability of the bifurcated periodic solutions are determined by applying the normal form theory and the center manifold theorem. The qualitative dynamical behavior of the model is simulated using experimental parameter values. It is observed that fluctuations of the population size can be controlled either by supplying additional food suitably or by increasing the degree of habitat complexity. It is pointed out that a Hopf bifurcation occurs in the system when the delay crosses some critical value, and this critical value depends strongly on the quality and quantity of the supplied additional food. Therefore, the variation of the predator population significantly affects the dynamics of the model. Model results are compared with experimental results, and the biological implications of the analytical findings are discussed in the conclusion section.
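A delay-induced predator-prey system of this general flavor can be simulated with a simple Euler scheme that keeps a history buffer for the lagged prey density. The equations and parameter values below are a generic illustration of the mechanism (gestation delay in the predator's numerical response, additional food A), not the model analyzed in the paper:

```python
import numpy as np

def simulate(tau, A=0.0, t_end=200.0, dt=0.01):
    """Euler integration of a toy predator-prey system in which the predator's
    numerical response uses the prey density tau time units in the past
    (gestation delay) and A is the quantity of additional food. A constant
    history x(t) = x(0) is assumed for t < 0. Parameters are illustrative."""
    lag = int(round(tau / dt))
    n = int(round(t_end / dt))
    x = np.empty(n + 1)
    y = np.empty(n + 1)
    x[0], y[0] = 1.0, 0.5
    for k in range(n):
        x_lag = x[max(k - lag, 0)]
        # prey: logistic growth minus predation; additional food dilutes the attack rate
        dx = x[k] * (1.0 - x[k]) - 0.5 * x[k] * y[k] / (1.0 + A)
        # predator: saturating numerical response on lagged prey plus additional food
        dy = 0.5 * y[k] * (x_lag + A) / (1.0 + x_lag + A) - 0.2 * y[k]
        x[k + 1] = x[k] + dt * dx
        y[k + 1] = y[k] + dt * dy
    return x, y

x_no_delay, y_no_delay = simulate(tau=0.0)
x_delay, y_delay = simulate(tau=8.0)
```

Without delay this toy system settles to its coexisting equilibrium; increasing tau (or varying A) is how one would probe the delay-driven instability numerically.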
Observations and Model Simulations of Orographic Mixed-Phase Clouds at Mountain Range Site
NASA Astrophysics Data System (ADS)
Lohmann, U.; Henneberg, O. C.; Henneberger, J.
2014-12-01
Aerosol-cloud interactions constitute one of the largest uncertainties in forcing estimation. In particular, uncertainties due to mixed-phase clouds (MPCs) have a large impact on the radiative balance and on precipitation prediction. Owing to the Wegener-Bergeron-Findeisen (WBF) process, which describes the glaciation of MPCs due to the lower saturation over ice than over water, MPCs are mostly expected to be short-lived clouds. In contrast to WBF theory, in-situ measurements have shown that MPCs can persist over longer times, but only a small number of measurements of MPCs is available. In addition, modeling studies of MPCs are difficult because the processes of the three-phase system occur on the microscale and are therefore not resolved in models. We present measurements obtained at the high-altitude research station Jungfraujoch (JFJ, 3580 m asl) in the Swiss Alps, partly taken during the CLoud-Aerosol Interaction Experiments (CLACE). During the winter season, the JFJ has a high frequency of supercooled clouds and is considered representative of the free troposphere. In-situ measurements of the microstructure of MPCs have been obtained with the digital imager HOLIMO, which delivers phase-resolved size distributions, concentrations, and water contents. The data set of MPCs at JFJ shows that partially glaciated MPCs are observed more frequently in northerly wind cases than in southerly wind cases. The higher frequency of these intermediate states of MPCs suggests either higher updraft velocities, and therefore higher water-vapor supersaturations, or the absence of sufficiently high IN concentrations to quickly glaciate the MPC. Because of the limitation of in-situ information, i.e. point measurements and missing measurements of vertical velocities at JFJ, the mechanism of the long persistence of MPCs cannot be fully understood. Therefore, in addition to measurements we will investigate the JFJ region with a model study with the non-hydrostatic model COSMO-ART-M7. Combination of km
Numerical Modeling of Mixing of Chemically Reacting, Non-Newtonian Slurry for Tank Waste Retrieval
Yuen, David A.; Onishi, Yasuo; Rustad, James R.; Michener, Thomas E.; Felmy, Andrew R.; Ten, Arkady A.; Hier, Catherine A.
2000-06-01
Many highly radioactive wastes will be retrieved by installing mixer pumps that inject high-speed jets to stir up the sludge, saltcake, and supernatant liquid in the tank, blending them into a slurry. This slurry will then be pumped out of the tank into a waste treatment facility. Our objectives are to investigate the interactions (chemical reactions, waste rheology, and slurry mixing) occurring during the retrieval operation and to provide a scientific basis for the waste retrieval decision-making process. Specific objectives are to: (1) evaluate numerical modeling of chemically active, non-Newtonian tank waste mixing, coupled with chemical reactions and realistic rheology; (2) conduct numerical modeling analysis of local and global mixing of non-Newtonian and Newtonian slurries; and (3) provide the bases to develop a scientifically justifiable decision-making support tool for the tank waste retrieval operation.
Influence of pitch motion on the turbulent mixing in the wake of floating wind turbine models
NASA Astrophysics Data System (ADS)
Rockel, Stanislav; Peinke, Joachim; Hoelling, Michael; Cal, Raúl Bayoán
2014-11-01
Offshore wind turbines use fixed foundations, which are economical in shallow water up to a depth of 50 m. For deeper water, floating support structures are feasible alternatives. The added degrees of freedom of a floating platform introduce additional oscillations to the wind turbine and therefore influence the aerodynamics at the rotor and in its wake, respectively. The influence of platform pitch motion on the wake of an upstream wind turbine, and on a turbine positioned in that wake, is investigated. Wind tunnel experiments were performed using classical bottom-fixed wind turbine models and turbines in free pitch motion. Using 2D-3C stereoscopic particle image velocimetry (SPIV), the wakes of both turbines were measured. In both cases - fixed and pitching - the inflow conditions were kept constant. The differences in the turbulent quantities of the wake of the upwind turbine between the fixed and oscillating cases are investigated, along with their influence on the wake of the downwind turbine. Our results show that platform pitch and oscillatory motions of the wind turbine have a strong impact on the shape of the fluctuating components of the wake. The turbulent mixing is also changed by the oscillations, which is transferred to statistical quantities of higher order in the wake of the downwind turbine.
Optimal composite scores for longitudinal clinical trials under the linear mixed effects model.
Ard, M Colin; Raghavan, Nandini; Edland, Steven D
2015-01-01
Clinical trials of chronic, progressive conditions use rate of change on continuous measures as the primary outcome measure, with slowing of progression on the measure as evidence of clinical efficacy. For clinical trials with a single prespecified primary endpoint, it is important to choose an endpoint with the best signal-to-noise properties to optimize statistical power to detect a treatment effect. Composite endpoints composed of a linear weighted average of candidate outcome measures have also been proposed. Composites constructed as simple sums or averages of component tests, as well as composites constructed using weights derived from more sophisticated approaches, can be suboptimal, in some cases performing worse than individual outcome measures. We extend recent research on the construction of efficient linearly weighted composites by establishing the often overlooked connection between trial design and composite performance under linear mixed effects model assumptions and derive a formula for calculating composites that are optimal for longitudinal clinical trials of known, arbitrary design. Using data from a completed trial, we provide example calculations showing that the optimally weighted linear combination of scales can improve the efficiency of trials by almost 20% compared with the most efficient of the individual component scales. Additional simulations and analytical results demonstrate the potential losses in efficiency that can result from alternative published approaches to composite construction and explore the impact of weight estimation on composite performance.
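The classical result behind such optimal composites is that, for fixed component rates of change mu and covariance Sigma, weights proportional to inv(Sigma) @ mu maximize the composite's mean-to-variability ratio. A numpy sketch with illustrative numbers (the paper's formula additionally accounts for the longitudinal trial design; these values are hypothetical):

```python
import numpy as np

# Hypothetical annual rates of change (the "signal") and their covariance
# (the "noise") for three candidate outcome measures; values are illustrative.
mu = np.array([0.30, 0.20, 0.10])
sigma = np.array([[1.00, 0.30, 0.10],
                  [0.30, 0.80, 0.20],
                  [0.10, 0.20, 0.50]])

def snr(w):
    """Mean-to-SD ratio of the weighted composite's rate of change."""
    return (w @ mu) / np.sqrt(w @ sigma @ w)

# Classical optimum: weights proportional to inv(Sigma) @ mu maximize the SNR.
w_opt = np.linalg.solve(sigma, mu)
w_opt /= w_opt.sum()          # normalization; rescaling does not change the SNR

w_avg = np.full(3, 1 / 3)     # naive equal-weight composite for comparison
```

By construction snr(w_opt) is at least snr(w_avg) and at least the SNR of any single component, which is the sense in which simple sums and individual scales can be suboptimal.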
Effects of turbulent mixing on critical behaviour: renormalization-group analysis of the Potts model
NASA Astrophysics Data System (ADS)
Antonov, N. V.; Malyshev, A. V.
2012-06-01
The critical behaviour of a system subjected to strongly anisotropic turbulent mixing is studied by means of the field-theoretic renormalization group. Specifically, the relaxational stochastic dynamics of a non-conserved multicomponent order parameter of the Ashkin-Teller-Potts model, coupled to a random velocity field with prescribed statistics, is considered. The velocity is taken to be Gaussian, white in time, with a correlation function of the form ∝ δ(t - t′)/|k⊥|^(d-1+ξ), where k⊥ is the component of the wave vector perpendicular to the distinguished direction ('direction of the flow'); the d-dimensional generalization of this ensemble was introduced by Avellaneda and Majda (1990 Commun. Math. Phys. 131 381) within the context of passive scalar advection. This model can describe a rich class of physical situations. It is shown that, depending on the values of the parameters that define the self-interaction of the order parameter and the relation between the exponent ξ and the space dimension d, the system exhibits various types of large-scale scaling behaviour, associated with different infrared attractive fixed points of the renormalization-group equations. In addition to known asymptotic regimes (critical dynamics of the Potts model and passively advected field without self-interaction), the existence of a new, non-equilibrium and strongly anisotropic, type of critical behaviour (universality class) is established, and the corresponding critical dimensions are calculated to the leading order of the double expansion in ξ and ε = 6 - d (one-loop approximation). The scaling appears to be strongly anisotropic in the sense that the critical dimensions related to the directions parallel and perpendicular to the flow are essentially different.
The treatment of mixing in core helium burning models - II. Constraints from cluster star counts
NASA Astrophysics Data System (ADS)
Constantino, Thomas; Campbell, Simon W.; Lattanzio, John C.; van Duijneveldt, Adam
2016-03-01
The treatment of convective boundaries during core helium burning is a fundamental problem in stellar evolution calculations. In the first paper of this series, we showed that new asteroseismic observations of these stars imply they have either very large convective cores or semiconvection/partially mixed zones that trap g modes. We probe this mixing by inferring the relative lifetimes of the asymptotic giant branch (AGB) and horizontal branch (HB) phases from R2, the observed ratio of these stars in recent HST photometry of 48 Galactic globular clusters. Our new determinations of R2 are more self-consistent than those of previous studies, and our overall calculation of R2 = 0.117 ± 0.005 is the most statistically robust now available. We also establish that the luminosity difference between the HB and the AGB clump is Δ log L_HB^AGB = 0.455 ± 0.012. Our results accord with earlier findings that standard models predict a lower R2 than is observed. We demonstrate that the dominant sources of uncertainty in models are the prescription for mixing and the stochastic effects that can result from its numerical treatment. The luminosity probability density functions that we derive from observations feature a sharp peak near the AGB clump. This constitutes a strong new argument against core breathing pulses, which broaden the predicted width of the peak. We conclude that the two mixing schemes that can match the asteroseismology are capable of matching globular cluster observations, but only if (i) core breathing pulses are avoided in models with a semiconvection/partially mixed zone, or (ii) models with large convective cores have a particular depth of mixing beneath the Schwarzschild boundary during subsequent early-AGB 'gravonuclear' convection.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Cliff
2015-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
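The model form described (linear in non-dimensional surface position, logarithmic in Strouhal number) amounts to an ordinary least-squares fit per observer angle. A hypothetical sketch in Python, with synthetic data standing in for the measured attenuation spectra:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measurements" at one observer angle: attenuation (dB) generated
# from known coefficients, linear in non-dimensional surface position h and
# logarithmic in Strouhal number St. All numbers are hypothetical.
h = rng.uniform(0.5, 4.0, 200)          # surface position / jet diameter
st = 10 ** rng.uniform(-1, 1, 200)      # Strouhal number
atten = 2.0 - 1.5 * h + 3.0 * np.log10(st) + 0.1 * rng.standard_normal(200)

# Least-squares fit of the stated model form; in the real model a separate
# coefficient set like this would be fitted at each observer angle.
A = np.column_stack([np.ones(200), h, np.log10(st)])
coef, *_ = np.linalg.lstsq(A, atten, rcond=None)   # recovers roughly (2.0, -1.5, 3.0)
```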
Using the Mixed Rasch Model to analyze data from the beliefs and attitudes about memory survey.
Smith, Everett V; Ying, Yuping; Brown, Scott W
2012-01-01
In this study, we used the Mixed Rasch Model (MRM) to analyze data from the Beliefs and Attitudes About Memory Survey (BAMS; Brown, Garry, Silver, and Loftus, 1997). We used the original 5-point BAMS data to investigate the functioning of the "Neutral" category via threshold analysis under a 2-class MRM solution. The "Neutral" category was identified as not eliciting the model-expected responses, and observations in the "Neutral" category were subsequently treated as missing data. For the BAMS data without the "Neutral" category, exploratory MRM analyses specifying up to 5 latent classes were conducted to evaluate data-model fit using the consistent Akaike information criterion (CAIC). For each of the three BAMS subscales, a two-latent-class solution was identified as best fitting the mixed Rasch rating scale model. Results regarding threshold analysis, person parameters, and item fit based on the final models are presented and discussed, as well as the implications of this study.
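The CAIC used here for choosing the number of latent classes penalizes the log-likelihood by p(ln N + 1). A small Python sketch with hypothetical log-likelihoods and parameter counts (not the BAMS values):

```python
import math

def caic(log_likelihood, n_params, n_obs):
    """Consistent Akaike information criterion: -2 ln L + p (ln N + 1).
    Lower is better; the extra +p (relative to BIC) strengthens the penalty."""
    return -2.0 * log_likelihood + n_params * (math.log(n_obs) + 1.0)

# Hypothetical fits for 1-3 latent class solutions: (log-likelihood, n_params).
fits = {1: (-5200.0, 20), 2: (-5050.0, 41), 3: (-5030.0, 62)}
n_obs = 400
scores = {c: caic(ll, p, n_obs) for c, (ll, p) in fits.items()}
best = min(scores, key=scores.get)   # class count with the lowest CAIC
```

With these illustrative numbers the 2-class solution wins: its likelihood gain over 1 class outweighs the penalty, while the gain from a third class does not.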
Felton, C A; DeVries, T J
2010-06-01
The objective of this study was to determine the effects of water addition to a high-moisture total mixed ration (TMR) on feed temperature, feed intake, feed sorting behavior, and milk production of dairy cows. Twelve lactating Holstein cows (155.8 ± 60.1 DIM), individually fed once daily at 1000 h, were exposed to 3 diets in a Latin square design with 28-d treatment periods. Diets had the same ingredient composition [30.9% corn silage, 30.3% alfalfa haylage, 21.2% high-moisture corn, and 17.6% protein supplement; dry matter (DM) basis] and differed only in DM concentration, which was reduced by the addition of water. Treatment diets averaged 56.3, 50.8, and 44.1% DM. The study was conducted between May and August, when environmental temperature was 18.2 ± 3.6°C and ambient temperature in the barn was 24.4 ± 3.3°C. Dry matter intake (DMI) was monitored for each animal for the last 14 d of each treatment period. For the final 7 d of each period, milk production was monitored, feed temperature and ambient temperature and humidity were recorded (daily at 1000, 1300, and 1600 h), and fresh feed and orts were sampled for determination of sorting. For the final 4 d of each period, milk samples were taken for composition analysis. Samples taken for determining sorting were separated using a Penn State Particle Separator that had 3 screens (19, 8, and 1.18 mm) and a bottom pan, resulting in 4 fractions (long, medium, short, and fine). Sorting was calculated as the actual intake of each particle size fraction expressed as a percentage of the predicted intake of that fraction. Greater amounts of water added to the TMR resulted in greater increases in feed temperature in the hours after feed delivery, greater sorting against long particles, and decreased DMI, reducing the overall intake of starch and neutral detergent fiber. Milk production and composition were not affected by the addition of water to the TMR. Efficiency of production of milk was, however…
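The sorting calculation described above (actual intake of each particle-size fraction as a percentage of its predicted intake, with values below 100 indicating sorting against a fraction) can be sketched as follows; the fed and refused (orts) masses are hypothetical, not the study's data:

```python
def sorting_index(fed_kg, orts_kg):
    """Per-fraction sorting index: actual intake as % of predicted intake.
    Predicted intake assumes the cow eats each fraction in the proportion fed.
    ~100 = no sorting; <100 = sorting against; >100 = sorting for."""
    total_fed = sum(fed_kg.values())
    total_intake = total_fed - sum(orts_kg.values())
    index = {}
    for frac in fed_kg:
        actual = fed_kg[frac] - orts_kg[frac]
        predicted = total_intake * fed_kg[frac] / total_fed
        index[frac] = 100.0 * actual / predicted
    return index

# Hypothetical masses (kg DM) for the four Penn State separator fractions
fed = {"long": 5.0, "medium": 8.0, "short": 6.0, "fine": 1.0}
orts = {"long": 1.5, "medium": 0.4, "short": 0.2, "fine": 0.1}
idx = sorting_index(fed, orts)  # idx["long"] < 100: sorting against long particles
```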
Mikolajewicz, U.; Maier-reimer, E.
1994-11-01
When driven under "mixed boundary conditions," coarse-resolution ocean general circulation models (OGCMs) generally show a high sensitivity of the present-day thermohaline circulation to perturbations. We show that an alternative formulation of the boundary condition for temperature, a mixture of prescribed heat fluxes and additional restoring of the sea surface temperature to a climatological boundary temperature with a longer time constant, drastically alters the stability of the modes of the thermohaline circulation. Results from simulations with the Hamburg large-scale geostrophic OGCM indicate that the stability of the mode of the thermohaline circulation with formation of North Atlantic deep water increases if the damping of sea surface temperature anomalies is reduced, whereas the opposite is true for the mode without North Atlantic deep water formation. It turns out that the formulation of the temperature boundary condition also affects the variability of the model.
NASA Astrophysics Data System (ADS)
Komiyama, Ryoichi; Shibata, Saeko; Nakamura, Yosuke; Fujii, Yasumasa
This paper presents the evaluation on the impact of an extensive introduction of photovoltaic (PV) system and stationary battery technology into optimal power generation mix in Kanto and Kinki region. The introduction of solar PV system is expected to be extensively deployed in Japanese household sector and utility company in order to address the concerns of energy security and climate change. Considering this expected large-scale deployment of PV system in electric power system, it is necessary to investigate the optimal power generation mix which is technologically capable of controlling and accommodating the intermittent output-power fluctuation inherently derived from PV system. On these backgrounds, we develop both solar photovoltaic power generation model and optimal power generation mix model, including stationary battery technology, which are able to explicitly analyze the impact of PV output fluctuation in detailed resolution of time interval like 10 minutes at consecutive 365 days. Simulation results reveal that PV introduction does not necessarily increase battery technology due to the cost competitiveness of thermal power plants in load following requirement caused by PV system. Additionally, on the basis of sensitivity analysis on PV system cost, dramatic cost reduction proves to be indispensable enough for PV to supply a bulk of electricity similarly as thermal and nuclear power plant.
Brannock, M; Wang, Y; Leslie, G
2010-05-01
Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of design for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affects hydrodynamics, hence overall performance. Computational Fluid Dynamics (CFD) provides a method for prediction of how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. The effect of sludge settling and rheology was found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).
Sensitivity of a global coupled ocean-sea ice model to the parameterization of vertical mixing
NASA Astrophysics Data System (ADS)
Goosse, H.; Deleersnijder, E.; Fichefet, T.; England, M. H.
1999-06-01
Three numerical experiments have been carried out with a global coupled ice-ocean model to investigate its sensitivity to the treatment of vertical mixing in the upper ocean. In the first experiment, a widely used fixed profile of vertical diffusivity and viscosity is imposed, with large values in the upper 50 m to crudely represent wind-driven mixing. In the second experiment, the eddy coefficients are functions of the Richardson number, and, in the third case, a relatively sophisticated parameterization, based on the Mellor-Yamada level 2.5 turbulence closure scheme, is introduced. We monitor the way the different mixing schemes affect the simulated ocean ventilation, water mass properties, and sea ice distributions. CFC uptake is also diagnosed in the model experiments. The simulation of the mixed layer depth is improved in the experiment which includes the sophisticated turbulence closure scheme. This results in a good representation of the upper ocean thermohaline structure and in heat exchange with the atmosphere within the range of current estimates. However, the error in heat flux in the experiment with simple fixed vertical mixing coefficients can be as high as 50 W m-2 in zonal mean during summer. Using CFC tracers allows us to demonstrate that the ventilation of the deep ocean is not significantly influenced by the parameterization of vertical mixing in the upper ocean. The only exception is the Southern Ocean. There, the ventilation is too strong in all three experiments. However, modifications of the vertical diffusivity and, surprisingly, the vertical viscosity significantly affect the stability of the water column in this region through their influence on upper ocean salinity, resulting in a more realistic Southern Ocean circulation. The turbulence scheme also results in an improved simulation of Antarctic sea ice coverage. This is due to a better simulation of the mixed layer depth and thus of heat exchanges between ice and ocean. The…
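Richardson-number-dependent eddy coefficients, as used in the second experiment, are typically implemented along the lines of a Pacanowski-Philander-type scheme; the abstract does not give the exact form, so the function and parameter values below are an illustrative assumption:

```python
def richardson_viscosity(ri, nu0=0.01, alpha=5.0, n=2, nu_b=1.0e-4):
    """Richardson-number-dependent vertical eddy viscosity (m^2/s):
    nu = nu0 / (1 + alpha*Ri)^n + nu_b  (Pacanowski-Philander-type form).
    Mixing is strong at low Ri (shear-dominated) and shuts down at high Ri
    (stratification-dominated); negative Ri is clipped to maximum mixing."""
    ri = max(ri, 0.0)
    return nu0 / (1.0 + alpha * ri) ** n + nu_b

# Stronger stratification (larger Ri) -> weaker vertical mixing
profile = [richardson_viscosity(ri) for ri in (0.0, 0.1, 1.0, 10.0)]
```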
Estimating Multidimensional Item Response Models with Mixed Structure. Research Report. ETS RR-05-04
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
This study derived an expectation-maximization (EM) algorithm for estimating the parameters of multidimensional item response models. A genetic algorithm (GA) was developed to be used in the maximization step in each EM cycle. The focus of the EM-GA algorithm developed in this paper was on multidimensional items with "mixed structure."…
Converting isotope values to diet composition - the use of mixing models
A common use of stable isotope analysis in mammalogy is to make inferences about diet from isotopic values (typically δ13C and δ15N) measured in a consumer’s tissues and its food sources. Mathematical mixing models are used to estimate the proportional contributions of food sour...
Converting isotope ratios to diet composition - the use of mixing models - June 2010
One application of stable isotope analysis is to reconstruct diet composition based on isotopic mass balance. The isotopic value of a consumer’s tissue reflects the isotopic values of its food sources proportional to their dietary contributions. Isotopic mixing models are used ...
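For a single isotope and two food sources, the mass balance described above has a closed-form solution; a minimal sketch with hypothetical δ13C values (trophic discrimination ignored for brevity):

```python
def two_source_proportion(d_mix, d_a, d_b):
    """Proportion p of source A from one-isotope mass balance:
    d_mix = p*d_a + (1 - p)*d_b  =>  p = (d_mix - d_b) / (d_a - d_b)."""
    return (d_mix - d_b) / (d_a - d_b)

# Hypothetical d13C (per mil): consumer tissue -18, C4 plants -13, C3 plants -27
p_c4 = two_source_proportion(-18.0, -13.0, -27.0)  # fraction of diet from C4 source
```

With more sources than isotopes plus one, the system becomes underdetermined, which is why source aggregation and Bayesian mixing models enter the picture.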
Mixed linear model approach adapted for genome-wide association studies
Technology Transfer Automated Retrieval System (TEKTRAN)
Mixed linear model (MLM) methods have proven useful in controlling for population structure and relatedness within genome-wide association studies. However, MLM-based methods can be computationally challenging for large datasets. We report a compression approach, called ‘compressed MLM,’ that decrea...
SOURCE AGGREGATION IN STABLE ISOTOPE MIXING MODELS: LUMP IT OR LEAVE IT?
A common situation when stable isotope mixing models are used to estimate source contributions to a mixture is that there are too many sources to allow a unique solution. To resolve this problem one option is to combine sources with similar signatures such that the number of sou...
Software engineering the mixed model for genome-wide association studies on large samples
Technology Transfer Automated Retrieval System (TEKTRAN)
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...
Two-Dimensional Modeling of Imprint and Feedthrough in OMEGA Mix Spherical Experiments
NASA Astrophysics Data System (ADS)
Delettrez, J. A.; Bradley, D. K.; Epstein, R.; Verdon, C. P.
1997-11-01
In the one-dimensional code LILAC, mix is modeled as a diffusive transport process within a mix thickness obtained from a multimode growth model. The model requires knowledge of the imprint level (converting the illumination nonuniformity into a surface perturbation), the growth rates (now obtained from a Takabe-like formula), the feedthrough from the ablation surface to the shell inner surface, and the effects of spherical convergence on the inner-surface perturbation growth. We investigate the first three of these subjects in a series of simulations carried out with the two-dimensional hydrodynamic code ORCHID for the conditions of the current mix experiments. The effect of the laser pulse shape is studied by comparing the imprinting and growth rates from 1-ns Gaussian pulses with those from fast-rising (100-ps) flat-top pulses, both with peak intensities near 10^15 W/cm^2. Legendre modes l=10 to l=120 are investigated in separate runs; this is the range of the fastest-growing modes during the acceleration-phase instability. Growth rates and feedthrough amplitudes obtained from the mix model are compared to ORCHID results. Parametric expressions for the equivalent surface perturbation are presented. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC03-92SF19460.
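A Takabe-like growth-rate formula, as referenced above, balances classical Rayleigh-Taylor growth against ablative stabilization; a minimal sketch using commonly quoted coefficients (the code's actual fitted values are not given in the abstract, so treat alpha and beta as illustrative):

```python
import math

def takabe_growth_rate(k, g, v_a, alpha=0.9, beta=3.0):
    """Takabe-like ablative Rayleigh-Taylor growth rate (1/s):
    gamma = alpha*sqrt(k*g) - beta*k*v_a,
    where k is the perturbation wavenumber, g the acceleration, and v_a the
    ablation velocity; alpha and beta are illustrative fit constants."""
    return alpha * math.sqrt(k * g) - beta * k * v_a

# For a spherical shell of radius R, Legendre mode l maps to k ~ l / R,
# so higher-l modes grow faster until ablative stabilization cuts them off.
```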
ERIC Educational Resources Information Center
Dardis, Christina M.; Kelley, Erika L.; Edwards, Katie M.; Gidycz, Christine A.
2013-01-01
Objective: This study assessed abused and nonabused women's perceptions of Investment Model (IM) variables (ie, relationship investment, satisfaction, commitment, quality of alternatives) utilizing a mixed-methods design. Participants: Participants included 102 college women, approximately half of whom were in abusive dating relationships.…
Wave Climate and Wave Mixing in the Marginal Ice Zones of Arctic Seas, Observations and Modelling
2014-09-30