Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution
Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen
2014-01-01
Polydimethylsiloxane (PDMS) is commonly used as the coated polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and the aqueous solution were compiled from literature sources. A correlation analysis of the partition coefficients was conducted to interpret the effect of physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated with the polarizability of the organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to predict the partition coefficients of 61 organic compounds in the training set. The predictive ability of the empirical model was demonstrated on a test set of 26 chemicals not included in the training set. Because it relies on straightforwardly calculated molecular descriptors, the empirical model for estimating the PDMS-water partition coefficient will contribute to practical applications of the SPME technique. PMID:24534804
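As an illustration of the kind of empirical model described above, the sketch below combines polarizability, a molecular connectivity index, and an indicator variable in a linear expression; the coefficient values and descriptor inputs are placeholders, not the fitted values from the study.

```python
def log_k_pdms_water(polarizability, chi, indicator, coeffs=(0.25, 0.60, -0.40, -1.0)):
    """Hypothetical empirical model of the form described in the abstract:
    log K(PDMS/water) = a*polarizability + b*chi + c*indicator + d.
    The coefficients here are illustrative placeholders only."""
    a, b, c, d = coeffs
    return a * polarizability + b * chi + c * indicator + d

# Example call with made-up descriptor values for a nonpolar solute
print(log_k_pdms_water(polarizability=12.3, chi=3.41, indicator=0))
```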
Toropov, A A; Toropova, A P; Raska, I
2008-04-01
Simplified molecular input line entry system (SMILES) has been utilized in constructing quantitative structure-property relationships (QSPR) for octanol/water partition coefficient of vitamins and organic compounds of different classes by optimal descriptors. Statistical characteristics of the best model (vitamins) are the following: n=17, R(2)=0.9841, s=0.634, F=931 (training set); n=7, R(2)=0.9928, s=0.773, F=690 (test set). Using this approach for modeling octanol/water partition coefficient for a set of organic compounds gives a model that is statistically characterized by n=69, R(2)=0.9872, s=0.156, F=5184 (training set) and n=70, R(2)=0.9841, s=0.179, F=4195 (test set).
NASA Astrophysics Data System (ADS)
Ise, Takeshi; Litton, Creighton M.; Giardina, Christian P.; Ito, Akihiko
2010-12-01
Partitioning of gross primary production (GPP) to aboveground versus belowground, to growth versus respiration, and to short versus long-lived tissues exerts a strong influence on ecosystem structure and function, with potentially large implications for the global carbon budget. A recent meta-analysis of forest ecosystems suggests that carbon partitioning to leaves, stems, and roots varies consistently with GPP and that the ratio of net primary production (NPP) to GPP is conservative across environmental gradients. To examine influences of carbon partitioning schemes employed by global ecosystem models, we used this meta-analysis-based model and a satellite-based (MODIS) terrestrial GPP data set to estimate global woody NPP and equilibrium biomass, and then compared it to two process-based ecosystem models (Biome-BGC and VISIT) using the same GPP data set. We hypothesized that different carbon partitioning schemes would result in large differences in global estimates of woody NPP and equilibrium biomass. Woody NPP estimated by Biome-BGC and VISIT was 25% and 29% higher than the meta-analysis-based model for boreal forests, with smaller differences in temperate and tropics. Global equilibrium woody biomass, calculated from model-specific NPP estimates and a single set of tissue turnover rates, was 48 and 226 Pg C higher for Biome-BGC and VISIT compared to the meta-analysis-based model, reflecting differences in carbon partitioning to structural versus metabolically active tissues. In summary, we found that different carbon partitioning schemes resulted in large variations in estimates of global woody carbon flux and storage, indicating that stand-level controls on carbon partitioning are not yet accurately represented in ecosystem models.
Burant, Aniela; Thompson, Christopher; Lowry, Gregory V; Karamalidis, Athanasios K
2016-05-17
Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch-reactor system with dual spectroscopic detectors: a near-infrared spectrometer for measuring the organic analyte in the CO2 phase and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published poly parameter linear free-energy relationship and to develop five new linear free-energy relationships for predicting water-sc-CO2 partitioning coefficients. A total of four of the models targeted a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use vapor pressure and aqueous solubility of the organic compound at 25 °C and CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound class models provide better estimates of partitioning behavior for compounds in that class than does the model built for the entire data set.
NASA Astrophysics Data System (ADS)
Chandramouli, Bharadwaj; Kamens, Richard M.
Decamethylcyclopentasiloxane (D5) and decamethyltetrasiloxane (MD2M) were injected into a smog chamber containing fine Arizona road dust particles (95% of surface area <2.6 μm) and an urban smog atmosphere in the daytime. A combined photochemical reaction and gas-particle partitioning scheme was implemented to simulate the formation and gas-particle partitioning of the hydroxyl oxidation products of D5 and MD2M. This scheme incorporated the reactions of D5 and MD2M into an existing urban smog chemical mechanism (Carbon Bond IV) and partitioned the products between the gas and particle phases by treating gas-particle partitioning as a kinetic process and specifying uptake and off-gassing rates. The photochemical model PKSS was used to simulate this set of reactions. A Langmuirian partitioning model was used to convert the measured and estimated mass-based partitioning coefficients (KP) to a molar or volume-based form. The model simulations indicated that >99% of the product silanols formed in the gas phase partition immediately to the particle phase, and the experimental data agreed with the model predictions. One product, D4TOH, was observed and confirmed for the D5 reaction, and this system was modeled successfully. Experimental data were inadequate for the MD2M reaction products, and it is likely that more than one product formed. The model provides a framework into which more reaction and partitioning steps can easily be added.
Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P
2007-02-08
Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selection of models by using backward elimination rather than AIC or AICc, (ii) use a stringent cut-off, e.g., p = 0.0001, and (iii) conduct sensitivity analysis of results. With thoughtful application, fixed-effect codon models should provide a useful tool for large scale multi-gene analyses.
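To make the model-selection criteria concrete, here is a minimal sketch (not the authors' code) of AIC, AICc, and the likelihood-ratio test that drives a backward-elimination step; it assumes log-likelihoods are supplied by an external codon-model fitting routine and uses scipy only for the chi-square tail probability.

```python
from scipy.stats import chi2

def aic(log_lik, k):
    # k = number of free parameters in the fixed-effect codon model
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    # small-sample correction; n = number of observations (e.g., codon sites)
    return aic(log_lik, k) + 2 * k * (k + 1) / (n - k - 1)

def backward_elimination_step(log_lik_full, log_lik_reduced, df, alpha=0.0001):
    """Accept the reduced model (parameters shared across partitions) unless the
    likelihood-ratio test rejects it at the stringent cut-off recommended above."""
    p_value = chi2.sf(2 * (log_lik_full - log_lik_reduced), df)
    return "keep reduced model" if p_value > alpha else "keep full model"
```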
NASA Astrophysics Data System (ADS)
Chandramouli, Bharadwaj; Jang, Myoseon; Kamens, Richard M.
The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the α-pinene-O3 reaction was augmented by carrying out smog chamber partitioning experiments on aerosols from meat cooking, and catalyzed and uncatalyzed gasoline engine exhaust. Model compositions for aerosols from meat cooking and gasoline combustion emissions were used to calculate activity coefficients for the SOCs in the organic aerosols, and the Pankow absorptive gas/particle partitioning model was used to calculate the partitioning coefficient Kp and quantitate the predictive improvements of using the activity coefficient. The slope of the log Kp vs. log p°L correlation for partitioning on aerosols from meat cooking improved from -0.81 to -0.94 after incorporation of the activity coefficients γi,om. A stepwise regression analysis of the partitioning model revealed that, for the data set used in this study, partitioning predictions on α-pinene-O3 secondary aerosol and wood combustion aerosol showed statistically significant improvement after incorporation of γi,om, which can be attributed to their overall polarity. The partitioning model was sensitive to changes in aerosol composition when updated compositions for α-pinene-O3 aerosol and wood combustion aerosol were used. The effectiveness of the octanol-air partitioning coefficient (KOA) as a partitioning correlator over a variety of aerosol types was evaluated. The slope of the log Kp-log KOA correlation was not constant over the aerosol types and SOCs used in the study, and the use of KOA for partitioning correlations can potentially lead to significant deviations, especially for polar aerosols.
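For reference, a sketch of the Pankow absorptive partitioning expression with the activity coefficient included is given below (Kp in m3 μg-1, liquid vapor pressure in torr); the numerical inputs are illustrative, not values from the study.

```python
R = 8.206e-5  # gas constant, m^3 atm mol^-1 K^-1

def pankow_kp(f_om, mw_om, gamma_i_om, p_l0_torr, temp_k=298.15):
    """Absorptive gas/particle partitioning coefficient:
    K_p = 760 * R * T * f_om / (MW_om * gamma_i,om * p_L0 * 1e6)  [m^3 ug^-1]."""
    return 760.0 * R * temp_k * f_om / (mw_om * gamma_i_om * p_l0_torr * 1.0e6)

# Illustrative values: a semivolatile compound on a mostly organic aerosol
print(pankow_kp(f_om=0.8, mw_om=200.0, gamma_i_om=1.5, p_l0_torr=1e-6))
```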
Modeling of adipose/blood partition coefficient for environmental chemicals.
Papadaki, K C; Karakitsios, S P; Sarigiannis, D A
2017-12-01
A Quantitative Structure Activity Relationship (QSAR) model was developed in order to predict the adipose/blood partition coefficient of environmental chemical compounds. The first step of QSAR modeling was the collection of inputs. Input data included the experimental values of the adipose/blood partition coefficient and two sets of molecular descriptors for 67 organic chemical compounds: (a) descriptors from the Linear Free Energy Relationship (LFER) and (b) PaDEL descriptors. The datasets were split into training and prediction sets and were analysed using two statistical methods: Genetic Algorithm based Multiple Linear Regression (GA-MLR) and Artificial Neural Networks (ANN). The models with LFER and PaDEL descriptors, coupled with ANN, produced satisfying performance results. The fitting performance (R2) of the models, using LFER and PaDEL descriptors, was 0.94 and 0.96, respectively. The Applicability Domain (AD) of the models was assessed and then the models were applied to a large number of chemical compounds with unknown values of the adipose/blood partition coefficient. In conclusion, the proposed models were checked for fitting, validity and applicability. It was demonstrated that they are stable, reliable and capable of predicting the values of the adipose/blood partition coefficient of "data poor" chemical compounds that fall within the applicability domain. Copyright © 2017. Published by Elsevier Ltd.
Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio
2010-04-01
Three different splits into the subtraining set (n = 22), the set of calibration (n = 21), and the test set (n = 12) of 55 antineoplastic agents have been examined. By the correlation balance of SMILES-based optimal descriptors quite satisfactory models for the octanol/water partition coefficient have been obtained on all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both the maximal values of the correlation coefficient for the subtraining and calibration set and the minimum of the difference between the above-mentioned correlation coefficients. Thus, the calibration set is a preliminary test set. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Corrigan, Catherine M.; Chabot, Nancy L.; McCoy, Timothy J.; McDonough, William F.; Watson, Heather C.; Saslow, Sarah A.; Ash, Richard D.
2009-05-01
To better understand the partitioning behavior of elements during the formation and evolution of iron meteorites, two sets of experiments were conducted at 1 atm in the Fe-Ni-P system. The first set examined the effect of P on solid metal/liquid metal partitioning behavior of 22 elements, while the other set explored the effect of the crystal structures of body-centered cubic (α)- and face-centered cubic (γ)-solid Fe alloys on partitioning behavior. Overall, the effect of P on the partition coefficients for the majority of the elements was minimal. As, Au, Ga, Ge, Ir, Os, Pt, Re, and Sb showed slightly increasing partition coefficients with increasing P-content of the metallic liquid. Co, Cu, Pd, and Sn showed constant partition coefficients. Rh, Ru, W, and Mo showed phosphorophile (P-loving) tendencies. Parameterization models were applied to solid metal/liquid metal results for 12 elements. As, Au, Pt, and Re failed to match previous parameterization models, requiring the determination of separate parameters for the Fe-Ni-S and Fe-Ni-P systems. Experiments with coexisting α and γ Fe alloy solids produced partitioning ratios close to unity, indicating that an α versus γ Fe alloy crystal structure has only a minor influence on the partitioning behaviors of the trace element studied. A simple relationship between an element's natural crystal structure and its α/γ partitioning ratio was not observed. If an iron meteorite crystallizes from a single metallic liquid that contains both S and P, the effect of P on the distribution of elements between the crystallizing solids and the residual liquid will be minor in comparison to the effect of S. This indicates that to a first order, fractional crystallization models of the Fe-Ni-S-P system that do not take into account P are appropriate for interpreting the evolution of iron meteorites if the effects of S are appropriately included in the effort.
Padró, Juan M; Pellegrino Vidal, Rocío B; Reta, Mario
2014-12-01
The partition coefficients, PIL/w, of several compounds, some of them of biological and pharmacological interest, between water and room-temperature ionic liquids based on the imidazolium, pyridinium, and phosphonium cations, namely 1-octyl-3-methylimidazolium hexafluorophosphate, N-octylpyridinium tetrafluorophosphate, trihexyl(tetradecyl)phosphonium chloride, trihexyl(tetradecyl)phosphonium bromide, trihexyl(tetradecyl)phosphonium bis(trifluoromethylsulfonyl)imide, and trihexyl(tetradecyl)phosphonium dicyanamide, were accurately measured. In this way, we extended our database of partition coefficients in room-temperature ionic liquids previously reported. We employed the solvation parameter model with different probe molecules (the training set) to elucidate the chemical interactions involved in the partition process and discussed the most relevant differences among the three types of ionic liquids. The multiparametric equations obtained with the aforementioned model were used to predict the partition coefficients for compounds (the test set) not present in the training set, most being of biological and pharmacological interest. An excellent agreement between calculated and experimental log PIL/w values was obtained. Thus, the obtained equations can be used to predict, a priori, the extraction efficiency for any compound using these ionic liquids as extraction solvents in liquid-liquid extractions.
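The general form of the solvation parameter model used for the training set is sketched below; the system constants shown are placeholders rather than the fitted values reported by the authors.

```python
def log_p_il_w(E, S, A, B, V, coeffs):
    """Abraham-type solvation parameter model:
    log P(IL/w) = c + e*E + s*S + a*A + b*B + v*V,
    where E, S, A, B, V are solute descriptors and (c, e, s, a, b, v)
    are system constants fitted by multiple linear regression."""
    c, e, s, a, b, v = coeffs
    return c + e * E + s * S + a * A + b * B + v * V

# Placeholder system constants (not the published coefficients)
example_coeffs = (0.1, 0.3, -0.5, -1.2, -2.8, 3.0)
print(log_p_il_w(E=0.80, S=0.90, A=0.26, B=0.45, V=0.92, coeffs=example_coeffs))
```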
Convex Regression with Interpretable Sharp Partitions
Petersen, Ashley; Simon, Noah; Witten, Daniela
2016-01-01
We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the field of machine learning, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from the random sampling and the restricted set of input variables to be selected. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
NASA Astrophysics Data System (ADS)
Topping, D. O.; Lowe, D.; McFiggans, G.; Zaveri, R. A.
2016-12-01
Gas to particle partitioning of atmospheric compounds occurs through disequilibrium mass transfer rather than through instantaneous equilibrium. However, it is common to treat only the inorganic compounds as partitioning dynamically whilst organic compounds, represented by the Volatility Basis Set (VBS), are partitioned instantaneously. In this study we implement a more realistic dynamic partitioning of organic compounds in a regional framework and assess the impact on aerosol mass and microphysics. It is also common to assume condensed phase water is only associated with inorganic components. We thus also assess sensitivity to assuming all organics are hygroscopic according to their prescribed molecular weight. For this study we use WRF-Chem v3.4.1, focusing on anthropogenically dominated North-Western Europe. Gas-phase chemistry is represented using CBM-Z whilst aerosol dynamics are simulated using the 8-section MOSAIC scheme, including a 9-bin volatility basis set (VBS) treatment of organic aerosol. Results indicate that predicted mass loadings can vary significantly. Without gas phase ageing of higher volatility compounds, dynamic partitioning always results in lower mass loadings downwind of emission sources. The inclusion of condensed phase water in both partitioning models increases the predicted PM mass, resulting from a larger contribution from higher volatility organics, if present. If gas phase ageing of VBS compounds is allowed to occur in a dynamic model, this can often lead to higher predicted mass loadings, contrary to expected behaviour from a simple non-reactive gas phase box model. As descriptions of aerosol phase processes improve within regional models, the baseline descriptions of partitioning should retain the ability to treat dynamic partitioning of organic compounds. Using our simulations, we discuss whether derived sensitivities to aerosol processes in existing models may be inherently biased. This work was supported by the Natural Environment Research Council within the RONOCO (NE/F004656/1) and CCN-Vol (NE/L007827/1) projects.
Equivalence of partition properties and determinacy
Kechris, Alexander S.; Woodin, W. Hugh
1983-01-01
It is shown that, within L(ℝ), the smallest inner model of set theory containing the reals, the axiom of determinacy is equivalent to the existence of arbitrarily large cardinals below Θ with the strong partition property κ → (κ)^κ. PMID:16593299
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are always characterized by nonlinear and uncertain system properties; therefore, a conventional single model may be ill-suited. A local learning soft sensor based on a variable partition ensemble method is developed for the quality prediction of nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Then, multiple local GPR models are developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each best-performing local model is estimated based on Bayesian inference and used to combine these local GPR models to obtain the final prediction. The proposed soft sensor is demonstrated by applying it to an industrial fed-batch chlortetracycline fermentation process.
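A minimal sketch of the local-model ensemble idea is given below, assuming numpy arrays, scikit-learn's GaussianProcessRegressor, and Gaussian likelihoods for the Bayesian weights; the bootstrapping and PMI-based variable selection described above are omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_local_models(variable_sets, X, y):
    # one local GPR per selected subset of input variables
    return [GaussianProcessRegressor().fit(X[:, idx], y) for idx in variable_sets]

def combine_predictions(models, variable_sets, X_new, X_recent, y_recent):
    """Weight each local model by its (Gaussian) likelihood on recent data,
    then return the posterior-weighted prediction for the new samples."""
    log_w = []
    for m, idx in zip(models, variable_sets):
        mu, sd = m.predict(X_recent[:, idx], return_std=True)
        log_w.append(np.sum(-0.5 * ((y_recent - mu) / sd) ** 2 - np.log(sd)))
    w = np.exp(np.array(log_w) - max(log_w))
    w /= w.sum()
    preds = np.array([m.predict(X_new[:, idx]) for m, idx in zip(models, variable_sets)])
    return (w[:, None] * preds).sum(axis=0)
```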
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
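As an illustration of a dichotomic partition of the 64 codons into two classes of 32, the sketch below implements Rumer's classical bisection (codons whose first two bases form a fourfold-degenerate doublet versus all others); the actual BDAs searched by Beady-A can differ.

```python
from itertools import product

BASES = "TCAG"
FOURFOLD_DOUBLETS = {"TC", "CT", "CC", "CG", "AC", "GT", "GC", "GG"}  # Rumer's set

def rumer_partition():
    """Split the 64 codons into two disjoint classes of 32 codons each."""
    codons = ["".join(c) for c in product(BASES, repeat=3)]
    class1 = {c for c in codons if c[:2] in FOURFOLD_DOUBLETS}
    class2 = set(codons) - class1
    return class1, class2

c1, c2 = rumer_partition()
print(len(c1), len(c2))  # 32 32
```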
Does History Repeat Itself? Wavelets and the Phylodynamics of Influenza A
Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.
2012-01-01
Unprecedented global surveillance of viruses will result in massive sequence data sets that require new statistical methods. These data sets press the limits of Bayesian phylogenetics as the high-dimensional parameters that comprise a phylogenetic tree increase the already sizable computational burden of these techniques. This burden often results in partitioning the data set, for example, by gene, and inferring the evolutionary dynamics of each partition independently, a compromise that results in stratified analyses that depend only on data within a given partition. However, parameter estimates inferred from these stratified models are likely strongly correlated, considering they rely on data from a single data set. To overcome this shortfall, we exploit the existing Monte Carlo realizations from stratified Bayesian analyses to efficiently estimate a nonparametric hierarchical wavelet-based model and learn about the time-varying parameters of effective population size that reflect levels of genetic diversity across all partitions simultaneously. Our methods are applied to complete genome influenza A sequences that span 13 years. We find that broad peaks and trends, as opposed to seasonal spikes, in the effective population size history distinguish individual segments from the complete genome. We also address hypotheses regarding intersegment dynamics within a formal statistical framework that accounts for correlation between segment-specific parameters. PMID:22160768
Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model
Ellefsen, Karl J.; Smith, David
2016-01-01
Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
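A simplified sketch of the recursive two-cluster partitioning is shown below, using scikit-learn's GaussianMixture as a stand-in for the Bayesian finite mixture model fitted with Hamiltonian Monte Carlo in the study; the geological consistency checks are not represented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def hierarchical_bipartition(X, depth, min_size=20):
    """Recursively split the samples into two clusters with a two-component
    mixture model; returns index arrays (into X) for the leaf clusters."""
    idx = np.arange(len(X))
    if depth == 0 or len(X) < min_size:
        return [idx]
    labels = GaussianMixture(n_components=2, n_init=5).fit_predict(X)
    leaves = []
    for k in (0, 1):
        sub = idx[labels == k]
        for g in hierarchical_bipartition(X[sub], depth - 1, min_size):
            leaves.append(sub[g])
    return leaves
```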
Krajewski, C; Fain, M G; Buckley, L; King, D G
1999-11-01
Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogeneous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that heterogeneity includes variation in base composition and transition bias as well as substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogeneous data sets. Copyright 1999 Academic Press.
PAQ: Partition Analysis of Quasispecies.
Baccam, P; Thompson, R J; Fedrigo, O; Carpenter, S; Cornette, J L
2001-01-01
The complexities of genetic data may not be accurately described by any single analytical tool. Phylogenetic analysis is often used to study the genetic relationship among different sequences. Evolutionary models and assumptions are invoked to reconstruct trees that describe the phylogenetic relationship among sequences. Genetic databases are rapidly accumulating large amounts of sequences. Newly acquired sequences, which have not yet been characterized, may require preliminary genetic exploration in order to build models describing the evolutionary relationship among sequences. There are clustering techniques that rely less on models of evolution, and thus may provide nice exploratory tools for identifying genetic similarities. Some of the more commonly used clustering methods perform better when data can be grouped into mutually exclusive groups. Genetic data from viral quasispecies, which consist of closely related variants that differ by small changes, however, may best be partitioned by overlapping groups. We have developed an intuitive exploratory program, Partition Analysis of Quasispecies (PAQ), which utilizes a non-hierarchical technique to partition sequences that are genetically similar. PAQ was used to analyze a data set of human immunodeficiency virus type 1 (HIV-1) envelope sequences isolated from different regions of the brain and another data set consisting of the equine infectious anemia virus (EIAV) regulatory gene rev. Analysis of the HIV-1 data set by PAQ was consistent with phylogenetic analysis of the same data, and the EIAV rev variants were partitioned into two overlapping groups. PAQ provides an additional tool which can be used to glean information from genetic data and can be used in conjunction with other tools to study genetic similarities and genetic evolution of viral quasispecies.
Padró, Juan M; Ponzinibbio, Agustín; Mesa, Leidy B Agudelo; Reta, Mario
2011-03-01
The partition coefficients, P(IL/w), for different probe molecules as well as for compounds of biological interest between the room-temperature ionic liquids (RTILs) 1-butyl-3-methylimidazolium hexafluorophosphate, [BMIM][PF(6)], 1-hexyl-3-methylimidazolium hexafluorophosphate, [HMIM][PF(6)], 1-octyl-3-methylimidazolium tetrafluoroborate, [OMIM][BF(4)] and water were accurately measured. [BMIM][PF(6)] and [OMIM][BF(4)] were synthesized by adapting a procedure from the literature to a simpler, single-vessel and faster methodology, with a much lesser consumption of organic solvent. We employed the solvation-parameter model to elucidate the general chemical interactions involved in RTIL/water partitioning. With this purpose, we have selected different solute descriptor parameters that measure polarity, polarizability, hydrogen-bond-donor and hydrogen-bond-acceptor interactions, and cavity formation for a set of specifically selected probe molecules (the training set). The obtained multiparametric equations were used to predict the partition coefficients for compounds not present in the training set (the test set), most being of biological interest. Partial solubility of the ionic liquid in water (and water into the ionic liquid) was taken into account to explain the obtained results. This fact has not been deeply considered up to date. Solute descriptors were obtained from the literature, when available, or else calculated through commercial software. An excellent agreement between calculated and experimental log P(IL/w) values was obtained, which demonstrated that the resulting multiparametric equations are robust and allow predicting partitioning for any organic molecule in the biphasic systems studied.
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.
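A schematic (not the patented implementation) of the mode-partitioned surveillance structure is sketched below: one submodel is trained per operating mode, and the submodel matching the determined mode supplies the estimated signal values used for status determination; scikit-learn's LinearRegression stands in for the process submodels.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class ModePartitionedSurveillance:
    """Illustrative stand-in: one regression submodel per operating mode."""
    def __init__(self):
        self.submodels = {}

    def fit(self, X, y, modes):
        # X, y, modes are numpy arrays; modes holds the operating-mode label of each row
        for mode in np.unique(modes):
            sel = modes == mode
            self.submodels[mode] = LinearRegression().fit(X[sel], y[sel])

    def estimate(self, X_obs, mode):
        # select the submodel for the determined operating mode
        return self.submodels[mode].predict(X_obs)

    def status(self, X_obs, y_obs, mode, threshold=3.0):
        residual = y_obs - self.estimate(X_obs, mode)
        return "alert" if np.max(np.abs(residual)) > threshold else "normal"
```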
NASA Technical Reports Server (NTRS)
Medard, E.; Martin, A. M.; Righter, K.; Malouta, A.; Lee, C.-T.
2017-01-01
Most siderophile element concentrations in planetary mantles can be explained by metal/silicate equilibration at high temperature and pressure during core formation. Highly siderophile elements (HSE = Au, Re, and the Pt-group elements), however, usually have higher mantle abundances than predicted by partitioning models, suggesting that their concentrations have been set by late accretion of material that did not equilibrate with the core. The partitioning of HSE at the low oxygen fugacities relevant for core formation is however poorly constrained due to the lack of sufficient experimental constraints to describe the variations of partitioning with key variables like temperature, pressure, and oxygen fugacity. To better understand the relative roles of metal/silicate partitioning and late accretion, we performed a self-consistent set of experiments that parameterizes the influence of oxygen fugacity, temperature and melt composition on the partitioning of Pt, one of the HSE, between metal and silicate melts. The major outcome of this project is the fact that Pt dissolves in an anionic form in silicate melts, causing a dependence of partitioning on oxygen fugacity opposite to that reported in previous studies.
Platinum Partitioning at Low Oxygen Fugacity: Implications for Core Formation Processes
NASA Technical Reports Server (NTRS)
Medard, E.; Martin, A. M.; Righter, K.; Lanziroti, A.; Newville, M.
2016-01-01
Highly siderophile elements (HSE = Au, Re, and the Pt-group elements) are tracers of silicate/metal interactions during planetary processes. Since most core-formation models involve some state of equilibrium between liquid silicate and liquid metal, understanding the partitioning of highly siderophile elements (HSE) between silicate and metallic melts is a key issue for models of core/mantle equilibria and for core formation scenarios. However, partitioning models for HSE are still inaccurate due to the lack of sufficient experimental constraints to describe the variations of partitioning with key variables like temperature, pressure, and oxygen fugacity. In this abstract, we describe a self-consistent set of experiments aimed at determining the valence of platinum, one of the HSE, in silicate melts. This is key information required to parameterize the evolution of platinum partitioning with oxygen fugacity.
Partitioning of polar and non-polar neutral organic chemicals into human and cow milk.
Geisler, Anett; Endo, Satoshi; Goss, Kai-Uwe
2011-10-01
The aim of this work was to develop a predictive model for milk/water partition coefficients of neutral organic compounds. Batch experiments were performed for 119 diverse organic chemicals in human milk and raw and processed cow milk at 37°C. No differences (<0.3 log units) in the partition coefficients of these types of milk were observed. The polyparameter linear free energy relationship model fit the calibration data well (SD=0.22 log units). An experimental validation data set including hormones and hormone active compounds was predicted satisfactorily by the model. An alternative modelling approach based on log K(ow) revealed a poorer performance. The model presented here provides a significant improvement in predicting enrichment of potentially hazardous chemicals in milk. In combination with physiologically based pharmacokinetic modelling this improvement in the estimation of milk/water partitioning coefficients may allow a better risk assessment for a wide range of neutral organic chemicals. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sharpe, Jennifer B.; Soong, David T.
2015-01-01
This study used the National Land Cover Dataset (NLCD) and developed an automated process for determining the area of the three land cover types, thereby allowing faster updating of future models, and for evaluating land cover changes by use of historical NLCD datasets. The study also carried out a raingage partitioning analysis so that the segmentation of land cover and rainfall in each modeled unit is directly applicable to the HSPF modeling. Historical and existing impervious, grass, and forest land acreages partitioned by percentages covered by two sets of raingages for the Lake Michigan diversion SCAs, gaged basins, and ungaged basins are presented.
Architecture Aware Partitioning Algorithms
2006-01-19
follows: Given a graph G = (V, E), where V is the set of vertices, n = |V| is the number of vertices, and E is the set of edges in the graph, partition the... communication link l(pi, pj) is associated with a graph edge weight e*(pi, pj) that represents the communication cost per unit of communication between... one that is local for each one. For our model we assume that communication in either direction across a given link is the same, therefore e*(pi, pj
Daniel, J B; Friggens, N C; van Laar, H; Ingvartsen, K L; Sauvant, D
2018-06-01
The control of nutrient partitioning is complex and affected by many factors, among them physiological state and production potential. Therefore, the current model aims to provide for dairy cows a dynamic framework to predict a consistent set of reference performance patterns (milk component yields, body composition change, dry-matter intake) sensitive to physiological status across a range of milk production potentials (within and between breeds). Flows and partition of net energy toward maintenance, growth, gestation, body reserves and milk components are described in the model. The structure of the model is characterized by two sub-models, a regulating sub-model of homeorhetic control which sets dynamic partitioning rules along the lactation, and an operating sub-model that translates this into animal performance. The regulating sub-model describes lactation as the result of three driving forces: (1) use of previously acquired resources through mobilization, (2) acquisition of new resources with a priority of partition towards milk and (3) subsequent use of resources towards body reserves gain. The dynamics of these three driving forces were adjusted separately for fat (milk and body), protein (milk and body) and lactose (milk). Milk yield is predicted from lactose and protein yields with an empirical equation developed from literature data. The model predicts desired dry-matter intake as an outcome of net energy requirements for a given dietary net energy content. The parameters controlling milk component yields and body composition changes were calibrated using two data sets in which the diet was the same for all animals. Weekly data from Holstein dairy cows was used to calibrate the model within-breed across milk production potentials. A second data set was used to evaluate the model and to calibrate it for breed differences (Holstein, Danish Red and Jersey) on the mobilization/reconstitution of body composition and on the yield of individual milk components. These calibrations showed that the model framework was able to adequately simulate milk yield, milk component yields, body composition changes and dry-matter intake throughout lactation for primiparous and multiparous cows differing in their production level.
NASA Astrophysics Data System (ADS)
Hopcroft, Peter O.; Gallagher, Kerry; Pain, Christopher C.
2009-08-01
Collections of suitably chosen borehole profiles can be used to infer large-scale trends in ground-surface temperature (GST) histories for the past few hundred years. These reconstructions are based on a large database of carefully selected borehole temperature measurements from around the globe. Since non-climatic thermal influences are difficult to identify, representative temperature histories are derived by averaging individual reconstructions to minimize the influence of these perturbing factors. This may lead to three potentially important drawbacks: the net signal of non-climatic factors may not be zero, meaning that the average does not reflect the best estimate of past climate; the averaging over large areas restricts the useful amount of more local climate change information available; and the inversion methods used to reconstruct the past temperatures at each site must be mathematically identical and are therefore not necessarily best suited to all data sets. In this work, we avoid these issues by using a Bayesian partition model (BPM), which is computed using a trans-dimensional form of a Markov chain Monte Carlo algorithm. This then allows the number and spatial distribution of different GST histories to be inferred from a given set of borehole data by partitioning the geographical area into discrete partitions. Profiles that are heavily influenced by non-climatic factors will be partitioned separately. Conversely, profiles with climatic information, which is consistent with neighbouring profiles, will then be inferred to lie in the same partition. The geographical extent of these partitions then leads to information on the regional extent of the climatic signal. In this study, three case studies are described using synthetic and real data. The first demonstrates that the Bayesian partition model method is able to correctly partition a suite of synthetic profiles according to the inferred GST history. In the second, more realistic case, a series of temperature profiles are calculated using surface air temperatures of a global climate model simulation. In the final case, 23 real boreholes from the United Kingdom, previously used for climatic reconstructions, are examined and the results compared with a local instrumental temperature series and the previous estimate derived from the same borehole data. The results indicate that the majority (17) of the 23 boreholes are unsuitable for climatic reconstruction purposes, at least without including other thermal processes in the forward model.
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2003-01-01
A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems
2005-05-01
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems, by Gary W. Kinney Jr., B.G.S., M.S. Dissertation presented to The University of Texas at Austin, May 2005.
A discrete scattering series representation for lattice embedded models of chain cyclization
NASA Astrophysics Data System (ADS)
Fraser, Simon J.; Winnik, Mitchell A.
1980-01-01
In this paper we develop a lattice based model of chain cyclization in the presence of a set of occupied sites V in the lattice. We show that within the approximation of a Markovian chain propagator the effect of V on the partition function for the system can be written as a time-ordered exponential series in which V behaves like a scattering potential and chainlength is the timelike parameter. The discrete and finite nature of this model allows us to obtain rigorous upper and lower bounds to the series limit. We adapt these formulas to calculation of the partition functions and cyclization probabilities of terminally and globally cyclizing chains. Two classes of cyclization are considered: in the first model the target set H may be visited repeatedly (the Markovian model); in the second case vertices in H may be visited at most once (the non-Markovian or taboo model). This formulation depends on two fundamental combinatorial structures, namely the inclusion-exclusion principle and the set of subsets of a set. We have tried to interpret these abstract structures with physical analogies throughout the paper.
Partitioning error components for accuracy-assessment of near-neighbor methods of imputation
Albert R. Stage; Nicholas L. Crookston
2007-01-01
Imputation is applied for two quite different purposes: to supply missing data to complete a data set for subsequent modeling analyses or to estimate subpopulation totals. Error properties of the imputed values have different effects in these two contexts. We partition errors of imputation derived from similar observation units as arising from three sources:...
Two-lattice models of trace element behavior: A response
NASA Astrophysics Data System (ADS)
Ellison, Adam J. G.; Hess, Paul C.
1990-08-01
Two-lattice melt components of Bottinga and Weill (1972), Nielsen and Drake (1979), and Nielsen (1985) are applied to major and trace element partitioning between coexisting immiscible liquids studied by Ryerson and Hess (1978) and Watson (1976). The results show that (1) the set of components most successful in one system is not necessarily portable to another system; (2) solution non-ideality within a sublattice severely limits applicability of two-lattice models; (3) rigorous application of two-lattice melt components may yield effective partition coefficients for major element components with no physical interpretation; and (4) the distinction between network-forming and network-modifying components in the sense of the two-lattice models is not clear cut.
The "p"-Median Model as a Tool for Clustering Psychological Data
ERIC Educational Resources Information Center
Kohn, Hans-Friedrich; Steinley, Douglas; Brusco, Michael J.
2010-01-01
The "p"-median clustering model represents a combinatorial approach to partition data sets into disjoint, nonhierarchical groups. Object classes are constructed around "exemplars", that is, manifest objects in the data set, with the remaining instances assigned to their closest cluster centers. Effective, state-of-the-art implementations of…
NASA Astrophysics Data System (ADS)
Abatzoglou, John T.; Ficklin, Darren L.
2017-09-01
The geographic variability in the partitioning of precipitation into surface runoff (Q) and evapotranspiration (ET) is fundamental to understanding regional water availability. The Budyko equation suggests this partitioning is strictly a function of aridity, yet observed deviations from this relationship for individual watersheds impede using the framework to model surface water balance in ungauged catchments and under future climate and land use scenarios. A set of climatic, physiographic, and vegetation metrics were used to model the spatial variability in the partitioning of precipitation for 211 watersheds across the contiguous United States (CONUS) within Budyko's framework through the free parameter ω. A generalized additive model found that four widely available variables, precipitation seasonality, the ratio of soil water holding capacity to precipitation, topographic slope, and the fraction of precipitation falling as snow, explained 81.2% of the variability in ω. The ω model applied to the Budyko equation explained 97% of the spatial variability in long-term Q for an independent set of watersheds. The ω model was also applied to estimate the long-term water balance across the CONUS for both contemporary and mid-21st century conditions. The modeled partitioning of observed precipitation to Q and ET compared favorably across the CONUS with estimates from more sophisticated land-surface modeling efforts. For mid-21st century conditions, the model simulated an increase in the fraction of precipitation used by ET across the CONUS with declines in Q for much of the eastern CONUS and mountainous watersheds across the western United States.
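For reference, Fu's single-parameter form of the Budyko curve used in this framework is sketched below; the generalized additive model that predicts ω from precipitation seasonality, soil water holding capacity, slope, and snow fraction is not reproduced here, and the example inputs are illustrative.

```python
def budyko_fu(aridity_index, omega):
    """Fu's formulation of the Budyko curve:
    ET/P = 1 + PET/P - (1 + (PET/P)**omega)**(1/omega),
    where aridity_index = PET/P. Returns (ET/P, Q/P)."""
    et_over_p = 1.0 + aridity_index - (1.0 + aridity_index ** omega) ** (1.0 / omega)
    return et_over_p, 1.0 - et_over_p

# Example: a humid watershed (PET/P = 0.7) with an illustrative omega of 2.6
print(budyko_fu(0.7, 2.6))
```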
Gaskins, J T; Daniels, M J
2016-01-02
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
NASA Astrophysics Data System (ADS)
McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.
2017-03-01
We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibrated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.
NASA Astrophysics Data System (ADS)
Kassem, M.; Soize, C.; Gagliardini, L.
2011-02-01
In a recent work [Journal of Sound and Vibration 323 (2009) 849-863] the authors presented an energy-density field approach for the vibroacoustic analysis of complex structures in the low and medium frequency ranges. In that approach, a local vibroacoustic energy model as well as a simplification of this model were constructed. In this paper, firstly, an extension of the previous theory is performed in order to include the case of general input forces; secondly, a structural partitioning methodology is presented along with a set of tools used for the construction of a partitioning. Finally, an application is presented for an automotive vehicle.
Partitioning and packing mathematical simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Arpasi, D. J.; Milner, E. J.
1986-01-01
The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
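The packing step described above can be illustrated with a simple first-fit-decreasing heuristic that groups equations, each with an estimated per-step computation cost, onto processors subject to a capacity limit; this is an illustrative stand-in, not the algorithm in the report, and the costs shown are made up.

```python
def pack_equations(costs, capacity):
    """First-fit-decreasing packing of equation workloads onto processors.
    costs: dict mapping equation name -> estimated per-step computation cost."""
    processors, loads = [], []
    for name, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        for i in range(len(processors)):
            if loads[i] + cost <= capacity:
                processors[i].append(name)
                loads[i] += cost
                break
        else:
            processors.append([name])
            loads.append(cost)
    return processors

# Illustrative equation costs (arbitrary time units) for a small engine model
print(pack_equations({"eq1": 5, "eq2": 3, "eq3": 4, "eq4": 2, "eq5": 2}, capacity=7))
```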
Various forms of indexing HDMR for modelling multivariate classification problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksu, Çağrı; Tunga, M. Alper
2014-12-10
The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems having real world data. Mostly, we do not know all possible class values in the domain of the given problem, that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the given problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real life classification problems. To test these different forms, several well-known multivariate classification problems given in the UCI Machine Learning Repository were used and it was observed that the accuracy results lie between 80% and 95%, which are very satisfactory.
Reppas-Chrysovitsinos, Efstathios; Sobek, Anna; MacLeod, Matthew
2016-06-15
Polymeric materials flowing through the technosphere are repositories of organic chemicals throughout their life cycle. Equilibrium partition ratios of organic chemicals between these materials and air (KMA) or water (KMW) are required for models of fate and transport, high-throughput exposure assessment and passive sampling. KMA and KMW have been measured for a growing number of chemical/material combinations, but significant data gaps still exist. We assembled a database of 363 KMA and 910 KMW measurements for 446 individual compounds and nearly 40 individual polymers and biopolymers, collected from 29 studies. We used the EPI Suite and ABSOLV software packages to estimate physicochemical properties of the compounds and we employed an empirical correlation based on Trouton's rule to adjust the measured KMA and KMW values to a standard reference temperature of 298 K. Then, we used a thermodynamic triangle with Henry's law constant to calculate a complete set of 1273 KMA and KMW values. Using simple linear regression, we developed a suite of single parameter linear free energy relationship (spLFER) models to estimate KMA from the EPI Suite-estimated octanol-air partition ratio (KOA) and KMW from the EPI Suite-estimated octanol-water (KOW) partition ratio. Similarly, using multiple linear regression, we developed a set of polyparameter linear free energy relationship (ppLFER) models to estimate KMA and KMW from ABSOLV-estimated Abraham solvation parameters. We explored the two LFER approaches to investigate (1) their performance in estimating partition ratios, and (2) uncertainties associated with treating all different polymers as a single "bulk" polymeric material compartment. The models we have developed are suitable for screening assessments of the tendency for organic chemicals to be emitted from materials, and for use in multimedia models of the fate of organic chemicals in the indoor environment. In screening applications we recommend that KMA and KMW be modeled as 0.06 ×KOA and 0.06 ×KOW respectively, with an uncertainty range of a factor of 15.
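As a minimal sketch of the screening recommendation quoted above, a material-air (or material-water) partition ratio can be estimated directly from the corresponding octanol-based ratio with a factor-of-15 uncertainty band; the numerical input below is a placeholder, not a value from the study.

```python
def screening_estimate(log_k_octanol, factor=0.06, uncertainty=15.0):
    """Screening estimate of KMA (from log KOA) or KMW (from log KOW) as
    0.06 x K_octanol, with a factor-of-15 uncertainty range."""
    k = factor * 10 ** log_k_octanol
    return k, (k / uncertainty, k * uncertainty)

# Hypothetical chemical with log KOA = 9.2
kma, (low, high) = screening_estimate(9.2)
print(f"KMA ~ {kma:.2e} (plausible range {low:.2e} to {high:.2e})")
```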
Votano, Joseph R; Parham, Marc; Hall, L Mark; Hall, Lowell H; Kier, Lemont B; Oloff, Scott; Tropsha, Alexander
2006-11-30
Four modeling techniques, using topological descriptors to represent molecular structure, were employed to produce models of human serum protein binding (% bound) on a data set of 1008 experimental values, carefully screened from publicly available sources. To our knowledge, this data is the largest set on human serum protein binding reported for QSAR modeling. The data was partitioned into a training set of 808 compounds and an external validation test set of 200 compounds. Partitioning was accomplished by clustering the compounds in a structure descriptor space so that random sampling of 20% of the whole data set produced an external test set that is a good representative of the training set with respect to both structure and protein binding values. The four modeling techniques include multiple linear regression (MLR), artificial neural networks (ANN), k-nearest neighbors (kNN), and support vector machines (SVM). With the exception of the MLR model, the ANN, kNN, and SVM QSARs were ensemble models. Training set correlation coefficients and mean absolute error ranged from r2=0.90 and MAE=7.6 for ANN to r2=0.61 and MAE=16.2 for MLR. Prediction results from the validation set yielded correlation coefficients and mean absolute errors which ranged from r2=0.70 and MAE=14.1 for ANN to a low of r2=0.59 and MAE=18.3 for the SVM model. Structure descriptors that contribute significantly to the models are discussed and compared with those found in other published models. For the ANN model, structure descriptor trends with respect to their effects on predicted protein binding can assist the chemist in structure modification during the drug design process.
NASA Astrophysics Data System (ADS)
Lowe, Douglas; Topping, David; McFiggans, Gordon
2017-04-01
Gas to particle partitioning of atmospheric compounds occurs through disequilibrium mass transfer rather than through instantaneous equilibrium. However, it is common to treat only the inorganic compounds as partitioning dynamically whilst organic compounds, represented by the Volatility Basis Set (VBS), are partitioned instantaneously. In this study we implement a more realistic dynamic partitioning of organic compounds in a regional framework and assess the impact on aerosol mass and microphysics. It is also common to assume condensed phase water is only associated with inorganic components. We thus also assess sensitivity to assuming all organics are hygroscopic according to their prescribed molecular weight. For this study we use WRF-Chem v3.4.1, focusing on anthropogenically dominated North-Western Europe. Gas-phase chemistry is represented using CBM-Z whilst aerosol dynamics are simulated using the 8-section MOSAIC scheme, including a 9-bin VBS treatment of organic aerosol. Results indicate that predicted mass loadings can vary significantly. Without gas phase ageing of higher volatility compounds, dynamic partitioning always results in lower mass loadings downwind of emission sources. The inclusion of condensed phase water in both partitioning models increases the predicted PM mass, resulting from a larger contribution from higher volatility organics, if present. If gas phase ageing of VBS compounds is allowed to occur in a dynamic model, this can often lead to higher predicted mass loadings, contrary to expected behaviour from a simple non-reactive gas phase box model. As descriptions of aerosol phase processes improve within regional models, the baseline descriptions of partitioning should retain the ability to treat dynamic partitioning of organic compounds. Using our simulations, we discuss whether derived sensitivities to aerosol processes in existing models may be inherently biased. This work was supported by the Natural Environment Research Council within the RONOCO (NE/F004656/1) and CCN-Vol (NE/L007827/1) projects.
On the star partition dimension of comb product of cycle and path
NASA Astrophysics Data System (ADS)
Alfarisi, Ridho; Darmaji
2017-08-01
Let G = (V, E) be a connected graph with vertex set V(G), edge set E(G) and S ⊆ V(G). Given an ordered partition Π = {S1, S2, S3, …, Sk} of the vertex set V of G, the representation of a vertex v ∈ V with respect to Π is the vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Si) represents the distance between the vertex v and the set Si, given by d(v, Si) = min{d(v, x)|x ∈ Si}. A partition Π of V(G) is a resolving partition if different vertices of G have distinct representations, i.e., for every pair of vertices u, v ∈ V(G), r(u|Π) ≠ r(v|Π). The minimum k for which V(G) admits a resolving k-partition is the partition dimension of G, denoted by pd(G). A resolving partition Π = {S1, S2, S3, …, Sk} is called a star resolving partition for G if each subgraph induced by Si, 1 ≤ i ≤ k, is a star. The minimum k for which there exists a star resolving partition of V(G) is the star partition dimension of G, denoted by spd(G). Finding the star partition dimension of G is classified as an NP-hard problem. In this paper, we determine the star partition dimension of the comb products of a cycle and a path, namely Cm⊳Pn and Pn⊳Cm, for n ≥ 2 and m ≥ 3.
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the accuracy of EWB of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdy, R.
A hierarchical model consisting of quantitative structure-activity relationships based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model is comprised of quantitative structure-activity relationships, QSARs based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizibility, atomic radical superdelocalizibility, the lowest unoccupied molecular orbital (LUMO) energy of hypothesized intermediate nitrenium ion of primary aromatic amines, difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals, and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thiocontaining chemicals. A QSAR for these chemicals was developed. This posttest set modified model correctly predicted 94% of the carcinogens in the test set. This model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the writing of this article. 12 refs., 3 tabs.
Harnessing the Bethe free energy†
Bapst, Victor
2016-01-01
ABSTRACT A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k‐SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [Krzkala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ωn for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178
NASA Astrophysics Data System (ADS)
Ogée, J.; Peylin, P.; Ciais, P.; Bariac, T.; Brunet, Y.; Berbigier, P.; Roche, C.; Richard, P.; Bardoux, G.; Bonnefond, J.-M.
2003-06-01
The current emphasis on global climate studies has led the scientific community to set up a number of sites for measuring the long-term biosphere-atmosphere net CO2 exchange (net ecosystem exchange, NEE). Partitioning this flux into its elementary components, net assimilation (FA), and respiration (FR), remains necessary in order to get a better understanding of biosphere functioning and design better surface exchange models. Noting that FR and FA have different isotopic signatures, we evaluate the potential of isotopic 13CO2 measurements in the air (combined with CO2 flux and concentration measurements) to partition NEE into FR and FA on a routine basis. The study is conducted at a temperate coniferous forest where intensive isotopic measurements in air, soil, and biomass were performed in summer 1997. The multilayer soil-vegetation-atmosphere transfer model MuSICA is adapted to compute 13CO2 flux and concentration profiles. Using MuSICA as a "perfect" simulator and taking advantage of the very dense spatiotemporal resolution of the isotopic data set (341 flasks over a 24-hour period) enable us to test each hypothesis and estimate the performance of the method. The partitioning works better in midafternoon when isotopic disequilibrium is strong. With only 15 flasks, i.e., two 13CO2 nighttime profiles (to estimate the isotopic signature of FR) and five daytime measurements (to perform the partitioning) we get mean daily estimates of FR and FA that agree with the model within 15-20%. However, knowledge of the mesophyll conductance seems crucial and may be a limitation to the method.
Huhn, Carolin; Pyell, Ute
2008-07-11
It is investigated whether those relationships derived within an optimization scheme developed previously to optimize separations in micellar electrokinetic chromatography can be used to model effective electrophoretic mobilities of analytes strongly differing in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.
Liang, Chao; Han, Shu-ying; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin
2014-11-01
A strategy to utilize neutral model compounds for lipophilicity measurement of ionizable basic compounds by reversed-phase high-performance liquid chromatography is proposed in this paper. The applicability of the novel protocol was justified by theoretical derivation. Meanwhile, the linear relationships between the logarithm of apparent n-octanol/water partition coefficients (logKow'') and the logarithm of retention factors corresponding to the 100% aqueous fraction of mobile phase (logkw) were established for a basic training set, a neutral training set and a mixed training set of these two. As proved in theory, the good linearity and external validation results indicated that the logKow''-logkw relationships obtained from a neutral model training set were always reliable regardless of mobile phase pH. Afterwards, the above relationships were adopted to determine the logKow of harmaline, a weakly dissociable alkaloid. As far as we know, this is the first report on experimental logKow data for harmaline (logKow = 2.28 ± 0.08). Introducing neutral compounds into a basic model training set or using neutral model compounds alone is recommended to measure the lipophilicity of weakly ionizable basic compounds, especially those with high hydrophobicity, for the advantages of more suitable model compound choices and convenient mobile phase pH control. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Finding and testing network communities by lumped Markov chains.
Piccardi, Carlo
2011-01-01
Identifying communities (or clusters), namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. Yet, there is a lack of formal criteria for defining communities and for testing their significance. We propose a sharp definition that is based on a quality threshold. By means of a lumped Markov chain model of a random walker, a quality measure called "persistence probability" is associated to a cluster, which is then defined as an "α-community" if such a probability is not smaller than α. Consistently, a partition composed of α-communities is an "α-partition." These definitions turn out to be very effective for finding and testing communities. If a set of candidate partitions is available, setting the desired α-level allows one to immediately select the α-partition with the finest decomposition. Simultaneously, the persistence probabilities quantify the quality of each single community. Given its ability in individually assessing each single cluster, this approach can also disclose single well-defined communities even in networks that overall do not possess a definite clusterized structure.
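To make the "persistence probability" concrete, the sketch below (an interpretation of the definition summarised in the abstract, not the authors' code) computes, for each cluster of a candidate partition, the probability that a stationary random walker currently inside the cluster is still inside it after one step; a cluster is then an α-community if this probability is at least α.

```python
import numpy as np

def persistence_probabilities(adjacency, partition):
    """Persistence probability of each cluster: the probability that a
    stationary random walker located in the cluster remains there after
    one step of the walk (a diagonal entry of the lumped Markov chain)."""
    A = np.asarray(adjacency, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)      # random-walk transition matrix
    pi = A.sum(axis=1) / A.sum()              # stationary distribution (undirected graph)
    probs = {}
    for label, nodes in partition.items():
        idx = np.array(nodes)
        stay = (pi[idx, None] * P[np.ix_(idx, idx)]).sum()
        probs[label] = stay / pi[idx].sum()
    return probs

# Two triangles joined by a single edge, with the natural 2-cluster partition.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(persistence_probabilities(A, {"left": [0, 1, 2], "right": [3, 4, 5]}))
```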
Hou, Tingjun; Xu, Xiaojie
2002-12-01
In this study, the relationships between the brain-blood concentration ratio of 96 structurally diverse compounds with a large number of structurally derived descriptors were investigated. The linear models were based on molecular descriptors that can be calculated for any compound simply from a knowledge of its molecular structure. The linear correlation coefficients of the models were optimized by genetic algorithms (GAs), and the descriptors used in the linear models were automatically selected from 27 structurally derived descriptors. The GA optimizations resulted in a group of linear models with three or four molecular descriptors with good statistical significance. The change of descriptor use as the evolution proceeds demonstrates that the octane/water partition coefficient and the partial negative solvent-accessible surface area multiplied by the negative charge are crucial to brain-blood barrier permeability. Moreover, we found that the predictions using multiple QSPR models from GA optimization gave quite good results in spite of the diversity of structures, which was better than the predictions using the best single model. The predictions for the two external sets with 37 diverse compounds using multiple QSPR models indicate that the best linear models with four descriptors are sufficiently effective for predictive use. Considering the ease of computation of the descriptors, the linear models may be used as general utilities to screen the blood-brain barrier partitioning of drugs in a high-throughput fashion.
Toward prediction of alkane/water partition coefficients.
Toulmin, Anita; Wood, J Matthew; Kenny, Peter W
2008-07-10
Partition coefficients were measured for 47 compounds in the hexadecane/water (Phxd) and 1-octanol/water (Poct) systems. Some types of hydrogen bond acceptor presented by these compounds to the partitioning systems are not well represented in the literature of alkane/water partitioning. The difference, ΔlogP, between logPoct and logPhxd is a measure of the hydrogen bonding potential of a molecule and is identified as a target for predictive modeling. Minimized molecular electrostatic potential (Vmin) was shown to be an effective predictor of the contribution of hydrogen bond acceptors to ΔlogP. Carbonyl oxygen atoms were found to be stronger hydrogen bond acceptors for their electrostatic potential than heteroaromatic nitrogen or oxygen bound to hypervalent sulfur or nitrogen. Values of Vmin calculated for hydrogen-bonded complexes were used to explore polarization effects. Predicted logPhxd and ΔlogP were shown to be more effective than logPoct for modeling brain penetration for a data set of 18 compounds.
The Development of the Speaker Independent ARM Continuous Speech Recognition System
1992-01-01
spoken airborne reconnaissance reports using a speech recognition system based on phoneme-level hidden Markov models (HMMs). Previous versions of the ARM...will involve automatic selection from multiple model sets, corresponding to different speaker types, and that the most rudimentary partition of a...The vocabulary size for the ARM task is 497 words. These words are related to the phoneme-level symbols corresponding to the models in the model set
Hardware Index to Set Partition Converter
2013-01-01
Design of a Dual Waveguide Normal Incidence Tube (DWNIT) Utilizing Energy and Modal Methods
NASA Technical Reports Server (NTRS)
Betts, Juan F.; Jones, Michael G. (Technical Monitor)
2002-01-01
This report investigates the partition design of the proposed Dual Waveguide Normal Incidence Tube (DWNIT). Some advantages provided by the DWNIT are (1) Assessment of coupling relationships between resonators in close proximity, (2) Evaluation of "smart liners", (3) Experimental validation for parallel element models, and (4) Investigation of effects of simulated angles of incidence of acoustic waves. Energy models of the two chambers were developed to determine the Sound Pressure Level (SPL) drop across the two chambers, through the use of an intensity transmission function for the chamber's partition. The models allowed the chamber's lengthwise end samples to vary. The initial partition design (2" high, 16" long, 0.25" thick) was predicted to provide at least 160 dB SPL drop across the partition with a compressive model, and at least 240 dB SPL drop with a bending model using a damping loss factor of 0.01. The end chamber sample transmission coefficients were set to 0.1. Since these results predicted more SPL drop than required, a plate thickness optimization algorithm was developed. The results of the algorithm routine indicated that a plate with the same height and length, but with a thickness of 0.1" and 0.05 structural damping loss, would provide adequate SPL isolation between the chambers.
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
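For reference, the criterion discussed above, which heuristics such as k-means try to minimise, is simply the sum over clusters of the squared distances of the objects from their cluster centroid; a minimal sketch with hypothetical data:

```python
import numpy as np

def wcss(data, labels):
    """Within-cluster sum of squared deviations from cluster centroids."""
    data = np.asarray(data, dtype=float)
    labels = np.asarray(labels)
    total = 0.0
    for k in np.unique(labels):
        cluster = data[labels == k]
        centroid = cluster.mean(axis=0)
        total += ((cluster - centroid) ** 2).sum()
    return total

# Hypothetical 2-D data split into two clusters.
X = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8], [5.0, 5.2], [4.8, 5.1]]
print(wcss(X, [0, 0, 0, 1, 1]))
```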
Evolving bipartite authentication graph partitions
Pope, Aaron Scott; Tauritz, Daniel Remy; Kent, Alexander D.
2017-01-16
As large scale enterprise computer networks become more ubiquitous, finding the appropriate balance between user convenience and user access control is an increasingly challenging proposition. Suboptimal partitioning of users’ access and available services contributes to the vulnerability of enterprise networks. Previous edge-cut partitioning methods unduly restrict users’ access to network resources. This paper introduces a novel method of network partitioning superior to the current state-of-the-art which minimizes user impact by providing alternate avenues for access that reduce vulnerability. Networks are modeled as bipartite authentication access graphs and a multi-objective evolutionary algorithm is used to simultaneously minimize the size of large connected components while minimizing overall restrictions on network users. Lastly, results are presented on a real world data set that demonstrate the effectiveness of the introduced method compared to previous naive methods.
On the partition dimension of comb product of path and complete graph
NASA Astrophysics Data System (ADS)
Darmaji, Alfarisi, Ridho
2017-08-01
Let G(V, E) be a connected graph with vertex set V(G), edge set E(G) and S ⊆ V(G). Given an ordered partition Π = {S1, S2, S3, …, Sk} of the vertex set V of G, the representation of a vertex v ∈ V with respect to Π is the vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Si) represents the distance between the vertex v and the set Si, given by d(v, Si) = min{d(v, x)|x ∈ Si}. A partition Π of V(G) is a resolving partition if different vertices of G have distinct representations, i.e., for every pair of vertices u, v ∈ V(G), r(u|Π) ≠ r(v|Π). The minimum k for which V(G) admits a resolving k-partition is the partition dimension of G, denoted by pd(G). Finding the partition dimension of G is classified as an NP-hard problem. In this paper, we determine the partition dimension of the comb product of a path and a complete graph. The results show that for the comb product of the complete graph Km and the path Pn, pd(Km⊳Pn) = m for m ≥ 3 and n ≥ 2, and pd(Pn⊳Km) = m for m ≥ 3, n ≥ 2 and m ≥ n.
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
NASA Astrophysics Data System (ADS)
Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud
2017-08-01
Fuzzy time series forecasting models have been proposed since 1993 to cater for data given as linguistic values. Many improvements and modifications have been made to the model, such as enhancements to the interval length and the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms as discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study, namely first-order and second-order fuzzy relations. The proposed model can produce forecasted values under different degrees of confidence.
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
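A generic sketch of the Monte Carlo approach described above (the growth model, error magnitudes and parameter values below are hypothetical, not those of the study): each replication perturbs the predictor with measurement error, draws parameters from their estimated sampling distribution, and adds residual variability, so the spread of the predictions reflects all three uncertainty sources.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_growth(diameter_obs, n_rep=10_000):
    """Propagate three uncertainty sources through a toy diameter-growth model
    of the form growth = b0 + b1 * diameter + residual (illustrative only)."""
    b_mean = np.array([0.40, 0.015])                 # hypothetical parameter estimates
    b_cov = np.array([[1e-4, 0.0], [0.0, 1e-6]])     # hypothetical parameter covariance
    growth = np.empty(n_rep)
    for r in range(n_rep):
        d = diameter_obs + rng.normal(0.0, 0.25)           # measurement error on predictor
        b0, b1 = rng.multivariate_normal(b_mean, b_cov)    # parameter uncertainty
        growth[r] = b0 + b1 * d + rng.normal(0.0, 0.10)    # residual variability
    return growth.mean(), growth.std()

print(simulate_growth(diameter_obs=25.0))
```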
Estimating Grass-Soil Bioconcentration of Munitions Compounds from Molecular Structure.
Torralba Sanchez, Tifany L; Liang, Yuzhen; Di Toro, Dominic M
2017-10-03
A partitioning-based model is presented to estimate the bioconcentration of five munitions compounds and two munition-like compounds in grasses. The model uses polyparameter linear free energy relationships (pp-LFERs) to estimate the partition coefficients between soil organic carbon and interstitial water and between interstitial water and the plant cuticle, a lipid-like plant component. Inputs for the pp-LFERs are a set of numerical descriptors computed from molecular structure only that characterize the molecular properties that determine the interaction with soil organic carbon, interstitial water, and plant cuticle. The model is validated by predicting concentrations measured in the whole plant during independent uptake experiments with a root-mean-square error (log predicted plant concentration-log observed plant concentration) of 0.429. This highlights the dominant role of partitioning between the exposure medium and the plant cuticle in the bioconcentration of these compounds. The pp-LFERs can be used to assess the environmental risk of munitions compounds and munition-like compounds using only their molecular structure as input.
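The pp-LFERs referred to above generally take the Abraham solvation-equation form; the system coefficients and solute descriptors below are placeholders showing how a partition coefficient is assembled from them, not values from the study.

```python
def pplfer_log_k(coeffs, descriptors):
    """Polyparameter LFER of the Abraham form:
    log K = c + e*E + s*S + a*A + b*B + v*V."""
    c, e, s, a, b, v = coeffs
    E, S, A, B, V = descriptors
    return c + e * E + s * S + a * A + b * B + v * V

# Hypothetical system coefficients and solute descriptors.
system = (0.10, 0.50, -1.20, -3.40, -4.80, 3.90)
solute = (1.10, 1.60, 0.20, 0.55, 1.40)
print(f"log K = {pplfer_log_k(system, solute):.2f}")
```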
NASA Astrophysics Data System (ADS)
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno
2018-03-01
This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
Improving RNA nearest neighbor parameters for helices by going beyond the two-state model.
Spasic, Aleksandar; Berger, Kyle D; Chen, Jonathan L; Seetin, Matthew G; Turner, Douglas H; Mathews, David H
2018-06-01
RNA folding free energy change nearest neighbor parameters are widely used to predict folding stabilities of secondary structures. They were determined by linear regression to datasets of optical melting experiments on small model systems. Traditionally, the optical melting experiments are analyzed assuming a two-state model, i.e. a structure is either complete or denatured. Experimental evidence, however, shows that structures exist in an ensemble of conformations. Partition functions calculated with existing nearest neighbor parameters predict that secondary structures can be partially denatured, which also directly conflicts with the two-state model. Here, a new approach for determining RNA nearest neighbor parameters is presented. Available optical melting data for 34 Watson-Crick helices were fit directly to a partition function model that allows an ensemble of conformations. Fitting parameters were the enthalpy and entropy changes for helix initiation, terminal AU pairs, stacks of Watson-Crick pairs and disordered internal loops. The resulting set of nearest neighbor parameters shows a 38.5% improvement in the sum of residuals in fitting the experimental melting curves compared to the current literature set.
Allan Variance Calculation for Nonuniformly Spaced Input Data
2015-01-01
τ (tau). First, the set of gyro values is partitioned into bins of duration τ. For example, if the sampling duration τ is 2 sec and there are 4,000...Variance Calculation For each value of τ, the conventional AV calculation partitions the gyro data sets into bins with approximately τ / Δt...value of Δt. Therefore, a new way must be found to partition the gyro data sets into bins. The basic concept behind the modified AV calculation is
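For context, the conventional calculation referenced in the fragments above bins a uniformly sampled gyro record into averages of duration τ and takes half the mean squared difference of successive bin averages; a minimal sketch for uniformly spaced data (the report's extension to nonuniformly spaced data is not reproduced here):

```python
import numpy as np

def allan_variance(rates, dt, tau):
    """Non-overlapping Allan variance for a uniformly sampled rate signal:
    partition the record into bins of duration tau, average each bin, and
    return 0.5 * mean of the squared successive-bin differences."""
    m = int(round(tau / dt))                  # samples per bin
    n_bins = len(rates) // m
    binned = np.asarray(rates[: n_bins * m], dtype=float).reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(binned) ** 2)

# Hypothetical gyro record: 4000 samples at dt = 0.5 s, evaluated at tau = 2 s.
rng = np.random.default_rng(0)
rates = rng.normal(0.0, 0.01, size=4000)
print(allan_variance(rates, dt=0.5, tau=2.0))
```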
Billon, Alexis; Foy, Cédric; Picaut, Judicaël; Valeau, Vincent; Sakout, Anas
2008-06-01
In this paper, a modification of the diffusion model for room acoustics is proposed to account for sound transmission between two rooms, a source room and an adjacent room, which are coupled through a partition wall. A system of two diffusion equations, one for each room, together with a set of two boundary conditions, one for the partition wall and one for the other walls of a room, is obtained and numerically solved. The modified diffusion model is validated by numerical comparisons with the statistical theory for several coupled-room configurations by varying the coupling area surface, the absorption coefficient of each room, and the volume of the adjacent room. An experimental comparison is also carried out for two coupled classrooms. The modified diffusion model results agree very well with both the statistical theory and the experimental data. The diffusion model can then be used as an alternative to the statistical theory, especially when the statistical theory is not applicable, that is, when the reverberant sound field is not diffuse. Moreover, the diffusion model allows the prediction of the spatial distribution of sound energy within each coupled room, while the statistical theory gives only one sound level for each room.
A set partitioning reformulation for the multiple-choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Voß, Stefan; Lalla-Ruiz, Eduardo
2016-05-01
The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions on its own, or to be partially included in algorithmic schemes, is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
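To fix ideas, the MMKP described above asks for exactly one object from each class so that total profit is maximised subject to multidimensional capacity limits; the brute-force sketch below (hypothetical data, and not the set-partitioning reformulation itself) simply enumerates the combinations and is only practical for tiny instances.

```python
from itertools import product

def solve_mmkp(classes, capacities):
    """Brute-force MMKP: classes is a list of lists of (profit, resource_vector);
    choose one item per class maximising profit subject to the capacity limits."""
    best_profit, best_choice = float("-inf"), None
    for choice in product(*[range(len(c)) for c in classes]):
        profit = sum(classes[k][i][0] for k, i in enumerate(choice))
        usage = [sum(classes[k][i][1][d] for k, i in enumerate(choice))
                 for d in range(len(capacities))]
        if all(u <= cap for u, cap in zip(usage, capacities)) and profit > best_profit:
            best_profit, best_choice = profit, choice
    return best_profit, best_choice

# Two classes, two resource dimensions, hypothetical data.
classes = [[(10, (3, 2)), (7, (1, 1))],
           [(6, (2, 2)), (9, (4, 1))]]
print(solve_mmkp(classes, capacities=(5, 3)))
```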
ERIC Educational Resources Information Center
Vera, J. Fernando; Macias, Rodrigo; Heiser, Willem J.
2009-01-01
In this paper, we propose a cluster-MDS model for two-way one-mode continuous rating dissimilarity data. The model aims at partitioning the objects into classes and simultaneously representing the cluster centers in a low-dimensional space. Under the normal distribution assumption, a latent class model is developed in terms of the set of…
NASA Astrophysics Data System (ADS)
Faribault, Alexandre; Tschirhart, Hugo; Muller, Nicolas
2016-05-01
In this work we present a determinant expression for the domain-wall boundary condition partition function of rational (XXX) Richardson-Gaudin models which, in addition to N − 1 spins 1/2, contains one arbitrarily large spin S. The proposed determinant representation is written in terms of a set of variables which, from previous work, are known to define eigenstates of the quantum integrable models belonging to this class as solutions to quadratic Bethe equations. Such a determinant can be useful numerically since systems of quadratic equations are much simpler to solve than the usual highly nonlinear Bethe equations. It can therefore offer significant gains in stability and computation speed.
Crystal-chemistry and partitioning of REE in whitlockite
NASA Technical Reports Server (NTRS)
Colson, R. O.; Jolliff, B. L.
1993-01-01
Partitioning of Rare Earth Elements (REE) in whitlockite is complicated by the fact that two or more charge-balancing substitutions are involved and by the fact that concentrations of REE in natural whitlockites are sufficiently high such that simple partition coefficients are not expected to be constant even if mixing in the system is completely ideal. The present study combines preexisting REE partitioning data in whitlockites with new experiments in the same compositional system and at the same temperature (approximately 1030 C) to place additional constraints on the complex variations of REE partition coefficients and to test theoretical models for how REE partitioning should vary with REE concentration and other compositional variables. With this data set, and by combining crystallographic and thermochemical constraints with a SAS simultaneous-equation best-fitting routine, it is possible to infer answers to the following questions: what is the speciation on the individual sites Ca(B), Mg, and Ca(IIA) (where the ideal structural formula is Ca(B)18 Mg2Ca(IIA)2P14O56); how are REE's charge-balanced in the crystal; and is mixing of REE in whitlockite ideal or non-ideal. This understanding is necessary in order to extrapolate derived partition coefficients to other compositional systems and provides a broadened understanding of the crystal chemistry of whitlockite.
Impact of Surface Roughness and Soil Texture on Mineral Dust Emission Fluxes Modeling
NASA Technical Reports Server (NTRS)
Menut, Laurent; Perez, Carlos; Haustein, Karsten; Bessagnet, Bertrand; Prigent, Catherine; Alfaro, Stephane
2013-01-01
Dust production models (DPM) used to estimate vertical fluxes of mineral dust aerosols over arid regions need accurate data on soil and surface properties. The Laboratoire Inter-Universitaire des Systemes Atmospheriques (LISA) data set was developed for Northern Africa, the Middle East, and East Asia. This regional data set was built through dedicated field campaigns and include, among others, the aerodynamic roughness length, the smooth roughness length of the erodible fraction of the surface, and the dry (undisturbed) soil size distribution. Recently, satellite-derived roughness length and high-resolution soil texture data sets at the global scale have emerged and provide the opportunity for the use of advanced schemes in global models. This paper analyzes the behavior of the ERS satellite-derived global roughness length and the State Soil Geographic data base-Food and Agriculture Organization of the United Nations (STATSGO-FAO) soil texture data set (based on wet techniques) using an advanced DPM in comparison to the LISA data set over Northern Africa and the Middle East. We explore the sensitivity of the drag partition scheme (a critical component of the DPM) and of the dust vertical fluxes (intensity and spatial patterns) to the roughness length and soil texture data sets. We also compare the use of the drag partition scheme to a widely used preferential source approach in global models. Idealized experiments with prescribed wind speeds show that the ERS and STATSGO-FAO data sets provide realistic spatial patterns of dust emission and friction velocity thresholds in the region. Finally, we evaluate a dust transport model for the period of March to July 2011 with observed aerosol optical depths from Aerosol Robotic Network sites. Results show that ERS and STATSGO-FAO provide realistic simulations in the region.
Hall, Matthew; Woolhouse, Mark; Rambaut, Andrew
2015-01-01
The use of genetic data to reconstruct the transmission tree of infectious disease epidemics and outbreaks has been the subject of an increasing number of studies, but previous approaches have usually either made assumptions that are not fully compatible with phylogenetic inference, or, where they have based inference on a phylogeny, have employed a procedure that requires this tree to be fixed. At the same time, the coalescent-based models of the pathogen population that are employed in the methods usually used for time-resolved phylogeny reconstruction are a considerable simplification of epidemic process, as they assume that pathogen lineages mix freely. Here, we contribute a new method that is simultaneously a phylogeny reconstruction method for isolates taken from an epidemic, and a procedure for transmission tree reconstruction. We observe that, if one or more samples is taken from each host in an epidemic or outbreak and these are used to build a phylogeny, a transmission tree is equivalent to a partition of the set of nodes of this phylogeny, such that each partition element is a set of nodes that is connected in the full tree and contains all the tips corresponding to samples taken from one and only one host. We then implement a Monte Carlo Markov Chain (MCMC) procedure for simultaneous sampling from the spaces of both trees, utilising a newly-designed set of phylogenetic tree proposals that also respect node partitions. We calculate the posterior probability of these partitioned trees based on a model that acknowledges the population structure of an epidemic by employing an individual-based disease transmission model and a coalescent process taking place within each host. We demonstrate our method, first using simulated data, and then with sequences taken from the H7N7 avian influenza outbreak that occurred in the Netherlands in 2003. We show that it is superior to established coalescent methods for reconstructing the topology and node heights of the phylogeny and performs well for transmission tree reconstruction when the phylogeny is well-resolved by the genetic data, but caution that this will often not be the case in practice and that existing genetic and epidemiological data should be used to configure such analyses whenever possible. This method is available for use by the research community as part of BEAST, one of the most widely-used packages for reconstruction of dated phylogenies. PMID:26717515
Certificate Revocation Using Fine Grained Certificate Space Partitioning
NASA Astrophysics Data System (ADS)
Goyal, Vipul
A new certificate revocation system is presented. The basic idea is to divide the certificate space into several partitions, the number of partitions being dependent on the PKI environment. Each partition contains the status of a set of certificates. A partition may either expire or be renewed at the end of a time slot. This is done efficiently using hash chains.
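A minimal sketch of the general hash-chain idea (an illustration, not necessarily the paper's exact construction): a chain anchor is committed to for each partition, and at the end of each time slot the next preimage is released as proof that the partition's certificate statuses are renewed; a verifier simply re-hashes the released value the appropriate number of times. The names and parameters below are hypothetical.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_chain(seed: bytes, slots: int):
    """Build a hash chain x_0 <- x_1 <- ... <- x_slots, where x_i = H(x_{i+1}).
    The anchor x_0 is published up front; x_i is revealed in time slot i."""
    chain = [seed]
    for _ in range(slots):
        chain.append(h(chain[-1]))
    chain.reverse()          # chain[0] becomes the anchor
    return chain

def verify(anchor: bytes, revealed: bytes, slot: int) -> bool:
    """Check that hashing the revealed value `slot` times reproduces the anchor."""
    v = revealed
    for _ in range(slot):
        v = h(v)
    return v == anchor

chain = build_chain(b"per-partition secret seed", slots=365)
print(verify(chain[0], chain[30], slot=30))   # renewal proof for time slot 30
```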
What are the structural features that drive partitioning of proteins in aqueous two-phase systems?
Wu, Zhonghua; Hu, Gang; Wang, Kui; Zaslavsky, Boris Yu; Kurgan, Lukasz; Uversky, Vladimir N
2017-01-01
Protein partitioning in aqueous two-phase systems (ATPSs) represents a convenient, inexpensive, and easy to scale-up protein separation technique. Since partition behavior of a protein dramatically depends on an ATPS composition, it would be highly beneficial to have reliable means for (even qualitative) prediction of partitioning of a target protein under different conditions. Our aim was to understand which structural features of proteins contribute to partitioning of a query protein in a given ATPS. We undertook a systematic empirical analysis of relations between 57 numerical structural descriptors derived from the corresponding amino acid sequences and crystal structures of 10 well-characterized proteins and the partition behavior of these proteins in 29 different ATPSs. This analysis revealed that just a few structural characteristics of proteins can accurately determine behavior of these proteins in a given ATPS. However, partition behavior of proteins in different ATPSs relies on different structural features. In other words, we could not find a unique set of protein structural features derived from their crystal structures that could be used for the description of the protein partition behavior of all proteins in all ATPSs analyzed in this study. We likely need to gain better insight into relationships between protein-solvent interactions and protein structure peculiarities, in particular given limitations of the used here crystal structures, to be able to construct a model that accurately predicts protein partition behavior across all ATPSs. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Foda, O.; Welsh, T. A.
2016-04-01
We study the Andrews-Gordon-Bressoud (AGB) generalisations of the Rogers-Ramanujan q-series identities in the context of cylindric partitions. We recall the definition of r-cylindric partitions, and provide a simple proof of Borodin’s product expression for their generating functions, that can be regarded as a limiting case of an unpublished proof by Krattenthaler. We also recall the relationships between the r-cylindric partition generating functions, the principal characters of ŝl_r algebras, the M^{r,r+d}_r minimal model characters of W_r algebras, and the r-string abaci generating functions, providing simple proofs for each. We then set r = 2, and use two-cylindric partitions to re-derive the AGB identities as follows. Firstly, we use Borodin’s product expression for the generating functions of the two-cylindric partitions with infinitely long parts, to obtain the product sides of the AGB identities, times a factor (q;q)_∞^{-1}, which is the generating function of ordinary partitions. Next, we obtain a bijection from the two-cylindric partitions, via two-string abaci, into decorated versions of Bressoud’s restricted lattice paths. Extending Bressoud’s method of transforming between restricted paths that obey different restrictions, we obtain sum expressions with manifestly non-negative coefficients for the generating functions of the two-cylindric partitions which contain a factor (q;q)_∞^{-1}. Equating the product and sum expressions of the same two-cylindric partitions, and canceling a factor of (q;q)_∞^{-1} on each side, we obtain the AGB identities.
NASA Technical Reports Server (NTRS)
Schwandt, C. S.; McKay, G. A.
1996-01-01
Determining the petrogenesis of eucrites (basaltic achondrites) and diogenites (orthopyroxenites) and the possible links between the meteorite types was initiated 30 years ago by Mason. Since then, most investigators have worked on this question. A few contrasting theories have emerged, with the important distinction being whether or not there is a direct genetic link between eucrites and diogenites. One theory suggests that diogenites are cumulates resulting from the fractional crystallization of a parent magma, with the eucrites crystallizing from the residual magma after separation from the diogenite cumulates. Another model proposes that diogenites are cumulates formed from partial melts derived from a source region depleted by the prior generation of eucrite melts. It has also been proposed that the diogenites may not be directly linked to the eucrites and that they are cumulates derived from melts that are more orthopyroxene normative than the eucrites. This last theory has recently received more analytical and experimental support. One of the difficulties with petrogenetic modeling is that it requires appropriate partition coefficients because they are dependent on temperature, pressure, and composition. For this reason, we set out to determine minor- and trace-element partition coefficients for diogenite-like orthopyroxene. We have accomplished this task and now have enstatite/melt partition coefficients for Al, Cr, Ti, La, Ce, Nd, Sm, Eu, Dy, Er, Yb, and Lu.
NASA Astrophysics Data System (ADS)
Wei, Xiao-Ran; Zhang, Yu-He; Geng, Guo-Hua
2016-09-01
In this paper, we examined how to print hollow objects without infill via fused deposition modeling, one of the most widely used 3D-printing technologies, by partitioning the objects into shell parts. More specifically, we linked the partition to the exact cover problem. Given an input watertight mesh shape S, we developed region growing schemes to derive, as candidate parts, a set of surfaces on the mesh whose inside surfaces are printable without support. We then employed Monte Carlo tree search over the candidate parts to obtain the optimal set cover. All possible candidate subsets forming an exact cover were then obtained from the optimal set cover, and a bounded tree search was used to find the optimal exact cover. We oriented each shell part to the optimal position to guarantee that the inside surface was printed without support, while the outside surface was printed with minimum support. Our solution can be applied to a variety of models, closed-hollowed or semi-closed, with or without holes, as evidenced by experiments and performance evaluation of our proposed algorithm.
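The exact cover step mentioned above, choosing from the candidate shell parts a subset whose surface regions cover the mesh exactly once, can be written as a small recursive search; the candidate sets below are hypothetical stand-ins for surface patches, and the sketch is not the authors' MCTS or bounded-tree procedure.

```python
def exact_covers(universe, candidates):
    """Yield every subset of candidate sets that partitions `universe`
    (each element covered exactly once)."""
    universe = frozenset(universe)
    cands = [(name, frozenset(s)) for name, s in candidates.items()]

    def search(remaining, chosen, start):
        if not remaining:
            yield list(chosen)
            return
        for i in range(start, len(cands)):
            name, s = cands[i]
            if s <= remaining:                  # candidate fits with no overlap
                yield from search(remaining - s, chosen + [name], i + 1)

    yield from search(universe, [], 0)

# Hypothetical surface patches 1..6 and candidate shell parts covering them.
parts = {"A": {1, 2}, "B": {3, 4, 5}, "C": {6}, "D": {1, 2, 3}, "E": {4, 5, 6}}
print(list(exact_covers({1, 2, 3, 4, 5, 6}, parts)))
```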
Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca
2011-01-01
A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I(SET)). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the I(SET) in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds, of the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P.
Wang, Thanh; Han, Shanlong; Yuan, Bo; Zeng, Lixi; Li, Yingming; Wang, Yawei; Jiang, Guibin
2012-12-01
Short chain chlorinated paraffins (SCCPs) are semi-volatile chemicals that are considered persistent in the environment, potential toxic and subject to long-range transport. This study investigates the concentrations and gas-particle partitioning of SCCPs at an urban site in Beijing during summer and wintertime. The total atmospheric SCCP levels ranged 1.9-33.0 ng/m(3) during wintertime. Significantly higher levels were found during the summer (range 112-332 ng/m(3)). The average fraction of total SCCPs in the particle phase (ϕ) was 0.67 during wintertime but decreased significantly during the summer (ϕ = 0.06). The ten and eleven carbon chain homologues with five to eight chlorine atoms were the predominant SCCP formula groups in air. Significant linear correlations were found between the gas-particle partition coefficients and the predicted subcooled vapor pressures and octanol-air partition coefficients. The gas-particle partitioning of SCCPs was further investigated and compared with both the Junge-Pankow adsorption and K(oa)-based absorption models. Copyright © 2012 Elsevier Ltd. All rights reserved.
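For orientation, a Koa-based absorption model of the kind referenced above predicts the particle-bound fraction from the octanol-air partition coefficient, the organic-matter fraction of the particles and the particulate matter concentration; the sketch below uses the commonly cited Harner-Bidleman regression constants, and the input values are placeholders rather than measurements from this study.

```python
import math

def particle_bound_fraction(log_koa, f_om, tsp_ug_m3):
    """Koa-based absorption model (Harner-Bidleman form):
    log Kp = log Koa + log f_om - 11.91, with Kp in m3/ug,
    then particle-bound fraction phi = Kp*TSP / (1 + Kp*TSP)."""
    kp = 10 ** (log_koa + math.log10(f_om) - 11.91)
    return kp * tsp_ug_m3 / (1.0 + kp * tsp_ug_m3)

# Hypothetical SCCP-like compound: log Koa = 10, 20% organic matter, TSP = 100 ug/m3.
print(f"phi = {particle_bound_fraction(10.0, 0.2, 100.0):.2f}")
```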
Hydraulic geometry of the Platte River in south-central Nebraska
Eschner, T.R.
1982-01-01
At-a-station hydraulic geometry of the Platte River in south-central Nebraska is complex. The range of exponents of simple power-function relations is large, both between different reaches of the river, and among different sections within a given reach. The at-a-station exponents plot in several fields of the b-f-m diagram, suggesting that morphologic and hydraulic changes with increasing discharge vary considerably. Systematic changes in the plotting positions of the exponents with time indicate that in general, the width exponent has decreased, although trends are not readily apparent in the other exponents. Plots of the hydraulic-geometry relations indicate that simple power functions are not the proper model in all instances. For these sections, breaks in the slopes of the hydraulic geometry relations serve to partition the data sets. Power functions fit separately to the partitioned data described the width-, depth-, and velocity-discharge relations more accurately than did a single power function. Plotting positions of the exponents from hydraulic geometry relations of partitioned data sets on b-f-m diagrams indicate that much of the apparent variation in the plotting positions of single power functions arises because the single power functions compromise between both subsets of the partitioned data. For several sections, the shape of the channel primarily accounts for the better fit of two power functions to the partitioned data than of a single power function over the entire range of data. These non-log-linear relations may have significance for channel maintenance. (USGS)
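The partitioned power-function fits described above amount to fitting separate straight lines in log-log space below and above a breakpoint discharge; a sketch with synthetic data and an arbitrary breakpoint, fitting w = aQ^b on each segment:

```python
import numpy as np

def fit_power_segments(discharge, width, breakpoint):
    """Fit w = a * Q**b separately below and above a breakpoint discharge
    by linear regression in log-log space; returns (a, b) for each segment."""
    Q, w = np.asarray(discharge, float), np.asarray(width, float)
    fits = []
    for mask in (Q <= breakpoint, Q > breakpoint):
        b, log_a = np.polyfit(np.log10(Q[mask]), np.log10(w[mask]), 1)
        fits.append((10 ** log_a, b))
    return fits

# Synthetic width-discharge data with a change in exponent near Q = 50.
rng = np.random.default_rng(1)
Q = np.linspace(5, 500, 60)
w = np.where(Q <= 50, 8 * Q**0.45, 8 * 50**0.45 * (Q / 50)**0.10) * rng.lognormal(0, 0.03, Q.size)
print(fit_power_segments(Q, w, breakpoint=50))
```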
Zeng, Xiao-Lan; Wang, Hong-Jun; Wang, Yan
2012-02-01
The possible molecular geometries of 134 halogenated methyl-phenyl ethers were optimized at the B3LYP/6-31G* level with the Gaussian 98 program. The calculated structural parameters were taken as theoretical descriptors to establish two novel QSPR models for predicting the aqueous solubility (-lgS(w,l)) and n-octanol/water partition coefficient (lgK(ow)) of halogenated methyl-phenyl ethers. The two models achieved in this work both contain three variables: energy of the lowest unoccupied molecular orbital (E(LUMO)), most positive atomic partial charge in the molecule (q(+)), and quadrupole moment (Q(yy) or Q(zz)), with R values of 0.992 and 0.970, respectively, and standard errors of estimate in modeling (SD) of 0.132 and 0.178, respectively. The results of leave-one-out (LOO) cross-validation for the training set and validation with external test sets both show that the models obtained exhibited optimum stability and good predictive power. We suggest that the two QSPR models derived here can be used to predict S(w,l) and K(ow) accurately for non-tested halogenated methyl-phenyl ether congeners. Copyright © 2011 Elsevier Ltd. All rights reserved.
The partition dimension of cycle books graph
NASA Astrophysics Data System (ADS)
Santoso, Jaya; Darmaji
2018-03-01
Let G be a nontrivial connected graph with vertex set V(G) and edge set E(G). For S ⊆ V(G) and v ∈ V(G), the distance between v and S is d(v,S) = min{d(v,x) | x ∈ S}. For an ordered partition ∏ = {S_1, S_2, S_3, …, S_k} of V(G), the representation of v with respect to ∏ is defined by r(v|∏) = (d(v, S_1), d(v, S_2), …, d(v, S_k)). The partition ∏ is called a resolving partition of G if all representations of vertices are distinct. The partition dimension pd(G) is the smallest integer k such that G has a resolving partition with k classes. In this research we determine the partition dimension of the cycle books graph B_{C_r,m}, the graph consisting of m copies of the cycle C_r sharing a common path P_2. It is shown that pd(B_{C_3,m}) = 3 for m = 2, 3 and pd(B_{C_3,m}) = m for m ≥ 4; pd(B_{C_4,m}) = 3 + 2k for m = 3k + 2, 4 + 2(k − 1) for m = 3k + 1, and 3 + 2(k − 1) for m = 3k; and pd(B_{C_5,m}) = m + 1.
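For small graphs the definitions above translate directly into an exhaustive search. The following sketch is a brute-force illustration using networkx (not the constructive proofs of the paper): it checks whether a partition is resolving and finds pd(G) for tiny examples.

```python
import itertools
import networkx as nx

def is_resolving(G, blocks, dist):
    """A partition is resolving if the distance-vectors r(v|Pi) are pairwise distinct."""
    reps = set()
    for v in G.nodes:
        r = tuple(min(dist[v][x] for x in S) for S in blocks)
        if r in reps:
            return False
        reps.add(r)
    return True

def partition_dimension(G):
    """Smallest k admitting a resolving k-partition (exhaustive; feasible for tiny graphs only)."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes)
    for k in range(2, len(nodes) + 1):
        for labels in itertools.product(range(k), repeat=len(nodes)):
            if len(set(labels)) != k:
                continue
            blocks = [[v for v, lab in zip(nodes, labels) if lab == i] for i in range(k)]
            if is_resolving(G, blocks, dist):
                return k
    return len(nodes)

print(partition_dimension(nx.cycle_graph(5)))  # pd(C5) = 3
```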
Drug Distribution. Part 1. Models to Predict Membrane Partitioning.
Nagar, Swati; Korzekwa, Ken
2017-03-01
Tissue partitioning is an important component of drug distribution and half-life. Protein binding and lipid partitioning together determine drug distribution. Two structure-based models to predict partitioning into microsomal membranes are presented. An orientation-based model was developed using a membrane template and atom-based relative free energy functions to select drug conformations and orientations for neutral and basic drugs. The resulting model predicts the correct membrane positions for nine compounds tested, and predicts the membrane partitioning for n = 67 drugs with an average fold-error of 2.4. Next, a more facile descriptor-based model was developed for acids, neutrals and bases. This model considers the partitioning of neutral and ionized species at equilibrium, and can predict membrane partitioning with an average fold-error of 2.0 (n = 92 drugs). Together these models suggest that drug orientation is important for membrane partitioning and that membrane partitioning can be well predicted from physicochemical properties.
Direct optimization, affine gap costs, and node stability.
Aagesen, Lone
2005-09-01
The outcome of a phylogenetic analysis based on DNA sequence data is highly dependent on the homology-assignment step and may vary with alignment parameter costs. Robustness to changes in parameter costs is therefore a desired quality of a data set because the final conclusions will be less dependent on selecting a precise optimal cost set. Here, node stability is explored in relation to separate versus combined analysis in three different data sets, all including several data partitions. Robustness to changes in cost sets is measured as the number of successive changes that can be made in a given cost set before a specific clade is lost. In all cases the changes are to the base change cost, the gap penalties, and the adding/removing/changing of affine gap costs. When combining data partitions, the number of clades that appear in the entire parameter space is not markedly increased; in some cases this number even decreased. However, when data partitions were combined, the trees from cost sets including affine gap costs were always more similar to one another than were the trees from cost sets without affine gap costs. This was not the case when the data partitions were analyzed independently. When data sets were combined, approximately 80% of the clades found under cost sets including affine gap costs resisted at least one change to the cost set.
Prosperi, Mattia C F; De Luca, Andrea; Di Giambenedetto, Simona; Bracciale, Laura; Fabbiani, Massimiliano; Cauda, Roberto; Salemi, Marco
2010-10-25
Phylogenetic methods produce hierarchies of molecular species, inferring knowledge about taxonomy and evolution. However, there is not yet a consensus methodology that provides a crisp partition of taxa, which is desirable when considering the problem of intra/inter-patient quasispecies classification or infection transmission event identification. We introduce threshold bootstrap clustering (TBC), a new methodology for partitioning molecular sequences that does not require a phylogenetic tree estimation. TBC is an incremental partition algorithm, inspired by the stochastic Chinese restaurant process, and takes advantage of resampling techniques and models of sequence evolution. TBC uses as input a multiple alignment of molecular sequences and its output is a crisp partition of the taxa into an automatically determined number of clusters. By varying initial conditions, the algorithm can produce different partitions. We describe a procedure that selects a prime partition among a set of candidate ones and calculates a measure of cluster reliability. TBC was successfully tested for the identification of human immunodeficiency virus type 1 (HIV-1) and hepatitis C virus (HCV) subtypes, and compared with previously established methodologies. It was also evaluated on the problem of HIV-1 intra-patient quasispecies clustering, and for transmission cluster identification, using a set of sequences from patients with known transmission event histories. TBC has been shown to be effective for the subtyping of HIV and HCV, and for identifying intra-patient quasispecies. To some extent, the algorithm was also able to infer clusters corresponding to events of infection transmission. The computational complexity of TBC is quadratic in the number of taxa, lower than that of other established methods; in addition, TBC has been enhanced with a measure of cluster reliability. TBC can be useful to characterise molecular quasispecies in a broad context.
Braun, Katharina; Böhnke, Frank; Stark, Thomas
2012-06-01
We present a complete geometric model of the human cochlea, including the segmentation and reconstruction of the fluid-filled chambers scala tympani and scala vestibuli, the lamina spiralis ossea and the vibrating structure (cochlear partition). Future fluid-structure coupled simulations require a reliable geometric model of the cochlea. The aim of this study was to present an anatomical model of the human cochlea, which can be used for further numerical calculations. Using high resolution micro-computed tomography (µCT), we obtained images of a cut human temporal bone with a spatial resolution of 5.9 µm. Images were manually segmented to obtain the three-dimensional reconstruction of the cochlea. Due to the high resolution of the µCT data, a detailed examination of the geometry of the twisted cochlear partition near the oval and the round window as well as a precise illustration of the helicotrema was possible. After reconstruction of the lamina spiralis ossea, the cochlear partition and the curved geometry of the scala vestibuli and the scala tympani were presented. The obtained data sets were exported as stereolithography (STL) files. These files represent a complete framework for future numerical simulations of mechanical (acoustic) wave propagation on the cochlear partition in the form of mathematical mechanical cochlea models. Additional quantitative information concerning the heights, lengths and volumes of the scalae was found and compared with previous results.
Pirkle, Catherine M; Wu, Yan Yan; Zunzunegui, Maria-Victoria; Gómez, José Fernando
2018-01-01
Objective Conceptual models underpinning much epidemiological research on ageing acknowledge that environmental, social and biological systems interact to influence health outcomes. Recursive partitioning is a data-driven approach that allows for concurrent exploration of distinct mixtures, or clusters, of individuals that have a particular outcome. Our aim is to use recursive partitioning to examine risk clusters for metabolic syndrome (MetS) and its components, in order to identify vulnerable populations. Study design Cross-sectional analysis of baseline data from a prospective longitudinal cohort called the International Mobility in Aging Study (IMIAS). Setting IMIAS includes sites from three middle-income countries—Tirana (Albania), Natal (Brazil) and Manizales (Colombia)—and two from Canada—Kingston (Ontario) and Saint-Hyacinthe (Quebec). Participants Community-dwelling male and female adults, aged 64–75 years (n=2002). Primary and secondary outcome measures We apply recursive partitioning to investigate social and behavioural risk factors for MetS and its components. Model-based recursive partitioning (MOB) was used to cluster participants into age-adjusted risk groups based on variability in: study site, sex, education, living arrangements, childhood adversities, adult occupation, current employment status, income, perceived income sufficiency, smoking status and weekly minutes of physical activity. Results 43% of participants had MetS. Using MOB, the primary partitioning variable was participant sex. Among women from middle-income sites, the predicted proportion with MetS ranged from 58% to 68%. Canadian women with limited physical activity had elevated predicted proportions of MetS (49%, 95% CI 39% to 58%). Among men, MetS ranged from 26% to 41% depending on childhood social adversity and education. Clustering for MetS components differed from that for the syndrome and across components. Study site was a primary partitioning variable for all components except HDL cholesterol. Sex was important for most components. Conclusion MOB is a promising technique for identifying disease risk clusters (eg, vulnerable populations) in modestly sized samples. PMID:29500203
[Analytic methods for seed models with genotype x environment interactions].
Zhu, J
1996-01-01
Genetic models with genotype effect (G) and genotype x environment interaction effect (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). Seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. Maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can also be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2 and backcrosses. A set of parents, their reciprocal F1, and F2 seeds is applicable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components. Unbiased estimation of covariance components between two traits can also be obtained by the MINQUE(0/1) method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of estimated variance and covariance components and of predicted genetic effects, which can be further used in t-tests of the parameters. Unbiasedness and efficiency in estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.
Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory.
Delgado-Friedrichs, Olaf; Robins, Vanessa; Sheppard, Adrian
2015-03-01
We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling.
In this study, modeled gas- and aerosol-phase ammonia, nitric acid, and hydrogen chloride are compared to measurements taken during a field campaign conducted in northern Colorado in February and March 2011. We compare the modeled and observed gas-particle partitioning, and assess potential reasons for discrepancies between the model and measurements. This data set contains scripts and data used for each figure in the associated manuscript. Figures are generated using the R project statistical programming language. Data files are in either comma-separated value (CSV) format or netCDF, a standard self-describing binary data format commonly used in the earth and atmospheric sciences. This dataset is associated with the following publication: Kelly, J., K. Baker, C. Nolte, S. Napelenok, W.C. Keene, and A.A.P. Pszenny. Simulating the phase partitioning of NH3, HNO3, and HCl with size-resolved particles over northern Colorado in winter. ATMOSPHERIC ENVIRONMENT. Elsevier Science Ltd, New York, NY, USA, 131: 67-77, (2016).
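Since the data files are distributed as CSV and netCDF, loading them in Python is straightforward; the file and variable names below are placeholders, as the actual archive layout is not described here.

```python
import pandas as pd
import xarray as xr  # commonly used netCDF reader; the netCDF4 package would also work

obs = pd.read_csv("observations.csv")      # hypothetical table of measured gas/particle concentrations
sim = xr.open_dataset("model_output.nc")   # hypothetical model output file in netCDF format

print(obs.columns.tolist())
print(list(sim.data_vars))
```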
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute a large data set of a cardiac model across a distributed memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to achieve optimization in load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at a 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or simply taking the magnitude of the resulting negative coordinates, we were able to create 14 data sets of the same anatomy with different orientations and positions in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100, to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512 processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
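As a rough illustration of the decomposition strategy discussed here, the sketch below implements a plain orthogonal recursive bisection on weighted points, with weights standing in for the tissue vs. non-tissue load ratio; it is a simplified stand-in under those assumptions, not the Blue Gene/L implementation.

```python
import numpy as np

def orb(points, weights, depth):
    """Orthogonal recursive bisection: split along the longest coordinate axis so that the
    summed computational weight is balanced between the halves; recurse to 2**depth parts."""
    if depth == 0:
        return [np.arange(len(points))]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    order = np.argsort(points[:, axis])
    cum = np.cumsum(weights[order])
    cut = int(np.searchsorted(cum, cum[-1] / 2.0))
    parts = []
    for half in (order[:cut + 1], order[cut + 1:]):
        for sub in orb(points[half], weights[half], depth - 1):
            parts.append(half[sub])
    return parts

# Toy anatomy: random voxel positions, "tissue" voxels weighted 10:1 against "non-tissue"
rng = np.random.default_rng(1)
pts = rng.random((1000, 3))
wts = np.where(rng.random(1000) < 0.3, 10.0, 1.0)
print([wts[p].sum() for p in orb(pts, wts, depth=3)])  # per-partition load
```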
Henneberger, Luise; Goss, Kai-Uwe; Endo, Satoshi
2016-07-05
The in vivo partitioning behavior of ionogenic organic chemicals (IOCs) is of paramount importance for their toxicokinetics and bioaccumulation. Among other proteins, structural proteins including muscle proteins could be an important sorption phase for IOCs, because of their abundance in the bodies of humans and other animals and their polar nature. Binding data for IOCs to structural proteins are, however, severely limited. Therefore, in this study muscle protein-water partition coefficients (KMP/w) of 51 systematically selected organic anions and cations were determined experimentally. A comparison of the measured KMP/w with bovine serum albumin (BSA)-water partition coefficients showed that anionic chemicals sorb more strongly to BSA than to muscle protein (by up to 3.5 orders of magnitude), while cations sorb similarly to both proteins. Sorption isotherms of selected IOCs to muscle protein are linear (i.e., KMP/w is concentration independent), and KMP/w is only marginally influenced by pH value and salt concentration. Using the obtained data set of KMP/w, a polyparameter linear free energy relationship (PP-LFER) model was established. The derived equation fits the data well (R(2) = 0.89, RMSE = 0.29). Finally, it was demonstrated that the in vitro measured KMP/w values of this study have the potential to be used to evaluate tissue-plasma partitioning of IOCs in vivo.
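A PP-LFER of the kind established here is, in practice, a multiple linear regression on solute descriptors. The sketch below fits a generic equation log K = c + e·E + s·S + a·A + b·B + v·V by least squares; the descriptor set actually used for the ionic solutes in this study may differ, so treat the column layout as an assumption.

```python
import numpy as np

def fit_pplfer(descriptors, log_k):
    """Fit log K = c + e*E + s*S + a*A + b*B + v*V by ordinary least squares.
    descriptors: (n, 5) array of Abraham-type solute descriptors [E, S, A, B, V]."""
    log_k = np.asarray(log_k, float)
    X = np.column_stack([np.ones(len(log_k)), np.asarray(descriptors, float)])
    coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
    rmse = float(np.sqrt(np.mean((X @ coef - log_k) ** 2)))
    return coef, rmse
```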
Optimal Partitioning of a Data Set Based on the "p"-Median Model
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2008-01-01
Although the "K"-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The "p"-median model is an especially well-studied clustering problem that requires the selection of "p" objects to serve as…
NASA Astrophysics Data System (ADS)
Gibbard, Philip L.; Lewin, John
2016-11-01
We review the historical purposes and procedures for stratigraphical division and naming within the Quaternary, and summarize the current requirements for formal partitioning through the International Commission on Stratigraphy (ICS). A raft of new data and evidence has impacted traditional approaches: quasi-continuous records from ocean sediments and ice cores, new numerical dating techniques, and alternative macro-models, such as those provided through Sequence Stratigraphy and Earth-System Science. The practical usefulness of division remains, but there is now greater appreciation of complex Quaternary detail and the modelling of time continua, the latter also extending into the future. There are problems both of commission (what is done, but could be done better) and of omission (what gets left out) in partitioning the Quaternary. These include the challenge set by the use of unconformities as stage boundaries, how to deal with multiphase records in ocean and terrestrial sediments, what happened at the 'Early-Mid- (Middle) Pleistocene Transition', dealing with trends that cross phase boundaries, and the current controversial focus on how to subdivide the Holocene and formally define an 'Anthropocene'.
Partitioning behavior of aromatic components in jet fuel into diverse membrane-coated fibers.
Baynes, Ronald E; Xia, Xin-Rui; Barlow, Beth M; Riviere, Jim E
2007-11-01
Jet fuel components are known to partition into skin and produce occupational irritant contact dermatitis (OICD) and potentially adverse systemic effects. The purpose of this study was to determine how jet fuel components partition (1) from solvent mixtures into diverse membrane-coated fibers (MCFs) and (2) from biological media into MCFs to predict tissue distribution. Three diverse MCFs, polydimethylsiloxane (PDMS, lipophilic), polyacrylate (PA, polarizable), and carbowax (CAR, polar), were selected to simulate the physicochemical properties of skin in vivo. Following an appropriate equilibrium time between the MCF and dosing solutions, the MCF was injected directly into a gas chromatograph/mass spectrometer (GC-MS) to quantify the amount that partitioned into the membrane. Three vehicles (water, 50% ethanol-water, and albumin-containing media solution) were studied for selected jet fuel components. The more hydrophobic the component, the greater was the partitioning into the membranes across all MCF types, especially from water. The presence of ethanol as a surrogate solvent resulted in significantly reduced partitioning into the MCFs with discernible differences across the three fibers based on their chemistries. The presence of a plasma substitute (media) also reduced partitioning into the MCF, with the CAR MCF system being better correlated to the predicted partitioning of aromatic components into skin. This study demonstrated that a single or multiple set of MCF fibers may be used as a surrogate for octanol/water systems and skin to assess partitioning behavior of nine aromatic components frequently formulated with jet fuels. These diverse inert fibers were able to assess solute partitioning from a blood substitute such as media into a membrane possessing physicochemical properties similar to human skin. This information may be incorporated into physiologically based pharmacokinetic (PBPK) models to provide a more accurate assessment of tissue dosimetry of related toxicants.
Fu, Zhiqiang; Chen, Jingwen; Li, Xuehua; Wang, Ya'nan; Yu, Haiying
2016-04-01
The octanol-air partition coefficient (KOA) is needed for assessing the multimedia transport and bioaccumulability of organic chemicals in the environment. As experimental determination of KOA for various chemicals is costly and laborious, development of KOA estimation methods is necessary. We investigated three methods for KOA prediction: conventional quantitative structure-activity relationship (QSAR) models based on molecular structural descriptors, group contribution models based on atom-centered fragments, and a novel model that predicts KOA via the solvation free energy from the air to the octanol phase (ΔGO(0)), with a collection of 939 experimental KOA values for 379 compounds at different temperatures (263.15-323.15 K) as validation or training sets. The developed models were evaluated with the OECD guidelines on QSAR model validation and applicability domain (AD) description. Results showed that although the ΔGO(0) model is theoretically sound and has a broad AD, its prediction accuracy is the poorest. The QSAR models perform better than the group contribution models, and have similar predictability and accuracy to the conventional method that estimates KOA from the octanol-water partition coefficient and Henry's law constant. One QSAR model, which can predict KOA at different temperatures, was recommended for applications such as assessing the long-range transport potential of chemicals. Copyright © 2016 Elsevier Ltd. All rights reserved.
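The conventional method mentioned above combines the octanol-water partition coefficient with the Henry's law constant through the thermodynamic relation K_OA = K_OW / K_AW. A minimal sketch, assuming H is given in Pa·m³/mol (a common but not universal convention):

```python
import math

def log_koa_from_kow(log_kow, henry_pa_m3_per_mol, temp_k=298.15):
    """Estimate log KOA as log KOW - log KAW, where the dimensionless air-water
    partition coefficient is KAW = H / (R*T)."""
    R = 8.314  # Pa*m^3/(mol*K)
    k_aw = henry_pa_m3_per_mol / (R * temp_k)
    return log_kow - math.log10(k_aw)

print(log_koa_from_kow(log_kow=6.0, henry_pa_m3_per_mol=10.0))  # illustrative values
```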
On the star partition dimension of comb product of cycle and complete graph
NASA Astrophysics Data System (ADS)
Alfarisi, Ridho; Darmaji; Dafik
2017-06-01
Let G = (V, E) be a connected graph with vertex set V(G) and edge set E(G), and let S ⊆ V(G). For an ordered partition Π = {S_1, S_2, S_3, …, S_k} of V(G), the representation of a vertex v ∈ V(G) with respect to Π is the k-vector r(v|Π) = (d(v, S_1), d(v, S_2), …, d(v, S_k)), where d(v, S_i) is the distance between the vertex v and the set S_i, defined by d(v, S_i) = min{d(v, x) | x ∈ S_i}. The partition Π of V(G) is a resolving partition if the k-vectors r(v|Π), v ∈ V(G), are distinct. The minimum k for which V(G) admits a resolving k-partition is the partition dimension of G, denoted by pd(G). A resolving partition Π = {S_1, S_2, S_3, …, S_k} is called a star resolving partition of G if each subgraph induced by S_i, 1 ≤ i ≤ k, is a star. The minimum k for which there exists a star resolving partition of V(G) is the star partition dimension of G, denoted by spd(G). Finding the star partition dimension of G is an NP-hard problem. Furthermore, the comb product of G and H, denoted by G ⊲ H, is the graph obtained by taking one copy of G and |V(G)| copies of H and grafting the i-th copy of H, at the vertex o, to the i-th vertex of G. By the definition of the comb product, V(G ⊲ H) = {(a, u) | a ∈ V(G), u ∈ V(H)} and (a, u)(b, v) ∈ E(G ⊲ H) whenever a = b and uv ∈ E(H), or ab ∈ E(G) and u = v = o. In this paper, we study the star partition dimension of the comb product of a cycle and a complete graph, namely C_n ⊲ K_m and K_m ⊲ C_n for n ≥ 3 and m ≥ 3.
Feller, Chrystel; Favre, Patrick; Janka, Ales; Zeeman, Samuel C; Gabriel, Jean-Pierre; Reinhardt, Didier
2015-01-01
Plants are highly plastic in their potential to adapt to changing environmental conditions. For example, they can selectively promote the relative growth of the root and the shoot in response to limiting supply of mineral nutrients and light, respectively, a phenomenon that is referred to as balanced growth or functional equilibrium. To gain insight into the regulatory network that controls this phenomenon, we took a systems biology approach that combines experimental work with mathematical modeling. We developed a mathematical model representing the activities of the root (nutrient and water uptake) and the shoot (photosynthesis), and their interactions through the exchange of the substrates sugar and phosphate (Pi). The model has been calibrated and validated with two independent experimental data sets obtained with Petunia hybrida. It involves a realistic environment with a day-and-night cycle, which necessitated the introduction of a transitory carbohydrate storage pool and an endogenous clock for coordination of metabolism with the environment. Our main goal was to grasp the dynamic adaptation of shoot:root ratio as a result of changes in light and Pi supply. The results of our study are in agreement with balanced growth hypothesis, suggesting that plants maintain a functional equilibrium between shoot and root activity based on differential growth of these two compartments. Furthermore, our results indicate that resource partitioning can be understood as the emergent property of many local physiological processes in the shoot and the root without explicit partitioning functions. Based on its encouraging predictive power, the model will be further developed as a tool to analyze resource partitioning in shoot and root crops.
Fragment-based prediction of skin sensitization using recursive partitioning
NASA Astrophysics Data System (ADS)
Lu, Jing; Zheng, Mingyue; Wang, Yong; Shen, Qiancheng; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian
2011-09-01
Skin sensitization is an important toxic endpoint in the risk assessment of chemicals. In this paper, structure-activity relationship analysis was performed on the skin sensitization potential of 357 compounds with local lymph node assay data. Structural fragments were extracted by GASTON (GrAph/Sequence/Tree extractiON) from the training set. Eight fragments with accuracy significantly higher than 0.73 (p < 0.1) were retained to make up an indicator fragment descriptor. The fragment descriptor and eight other physicochemical descriptors closely related to the endpoint were calculated to construct a recursive partitioning tree (RP tree) for classification. The balanced accuracies of the training set, test set I, and test set II in the leave-one-out model were 0.846, 0.800, and 0.809, respectively. The results highlight that the fragment-based RP tree is a preferable method for identifying skin sensitizers. Moreover, the selected fragments provide useful structural information for exploring sensitization mechanisms, and the RP tree creates a graphic tree to identify the most important properties associated with skin sensitization. They can provide some guidance for the design of drugs with a lower sensitization potential.
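Recursive partitioning of the kind described here can be illustrated with an ordinary CART classifier; the sketch below uses scikit-learn's decision tree as an analogue of the RP tree, combining a fragment indicator with physicochemical descriptors. All descriptor values are illustrative placeholders, not the published data set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [fragment indicator, logP, molecular weight] -- placeholder values only
X = np.array([[1, 2.3, 310.4],
              [0, 0.8, 150.2],
              [1, 3.1, 275.0],
              [0, 1.2, 198.7],
              [1, 1.9, 224.3],
              [0, 2.6, 305.1]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = sensitizer (LLNA positive), 0 = non-sensitizer

rp_tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced").fit(X, y)
print(rp_tree.predict([[1, 1.5, 220.0]]))
```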
2010-01-01
Background Comparative genomics methods such as phylogenetic profiling can mine powerful inferences from inherently noisy biological data sets. We introduce Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL), a method that applies the Partial Phylogenetic Profiling (PPP) approach locally within a protein sequence to discover short sequence signatures associated with functional sites. The approach is based on the basic scoring mechanism employed by PPP, namely the use of binomial distribution statistics to optimize sequence similarity cutoffs during searches of partitioned training sets. Results Here we illustrate and validate the ability of the SIMBAL method to find functionally relevant short sequence signatures by application to two well-characterized protein families. In the first example, we partitioned a family of ABC permeases using a metabolic background property (urea utilization). Thus, the TRUE set for this family comprised members whose genome of origin encoded a urea utilization system. By moving a sliding window across the sequence of a permease, and searching each subsequence in turn against the full set of partitioned proteins, the method found which local sequence signatures best correlated with the urea utilization trait. Mapping of SIMBAL "hot spots" onto crystal structures of homologous permeases reveals that the significant sites are gating determinants on the cytosolic face rather than, say, docking sites for the substrate-binding protein on the extracellular face. In the second example, we partitioned a protein methyltransferase family using gene proximity as a criterion. In this case, the TRUE set comprised those methyltransferases encoded near the gene for the substrate RF-1. SIMBAL identifies sequence regions that map onto the substrate-binding interface while ignoring regions involved in the methyltransferase reaction mechanism in general. Neither method for training set construction requires any prior experimental characterization. Conclusions SIMBAL shows that, in functionally divergent protein families, selected short sequences often significantly outperform their full-length parent sequence for making functional predictions by sequence similarity, suggesting avenues for improved functional classifiers. When combined with structural data, SIMBAL affords the ability to localize and model functional sites. PMID:20102603
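The scoring mechanism described above rests on binomial statistics over a TRUE/FALSE-partitioned training set. The sketch below gives one hedged reading of that idea (the exact PPP/SIMBAL score, window sizes and search machinery differ): each sliding window is scored by how improbably enriched its top similarity hits are in TRUE-partition members.

```python
from scipy.stats import binom

def enrichment_score(hits_true, hits_total, n_true, n_total):
    """-log10 probability of seeing at least hits_true TRUE-partition members among the
    top hits_total hits, if hits were drawn at random from the partitioned set."""
    p_true = n_true / n_total
    ln10 = 2.302585092994046
    return -binom.logsf(hits_true - 1, hits_total, p_true) / ln10

def sliding_windows(sequence, width=30, step=5):
    """Yield (start, subsequence) windows across a protein sequence."""
    for start in range(0, max(1, len(sequence) - width + 1), step):
        yield start, sequence[start:start + width]

# Example: 40 of the top 50 hits for one window come from 300 TRUE proteins out of 2000 total
print(enrichment_score(40, 50, 300, 2000))
```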
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Fang, Hongwei; Xu, Xingya
Phosphorus (P) fate and transport plays a crucial role in the ecology of rivers and reservoirs in which eutrophication is limited by P. A key uncertainty in models used to help manage P in such systems is the partitioning of P to suspended and bed sediments. By analyzing data from field and laboratory experiments, we stochastically characterize the variability of the partition coefficient (Kd) and derive spatio-temporal solutions for P transport in the Three Gorges Reservoir (TGR). We formulate a set of stochastic partial differential equations (SPDEs) to simulate P transport by randomly sampling Kd from the measured distributions, to obtain statistical descriptions of the P concentration and retention in the TGR. The correspondence between predicted and observed P concentrations and P retention in the TGR, combined with the ability to effectively characterize uncertainty, suggests that a model that incorporates the observed variability can better describe P dynamics and more effectively serve as a tool for P management in the system. This study highlights the importance of considering parametric uncertainty in estimating the uncertainty/variability associated with simulated P transport.
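The stochastic treatment of Kd described here can be illustrated by a plain Monte Carlo propagation step; the distribution parameters and suspended-sediment concentration below are placeholders, not the measured TGR values.

```python
import numpy as np

def dissolved_fraction(kd_l_per_kg, spm_mg_per_l):
    """Two-phase equilibrium partitioning: fraction of total P that stays dissolved for a
    given partition coefficient Kd (L/kg) and suspended-sediment concentration (mg/L)."""
    return 1.0 / (1.0 + kd_l_per_kg * spm_mg_per_l * 1e-6)

rng = np.random.default_rng(0)
kd = rng.lognormal(mean=np.log(300.0), sigma=0.8, size=10_000)  # assumed lognormal Kd variability
frac = dissolved_fraction(kd, spm_mg_per_l=500.0)
print(frac.mean(), np.percentile(frac, [5, 95]))
```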
Yu, S; Gao, S; Gan, Y; Zhang, Y; Ruan, X; Wang, Y; Yang, L; Shi, J
2016-04-01
Quantitative structure-property relationship modelling can be a valuable alternative method to replace or reduce experimental testing. In particular, some endpoints such as the octanol-water (KOW) and organic carbon-water (KOC) partition coefficients of polychlorinated biphenyls (PCBs) are easier to predict, and various models have already been developed. In this paper, two different methods, multiple linear regression based on descriptors generated using the Dragon software and hologram quantitative structure-activity relationships, were employed to predict the suspended particulate matter (SPM) derived log KOC and the generator column, shake flask and slow stirring method derived log KOW values of 209 PCBs. The predictive ability of the derived models was validated using a test set. The performance of all these models was compared with that of the EPI Suite™ software. The results indicated that the proposed models were robust and satisfactory, and could provide feasible and promising tools for the rapid assessment of the SPM derived log KOC and the generator column, shake flask and slow stirring method derived log KOW values of PCBs.
Cell-autonomous-like silencing of GFP-partitioned transgenic Nicotiana benthamiana.
Sohn, Seong-Han; Frost, Jennifer; Kim, Yoon-Hee; Choi, Seung-Kook; Lee, Yi; Seo, Mi-Suk; Lim, Sun-Hyung; Choi, Yeonhee; Kim, Kook-Hyung; Lomonossoff, George
2014-08-01
We previously reported the novel partitioning of regional GFP silencing on leaves of 35S-GFP transgenic plants, coining the term "partitioned silencing". We set out to delineate the mechanism of partitioned silencing. Here, we report that the partitioned plants were hemizygous for the transgene, possessing two direct-repeat copies of 35S-GFP. The detection of both siRNA expression (21 and 24 nt) and DNA methylation enrichment specifically at silenced regions indicated that both post-transcriptional gene silencing (PTGS) and transcriptional gene silencing (TGS) were involved in the silencing mechanism. Using in vivo agroinfiltration of 35S-GFP/GUS and inoculation of TMV-GFP RNA, we demonstrate that PTGS, not TGS, plays the dominant role in partitioned silencing, concluding that the underlying mechanism of partitioned silencing is analogous to RNA-directed DNA methylation (RdDM). The initial pattern of partitioned silencing was tightly maintained in a cell-autonomous manner, although partitioned-silenced regions possess a potential for systemic spread. Surprisingly, transcriptome profiling through next-generation sequencing demonstrated that the expression levels of most genes involved in the silencing pathway were similar in both GFP-expressing and silenced regions, although a diverse set of region-specific transcripts was detected. This suggests that partitioned silencing can be triggered and regulated by genes other than those involved in the silencing pathway. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Drake, Michael J.; Rubie, David C.; Mcfarlane, Elisabeth A.
1992-01-01
The partitioning of elements amongst lower mantle phases and silicate melts is of interest in unraveling the early thermal history of the Earth. Because of the technical difficulty of carrying out such measurements, only one direct set of measurements has been reported previously, and these results as well as interpretations based on them have generated controversy. Here we report what is, to our knowledge, only the second set of directly measured trace element partition coefficients for a natural system (KLB-1).
PiTS-1: Carbon Partitioning in Loblolly Pine after 13C Labeling and Shade Treatments
Warren, J. M.; Iversen, C. M.; Garten, Jr., C. T.; Norby, R. J.; Childs, J.; Brice, D.; Evans, R. M.; Gu, L.; Thornton, P.; Weston, D. J.
2013-01-01
The PiTS task was established with the objective of improving the C partitioning routines in existing ecosystem models by exploring mechanistic model representations of partitioning tested against field observations. We used short-term field manipulations of C flow, through 13CO2 labeling, canopy shading and stem girdling, to dramatically alter C partitioning, and resultant data are being used to test model representation of C partitioning processes in the Community Land Model (CLM4 or CLM4.5).
Mordell integrals and Giveon-Kutasov duality
NASA Astrophysics Data System (ADS)
Giasemidis, Georgios; Tierz, Miguel
2016-01-01
We solve, for finite N, the matrix model of supersymmetric U(N) Chern-Simons theory coupled to N_f massive hypermultiplets of R-charge 1/2, together with a Fayet-Iliopoulos term. We compute the partition function by identifying it with a determinant of a Hankel matrix, whose entries are parametric derivatives (of order N_f − 1) of Mordell integrals. We obtain finite Gauss sum expressions for the partition functions. We also apply these results to obtain an exhaustive test of Giveon-Kutasov (GK) duality in the N = 3 setting, by systematic computation of the matrix models involved. The phase factor that arises in the duality is then obtained explicitly. We give an expression characterized by modular arithmetic (mod 4) behavior that holds for all tested values of the parameters (checked up to N_f = 12 flavours).
Task partitioning in a ponerine ant.
Theraulaz, Guy; Bonabeau, Eric; Sole, Ricard V; Schatz, Bertrand; Deneubourg, Jean-Louis
2002-04-21
This paper reports a study of the task partitioning observed in the ponerine ant Ectatomma ruidum, where prey-foraging behaviour can be subdivided into two categories: stinging and transporting. Stingers kill live prey and transporters carry prey corpses back to the nest. Stinging and transporting behaviours are released by certain stimuli through response thresholds; the respective stimuli for stinging and transporting appear to be the number of live prey and the number of prey corpses. A response threshold model, the parameters of which are all measured empirically, reproduces a set of non-trivial colony-level dynamical patterns observed in the experiments. This combination of modelling and empirical work connects explicitly the level of individual behaviour with colony-level patterns of work organization. Copyright 2002 Elsevier Science Ltd. All rights reserved.
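Response-threshold models of this family are commonly written as a sigmoid of stimulus intensity against an individual threshold; the sketch below shows that generic form only. The specific functional form and the empirically measured parameters fitted for E. ruidum are not reproduced here.

```python
def response_probability(stimulus, threshold, steepness=2):
    """Generic response-threshold rule: probability that an individual engages in a task
    (e.g., stinging when live prey are present, transporting when corpses accumulate)."""
    return stimulus ** steepness / (stimulus ** steepness + threshold ** steepness)

# A strong stimulus well above threshold vs. a weak one below it
print(response_probability(stimulus=5, threshold=2), response_probability(stimulus=1, threshold=2))
```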
Boron Partitioning Coefficient above Unity in Laser Crystallized Silicon.
Lill, Patrick C; Dahlinger, Morris; Köhler, Jürgen R
2017-02-16
Boron pile-up at the maximum melt depth for laser melt annealing of implanted silicon has been reported in numerous papers. The present contribution examines the boron accumulation in a laser doping setting, without dopants initially incorporated in the silicon wafer. Our numerical simulation models laser-induced melting as well as dopant diffusion, and excellently reproduces the secondary ion mass spectroscopy-measured boron profiles. We determine a partitioning coefficient k_p above unity, with k_p = 1.25 ± 0.05, and a thermally activated diffusivity D_B of boron in liquid silicon, with D_B(1687 K) = (3.53 ± 0.44) × 10^-4 cm^2·s^-1. For similar laser parameters and process conditions, our model predicts the anticipated boron profile of a laser doping experiment.
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.
1984-01-01
A field of measured anomalies of some physical variable, relative to their time averages, is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller-dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The space-time distribution of Nimbus-5 ESMR sea ice measurements is analyzed.
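As a reminder of the building block being joined here, the sketch below computes the local EOFs and principal components of a single partition of an anomaly field (time by space). The joining of partitions into an approximate global eigenstructure, which is the contribution described above, is not shown.

```python
import numpy as np

def local_eofs(anomaly, n_modes=3):
    """Leading EOFs (eigenvectors of the spatial covariance matrix) and their principal
    components for one partition of an anomaly field with shape (time, space)."""
    anomaly = np.asarray(anomaly, float)
    cov = anomaly.T @ anomaly / (anomaly.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_modes]
    eofs = vecs[:, order]
    pcs = anomaly @ eofs
    return vals[order], eofs, pcs
```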
Set Partitions and the Multiplication Principle
ERIC Educational Resources Information Center
Lockwood, Elise; Caughman, John S., IV
2016-01-01
To further understand student thinking in the context of combinatorial enumeration, we examine student work on a problem involving set partitions. In this context, we note some key features of the multiplication principle that were often not attended to by students. We also share a productive way of thinking that emerged for several students who…
Wang, Kai; Mao, Jiafu; Dickinson, Robert; ...
2013-06-05
This paper examines a land surface solar radiation partitioning scheme, i.e., that of the Community Land Model version 4 (CLM4) with coupled carbon and nitrogen cycles. Taking advantage of a unique 30-year fraction of absorbed photosynthetically active radiation (FPAR) dataset derived from the Global Inventory Modeling and Mapping Studies (GIMMS) normalized difference vegetation index (NDVI) data set, multiple other remote sensing datasets, and site-level observations, we evaluated the CLM4 FPAR's seasonal cycle, diurnal cycle, long-term trends and spatial patterns. The findings show that the model generally agrees with observations in the seasonal cycle, long-term trends, and spatial patterns, but does not reproduce the diurnal cycle. Discrepancies also exist in seasonality magnitudes, peak value months, and spatial heterogeneity. We identify the discrepancy in the diurnal cycle as being due to the absence of sun-angle dependence in the model. Implementation of sun-angle dependence in a one-dimensional (1-D) model is proposed. The need to better relate vegetation to climate in the model, indicated by the long-term trends, is also noted. Evaluation of the CLM4 land surface solar radiation partitioning scheme using remote sensing and site-level FPAR datasets provides targets for future development in its representation of this naturally complicated process.
Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano
2010-01-01
The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of log BB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (log P), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental log BB data had been determined in vivo. In particular, since molecules with log BB > 0.3 cross the blood-brain barrier (BBB) readily while molecules with log BB < −1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the log BB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. PMID:20427217
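Because the classifier described above uses only a handful of computed descriptors, an analogous model is easy to set up. The sketch below uses scikit-learn's linear discriminant analysis with placeholder descriptor values (logP, TPSA, and the count of acidic plus basic atoms); it is not the published training set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Rows: [logP, TPSA, n_acidic + n_basic] -- illustrative placeholder values
X = np.array([[2.1, 45.0, 0],
              [0.3, 110.0, 2],
              [3.0, 30.0, 1],
              [-0.5, 95.0, 3],
              [2.7, 52.0, 1],
              [0.1, 120.0, 2]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1: logBB > 0.3 (penetrant), 0: logBB < -1 (non-penetrant)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[1.8, 60.0, 1]]))
```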
Combined node and link partitions method for finding overlapping communities in complex networks
Jin, Di; Gabrys, Bogdan; Dang, Jianwu
2015-01-01
Community detection in complex networks is a fundamental data analysis task in various domains, and how to effectively find overlapping communities in real applications is still a challenge. In this work, we propose a new unified model and method for finding the best overlapping communities on the basis of the associated node and link partitions derived from the same framework. Specifically, we first describe a unified model that accommodates node and link communities (partitions) together, and then present a nonnegative matrix factorization method to learn the parameters of the model. Thereafter, we infer the overlapping communities based on the derived node and link communities, i.e., we determine each overlapping community from the corresponding node and link communities by a greedy optimization of a local community function, conductance. Finally, we introduce a model selection method based on consensus clustering to determine the number of communities. We have evaluated our method on both synthetic and real-world networks with ground truths, and compared it with seven state-of-the-art methods. The experimental results demonstrate the superior performance of our method over the competing ones in detecting overlapping communities for all analysed data sets. Improved performance is particularly pronounced in cases of more complicated networked community structures. PMID:25715829
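The factorization step at the core of this approach can be illustrated with a generic nonnegative matrix factorization of the adjacency matrix. The sketch below is only a simplified stand-in: it covers neither the unified node/link model nor the conductance-based overlap inference and consensus-based model selection described above.

```python
import numpy as np
from sklearn.decomposition import NMF

def soft_node_communities(adjacency, k):
    """Factorise the (nonnegative) adjacency matrix A ~ W H and read each node's community
    memberships from the rows of W; overlapping nodes load on several factors."""
    W = NMF(n_components=k, init="nndsvda", max_iter=500).fit_transform(np.asarray(adjacency, float))
    return W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # normalised membership weights

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(soft_node_communities(A, k=2))
```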
Jang, Myoseon; Czoschke, Nadine M; Northcross, Amanda L; Cao, Gang; Shaof, David
2006-05-01
A predictive model for secondary organic aerosol (SOA) formation by both partitioning and heterogeneous reactions was developed for SOA created from the ozonolysis of alpha-pinene in the presence of preexisting inorganic seed aerosols. SOA was created in a 2 m3 polytetrafluoroethylene film indoor chamber in darkness. Extensive sets of SOA experiments were conducted varying the humidity, the inorganic seed composition (mixtures of ammonium sulfate and sulfuric acid), and the amount of inorganic seed mass. SOA mass was decoupled into partitioning (OM(P)) and heterogeneous aerosol production (OM(H)). The reaction rate constant for OM(H) production was subdivided into three categories (fast, medium, and slow) to account for the different reactivities of organic products in particle-phase heterogeneous reactions. The influence of particle acidity on reaction rates was treated as in a previous semiempirical model. The OM(H) model was developed with medium and strongly acidic seed aerosols, and then extrapolated to OM(H) under weakly acidic conditions, which are more relevant to atmospheric aerosols. To demonstrate the effects of preexisting glyoxal derivatives (e.g., glyoxal hydrate and dimer) on OM(H), SOA was created with a seed mixture comprising aqueous glyoxal and inorganic species. Our results show that heterogeneous SOA formation was also influenced by preexisting reactive glyoxal derivatives.
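The OM(P) term in such a framework is typically obtained from the standard absorptive-partitioning mass balance, solved by fixed-point iteration. The sketch below illustrates only that partitioning step, with illustrative product totals and partitioning coefficients; the heterogeneous OM(H) kinetics and the acidity dependence of this study are not reproduced.

```python
def partition_om(c_total_ug_m3, k_om_m3_ug, om_absorbing_seed=0.0, tol=1e-9, max_iter=500):
    """Absorptive partitioning (Pankow/Odum form): iterate the absorbing organic mass OM until
    OM = seed + sum_i C_i * K_i*OM / (1 + K_i*OM) is self-consistent (all masses in ug/m^3)."""
    om = max(om_absorbing_seed, 1e-6)
    for _ in range(max_iter):
        om_new = om_absorbing_seed + sum(
            c * k * om / (1.0 + k * om) for c, k in zip(c_total_ug_m3, k_om_m3_ug))
        if abs(om_new - om) < tol:
            return om_new
        om = om_new
    return om

print(partition_om([8.0, 20.0], [0.15, 0.004], om_absorbing_seed=1.0))  # two-product example
```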
Strong scaling and speedup to 16,384 processors in cardiac electro-mechanical simulations.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
High performance computing is required to make simulations of whole-organ models of the heart with biophysically detailed cellular models feasible in a clinical setting. Increasing model detail by simulating electrophysiology together with mechanical models increases computational demands. We present scaling results of an electro-mechanical cardiac model of two ventricles and compare them to our previously published results using an electrophysiological model only. The anatomical data set was given by both ventricles of the Visible Female data set at a 0.2 mm resolution. Fiber orientation was included. Data decomposition for the distribution onto the distributed memory system was carried out by orthogonal recursive bisection. Load weight ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100. The ten Tusscher et al. (2004) electrophysiological cell model was used, together with the Rice et al. (1999) model for the computation of the calcium-transient-dependent force. Scaling results for 512, 1024, 2048, 4096, 8192 and 16,384 processors were obtained for 1 ms simulation time. The simulations were carried out on an IBM Blue Gene/L supercomputer. The results show linear scaling from 512 to 16,384 processors, with speedup factors between 1.82 and 2.14 between partitions. The optimal load ratio was 1:25 on all partitions. However, a shift towards load ratios with higher weight for the tissue elements can be recognized, as can be expected when adding computational complexity to the model while keeping the same communication setup. This work demonstrates that it is potentially possible to run simulations of 0.5 s using the presented electro-mechanical cardiac model within 1.5 hours.
NASA Astrophysics Data System (ADS)
Nielsen, Roger L.; Ustunisik, Gokce; Weinsteiger, Allison B.; Tepley, Frank J.; Johnston, A. Dana; Kent, Adam J. R.
2017-09-01
Quantitative models of petrologic processes require accurate partition coefficients. Our ability to obtain accurate partition coefficients is constrained by their dependence on pressure, temperature and composition, and on the experimental and analytical techniques we apply. The source and magnitude of error in experimental studies of trace element partitioning may go unrecognized if one examines only the processed published data. The most important sources of error are relict crystals and analyses of more than one phase within the analytical volume. Because we have typically published averaged data, identification of compromised data is difficult if not impossible. We addressed this problem by examining unprocessed data from plagioclase/melt partitioning experiments, comparing models based on those data with existing partitioning models, and evaluating the degree to which the partitioning models are dependent on the calibration data. We found that partitioning models are dependent on the calibration data in ways that result in erroneous model values, and that the error will be systematic and dependent on the value of the partition coefficient. In effect, use of different calibration datasets will result in partitioning models whose results are systematically biased, and one can arrive at different and conflicting conclusions depending on how a model is calibrated, defeating the purpose of applying the models. Ultimately this is an experimental data problem, which can be solved if we publish individual analyses (not averages) or use a projection method wherein an independent compositional constraint is used to identify and estimate the uncontaminated composition of each phase.
SPH modelling of energy partitioning during impacts on Venus
NASA Technical Reports Server (NTRS)
Takata, T.; Ahrens, T. J.
1993-01-01
Impact cratering of the Venusian planetary surface by meteorites was investigated numerically using the Smoothed Particle Hydrodynamics (SPH) method. Venus presently has a dense atmosphere. Vigorous transfer of energy between impacting meteorites, the planetary surface, and the atmosphere is expected during impact events. The investigation concentrated on the effects of the atmosphere on energy partitioning and on the flow of ejecta and gas. The SPH method is particularly suitable for studying complex motion, especially because of its ability to be extended to three dimensions. In our simulations, particles representing impactors and targets are initially set to a uniform density, and those of the atmosphere are set to be in hydrostatic equilibrium. The target, impactor, and atmosphere are represented by 9800, 80, and 4200 particles, respectively. A Tillotson equation of state for granite is assumed for the target and impactor, and an ideal gas with a constant specific heat ratio is used for the atmosphere. Two-dimensional axisymmetric geometry was assumed, and normal impacts of 10 km diameter projectiles with velocities of 5, 10, 20, and 40 km/s, both with and without an atmosphere present, were modeled.
A Novel Coarsening Method for Scalable and Efficient Mesh Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, A; Hysom, D; Gunney, B
2010-12-02
In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. This method reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a similar way to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing a simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large-scale scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in different iterative methods to solve a sparse system of linear equations. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry in the matrix, to allocate groups of vertices to processors in such a way that much of the matrix-vector multiplication can be performed locally on each processor, and hence to minimize communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation performed on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like the IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high quality partitions. This is an extremely challenging task, as to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communications and balance the load imposed on the processors. Our goals in this work are two-fold: (1) To develop a new scalable graph partitioning method with good load balancing and communication reduction capability.
(2) To study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a way similar to the conventional brick-laying technique, which reduces the number of neighboring blocks with which each block needs to communicate. The contributions of this research are as follows: (1) We have developed a novel method that scales to a very large problem size while producing high quality mesh partitions; (2) We measured the performance and scalability of the proposed method on a machine of massive size using a set of actual large complex data sets, where we have scaled to a mesh with 110 million zones using our method. To the best of our knowledge, this is the largest complex mesh that a partitioning method has been successfully applied to; and (3) We have shown that the proposed method can reduce the number of edge cuts by as much as 65%.
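As an illustration of the brick idea described above, the following is a minimal Python sketch (not the authors' implementation) that groups the nodes of a structured 2D mesh into fixed-size bricks, staggers alternate brick rows as in brick laying, and reports the edge cut of the resulting coarse assignment. The brick size, mesh dimensions, and staggering offset are illustrative assumptions.

```python
import numpy as np

def brick_assignment(nx, ny, bx=4, by=4):
    """Assign each node (i, j) of an nx-by-ny structured mesh to a brick.

    Alternate brick rows are offset by half a brick width, mimicking
    conventional brick laying so each brick touches fewer neighbors.
    """
    labels = np.empty((nx, ny), dtype=int)
    n_bricks_x = -(-nx // bx) + 1          # allow room for the staggering offset
    for i in range(nx):
        for j in range(ny):
            row = j // by
            offset = (bx // 2) if (row % 2) else 0
            col = (i + offset) // bx
            labels[i, j] = row * n_bricks_x + col
    return labels

def edge_cut(labels):
    """Count mesh edges whose two endpoints fall in different bricks."""
    cut = 0
    cut += np.count_nonzero(labels[1:, :] != labels[:-1, :])   # edges along the first index
    cut += np.count_nonzero(labels[:, 1:] != labels[:, :-1])   # edges along the second index
    return cut

labels = brick_assignment(64, 64, bx=8, by=8)
print("coarse bricks:", len(np.unique(labels)), "edge cut:", edge_cut(labels))
```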
Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.
Harrington, Peter de Boves
2018-01-02
Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is often neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. It was reviewed a decade earlier, but primarily in the context of optimizing chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
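A minimal Python sketch of the bootstrapped Latin partition idea described in the abstract: in each bootstrap, the objects are split into L disjoint partitions so that every object is validated exactly once, a figure of merit is computed per bootstrap, and an average FOM with a confidence interval is reported. The model (ridge regression), L, the number of bootstraps, and the FOM (RMSEP) are illustrative assumptions, not the author's exact protocol.

```python
import numpy as np

def bootstrapped_latin_partitions(X, y, fit, predict, L=4, n_boot=20, seed=0):
    """Return per-bootstrap figures of merit (RMSEP) from Latin-partition validation."""
    rng = np.random.default_rng(seed)
    n = len(y)
    foms = []
    for _ in range(n_boot):
        order = rng.permutation(n)
        folds = np.array_split(order, L)          # L disjoint partitions of the objects
        residuals = np.empty(n)
        for k in range(L):
            test = folds[k]
            train = np.concatenate([folds[m] for m in range(L) if m != k])
            model = fit(X[train], y[train])
            residuals[test] = y[test] - predict(model, X[test])
        foms.append(np.sqrt(np.mean(residuals ** 2)))   # each object validated once
    return np.array(foms)

# Toy example with a ridge-regression "chemometric" model on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=60)
fit = lambda Xt, yt: np.linalg.solve(Xt.T @ Xt + 0.1 * np.eye(Xt.shape[1]), Xt.T @ yt)
predict = lambda w, Xv: Xv @ w
foms = bootstrapped_latin_partitions(X, y, fit, predict)
print(f"RMSEP = {foms.mean():.3f} +/- {1.96 * foms.std(ddof=1) / np.sqrt(len(foms)):.3f}")
```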
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
Corzo, Gerald; Solomatine, Dimitri
2007-05-01
Natural phenomena are multistationary and are composed of a number of interacting processes, so one single model handling all processes often suffers from inaccuracies. A solution is to partition data in relation to such processes using the available domain knowledge or expert judgment, to train separate models for each of the processes, and to merge them in a modular model (committee). In this paper a problem of water flow forecast in watershed hydrology is considered where the flow process can be presented as consisting of two subprocesses -- base flow and excess flow, so that these two processes can be separated. Several approaches to data separation techniques are studied. Two case studies with different forecast horizons are considered. Parameters of the algorithms responsible for data partitioning are optimized using genetic algorithms and global pattern search. It was found that modularization of ANN models using domain knowledge makes models more accurate, if compared with a global model trained on the whole data set, especially when forecast horizon (and hence the complexity of the modelled processes) is increased.
NASA Astrophysics Data System (ADS)
Odabasi, Mustafa; Cetin, Eylem; Sofuoglu, Aysun
Octanol-air partition coefficients (KOA) for 14 polycyclic aromatic hydrocarbons (PAHs) were determined as a function of temperature using the gas chromatographic retention time method. log KOA values at 25 °C ranged over six orders of magnitude, between 6.34 (acenaphthylene) and 12.59 (dibenz[a,h]anthracene). The determined KOA values were within a factor of 0.7 (dibenz[a,h]anthracene) to 15.1 (benz[a]anthracene) of values calculated as the ratio of the octanol-water partition coefficient to the dimensionless Henry's law constant. Supercooled liquid vapor pressures (PL) of 13 PAHs were also determined using the gas chromatographic retention time technique. Activity coefficients in octanol calculated using KOA and PL ranged between 3.2 and 6.2, indicating near-ideal solution behavior. Atmospheric concentrations measured in this study in Izmir, Turkey were used to investigate the partitioning of PAHs between the particle and gas phases. Experimental gas-particle partition coefficients (Kp) were compared to the predictions of the KOA absorption and KSA (soot-air partition coefficient) models. The octanol-based absorptive partitioning model predicted lower partition coefficients, especially for relatively volatile PAHs. Ratios of measured/modeled partition coefficients ranged between 1.1 and 15.5 (4.5±6.0, average±SD) for the KOA model. KSA model predictions were relatively better, and measured-to-modeled ratios ranged between 0.6 and 5.6 (2.3±2.7, average±SD).
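The comparison made in the abstract between measured KOA and values estimated as KOW divided by the dimensionless Henry's law constant can be reproduced with a few lines of Python; the numerical inputs below (log KOW and Henry's law constant for a hypothetical PAH) are placeholders, not values from the study.

```python
import math

R = 8.314      # gas constant, J / (mol K)
T = 298.15     # temperature, K

def log_koa_from_kow(log_kow, H_pa_m3_per_mol, temperature=T):
    """Estimate log KOA as log(KOW / KAW), with KAW = H / (R T) the dimensionless Henry's law constant."""
    kaw = H_pa_m3_per_mol / (R * temperature)
    return log_kow - math.log10(kaw)

# Placeholder property values for a hypothetical PAH (not from the study).
log_kow = 5.6            # octanol-water partition coefficient (log10)
H = 4.0                  # Henry's law constant, Pa m^3 / mol
print(f"estimated log KOA = {log_koa_from_kow(log_kow, H):.2f}")
```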
Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu
2016-06-01
Octanol/water (K(OW)) and octanol/air (K(OA)) partition coefficients are two important physicochemical properties of organic substances. In current practice, K(OW) and K(OA) values of some polychlorinated biphenyls (PCBs) are measured using the generator column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative method for replacing or reducing experimental steps in the determination of K(OW) and K(OA). In this paper, two different methods, i.e., multiple linear regression based on Dragon descriptors and hologram quantitative structure-activity relationship, were used to predict generator-column-derived log K(OW) and log K(OA) values of PCBs. The predictive ability of the developed models was validated using a test set, and the performances of all generated models were compared with those of three previously reported models. All results indicated that the proposed models were robust and satisfactory and can thus be used as alternative models for the rapid assessment of the K(OW) and K(OA) of PCBs. Copyright © 2016 Elsevier Inc. All rights reserved.
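One of the two approaches named in the abstract, multiple linear regression on molecular descriptors, reduces to ordinary least squares; the sketch below fits and externally tests such a model on synthetic descriptor data (the descriptors and responses are invented for illustration and are not Dragon descriptors or PCB measurements).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "descriptors" (columns) and a log K(OW)-like response, for illustration only.
X_train, X_test = rng.normal(size=(40, 3)), rng.normal(size=(15, 3))
true_coef = np.array([0.8, -0.3, 1.5])
y_train = 5.0 + X_train @ true_coef + 0.1 * rng.normal(size=40)
y_test = 5.0 + X_test @ true_coef + 0.1 * rng.normal(size=15)

# Ordinary least squares with an intercept term.
A_train = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

# External validation on the held-out test set.
y_pred = np.column_stack([np.ones(len(X_test)), X_test]) @ coef
ss_res = np.sum((y_test - y_pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
print(f"test R^2 = {1 - ss_res / ss_tot:.3f}")
```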
NASA Astrophysics Data System (ADS)
Li, Yutong; Wang, Yuxin; Duffy, Alex H. B.
2014-11-01
Computer-based conceptual design for routine design has made great strides, yet non-routine design has not been given due attention, and it is still poorly automated. Considering that the function-behavior-structure (FBS) model is widely used for modeling the conceptual design process, a computer-based creativity-enhanced conceptual design model (CECD) for non-routine design of mechanical systems is presented. In the model, the leaf functions in the FBS model are decomposed into and represented with fine-grain basic operation actions (BOA), and the corresponding BOA set in the function domain is then constructed. Choosing building blocks from the database, and expressing their multiple functions with BOAs, the BOA set in the structure domain is formed. Through rule-based dynamic partition of the BOA set in the function domain, many variants of regenerated functional schemes are generated. To enhance the capability to introduce new design variables into the conceptual design process and to dig out more innovative physical structure schemes, the indirect function-structure matching strategy based on reconstructing the combined structure schemes is adopted. By adjusting the tightness of the partition rules and the granularity of the divided BOA subsets, and making full use of the main function and secondary functions of each basic structure in the process of reconstructing the physical structures, new design variables and variants are introduced into the physical structure scheme reconstruction process, and a great number of simpler physical structure schemes that accomplish the overall function organically are figured out. The creativity-enhanced conceptual design model presented has a dominant capability for introducing new design variables in the function domain and digging out simpler physical structures to accomplish the overall function; therefore it can be utilized to solve non-routine conceptual design problems.
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals - the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree - or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
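The following Python sketch illustrates the general idea of building such a tree from posterior co-assignment probabilities by repeatedly merging the pair of clusters whose minimum pairwise co-assignment probability is largest, recording that probability as the node height. This is a generic maximin-style agglomeration written for illustration; it is not guaranteed to reproduce the authors' exact linkage algorithm, and the probability matrix is a toy example.

```python
import numpy as np

def agglomerate(coassign):
    """Greedy agglomeration on a posterior co-assignment probability matrix.

    At each step, merge the two clusters whose worst-case (minimum) pairwise
    co-assignment probability is largest; that probability is the node height.
    Returns a list of (members_a, members_b, height) merge records.
    """
    clusters = [[i] for i in range(len(coassign))]
    merges = []
    while len(clusters) > 1:
        best = (-1.0, None, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                h = min(coassign[i, j] for i in clusters[a] for j in clusters[b])
                if h > best[0]:
                    best = (h, a, b)
        h, a, b = best
        merges.append((clusters[a][:], clusters[b][:], h))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges

# Toy 4-individual co-assignment matrix (illustrative numbers only).
P = np.array([[1.0, 0.9, 0.2, 0.1],
              [0.9, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.8],
              [0.1, 0.2, 0.8, 1.0]])
for a, b, h in agglomerate(P):
    print(f"merge {a} + {b} at posterior co-assignment {h:.2f}")
```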
Nonequilibrium partitioning during rapid solidification of SiAs alloys
NASA Astrophysics Data System (ADS)
Kittl, J. A.; Aziz, M. J.; Brunco, D. P.; Thompson, M. O.
1995-02-01
The velocity dependence of the partition coefficient was measured for rapid solidification of polycrystalline Si-4.5 at% As and Si-9 at% As alloys induced by pulsed laser melting. The results constitute the first test of partitioning models both for the high velocity regime and for non-dilute alloys. The continuous growth model (CGM) of Aziz and Kaplan fits the data well, but with an unusually low diffusive speed of 0.46 m/s. The data show negligible dependence of partitioning on concentration, also consistent with the CGM. The predictions of the Hillert-Sundman model are inconsistent with the partitioning results. Using the aperiodic stepwise growth model (ASGM) of Goldman and Aziz, an average over crystallographic orientations with parameters from independent single-crystal experiments is shown to be reasonably consistent with these polycrystalline partitioning results. The results, combined with others, indicate that the CGM without solute drag and its extension to lateral ledge motion, the ASGM, are the only models that fit the data for both solute partitioning and kinetic undercooling interface response functions. No current solute drag models can match both partitioning and undercooling measurements.
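For reference, the dilute-limit form of the continuous growth model without solute drag expresses the velocity-dependent partition coefficient as k(v) = (k_e + v/v_D) / (1 + v/v_D), with k_e the equilibrium partition coefficient and v_D the diffusive speed. The short sketch below evaluates this form using the 0.46 m/s diffusive speed quoted in the abstract; the equilibrium value k_e is an assumed illustrative number, not a value from the study.

```python
def cgm_partition_coefficient(v, k_e, v_d):
    """Dilute-limit continuous growth model (no solute drag): k(v) = (k_e + v/v_d) / (1 + v/v_d)."""
    beta = v / v_d
    return (k_e + beta) / (1.0 + beta)

v_d = 0.46          # diffusive speed from the abstract, m/s
k_e = 0.3           # placeholder equilibrium partition coefficient (illustrative)
for v in (0.1, 0.5, 1.0, 2.0, 5.0):   # solidification velocities, m/s
    print(f"v = {v:4.1f} m/s  ->  k = {cgm_partition_coefficient(v, k_e, v_d):.3f}")
```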
The SPARC vapor pressure and activity coefficient models were coupled to estimate Henry’s Law Constant (HLC) in water and in hexadecane for a wide range of non-polar and polar solute organic compounds without modification to/or additional parameterization of the vapor pressure or...
ERIC Educational Resources Information Center
McCain, Daniel F.; Allgood, Ottie E.; Cox, Jacob T.; Falconi, Audrey E.; Kim, Michael J.; Shih, Wei-Yu
2012-01-01
Only a few pedagogical experiments have been published dealing specifically with the hydrophobic interaction though it plays a central role in biochemistry. A set of experiments is presented in which students partition a variety of colorful indicator dyes in biphasic water/organic solvent mixtures. Students monitor the partitioning visually and…
Lost in the supermarket: Quantifying the cost of partitioning memory sets in hybrid search.
Boettcher, Sage E P; Drew, Trafton; Wolfe, Jeremy M
2018-01-01
The items on a memorized grocery list are not relevant in every aisle; for example, it is useless to search for the cabbage in the cereal aisle. It might be beneficial if one could mentally partition the list so only the relevant subset was active, so that vegetables would be activated in the produce section. In four experiments, we explored observers' abilities to partition memory searches. For example, if observers held 16 items in memory, but only eight of the items were relevant, would response times resemble a search through eight or 16 items? In Experiments 1a and 1b, observers were not faster for the partition set; however, they suffered relatively small deficits when "lures" (items from the irrelevant subset) were presented, indicating that they were aware of the partition. In Experiment 2 the partitions were based on semantic distinctions, and again, observers were unable to restrict search to the relevant items. In Experiments 3a and 3b, observers attempted to remove items from the list one trial at a time but did not speed up over the course of a block, indicating that they also could not limit their memory searches. Finally, Experiments 4a, 4b, 4c, and 4d showed that observers were able to limit their memory searches when a subset was relevant for a run of trials. Overall, observers appear to be unable or unwilling to partition memory sets from trial to trial, yet they are capable of restricting search to a memory subset that remains relevant for several trials. This pattern is consistent with a cost to switching between currently relevant memory items.
NASA Astrophysics Data System (ADS)
Krieger, Ulrich; Marcolli, Claudia; Siegrist, Franziska
2015-04-01
The production of secondary organic aerosol (SOA) by gas-to-particle partitioning is generally represented by an equilibrium partitioning model. A key physical parameter which governs gas-particle partitioning is the pure component vapor pressure, which is difficult to measure for low- and semivolatile compounds. For typical atmospheric compounds such as citric acid or tartaric acid, vapor pressures have been reported in the literature which differ by up to six orders of magnitude [Huisman et al., 2013]. Here, we report vapor pressures of a homologous series of polyethylene glycols (triethylene glycol to octaethylene glycol) determined by measuring the evaporation rate of single, levitated aerosol particles in an electrodynamic balance. We propose to use those as a reference data set for validating different vapor pressure measurement techniques. With each addition of an (O-CH2-CH2) group the vapor pressure is lowered by about one order of magnitude, which makes it easy to detect the lower limit of vapor pressures accessible with a particular technique, down to a pressure of 10⁻⁸ Pa at room temperature. Reference: Huisman, A. J., Krieger, U. K., Zuend, A., Marcolli, C., and Peter, T., Atmos. Chem. Phys., 13, 6647-6662, 2013.
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km²) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
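A minimal sketch of the load-balancing step implied by the sub-basin partitioning described above: sub-basins, each with an assumed per-sub-basin cost such as the number of TIN elements, are assigned to processors with a longest-processing-time greedy heuristic. The cost values and processor count are invented for illustration; the actual tRIBS decomposition also honors the stream reach graph, which this sketch ignores.

```python
import heapq

def assign_subbasins(costs, n_proc):
    """Greedy longest-processing-time assignment of sub-basin costs to processors."""
    heap = [(0, p, []) for p in range(n_proc)]          # (load, processor id, sub-basins)
    heapq.heapify(heap)
    for basin, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        load, p, members = heapq.heappop(heap)          # least-loaded processor so far
        members.append(basin)
        heapq.heappush(heap, (load + cost, p, members))
    return sorted(heap, key=lambda t: t[1])

# Hypothetical sub-basin costs (e.g., number of TIN nodes per sub-basin).
costs = {"SB1": 1200, "SB2": 800, "SB3": 650, "SB4": 400, "SB5": 300, "SB6": 250}
for load, p, members in assign_subbasins(costs, n_proc=3):
    print(f"processor {p}: load={load:5d}  sub-basins={members}")
```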
Analysis of flight data from a High-Incidence Research Model by system identification methods
NASA Technical Reports Server (NTRS)
Batterson, James G.; Klein, Vladislav
1989-01-01
Data partitioning and modified stepwise regression were applied to recorded flight data from a Royal Aerospace Establishment high incidence research model. An aerodynamic model structure and corresponding stability and control derivatives were determined for angles of attack between 18 and 30 deg. Several nonlinearities in angles of attack and sideslip as well as a unique roll-dominated set of lateral modes were found. All flight estimated values were compared to available wind tunnel measurements.
Statistical model of a flexible inextensible polymer chain: The effect of kinetic energy.
Pergamenshchik, V M; Vozniak, A B
2017-01-01
Because of the holonomic constraints, the kinetic energy contribution in the partition function of an inextensible polymer chain is difficult to find, and it has been systematically ignored. We present the first thermodynamic calculation incorporating the kinetic energy of an inextensible polymer chain with the bending energy. To explore the effect of the translation-rotation degrees of freedom, we propose and solve a statistical model of a fully flexible chain of N+1 linked beads which, in the limit of smooth bending, is equivalent to the well-known wormlike chain model. The partition function with the kinetic and bending energies and correlations between orientations of any pair of links and velocities of any pair of beads are found. This solution is precise in the limits of small and large rigidity-to-temperature ratio b/T. The last exact solution is essential as even very "harmless" approximation results in loss of the important effects when the chain is very rigid. For very high b/T, the orientations of different links become fully correlated. Nevertheless, the chain does not go over into a hard rod even in the limit b/T→∞: While the velocity correlation length diverges, the correlations themselves remain weak and tend to the value ∝T/(N+1). The N dependence of the partition function is essentially determined by the kinetic energy contribution. We demonstrate that to obtain the correct energy and entropy in a constrained system, the T derivative of the partition function has to be applied before integration over the constraint-setting variable.
Can Moral Hazard Be Resolved by Common-Knowledge in S4n-Knowledge?
NASA Astrophysics Data System (ADS)
Matsuhisa, Takashi
This article investigates the relationship between common-knowledge and agreement in a multi-agent system, and applies the agreement result obtained by common-knowledge to the principal-agent model under non-partition information. We treat two problems: (1) how we capture, from an epistemic point of view, the fact that the agents agree on an event or reach consensus on it, and (2) how the agreement theorem can make progress toward settling a moral hazard problem in the principal-agents model under non-partition information. We propose a solution program for the moral hazard in the principal-agents model under non-partition information by common-knowledge. We start from the assumption that the agents have the knowledge structure induced from a reflexive and transitive relation associated with the multi-modal logic S4n. Each agent obtains the membership value of an event under his/her private information, so he/she regards the event as a fuzzy set. Specifically, consider the situation in which the agents commonly know all membership values of the other agents. In this circumstance we show the agreement theorem that consensus on the membership values among all agents can still be guaranteed. Furthermore, under certain assumptions we show that the moral hazard can be resolved in the principal-agent model when all the expected marginal costs are common-knowledge among the principal and the agents.
Effect of video server topology on contingency capacity requirements
NASA Astrophysics Data System (ADS)
Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.
1996-03-01
Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
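The "cost of partitioning" comparison described above can be illustrated with the classic Erlang-B blocking formula; the abstract only says a telephone-system blocking model was used, so treating it as Erlang-B, and the capacity and load numbers below, are assumptions. A single pool of stream slots blocks fewer requests than the same capacity split into independent partitions each serving half the demand.

```python
def erlang_b(servers, offered_load):
    """Blocking probability for `servers` channels offered `offered_load` Erlangs (recursive Erlang-B)."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

total_streams, total_load = 200, 180.0   # illustrative video-server capacity and demand

monolithic = erlang_b(total_streams, total_load)
# Two equal partitions, each serving half the demand with half the capacity.
partitioned = erlang_b(total_streams // 2, total_load / 2)

print(f"monolithic blocking : {monolithic:.4f}")
print(f"partitioned blocking: {partitioned:.4f}")
```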
Bezold, Franziska; Weinberger, Maria E; Minceva, Mirjana
2017-03-31
Tocopherols are a class of molecules with vitamin E activity. Among those, α-tocopherol is the most important vitamin E source in the human diet. The purification of tocopherols involving biphasic liquid systems can be challenging since these vitamins are poorly soluble in water. Deep eutectic solvents (DES) can be used to form water-free biphasic systems and have already proven applicable for centrifugal partition chromatography separations. In this work, a computational solvent system screening was performed using the predictive thermodynamic model COSMO-RS. Liquid-liquid equilibria of solvent systems composed of alkanes, alcohols and DES, as well as partition coefficients of α-tocopherol, β-tocopherol, γ-tocopherol, and δ-tocopherol in these biphasic solvent systems were calculated. From the results the best suited biphasic solvent system, namely heptane/ethanol/choline chloride-1,4-butanediol, was chosen and a batch injection of a tocopherol mixture, mainly consisting of α- and γ-tocopherol, was performed using a centrifugal partition chromatography setup (SCPE 250-BIO). A separation factor of 1.74 was achieved for α- and γ-tocopherol. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Reygondeau, Gabriel; Olivier Irisson, Jean; Guieu, Cecile; Gasparini, Stephane; Ayata, Sakina; Koubbi, Philippe
2013-04-01
In recent decades, it has been found useful to ecoregionalise the pelagic environment, assuming that within each partition environmental conditions are distinguishable and unique. Each proposed partition of the ocean aims to delineate the main oceanographic and ecological patterns in order to provide a geographical framework of marine ecosystems for ecological studies and management purposes. The aim of the present work is to integrate and process existing data on the pelagic environment of the Mediterranean Sea in order to define biogeochemical regions. Open access databases including remote sensing observations, oceanographic campaign data and physical modeling simulations are used. These various datasets allow the multidisciplinary view required to understand the interactions between climate and Mediterranean marine ecosystems. The first step of our study consisted of a statistical selection of a set of crucial environmental factors to propose the most parsimonious biogeographical approach that allows detecting the main oceanographic structure of the Mediterranean Sea. Second, based on the identified set of environmental parameters, both non-hierarchical and hierarchical clustering algorithms have been tested. Outputs from each methodology are then inter-compared to propose a robust map of the biotopes (unique ranges of environmental parameters) of the area. Each biotope was then modeled using a non-parametric environmental niche method to infer a dynamic biogeochemical partition. Last, the seasonal, inter-annual and long-term spatial changes of each biogeochemical region were investigated. The future of this work will be to perform a second partition to subdivide the biogeochemical regions according to biotic features of the Mediterranean Sea (ecoregions). This second level of division will thus be used as a geographical framework to identify ecosystems that have been altered by human activities (i.e. pollution, fishery, invasive species) for the European project PERSEUS (Protecting EuRopean Seas and borders through the intelligent Use of Surveillance) and the French program MERMEX (Marine Ecosystems Response in the Mediterranean Experiment).
Partition of nonionic organic compounds in aquatic systems
Smith, James A.; Witkowski, Patrick J.; Chiou, Cary T.
1988-01-01
In aqueous systems, the distribution of many nonionic organic solutes in soil-sediment, aquatic organisms, and dissolved organic matter can be explained in terms of a partition model. The nonionic organic solute is distributed between water and different organic phases that behave as bulk solvents. Factors such as polarity, composition, and molecular size of the solute and organic phase determine the relative importance of partition to the environmental distribution of the solute. This chapter reviews these factors in the context of a partition model and also examines several environmental applications of the partition model for surface- and ground-water systems.
Partitioning an object-oriented terminology schema.
Gu, H; Perl, Y; Halper, M; Geller, J; Kuo, F; Cimino, J J
2001-07-01
Controlled medical terminologies are increasingly becoming strategic components of various healthcare enterprises. However, the typical medical terminology can be difficult to exploit due to its extensive size and high density. The schema of a medical terminology offered by an object-oriented representation is a valuable tool in providing an abstract view of the terminology, enhancing comprehensibility and making it more usable. However, schemas themselves can be large and unwieldy. We present a methodology for partitioning a medical terminology schema into manageably sized fragments that promote increased comprehension. Our methodology has a refinement process for the subclass hierarchy of the terminology schema. The methodology is carried out by a medical domain expert in conjunction with a computer. The expert is guided by a set of three modeling rules, which guarantee that the resulting partitioned schema consists of a forest of trees. This makes it easier to understand and consequently use the medical terminology. The application of our methodology to the schema of the Medical Entities Dictionary (MED) is presented.
Experimental constraints on the sulfur content in the Earth's core
NASA Astrophysics Data System (ADS)
Fei, Y.; Huang, H.; Leng, C.; Hu, X.; Wang, Q.
2015-12-01
Any core formation model would lead to the incorporation of sulfur (S) into the Earth's core, based on cosmochemical/geochemical constraints, sulfur's chemical affinity for iron (Fe), and the low eutectic melting temperature in the Fe-FeS system. Preferential partitioning of S into the melt also provides a petrologic constraint on the density difference between the liquid outer and solid inner cores. Therefore, the central issue is to constrain the amount of sulfur in the core. Geochemical constraints usually place 2-4 wt.% S in the core after accounting for its volatility, whereas more S is allowed in models based on mineral physics data. Here we re-examine the constraints on the S content in the core using both petrologic and mineral physics data. We have measured S partitioning between solid and liquid iron in the multi-anvil apparatus and the laser-heated diamond anvil cell, evaluating the effect of pressure on melting temperature and partition coefficient. In addition, we have conducted shockwave experiments on Fe-11.8wt%S using a two-stage light gas gun up to 211 GPa. The new shockwave experiments yield Hugoniot densities and the longitudinal sound velocities. The measurements provide the longitudinal sound velocity before melting and the bulk sound velocity of the liquid. The measured sound velocities clearly show melting of the Fe-FeS mix with 11.8wt%S at a pressure between 111 and 129 GPa. The sound velocities at pressures above 129 GPa represent the bulk sound velocities of Fe-11.8wt%S liquid. The combined data set including density, sound velocity, melting temperature, and S partitioning places a tight constraint on the required sulfur partition coefficient to produce the density and velocity jumps and the bulk sulfur content in the core.
NASA Astrophysics Data System (ADS)
Toropov, Andrey A.; Toropova, Alla P.
2018-06-01
A predictive model of logP for Pt(II) and Pt(IV) complexes, built up with the Monte Carlo method using the CORAL software, has been validated with six different splits into training and validation sets. The improvement of the predictive potential of the models for the six splits has been obtained using the so-called index of ideality of correlation. The suggested models make it possible to extract molecular features that cause an increase or, vice versa, a decrease of logP.
Barillot, Romain; Louarn, Gaëtan; Escobar-Gutiérrez, Abraham J; Huynh, Pierre; Combes, Didier
2011-10-01
Most studies dealing with light partitioning in intercropping systems have used statistical models based on the turbid medium approach, thus assuming homogeneous canopies. However, these models could not be directly validated although spatial heterogeneities could arise in such canopies. The aim of the present study was to assess the ability of the turbid medium approach to accurately estimate light partitioning within grass-legume mixed canopies. Three contrasted mixtures of wheat-pea, tall fescue-alfalfa and tall fescue-clover were sown according to various patterns and densities. Three-dimensional plant mock-ups were derived from magnetic digitizations carried out at different stages of development. The benchmarks for light interception efficiency (LIE) estimates were provided by the combination of a light projective model and plant mock-ups, which also provided the inputs of a turbid medium model (SIRASCA), i.e. leaf area index and inclination. SIRASCA was set to gradually account for vertical heterogeneity of the foliage, i.e. the canopy was described as one, two or ten horizontal layers of leaves. Mixtures exhibited various and heterogeneous profiles of foliar distribution, leaf inclination and component species height. Nevertheless, most of the LIE was satisfactorily predicted by SIRASCA. Biased estimations were, however, observed for (1) grass species and (2) tall fescue-alfalfa mixtures grown at high density. Most of the discrepancies were due to vertical heterogeneities and were corrected by increasing the vertical description of canopies although, in practice, this would require time-consuming measurements. The turbid medium analogy could be successfully used in a wide range of canopies. However, a more detailed description of the canopy is required for mixtures exhibiting vertical stratifications and inter-/intra-species foliage overlapping. Architectural models remain a relevant tool for studying light partitioning in intercropping systems that exhibit strong vertical heterogeneities. Moreover, these models offer the possibility to integrate the effects of microclimate variations on plant growth.
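As context for the turbid-medium approach discussed above, a common homogeneous-canopy formulation treats total light interception as 1 - exp(-Σ k_i LAI_i) and partitions it among species in proportion to k_i LAI_i. The sketch below implements this generic formulation; it is not the SIRASCA model, and the extinction coefficients and leaf area indices are invented values.

```python
import math

def turbid_medium_partition(components):
    """Partition intercepted light among mixed species in a homogeneous canopy.

    `components` maps species name -> (extinction coefficient k, leaf area index LAI).
    Total interception: 1 - exp(-sum k_i * LAI_i); species share proportional to k_i * LAI_i.
    """
    k_lai = {name: k * lai for name, (k, lai) in components.items()}
    total_klai = sum(k_lai.values())
    total_intercepted = 1.0 - math.exp(-total_klai)
    return {name: total_intercepted * w / total_klai for name, w in k_lai.items()}

# Illustrative wheat-pea mixture (extinction coefficients and LAIs are assumed values).
mixture = {"wheat": (0.5, 2.0), "pea": (0.7, 1.5)}
for species, lie in turbid_medium_partition(mixture).items():
    print(f"{species}: light interception efficiency = {lie:.2f}")
```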
Yang, Senpei; Li, Lingyi; Chen, Tao; Han, Lujia; Lian, Guoping
2018-05-14
Sebum is an important shunt pathway for transdermal permeation and targeted delivery, but there have been limited studies on its permeation properties. Here we report a measurement and modelling study of solute partition to artificial sebum. Equilibrium experiments were carried out for the sebum-water partition coefficients of 23 neutral, cationic and anionic compounds at different pH. Sebum-water partition coefficients not only depend on the hydrophobicity of the chemical but also on pH. As pH increases from 4.2 to 7.4, the partition of cationic chemicals to sebum increased rapidly. This appears to be due to increased electrostatic attraction between the cationic chemical and the fatty acids in sebum. Whereas for anionic chemicals, their sebum partition coefficients are negligibly small, which might result from their electrostatic repulsion to fatty acids. Increase in pH also resulted in a slight decrease of sebum partition of neutral chemicals. Based on the observed pH impact on the sebum-water partition of neutral, cationic and anionic compounds, a new quantitative structure-property relationship (QSPR) model has been proposed. This mathematical model considers the hydrophobic interaction and electrostatic interaction as the main mechanisms for the partition of neutral, cationic and anionic chemicals to sebum.
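To illustrate the pH dependence discussed above, the sketch below combines the Henderson-Hasselbalch ionization fraction with a simple two-species mixing rule for an apparent sebum-water distribution coefficient. The mixing rule, pKa, and the neutral/ionic partition coefficients are illustrative assumptions, not the authors' QSPR model; in particular, this sketch captures only speciation and not the electrostatic attraction of cations to ionized fatty acids that the abstract emphasizes.

```python
def fraction_ionized(pH, pKa, base=True):
    """Henderson-Hasselbalch ionized fraction (cationic for a base, anionic for an acid)."""
    return 1.0 / (1.0 + 10 ** (pH - pKa)) if base else 1.0 / (1.0 + 10 ** (pKa - pH))

def apparent_sebum_water_K(pH, pKa, K_neutral, K_ionized, base=True):
    """Two-species mixing rule for the apparent distribution coefficient (illustrative only)."""
    f_ion = fraction_ionized(pH, pKa, base)
    return (1.0 - f_ion) * K_neutral + f_ion * K_ionized

# Hypothetical basic (cationic) solute: all numbers below are illustrative, not measured values.
pKa, K_neutral, K_ionized = 9.0, 50.0, 5.0
for pH in (4.2, 5.5, 7.4):
    K = apparent_sebum_water_K(pH, pKa, K_neutral, K_ionized)
    print(f"pH {pH}: apparent sebum-water K = {K:.1f}")
```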
Scoring and staging systems using cox linear regression modeling and recursive partitioning.
Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H
2006-01-01
Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
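For readers unfamiliar with the classical benchmark used above, a short Python sketch of the partition problem: split a set of integers into two subsets whose sums are as equal as possible. The brute-force search below is exponential in the set size, which is the hardness the adiabatic algorithm confronts; the example numbers are arbitrary.

```python
from itertools import combinations

def best_partition(numbers):
    """Exhaustively find the split of `numbers` into two subsets minimizing the sum difference."""
    total = sum(numbers)
    best = (total, set())
    items = list(numbers)
    for r in range(len(items) // 2 + 1):
        for subset in combinations(range(len(items)), r):
            s = sum(items[i] for i in subset)
            diff = abs(total - 2 * s)
            if diff < best[0]:
                best = (diff, set(subset))
    diff, idx = best
    a = [items[i] for i in idx]
    b = [items[i] for i in range(len(items)) if i not in idx]
    return a, b, diff

a, b, diff = best_partition([8, 7, 6, 5, 4])   # arbitrary instance
print(f"subset A = {a}, subset B = {b}, residual difference = {diff}")
```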
Optimal partitioning of random programs across two processors
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
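The approximation criticized in the abstract, replacing the expected maximum of independent random variables by the maximum of their expectations, can be probed with a quick Monte Carlo check; the distribution (exponential module execution times) and the number of modules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_modules, n_samples = 8, 100_000

# Independent exponential "module execution times" with unit mean (illustrative choice).
times = rng.exponential(scale=1.0, size=(n_samples, n_modules))

expected_max = times.max(axis=1).mean()     # E[max_i X_i]
max_expectation = times.mean(axis=0).max()  # max_i E[X_i]

print(f"E[max X_i]   ~ {expected_max:.3f}")
print(f"max_i E[X_i] ~ {max_expectation:.3f}")
print("the approximation understates the expected makespan by",
      f"{expected_max - max_expectation:.3f}")
```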
Mode entanglement of Gaussian fermionic states
NASA Astrophysics Data System (ADS)
Spee, C.; Schwaiger, K.; Giedke, G.; Kraus, B.
2018-04-01
We investigate the entanglement of n-mode n-partite Gaussian fermionic states (GFS). First, we identify a reasonable definition of separability for GFS and derive a standard form for mixed states, to which any state can be mapped via Gaussian local unitaries (GLU). As the standard form is unique, two GFS are equivalent under GLU if and only if their standard forms coincide. Then, we investigate the important class of local operations assisted by classical communication (LOCC). These are central in entanglement theory as they allow one to partially order the entanglement contained in states. We show, however, that there are no nontrivial Gaussian LOCC (GLOCC) among pure n-partite (fully entangled) states. That is, any such GLOCC transformation can also be accomplished via GLU. To obtain further insight into the entanglement properties of such GFS, we investigate the richer class of Gaussian stochastic local operations assisted by classical communication (SLOCC). We characterize Gaussian SLOCC classes of pure n-mode n-partite states and derive them explicitly for few-mode states. Furthermore, we consider certain fermionic LOCC and show how to identify the maximally entangled set of pure n-mode n-partite GFS, i.e., the minimal set of states having the property that any other state can be obtained from one state inside this set via fermionic LOCC. We generalize these findings also to the pure m-mode n-partite (for m > n) case.
The conventional Junge-Pankow adsorption model uses the sub-cooled liquid vapor pressure (pLo) as a correlation parameter for gas/particle interactions. An alternative is the octanol-air partition coefficient (Koa) absorption model. Log-log plots of the particle-gas partition c...
Calvetti, Daniela; Cheng, Yougan; Somersalo, Erkki
2016-12-01
Identifying feasible steady state solutions of a brain energy metabolism model is an inverse problem that allows infinitely many solutions. The characterization of the non-uniqueness, or the uncertainty quantification of the flux balance analysis, is tantamount to identifying the degrees of freedom of the solution. The degrees of freedom of multi-compartment mathematical models for energy metabolism of a neuron-astrocyte complex may offer a key to understand the different ways in which the energetic needs of the brain are met. In this paper we study the uncertainty in the solution, using techniques of linear algebra to identify the degrees of freedom in a lumped model, and Markov chain Monte Carlo methods in its extension to a spatially distributed case. The interpretation of the degrees of freedom in metabolic terms, more specifically, glucose and oxygen partitioning, is then leveraged to derive constraints on the free parameters to guarantee that the model is energetically feasible. We demonstrate how the model can be used to estimate the stoichiometric energy needs of the cells as well as the household energy based on the measured oxidative cerebral metabolic rate of glucose and glutamate cycling. Moreover, our analysis shows that in the lumped model the net direction of lactate dehydrogenase (LDH) in the cells can be deduced from the glucose partitioning between the compartments. The extension of the lumped model to a spatially distributed multi-compartment setting that includes diffusion fluxes from capillary to tissue increases the number of degrees of freedom, requiring the use of statistical sampling techniques. The analysis of the distributed model reveals that some of the conclusions valid for the spatially lumped model, e.g., concerning the LDH activity and glucose partitioning, may no longer hold.
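The linear-algebra step mentioned above, identifying the degrees of freedom of a flux balance model as the null space of the stoichiometric matrix under steady state, can be sketched in a few lines of numpy; the small stoichiometric matrix below is a made-up example, not the neuron-astrocyte model of the paper.

```python
import numpy as np

def nullspace(S, tol=1e-10):
    """Orthonormal basis of the null space of S (steady-state flux directions)."""
    _, s, vt = np.linalg.svd(S)
    rank = np.sum(s > tol)
    return vt[rank:].T          # columns span {v : S v = 0}

# Toy stoichiometric matrix (rows = metabolites, columns = fluxes); illustrative only.
S = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1, -1],
              [ 0,  0,  1, -1]], dtype=float)

N = nullspace(S)
print("degrees of freedom (dimension of the null space):", N.shape[1])
print("basis of the steady-state flux space:\n", np.round(N, 3))
```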
Raster Data Partitioning for Supporting Distributed GIS Processing
NASA Astrophysics Data System (ADS)
Nguyen Thai, B.; Olasz, A.
2015-08-01
In the geospatial sector, the big data concept has also already had an impact. Several studies apply techniques that originated in computer science to GIS processing of huge amounts of geospatial data. In other research studies, geospatial data is treated as if it had always been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, in terms not only of the amount but also of the spectral, spatial and temporal resolution of the raw data. A significant portion of big data is geospatial data, and the size of such data is growing rapidly, by at least 20% every year (Dasgupta, 2013). Of the ever-increasing volume of raw data, in different formats, representations and purposes, only the information derived from these data sets represents valuable results. However, computing capability and processing speed face limitations, even when semi-automatic or automatic procedures are aimed at complex geospatial data (Kristóf et al., 2014). Lately, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing approaches. Cloud computing furthermore requires appropriate processing algorithms to be distributed in order to handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capability to process non-GIS big data. But it is sometimes inconvenient or inefficient to rewrite existing algorithms for the Map-Reduce programming model, and GIS data cannot be partitioned like text-based data by lines or bytes. Hence, we would like to find an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting them, or with only minor modifications. This paper gives a technical overview of currently available distributed computing environments, as well as of GIS (raster) data partitioning, distribution and distributed processing of GIS algorithms. A proof-of-concept implementation has been made for raster data partitioning, distribution and processing. The first performance results have been compared against the commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods depend heavily on the application area; therefore we may consider data partitioning as a preprocessing step before applying processing services to the data. As a proof of concept we have implemented a simple tile-based partitioning method that splits an image into smaller grids (NxM tiles) and compared the processing time against existing methods using an NDVI calculation. The concept is demonstrated using our own open-source processing framework.
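A minimal numpy sketch of the tile-based partitioning and per-tile NDVI calculation described above: the raster is split into an N×M grid of tiles that could be processed independently and then re-assembled. The array shapes and the choice of numpy are assumptions; the authors' framework is not reproduced here.

```python
import numpy as np

def split_tiles(raster, n, m):
    """Split a 2D raster into an n-by-m grid of tiles (edge tiles may be smaller)."""
    return [np.array_split(row_block, m, axis=1)
            for row_block in np.array_split(raster, n, axis=0)]

def ndvi(red, nir):
    """Per-pixel NDVI = (NIR - RED) / (NIR + RED), guarding against division by zero."""
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# Synthetic red and near-infrared bands standing in for a large scene.
rng = np.random.default_rng(0)
red = rng.uniform(0.0, 0.3, size=(1000, 1200))
nir = rng.uniform(0.2, 0.8, size=(1000, 1200))

# Partition both bands into 4x4 tiles, process each tile independently, re-assemble.
red_tiles, nir_tiles = split_tiles(red, 4, 4), split_tiles(nir, 4, 4)
ndvi_rows = [np.hstack([ndvi(r, n) for r, n in zip(rrow, nrow)])
             for rrow, nrow in zip(red_tiles, nir_tiles)]
result = np.vstack(ndvi_rows)
assert result.shape == red.shape
print("mean NDVI:", float(result.mean()))
```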
Task-specific image partitioning.
Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D
2013-02-01
Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used for performing high-level computer vision tasks. Typically, partitioning is conducted without regard to the task in hand. We propose a task-specific image partitioning framework to produce a region-based image representation that will lead to a higher task performance than that reached using any task-oblivious partitioning framework and existing supervised partitioning framework, albeit few in number. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function that define task-specific similarity/dissimilarity among superpixels are estimated based on structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to a better generalization ability while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by the state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.
W.J. Mattson; R. Julkunen-Tiitto; D.A. Herms
2005-01-01
Rising levels of atmospheric CO2 can alter plant growth and partitioning to secondary metabolites. The protein competition model (PCM) and the extended growth/differentiation balance model (GDBe) are similar but alternative models that address ontogenetic and environmental effects on whole-plant carbon partitioning to the...
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the bit stream that is stored or transmitted. I applied it to the compression of multichannel ECG data. Also, I present a specific procedure based on the modified algorithm for more efficient compression of multichannel ECG data. This method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results regarding the compression of multichannel ECG data. Furthermore, in order to compress a signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.
Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.
2010-01-01
Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993, with a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
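A minimal sketch of the DIC partitioning idea (pointwise deviance contributions plus pointwise effective parameters), computed from MCMC draws for a toy normal model. The decomposition DIC_i = Dbar_i + pD_i with pD_i = Dbar_i - D_i(theta_bar) follows the standard DIC definitions, but the model and data here are invented and this is not the authors' spatial leverage/residual partition.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data and stand-in "posterior draws" of a normal mean (sigma known = 1).
y = rng.normal(loc=2.0, scale=1.0, size=30)
theta_draws = rng.normal(loc=y.mean(), scale=1.0 / np.sqrt(len(y)), size=5000)

def deviance_i(y_i, theta):
    """Pointwise deviance -2 log N(y_i | theta, 1)."""
    return np.log(2 * np.pi) + (y_i - theta) ** 2

# Pointwise posterior-mean deviance and deviance at the posterior mean.
theta_bar = theta_draws.mean()
dbar_i = np.array([deviance_i(y_i, theta_draws).mean() for y_i in y])
dhat_i = deviance_i(y, theta_bar)

pd_i = dbar_i - dhat_i                 # local effective number of parameters
local_dic = dbar_i + pd_i              # local DIC contributions
print("total DIC :", local_dic.sum())
print("total pD  :", pd_i.sum())
```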
Centrifuge models simulating magma emplacement during oblique rifting
NASA Astrophysics Data System (ADS)
Corti, Giacomo; Bonini, Marco; Innocenti, Fabrizio; Manetti, Piero; Mulugeta, Genene
2001-07-01
A series of centrifuge analogue experiments have been performed to model the mechanics of continental oblique extension (in the range of 0° to 60°) in the presence of underplated magma at the base of the continental crust. The experiments reproduced the main characteristics of oblique rifting, such as (1) en-echelon arrangement of structures, (2) mean fault trends oblique to the extension vector, (3) strain partitioning between different sets of faults and (4) fault dips higher than in purely normal faults (e.g. Tron, V., Brun, J.-P., 1991. Experiments on oblique rifting in brittle-ductile systems. Tectonophysics 188, 71-84). The model results show that the pattern of deformation is strongly controlled by the angle of obliquity (α), which determines the ratio between the shearing and stretching components of movement. For α⩽35°, the deformation is partitioned between oblique-slip and normal faults, whereas for α⩾45° a strain partitioning arises between oblique-slip and strike-slip faults. The experimental results show that for α⩽35°, there is a strong coupling between deformation and the underplated magma: the presence of magma determines a strain localisation and a reduced strain partitioning; deformation, in turn, focuses magma emplacement. Magmatic chambers form in the core of lower crust domes with an oblique trend to the initial magma reservoir and, in some cases, an en-echelon arrangement. Typically, intrusions show an elongated shape with a high length/width ratio. In nature, this pattern is expected to result in magmatic and volcanic belts oblique to the rift axis and arranged en-echelon, in agreement with some selected natural examples of continental rifts (i.e. Main Ethiopian Rift) and oceanic ridges (i.e. Mohns and Reykjanes Ridges).
Hernandez, J E; Epstein, L D; Rodriguez, M H; Rodriguez, A D; Rejmankova, E; Roberts, D R
1997-03-01
We propose the use of generalized tree models (GTMs) to analyze data from entomological field studies. Generalized tree models can be used to characterize environments with different mosquito breeding capacity. A GTM simultaneously analyzes a set of predictor variables (e.g., vegetation coverage) in relation to a response variable (e.g., counts of Anopheles albimanus larvae), and how it varies with respect to a set of criterion variables (e.g., presence of predators). The algorithm produces a treelike graphical display with its root at the top and 2 branches stemming down from each node. At each node, conditions on the value of predictors partition the observations into subgroups (environments) in which the relation between response and criterion variables is most homogeneous.
Space Partitioning for Privacy Enabled 3D City Models
NASA Astrophysics Data System (ADS)
Filippovska, Y.; Wichmann, A.; Kada, M.
2016-10-01
Due to recent technological progress, the capture and processing of highly detailed 3D data has become widespread. Despite all the prospects of potential uses, data that include personal living spaces and public buildings can also be considered a serious intrusion into people's privacy and a threat to security. This becomes especially critical if the data are visible to the general public. Thus, a compromise is needed between open access to data and privacy requirements, which can be very different for each application. As privacy is a complex and multifaceted topic, the focus of this work lies particularly on the visualization of 3D urban data sets. For the purpose of privacy enabled visualizations of 3D city models, we propose to partition the (living) spaces into privacy regions, each featuring its own level of anonymity. Within each region, the depicted 2D and 3D geometry and imagery are anonymized with cartographic generalization techniques. The underlying spatial partitioning is realized as a 2D map generated as a straight skeleton of the open space between buildings. The resulting privacy cells are then merged according to the privacy requirements associated with each building to form larger regions, their borderlines are smoothed, and transition zones are established between privacy regions to achieve a harmonious visual appearance. We demonstrate by example how the proposed method generates privacy enabled 3D city models.
Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.
ERIC Educational Resources Information Center
Clymer, S. J.
Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…
Monkey search algorithm for ECE components partitioning
NASA Astrophysics Data System (ADS)
Kuliev, Elmar; Kureichik, Vladimir; Kureichik, Vladimir, Jr.
2018-05-01
The paper considers one of the important design problems: the partitioning of electronic computer equipment (ECE) components (blocks). The problem belongs to the NP-hard class and has a combinatorial and logical nature. In the paper, the partitioning problem is formulated as the partition of a graph into parts. To solve the problem, the authors suggest using a bioinspired approach based on a monkey search algorithm. Computational experiments carried out with the developed software show the algorithm's efficiency and identify recommended settings for obtaining more effective solutions in comparison with a genetic algorithm.
Processing scalar implicature: a Constraint-Based approach
Degen, Judith; Tanenhaus, Michael K.
2014-01-01
Three experiments investigated the processing of the implicature associated with some using a “gumball paradigm”. On each trial participants saw an image of a gumball machine with an upper chamber with 13 gumballs and an empty lower chamber. Gumballs then dropped to the lower chamber and participants evaluated statements, such as “You got some of the gumballs”. Experiment 1 established that some is less natural for reference to small sets (1, 2 and 3 of the 13 gumballs) and unpartitioned sets (all 13 gumballs) compared to intermediate sets (6–8). Partitive some of was less natural than simple some when used with the unpartitioned set. In Experiment 2, including exact number descriptions lowered naturalness ratings for some with small sets but not for intermediate size sets and the unpartitioned set. In Experiment 3 the naturalness ratings from Experiment 2 predicted response times. The results are interpreted as evidence for a Constraint-Based account of scalar implicature processing and against both two-stage, Literal-First models and pragmatic Default models. PMID:25265993
Partitioning Rectangular and Structurally Nonsymmetric Sparse Matrices for Parallel Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
B. Hendrickson; T.G. Kolda
1998-09-01
A common operation in scientific computing is the multiplication of a sparse, rectangular or structurally nonsymmetric matrix and a vector. In many applications the matrix-transpose-vector product is also required. This paper addresses the efficient parallelization of these operations. We show that the problem can be expressed in terms of partitioning bipartite graphs. We then introduce several algorithms for this partitioning problem and compare their performance on a set of test matrices.
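As a rough illustration of the bipartite-graph view mentioned in the abstract, the sketch below (not the authors' code; the vertex encoding and function names are invented for illustration) builds the bipartite graph of a rectangular sparse matrix and counts how many nonzeros a given row/column partition cuts, a proxy for the communication incurred in parallel y = A*x and A^T*v products.

```python
from collections import defaultdict

def bipartite_graph(nonzeros, n_rows, n_cols):
    """Build the bipartite graph of a sparse matrix.

    Row i becomes vertex ('r', i), column j becomes vertex ('c', j),
    and every stored nonzero A[i, j] becomes an edge between them.
    Partitioning this graph assigns rows and columns to processors; each
    cut edge is a nonzero whose row and column live on different
    processors and therefore requires communication.
    """
    adj = defaultdict(set)
    for (i, j) in nonzeros:
        adj[('r', i)].add(('c', j))
        adj[('c', j)].add(('r', i))
    # make sure empty rows/columns still appear as isolated vertices
    for i in range(n_rows):
        adj.setdefault(('r', i), set())
    for j in range(n_cols):
        adj.setdefault(('c', j), set())
    return adj

def cut_edges(adj, part):
    """Count nonzeros split across processors by a vertex partition."""
    return sum(1 for u in adj for v in adj[u]
               if u < v and part[u] != part[v])

# toy 3x4 rectangular matrix with 5 nonzeros, split across 2 processors
nz = [(0, 0), (0, 3), (1, 1), (2, 1), (2, 2)]
g = bipartite_graph(nz, 3, 4)
p = {v: (0 if v[1] < 2 else 1) for v in g}
print(cut_edges(g, p))
```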
Modeling of influencing parameters in active noise control on an enclosure wall
NASA Astrophysics Data System (ADS)
Tarabini, Marco; Roure, Alain
2008-04-01
This paper investigates, by means of a numerical model, the possibility of using an active noise barrier to virtually reduce the acoustic transparency of a partition wall inside an enclosure. The room is modeled with the image method as a rectangular enclosure with a stationary point source; the active barrier is set up by an array of loudspeakers and error microphones and is meant to minimize the squared sound pressure on a wall with the use of a decentralized control. Simulations investigate the effects of the enclosure characteristics and of the barrier geometric parameters on the sound pressure attenuation on the controlled partition, on the whole enclosure potential energy and on the diagonal control stability. Performances are analyzed in a frequency range of 25-300 Hz at discrete 25 Hz steps. Influencing parameters and their effects on the system performance are identified with a statistical inference procedure. Simulation results show that it is possible, on average, to reduce the sound pressure on the controlled partition. In the investigated configuration, the surface attenuation and the diagonal control stability are mainly driven by the distance between the loudspeakers and the error microphones and by the loudspeaker directivity; minor effects are due to the distance between the error microphones and the wall, the wall reflectivity and the active barrier grid meshing. Room dimensions and source position have negligible effects. Experimental results confirm the validity of the model and the efficiency of the barrier in reducing the wall acoustic transparency.
Electoral Susceptibility and Entropically Driven Interactions
NASA Astrophysics Data System (ADS)
Caravan, Bassir; Levine, Gregory
2013-03-01
In the United States electoral system the election is usually decided by the electoral votes cast by a small number of ``swing states'' where the two candidates historically have roughly equal probabilities of winning. The effective value of a swing state is determined not only by the number of its electoral votes but by the frequency of its appearance in the set of winning partitions of the electoral college. Since the electoral vote values of swing states are not identical, the presence or absence of a state in a winning partition is generally correlated with the frequency of appearance of other states and, hence, their effective values. We quantify the effective value of states by an electoral susceptibility, χj, the variation of the winning probability with the ``cost'' of changing the probability of winning state j. Associating entropy with the logarithm of the number of appearances of a state within the set of winning partitions, the entropy per state (in effect, the chemical potential) is not additive and the states may be said to ``interact.'' We study χj for a simple model with a Zipf's law type distribution of electoral votes. We show that the susceptibility for small states is largest in ``one-sided'' electoral contests and smallest in close contests. This research was supported by Department of Energy DE-FG02-08ER64623, Research Corporation CC6535 (GL) and HHMI Scholar Program (BC)
Panagopoulos, Dimitri; Jahnke, Annika; Kierkegaard, Amelie; MacLeod, Matthew
2015-10-20
The sorption of cyclic volatile methyl siloxanes (cVMS) to organic matter has a strong influence on their fate in the aquatic environment. We report new measurements of the partition ratios between freshwater sediment organic carbon and water (KOC) and between Aldrich humic acid dissolved organic carbon and water (KDOC) for three cVMS, and for three polychlorinated biphenyls (PCBs) that were used as reference chemicals. Our measurements were made using a purge-and-trap method that employs benchmark chemicals to calibrate mass transfer at the air/water interface in a fugacity-based multimedia model. The measured log KOC of octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), and dodecamethylcyclohexasiloxane (D6) were 5.06, 6.12, and 7.07, and log KDOC were 5.05, 6.13, and 6.79. To our knowledge, our measurements for KOC of D6 and KDOC of D4 and D6 are the first reported. Polyparameter linear free energy relationships (PP-LFERs) derived from training sets of empirical data that did not include cVMS generally did not predict our measured partition ratios of cVMS accurately (root-mean-squared-error (RMSE) for logKOC 0.76 and for logKDOC 0.73). We constructed new PP-LFERs that accurately describe partition ratios for the cVMS as well as for other chemicals by including our new measurements in the existing training sets (logKOC RMSEcVMS: 0.09, logKDOC RMSEcVMS: 0.12). The PP-LFERs we have developed here should be further evaluated and perhaps recalibrated when experimental data for other siloxanes become available.
Springer, M S; Amrine, H M; Burk, A; Stanhope, M J
1999-03-01
We concatenated sequences for four mitochondrial genes (12S rRNA, tRNA valine, 16S rRNA, cytochrome b) and four nuclear genes [aquaporin, alpha 2B adrenergic receptor (A2AB), interphotoreceptor retinoid-binding protein (IRBP), von Willebrand factor (vWF)] into a multigene data set representing 11 eutherian orders (Artiodactyla, Hyracoidea, Insectivora, Lagomorpha, Macroscelidea, Perissodactyla, Primates, Proboscidea, Rodentia, Sirenia, Tubulidentata). Within this data set, we recognized nine mitochondrial partitions (both stems and loops, for each of 12S rRNA, tRNA valine, and 16S rRNA; and first, second, and third codon positions of cytochrome b) and 12 nuclear partitions (first, second, and third codon positions, respectively, of each of the four nuclear genes). Four of the 21 partitions (third positions of cytochrome b, A2AB, IRBP, and vWF) showed significant heterogeneity in base composition across taxa. Phylogenetic analyses (parsimony, minimum evolution, maximum likelihood) based on sequences for all 21 partitions provide 99-100% bootstrap support for Afrotheria and Paenungulata. With the elimination of the four partitions exhibiting heterogeneity in base composition, there is also high bootstrap support (89-100%) for cow + horse. Statistical tests reject Altungulata, Anagalida, and Ungulata. Data set heterogeneity between mitochondrial and nuclear genes is most evident when all partitions are included in the phylogenetic analyses. Mitochondrial-gene trees associate cow with horse, whereas nuclear-gene trees associate cow with hedgehog and these two with horse. However, after eliminating third positions of A2AB, IRBP, and vWF, nuclear data agree with mitochondrial data in supporting cow + horse. Nuclear genes provide stronger support for both Afrotheria and Paenungulata. Removal of third positions of cytochrome b results in improved performance for the mitochondrial genes in recovering these clades.
NASA Astrophysics Data System (ADS)
Wagstaff, Kiri L.
2012-03-01
On obtaining a new data set, the researcher is immediately faced with the challenge of obtaining a high-level understanding from the observations. What does a typical item look like? What are the dominant trends? How many distinct groups are included in the data set, and how is each one characterized? Which observable values are common, and which rarely occur? Which items stand out as anomalies or outliers from the rest of the data? This challenge is exacerbated by the steady growth in data set size [11] as new instruments push into new frontiers of parameter space, via improvements in temporal, spatial, and spectral resolution, or by the desire to "fuse" observations from different modalities and instruments into a larger-picture understanding of the same underlying phenomenon. Data clustering algorithms provide a variety of solutions for this task. They can generate summaries, locate outliers, compress data, identify dense or sparse regions of feature space, and build data models. It is useful to note up front that "clusters" in this context refer to groups of items within some descriptive feature space, not (necessarily) to "galaxy clusters" which are dense regions in physical space. The goal of this chapter is to survey a variety of data clustering methods, with an eye toward their applicability to astronomical data analysis. In addition to improving the individual researcher's understanding of a given data set, clustering has led directly to scientific advances, such as the discovery of new subclasses of stars [14] and gamma-ray bursts (GRBs) [38]. All clustering algorithms seek to identify groups within a data set that reflect some observed, quantifiable structure. Clustering is traditionally an unsupervised approach to data analysis, in the sense that it operates without any direct guidance about which items should be assigned to which clusters. There has been a recent trend in the clustering literature toward supporting semisupervised or constrained clustering, in which some partial information about item assignments or other components of the resulting output is already known and must be accommodated by the solution. Some algorithms seek a partition of the data set into distinct clusters, while others build a hierarchy of nested clusters that can capture taxonomic relationships. Some produce a single optimal solution, while others construct a probabilistic model of cluster membership.

More formally, clustering algorithms operate on a data set X composed of items represented by one or more features (dimensions). These could include physical location, such as right ascension and declination, as well as other properties such as brightness, color, temporal change, size, texture, and so on. Let D be the number of dimensions used to represent each item, x_i ∈ R^D. The clustering goal is to produce an organization P of the items in X that optimizes an objective function f : P → R, which quantifies the quality of solution P. Often f is defined so as to maximize similarity within a cluster and minimize similarity between clusters. To that end, many algorithms make use of a measure d : X × X → R of the distance between two items. A partitioning algorithm produces a set of clusters P = {c_1, ..., c_k} such that the clusters are nonoverlapping (c_i ∩ c_j = ∅ for i ≠ j) subsets of the data set (∪_i c_i = X). Hierarchical algorithms produce a series of partitions P = {p_1, ..., p_n}.
For a complete hierarchy, the number of partitions equals n, the number of items in the data set; the top partition is a single cluster containing all items, and the bottom partition contains n clusters, each containing a single item. For model-based clustering, each cluster c_j is represented by a model m_j, such as the cluster center or a Gaussian distribution. The wide array of available clustering algorithms may seem bewildering, and covering all of them is beyond the scope of this chapter. Choosing among them for a particular application involves considerations of the kind of data being analyzed, algorithm runtime efficiency, and how much prior knowledge is available about the problem domain, which can dictate the nature of clusters sought. Fundamentally, the clustering method and its representation of clusters carry with them a definition of what a cluster is, and it is important that this be aligned with the analysis goals for the problem at hand. In this chapter, I emphasize this point by identifying for each algorithm the cluster representation as a model, m_j, even for algorithms that are not typically thought of as creating a "model."

This chapter surveys a basic collection of clustering methods useful to any practitioner who is interested in applying clustering to a new data set. The algorithms include k-means (Section 25.2), EM (Section 25.3), agglomerative (Section 25.4), and spectral (Section 25.5) clustering, with side mentions of variants such as kernel k-means and divisive clustering. The chapter also discusses each algorithm's strengths and limitations and provides pointers to additional in-depth reading for each subject. Section 25.6 discusses methods for incorporating domain knowledge into the clustering process. This chapter concludes with a brief survey of interesting applications of clustering methods to astronomy data (Section 25.7). The chapter begins with k-means because it is both generally accessible and so widely used that understanding it can be considered a necessary prerequisite for further work in the field. EM can be viewed as a more sophisticated version of k-means that uses a generative model for each cluster and probabilistic item assignments. Agglomerative clustering is the most basic form of hierarchical clustering and provides a basis for further exploration of algorithms in that vein. Spectral clustering permits a departure from feature-vector-based clustering and can operate on data sets instead represented as affinity (similarity) matrices, cases in which only pairwise information is known. The list of algorithms covered in this chapter is representative of those most commonly in use, but it is by no means comprehensive. There is an extensive collection of existing books on clustering that provide additional background and depth. Three early books that remain useful today are Anderberg's Cluster Analysis for Applications [3], Hartigan's Clustering Algorithms [25], and Gordon's Classification [22]. The latter covers basics on similarity measures, partitioning and hierarchical algorithms, fuzzy clustering, overlapping clustering, conceptual clustering, validation methods, and visualization or data reduction techniques such as principal components analysis (PCA), multidimensional scaling, and self-organizing maps. More recently, Jain et al. provided a useful and informative survey [27] of a variety of different clustering algorithms, including those mentioned here as well as fuzzy, graph-theoretic, and evolutionary clustering.
Everitt’s Cluster Analysis [19] provides a modern overview of algorithms, similarity measures, and evaluation methods.
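A minimal k-means sketch, in the notation of the excerpt above: each cluster c_j is summarized by a model m_j (here its centroid), and the partition is refined by alternating assignment and update steps. This is a generic illustration under those definitions, not code from the chapter.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: partition the items in X (n x D) into k clusters.

    Each cluster c_j is represented by a model m_j, here its centroid.
    The objective implicitly minimized is the sum of squared distances
    between items and the centroid of the cluster they are assigned to.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: the nearest centroid defines the partition
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: re-estimate each cluster model m_j
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# toy usage: two well-separated blobs in a 2-dimensional feature space
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centers = kmeans(X, k=2)
print(centers)
```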
Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping
2015-10-20
Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, the primary material studied here, relative standard deviations of the results among five data sets are 5%-22% for K and 42%-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to yield unique estimates and does not require assumed correlations between D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty for D and smaller uncertainty for K. However, considering the uncertainties associated with sampling and chemical analysis and the impact of environmental factors, the results are acceptable for engineering applications. This was supported by good agreement between model predictions and measurements. Sensitivity analysis indicated that the effective diffusion distance, the contact time of materials with primary sources, and the depth of measured concentrations are critical for determining D, while the PCB concentration in primary sources is critical for K.
Unsupervised segmentation of MRI knees using image partition forests
NASA Astrophysics Data System (ADS)
Marčan, Marija; Voiculescu, Irina
2016-03-01
Nowadays many people are affected by arthritis, a condition of the joints with limited prevention measures but various treatment options, the most radical of which is surgery. For surgery to be successful, it can benefit from careful analysis of patient-based models generated from medical images, usually by manual segmentation. In this work we show how to automate the segmentation of a crucial and complex joint -- the knee. To achieve this goal we rely on our novel way of representing a 3D voxel volume as a hierarchical structure of partitions, which we have named the Image Partition Forest (IPF). The IPF contains several partition layers of increasing coarseness, with partitions nested across layers in the form of adjacency graphs. On the basis of a set of properties (size, mean intensity, coordinates) of each node in the IPF, we classify nodes into different features. Values indicating whether or not a particular node belongs to the femur or tibia are assigned through node filtering and node-based region growing. So far we have evaluated our method on 15 MRI knee images. Our unsupervised segmentation, compared against a hand-segmented gold standard, has achieved an average Dice similarity coefficient of 0.95 for the femur and 0.93 for the tibia, and an average symmetric surface distance of 0.98 mm for the femur and 0.73 mm for the tibia. The paper also discusses ways to introduce stricter morphological and spatial conditioning in the bone labelling process.
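The evaluation metric quoted above, the Dice similarity coefficient, is straightforward to compute from two binary label volumes. The sketch below is a generic illustration (array names are invented), not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(segmentation, gold_standard):
    """Dice similarity coefficient between two binary label volumes.

    DSC = 2*|A ∩ B| / (|A| + |B|), where A and B are the sets of voxels
    labelled as the structure (e.g. femur) by the automatic method and by
    the hand-segmented gold standard, respectively.
    """
    a = np.asarray(segmentation, dtype=bool)
    b = np.asarray(gold_standard, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy usage on a small 3D volume with two overlapping cubes
auto = np.zeros((10, 10, 10), dtype=bool); auto[2:7, 2:7, 2:7] = True
gold = np.zeros((10, 10, 10), dtype=bool); gold[3:8, 2:7, 2:7] = True
print(round(dice_coefficient(auto, gold), 3))
```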
Independence polynomial and matching polynomial of the Koch network
NASA Astrophysics Data System (ADS)
Liao, Yunhua; Xie, Xiaoliang
2015-11-01
The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that computing them is “intractable” in general. We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.
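For context, the independence polynomial referred to above can be computed on small graphs with the standard deletion recursion I(G) = I(G - v) + x·I(G - N[v]). The sketch below is a generic illustration of that recursion and does not reproduce the paper's closed-form recurrences for the Koch networks.

```python
def independence_polynomial(adj):
    """Independence polynomial of a small graph via the standard recursion
    I(G) = I(G - v) + x * I(G - N[v]),
    where v is any vertex and N[v] its closed neighbourhood.
    Returns coefficients c[k] = number of independent sets of size k.
    adj: dict mapping each vertex to the set of its neighbours.
    """
    def poly(vertices):
        vertices = frozenset(vertices)
        if not vertices:
            return [1]                              # only the empty set
        v = next(iter(vertices))
        without_v = poly(vertices - {v})            # independent sets avoiding v
        with_v = poly(vertices - ({v} | adj[v]))    # sets that contain v
        out = [0] * max(len(without_v), len(with_v) + 1)
        for k, c in enumerate(without_v):
            out[k] += c
        for k, c in enumerate(with_v):
            out[k + 1] += c                         # shift by x for choosing v
        return out
    return poly(adj.keys())

# toy usage: the 4-cycle, whose independence polynomial is 1 + 4x + 2x^2
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(independence_polynomial(c4))
```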
Short-Term Global Horizontal Irradiance Forecasting Based on Sky Imaging and Pattern Recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Feng, Cong; Cui, Mingjian
Accurate short-term forecasting is crucial for solar integration in the power grid. In this paper, a classification forecasting framework based on pattern recognition is developed for 1-hour-ahead global horizontal irradiance (GHI) forecasting. Three sets of models in the forecasting framework are trained by the data partitioned from the preprocessing analysis. The first two sets of models forecast GHI for the first four daylight hours of each day. Then the GHI values in the remaining hours are forecasted by an optimal machine learning model determined based on a weather pattern classification model in the third model set. The weather pattern is determined by a support vector machine (SVM) classifier. The developed framework is validated by the GHI and sky imaging data from the National Renewable Energy Laboratory (NREL). Results show that the developed short-term forecasting framework outperforms the persistence benchmark by 16% in terms of the normalized mean absolute error and 25% in terms of the normalized root mean square error.
Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems
2015-05-01
of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9...presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and...sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC
Liu, Huihui; Wei, Mengbi; Yang, Xianhai; Yin, Cen; He, Xiao
2017-01-01
Partition coefficients are vital parameters for accurately measuring chemical concentrations with passive sampling devices. Given the wide use of low-density polyethylene (LDPE) film in passive sampling, we developed a theoretical linear solvation energy relationship (TLSER) model and a quantitative structure-activity relationship (QSAR) model for the prediction of the partition coefficient of chemicals between LDPE and water (Kpew). For chemicals with an octanol-water partition coefficient (log Kow) < 8, a TLSER model with Vx (McGowan volume) and qA- (the most negative charge on O, N, S, X atoms) as descriptors was developed, but the model had a relatively low determination coefficient (R2) and cross-validated coefficient (Q2). To further explore the theoretical mechanisms involved in the partition process, a QSAR model with four descriptors (MLOGP (Moriguchi octanol-water partition coefficient), P_VSA_s_3 (P_VSA-like on I-state, bin 3), Hy (hydrophilic factor) and NssO (number of atoms of type ssO)) was established, and statistical analysis indicated that the model had satisfactory goodness-of-fit, robustness and predictive ability. For chemicals with log Kow > 8, a TLSER model with Vx and a QSAR model with MLOGP as descriptor were developed. This is the first paper to explore such models for highly hydrophobic chemicals. The applicability domain of the models, characterized by the Euclidean distance-based method and Williams plot, covered a large number of structurally diverse chemicals, including nearly all common hydrophobic organic compounds. Additionally, through mechanistic interpretation, we explored the structural features governing the partition behavior of chemicals between LDPE and water. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Reygondeau, Gabriel; Guieu, Cécile; Benedetti, Fabio; Irisson, Jean-Olivier; Ayata, Sakina-Dorothée; Gasparini, Stéphane; Koubbi, Philippe
2017-02-01
When dividing the ocean, the aim is generally to summarise a complex system into a representative number of units, each representing a specific environment, a biological community or a socio-economic specificity. Recently, several geographical partitions of the global ocean have been proposed using statistical approaches applied to remote sensing or observations gathered during oceanographic cruises. Such geographical frameworks, defined at a macroscale, are hardly applicable for characterising the biogeochemical features of semi-enclosed seas that are driven by smaller-scale chemical and physical processes. Following Longhurst's biogeochemical partitioning of the pelagic realm, this study investigates the environmental divisions of the Mediterranean Sea using a large set of environmental parameters. These parameters were compiled in both the horizontal and vertical dimensions to provide a 3D spatial framework for environmental management (12 regions found for the epipelagic, 12 for the mesopelagic, 13 for the bathypelagic and 26 for the seafloor). We show that: (1) the contribution of the longitudinal environmental gradient to the biogeochemical partitions decreases with depth; (2) the partition of the surface layer cannot be extrapolated to other vertical layers, as each partition is driven by a different set of environmental variables. This new partitioning of the Mediterranean Sea has strong implications for conservation, as it highlights that management must account for differences in zoning with depth at a regional scale.
NASA Astrophysics Data System (ADS)
Song, Lisheng; Kustas, William P.; Liu, Shaomin; Colaizzi, Paul D.; Nieto, Hector; Xu, Ziwei; Ma, Yanfei; Li, Mingsong; Xu, Tongren; Agam, Nurit; Tolk, Judy A.; Evett, Steven R.
2016-09-01
In this study, ground-measured soil and vegetation component temperatures and composite temperatures from a high spatial resolution thermal camera and a network of thermal-IR sensors, collected in an irrigated maize field and in an irrigated cotton field, are used to assess and refine the component temperature partitioning approach in the Two-Source Energy Balance (TSEB) model. A refinement to TSEB using a non-iterative approach, based on the application of the Priestley-Taylor formulation for surface temperature partitioning and on estimating soil evaporation from soil moisture observations under advective conditions (TSEB-A), was developed. This modified TSEB formulation improved the agreement between observed and modeled soil and vegetation temperatures. In addition, the TSEB-A model output of evapotranspiration (ET) and its components, evaporation (E) and transpiration (T), showed good agreement with ground observations obtained using the stable isotopic method and the eddy covariance (EC) technique from the HiWATER experiment and with microlysimeters and a large monolithic weighing lysimeter from the BEAREX08 experiment. Differences between modeled and measured ET were less than 10% and 20% on a daytime basis for the HiWATER and BEAREX08 data sets, respectively. The TSEB-A model was found to accurately reproduce the temporal dynamics of E, T and ET over a full growing season under the advective conditions existing for these irrigated crops located in arid/semi-arid climates. With satellite data, this TSEB-A modeling framework could potentially be used as a tool for improving water use efficiency and conservation practices in water-limited regions. However, TSEB-A requires soil moisture information, which is not currently available routinely from satellites at the field scale.
Cenozoic tectonics of western North America controlled by evolving width of Farallon slab.
Schellart, W P; Stegman, D R; Farrington, R J; Freeman, J; Moresi, L
2010-07-16
Subduction of oceanic lithosphere occurs through two modes: subducting plate motion and trench migration. Using a global subduction zone data set and three-dimensional numerical subduction models, we show that slab width (W) controls these modes and the partitioning of subduction between them. Subducting plate velocity scales with W^(2/3), whereas trench velocity scales with 1/W. These findings explain the Cenozoic slowdown of the Farallon plate and the decrease in subduction partitioning by its decreasing slab width. The change from Sevier-Laramide orogenesis to Basin and Range extension in North America is also explained by slab width; shortening occurred during wide-slab subduction and overriding-plate-driven trench retreat, whereas extension occurred during intermediate to narrow-slab subduction and slab-driven trench retreat.
JTRS/SCA and Custom/SDR Waveform Comparison
NASA Technical Reports Server (NTRS)
Oldham, Daniel R.; Scardelletti, Maximilian C.
2007-01-01
This paper compares two waveform implementations generating the same RF signal using the same SDR development system. Both waveforms implement a satellite modem using QPSK modulation at a 1 Mbps data rate with rate one-half convolutional encoding. Both waveforms are partitioned the same way across the general purpose processor (GPP) and the field programmable gate array (FPGA), and both implement the same equivalent set of radio functions on the GPP and FPGA. The GPP implements the majority of the radio functions and the FPGA implements the final digital RF modulator stage. One waveform is implemented directly on the SDR development system and the second waveform is implemented using the JTRS/SCA model. This paper contrasts the amount of resources required to implement both waveforms and demonstrates the importance of waveform partitioning across the SDR development system.
Applications of CCSDS recommendations to Integrated Ground Data Systems (IGDS)
NASA Technical Reports Server (NTRS)
Mizuta, Hiroshi; Martin, Daniel; Kato, Hatsuhiko; Ihara, Hirokazu
1993-01-01
This paper describes an application of the CCSDS Principal Network (CPN) service model to communications network elements of a postulated Integrated Ground Data System (IGDS). Functions are drawn principally from COSMICS (Cosmic Information and Control System), an integrated space control infrastructure, and the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). From functional requirements, this paper derives a set of five communications network partitions which, taken together, support proposed space control infrastructures and data distribution systems. Our functional analysis indicates that the five network partitions derived in this paper should effectively interconnect the users, centers, processors, and other architectural elements of an IGDS. This paper illustrates a useful application of the CCSDS (Consultative Committee for Space Data Systems) Recommendations to ground data system development.
ERIC Educational Resources Information Center
Mathematics Teaching, 1972
1972-01-01
Topics discussed in this column include patterns of inverse multipliers in modular arithmetic; diagrams for product sets, set intersection, and set union; function notation; patterns in the number of partitions of positive integers; and tessellations. (DT)
NASA Astrophysics Data System (ADS)
Cheng, Irene; Zhang, Leiming; Blanchard, Pierrette
2014-10-01
Models describing the partitioning of atmospheric oxidized mercury (Hg(II)) between the gas and fine particulate phases were developed as a function of temperature. The models were derived from regression analysis of the gas-particle partitioning parameters, defined by a partition coefficient (Kp) and the Hg(II) fraction in fine particles (fPBM), against temperature data from 10 North American sites. The generalized model, log(1/Kp) = 12.69 - 3485.30(1/T) (R^2 = 0.55; root-mean-square error (RMSE) of 1.06 m^3/µg for Kp), predicted the observed average Kp at 7 of the 10 sites. Discrepancies between the predicted and observed average Kp were found at sites impacted by large Hg sources because the model does not account for the different mercury speciation profiles and aerosol compositions of different sources. Site-specific equations were also generated from average Kp and fPBM values corresponding to temperature intervals. The site-specific models were more accurate than the generalized Kp model at predicting the observations at 9 of the 10 sites, as indicated by RMSEs of 0.22-0.5 m^3/µg for Kp and 0.03-0.08 for fPBM. Both models reproduced the observed monthly average values, except for a peak in Hg(II) partitioning observed during summer at two locations. Weak correlations between the site-specific model Kp or fPBM and the observations suggest that aerosol composition, aerosol water content, and relative humidity also influence Hg(II) partitioning. The use of local temperature data to parameterize Hg(II) partitioning in the proposed models potentially improves the estimation of mercury cycling in chemical transport models and elsewhere.
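A minimal sketch of applying the generalized regression quoted above: Kp follows directly from temperature, and, under the commonly used assumption that f_PBM = Kp·PM/(1 + Kp·PM) for a fine-particle mass concentration PM, a particulate fraction can be estimated as well. That relation and the example PM value are assumptions for illustration, not taken from the paper.

```python
import math

def kp_generalized(temp_kelvin):
    """Gas-particle partition coefficient of Hg(II) from the generalized
    regression reported in the abstract: log10(1/Kp) = 12.69 - 3485.30*(1/T).
    Returns Kp in m^3/ug; the numeric constants are quoted from the text.
    """
    log_inv_kp = 12.69 - 3485.30 / temp_kelvin
    return 10.0 ** (-log_inv_kp)

def fraction_particulate(kp, pm_ug_per_m3):
    """Fraction of oxidized mercury in the particle phase for a given
    fine-particle mass concentration (ug/m^3), assuming the usual
    partitioning relation f_PBM = Kp*PM / (1 + Kp*PM)."""
    return kp * pm_ug_per_m3 / (1.0 + kp * pm_ug_per_m3)

# colder air shifts Hg(II) toward the particle phase
for t_celsius in (-10, 5, 25):
    kp = kp_generalized(273.15 + t_celsius)
    print(t_celsius, round(math.log10(kp), 2), round(fraction_particulate(kp, 10.0), 3))
```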
NASA Technical Reports Server (NTRS)
Palopo, Kee; Lee, Hak-Tae; Chatterji, Gano
2011-01-01
The concept of re-partitioning the airspace into a new set of sectors for allocating capacity, rather than delaying flights to comply with the capacity constraints of a static set of sectors, is being explored. The delay reduction achieved by this concept needs to be greater than the cost of the controllers and equipment needed for the additional sectors. Therefore, tradeoff studies are needed for benefits assessment of this concept.
Sources and atmospheric transformations of semivolatile organic aerosols
NASA Astrophysics Data System (ADS)
Grieshop, Andrew P.
Fine atmospheric particulate matter (PM2.5) is associated with increased mortality, a fact which led the EPA to promulgate a National Ambient Air Quality Standard (NAAQS) for PM2.5 in 1997. Organic material contributes a substantial portion of the PM2.5 mass; organic aerosols (OA) are either directly emitted (primary OA or POA) or formed via the atmospheric oxidation of volatile precursor compounds as secondary OA (SOA). The relative contributions of POA and SOA to atmospheric OA are uncertain, as are the contributions from various source classes (e.g. motor vehicles, biomass burning). This dissertation first assesses the importance of organic PM within the context of current US air pollution regulations. Most control efforts to date have focused on the inorganic component of PM. Although growing evidence strongly implicates OA, especially that from motor vehicles, in the health effects of PM, uncertain and complex source-receptor relationships for OA discourage its direct control for NAAQS compliance. Analysis of both ambient data and chemical transport modeling results indicates that OA does not play a dominant role in NAAQS violations in most areas of the country under current and likely future regulations. Therefore, new regulatory approaches will likely be required to directly address potential health impacts associated with OA. To help develop the scientific understanding needed to better regulate OA, this dissertation examined the evolution of organic aerosol emitted by combustion systems. The current conceptual model of POA is that it is non-volatile and non-reactive. Both of these assumptions were experimentally investigated in this dissertation. Novel dilution measurements were carried out to investigate the gas-particle partitioning of OA at atmospherically relevant conditions. The results demonstrate that POA from combustion sources is semivolatile. Therefore its gas-particle partitioning depends on temperature and atmospheric concentrations; heating and dilution both cause it to evaporate. Gas-particle partitioning was parameterized using absorptive partitioning theory and the volatility basis-set framework. The dynamics of particle evaporation proved to be much slower than expected, and measurements of aerosol composition indicate that particle composition varies with partitioning. These findings have major implications for the measurement and modeling of POA from combustion sources: source tests need to be conducted at atmospheric concentrations and temperatures. Upon entering the atmosphere, organic aerosol emissions are aged via photochemical reactions. Experiments with dilute wood smoke demonstrate the dramatic evolution these emissions undergo within hours of emission. Aging produced substantial new OA (doubling or tripling OA levels within hours) and changed particle composition and volatility. These changes are consistent with model predictions based on the partitioning and aging (via gas-phase photochemistry) of semi-volatile species represented with the basis-set framework. Aging of wood-smoke OA created a much more oxygenated aerosol and formed material spectrally similar to oxygenated OA found widely in the atmosphere. The oxygenated aerosol is also similar to that formed in comparable experiments conducted with diesel engine emissions. Therefore, aging of emissions from diverse sources may produce chemically similar OA, complicating the establishment of robust source-receptor relationships.
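A minimal sketch of the absorptive partitioning calculation in a volatility basis set, the framework named in the abstract: each volatility bin partitions to the particle phase according to xi_i = 1/(1 + C*_i/C_OA), and the absorbing organic aerosol mass C_OA is found by fixed-point iteration. The bin values below are illustrative placeholders, not the dissertation's parameters.

```python
def partition_vbs(total_conc, c_star, seed_oa=0.0, n_iter=200):
    """Absorptive partitioning in a volatility basis set (a simplified sketch).

    total_conc : list of total (gas + particle) concentrations per bin, ug/m^3
    c_star     : list of effective saturation concentrations C* per bin, ug/m^3
    seed_oa    : pre-existing absorbing organic aerosol, ug/m^3
    The particle fraction of bin i is xi_i = 1 / (1 + C*_i / C_OA), where
    C_OA itself depends on the partitioning, so the coupled equations are
    solved here by simple fixed-point iteration.
    """
    c_oa = seed_oa + 0.5 * sum(total_conc)   # initial guess
    for _ in range(n_iter):
        xi = [1.0 / (1.0 + cs / max(c_oa, 1e-12)) for cs in c_star]
        new_c_oa = seed_oa + sum(c * x for c, x in zip(total_conc, xi))
        if abs(new_c_oa - c_oa) < 1e-9:
            break
        c_oa = new_c_oa
    return c_oa, xi

# toy usage: four volatility bins spanning C* = 0.1 to 100 ug/m^3
c_oa, xi = partition_vbs(total_conc=[1.0, 2.0, 4.0, 8.0],
                         c_star=[0.1, 1.0, 10.0, 100.0], seed_oa=2.0)
print(round(c_oa, 2), [round(x, 2) for x in xi])
```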
New gap-filling and partitioning technique for H2O eddy fluxes measured over forests
NASA Astrophysics Data System (ADS)
Kang, Minseok; Kim, Joon; Malla Thakuri, Bindu; Chun, Junghwa; Cho, Chunho
2018-01-01
The continuous measurement of H2O fluxes using the eddy covariance (EC) technique is still challenging for forests because of large amounts of wet canopy evaporation (EWC), which occur during and following rain events when the EC systems rarely work correctly. We propose a new gap-filling and partitioning technique for the H2O fluxes: a model-statistics hybrid (MSH) method. It enables the recovery of the missing EWC in the traditional gap-filling method and the partitioning of the evapotranspiration (ET) into transpiration and (wet canopy) evaporation. We tested and validated the new method using the data sets from two flux towers, which are located at forests in hilly and complex terrains. The MSH reasonably recovered the missing EWC of 16-41 mm yr^-1 and separated it from the ET (14-23% of the annual ET). Additionally, we illustrated certain advantages of the proposed technique which enable us to understand better how ET responds to environmental changes and how the water cycle is connected to the carbon cycle in a forest ecosystem.
Stroganov, Oleg V; Novikov, Fedor N; Zeifman, Alexey A; Stroylov, Viktor S; Chilov, Ghermes G
2011-09-01
A new graph-theoretical approach called thermodynamic sampling of amino acid residues (TSAR) has been elaborated to explicitly account for protein side chain flexibility in modeling conformation-dependent protein properties. In TSAR, a protein is viewed as a graph whose nodes correspond to structurally independent groups and whose edges connect the interacting groups. Each node has its set of states describing the conformation and ionization of the group, and each edge is assigned an array of pairwise interaction potentials between the adjacent groups. By treating the obtained graph as a belief network, a well-established mathematical abstraction, the partition function of each node is found. In the current work we used TSAR to calculate partition functions of the ionized forms of protein residues. A simplified version of a semi-empirical molecular mechanical scoring function, borrowed from our Lead Finder docking software, was used for energy calculations. The accuracy of the resulting model was validated on a set of 486 experimentally determined pK(a) values of protein residues. The average correlation coefficient (R) between calculated and experimental pK(a) values was 0.80, ranging from 0.95 (for Tyr) to 0.61 (for Lys). It appeared that the hydrogen bond interactions and the exhaustiveness of side chain sampling made the most significant contribution to the accuracy of pK(a) calculations. Copyright © 2011 Wiley-Liss, Inc.
Partitioning and lipophilicity in quantitative structure-activity relationships.
Dearden, J C
1985-01-01
The history of the relationship of biological activity to partition coefficient and related properties is briefly reviewed. The dominance of partition coefficient in quantitation of structure-activity relationships is emphasized, although the importance of other factors is also demonstrated. Various mathematical models of in vivo transport and binding are discussed; most of these involve partitioning as the primary mechanism of transport. The models describe observed quantitative structure-activity relationships (QSARs) well on the whole, confirming that partitioning is of key importance in in vivo behavior of a xenobiotic. The partition coefficient is shown to correlate with numerous other parameters representing bulk, such as molecular weight, volume and surface area, parachor and calculated indices such as molecular connectivity; this is especially so for apolar molecules, because for polar molecules lipophilicity factors into both bulk and polar or hydrogen bonding components. The relationship of partition coefficient to chromatographic parameters is discussed, and it is shown that such parameters, which are often readily obtainable experimentally, can successfully supplant partition coefficient in QSARs. The relationship of aqueous solubility with partition coefficient is examined in detail. Correlations are observed, even with solid compounds, and these can be used to predict solubility. The additive/constitutive nature of partition coefficient is discussed extensively, as are the available schemes for the calculation of partition coefficient. Finally the use of partition coefficient to provide structural information is considered. It is shown that partition coefficient can be a valuable structural tool, especially if the enthalpy and entropy of partitioning are available. PMID:3905374
A novel partitioning method for block-structured adaptive meshes
NASA Astrophysics Data System (ADS)
Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-07-01
We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.
Estimation of octanol/water partition coefficients using LSER parameters
Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.
1998-01-01
The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals, which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
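A hedged sketch of the regression step described above: logKow values are fit against a matrix of LSER descriptors by ordinary least squares. The descriptor columns and the synthetic data are placeholders; the actual LSER parameter set and regression coefficients of the paper are not reproduced here.

```python
import numpy as np

def fit_lser(descriptors, log_kow):
    """Least-squares fit of logKow against LSER descriptors (a sketch).

    descriptors : (n_chemicals, n_parameters) array of LSER parameters;
                  the columns are illustrative, not the exact descriptor
                  set used in the paper.
    log_kow     : measured log octanol/water partition coefficients
    Returns the intercept and the coefficient vector of the linear model.
    """
    X = np.column_stack([np.ones(len(log_kow)), descriptors])
    coeffs, *_ = np.linalg.lstsq(X, log_kow, rcond=None)
    return coeffs[0], coeffs[1:]

def predict_lser(intercept, coeffs, descriptors):
    return intercept + np.asarray(descriptors) @ coeffs

# toy usage with synthetic data for 6 hypothetical chemicals
rng = np.random.default_rng(1)
D = rng.random((6, 4))
y = 0.5 + D @ np.array([2.8, -1.0, -0.3, -3.4]) + 0.05 * rng.standard_normal(6)
b0, b = fit_lser(D, y)
print(np.round(predict_lser(b0, b, D) - y, 2))   # residuals of the fit
```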
Recurrence relations in one-dimensional Ising models.
da Conceição, C M Silva; Maia, R N P
2017-09-01
The exact finite-size partition function for the nonhomogeneous one-dimensional (1D) Ising model is found through an approach using algebra operators. Specifically, in this paper we show that the partition function can be computed through a trace from a linear second-order recurrence relation with nonconstant coefficients in matrix form. A relation between the finite-size partition function and the generalized Lucas polynomials is found for the simple homogeneous model, thus establishing a recursive formula for the partition function. This is an important property and it might indicate the possible existence of recurrence relations in higher-dimensional Ising models. Moreover, assuming quenched disorder for the interactions within the model, the quenched averaged magnetic susceptibility displays a nontrivial behavior due to changes in the ferromagnetic concentration probability.
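To illustrate the trace formulation mentioned in the abstract, the sketch below computes the partition function of a small nonhomogeneous 1D Ising ring as the trace of a product of 2x2 bond transfer matrices, checked against brute-force enumeration. This is a standard textbook construction, not the paper's operator-algebra derivation or its Lucas-polynomial recurrence.

```python
import numpy as np
from itertools import product

def partition_function_ring(couplings, beta=1.0):
    """Partition function of a nonhomogeneous 1D Ising ring (zero field),
    computed as the trace of a product of bond transfer matrices.
    couplings[i] is the interaction J_i on bond i of the ring.
    """
    total = np.eye(2)
    for j in couplings:
        t = np.array([[np.exp(beta * j), np.exp(-beta * j)],
                      [np.exp(-beta * j), np.exp(beta * j)]])
        total = total @ t
    return np.trace(total)

def partition_function_brute(couplings, beta=1.0):
    """Brute-force check by summing over all 2^N spin configurations."""
    n = len(couplings)
    z = 0.0
    for spins in product((-1, 1), repeat=n):
        energy = -sum(couplings[i] * spins[i] * spins[(i + 1) % n] for i in range(n))
        z += np.exp(-beta * energy)
    return z

js = [1.0, 0.5, -0.8, 1.2, 0.3]   # quenched, nonhomogeneous couplings
print(partition_function_ring(js), partition_function_brute(js))
```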
Constitutive Modelling and Deformation Band Angle Predictions for High Porosity Sandstones
NASA Astrophysics Data System (ADS)
Richards, M. C.; Issen, K. A.; Ingraham, M. D.
2017-12-01
The development of a field-scale deformation model requires a constitutive framework that is capable of representing known material behavior and able to be calibrated using available mechanical response data. This work employs the principle of hyperplasticity (e.g., Houlsby and Puzrin, 2006) to develop such a constitutive framework for high porosity sandstone. Adapting the works of Zimmerman et al. (1986) and Collins and Houlsby (1997), the mechanical data set of Ingraham et al. (2013 a, b) was used to develop a specific constitutive framework for Castlegate sandstone, a high porosity fluvial-deposited reservoir analog rock. From this data set, explicit expressions and material parameters of the elastic moduli and strain tensors were obtained. With these expressions, analytical and numerical techniques were then employed to partition the total mechanical strain into elastic, coupled, and plastic strain components. From the partitioned strain data, yield surfaces in true-stress space, coefficients of internal friction, and dilatancy factors, along with theoretical predictions of the deformation band angles, were obtained. These results were also evaluated against band angle values obtained from a) measurements on specimen jackets (Ingraham et al., 2013a), b) plane fits through located acoustic emissions (AE) events (Ingraham et al. 2013b), and c) X-ray micro-computed tomography (micro-CT) calculations.
Abe, Toshikazu; Tokuda, Yasuharu; Cook, E Francis
2011-01-01
Optimal acceptable time intervals from collapse to bystander cardiopulmonary resuscitation (CPR) for neurologically favorable outcome among adults with witnessed out-of-hospital cardiopulmonary arrest (CPA) have been unclear. Our aim was to assess the optimal acceptable thresholds of the time intervals of CPR for neurologically favorable outcome and survival using a recursive partitioning model. From January 1, 2005 through December 31, 2009, we conducted a prospective population-based observational study across Japan involving consecutive out-of-hospital CPA patients (N = 69,648) who received witnessed bystander CPR. Of the 69,648 patients, 34,605 were assigned to the derivation data set and 35,043 to the validation data set. The outcomes of interest were survival and neurologically favorable outcome at one month, defined as category one (good cerebral performance) or two (moderate cerebral disability) of the cerebral performance categories. Based on the recursive partitioning model fitted to the derivation data set (n = 34,605) to predict neurologically favorable outcome at one month, 5 min was the acceptable time interval from collapse to CPR initiation; 11 min from collapse to ambulance arrival; 18 min from collapse to return of spontaneous circulation (ROSC); and 19 min from collapse to hospital arrival. In the validation data set (n = 35,043), 209/2,292 (9.1%) of all patients within the acceptable time intervals and 1,388/2,706 (52.1%) of the subgroup within the acceptable time intervals and with pre-hospital ROSC showed neurologically favorable outcome. Initiation of CPR should occur within 5 min to obtain a neurologically favorable outcome among adults with witnessed out-of-hospital CPA. Patients within the acceptable time intervals of bystander CPR and with pre-hospital ROSC within 18 min could have a 50% chance of neurologically favorable outcome.
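As a purely illustrative reading of the reported cut-points (not a clinical tool, and not the fitted tree itself, which orders and nests its splits), one can check a case against the four acceptable time intervals from the derivation set:

```python
def meets_acceptable_intervals(collapse_to_cpr_min,
                               collapse_to_ambulance_min,
                               collapse_to_rosc_min,
                               collapse_to_hospital_min):
    """Check a witnessed out-of-hospital CPA case against the acceptable
    time thresholds reported from the derivation set (5, 11, 18 and 19 min).
    This sketch only applies the derived cut-points jointly; the paper's
    recursive partitioning model also orders and nests these splits.
    """
    return (collapse_to_cpr_min <= 5 and
            collapse_to_ambulance_min <= 11 and
            collapse_to_rosc_min <= 18 and
            collapse_to_hospital_min <= 19)

# toy usage
print(meets_acceptable_intervals(4, 9, 15, 18))   # within all thresholds
print(meets_acceptable_intervals(7, 9, 15, 18))   # CPR started too late
```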
Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina
2015-01-01
Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.
Pyron, R Alexander
2017-01-01
Here, I combine previously underutilized models and priors to perform more biologically realistic phylogenetic inference from morphological data, with an example from squamate reptiles. When coding morphological characters, it is often possible to denote ordered states with explicit reference to observed or hypothetical ancestral conditions. Using this logic, we can integrate across character-state labels and estimate meaningful rates of forward and backward transitions from plesiomorphy to apomorphy. I refer to this approach as MkA, for “asymmetric.” The MkA model incorporates the biological reality of limited reversal for many phylogenetically informative characters, and significantly increases likelihoods in the empirical data sets. Despite this, the phylogeny of Squamata remains contentious. Total-evidence analyses using combined morphological and molecular data and the MkA approach tend toward recent consensus estimates supporting a nested Iguania. However, support for this topology is not unambiguous across data sets or analyses, and no mechanism has been proposed to explain the widespread incongruence between partitions, or the hidden support for various topologies in those partitions. Furthermore, different morphological data sets produced by different authors contain both different characters and different states for the same or similar characters, resulting in drastically different placements for many important fossil lineages. Effort is needed to standardize ontology for morphology, resolve incongruence, and estimate a robust phylogeny. The MkA approach provides a preliminary avenue for investigating morphological evolution while accounting for temporal evidence and asymmetry in character-state changes.
Estimated effects of temperature on secondary organic aerosol concentrations.
Sheehan, P E; Bowman, F M
2001-06-01
The temperature dependence of secondary organic aerosol (SOA) concentrations is explored using an absorptive-partitioning model under a variety of simplified atmospheric conditions. Experimentally determined partitioning parameters for high-yield aromatics are used. Variation of vapor pressures with temperature is assumed to be the main source of temperature effects. Known semivolatile products are used to define a modeling range of vaporization enthalpy of 10-25 kcal/mol. The effect of diurnal temperature variations on model predictions for various assumed vaporization enthalpies, precursor emission rates, and primary organic concentrations is explored. Results show that temperature is likely to have a significant influence on SOA partitioning and resulting SOA concentrations. A 10 degrees C decrease in temperature is estimated to increase SOA yields by 20-150%, depending on the assumed vaporization enthalpy. In model simulations, high daytime temperatures tend to reduce SOA concentrations by 16-24%, while cooler nighttime temperatures lead to a 22-34% increase, compared to constant-temperature conditions. Results suggest that currently available constant-temperature partitioning coefficients do not adequately represent atmospheric SOA partitioning behavior. Air quality models neglecting the temperature dependence of partitioning are expected to underpredict peak SOA concentrations as well as mistime their occurrence.
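The vapor-pressure-driven temperature dependence described here is commonly captured with a Clausius-Clapeyron-type correction to the absorptive partition coefficient. The sketch below is a generic version of that correction; the reference coefficient and vaporization enthalpy values are illustrative assumptions, not the parameters fitted in this study.

```python
# Temperature adjustment of an absorptive gas-particle partition coefficient,
# K(T) = K(T_ref) * (T / T_ref) * exp[(dH_vap / R) * (1/T - 1/T_ref)],
# i.e. K scales with T and inversely with the liquid vapor pressure.
# Parameter values below are illustrative assumptions, not the paper's fits.
import math

R = 8.314  # J mol^-1 K^-1

def k_partition(T, T_ref=298.15, K_ref=0.02, dH_vap_kcal=17.5):
    """Partition coefficient (m^3 ug^-1) at temperature T (K)."""
    dH = dH_vap_kcal * 4184.0  # kcal/mol -> J/mol
    return K_ref * (T / T_ref) * math.exp((dH / R) * (1.0 / T - 1.0 / T_ref))

for T in (288.15, 298.15, 308.15):
    print(f"T = {T:.2f} K  ->  K = {k_partition(T):.4f} m^3/ug")
```

Running the sketch shows the coefficient roughly tripling for a 10 K cooling with the assumed enthalpy, which is the mechanism behind the yield increases discussed above.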
Reyes, Elisabeth; Nadot, Sophie; von Balthazar, Maria; Schönenberger, Jürg; Sauquet, Hervé
2018-06-21
Ancestral state reconstruction is an important tool to study morphological evolution and often involves estimating transition rates among character states. However, various factors, including taxonomic scale and sampling density, may impact transition rate estimation and, indirectly, also the probability of the state at a given node. Here, we test the influence of rate heterogeneity using maximum likelihood methods on five binary perianth characters, optimized on a phylogenetic tree of angiosperms including 1230 species sampled from all families. We compare the states reconstructed by an equal-rate (Mk1) and a two-rate model (Mk2) fitted either with a single set of rates for the whole tree or as a partitioned model, allowing for different rates on five partitions of the tree. We find a strong signal for rate heterogeneity among the five partitions for all five characters, but little overall impact of the choice of model on reconstructed ancestral states, which indicates that most of our inferred ancestral states are the same whether heterogeneity is accounted for or not.
Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.
Yin, Guosheng; Ma, Yanyuan
2013-01-01
The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE introduces exactly the right amount of randomness into the test statistic and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructing the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
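The key mechanical step, refitting the model to a bootstrap resample and then forming the bin counts on the original data, is easy to sketch. The example below assumes a normal null model with equiprobable bins and, rather than invoking the asymptotic chi-squared reference of the proposed test, calibrates the statistic with a bootstrap distribution; model, bin count, and data are illustrative choices only.

```python
# Sketch: Pearson goodness-of-fit statistic with a bootstrap-sample MLE.
# Normal null model, 10 equiprobable bins, and bootstrap calibration are
# illustrative choices, not the published procedure.
import numpy as np
from scipy import stats

def pearson_stat(data, mu, sigma, n_bins=10):
    # Interior bin edges at equiprobable quantiles of the fitted model;
    # the outermost bins implicitly extend to -inf / +inf.
    probs = np.linspace(0, 1, n_bins + 1)[1:-1]
    edges = stats.norm.ppf(probs, loc=mu, scale=sigma)
    observed = np.bincount(np.searchsorted(edges, data), minlength=n_bins)
    expected = len(data) / n_bins
    return np.sum((observed - expected) ** 2 / expected)

rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.2, size=400)                  # observed data
stat_obs = pearson_stat(x, x.mean(), x.std())       # statistic with original-data MLE

boot_stats = []
for _ in range(500):
    xb = rng.choice(x, size=len(x), replace=True)   # bootstrap resample
    mu_b, sigma_b = xb.mean(), xb.std()             # bootstrap-sample MLE
    boot_stats.append(pearson_stat(x, mu_b, sigma_b))  # counts on the ORIGINAL data

p_value = np.mean(np.array(boot_stats) >= stat_obs)
print(f"Pearson statistic = {stat_obs:.2f}, bootstrap p-value = {p_value:.3f}")
```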
Asset surveillance system: apparatus and method
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2007-01-01
System and method for providing surveillance of an asset comprised of numerically fitting at least one mathematical model to obtained residual data correlative to asset operation; storing at least one mathematical model in a memory; obtaining a current set of signal data from the asset; retrieving at least one mathematical model from the memory, using the retrieved mathematical model in a sequential hypothesis test for determining if the current set of signal data is indicative of a fault condition; determining an asset fault cause correlative to a determined indication of a fault condition; providing an indication correlative to a determined fault cause, and an action when warranted. The residual data can be mode partitioned, a current mode of operation can be determined from the asset, and at least one mathematical model can be retrieved from the memory as a function of the determined mode of operation.
NASA Astrophysics Data System (ADS)
Díaz-Azpiroz, M.; Barcos, L.; Balanyá, J. C.; Fernández, C.; Expósito, I.; Czeck, D. M.
2014-11-01
Oblique convergence and subsequent transpression kinematics can be considered the general situation at most convergent and strike-slip tectonic boundaries. To better understand such settings, progressively more complex kinematic models have been proposed, which need to be tested against natural shear zones using standardized procedures that minimise subjectivity. In this work, a protocol to test a general triclinic transpression model is applied to the Torcal de Antequera massif (TAM), an essentially brittle shear zone. Our results, given as kinematic parameters of the transpressive flow (transpression obliquity, ϕ; extrusion obliquity, υ; and kinematic vorticity number, Wk), suggest that the bulk triclinic transpressive flow imposed on the TAM was partitioned into two different flow fields, following a general partitioning type. One flow field produced narrow structural domains located at the limits of the TAM, where mainly dextral strike-slip simple-shear-dominated transpression took place (Outer domains, ODs). In contrast, the remaining part of the bulk flow produced pure-shear-dominated dextral triclinic transpression at the inner part of the TAM (Inner domain, ID). A graphical method relating internal (ϕ, Wk) to far-field (dip of the shear zone boundary, δ; angle of oblique convergence, α) transpression parameters is proposed to obtain the theoretical horizontal velocity vector (V→), which, in the case of the TAM, has an azimuth ranging between 099° and 118°. These results support the applicability of kinematic models of triclinic transpression to brittle-ductile shear zones and the potential utility of the proposed protocol.
Barillot, Romain; Escobar-Gutiérrez, Abraham J.; Fournier, Christian; Huynh, Pierre; Combes, Didier
2014-01-01
Background and Aims: Predicting light partitioning in crop mixtures is a critical step in improving the productivity of such complex systems, and light interception has been shown to be closely linked to plant architecture. The aim of the present work was to analyse the relationships between plant architecture and light partitioning within wheat–pea (Triticum aestivum–Pisum sativum) mixtures. An existing model for wheat was utilized and a new model for pea morphogenesis was developed. Both models were then used to assess the effects of architectural variations on light partitioning. Methods: First, a deterministic model (L-Pea) was developed in order to obtain dynamic reconstructions of pea architecture. The L-Pea model is based on L-systems formalism and consists of modules for ‘vegetative development’ and ‘organ extension’. A tripartite simulator was then built up from pea and wheat models interfaced with a radiative transfer model. Architectural parameters from both plant models, selected on the basis of their contribution to leaf area index (LAI), height and leaf geometry, were then modified in order to generate contrasting architectures of wheat and pea. Key Results: By scaling down the analysis to the organ level, it could be shown that the number of branches/tillers and the length of internodes significantly determined the partitioning of light within mixtures. Temporal relationships between light partitioning and the LAI and height of the different species showed that light capture was mainly related to the architectural traits involved in plant LAI during the early stages of development, and in plant height during the onset of interspecific competition. Conclusions: In silico experiments enabled the study of the intrinsic effects of architectural parameters on the partitioning of light in crop mixtures of wheat and pea. The findings show that plant architecture is an important criterion for the identification/breeding of plant ideotypes, particularly with respect to light partitioning. PMID:24907314
NASA Astrophysics Data System (ADS)
Bernard, Julien; Eychenne, Julia; Le Pennec, Jean-Luc; Narváez, Diego
2016-08-01
How and how much the mass of juvenile magma is split between vent-derived tephra, PDC deposits and lavas (i.e., mass partition) is related to eruption dynamics and style. Estimating such mass partitioning budgets may prove important for hazard evaluation purposes. We calculated the volume of each product emplaced during the August 2006 paroxysmal eruption of Tungurahua volcano (Ecuador) and converted it into masses using high-resolution grainsize, componentry and density data. This data set is one of the first complete descriptions of mass partitioning associated with a VEI 3 andesitic event. The scoria fall deposit, near-vent agglutinate and lava flow include 28, 16 and 12 wt. % of the erupted juvenile mass, respectively. Much (44 wt. %) of the juvenile material fed pyroclastic density currents (i.e., dense flows, dilute surges and co-PDC plumes), highlighting that tephra fall deposits do not adequately depict the size and fragmentation processes of moderate PDC-forming events. The main parameters controlling the mass partitioning are the type of magmatic fragmentation, conditions of magma ascent, and crater area topography. Comparisons of our data set with other PDC-forming eruptions of different style and magma composition suggest that moderate andesitic eruptions are proportionally more prone to produce PDCs than any other eruption type. This finding may be explained by the relatively low magmatic fragmentation efficiency of moderate andesitic eruptions. These mass partitioning data reveal important trends that may be critical for hazard assessment, notably at frequently active andesitic edifices.
Chen, Ying; Cai, Xiaoyu; Jiang, Long; Li, Yu
2016-02-01
Based on the experimental data of octanol-air partition coefficients (KOA) for 19 polychlorinated biphenyl (PCB) congeners, two types of QSAR methods, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA), are used to establish 3D-QSAR models, with structural parameters as independent variables and logKOA values as the dependent variable, using the Sybyl software to predict the KOA values of the remaining 190 PCB congeners. The whole data set (19 compounds) was divided into a training set (15 compounds) for model generation and a test set (4 compounds) for model validation. As a result, the cross-validation correlation coefficient (q(2)) obtained by the CoMFA and CoMSIA models (shuffled 12 times) was in the range of 0.825-0.969 (>0.5), the correlation coefficient (r(2)) obtained was in the range of 0.957-1.000 (>0.9), and the SEP (standard error of prediction) of the test set was within the range of 0.070-0.617, indicating that the models were robust and predictive. For a randomly selected model, the CoMFA analysis revealed that the steric and electrostatic fields explained 23.9% and 76.1% of the variance, respectively, while the CoMSIA analysis attributed 0.6%, 92.6%, and 6.8% of the variance to the steric, electrostatic and hydrophobic fields, respectively. The electrostatic field was determined to be the primary factor governing logKOA. The correlation analysis of the relationship between the number of Cl atoms and the average logKOA values of PCBs indicated that logKOA values gradually increased as the number of Cl atoms increased. Simultaneously, related studies on PCB detection in the Arctic and Antarctic areas revealed that higher logKOA values indicate a stronger PCB migration ability. From the CoMFA and CoMSIA contour maps, logKOA decreased when substituents possessed electropositive groups at the 2-, 3-, 3'-, 5- and 6-positions, which could reduce the PCB migration ability. These results are expected to be beneficial in predicting logKOA values of PCB homologues and derivatives and in providing a theoretical foundation for further elucidation of the global migration behaviour of PCBs. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Barcos, L.; Díaz-Azpiroz, M.; Balanyá, J. C.; Expósito, I.; Jiménez-Bonilla, A.; Faccenna, C.
2016-07-01
The combination of analytical and analogue models gives new opportunities to better understand the kinematic parameters controlling the evolution of transpression zones. In this work, we carried out a set of analogue models using the kinematic parameters of transpressional deformation obtained by applying a general triclinic transpression analytical model to a tabular-shaped shear zone in the external Betic Chain (Torcal de Antequera massif). According to the results of the analytical model, we used two oblique convergence angles to reproduce the main structural and kinematic features of the structural domains observed within the Torcal de Antequera massif (α = 15° for the outer domains and α = 30° for the inner domain). Two parallel inclined backstops (one fixed and the other mobile) reproduce the geometry of the shear zone walls of the natural case. Additionally, we applied the digital particle image velocimetry (PIV) method to calculate the velocity field of the incremental deformation. Our results suggest that the spatial distribution of the main structures observed in the Torcal de Antequera massif reflects different modes of strain partitioning and strain localization between two domain types, which are related to the variation in the oblique convergence angle and the presence of steep planar velocity and rheological discontinuities (the shear zone walls in the natural case). In the 15° model, strain partitioning is simple and strain localization is high: a single narrow shear zone is developed close and parallel to the fixed backstop, bounded by strike-slip faults and internally deformed by R and P shears. In the 30° model, strain partitioning is strong, generating regularly spaced oblique-to-the-backstop thrusts and strike-slip faults. At the final stages of the 30° experiment, deformation affects the entire model box. Our results show that the application of analytical modelling to natural transpressive zones related to upper crustal deformation helps constrain the geometrical parameters of analogue models.
The total position-spread tensor: Spin partition
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Khatib, Muammar, E-mail: elkhatib@irsamc.ups-tlse.fr; Evangelisti, Stefano, E-mail: stefano@irsamc.ups-tlse.fr; Leininger, Thierry, E-mail: Thierry.Leininger@irsamc.ups-tlse.fr
2015-03-07
The Total Position Spread (TPS) tensor, defined as the second-moment cumulant of the position operator, is a key quantity to describe the mobility of electrons in a molecule or an extended system. In the present investigation, the partition of the TPS tensor according to spin variables is derived and discussed. It is shown that, while the spin-summed TPS gives information on charge mobility, the spin-partitioned TPS tensor becomes a powerful tool that provides information about spin fluctuations. The case of the hydrogen molecule is treated, both analytically, by using a 1s Slater-type orbital, and numerically, at Full Configuration Interaction (FCI) level with a V6Z basis set. It is found that, for very large inter-nuclear distances, the partitioned tensor grows quadratically with the distance in some of the low-lying electronic states. This fact is related to the presence of entanglement in the wave function. Non-dimerized open chains described by a model Hubbard Hamiltonian and linear hydrogen chains H_n (n ≥ 2), composed of equally spaced atoms, are also studied at FCI level. The hydrogen systems show the presence of marked maxima for the spin-summed TPS (corresponding to a high charge mobility) when the inter-nuclear distance is about 2 bohrs. This fact can be associated with the presence of a Mott transition occurring in this region. The spin-partitioned TPS tensor, on the other hand, grows quadratically at long distances, a fact that corresponds to the high spin mobility in a magnetic system.
NASA Astrophysics Data System (ADS)
Chen, B.; Chehdi, K.; De Oliveria, E.; Cariou, C.; Charbonnier, B.
2015-10-01
In this paper, a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground truth data constitute serious handicaps, especially over large areas, which can be covered by airborne or satellite images. The developed classification approach allows i) successive partitioning of the data into several levels, or partitions, in which the main classes are first identified; ii) automatic estimation of the number of classes at each level without any end-user help; iii) nonsystematic subdivision of the classes of a partition Pj to form a partition Pj+1; and iv) a stable partitioning result for the same data set from one run of the method to another. The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised. It estimates, at each level, the optimal number of classes and the final partition without any end-user intervention.
Abraham, Michael H; Gola, Joelle M R; Ibrahim, Adam; Acree, William E; Liu, Xiangli
2014-07-01
There is considerable interest in the blood-tissue distribution of agrochemicals, and a number of researchers have developed experimental methods for in vitro distribution. These methods involve the determination of saline-blood and saline-tissue partitions; not only are they indirect, but they do not yield the required in vivo distribution. The authors set out equations for gas-tissue and blood-tissue distribution, for partition from water into skin and for permeation from water through human skin. Together with Abraham descriptors for the agrochemicals, these equations can be used to predict values for all of these processes. The present predictions compare favourably with experimental in vivo blood-tissue distribution where available. The predictions require no more than simple arithmetic. The present method represents a much easier and much more economic way of estimating blood-tissue partitions than the method that uses saline-blood and saline-tissue partitions. It has the added advantages of yielding the required in vivo partitions and being easily extended to the prediction of partition of agrochemicals from water into skin and permeation from water through skin. © 2013 Society of Chemical Industry.
3d expansions of 5d instanton partition functions
NASA Astrophysics Data System (ADS)
Nieri, Fabrizio; Pan, Yiwen; Zabzine, Maxim
2018-04-01
We propose a set of novel expansions of Nekrasov's instanton partition functions. Focusing on 5d supersymmetric pure Yang-Mills theory with unitary gauge group on C^2_{q,t^{-1}} × S^1, we show that the instanton partition function admits expansions in terms of partition functions of unitary gauge theories living on the 3d subspaces C_q × S^1, C_{t^{-1}} × S^1 and their intersection along S^1. These new expansions are natural from the BPS/CFT viewpoint, as they can be matched with W_{q,t} correlators involving an arbitrary number of screening charges of two kinds. Our constructions generalize and interpolate existing results in the literature.
Hybrid Discrete-Continuous Markov Decision Processes
NASA Technical Reports Server (NTRS)
Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich
2003-01-01
This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the one-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Dealing with several continuous dimensions raises new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
NASA Astrophysics Data System (ADS)
Clesi, V.; Bouhifd, M. A.; Bolfan-Casanova, N.; Manthilake, G.; Fabbrizio, A.; Andrault, D.
2016-11-01
This study investigates the metal-silicate partitioning of Ni, Co, V, Cr, Mn and Fe during core-mantle differentiation of terrestrial planets under hydrous conditions. For this, we equilibrated a molten hydrous CI chondrite model composition with various Fe-rich alloys in the system Fe-C-Ni-Co-Si-S in a multi-anvil apparatus over a range of P, T, fO2 and water content (5-20 GPa, 2073-2500 K, from 1 to 5 log units below the iron-wüstite (IW) buffer and for X_H2O varying from 500 ppm to 1.5 wt%). By comparing the present experiments with the available data sets on dry systems, we observe that the effect of water on the partition coefficients of moderately siderophile elements is only moderate. For example, for iron we observed a decrease in the metal-silicate partition coefficient of Fe (D_Fe met/sil) from 9.5 to 4.3 with increasing water content of the silicate melt, from 0 to 1.44 wt%, respectively. The evolution of the metal-silicate partition coefficients of Ni, Co, V, Cr, Mn and Fe is modelled based on sets of empirical parameters. These empirical models are then used to refine the process of core segregation during accretion of Mars and the Earth. It appears that the likely presence of 3.5 wt% water on Mars during the core-mantle segregation could account for ∼74% of the FeO content of the Martian mantle. In contrast, water does not play such an important role for the Earth; only 4-6% of the FeO content of its mantle could be due to the water-induced Fe-oxidation, for a likely initial water concentration of 1.8 wt%. Thus, in order to reproduce the present-day FeO content of 8 wt% in the mantle, the Earth could initially have been accreted from a large fraction (between 85% and 90%) of reducing bodies (similar to EH chondrites), with 10-15% of the Earth's mass likely made of more oxidized components that introduced the major part of water and FeO to the Earth. This high proportion of enstatite chondrites in the original constitution of the Earth is consistent with the 17O, 48Ca, 50Ti, 62Ni and 90Mo isotopic study by Dauphas et al. (2014). If we assume that the CI-chondrite was oxidized during accretion, its intrinsically high water content suggests a maximum initial water concentration in the range of 1.2-1.8 wt% for the Earth, and 2.5-3.5 wt% for Mars.
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
A Novel Method for Discovering Fuzzy Sequential Patterns Using the Simple Fuzzy Partition Method.
ERIC Educational Resources Information Center
Chen, Ruey-Shun; Hu, Yi-Chung
2003-01-01
Discusses sequential patterns, data mining, knowledge acquisition, and fuzzy sequential patterns described by natural language. Proposes a fuzzy data mining technique to discover fuzzy sequential patterns by using the simple partition method which allows the linguistic interpretation of each fuzzy set to be easily obtained. (Author/LRW)
Yoink: An interaction-based partitioning API.
Zheng, Min; Waller, Mark P
2018-05-15
Herein, we describe the implementation details of our interaction-based partitioning API (application programming interface) called Yoink for QM/MM modeling and fragment-based quantum chemistry studies. Interactions are detected by computing density descriptors such as reduced density gradient, density overlap regions indicator, and single exponential decay detector. Only molecules having an interaction with a user-definable QM core are added to the QM region of a hybrid QM/MM calculation. Moreover, a set of molecule pairs having density-based interactions within a molecular system can be computed in Yoink, and an interaction graph can then be constructed. Standard graph clustering methods can then be applied to construct fragments for further quantum chemical calculations. The Yoink API is licensed under Apache 2.0 and can be accessed via yoink.wallerlab.org. © 2018 Wiley Periodicals, Inc.
Censored quantile regression with recursive partitioning-based weights
Wey, Andrew; Wang, Lan; Rudser, Kyle
2014-01-01
Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800
Application of a Model for Quenching and Partitioning in Hot Stamping of High-Strength Steel
NASA Astrophysics Data System (ADS)
Zhu, Bin; Liu, Zhuang; Wang, Yanan; Rolfe, Bernard; Wang, Liang; Zhang, Yisheng
2018-04-01
Application of the quenching and partitioning process in hot stamping has proven to be an effective method to improve the plasticity of advanced high-strength steels (AHSSs). In this study, the hot stamping and partitioning process of the advanced high-strength steel 30CrMnSi2Nb is investigated with a hot stamping mold. Given a specific partitioning time and temperature, the influence of quenching temperature on the phase volume fractions of the evolving microstructure and the mechanical properties of the above steel is studied in detail. In addition, a model for the quenching and partitioning process is applied to predict the carbon diffusion and interface migration during partitioning, which determine the retained austenite volume fraction and final properties of the part. The predicted trends of the retained austenite volume fraction agree with the experimental results. In both cases, the volume fraction of retained austenite first increases and then decreases with increasing quenching temperature. The optimal quenching temperature is approximately 290 °C for 30CrMnSi2Nb with the partition conditions of 425 °C and 20 seconds. It is suggested that the model can be used to help determine the process parameters to obtain as much retained austenite as possible.
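The dependence of retained austenite on quenching temperature can be sketched with standard constrained-carbon-equilibrium bookkeeping: a Koistinen-Marburger estimate of the first-quench martensite fraction, idealized full carbon partitioning into the remaining austenite, and a second Koistinen-Marburger step on the carbon-enriched austenite during the final quench. All compositions, Ms relations, and rate constants below are generic textbook-style assumptions, not the calibrated model or data for 30CrMnSi2Nb.

```python
# Sketch: retained austenite vs. quenching temperature in a Q&P cycle,
# using Koistinen-Marburger kinetics and idealized full carbon partitioning.
# All parameter values are illustrative assumptions.
import numpy as np

C0 = 0.30          # bulk carbon content, wt% (assumed)
MS0 = 380.0        # Ms of the untransformed austenite, deg C (assumed)
KM_ALPHA = 0.011   # Koistinen-Marburger rate constant, 1/K
DMS_PER_C = 423.0  # Ms depression per wt% C (Andrews-type coefficient, assumed)
ROOM_T = 25.0      # final quench temperature, deg C

def retained_austenite(quench_T):
    # First quench: martensite fraction formed between Ms and the quench temperature.
    f_m1 = 1.0 - np.exp(-KM_ALPHA * max(MS0 - quench_T, 0.0))
    f_aust = 1.0 - f_m1
    # Idealized partitioning: all carbon ends up in the remaining austenite.
    c_aust = C0 / max(f_aust, 1e-6)
    # Carbon-enriched austenite has a depressed Ms for the final quench.
    ms_enriched = MS0 - DMS_PER_C * (c_aust - C0)
    f_m2 = 1.0 - np.exp(-KM_ALPHA * max(ms_enriched - ROOM_T, 0.0))
    return f_aust * (1.0 - f_m2)

for tq in range(200, 401, 25):
    print(f"quench T = {tq:3d} C  ->  retained austenite ~ {retained_austenite(tq):.3f}")
```

Running the sketch reproduces the qualitative behavior described above: retained austenite first increases and then decreases as the quenching temperature rises, with a maximum at an intermediate temperature.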
Merging Surface Reconstructions of Terrestrial and Airborne LIDAR Range Data
2009-05-19
NASA Astrophysics Data System (ADS)
Ise, T.; Litton, C. M.; Giardina, C. P.; Ito, A.
2009-12-01
Plant partitioning of carbon (C) to above- vs. belowground tissues, to growth vs. respiration, and to short- vs. long-lived tissues exerts a large influence on ecosystem structure and function, with implications for the global C budget. Importantly, outcomes of process-based terrestrial vegetation models are likely to vary substantially with different C partitioning algorithms. However, controls on C partitioning patterns remain poorly quantified, and studies have yielded variable, and at times contradictory, results. A recent meta-analysis of forest studies suggests that the ratio of net primary production (NPP) to gross primary production (GPP) is fairly conservative across large scales. To illustrate the effect of this meta-analysis-based partitioning scheme (MPS), we applied MPS to a satellite-based (MODIS) terrestrial GPP product to estimate NPP and compared the results with two global process-based vegetation models (Biome-BGC and VISIT), in order to examine the influence of C partitioning on C budgets of woody plants. Due to the temperature dependence of maintenance respiration, NPP/GPP predicted by the process-based models increased with latitude, while the ratio remained constant with MPS. Overall, global NPP estimated with MPS was 17% and 27% lower than the process-based models for temperate and boreal biomes, respectively, with smaller differences in the tropics. Global equilibrium biomass of woody plants was then calculated from the NPP estimates and tissue turnover rates from VISIT. Since turnover rates differed greatly across tissue types (i.e., metabolically active vs. structural), global equilibrium biomass estimates were sensitive to the partitioning scheme employed. The MPS estimate of global woody biomass was 7-21% lower than that of the process-based models. In summary, we found that model output for NPP and equilibrium biomass was quite sensitive to the choice of C partitioning scheme. (Figure caption: Carbon use efficiency (CUE; NPP/GPP) by forest biome and the globe; values are means for 2001-2006.)
Salgado, J Cristian; Andrews, Barbara A; Ortuzar, Maria Fernanda; Asenjo, Juan A
2008-01-18
The prediction of the partition behaviour of proteins in aqueous two-phase systems (ATPS) using mathematical models based on their amino acid composition was investigated. The predictive models are based on the average surface hydrophobicity (ASH). The ASH was estimated by means of models that use the three-dimensional structure of proteins and by models that use only the amino acid composition of proteins. These models were evaluated for a set of 11 proteins with known experimental partition coefficients in four ATPSs (polyethylene glycol (PEG) 4000/phosphate, PEG/sulfate, PEG/citrate and PEG/dextran), considering three levels of NaCl concentration (0.0% w/w, 0.6% w/w and 8.8% w/w). The results indicate that such prediction is feasible, even though the quality of the prediction depends strongly on the ATPS and its operational conditions, such as the NaCl concentration. The ATPS 0 model, which uses the three-dimensional structure, obtains results similar to those given by previous models based on variables measured in the laboratory. In addition, it maintains the main characteristics of the hydrophobic resolution and intrinsic hydrophobicity reported before. Three mathematical models, ATPS I-III, based only on the amino acid composition were evaluated. The best results were obtained by the ATPS I model, which assumes that all of the amino acids are completely exposed. The performance of the ATPS I model follows the behaviour reported previously, i.e. its correlation coefficients improve as the NaCl concentration increases in the system and, therefore, the effect of the protein hydrophobicity prevails over other effects such as charge or size. Its best predictive performance was obtained for the PEG/dextran system at high NaCl concentration. An increase in the predictive capacity of at least 54.4% with respect to the models which use the three-dimensional structure of the protein was obtained for that system. In addition, the ATPS I model exhibits high correlation coefficients in that system, higher than 0.88 on average. The ATPS I model exhibited correlation coefficients higher than 0.67 for the rest of the ATPSs at high NaCl concentration. Finally, we tested our best model, the ATPS I model, on the prediction of the partition coefficient of the protein invertase. We found that the predictive capacities of the ATPS I model are better in PEG/dextran systems, where the relative error of the prediction with respect to the experimental value is 15.6%.
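A minimal sketch of the composition-only idea (the ATPS I assumption that every residue is fully exposed): compute an average surface hydrophobicity from amino acid frequencies and map it to a partition coefficient with a linear calibration. The hydrophobicity scale values, the regression coefficients, and the toy sequence below are placeholders, not the published model.

```python
# Sketch of an ATPS I-style prediction: average surface hydrophobicity (ASH)
# from amino acid composition only (all residues assumed fully exposed),
# then a linear map to log K.  Scale values and coefficients are placeholders.
from collections import Counter

# Placeholder per-residue hydrophobicity values (arbitrary illustrative scale).
HYDROPHOBICITY = {
    "A": 0.31, "R": -1.01, "N": -0.60, "D": -0.77, "C": 1.54,
    "Q": -0.22, "E": -0.64, "G": 0.00, "H": 0.13, "I": 1.80,
    "L": 1.70, "K": -0.99, "M": 1.23, "F": 1.79, "P": 0.72,
    "S": -0.04, "T": 0.26, "W": 2.25, "Y": 0.96, "V": 1.22,
}

def average_surface_hydrophobicity(sequence: str) -> float:
    counts = Counter(sequence.upper())
    total = sum(counts[a] for a in HYDROPHOBICITY)
    return sum(HYDROPHOBICITY[a] * counts[a] for a in HYDROPHOBICITY) / total

def predict_log_k(sequence: str, slope: float = 2.0, intercept: float = -0.5) -> float:
    # Hypothetical linear calibration log K = slope * ASH + intercept.
    return slope * average_surface_hydrophobicity(sequence) + intercept

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy sequence
print(f"ASH = {average_surface_hydrophobicity(seq):.3f}, "
      f"predicted log K = {predict_log_k(seq):.2f}")
```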
Off-diagonal series expansion for quantum partition functions
NASA Astrophysics Data System (ADS)
Hen, Itay
2018-05-01
We derive an integral-free thermodynamic perturbation series expansion for quantum partition functions which enables an analytical term-by-term calculation of the series. The expansion is carried out around the partition function of the classical component of the Hamiltonian with the expansion parameter being the strength of the off-diagonal, or quantum, portion. To demonstrate the usefulness of the technique we analytically compute to third order the partition functions of the 1D Ising model with longitudinal and transverse fields, and the quantum 1D Heisenberg model.
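For context, the quantity being expanded can be checked numerically on small systems. The sketch below computes the exact partition function of a short 1D Ising chain with longitudinal and transverse fields by dense diagonalization and compares it with the purely classical (transverse-field-free) value around which such an expansion is organized; system size and couplings are arbitrary illustrative choices.

```python
# Exact partition function of a small 1D Ising chain with longitudinal (h_z)
# and transverse (h_x) fields, via dense diagonalization.  The h_x = 0 value
# is the classical partition function that serves as the expansion point.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

def op_on_site(op, site, n):
    mats = [np.eye(2)] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising_hamiltonian(n, J=1.0, hz=0.3, hx=0.5, periodic=True):
    H = np.zeros((2**n, 2**n))
    bonds = [(i, (i + 1) % n) for i in range(n if periodic else n - 1)]
    for i, j in bonds:
        H -= J * op_on_site(sz, i, n) @ op_on_site(sz, j, n)
    for i in range(n):
        H -= hz * op_on_site(sz, i, n) + hx * op_on_site(sx, i, n)
    return H

def partition_function(H, beta):
    return float(np.sum(np.exp(-beta * np.linalg.eigvalsh(H))))

n, beta = 8, 1.0
Z_full = partition_function(ising_hamiltonian(n, hx=0.5), beta)
Z_classical = partition_function(ising_hamiltonian(n, hx=0.0), beta)
print(f"Z (quantum) = {Z_full:.3f}, Z (classical, hx = 0) = {Z_classical:.3f}")
```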
Dynamic connectivity regression: Determining state-related changes in brain connectivity
Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.
2014-01-01
Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408
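A rough sketch of the two ingredients, built from off-the-shelf pieces: a single candidate change point is chosen by comparing penalized sparse Gaussian graphical model fits (graphical lasso) on the two resulting segments. The actual DCR method uses a different greedy partitioning search and permutation/bootstrap inference, so this is only an illustration on synthetic data.

```python
# Toy illustration of change-point detection via per-segment sparse
# Gaussian graphical models (graphical lasso).  Synthetic data; this is not
# the actual DCR algorithm or its inference procedure.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
T, p, true_cp = 300, 5, 180
cov1 = np.eye(p); cov1[0, 1] = cov1[1, 0] = 0.7      # ROIs 0-1 coupled early
cov2 = np.eye(p); cov2[2, 3] = cov2[3, 2] = 0.7      # ROIs 2-3 coupled late
X = np.vstack([rng.multivariate_normal(np.zeros(p), cov1, true_cp),
               rng.multivariate_normal(np.zeros(p), cov2, T - true_cp)])

def segment_loglik(data):
    model = GraphicalLassoCV().fit(data)
    return model.score(data) * len(data)   # total log-likelihood of the segment

best_cp, best_score = None, -np.inf
for cp in range(60, T - 60, 20):           # coarse grid of candidate splits
    score = segment_loglik(X[:cp]) + segment_loglik(X[cp:])
    if score > best_score:
        best_cp, best_score = cp, score

print(f"estimated change point ~ t = {best_cp} (true = {true_cp})")
```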
NASA Astrophysics Data System (ADS)
Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil
2001-07-01
We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array detector (FPA) and a Michelson step scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA used consists of 64x64 pixels, each 61 μm2, and has a spectral range of 2-10.5 microns. Each imaging data set was collected at 16 cm-1 spectral resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, which demonstrate the successful partitioning of distinct tissue type domains. Additionally, analysis of differences in the centroid spectra corresponding to different tissue types provides an insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.
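Fuzzy C-Means itself is compact enough to sketch directly. The implementation below clusters synthetic spectra-like vectors and is a generic version of the algorithm; the fuzziness exponent, cluster count, and data are illustrative choices, not the authors' processing pipeline.

```python
# Minimal Fuzzy C-Means clustering of spectra-like vectors.
# Generic implementation for illustration; parameters are arbitrary choices.
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return centroids, U

# Synthetic "spectra": three groups with different band positions.
rng = np.random.default_rng(1)
axis = np.linspace(0, 1, 200)
def band(center): return np.exp(-((axis - center) ** 2) / 0.002)
X = np.vstack([band(c) + 0.05 * rng.standard_normal((60, 200))
               for c in (0.3, 0.5, 0.7)])

centroids, memberships = fuzzy_c_means(X, n_clusters=3)
print("hard labels of first five spectra:", memberships[:5].argmax(axis=1))
```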
Tanabe, Akifumi S
2011-09-01
Proportional and separate models, able to apply a different combination of substitution rate matrix (SRM) and among-site rate variation model (ASRVM) to each locus, are frequently used in phylogenetic studies of multilocus data. A proportional model assumes that branch lengths are proportional among partitions, and a separate model assumes that each partition has an independent set of branch lengths. However, the selection from among nonpartitioned (i.e., a common combination of models is applied to all-loci concatenated sequences), proportional and separate models is usually based on the researcher's preference rather than on any information criteria. This study describes two programs, 'Kakusan4' (for DNA sequences) and 'Aminosan' (for amino-acid sequences), which allow the selection of evolutionary models based on several types of information criteria. The programs can handle both multilocus and single-locus data, in addition to providing an easy-to-use wizard interface and a noninteractive command line interface. In the case of multilocus data, SRMs and ASRVMs are compared at each locus and at all-loci concatenated sequences, after which nonpartitioned, proportional and separate models are compared based on information criteria. The programs also provide model configuration files for mrbayes, paup*, phyml, raxml and Treefinder to support further phylogenetic analysis using a selected model. When likelihoods are optimized by Treefinder, the best-fit models were found to differ depending on the data set. Furthermore, differences in the information criteria among nonpartitioned, proportional and separate models were much larger than those among the nonpartitioned models. These findings suggest that selecting from nonpartitioned, proportional and separate models results in a better phylogenetic tree. Kakusan4 and Aminosan are available at http://www.fifthdimension.jp/. They are licensed under the GNU GPL Ver. 2, and are able to run on Windows, MacOS X and Linux. © 2011 Blackwell Publishing Ltd.
Model for the partition of neutral compounds between n-heptane and formamide.
Karunasekara, Thushara; Poole, Colin F
2010-04-01
Partition coefficients for 84 varied compounds were determined for the n-heptane-formamide biphasic partition system and used to derive a model for the distribution of neutral compounds between the n-heptane-rich and formamide-rich layers. The partition coefficients, log K(p), were correlated through the solvation parameter model, giving log K(p)=0.083+0.559E-2.244S-3.250A-1.614B+2.387V with a multiple correlation coefficient of 0.996, standard error of the estimate 0.139, and Fisher statistic 1791. In the model, the solute descriptors are excess molar refraction, E, dipolarity/polarizability, S, overall hydrogen-bond acidity, A, overall hydrogen-bond basicity, B, and McGowan's characteristic volume, V. The model is expected to be able to estimate further values of the partition coefficient to about 0.13 log units for the same descriptor space covered by the calibration compounds (E=-0.26-2.29, S=0-1.93, A=0-1.25, B=0.02-1.58, and V=0.78-2.50). The n-heptane-formamide partition system is shown to have a selectivity different from other totally organic biphasic systems and to be suitable for estimating descriptor values for compounds of low water solubility and/or stability.
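The fitted equation can be applied directly. The sketch below codes the reported coefficients; the descriptor values for the example solute are illustrative placeholders (real Abraham descriptors would be taken from published compilations), chosen to lie within the calibrated descriptor space quoted above.

```python
# Solvation parameter model for n-heptane/formamide partition coefficients,
# log Kp = 0.083 + 0.559*E - 2.244*S - 3.250*A - 1.614*B + 2.387*V,
# using the coefficients reported above.  Example descriptor values are
# placeholders for illustration only.
def log_kp(E, S, A, B, V):
    return 0.083 + 0.559 * E - 2.244 * S - 3.250 * A - 1.614 * B + 2.387 * V

# Hypothetical solute descriptors (within the calibrated descriptor space).
example = dict(E=0.80, S=0.90, A=0.00, B=0.45, V=1.10)
print(f"predicted log Kp = {log_kp(**example):.2f}")
```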
Mouret, Jean-Roch; Sablayrolles, Jean-Marie; Farines, Vincent
2015-04-01
The knowledge of gas-liquid partitioning of aroma compounds during winemaking fermentation could allow optimization of fermentation management, maximizing concentrations of positive markers of aroma and minimizing formation of molecules, such as hydrogen sulfide (H2S), responsible for defects. In this study, the effect of the main fermentation parameters on the gas-liquid partition coefficients (Ki) of H2S was assessed. The Ki for this highly volatile sulfur compound was measured in water by an original semistatic method developed in this work for the determination of gas-liquid partitioning. This novel method was validated and then used to determine the Ki of H2S in synthetic media simulating must, fermenting musts at various steps of the fermentation process, and wine. Ki values were found to be mainly dependent on the temperature but also varied with the composition of the medium, especially with the glucose concentration. Finally, a model was developed to quantify the gas-liquid partitioning of H2S in synthetic media simulating must to wine. This model allowed a very accurate prediction of the partition coefficient of H2S: the difference between observed and predicted values never exceeded 4%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Møyner, Olav, E-mail: olav.moyner@sintef.no; Lie, Knut-Andreas, E-mail: knut-andreas.lie@sintef.no
2016-01-01
A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restricted-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell-centered, conservative, finite-volume method, it is applicable to any flow model in which one can isolate a pressure equation. Herein, we only discuss single and two-phase incompressible models. Compressible flow, e.g., as modeled by the black-oil equations, is discussed in a separate paper.
NASA Astrophysics Data System (ADS)
Mueller, S.; Hasenclever, J.; Garbe-Schönberg, D.; Koepke, J.; Hoernle, K.
2017-12-01
The accretion mechanisms forming oceanic crust at fast spreading ridges are still under controversial discussion. Thermal, petrological, and geochemical observations predict different end-member models, i.e., the gabbro glacier and the sheeted sill model. They all bear implications for heat transport, temperature distribution, mode of crystallization and hydrothermal heat removal over crustal depth. In a typical MOR setting, temperature is the key factor driving partitioning of incompatible elements during crystallization. LA-ICP-MS data for co-genetic plagioclase and clinopyroxene in gabbros along a transect through the plutonic section of paleo-oceanic crust (Wadi Gideah Transect, Oman ophiolite) reveal that REE partition coefficients are relatively constant in the layered gabbro section but increase for the overlying foliated gabbros, with an enhanced offset towards HREEs. Along with a systematic enrichment of REEs with crustal height, these trends are consistent with a system dominated by in-situ crystallization for the lower gabbros and a change in crystallization mode for the upper gabbros. Sun and Liang (2017) used experimental REE partitioning data to calibrate a new REE-in-plagioclase-clinopyroxene thermometer, which we used here to establish the first crystallization temperature-depth profile through the oceanic crust, enabling a direct comparison with thermal models of crustal accretion. Our results indicate crystallization temperatures of about 1220±8°C for the layered gabbros and lower temperatures of 1175±8°C for the foliated gabbros, and a thermal minimum above the layered-to-foliated gabbro transition. Our findings are consistent with a hybrid accretion model for the oceanic crust. The thermal minimum is assumed to represent a zone where the descending crystal mushes originating from the axial melt lens meet with mushes that have crystallized in situ. These results can be used to verify and test thermal models (e.g., Maclennan et al., 2004, Theissen-Krah et al., 2016) and their predictions for heat flow and temperature distribution in the crust. Maclennan, J., Hulme, T., & Singh, S. C. (2004), G3, 5(2). / Sun, C., & Liang, Y. (2017), CMP, 172(4). / Theissen-Krah, S., Rüpke, L. H., & Hasenclever, J. (2016), GRL, 43(3).
A Formal Model of Partitioning for Integrated Modular Avionics
NASA Technical Reports Server (NTRS)
DiVito, Ben L.
1998-01-01
The aviation industry is gradually moving toward the use of integrated modular avionics (IMA) for civilian transport aircraft. An important concern for IMA is ensuring that applications are safely partitioned so they cannot interfere with one another. We have investigated the problem of ensuring safe partitioning and logical non-interference among separate applications running on a shared Avionics Computer Resource (ACR). This research was performed in the context of ongoing standardization efforts, in particular, the work of RTCA committee SC-182, and the recently completed ARINC 653 application executive (APEX) interface standard. We have developed a formal model of partitioning suitable for evaluating the design of an ACR. The model draws from the mathematical modeling techniques developed by the computer security community. This report presents a formulation of partitioning requirements, expressed first using conventional mathematical notation and then formalized in the language of SRI's Prototype Verification System (PVS). The approach is demonstrated on three candidate designs, each an abstraction of features found in real systems.
The 1923 Kanto earthquake reevaluated using a newly augmented geodetic data set
Nyst, M.; Nishimura, T.; Pollitz, F.F.; Thatcher, W.
2006-01-01
This study revisits the mechanism of the 1923 Ms = 7.9 Kanto earthquake in Japan. We derive a new source model and use it to assess quantitative and qualitative aspects of the accommodation of plate motion in the Kanto region. We use a new geodetic data set that consists of displacements from leveling and angle changes from triangulation measurements obtained in surveys between 1883 and 1927. Two unique aspects of our analysis are the inclusion of a large number of second-order triangulation measurements and the application of a correction to remove interseismic deformation. The geometry of the fault planes is adopted from a recent seismic reflection study of the Kanto region. We evaluate the minimum complexity necessary in the model to fit the data optimally. Our final uniform-slip elastic dislocation model consists of two adjacent ~20°-dipping low-angle planes accommodating reverse dextral slip of 6.0 m on the larger, eastern plane and 9.5 m on the smaller, western plane, with azimuths of 163° and 121°, respectively. The earthquake was located in the Sagami trough, where the Philippine Sea plate subducts under Honshu. Compared to the highly oblique angle of plate convergence, the coseismic slip on the large fault plane has a more orthogonal orientation to the strike of the plate boundary, suggesting that slip partitioning plays a role in accommodation of plate motion. What other structure is involved in the partitioning is unclear. Uplift records of marine coastal terraces in Sagami Bay document 7500 years of earthquake activity and predict average recurrence intervals of 400 years for events with vertical displacement profiles similar to those of the 1923 earthquake. This means that the average slip deficit per recurrence interval is ~50% of the relative plate convergence. These findings of plate motion partitioning and slip deficit lead us to suggest that instead of a simple recurrence model with characteristic earthquakes, additional mechanisms are necessary to describe the accommodation of deformation in the Kanto region. So far, obvious candidates for these alternative mechanisms have not been discovered. Copyright 2006 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Mann, Ute; Frost, Daniel J.; Rubie, David C.; Becker, Harry; Audétat, Andreas
2012-05-01
The apparent overabundance of the highly siderophile elements (HSEs: Pt-group elements, Re and Au) in the mantles of Earth, Moon and Mars has not been satisfactorily explained. Although late accretion of a chondritic component seems to provide the most plausible explanation, metal-silicate equilibration in a magma ocean cannot be ruled out due to a lack of HSE partitioning data suitable for extrapolations to the relevant high pressure and high temperature conditions. We provide a new data set of partition coefficients simultaneously determined for Ru, Rh, Pd, Re, Ir and Pt over a range of 3.5-18 GPa and 2423-2773 K. In multianvil experiments, molten peridotite was equilibrated in MgO single crystal capsules with liquid Fe-alloy that contained bulk HSE concentrations of 53.2-98.9 wt% (XFe = 0.03-0.67) such that oxygen fugacities of IW - 1.5 to IW + 1.6 (i.e. logarithmic units relative to the iron-wüstite buffer) were established at run conditions. To analyse trace concentrations of the HSEs in the silicate melt with LA-ICP-MS, two silicate glass standards (1-119 ppm Ru, Rh, Pd, Re, Ir, Pt) were produced and evaluated for this study. Using an asymmetric regular solution model we have corrected experimental partition coefficients to account for the differences between HSE metal activities in the multicomponent Fe-alloys and infinite dilution. Based on the experimental data, the P and T dependence of the partition coefficients (D) was parameterized. The partition coefficients of all HSEs studied decrease with increasing pressure and to a greater extent with increasing temperature. Except for Pt, the decrease with pressure is stronger below ˜6 GPa and much weaker in the range 6-18 GPa. This change might result from pressure induced coordination changes in the silicate liquid. Extrapolating the D values over a large range of potential P-T conditions in a terrestrial magma ocean (peridotite liquidus at P ⩽ 60-80 GPa) we conclude that the P-T-induced decrease of D would not have been sufficient to explain HSE mantle abundances by metal-silicate equilibration at a common set of P-T-oxygen fugacity conditions. Therefore, the mantle concentrations of most HSEs cannot have been established during core formation. The comparatively less siderophile Pd might have been partly retained in the magma ocean if effective equilibration pressures reached 35-50 GPa. To a much smaller extent this could also apply to Pt and Rh providing that equilibration pressures reached ⩾60 GPa in the late stage of accretion. With most of the HSE partition coefficients at 60 GPa still differing by 0.5-3 orders of magnitude, metal-silicate equilibration alone cannot have produced the observed near-chondritic HSE abundances of the mantles of the Earth as well as of the Moon or Mars. Our results show that an additional process, such as the accretion of a late veneer composed of some type of chondritic material, was required. The results, therefore, support recent hybrid models, which propose that the observed HSE signatures are a combined result of both metal-silicate partitioning as well as an overprint by late accretion.
Schmickl, Thomas; Karsai, Istvan
2014-01-01
We develop a model to produce plausible patterns of task partitioning in the ponerine ant Ectatomma ruidum based on the availability of living prey and prey corpses. The model is based on the organizational capabilities of a “common stomach” through which the colony utilizes the availability of a natural (food) substance as a major communication channel to regulate the income and expenditure of the very same substance. This communication channel also has a central role in regulating task partitioning of collective hunting behavior in a supply-and-demand-driven manner. Our model shows that task partitioning of the collective hunting behavior in E. ruidum can be explained by regulation due to a common stomach system. The saturation of the common stomach provides accessible information to individual ants so that they can adjust their hunting behavior accordingly by engaging in or abandoning stinging or transporting tasks. The common stomach is able to establish and stabilize an effective mix of workforce to exploit the prey population and to transport food into the nest. This system is also able to react to external perturbations, such as changes in prey density or the accumulation of food in the nest, in a de-centralized homeostatic way. Under stable conditions the system develops towards an equilibrium in colony size and prey density. Our model shows that organization of work through a common stomach system can allow Ectatomma ruidum to collectively forage for food in a robust, reactive and reliable way. The model is compared to previously published models that followed a different modeling approach. Based on our model analysis we also suggest a series of experiments for which our model gives plausible predictions. These predictions are used to formulate a set of testable hypotheses that should be investigated empirically in future experimentation. PMID:25493558
Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning
NASA Astrophysics Data System (ADS)
Schumacher, André; Haanpää, Harri
We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with
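The minimum weight dominating set subroutine mentioned above can be illustrated with the classical greedy set-cover heuristic, in which each node's candidate set is its closed neighborhood and sets are chosen by the best weight per newly dominated node. This is a centralized sketch of the idea only; the paper's contribution is an asynchronous distributed primal-dual variant, which this code does not reproduce.

```python
def greedy_mwds(weights, adj):
    """Greedy minimum-weight dominating set (Chvatal-style ratio rule).

    weights: dict node -> positive weight
    adj: dict node -> set of neighbours
    Returns a set of nodes dominating every node in the graph.
    """
    undominated = set(weights)
    chosen = set()
    while undominated:
        # Pick the node with the best weight per newly dominated node.
        best, best_ratio = None, float("inf")
        for v in weights:
            newly = ({v} | adj[v]) & undominated
            if not newly:
                continue
            ratio = weights[v] / len(newly)
            if ratio < best_ratio:
                best, best_ratio = v, ratio
        chosen.add(best)
        undominated -= {best} | adj[best]
    return chosen

# Toy 5-node path graph with unit weights.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
w = {v: 1.0 for v in adj}
print(greedy_mwds(w, adj))  # e.g. {1, 3}
```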
Generalization of multifractal theory within quantum calculus
NASA Astrophysics Data System (ADS)
Olemskoi, A.; Shuda, I.; Borisyuk, V.
2010-03-01
On the basis of the deformed series in quantum calculus, we generalize the partition function and the mass exponent of a multifractal, as well as the average of a random variable distributed over a self-similar set. For the partition function, such an expansion is shown to be determined by binomial-type combinations of the Tsallis entropies related to manifold deformations, while the mass exponent expansion generalizes the known relation τ_q = D_q(q - 1). We find the equation for the set of averages related to ordinary, escort, and generalized probabilities in terms of the deformed expansion as well. Multifractals related to the Cantor binomial set, exchange currency series, and porous-surface condensates are considered as examples.
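As a numerical illustration of the standard (undeformed) quantities being generalized above, the partition function Z_q(ε) = Σ_i μ_i^q of a binomial multiplicative measure can be computed at successive scales, the mass exponent τ_q estimated from the scaling Z_q ∝ ε^τ_q, and D_q recovered from τ_q = D_q(q − 1). The measure below lives on [0, 1] (a close relative of the Cantor binomial construction mentioned in the abstract) and the weight p = 0.3 is an arbitrary example value.

```python
import numpy as np

def binomial_measure(p, levels):
    """Weights of the binomial multiplicative cascade after `levels` subdivisions."""
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([mu * p, mu * (1.0 - p)])
    return mu  # 2**levels pieces, each of width eps = 2**-levels

p, qs = 0.3, [-2.0, 0.0, 2.0, 4.0]
levels = np.arange(6, 12)
log_eps = -levels * np.log(2.0)

for q in qs:
    logZ = [np.log(np.sum(binomial_measure(p, n) ** q)) for n in levels]
    tau_q = np.polyfit(log_eps, logZ, 1)[0]        # slope of log Z_q vs log eps
    D_q = tau_q / (q - 1.0) if q != 1.0 else float("nan")
    print(f"q = {q:+.1f}:  tau_q = {tau_q:.3f},  D_q = {D_q:.3f}")
```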
NASA Technical Reports Server (NTRS)
Colson, R. O.; Mckay, G. A.; Taylor, L. A.
1988-01-01
This paper presents a systematic thermodynamic analysis of the effects of temperature and composition on olivine/melt and low-Ca pyroxene/melt partitioning. Experiments were conducted in several synthetic basalts with a wide range of Fe/Mg, determining partition coefficients for Eu, Ca, Mn, Fe, Ni, Sm, Cd, Y, Yb, Sc, Al, Zr, and Ti and accurately modeling the changes in free energy for trace element exchange between crystal and melt as functions of the trace element size and charge. On the basis of this model, partition coefficients for olivine/melt and low-Ca pyroxene/melt can be predicted for a wide range of elements over a variety of basaltic bulk compositions and temperatures. Moreover, variations in partition coefficients during crystallization or melting can be modeled on the basis of changes in temperature and major element chemistry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitt, Jay P.; Bryce, David A.; Minteer, Shelley D.
The phospholipid-water partition coefficient is a commonly measured parameter that correlates with drug efficacy, small-molecule toxicity, and accumulation of molecules in biological systems in the environment. Despite the utility of this parameter, methods for measuring phospholipid-water partition coefficients are limited. This is due to the difficulty of making quantitative measurements in vesicle membranes or supported phospholipid bilayers, both of which are small-volume phases that challenge the sensitivity of many analytical techniques. In this paper, we employ in-situ confocal Raman microscopy to probe the partitioning of a model membrane-active compound, 2-(4-isobutylphenyl) propionic acid or ibuprofen, into both hybrid- and supported-phospholipid bilayers deposited on the pore walls of individual chromatographic particles. The large surface-area-to-volume ratio of chromatographic silica allows interrogation of a significant lipid bilayer area within a very small volume. The local phospholipid concentration within a confocal probe volume inside the particle can be as high as 0.5 M, which overcomes the sensitivity limitations of making measurements in the limited membrane areas of single vesicles or planar supported bilayers. Quantitative determination of ibuprofen partitioning is achieved by using the phospholipid acyl-chains of the within-particle bilayer as an internal standard. This approach is tested for measurements of pH-dependent partitioning of ibuprofen into both hybrid-lipid and supported-lipid bilayers within silica particles, and the results are compared with octanol-water partitioning and with partitioning into individual optically-trapped phospholipid vesicle membranes. Finally, the impact of ibuprofen partitioning on bilayer structure is evaluated for both within-particle model membranes and compared with the structural impacts of partitioning into vesicle lipid bilayers.
Kitt, Jay P; Bryce, David A; Minteer, Shelley D; Harris, Joel M
2018-06-05
The phospholipid-water partition coefficient is a commonly measured parameter that correlates with drug efficacy, small-molecule toxicity, and accumulation of molecules in biological systems in the environment. Despite the utility of this parameter, methods for measuring phospholipid-water partition coefficients are limited. This is due to the difficulty of making quantitative measurements in vesicle membranes or supported phospholipid bilayers, both of which are small-volume phases that challenge the sensitivity of many analytical techniques. In this work, we employ in situ confocal Raman microscopy to probe the partitioning of a model membrane-active compound, 2-(4-isobutylphenyl) propionic acid or ibuprofen, into both hybrid- and supported-phospholipid bilayers deposited on the pore walls of individual chromatographic particles. The large surface-area-to-volume ratio of chromatographic silica allows interrogation of a significant lipid bilayer area within a very small volume. The local phospholipid concentration within a confocal probe volume inside the particle can be as high as 0.5 M, which overcomes the sensitivity limitations of making measurements in the limited membrane areas of single vesicles or planar supported bilayers. Quantitative determination of ibuprofen partitioning is achieved by using the phospholipid acyl-chains of the within-particle bilayer as an internal standard. This approach is tested for measurements of pH-dependent partitioning of ibuprofen into both hybrid-lipid and supported-lipid bilayers within silica particles, and the results are compared with octanol-water partitioning and with partitioning into individual optically trapped phospholipid vesicle membranes. Additionally, the impact of ibuprofen partitioning on bilayer structure is evaluated for both within-particle model membranes and compared with the structural impacts of partitioning into vesicle lipid bilayers.
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
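The first of the two partitioning methods described above can be sketched directly: build a maximal weighted spanning forest, then delete forest edges whose weight falls below a chosen threshold; the surviving components are the clusters. The union-find implementation below is a generic sketch of that procedure, not the authors' formulation.

```python
def threshold_partition(nodes, edges, threshold):
    """Cluster `nodes` by cutting a maximal weighted spanning forest at `threshold`.

    edges: list of (weight, u, v) tuples on a weighted (similarity) graph.
    Returns a list of clusters (sets of nodes).
    """
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    # Kruskal on descending weights builds a maximal spanning forest; keeping
    # only edges >= threshold merges exactly the components of the thresholded forest.
    for w, u, v in sorted(edges, reverse=True):
        if w < threshold:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    clusters = {}
    for v in nodes:
        clusters.setdefault(find(v), set()).add(v)
    return list(clusters.values())

nodes = ["a", "b", "c", "d"]
edges = [(0.9, "a", "b"), (0.4, "b", "c"), (0.8, "c", "d"), (0.3, "a", "d")]
print(threshold_partition(nodes, edges, 0.5))  # [{'a', 'b'}, {'c', 'd'}]
```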
Elliptic supersymmetric integrable model and multivariable elliptic functions
NASA Astrophysics Data System (ADS)
Motegi, Kohei
2017-12-01
We investigate the elliptic integrable model introduced by Deguchi and Martin [Int. J. Mod. Phys. A 7, Suppl. 1A, 165 (1992)], which is an elliptic extension of the Perk-Schultz model. We introduce and study a class of partition functions of the elliptic model by using the Izergin-Korepin analysis. We show that the partition functions are expressed as a product of elliptic factors and elliptic Schur-type symmetric functions. This result resembles recent work by number theorists in which the correspondence between the partition functions of trigonometric models and the product of the deformed Vandermonde determinant and Schur functions was established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, Albert C.R., E-mail: albert@fisica.ufjf.br; Takakura, Flavio I., E-mail: takakura@fisica.ufjf.br; Abreu, Everton M.C., E-mail: evertonabreu@ufrrj.br
In this work we have obtained a higher-derivative Lagrangian for a charged fluid coupled with the electromagnetic field, and Dirac’s constraint analysis was carried out. A set of first-class constraints fixed by a noncovariant gauge condition was obtained. The path integral formalism was used to obtain the partition function for the corresponding higher-derivative Hamiltonian, and the Faddeev–Popov ansatz was used to construct an effective Lagrangian. Through the partition function, a Stefan–Boltzmann type law was obtained. - Highlights: • Higher-derivative Lagrangian for a charged fluid. • Electromagnetic coupling and Dirac’s constraint analysis. • Partition function through path integral formalism. • Stefan–Boltzmann-type law through the partition function.
A New Model for Optimal Mechanical and Thermal Performance of Cement-Based Partition Wall
Huang, Shiping; Hu, Mengyu; Cui, Nannan; Wang, Weifeng
2018-01-01
The prefabricated cement-based partition wall has been widely used in assembled buildings because of its high manufacturing efficiency, high-quality surface, and simple and convenient construction process. In this paper, a general porous partition wall that is made from cement-based materials was proposed to meet the optimal mechanical and thermal performance during transportation, construction and its service life. The porosity of the proposed partition wall is formed by elliptic-cylinder-type cavities. The finite element method was used to investigate the mechanical and thermal behaviour, which shows that the proposed model has distinct advantages over the current partition wall that is used in the building industry. It is found that, by controlling the eccentricity of the elliptic-cylinder cavities, the proposed wall stiffness can be adjusted to respond to the imposed loads and to improve the thermal performance, which can be used for the optimum design. Finally, design guidance is provided to obtain the optimal mechanical and thermal performance. The proposed model could be used as a promising candidate for partition wall in the building industry. PMID:29673176
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jeong
The research program reported here is focused on critical issues that represent conspicuous gaps in current understanding of rapid solidification, limiting our ability to predict and control microstructural evolution (i.e. morphological dynamics and microsegregation) at high undercooling, where conditions depart significantly from local equilibrium. More specifically, through careful application of phase-field modeling, using appropriate thin-interface and anti-trapping corrections and addressing important details such as transient effects and a velocity-dependent (i.e. adaptive) numerics, the current analysis provides a reasonable simulation-based picture of non-equilibrium solute partitioning and the corresponding oscillatory dynamics associated with single-phase rapid solidification and shows that this method is a suitable means for a self-consistent simulation of transient behavior and operating point selection under rapid growth conditions. Moving beyond the limitations of conventional theoretical/analytical treatments of non-equilibrium solute partitioning, these results serve to substantiate recent experimental findings and analytical treatments for single-phase rapid solidification. The departure from the equilibrium solid concentration at the solid-liquid interface was often observed during rapid solidification, and the energetics associated with non-equilibrium solute partitioning have been treated in detail, providing possible ranges of interface concentrations for a given growth condition. Use of these treatments for analytical description of specific single-phase dendritic and cellular operating point selection, however, requires a model for solute partitioning under a given set of growth conditions. Therefore, analytical solute trapping models which describe the chemical partitioning as a function of steady state interface velocities have been developed and widely utilized in most of the theoretical investigations of rapid solidification. However, these solute trapping models are not rigorously verified due to the difficulty of experimental measurement under rapid growth conditions. Moreover, since these solute trapping models include kinetic parameters which are difficult to directly measure from experiments, application of the solute trapping models or the associated analytic rapid solidification model is limited. These theoretical models for steady state rapid solidification which incorporate the solute trapping models do not describe the interdependency of solute diffusion, interface kinetics, and alloy thermodynamics. The phase-field approach allows calculating, spontaneously, the non-equilibrium growth effects of alloys and the associated time-dependent growth dynamics, without making the assumption that solute partitioning is an explicit function of velocity, as is the current convention. In the research described here, by utilizing the phase-field model in the thin-interface limit, incorporating the anti-trapping current term, more quantitatively valid interface kinetics and solute diffusion across the interface are calculated. In order to sufficiently resolve the physical length scales (i.e. interface thickness and diffusion boundary length), grid spacings are continually adjusted in calculations. The full trajectories of transient planar growth dynamics under rapid directional solidification conditions with different pulling velocities are described.
As a validation of the model, the predicted steady state conditions are consistent with the analytic approach for rapid growth. It was confirmed that rapid interface dynamics exhibits the abrupt acceleration of the planar front when the effect of the non-equilibrium solute partitioning at the interface becomes significant. This is consistent with the previous linear stability analysis for the non-equilibrium interface dynamics. With an appropriate growth condition, the continuous oscillation dynamics could be simulated using continually adjusted grid spacings. These oscillatory dynamics, including instantaneous jumps of the interface velocity, are consistent with a previous phenomenological model and a numerical investigation, and may cause the formation of banded structures. Additionally, the selection of the steady state growth dynamics in the highly undercooled melt is demonstrated. The transition of the growth morphology, interface velocity selection, and solute trapping phenomenon with increasing melt supersaturation was described by the phase-field simulation. The tip selection for the dendritic growth was consistent with Ivantsov's function, and the non-equilibrium chemical partitioning behavior shows good qualitative agreement with Aziz's solute trapping model even though the model parameter (V_D) remains an arbitrary constant. This work shows the possibility of a comprehensive description of rapid alloy growth over the entire time-dependent non-equilibrium phenomenon.
NASA Astrophysics Data System (ADS)
Oliphant, Andrew J.; Stoy, Paul C.
2018-03-01
Photosynthesis is more efficient under diffuse than direct beam photosynthetically active radiation (PAR) per unit PAR, but diffuse PAR is infrequently measured at research sites. We examine four commonly used semiempirical models (Erbs et al., 1982, https://doi.org/10.1016/0038-092X(82)90302-4; Gu et al., 1999, https://doi.org/10.1029/1999JD901068; Roderick, 1999, https://doi.org/10.1016/S0168-1923(99)00028-3; Weiss & Norman, 1985, https://doi.org/10.1016/0168-1923(85)90020-6) that partition PAR into diffuse and direct beam components based on the negative relationship between atmospheric transparency and scattering of PAR. Radiation observations at 58 sites (140 site years) from the La Thuile FLUXNET data set were used for model validation and coefficient testing. All four models did a reasonable job of predicting the diffuse fraction of PAR (ϕ) at the 30 min timescale, with site median r2 values ranging between 0.85 and 0.87, model efficiency coefficients (MECs) between 0.62 and 0.69, and regression slopes within 10% of unity. Model residuals were not strongly correlated with astronomical or standard meteorological variables. We conclude that the Roderick (1999, https://doi.org/10.1016/S0168-1923(99)00028-3) and Gu et al. (1999, https://doi.org/10.1029/1999JD901068) models performed better overall than the two older models. Using the basic form of these models, the data set was used to find both individual site and universal model coefficients that optimized predictive accuracy. A new universal form of the model is presented in section 5 that increased site median MEC to 0.73. Site-specific model coefficients increased median MEC further to 0.78, indicating usefulness of local/regional training of coefficients to capture the local distributions of aerosols and cloud types.
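As an illustration of the piecewise form these semiempirical partitioning models take, the sketch below implements the widely used Erbs et al. (1982) correlation between the clearness index and the diffuse fraction of broadband solar radiation. The coefficients shown are the published broadband ones, used here only to show the structure of such models; they are not the PAR-specific or re-optimized coefficients discussed in the abstract.

```python
def erbs_diffuse_fraction(kt):
    """Diffuse fraction of global horizontal irradiance from the clearness index kt
    (Erbs et al., 1982, broadband shortwave form)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

for kt in (0.1, 0.3, 0.5, 0.7, 0.85):
    print(f"kt = {kt:.2f}  ->  diffuse fraction = {erbs_diffuse_fraction(kt):.2f}")
```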
Multi-scale modularity and motif distributional effect in metabolic networks.
Gao, Shang; Chen, Alan; Rahmani, Ali; Zeng, Jia; Tan, Mehmet; Alhajj, Reda; Rokne, Jon; Demetrick, Douglas; Wei, Xiaohui
2016-01-01
Metabolism is a set of fundamental processes that play important roles in a plethora of biological and medical contexts. It is understood that the topological information of reconstructed metabolic networks, such as modular organization, has crucial implications on biological functions. Recent interpretations of modularity in network settings provide a view of multiple network partitions induced by different resolution parameters. Here we ask the question: How do multiple network partitions affect the organization of metabolic networks? Since network motifs are often interpreted as the super families of evolved units, we further investigate their impact under multiple network partitions and investigate how the distribution of network motifs influences the organization of metabolic networks. We studied Homo sapiens, Saccharomyces cerevisiae and Escherichia coli metabolic networks; we analyzed the relationship between different community structures and motif distribution patterns. Further, we quantified the degree to which motifs participate in the modular organization of metabolic networks.
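The notion of multiple partitions induced by a resolution parameter can be illustrated with off-the-shelf modularity optimization; the sketch below sweeps the resolution parameter of the Louvain method in networkx on a toy graph. It illustrates the multi-scale idea only and is not the pipeline used by the authors or a metabolic network.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Toy stand-in for a network: two dense 8-node groups joined by a single edge.
G = nx.barbell_graph(8, 0)

# Sweeping the resolution parameter yields coarser or finer partitions
# of the same network (multi-scale modularity).
for gamma in (0.2, 1.0, 3.0):
    parts = louvain_communities(G, resolution=gamma, seed=1)
    print(f"resolution = {gamma:>3}: {len(parts)} communities")
```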
Adsorption of Phthalates on Impervious Indoor Surfaces.
Wu, Yaoxing; Eichler, Clara M A; Leng, Weinan; Cox, Steven S; Marr, Linsey C; Little, John C
2017-03-07
Sorption of semivolatile organic compounds (SVOCs) onto interior surfaces, often referred to as the "sink effect", and their subsequent re-emission significantly affect the fate and transport of indoor SVOCs and the resulting human exposure. Unfortunately, experimental challenges and the large number of SVOC/surface combinations have impeded progress in understanding sorption of SVOCs on indoor surfaces. An experimental approach based on a diffusion model was thus developed to determine the surface/air partition coefficient K of di-2-ethylhexyl phthalate (DEHP) on typical impervious surfaces including aluminum, steel, glass, and acrylic. The results indicate that surface roughness plays an important role in the adsorption process. Although larger data sets are needed, the ability to predict K could be greatly improved by establishing the nature of the relationship between surface roughness and K for clean indoor surfaces. Furthermore, different surfaces exhibit nearly identical K values after being exposed to kitchen grime with values that are close to those reported for the octanol/air partition coefficient. This strongly supports the idea that interactions between gas-phase DEHP and soiled surfaces have been reduced to interactions with an organic film. Collectively, the results provide an improved understanding of equilibrium partitioning of SVOCs on impervious surfaces.
Quantitative analysis of molecular partition towards lipid membranes using surface plasmon resonance
NASA Astrophysics Data System (ADS)
Figueira, Tiago N.; Freire, João M.; Cunha-Santos, Catarina; Heras, Montserrat; Gonçalves, João; Moscona, Anne; Porotto, Matteo; Salomé Veiga, Ana; Castanho, Miguel A. R. B.
2017-03-01
Understanding the interplay between molecules and lipid membranes is fundamental when studying cellular and biotechnological phenomena. Partition between aqueous media and lipid membranes is key to the mechanism of action of many biomolecules and drugs. Quantifying membrane partition, through adequate and robust parameters, is thus essential. Surface Plasmon Resonance (SPR) is a powerful technique for studying 1:1 stoichiometric interactions but has limited application to lipid membrane partition data. We have developed and applied a novel mathematical model for SPR data treatment that enables determination of kinetic and equilibrium partition constants. The method uses two complementary fitting models for association and dissociation sensorgram data. The SPR partition data obtained for the antibody fragment F63, the HIV fusion inhibitor enfuvirtide, and the endogenous drug kyotorphin towards POPC membranes were compared against data from independent techniques. The comprehensive kinetic and partition models were applied to the membrane interaction data of HRC4, a measles virus entry inhibitor peptide, revealing its increased affinity for, and retention in, cholesterol-rich membranes. Overall, our work extends the application of SPR beyond the realm of 1:1 stoichiometric ligand-receptor binding into a new and immense field of applications: the interaction of solutes such as biomolecules and drugs with lipids.
NASA Technical Reports Server (NTRS)
Pope, L. D.; Wilby, E. G.
1982-01-01
An airplane interior noise prediction model is developed to determine the important parameters associated with sound transmission into the interiors of airplanes, and to identify appropriate noise control methods. Models for stiffened structures and cabin acoustics with a floor partition are developed. Validation studies are undertaken using three test articles: a ring stringer stiffened cylinder, an unstiffened cylinder with floor partition, and a ring stringer stiffened cylinder with floor partition and sidewall trim. The noise reductions of the three test articles are computed using the theoretical models and compared to measured values. A statistical analysis of the comparison data indicates that there is no bias in the predictions, although a substantial random error exists, so that a discrepancy of more than five or six dB can be expected for about one out of three predictions.
NASA Astrophysics Data System (ADS)
Mahan, B. M.; Siebert, J.; Blanchard, I.; Badro, J.; Sossi, P.; Moynier, F.
2017-12-01
Volatile and moderately volatile elements display different volatilities and siderophilities, as well as varying sensitivity to thermodynamic controls (X, P, T, fO2) during metal-silicate differentiation. The experimental determination of the metal-silicate partitioning of these elements permits us to evaluate processes controlling the distribution of these elements in Earth. In this work, we have combined metal-silicate partitioning data and results for S, Sn, Zn and Cu, and input these characterizations into Earth formation models. Model parameters such as source material, timing of volatile delivery, fO2 path, and degree of impactor equilibration were varied to encompass an array of possible formation scenarios. These models were then assessed to discern plausible sets of conditions that can produce current observed element-to-element ratios (e.g. S/Zn) in the Earth's present-day mantle, while also satisfying current estimates on the S content of the core, at no more than 2 wt%. The results of our models indicate two modes of accretion that can maintain chondritic element-to-element ratios for the bulk Earth and can arrive at present-day mantle abundances of these elements. The first mode requires the late addition of Earth's entire inventory of these elements (assuming a CI-chondritic composition) and late-stage accretion that is marked by partial equilibration of large impactors. The second, possibly more intuitive mode, requires that Earth accreted - at least initially - from volatile poor material preferentially depleted in S relative to Sn, Zn, and Cu. From a chemical standpoint, this source material is most similar to type I chondrule rich (and S poor) materials (Hewins and Herzberg, 1996; Mahan et al., 2017; Amsellem et al., 2017), such as the metal-bearing carbonaceous chondrites.
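The kind of mass-balance reasoning behind such accretion models can be sketched with the standard single-stage core-formation equation: for a metal mass fraction f and metal/silicate partition coefficient D, the mantle retains C_mantle = C_bulk / (1 − f + f·D), so element-to-element ratios such as S/Zn are fractionated according to the contrast in D. The D values and bulk abundances below are round-number placeholders, not the experimental results of this study.

```python
def mantle_concentration(c_bulk, D, f_metal=0.32):
    """Silicate (mantle) concentration after single-stage metal-silicate equilibration.

    Mass balance: c_bulk = f*c_metal + (1-f)*c_silicate with D = c_metal/c_silicate.
    """
    return c_bulk / ((1.0 - f_metal) + f_metal * D)

# Placeholder CI-normalized bulk abundances and metal/silicate partition coefficients.
elements = {"S": (1.0, 100.0), "Zn": (1.0, 3.0), "Cu": (1.0, 30.0)}

mantle = {el: mantle_concentration(c, D) for el, (c, D) in elements.items()}
print("mantle S/Zn :", round(mantle["S"] / mantle["Zn"], 3))
print("mantle Cu/Zn:", round(mantle["Cu"] / mantle["Zn"], 3))
```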
NASA Astrophysics Data System (ADS)
Liu, J.; Chen, Z.; Horowitz, L. W.; Carlton, A. M. G.; Fan, S.; Cheng, Y.; Ervens, B.; Fu, T. M.; He, C.; Tao, S.
2014-12-01
Secondary organic aerosols (SOA) have a profound influence on air quality and climate, but large uncertainties exist in modeling SOA on the global scale. In this study, five SOA parameterization schemes, including a two-product model (TPM), volatility basis-set (VBS) and three cloud SOA schemes (Ervens et al. (2008, 2014), Fu et al. (2008), and He et al. (2013)), are implemented into the global chemical transport model (MOZART-4). For each scheme, model simulations are conducted with identical boundary and initial conditions. The VBS scheme produces the highest global annual SOA production (close to 35 Tg·y-1), followed by three cloud schemes (26-30 Tg·y-1) and TPM (23 Tg·y-1). Though sharing a similar partitioning theory with the TPM scheme, the VBS approach simulates the chemical aging of multiple generations of VOC oxidation products, resulting in a much larger SOA source, particularly from aromatic species, over Europe, the Middle East and Eastern America. The formation of SOA in VBS, which represents the net partitioning of semi-volatile organic compounds from vapor to condensed phase, is highly sensitive to the aging and wet removal processes of vapor-phase organic compounds. The production of SOA from cloud processes (SOAcld) is constrained by the coincidence of liquid cloud water and water-soluble organic compounds. Therefore, all cloud schemes resolve a fairly similar spatial pattern over the tropical and the mid-latitude continents. The spatiotemporal diversity among SOA parameterizations is largely driven by differences in precursor inputs. Therefore, a deeper understanding of the evolution, wet removal, and phase partitioning of semi-volatile organic compounds, particularly above remote land and oceanic areas, is critical to better constrain the global-scale distribution and related climate forcing of secondary organic aerosols.
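The equilibrium absorptive partitioning at the heart of both the two-product and volatility basis-set schemes reduces to a particle-phase fraction ξ_i = (1 + C*_i/C_OA)^-1, with the organic aerosol mass C_OA found self-consistently. The sketch below is a minimal fixed-point iteration over an assumed set of volatility bins; the bin loadings are arbitrary examples, not MOZART-4 output or any scheme's actual configuration.

```python
import numpy as np

def vbs_partition(c_total, c_star, c_seed=0.0, tol=1e-10):
    """Equilibrium particle-phase fractions for a volatility basis set.

    c_total: total (gas + particle) concentration per bin, ug/m3
    c_star : saturation concentration of each bin, ug/m3
    Returns (particle-phase fraction per bin, organic aerosol mass C_OA).
    """
    c_oa = max(np.sum(c_total) * 0.5, 1e-6)   # initial guess
    for _ in range(1000):
        xi = 1.0 / (1.0 + c_star / c_oa)      # partitioning fraction per bin
        c_oa_new = c_seed + np.sum(c_total * xi)
        if abs(c_oa_new - c_oa) < tol:
            break
        c_oa = c_oa_new
    return xi, c_oa

c_star = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])   # volatility bins, ug/m3
c_total = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # example loadings, ug/m3
xi, c_oa = vbs_partition(c_total, c_star)
print("C_OA =", round(c_oa, 2), "ug/m3; particle fractions:", np.round(xi, 2))
```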
Partitioning of functional gene expression data using principal points.
Kim, Jaehee; Kim, Haseong
2017-10-12
DNA microarrays offer motivation and hope for the simultaneous study of variations in multiple genes. Gene expression is a temporal process that allows variations in expression levels with a characterized gene function over a period of time. Temporal gene expression curves can be treated as functional data since they are considered as independent realizations of a stochastic process. This process requires appropriate models to identify patterns of gene functions. Partitioning of the functional data can find homogeneous subgroups of entities for the massive numbers of genes within the inherent biological networks. It can therefore be a useful technique for the analysis of time-course gene expression data. We propose a new self-consistent partitioning method of functional coefficients for individual expression profiles based on an orthonormal basis system. A principal points based functional partitioning method is proposed for time-course gene expression data. The method explores the relationship between genes using Legendre coefficients as principal points to extract the features of gene functions. Our proposed method provides high connectedness after clustering for simulated data and finds significant subsets of genes with increased connectivity. Our approach has the comparative advantages that fewer coefficients are used from the functional data and that the principal points are self-consistent for partitioning. As real data applications, we are able to find partitioned genes from the gene expression profiles in budding yeast data and Escherichia coli data. The proposed method benefits from the use of principal points, dimension reduction, and the choice of an orthogonal basis system, and provides appropriately connected genes in the resulting subsets. We illustrate our method by applying it to each set of cell-cycle-regulated time-course yeast genes and E. coli genes. The proposed method is able to identify highly connected genes and to explore the complex dynamics of biological systems in functional genomics.
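A minimal version of the feature-extraction step described above can be sketched as follows: project each time-course profile onto an orthogonal Legendre basis and cluster the resulting coefficient vectors. K-means centroids are used here as a convenient stand-in for principal points (for a fixed number of clusters, the cluster means estimate the principal points of the coefficient distribution); the simulated profiles are arbitrary examples, and the paper's self-consistent principal-points algorithm is more involved than this sketch.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 24)            # rescaled time points of the expression course

# Simulate two groups of noisy expression profiles with different temporal shapes.
group1 = [np.sin(np.pi * t) + 0.2 * rng.standard_normal(t.size) for _ in range(20)]
group2 = [t**2 - 0.5 + 0.2 * rng.standard_normal(t.size) for _ in range(20)]
profiles = np.array(group1 + group2)

# Feature extraction: low-order Legendre coefficients of each profile.
degree = 4
coeffs = np.array([legendre.legfit(t, y, degree) for y in profiles])

# Cluster the coefficient vectors; cluster means act as estimated principal points.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(coeffs)
print("cluster labels:", km.labels_)
print("estimated principal points (Legendre coefficients):")
print(np.round(km.cluster_centers_, 2))
```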
Ferguson, V L
2009-08-01
The relative contributions of elastic, plastic, and viscous material behavior are poorly described by the separate extraction and analysis of the plane strain modulus, E′, the contact hardness, Hc (a hybrid parameter encompassing both elastic and plastic behavior), and various viscoelastic material constants. A multiple element mechanical model enables the partitioning of a single indentation response into its fundamental elastic, plastic, and viscous deformation components. The objective of this study was to apply deformation partitioning to explore the role of hydration, tissue type, and degree of mineralization in bone and calcified cartilage. Wet, ethanol-dehydrated, and PMMA-embedded equine cortical bone samples and PMMA-embedded human femoral head tissues were analyzed for contributions of elastic, plastic and viscous deformation to the overall nanoindentation response at each site. While the alteration of hydration state had little effect on any measure of deformation, unembedded tissues demonstrated significantly greater measures of resistance to plastic deformation than PMMA-embedded tissues. The PMMA appeared to mechanically stabilize the tissues and prevent extensive permanent deformation within the bone material. Increasing mineral volume fraction correlated with positive changes in E′, Hc, and resistance to plastic deformation, H; however, the partitioned deformation components were generally unaffected by mineralization. The contribution of viscous deformation was minimal and may only play a significant role in poorly mineralized tissues. Deformation partitioning enables a detailed interpretation of the elastic, plastic, and viscous contributions to the nanomechanical behavior of mineralized tissues that is not possible when examining modulus and contact hardness alone. Varying experimental or biological factors, such as hydration or mineralization level, enables the understanding of potential mechanisms for specific mechanical behavior patterns that would otherwise be hidden within a more complex set of material property parameters.
Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition
NASA Astrophysics Data System (ADS)
Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa
2012-02-01
We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g. sweeping procedures) which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of nsvd small subcluster vectors using singular value decomposition. For low entanglement entropy See (satisfied by short range Hamiltonians), the truncation error is bounded by exp(-nsvd^(1/See)). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15 GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007
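The compression step at the core of the algorithm can be illustrated in a few lines: a many-body vector on a lattice split into subclusters A and B is reshaped into a dim(A) × dim(B) matrix, decomposed by SVD, and truncated to the nsvd largest singular values. The 12-site random state below is a generic sketch and has nothing to do with the Kagomé clusters of the abstract.

```python
import numpy as np

def compress(psi, dim_a, dim_b, n_svd):
    """Truncate a state vector on A+B to its n_svd largest Schmidt components."""
    m = psi.reshape(dim_a, dim_b)
    u, s, vh = np.linalg.svd(m, full_matrices=False)
    u, s, vh = u[:, :n_svd], s[:n_svd], vh[:n_svd]
    discarded = 1.0 - np.sum(s**2) / np.vdot(psi, psi).real
    return (u * s) @ vh, discarded   # compressed matrix and discarded weight

# Random normalized state on 12 spin-1/2 sites, split 6 + 6.
rng = np.random.default_rng(1)
psi = rng.standard_normal(2**12)
psi /= np.linalg.norm(psi)

for n_svd in (4, 16, 64):
    _, err = compress(psi, 2**6, 2**6, n_svd)
    print(f"n_svd = {n_svd:3d}: discarded weight = {err:.3e}")
```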
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
Entanglement, replicas, and Thetas
NASA Astrophysics Data System (ADS)
Mukhi, Sunil; Murthy, Sameer; Wu, Jie-Qiang
2018-01-01
We compute the single-interval Rényi entropy (replica partition function) for free fermions in 1+1d at finite temperature and finite spatial size by two methods: (i) using the higher-genus partition function on the replica Riemann surface, and (ii) using twist operators on the torus. We compare the two answers for a restricted set of spin structures, leading to a non-trivial proposed equivalence between higher-genus Siegel Θ-functions and Jacobi θ-functions. We exhibit this proposal and provide substantial evidence for it. The resulting expressions can be elegantly written in terms of Jacobi forms. Thereafter we argue that the correct Rényi entropy for modular-invariant free-fermion theories, such as the Ising model and the Dirac CFT, is given by the higher-genus computation summed over all spin structures. The result satisfies the physical checks of modular covariance, the thermal entropy relation, and Bose-Fermi equivalence.
Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin
2017-06-14
The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and selection of appropriate illuminators. The present study investigates issues concerning the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key of the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling the convex optimization with the greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.
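The logic of such a bisection algorithm can be sketched generically: bisect on the performance threshold, and at each trial value solve the induced set covering problem to check whether p receivers suffice. The greedy cover below stands in for the paper's hybrid convex/greedy PSCP solver, and the coverage model (a receiver covers a cell when their distance is within a threshold-dependent range) is a purely illustrative assumption, not the RCS-based metric of the study.

```python
import math

def greedy_cover(universe, sets_):
    """Greedy set cover: returns indices of chosen sets (may be suboptimal), or None."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(range(len(sets_)), key=lambda i: len(sets_[i] & uncovered))
        if not sets_[best] & uncovered:
            return None                       # infeasible: some cell never covered
        chosen.append(best)
        uncovered -= sets_[best]
    return chosen

def min_threshold(cells, sites, p, lo=0.0, hi=10.0, iters=40):
    """Bisection on the threshold r: feasible if p sites cover all cells
    under the illustrative rule 'a site covers a cell when distance <= r'."""
    def feasible(r):
        sets_ = [{c for c in cells if math.dist(c, s) <= r} for s in sites]
        cover = greedy_cover(cells, sets_)
        return cover is not None and len(cover) <= p
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi

cells = [(x, y) for x in range(5) for y in range(5)]      # surveillance grid
sites = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 2)]          # candidate receiver sites
print("smallest feasible threshold with p = 3:", round(min_threshold(cells, sites, 3), 2))
```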
Partitioned coupling of advection-diffusion-reaction systems and Brinkman flows
NASA Astrophysics Data System (ADS)
Lenarda, Pietro; Paggi, Marco; Ruiz Baier, Ricardo
2017-09-01
We present a partitioned algorithm aimed at extending the capabilities of existing solvers for the simulation of coupled advection-diffusion-reaction systems and incompressible, viscous flow. The space discretisation of the governing equations is based on mixed finite element methods defined on unstructured meshes, whereas the time integration hinges on an operator splitting strategy that exploits the differences in scales between the reaction, advection, and diffusion processes, considering the global system as a number of sequentially linked sets of partial differential and algebraic equations. The flow solver presents the advantage that all unknowns in the system (here vorticity, velocity, and pressure) can be fully decoupled, which makes the overall scheme very attractive from the computational perspective. The robustness of the proposed method is illustrated with a series of numerical tests in 2D and 3D, relevant in the modelling of bacterial bioconvection and Boussinesq systems.
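The operator-splitting idea, treating reaction, advection and diffusion as sequentially linked sub-steps within each time step, can be shown on a 1-D scalar toy problem. The sketch below uses simple explicit finite differences (forward-Euler logistic reaction, first-order upwind advection, centered diffusion) rather than the mixed finite element discretisation of the paper, and is only meant to illustrate the splitting structure.

```python
import numpy as np

# 1-D advection-diffusion-reaction: c_t + u c_x = D c_xx + r c (1 - c)
nx, L = 200, 1.0
dx = L / nx
u, D, r = 0.5, 1e-3, 2.0
dt = 0.4 * min(dx / u, dx**2 / (2 * D))          # stable step for both explicit pieces

x = np.linspace(0.0, L, nx, endpoint=False)
c = np.exp(-200.0 * (x - 0.2) ** 2)              # initial blob

def step(c):
    # 1) reaction (pointwise ODE, forward Euler)
    c = c + dt * r * c * (1.0 - c)
    # 2) advection (first-order upwind, periodic domain)
    c = c - u * dt / dx * (c - np.roll(c, 1))
    # 3) diffusion (explicit centered differences)
    c = c + D * dt / dx**2 * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1))
    return c

for _ in range(int(0.5 / dt)):                   # integrate to t = 0.5
    c = step(c)
print("mass =", round(c.sum() * dx, 3), " max =", round(c.max(), 3))
```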
Plant interspecies competition for sunlight: a mathematical model of canopy partitioning.
Nevai, Andrew L; Vance, Richard R
2007-07-01
We examine the influence of canopy partitioning on the outcome of competition between two plant species that interact only by mutually shading each other. This analysis is based on a Kolmogorov-type canopy partitioning model for plant species with clonal growth form and fixed vertical leaf profiles (Vance and Nevai in J. Theor. Biol., 2007, to appear). We show that canopy partitioning is necessary for the stable coexistence of the two competing plant species. We also use implicit methods to show that, under certain conditions, the species' nullclines can intersect at most once. We use nullcline endpoint analysis to show that when the nullclines do intersect, and in such a way that they cross, then the resulting equilibrium point is always stable. We also construct surfaces that divide parameter space into regions within which the various outcomes of competition occur, and then study parameter dependence in the locations of these surfaces. The analysis presented here and in a companion paper (Nevai and Vance, The role of leaf height in plant competition for sunlight: analysis of a canopy partitioning model, in review) together shows that canopy partitioning is both necessary and, under appropriate parameter values, sufficient for the stable coexistence of two hypothetical plant species whose structure and growth are described by our model.
Surveillance system and method having an operating mode partitioned fault classification model
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.
NASA Astrophysics Data System (ADS)
Li, Yuan; Audétat, Andreas
2012-11-01
The partitioning of 15 major to trace metals between monosulfide solid solution (MSS), sulfide liquid (SL) and mafic silicate melt (SM) was determined in piston-cylinder experiments performed at 1175-1300 °C, 1.5-3.0 GPa and oxygen fugacities ranging from 3.1 log units below to 1.0 log units above the quartz-fayalite-magnetite fO2 buffer, which conditions are representative of partial melting in the upper mantle in different tectonic settings. The silicate melt was produced by partial melting of a natural, amphibole-rich mantle source rock, resulting in hydrous (˜5 wt% H2O) basanitic melts similar to low-degree partial melts of metasomatized mantle, whereas the major element composition of the starting sulfide (˜52 wt% Fe; 39 wt% S; 7 wt% Ni; 2 wt% Cu) was similar to the average composition of sulfides in this environment. SL/SM partition coefficients are high (≥100) for Au, Ni, Cu, Ag, Bi, intermediate (1-100) for Co, Pb, Sn, Sb (±As, Mo), and low (≤1) for the remaining elements. MSS/SM partition coefficients are generally lower than SL/SM partition coefficients and are high (≥100) for Ni, Cu, Au, intermediate (1-100) for Co, Ag (±Bi, Mo), and low (≤1) for the remaining elements. Most sulfide-silicate melt partition coefficients vary as a function of fO2, with Mo, Bi, As (±W) varying by a factor >10 over the investigated fO2 range, Sb, Ag, Sn (±V) varying by a factor of 3-10, and Pb, Cu, Ni, Co, Au, Zn, Mn varying by a factor of 3-10. The partitioning data were used to model the behavior of Cu, Au, Ag, and Bi during partial melting of upper mantle and during fractional crystallization of primitive MORB and arc magmas. Sulfide phase relationships and comparison of the modeling results with reported Cu, Au, Ag, and Bi concentrations from MORB and arc magmas suggest that: (i) MSS is the dominant sulfide in the source region of arc magmas, and thus that Au/Cu ratios in the silicate melt and residual sulfides may decrease with increasing degree of partial melting, (ii) both MSS and sulfide liquid are precipitated during fractional crystallization of MORB, and (iii) fractional crystallization of arc magmas is strongly dominated by MSS.
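The way such partition coefficients feed into melting or crystallization models can be illustrated with the usual bulk-D bookkeeping: with a small mass fraction of residual sulfide (MSS or sulfide liquid), the bulk solid/melt partition coefficient of a chalcophile element is dominated by x_sulfide·D_sulfide/melt, and batch melting gives C_melt/C_0 = 1/(D_bulk + F(1 − D_bulk)). The coefficient values below are round-number placeholders chosen only to show the calculation, not the experimental results of the study.

```python
def batch_melt_enrichment(F, x_sulfide, D_sulfide, D_silicate=0.01):
    """Melt/source concentration ratio for batch melting with residual sulfide.

    F: melt fraction; x_sulfide: mass fraction of sulfide in the residue.
    """
    D_bulk = x_sulfide * D_sulfide + (1.0 - x_sulfide) * D_silicate
    return 1.0 / (D_bulk + F * (1.0 - D_bulk))

# Placeholder sulfide/silicate-melt partition coefficients (order of magnitude only).
D = {"Cu": 800.0, "Au": 3000.0, "Ag": 900.0, "Bi": 300.0}

for F in (0.05, 0.10, 0.20):
    ratios = {el: round(batch_melt_enrichment(F, 0.001, d), 2) for el, d in D.items()}
    print(f"F = {F:.2f}:", ratios)
```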
Pelletier, Jon D.; Broxton, Patrick D.; Hazenberg, Pieter; ...
2016-01-22
Earth’s terrestrial near-subsurface environment can be divided into relatively porous layers of soil, intact regolith, and sedimentary deposits above unweathered bedrock. Variations in the thicknesses of these layers control the hydrologic and biogeochemical responses of landscapes. Currently, Earth System Models approximate the thickness of these relatively permeable layers above bedrock as uniform globally, despite the fact that their thicknesses vary systematically with topography, climate, and geology. To meet the need for more realistic input data for models, we developed a high-resolution gridded global data set of the average thicknesses of soil, intact regolith, and sedimentary deposits within each 30 arcsec (~ 1 km) pixel using the best available data for topography, climate, and geology as input. Our data set partitions the global land surface into upland hillslope, upland valley bottom, and lowland landscape components and uses models optimized for each landform type to estimate the thicknesses of each subsurface layer. On hillslopes, the data set is calibrated and validated using independent data sets of measured soil thicknesses from the U.S. and Europe and on lowlands using depth to bedrock observations from groundwater wells in the U.S. As a result, we anticipate that the data set will prove useful as an input to regional and global hydrological and ecosystems models.
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.; Broxton, Patrick D.; Hazenberg, Pieter; Zeng, Xubin; Troch, Peter A.; Niu, Guo-Yue; Williams, Zachary; Brunke, Michael A.; Gochis, David
2016-03-01
Earth's terrestrial near-subsurface environment can be divided into relatively porous layers of soil, intact regolith, and sedimentary deposits above unweathered bedrock. Variations in the thicknesses of these layers control the hydrologic and biogeochemical responses of landscapes. Currently, Earth System Models approximate the thickness of these relatively permeable layers above bedrock as uniform globally, despite the fact that their thicknesses vary systematically with topography, climate, and geology. To meet the need for more realistic input data for models, we developed a high-resolution gridded global data set of the average thicknesses of soil, intact regolith, and sedimentary deposits within each 30 arcsec (˜1 km) pixel using the best available data for topography, climate, and geology as input. Our data set partitions the global land surface into upland hillslope, upland valley bottom, and lowland landscape components and uses models optimized for each landform type to estimate the thicknesses of each subsurface layer. On hillslopes, the data set is calibrated and validated using independent data sets of measured soil thicknesses from the U.S. and Europe and on lowlands using depth to bedrock observations from groundwater wells in the U.S. We anticipate that the data set will prove useful as an input to regional and global hydrological and ecosystems models.
NASA Astrophysics Data System (ADS)
Die, Qingqi; Nie, Zhiqiang; Liu, Feng; Tian, Yajun; Fang, Yanyan; Gao, Hefeng; Tian, Shulei; He, Jie; Huang, Qifei
2015-10-01
Gas and particle phase air samples were collected in summer and winter around industrial sites in Shanghai, China, to allow the concentrations, profiles, and gas-particle partitioning of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) and dioxin-like polychlorinated biphenyls (dl-PCBs) to be determined. The total 2,3,7,8-substituted PCDD/F and dl-PCB toxic equivalent (TEQ) concentrations were 14.2-182 fg TEQ/m3 (mean 56.8 fg TEQ/m3) in summer and 21.9-479 fg TEQ/m3 (mean 145 fg TEQ/m3) in winter. The PCDD/Fs tended to be predominantly in the particulate phase, while the dl-PCBs were predominantly found in the gas phase, and the proportions of all of the PCDD/F and dl-PCB congeners in the particle phase increased as the temperature decreased. The logarithms of the gas-particle partition coefficients correlated well with the subcooled liquid vapor pressures of the PCDD/Fs and dl-PCBs for most of the samples. Gas-particle partitioning of the PCDD/Fs deviated from equilibrium in both summer and winter close to local sources, and the Junge-Pankow model and predictions made using a model based on the octanol-air partition coefficient fitted the measured particulate PCDD/F fractions well, indicating that absorption and adsorption mechanisms both contributed to the partitioning process. However, gas-particle equilibrium of the dl-PCBs was reached more easily in winter than in summer. The Junge-Pankow model predictions fitted the dl-PCB data better than did the predictions made using the model based on the octanol-air partition coefficient, indicating that the adsorption mechanism made the dominant contribution to the partitioning process.
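The two partitioning models compared above have simple closed forms: the Junge-Pankow adsorption model gives the particulate fraction φ = cθ/(p_L + cθ) from the subcooled-liquid vapour pressure p_L and aerosol surface area θ, while the absorption model gives log Kp = log Koa + log f_OM − 11.91 and φ = Kp·TSP/(1 + Kp·TSP). The parameter values below (c = 17.2 Pa·cm, an urban-background θ, and example f_OM and TSP) are typical literature defaults used only to illustrate the calculation, not values from this study.

```python
import math

def junge_pankow_phi(p_L, theta=1.1e-5, c=17.2):
    """Particulate fraction via adsorption (Junge-Pankow).
    p_L: subcooled liquid vapour pressure (Pa); theta: aerosol surface area (cm2/cm3);
    c: Junge constant (Pa*cm)."""
    return c * theta / (p_L + c * theta)

def koa_phi(log_koa, f_om=0.2, tsp_ug_m3=60.0):
    """Particulate fraction via absorption into aerosol organic matter (Koa model).
    log Kp = log Koa + log f_om - 11.91, with Kp in m3/ug."""
    kp = 10.0 ** (log_koa + math.log10(f_om) - 11.91)
    return kp * tsp_ug_m3 / (1.0 + kp * tsp_ug_m3)

# Illustrative property values spanning a semivolatile-to-low-volatility range.
for name, p_L, log_koa in [("semi-volatile", 1e-4, 9.5), ("low-volatility", 1e-6, 11.5)]:
    print(f"{name:14s}  phi(Junge-Pankow) = {junge_pankow_phi(p_L):.2f}"
          f"   phi(Koa) = {koa_phi(log_koa):.2f}")
```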
Zhou, Shu; Li, Guo-Bo; Huang, Lu-Yi; Xie, Huan-Zhang; Zhao, Ying-Lan; Chen, Yu-Zong; Li, Lin-Li; Yang, Sheng-Yong
2014-08-01
Drug-induced ototoxicity, as a toxic side effect, is an important issue needed to be considered in drug discovery. Nevertheless, current experimental methods used to evaluate drug-induced ototoxicity are often time-consuming and expensive, indicating that they are not suitable for a large-scale evaluation of drug-induced ototoxicity in the early stage of drug discovery. We thus, in this investigation, established an effective computational prediction model of drug-induced ototoxicity using an optimal support vector machine (SVM) method, GA-CG-SVM. Three GA-CG-SVM models were developed based on three training sets containing agents bearing different risk levels of drug-induced ototoxicity. For comparison, models based on naïve Bayesian (NB) and recursive partitioning (RP) methods were also used on the same training sets. Among all the prediction models, the GA-CG-SVM model II showed the best performance, which offered prediction accuracies of 85.33% and 83.05% for two independent test sets, respectively. Overall, the good performance of the GA-CG-SVM model II indicates that it could be used for the prediction of drug-induced ototoxicity in the early stage of drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
A strategy to load balancing for non-connectivity MapReduce job
NASA Astrophysics Data System (ADS)
Zhou, Huaping; Liu, Guangzong; Gui, Haixia
2017-09-01
MapReduce has been widely used for large-scale and complex datasets as a kind of distributed programming model. The original hash partitioning function in MapReduce often results in data skew when the data distribution is uneven. To address this imbalance in data partitioning, we propose a strategy that changes the remaining partitioning index when data are skewed. In the Map phase, we count the amount of data that will be distributed to each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model, so that the Partitioner can redirect partitions that would cause data skew to other reducers with less load in the next partitioning step, eventually balancing the load of each node. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets; the experimental results show that our strategy solves the data skew problem with better stability and efficiency than the Hash and Sampling methods for non-connectivity MapReduce tasks.
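The essence of the strategy, counting per-key load during the map phase and then remapping the heaviest keys away from overloaded reducers, can be sketched with a simple greedy reassignment. This is a minimal illustration of the load-balancing idea only, not the JobTracker-level mechanism described in the abstract.

```python
from collections import Counter

def skew_aware_partitioner(key_counts, n_reducers):
    """Assign keys to reducers by descending load (greedy LPT) instead of hash(key) % n.

    key_counts: mapping key -> number of intermediate records for that key.
    Returns (key -> reducer index, per-reducer load).
    """
    loads = [0] * n_reducers
    assignment = {}
    for key, count in sorted(key_counts.items(), key=lambda kv: -kv[1]):
        target = min(range(n_reducers), key=loads.__getitem__)   # least-loaded reducer
        assignment[key] = target
        loads[target] += count
    return assignment, loads

# Skewed synthetic key distribution.
counts = Counter({"a": 9000, "b": 500, "c": 450, "d": 400, "e": 350, "f": 300})
hash_loads = [0, 0, 0]
for k, c in counts.items():
    hash_loads[hash(k) % 3] += c

assignment, balanced_loads = skew_aware_partitioner(counts, 3)
print("hash partitioning loads :", hash_loads)
print("skew-aware loads        :", balanced_loads)
```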
NASA Astrophysics Data System (ADS)
Sivandran, G.; Bisht, G.; Ivanov, V. Y.; Bras, R. L.
2008-12-01
A coupled, dynamic vegetation and hydrologic model, tRIBS+VEGGIE, was applied to the semiarid Walnut Gulch Experimental Watershed in Arizona. The physically-based, distributed nature of the coupled model allows for parameterization and simulation of watershed vegetation-water-energy dynamics on timescales varying from hourly to interannual. The model also allows for explicit spatial representation of processes that vary due to complex topography, such as lateral redistribution of moisture and partitioning of radiation with respect to aspect and slope. Model parameterization and forcing were conducted using readily available databases for topography, soil types, and land use cover as well as data from the network of meteorological stations located within the Walnut Gulch watershed. In order to test the performance of the model, three sets of simulations were conducted over an 11 year period from 1997 to 2007. Two simulations focus on heavily instrumented nested watersheds within the Walnut Gulch basin: (i) the Kendall watershed, which is dominated by annual grasses; and (ii) the Lucky Hills watershed, which is dominated by a mixture of deciduous and evergreen shrubs. The third set of simulations covers the entire Walnut Gulch Watershed. Model validation and performance were evaluated in relation to three broad categories: (i) energy balance components: the network of meteorological stations was used to validate the key energy fluxes; (ii) water balance components: the network of flumes, rain gauges and soil moisture stations installed within the watershed was utilized to validate the manner in which the model partitions moisture; and (iii) vegetation dynamics: remote sensing products from MODIS were used to validate spatial and temporal vegetation dynamics. Model results demonstrate satisfactory spatial and temporal agreement with observed data, giving confidence that key ecohydrological processes can be adequately represented for future applications of tRIBS+VEGGIE in regional modeling of land-atmosphere interactions.
PARTITION COEFFICIENTS FOR METALS IN SURFACE WATER, SOIL, AND WASTE
This report presents metal partition coefficients for the surface water pathway and for the source model used in the Multimedia, Multi-pathway, Multi-receptor Exposure and Risk Assessment (3MRA) technology under development by the U.S. Environmental Protection Agency. Partition ...
Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca
2011-01-01
A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (ISET). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the ISET in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds, on the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P. PMID:22072945
Osterberg, T; Norinder, U
2001-01-01
A method of modelling and predicting biopharmaceutical properties using simple theoretically computed molecular descriptors and multivariate statistics has been investigated for several data sets related to solubility, IAM chromatography, permeability across Caco-2 cell monolayers, human intestinal perfusion, brain-blood partitioning, and P-glycoprotein ATPase activity. The molecular descriptors (e.g. molar refractivity, molar volume, index of refraction, surface tension and density) and logP were computed with ACD/ChemSketch and ACD/logP, respectively. Good statistical models were derived that permit simple computational prediction of biopharmaceutical properties. All final models derived had R(2) values ranging from 0.73 to 0.95 and Q(2) values ranging from 0.69 to 0.86. The RMSEP values for the external test sets ranged from 0.24 to 0.85 (log scale).
Epstein, Scott A; Riipinen, Ilona; Donahue, Neil M
2010-01-15
To model the temperature-induced partitioning of semivolatile organics in laboratory experiments or atmospheric models, one must know the appropriate heats of vaporization. Current treatments typically assume a constant value of the heat of vaporization or else use specific values from a small set of surrogate compounds. With published experimental vapor-pressure data from over 800 organic compounds, we have developed a semiempirical correlation between the saturation concentration (C*, microg m(-3)) and the heat of vaporization (deltaH(VAP), kJ mol(-1)) for organics in the volatility basis set. Near room temperature, deltaH(VAP) = -11 log10(C*) + 129, with C* evaluated at 300 K. Knowledge of the relationship between C* and deltaH(VAP) constrains a free parameter in thermodenuder data analysis. A thermodenuder model using our deltaH(VAP) values agrees well with thermal behavior observed in laboratory experiments.
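The correlation quoted above lends itself to a short worked example. The Python sketch below computes deltaH(VAP) from the 300 K saturation concentration using the stated relation, and then shifts C* to another temperature with a Clausius-Clapeyron form commonly paired with the volatility basis set; the temperature-shift formula and the numerical inputs are assumptions added for illustration, not values from the abstract.

```python
import math

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def dh_vap(c_star_300):
    """Semiempirical heat of vaporization (kJ/mol) from the 300 K saturation
    concentration C* (ug m^-3), per the correlation quoted in the abstract:
    deltaH_VAP = -11*log10(C*_300) + 129."""
    return -11.0 * math.log10(c_star_300) + 129.0

def c_star_at(temp_k, c_star_300, t_ref=300.0):
    """Shift C* to another temperature using a Clausius-Clapeyron form commonly
    used with the volatility basis set (an assumption here, not from the abstract)."""
    dh = dh_vap(c_star_300)
    return c_star_300 * (t_ref / temp_k) * math.exp((dh / R) * (1.0 / t_ref - 1.0 / temp_k))

# Example: a C* = 1 ug m^-3 compound (deltaH_VAP = 129 kJ/mol) evaluated at 285 K
print(dh_vap(1.0), c_star_at(285.0, 1.0))
```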
Knowles, L Lacey; Huang, Huateng; Sukumaran, Jeet; Smith, Stephen A
2018-03-01
Discordant gene trees are commonly encountered when sequences from thousands of loci are applied to estimate phylogenetic relationships. Several processes contribute to this discord. Yet, we have no methods that jointly model different sources of conflict when estimating phylogenies. An alternative to analyzing entire genomes or all the sequenced loci is to identify a subset of loci for phylogenetic analysis. If we can identify data partitions that are most likely to reflect descent from a common ancestor (i.e., discordant loci that indeed reflect incomplete lineage sorting [ILS], as opposed to some other process, such as lateral gene transfer [LGT]), we can analyze this subset using powerful coalescent-based species-tree approaches. Test data sets were simulated where discord among loci could arise from ILS and LGT. Data sets were analyzed using the newly developed program CLASSIPHY (Huang et al., ) to assess whether our ability to distinguish the cause of discord among loci varied when ILS and LGT occurred in the recent versus deep past and whether the accuracy of these inferences was affected by the mutational process. We show that the accuracy of probabilistic classification of individual loci by the cause of discord differed when ILS and LGT events occurred more recently compared with the distant past and that the signal-to-noise ratio arising from the mutational process contributes to difficulties in inferring LGT data partitions. We discuss our findings in terms of the promise and limitations of identifying subsets of loci for species-tree inference that will not violate the underlying coalescent model (i.e., data partitions in which ILS, and not LGT, contributes to discord). We also discuss the empirical implications of our work given the many recalcitrant nodes in the tree of life (e.g., origins of angiosperms, amniotes, or Neoaves), and recent arguments for concatenating loci. © 2018 Botanical Society of America.
Kuo, Dave T F; Di Toro, Dominic M
2013-08-01
A model for whole-body in vivo biotransformation of neutral and weakly polar organic chemicals in fish is presented. It considers internal chemical partitioning and uses Abraham solvation parameters as reactivity descriptors. It assumes that only chemicals freely dissolved in the body fluid may bind with enzymes and subsequently undergo biotransformation reactions. Consequently, the whole-body biotransformation rate of a chemical is retarded by the extent of its distribution in different biological compartments. Using a randomly generated training set (n = 64), the biotransformation model is found to be: log(HL·φfish) = 2.2 (±0.3)B - 2.1 (±0.2)V - 0.6 (±0.3) (root mean square error of prediction [RMSE] = 0.71), where HL is the whole-body biotransformation half-life in days, φfish is the freely dissolved fraction in body fluid, and B and V are the chemical's H-bond acceptance capacity and molecular volume. Abraham-type linear free energy equations were also developed for the lipid-water (Klipidw) and protein-water (Kprotw) partition coefficients needed for the computation of φfish from independent determinations. These were found to be 1) log Klipidw = 0.77E - 1.10S - 0.47A - 3.52B + 3.37V + 0.84 (in Lwat/kglipid; n = 248, RMSE = 0.57) and 2) log Kprotw = 0.74E - 0.37S - 0.13A - 1.37B + 1.06V - 0.88 (in Lwat/kgprot; n = 69, RMSE = 0.38), where E, S, and A quantify dispersive/polarization, dipolar, and H-bond-donating interactions, respectively. The biotransformation model performs well in the validation of HL (n = 424, RMSE = 0.71). The predicted rate constants do not exceed the transport limit due to circulatory flow. Furthermore, the model adequately captures variation in biotransformation rate between chemicals with varying log octanol-water partition coefficient, B, and V, and exhibits a high degree of independence from the choice of training chemicals. The present study suggests a new framework for modeling chemical reactivity in biological systems. Copyright © 2013 SETAC.
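The coefficients quoted above are enough to evaluate the three equations for a given set of Abraham descriptors. The Python sketch below does exactly that; the descriptor values E, S, A, B, V in the example are hypothetical placeholders, not data from the paper.

```python
def log_k_lipid_water(E, S, A, B, V):
    """Lipid-water partition coefficient (Lwat/kglipid), using the Abraham-type
    coefficients quoted in the abstract."""
    return 0.77*E - 1.10*S - 0.47*A - 3.52*B + 3.37*V + 0.84

def log_k_protein_water(E, S, A, B, V):
    """Protein-water partition coefficient (Lwat/kgprot), same source."""
    return 0.74*E - 0.37*S - 0.13*A - 1.37*B + 1.06*V - 0.88

def log_hl_phi(B, V):
    """log(HL * phi_fish): whole-body biotransformation half-life (days) scaled by
    the freely dissolved fraction, per the abstract's training-set model."""
    return 2.2*B - 2.1*V - 0.6

# Hypothetical Abraham descriptors for an illustrative neutral chemical
E, S, A, B, V = 0.80, 0.90, 0.00, 0.45, 1.20
print(log_k_lipid_water(E, S, A, B, V),
      log_k_protein_water(E, S, A, B, V),
      log_hl_phi(B, V))
```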
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solarkiewicz, R. J.
2009-01-01
Presently, it is not well understood how to best model nitrogen oxides (NOx) emissions from lightning because lightning is highly variable. Peak current, channel length, channel altitude, stroke multiplicity, and the number of flashes that occur in a particular region (i.e., flash density) all influence the amount of lightning NOx produced. Moreover, these 5 variables are not the same for ground and cloud flashes; e.g., cloud flashes normally have lower peak currents, higher altitudes, and higher flash densities than ground flashes [see (Koshak, 2009) for additional details]. Because the existing satellite observations of lightning (Fig. 1) from the Lightning Imaging Sensor/Optical Transient Detector (LIS/OTD) do not distinguish between ground and cloud flashes, which produce different amounts of NOx, it is very difficult to accurately account for the regional/global production of lightning NOx. Hence, the ability to partition the LIS/OTD lightning climatology into separate ground and cloud flash distributions would substantially benefit the atmospheric chemistry modeling community. NOx indirectly influences climate because it controls the concentration of ozone and hydroxyl radicals in the atmosphere. The importance of lightning-produced NOx is emphasized throughout the scientific literature (see for example, Huntrieser et al. 1998). In fact, lightning is the most important NOx source in the upper troposphere, with a global production rate estimated to vary between 2 and 20 Tg(N)yr(sup -1) (Lee et al., 1997), with more recent estimates of about 6 Tg(N)yr(sup -1) (Martin et al., 2007). In order to make accurate predictions, global chemistry/climate models (as well as regional air quality models) must more accurately account for the effects of lightning NOx. In particular, the NASA Goddard Institute for Space Studies (GISS) Model E (Schmidt et al., 2005) and the GEOS-CHEM global chemical transport model (Bey et al., 2001) would each benefit from a partitioning of the LIS/OTD lightning climatology. In this study, we introduce a new technique for retrieving the ground flash fraction in a set of N lightning flashes observed from space that occur within a specific latitude/longitude bin. The method is briefly described and applied to CONUS lightning flashes that have already been partitioned into ground and cloud flashes using independent ground-based observations, in order to assess the accuracy of the retrieval method. The retrieval errors are encouragingly small.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelletier, Jon D.; Broxton, Patrick D.; Hazenberg, Pieter
Earth’s terrestrial near-subsurface environment can be divided into relatively porous layers of soil, intact regolith, and sedimentary deposits above unweathered bedrock. Variations in the thicknesses of these layers control the hydrologic and biogeochemical responses of landscapes. Currently, Earth System Models approximate the thickness of these relatively permeable layers above bedrock as uniform globally, despite the fact that their thicknesses vary systematically with topography, climate, and geology. To meet the need for more realistic input data for models, we developed a high-resolution gridded global data set of the average thicknesses of soil, intact regolith, and sedimentary deposits within each 30 arcsec (~ 1 km) pixel using the best available data for topography, climate, and geology as input. Our data set partitions the global land surface into upland hillslope, upland valley bottom, and lowland landscape components and uses models optimized for each landform type to estimate the thicknesses of each subsurface layer. On hillslopes, the data set is calibrated and validated using independent data sets of measured soil thicknesses from the U.S. and Europe and on lowlands using depth to bedrock observations from groundwater wells in the U.S. As a result, we anticipate that the data set will prove useful as an input to regional and global hydrological and ecosystems models.
Partition-based discrete-time quantum walks
NASA Astrophysics Data System (ADS)
Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo
2018-04-01
We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste, not generality.
Common y-intercept and single compound regressions of gas-particle partitioning data vs 1/T
NASA Astrophysics Data System (ADS)
Pankow, James F.
Confidence intervals are placed around the log Kp vs 1/T correlation equations obtained using simple linear regressions (SLR) with the gas-particle partitioning data set of Yamasaki et al. [(1982) Env. Sci. Technol. 16, 189-194]. The compounds and groups of compounds studied include the polycyclic aromatic hydrocarbons phenanthrene + anthracene, me-phenanthrene + me-anthracene, fluoranthene, pyrene, benzo[a]fluorene + benzo[b]fluorene, chrysene + benz[a]anthracene + triphenylene, benzo[b]fluoranthene + benzo[k]fluoranthene, and benzo[a]pyrene + benzo[e]pyrene (note: me = methyl). For any given compound, at equilibrium, the partition coefficient Kp equals (F/TSP)/A, where F is the particulate-matter-associated concentration (ng m-3), A is the gas-phase concentration (ng m-3), and TSP is the concentration of particulate matter (μg m-3). At temperatures more than 10°C from the mean sampling temperature of 17°C, the confidence intervals are quite wide. Since theory predicts that similar compounds sorbing on the same particulate matter should possess very similar y-intercepts, the data set was also fitted using a special common y-intercept regression (CYIR). For most of the compounds, the CYIR equations fell inside the SLR 95% confidence intervals. The CYIR y-intercept value is -18.48, and is reasonably close to the type of value that can be predicted for PAH compounds. The set of CYIR regression equations is probably more reliable than the set of SLR equations. For example, the CYIR-derived desorption enthalpies are much more highly correlated with vaporization enthalpies than are the SLR-derived desorption enthalpies. It is recommended that the CYIR approach be considered whenever analysing temperature-dependent gas-particle partitioning data.
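A minimal numerical illustration of the SLR step described above: compute Kp = (F/TSP)/A for each sample and regress log Kp against 1/T. The measurement arrays below are hypothetical placeholders; only the definitions of F, A, TSP, and Kp come from the abstract.

```python
import numpy as np

def kp(F, A, TSP):
    """Gas-particle partition coefficient Kp = (F/TSP)/A, with F and A in ng m^-3
    and TSP in ug m^-3 (definitions from the abstract)."""
    return (F / TSP) / A

# Hypothetical sampling data: temperatures (K) and measured F, A, TSP
T   = np.array([280.0, 285.0, 290.0, 295.0, 300.0])
F   = np.array([2.0, 1.5, 1.1, 0.8, 0.6])      # ng m^-3, particle phase
A   = np.array([5.0, 7.0, 10.0, 14.0, 19.0])   # ng m^-3, gas phase
TSP = np.array([60.0, 55.0, 50.0, 52.0, 48.0]) # ug m^-3

log_kp = np.log10(kp(F, A, TSP))
slope, intercept = np.polyfit(1.0 / T, log_kp, 1)  # SLR: log Kp = m*(1/T) + b
print(slope, intercept)  # the CYIR approach would instead share b across compounds
```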
From r-spin intersection numbers to Hodge integrals
NASA Astrophysics Data System (ADS)
Ding, Xiang-Mao; Li, Yuping; Meng, Lingxian
2016-01-01
The Generalized Kontsevich Matrix Model (GKMM) with a certain given potential is the partition function of r-spin intersection numbers. We represent this GKMM in terms of fermions and expand it in terms of the Schur polynomials by boson-fermion correspondence, and link it with a Hurwitz partition function and a Hodge partition function by operators in a \widehat{GL}(\infty) group. Then, from a W_{1+\infty} constraint on the partition function of r-spin intersection numbers, we get a W_{1+\infty} constraint for the Hodge partition function. The W_{1+\infty} constraint completely determines the Schur polynomial expansion of the Hodge partition function.
Mathematical modeling of tetrahydroimidazole benzodiazepine-1-one derivatives as an anti HIV agent
NASA Astrophysics Data System (ADS)
Ojha, Lokendra Kumar
2017-07-01
The goal of the present work is the study of drug-receptor interaction via QSAR (Quantitative Structure-Activity Relationship) analysis for a set of 89 TIBO (Tetrahydroimidazole Benzodiazepine-1-one) derivatives. The MLR (Multiple Linear Regression) method is utilized to generate predictive models of quantitative structure-activity relationships between a set of molecular descriptors and biological activity (IC50). The best QSAR model was selected, having a correlation coefficient (r) of 0.9299, a Standard Error of Estimation (SEE) of 0.5022, a Fisher Ratio (F) of 159.822, and a Quality factor (Q) of 1.852. This model is statistically significant and strongly favours substitution of a sulphur atom at the -Z position of the TIBO derivatives, captured by the indicator parameter IS. Two other parameters, logP (octanol-water partition coefficient) and SAG (Surface Area Grid), also played a vital role in the generation of the best QSAR model. All three descriptors show very good stability towards data variation in leave-one-out (LOO) cross-validation.
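One small consistency check is possible from the quoted statistics alone. Assuming the quality factor is the Pogliani-style ratio Q = r/SEE (an assumption; the abstract does not define Q), the reported r and SEE reproduce the reported Q:

```python
# Consistency check on the quoted statistics, assuming Q = r / SEE
# (an assumption; the abstract does not spell out the definition of Q).
r, see, q_reported = 0.9299, 0.5022, 1.852
q_computed = r / see
print(round(q_computed, 3), q_reported)  # 1.852 vs 1.852
```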
Multicomponent phase-field model for extremely large partition coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welland, Michael J.; Wolf, Dieter; Guyer, Jonathan E.
2014-01-01
We develop a multicomponent phase-field model specially formulated to robustly simulate concentration variations from molar to atomic magnitudes across an interface, i.e., partition coefficients in excess of 10^(±23), such as may be the case with species which are predominant in one phase and insoluble in the other. Substitutional interdiffusion on a normal lattice and concurrent interstitial diffusion are included. The composition in the interface follows the approach of Kim, Kim, and Suzuki [Phys. Rev. E 60, 7186 (1999)] and is compared to that of Wheeler, Boettinger, and McFadden [Phys. Rev. A 45, 7424 (1992)] in the context of large partitioning. The model successfully reproduces analytical solutions for binary diffusion couples and solute trapping for the demonstrated cases of extremely large partitioning.
A Layer Model of Ethanol Partitioning into Lipid Membranes
Nizza, David T.; Gawrisch, Klaus
2013-01-01
The effect of membrane composition on ethanol partitioning into lipid bilayers was assessed by headspace gas chromatography. A series of model membranes with different compositions have been investigated. Membranes were exposed to a physiological ethanol concentration of 20 mmol/l. The concentration of membranes was 20 wt% which roughly corresponds to values found in tissue. Partitioning depended on the chemical nature of polar groups at the lipid-water interface. Compared to phosphatidylcholine, lipids with headgroups containing phosphatidylglycerol, phosphatidylserine, and sphingomyelin showed enhanced partitioning while headgroups containing phosphatidylethanolamine resulted in a lower partition coefficient. The molar partition coefficient was independent of a membrane’s hydrophobic volume. This observation is in agreement with our previously published NMR results which showed that ethanol resides almost exclusively within the membrane-water interface. At an ethanol concentration of 20 mmol/l in water, ethanol concentrations at the lipid/water interface are in the range from 30 – 15 mmol/l, corresponding to one ethanol molecule per 100–200 lipids. PMID:19592710
A Measurement and Modeling Study of Hair Partition of Neutral, Cationic, and Anionic Chemicals.
Li, Lingyi; Yang, Senpei; Chen, Tao; Han, Lujia; Lian, Guoping
2018-04-01
Various neutral, cationic, and anionic chemicals contained in hair care products can be absorbed into the hair fiber to modulate physicochemical properties such as color, strength, style, and volume. For environmental safety, there is also an interest in understanding the absorption of a wide range of chemical pollutants into hair. There have been very limited studies on the absorption properties of chemicals into hair. Here, an experimental and modeling study has been carried out for the hair-water partition of a range of neutral, cationic, and anionic chemicals at different pH. The data showed that hair-water partition depends not only on the hydrophobicity of the chemical but also on the pH. The partition of cationic chemicals to hair increased with pH because their electrostatic interaction with hair changed from repulsion to attraction. For anionic chemicals, the hair-water partition coefficients decreased with increasing pH because their electrostatic interaction with hair changed from attraction to repulsion. Increasing the pH did not change the partition of neutral chemicals significantly. Based on this new physicochemical insight into the pH effect on hair-water partition, a new quantitative structure-property relationship model has been proposed, taking into account both the hydrophobic interaction and the electrostatic interaction of the chemical with the hair fiber. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
A combinatorial model for the Macdonald polynomials.
Haglund, J
2004-11-16
We introduce a polynomial C(mu)[Z; q, t], depending on a set of variables Z = z(1), z(2),..., a partition mu, and two extra parameters q, t. The definition of C(mu) involves a pair of statistics (maj(sigma, mu), inv(sigma, mu)) on words sigma of positive integers, and the coefficients of the z(i) are manifestly in N[q,t]. We conjecture that C(mu)[Z; q, t] is none other than the modified Macdonald polynomial H(mu)[Z; q, t]. We further introduce a general family of polynomials F(T)[Z; q, S], where T is an arbitrary set of squares in the first quadrant of the xy plane, and S is an arbitrary subset of T. The coefficients of the F(T)[Z; q, S] are in N[q], and C(mu)[Z; q, t] is a sum of certain F(T)[Z; q, S] times nonnegative powers of t. We prove F(T)[Z; q, S] is symmetric in the z(i) and satisfies other properties consistent with the conjecture. We also show how the coefficient of a monomial in F(T)[Z; q, S] can be expressed recursively. Maple calculations indicate the F(T)[Z; q, S] are Schur-positive, and we present a combinatorial conjecture for their Schur coefficients when the set T is a partition with at most three columns.
Evaluation of gas-particle partitioning in a regional air quality model for organic pollutants
NASA Astrophysics Data System (ADS)
Efstathiou, Christos I.; Matejovičová, Jana; Bieser, Johannes; Lammel, Gerhard
2016-12-01
Persistent organic pollutants (POPs) are of considerable concern due to their well-recognized toxicity and their potential to bioaccumulate and engage in long-range transport. These compounds are semi-volatile and therefore partition between the vapour and condensed phases in the atmosphere, while both phases can undergo chemical reactions. This work describes the extension of the Community Multiscale Air Quality (CMAQ) modelling system to POPs, with a focus on establishing an adaptable framework that accounts for gaseous chemistry, heterogeneous reactions, and gas-particle partitioning (GPP). The effect of GPP is assessed by implementing a set of independent parameterizations within the CMAQ aerosol module, including the Junge-Pankow (JP) adsorption model, the Harner-Bidleman (HB) organic matter (OM) absorption model, and the dual Dachs-Eisenreich (DE) black carbon (BC) adsorption and OM absorption model. Use of these descriptors in a modified version of CMAQ for benzo[a]pyrene (BaP) results in different fate and transport patterns, as demonstrated by regional-scale simulations performed for a European domain during 2006. The dual DE model predicted 24.1 % higher average domain concentrations compared to the HB model, which in turn predicted 119.2 % higher levels compared to the baseline JP model. Evaluation with measurements from the European Monitoring and Evaluation Programme (EMEP) reveals the capability of the more extensive DE model to better capture the ambient levels and seasonal behaviour of BaP. It is found that the heterogeneous reaction of BaP with O3 may decrease its atmospheric lifetime by 25.2 % (domain and annual average) and near-ground concentrations by 18.8 %. Marginally better model performance was found for one of the six EMEP stations (Košetice) when heterogeneous BaP reactivity was included. Further analysis shows that, for the rest of the EMEP locations, the model continues to underestimate BaP levels, an observation that can be attributed to low emission estimates for such remote areas. These findings suggest that, when modelling the fate and transport of organic pollutants on large spatio-temporal scales, the selection and parameterization of GPP can be as important as degradation (reactivity).
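For readers unfamiliar with the GPP parameterizations named above, the sketch below evaluates the particle-bound fraction under the Junge-Pankow adsorption model and under a widely quoted simplified form of the Harner-Bidleman K_OA absorption model. The constants (Junge constant c of about 17.2 Pa cm, the -11.91 intercept) and the example inputs are assumptions drawn from the general literature, not from this paper's CMAQ implementation.

```python
import math

def phi_junge_pankow(p_l_pa, theta_cm2_per_cm3, c=17.2):
    """Junge-Pankow adsorption model: particle-bound fraction
    phi = c*theta / (p_L + c*theta), with liquid vapour pressure p_L in Pa,
    aerosol surface area theta in cm^2 per cm^3 of air, and the Junge
    constant c ~ 17.2 Pa cm (a commonly used default, assumed here)."""
    return c * theta_cm2_per_cm3 / (p_l_pa + c * theta_cm2_per_cm3)

def phi_harner_bidleman(log_koa, f_om, tsp_ug_m3):
    """Harner-Bidleman OM-absorption model in its widely quoted simplified form
    log Kp = log K_OA + log f_OM - 11.91 (Kp in m^3/ug), followed by
    phi = Kp*TSP / (1 + Kp*TSP). Treat the constant as an assumption."""
    kp = 10.0 ** (log_koa + math.log10(f_om) - 11.91)
    return kp * tsp_ug_m3 / (1.0 + kp * tsp_ug_m3)

# Illustrative values loosely representative of a BaP-like compound
print(phi_junge_pankow(p_l_pa=2.0e-5, theta_cm2_per_cm3=1.0e-6),
      phi_harner_bidleman(log_koa=11.5, f_om=0.2, tsp_ug_m3=30.0))
```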
Hoggan, James L; Bae, Keonbeom; Kibbey, Tohren C G
2007-08-15
Trapped organic solvents, in both the vadose zone and below the water table, are frequent sources of environmental contamination. A common source of organic solvent contamination is spills, leaks, and improper solvent disposal associated with dry cleaning processes. Dry cleaning solvents, such as tetrachloroethylene (PCE), are typically enhanced with the addition of surfactants to improve cleaning performance. The objective of this work was to examine the partitioning behavior of surfactants from PCE in contact with water. The relative rates of surfactants partitioning and PCE dissolution are important for modeling the behavior of waste PCE in the subsurface, in that they influence the interfacial tension of the PCE, and how (or if) interfacial tension changes over time in the subsurface. The work described here uses a flow-through system to examine simultaneous partitioning and PCE dissolution in a porous medium. Results indicate that both nonylphenol ethoxylate nonionic surfactants and a sulfosuccinate anionic surfactant partition out of residual PCE much more rapidly than the PCE dissolves, suggesting that in many cases interfacial tension changes caused by partitioning may influence infiltration and distribution of PCE in the subsurface. Non-steady-state partitioning is found to be well-described by a linear driving force model incorporating measured surfactant partition coefficients.
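The linear driving force model named in the last sentence has a simple closed form, sketched below; the rate constant and equilibrium concentration are hypothetical placeholders, and this is a generic sketch of the model class rather than the authors' fitted parameters.

```python
import numpy as np

def ldf_aqueous_conc(t, k, c_eq, c0=0.0):
    """Linear driving force model, dC/dt = k*(C_eq - C): analytic solution for
    the aqueous surfactant concentration approaching the equilibrium value set
    by the measured partition coefficient."""
    return c_eq - (c_eq - c0) * np.exp(-k * t)

# Hypothetical mass-transfer coefficient k (1/h) and equilibrium aqueous
# concentration C_eq (mg/L) implied by Kp and the surfactant load in the PCE phase
t_hours = np.linspace(0.0, 24.0, 7)
print(ldf_aqueous_conc(t_hours, k=0.3, c_eq=12.0))
```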
Karunasekara, Thushara; Poole, Colin F
2011-07-15
Partition coefficients for varied compounds were determined for the organic solvent-dimethyl sulfoxide biphasic partition system where the organic solvent is n-heptane or isopentyl ether. These partition coefficient databases are analyzed using the solvation parameter model facilitating a quantitative comparison of the dimethyl sulfoxide-based partition systems with other totally organic partition systems. Dimethyl sulfoxide is a moderately cohesive solvent, reasonably dipolar/polarizable and strongly hydrogen-bond basic. Although generally considered to be non-hydrogen-bond acidic, analysis of the partition coefficient database strongly supports reclassification as a weak hydrogen-bond acid in agreement with recent literature. The system constants for the n-heptane-dimethyl sulfoxide biphasic system provide an explanation of the mechanism for the selective isolation of polycyclic aromatic compounds from mixtures containing low-polarity hydrocarbons based on the capability of the polar interactions (dipolarity/polarizability and hydrogen-bonding) to overcome the opposing cohesive forces in dimethyl sulfoxide that are absent for the interactions with hydrocarbons of low polarity. In addition, dimethyl sulfoxide-organic solvent systems afford a complementary approach to other totally organic biphasic partition systems for descriptor measurements of compounds virtually insoluble in water. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sulis, Mauro; Langensiepen, Matthias; Shrestha, Prabhakar; Schickling, Anke; Simmer, Clemens; Kollet, Stefan
2015-04-01
Vegetation has a significant influence on the partitioning of radiative forcing and on the spatial and temporal variability of soil water and soil temperature. Therefore plant physiological properties play a key role in mediating and amplifying interactions and feedback mechanisms in the soil-vegetation-atmosphere continuum. Because of their direct impact on latent heat fluxes, these properties may also influence weather-generating processes, such as the evolution of the atmospheric boundary layer (ABL). In land surface models, plant physiological properties are usually obtained from literature synthesis by unifying several plant/crop species in predefined vegetation classes. In this work, crop-specific physiological characteristics, retrieved from detailed field measurements, are included in the bio-physical parameterization of the Community Land Model (CLM), which is a component of the Terrestrial Systems Modeling Platform (TerrSysMP). The measured set of parameters for two typical European mid-latitudinal crops (sugar beet and winter wheat) is validated using eddy covariance measurements (sensible heat and latent heat) over multiple years from three measurement sites located in the North Rhine-Westphalia region, Germany. We found clear improvements in the CLM simulations when the crop-specific physiological characteristics of the plants were used instead of the generic crop type and compared to the measurements. In particular, the increase of latent heat fluxes in conjunction with decreased sensible heat fluxes, as simulated by the two new crop-specific parameter sets, leads to an improved quantification of the diurnal energy partitioning. These findings are cross-validated using estimates of gross primary production extracted from net ecosystem exchange measurements. This independent analysis reveals that the better agreement between observed and simulated latent heat using the plant-specific physiological properties largely stems from an improved simulation of the photosynthesis process owing to a better estimation of the Rubisco enzyme kinetics. Finally, to evaluate the effects of the crop-specific parameterizations on the ABL dynamics, we perform a series of semi-idealized land-atmosphere coupled simulations by hypothesizing three cropland configurations. These numerical experiments reveal different heat and moisture budgets of the ABL that clearly impact the evolution of the boundary layer when the crop-specific physiological properties are used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Zhi-Qiang; Herbert, John M., E-mail: herbert@chemistry.ohio-state.edu; Mewes, Jan-Michael
2015-11-28
The Marcus and Pekar partitions are common, alternative models to describe the non-equilibrium dielectric polarization response that accompanies instantaneous perturbation of a solute embedded in a dielectric continuum. Examples of such a perturbation include vertical electronic excitation and vertical ionization of a solution-phase molecule. Here, we provide a general derivation of the accompanying polarization response, for a quantum-mechanical solute described within the framework of a polarizable continuum model (PCM) of electrostatic solvation. Although the non-equilibrium free energy is formally equivalent within the two partitions, albeit partitioned differently into “fast” versus “slow” polarization contributions, discretization of the PCM integral equations fails to preserve certain symmetries contained in these equations (except in the case of the conductor-like models or when the solute cavity is spherical), leading to alternative, non-equivalent matrix equations. Unlike the total equilibrium solvation energy, however, which can differ dramatically between different formulations, we demonstrate that the equivalence of the Marcus and Pekar partitions for the non-equilibrium solvation correction is preserved to high accuracy. Differences in vertical excitation and ionization energies are <0.2 eV (and often <0.01 eV), even for systems specifically selected to afford a large polarization response. Numerical results therefore support the interchangeability of the Marcus and Pekar partitions, but also caution against relying too much on the fast PCM charges for interpretive value, as these charges differ greatly between the two partitions, especially in polar solvents.
The Influence of Oxygen and Sulfur on Uranium Partitioning Into the Core
NASA Astrophysics Data System (ADS)
Moore, R. D., Jr.; Van Orman, J. A.; Hauck, S. A., II
2017-12-01
Uranium, along with K and Th, may provide substantial long-term heating in planetary cores, depending on the magnitude of their partitioning into the metal during differentiation. In general, non-metallic light elements are known to have a large influence on the partitioning of trace elements, and the presence of sulfur is known to enhance the partitioning of uranium into the metal. Data from the steelmaking literature indicate that oxygen also enhances the solubility of uranium in liquid iron alloys. Here we present experimental data on the partitioning of U between immiscible liquids in the Fe-S-O system, and use these data along with published metal-silicate partitioning data to calibrate a quantitative activity model for U in the metal. We also determined partition coefficients for Th, K, Nb, Nd, Sm, and Yb, but were unable to fully constrain activity models for these elements with available data. A Monte Carlo fitting routine was used to calculate U-S, U-O, and U-S-O interaction coefficients and their associated uncertainties. We find that the combined interaction of uranium with sulfur and oxygen is predominant, with S and O together enhancing the solubility of uranium to a far greater degree than either element in isolation. This suggests that uranium complexes with sulfite or sulfate species in the metal. For a model Mars core composition containing 14 at% S and 5 at% O, the metal/silicate partition coefficient for U is predicted to be an order of magnitude larger than for a pure Fe-Ni core.
Evaluation of Pharmacokinetic Assumptions Using a 443 ...
With the increasing availability of high-throughput and in vitro data for untested chemicals, there is a need for pharmacokinetic (PK) models for in vitro to in vivo extrapolation (IVIVE). Though some PBPK models have been created for individual compounds using in vivo data, we are now able to rapidly parameterize generic PBPK models using in vitro data to allow IVIVE for chemicals tested for bioactivity via high-throughput screening. However, these new models are expected to have limited accuracy due to their simplicity and generalization of assumptions. We evaluated the assumptions and performance of a generic PBPK model (R package “httk”) parameterized by a library of in vitro PK data for 443 chemicals. We evaluate and calibrate Schmitt’s method by comparing the predicted volume of distribution (Vd) and tissue partition coefficients to in vivo measurements. The partition coefficients are initially overpredicted, likely due to overestimation of partitioning into phospholipids in tissues and the lack of lipid partitioning in the in vitro measurements of the fraction unbound in plasma. Correcting for phospholipids and plasma binding improved the predictive ability (R2 of 0.52 for partition coefficients and 0.32 for Vd). We lacked enough data to evaluate the accuracy of changing the model structure to include tissue blood volumes and/or separate compartments for richly/poorly perfused tissues, therefore we evaluated the impact of these changes on model
Flombaum, Pedro; Sala, Osvaldo E; Rastetter, Edward B
2014-02-01
Resource partitioning, facilitation, and sampling effect are the three mechanisms behind the biodiversity effect, which is usually depicted as the effect of plant-species richness on aboveground net primary production. These mechanisms operate simultaneously, but their relative importance and interactions are difficult to unravel experimentally. Thus, niche differentiation and facilitation have been lumped together and separated from the sampling effect. Here, we propose three hypotheses about interactions among the three mechanisms and test them using a simulation model. The model simulated water movement through soil and vegetation, and net primary production, mimicking the Patagonian steppe. Using the model, we created grass and shrub monocultures and mixtures, and controlled root overlap and grass water-use efficiency (WUE) to simulate gradients of biodiversity, resource partitioning, and facilitation. The presence of shrubs facilitated grass growth by increasing its WUE and in turn increased the sampling effect, whereas root overlap (resource partitioning) had, on average, no effect on the sampling effect. Interestingly, resource partitioning and facilitation interacted, so the effect of facilitation on the sampling effect decreased as resource partitioning increased. The sampling effect was enhanced by the difference between the two functional groups in their efficiency in using resources. Morphological and physiological differences make one group outperform the other; once these differences were established, further differences did not enhance the sampling effect. In addition, grass WUE and root overlap positively influenced the biodiversity effect but showed no interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhil Datta-Gupta
2003-08-01
We explore the use of efficient streamline-based simulation approaches for modeling partitioning interwell tracer tests in hydrocarbon reservoirs. Specifically, we utilize the unique features of streamline models to develop an efficient approach for interpretation and history matching of field tracer response. A critical aspect here is the underdetermined and highly ill-posed nature of the associated inverse problems. We have adopted an integrated approach whereby we combine data from multiple sources to minimize the uncertainty and non-uniqueness in the interpreted results. For partitioning interwell tracer tests, these are primarily the distribution of reservoir permeability and oil saturation distribution. A novel approach to multiscale data integration using Markov Random Fields (MRF) has been developed to integrate static data sources from the reservoir such as core, well log and 3-D seismic data. We have also explored the use of a finite difference reservoir simulator, UTCHEM, for field-scale design and optimization of partitioning interwell tracer tests. The finite-difference model allows us to include detailed physics associated with reactive tracer transport, particularly those related with transverse and cross-streamline mechanisms. We have investigated the potential use of downhole tracer samplers and also the use of natural tracers for the design of partitioning tracer tests. Finally, the behavior of partitioning tracer tests in fractured reservoirs is investigated using a dual-porosity finite-difference model.
Ali, Usman; Sweetman, Andrew James; Jones, Kevin C; Malik, Riffat Naseem
2018-06-18
This study was designed to monitor organochlorine pesticides (OCPs) and polychlorinated biphenyls (PCBs) in riverine water of the Lesser Himalaya along the altitude. Further, sediment-water partitioning employing organic carbon and black carbon models was assessed. Results revealed higher water levels of organochlorine pesticides (0.07-41.4 ng L-1) and polychlorinated biphenyls (0.671-84.5 ng L-1) in the Lesser Himalayan Region (LHR) of Pakistan. Spatially, elevated levels were observed in the altitudinal zone (737-975 masl), which is influenced by anthropogenic and industrial activities. Sediment-water partitioning of OCPs and PCBs was deduced from field data by employing one-carbon (f_OC·K_OC) and two-carbon Freundlich (f_OC·K_OC + f_BC·K_BC·C_W^(nF-1)) models. The results showed improved agreement between measured and model-predicted concentrations when black carbon was included in the model, and suggested adsorption to be the dominant mechanism in the phase partitioning of organochlorines in the LHR.
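The two partitioning models quoted above translate directly into code. The sketch below evaluates both; the carbon fractions, partition constants, dissolved concentration, and Freundlich exponent are hypothetical placeholders used only to show the calculation.

```python
def kd_one_carbon(f_oc, k_oc):
    """One-carbon model: sediment-water distribution from organic carbon only."""
    return f_oc * k_oc

def kd_two_carbon(f_oc, k_oc, f_bc, k_bc, c_w, n_f):
    """Two-carbon Freundlich model from the abstract:
    Kd = f_OC*K_OC + f_BC*K_BC*C_W**(n_F - 1), adding nonlinear adsorption to
    black carbon on top of linear organic-carbon absorption."""
    return f_oc * k_oc + f_bc * k_bc * c_w ** (n_f - 1.0)

# Hypothetical inputs for a single congener: carbon fractions, partition constants
# (L/kg), dissolved concentration in the units used when K_BC and n_F were fitted,
# and the Freundlich exponent.
print(kd_one_carbon(0.02, 10**4.5),
      kd_two_carbon(0.02, 10**4.5, 0.002, 10**6.0, c_w=5.0, n_f=0.7))
```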
Atomistic Models of General Anesthetics for Use in in Silico Biological Studies
2015-01-01
While small molecules have been used to induce anesthesia in a clinical setting for well over a century, a detailed understanding of the molecular mechanism remains elusive. In this study, we utilize ab initio calculations to develop a novel set of CHARMM-compatible parameters for the ubiquitous modern anesthetics desflurane, isoflurane, sevoflurane, and propofol for use in molecular dynamics (MD) simulations. The parameters generated were rigorously tested against known experimental physicochemical properties including dipole moment, density, enthalpy of vaporization, and free energy of solvation. In all cases, the anesthetic parameters were able to reproduce experimental measurements, signifying the robustness and accuracy of the atomistic models developed. The models were then used to study the interaction of anesthetics with the membrane. Calculation of the potential of mean force for inserting the molecules into a POPC bilayer revealed a distinct energetic minimum of 4–5 kcal/mol relative to aqueous solution at the level of the glycerol backbone in the membrane. The location of this minimum within the membrane suggests that anesthetics partition to the membrane prior to binding their ion channel targets, giving context to the Meyer–Overton correlation. Moreover, MD simulations of these drugs in the membrane give rise to computed membrane structural parameters, including atomic distribution, deuterium order parameters, dipole potential, and lateral stress profile, that indicate partitioning of anesthetics into the membrane at the concentration range studied here, which does not appear to perturb the structural integrity of the lipid bilayer. These results signify that an indirect, membrane-mediated mechanism of channel modulation is unlikely. PMID:25303275
Wang, Li Kun; Heng, Paul Wan Sia; Liew, Celine Valeria
2015-04-01
Bottom spray fluid-bed coating is a common technique for coating multiparticulates. Under the quality-by-design framework, particle recirculation within the partition column is one of the main variability sources affecting particle coating and coat uniformity. However, the occurrence and mechanism of particle recirculation within the partition column of the coater are not well understood. The purpose of this study was to visualize and define particle recirculation within the partition column. Based on different combinations of partition gap setting, air accelerator insert diameter, and particle size fraction, particle movements within the partition column were captured using a high-speed video camera. The particle recirculation probability and voidage information were mapped using a visiometric process analyzer. High-speed images showed that particles contributing to the recirculation phenomenon were behaving as clustered colonies. Fluid dynamics analysis indicated that particle recirculation within the partition column may be attributed to the combined effect of cluster formation and drag reduction. Both visiometric process analysis and particle coating experiments showed that smaller particles had greater propensity toward cluster formation than larger particles. The influence of cluster formation on coating performance and possible solutions to cluster formation were further discussed. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Raevsky, O A; Grigor'ev, V J; Raevskaja, O E; Schaper, K-J
2006-06-01
QSPR analyses of a data set containing experimental partition coefficients in the three systems octanol-water, water-gas, and octanol-gas for 98 chemicals have shown that it is possible to calculate any partition coefficient in the 'gas phase/octanol/water' system by three different approaches: (1) from the experimental partition coefficients obtained in the corresponding two other subsystems; however, in many cases these data may not be available. Therefore, a second approach may be used: (2) a traditional QSPR analysis based on, e.g., HYBOT descriptors (hydrogen bond acceptor and donor factors, SigmaCa and SigmaCd, together with polarisability alpha, a steric bulk effect descriptor), supplemented with substructural indicator variables. (3) A very promising approach is a combination of the similarity concept and QSPR based on HYBOT descriptors. In this approach, observed partition coefficients of the structurally nearest neighbours of a compound-of-interest are used, together with contributions arising from differences in alpha, SigmaCa, and SigmaCd values between the compound-of-interest and its nearest neighbour(s). In this investigation, highly significant relationships were obtained by approaches (1) and (3) for the octanol/gas phase partition coefficient (log Log).
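Approach (1) amounts to a thermodynamic cycle among the three coefficients, which the short sketch below makes explicit; the numerical values are hypothetical.

```python
def log_kow_from_cycle(log_k_octanol_gas, log_k_water_gas):
    """Approach (1): because K_ow = (C_oct/C_gas)/(C_wat/C_gas), any of the three
    coefficients follows from the other two by a thermodynamic cycle:
    log K_ow = log K_og - log K_wg."""
    return log_k_octanol_gas - log_k_water_gas

# Hypothetical example: log K_og = 4.4 and log K_wg = 2.1 imply log K_ow = 2.3
print(log_kow_from_cycle(4.4, 2.1))
```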
Sound transmission through lightweight double-leaf partitions: theoretical modelling
NASA Astrophysics Data System (ADS)
Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.
2005-09-01
This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.
Lin, Junfang; Cao, Wenxi; Wang, Guifeng; Hu, Shuibo
2013-06-20
Using a data set of 1333 samples, we assess the spectral absorption relationships of different wave bands for phytoplankton (ph) and particles. We find that a nonlinear model (second-order quadratic equations) delivers good performance in describing their spectral characteristics. Based on these spectral relationships, we develop a method for partitioning the total absorption coefficient into the contributions attributable to phytoplankton [a(ph)(λ)], colored dissolved organic material [CDOM; a(CDOM)(λ)], and nonalgal particles [NAP; a(NAP)(λ)]. This method is validated using a data set that contains 550 simultaneous measurements of phytoplankton, CDOM, and NAP from the NASA bio-Optical Marine Algorithm Dataset. We find that our method is highly efficient and robust, with significant accuracy: the relative root-mean-square errors (RMSEs) are 25.96%, 38.30%, and 19.96% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively. The performance is still satisfactory when the method is applied to water samples from the northern South China Sea as a regional case. The computed and measured absorption coefficients (167 samples) agree well with the RMSEs, i.e., 18.50%, 32.82%, and 10.21% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively. Finally, the partitioning method is applied directly to an independent data set (1160 samples) derived from the Bermuda Bio-Optics Project that contains relatively low absorption values, and we also obtain good inversion accuracy [RMSEs of 32.37%, 32.57%, and 11.52% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively]. Our results indicate that this partitioning method delivers satisfactory performance for the retrieval of a(ph), a(CDOM), and a(NAP). Therefore, this may be a useful tool for extracting absorption coefficients from in situ measurements or remotely sensed ocean-color data.
Ali, Usman; Syed, Jabir Hussain; Mahmood, Adeel; Li, Jun; Zhang, Gan; Jones, Kevin C; Malik, Riffat Naseem
2015-09-01
Levels of polychlorinated biphenyls (PCBs) were assessed in surface soils and passive air samples from the Indus River Basin, and the influential role of black carbon (BC) in the soil-air partitioning process was examined. ∑26-PCBs ranged between 0.002-3.03 pg m(-3) and 0.26-1.89 ng g(-1) for passive air and soil samples, respectively. Lower chlorinated (tri- and tetra-) PCBs were abundant in both air (83.9%) and soil (92.1%) samples. Soil-air partitioning of PCBs was investigated through octanol-air partition coefficients (KOA) and black carbon-air partition coefficients (KBC-A). The results of a paired t-test revealed that both models showed statistically significant agreement between measured and model-predicted values for the PCB congeners. Ratios of f_BC·K_BC-A·δ_OCT/(f_OM·K_OA) > 5 explicitly suggested the influential role of black carbon in the retention and soil-air partitioning of PCBs. Lower chlorinated PCBs were strongly adsorbed and retained by black carbon during soil-air partitioning because of their dominance at the sampling sites and a planarity effect. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Robustness Testing Campaign for IMA-SP Partitioning Kernels
NASA Astrophysics Data System (ADS)
Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David
2015-09-01
With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.
Empirical Bayes Approaches to Multivariate Fuzzy Partitions.
ERIC Educational Resources Information Center
Woodbury, Max A.; Manton, Kenneth G.
1991-01-01
An empirical Bayes-maximum likelihood estimation procedure is presented for the application of fuzzy partition models in describing high dimensional discrete response data. The model describes individuals in terms of partial membership in multiple latent categories that represent bounded discrete spaces. (SLD)
Visualizing phylogenetic tree landscapes.
Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A
2017-02-02
Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once, in 2 or 3 dimensions, the relationship among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationship among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperform the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationship among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationship among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationship among the trees being compared.
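As an illustration of the kind of embedding discussed above, the sketch below projects a tree-to-tree distance matrix into a low-dimensional space with classical multidimensional scaling, used here as a simpler stand-in for the CCA + SGD pipeline; the toy distance matrix is invented for the example.

```python
import numpy as np

def classical_mds(dist, dims=3):
    """Project a tree-to-tree distance matrix into 'dims' dimensions with classical
    MDS; a simpler stand-in for the CCA + SGD approach described above, shown only
    to illustrate how tree landscapes can be embedded for visualization."""
    d2 = dist ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ d2 @ j                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dims]    # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy symmetric distances among 4 competing trees (e.g., Robinson-Foulds-like)
d = np.array([[0., 2., 6., 6.],
              [2., 0., 6., 6.],
              [6., 6., 0., 2.],
              [6., 6., 2., 0.]])
print(classical_mds(d, dims=2))
```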
[On the partition of acupuncture academic schools].
Yang, Pengyan; Luo, Xi; Xia, Youbing
2016-05-01
Nowadays extensive attention has been paid to the research of acupuncture academic schools; however, a widely accepted method of partition of acupuncture academic schools is still in need. In this paper, the methods of partition of acupuncture academic schools in history have been arranged, and three typical methods, "partition of five schools", "partition of eighteen schools", and "two-stage based partition", are summarized. After a deep analysis of the disadvantages and advantages of these three methods, a new method of partition of acupuncture academic schools called "three-stage based partition" is proposed. In this method, after the overall acupuncture academic schools are divided into an ancient stage, a modern stage and a contemporary stage, each school is divided into its sub-school category. It is believed that this method of partition can remedy the weaknesses of current methods, and also explore a new model of inheritance and development under a different aspect through the differentiation and interaction of acupuncture academic schools at the three stages.
Occurrence analysis of daily rainfalls through non-homogeneous Poissonian processes
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Ferrari, E.; de Luca, D. L.
2011-06-01
A stochastic model based on a non-homogeneous Poisson process, characterised by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects of daily rainfalls exceeding prefixed threshold values. The data modelling has been performed by partitioning the observed daily rainfall data into a calibration period for parameter estimation and a validation period for checking for changes in the occurrence process. The model has been applied to a set of rain gauges located in different geographical areas of Southern Italy. The results show a good fit of the time-varying intensity of the rainfall occurrence process by a 2-harmonic Fourier law and no statistically significant evidence of changes in the validation period for different threshold values.
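A minimal sketch of the model class described above: a non-homogeneous Poisson process whose occurrence intensity follows a 2-harmonic Fourier law, simulated by thinning. The Fourier coefficients and the bounding rate are illustrative assumptions, not the fitted values for the Southern Italy gauges.

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity(t_days, a0=0.10, a1=0.05, b1=0.02, a2=0.02, b2=0.01):
    """Two-harmonic Fourier intensity (events/day) for exceedances of a rainfall
    threshold; the coefficients here are purely illustrative."""
    w = 2.0 * np.pi / 365.25
    return (a0 + a1*np.cos(w*t_days) + b1*np.sin(w*t_days)
               + a2*np.cos(2*w*t_days) + b2*np.sin(2*w*t_days))

def simulate_nhpp(t_max_days, lam_max=0.25):
    """Simulate occurrence times of a non-homogeneous Poisson process by thinning
    a homogeneous process with rate lam_max >= max of the intensity."""
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max_days:
            return np.array(times)
        if rng.random() < intensity(t) / lam_max:
            times.append(t)

print(len(simulate_nhpp(3 * 365)))  # number of threshold exceedances in ~3 years
```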
Admire, Brittany; Lian, Bo; Yalkowsky, Samuel H
2015-01-01
The UPPER (Unified Physicochemical Property Estimation Relationships) model uses additive and non-additive parameters to estimate 20 biologically relevant properties of organic compounds. The model has been validated by Lian and Yalkowsky (2014) on a data set of 700 hydrocarbons. Recently, Admire et al. (2014) expanded the model to predict the boiling and melting points of 1288 polyhalogenated benzenes, biphenyls, dibenzo-p-dioxins, diphenyl ethers, anisoles and alkanes. In this work, 19 new group descriptors are determined and used to predict the aqueous solubilities, octanol solubilities and the octanol-water partition coefficients. Copyright © 2014 Elsevier Ltd. All rights reserved.
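To make the additive part of such group-contribution schemes concrete, the sketch below sums group counts times group contributions; the group names and contribution values are hypothetical placeholders, not the published UPPER descriptors.

```python
# Purely illustrative group-contribution sketch in the spirit of additive
# UPPER-style descriptors: a property is estimated as the sum of group counts
# times group contributions. Values below are hypothetical placeholders.
group_contribution = {"aromatic_CH": 0.30, "Cl_on_ring": 0.65, "ring_fusion": 0.90}

def estimate_property(group_counts):
    """Additive estimate: sum over groups of (count * contribution)."""
    return sum(n * group_contribution[g] for g, n in group_counts.items())

# e.g. a dichlorinated benzene-like fragment
print(estimate_property({"aromatic_CH": 4, "Cl_on_ring": 2}))
```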
Boundary perimeter Bethe ansatz
NASA Astrophysics Data System (ADS)
Frassek, Rouven
2017-06-01
We study the partition function of the six-vertex model in the rational limit on arbitrary Baxter lattices with reflecting boundary. Every such lattice is interpreted as an invariant of the twisted Yangian. This identification allows us to relate the partition function of the vertex model to the Bethe wave function of an open spin chain. We obtain the partition function in terms of creation operators on a reference state from the algebraic Bethe ansatz and as a sum of permutations and reflections from the coordinate Bethe ansatz.
Feenstra, Peter; Brunsteiner, Michael; Khinast, Johannes
2014-10-01
The interaction between drug products and polymeric packaging materials is an important topic in the pharmaceutical industry and is often associated with high costs because of the required elaborate interaction studies. Therefore, a theoretical prediction of such interactions would be beneficial. Often, material parameters such as the octanol-water partition coefficient are used to predict the partitioning of migrant molecules between a solvent and a polymeric packaging material. Here, we present an investigation of the partitioning of various migrant molecules between polymers and solvents using molecular dynamics simulations for the calculation of interaction energies. Our results show that the use of a model for the interaction between the migrant and the polymer at atomistic detail can yield significantly better results when predicting the polymer-solvent partitioning than a model based on the octanol-water partition coefficient. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
NASA Technical Reports Server (NTRS)
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and Noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method and the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial and regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
Two dissimilar approaches to dynamical systems on hyper MV-algebras and their information entropy
NASA Astrophysics Data System (ADS)
Mehrpooya, Adel; Ebrahimi, Mohammad; Davvaz, Bijan
2017-09-01
Measuring the flow of information that is related to the evolution of a system which is modeled by applying a mathematical structure is of capital significance for science and usually for mathematics itself. Regarding this fact, a major issue in concern with hyperstructures is their dynamics and the complexity of the varied possible dynamics that exist over them. Notably, the dynamics and uncertainty of hyper MV-algebras, which are hyperstructures and extensions of a central tool in infinite-valued Lukasiewicz propositional calculus that models many-valued logics, are of primary concern. Tackling this problem, in this paper we focus on the subject of dynamical systems on hyper MV-algebras and their entropy. In this respect, we adopt two varied approaches. One is the set-based approach in which hyper MV-algebra dynamical systems are developed by employing set functions and set partitions. By the other method that is based on points and point partitions, we establish the concept of hyper injective dynamical systems on hyper MV-algebras. Next, we study the notion of entropy for both kinds of systems. Furthermore, we consider essential ergodic characteristics of those systems and their entropy. In particular, we introduce the concept of isomorphic hyper injective and hyper MV-algebra dynamical systems, and we demonstrate that isomorphic systems have the same entropy. We present a couple of theorems in order to help calculate entropy. In particular, we prove a contemporary version of the addition and Kolmogorov-Sinai theorems. Furthermore, we provide a comparison between the indispensable properties of hyper injective and semi-independent dynamical systems. Specifically, we present and prove theorems that draw comparisons between the entropies of such systems. Lastly, we discuss some possible relationships between the theories of hyper MV-algebra and MV-algebra dynamical systems.
Rational design of polymer-based absorbents: application to the fermentation inhibitor furfural.
Nwaneshiudu, Ikechukwu C; Schwartz, Daniel T
2015-01-01
Reducing the amount of water-soluble fermentation inhibitors like furfural is critical for downstream bio-processing steps to biofuels. A theoretical approach for tailoring absorption polymers to reduce these pretreatment contaminants would be useful for optimal bioprocess design. Experiments were performed to measure aqueous furfural partitioning into polymer resins of 5 bisphenol A diglycidyl ether (epoxy) and polydimethylsiloxane (PDMS). Experimentally measured partitioning of furfural between water and PDMS, the more hydrophobic polymer, showed poor performance, with the logarithm of PDMS-to-water partition coefficient falling between -0.62 and -0.24 (95% confidence). In contrast, the fast setting epoxy was found to effectively partition furfural with the logarithm of the epoxy-to-water partition coefficient falling between 0.41 and 0.81 (95% confidence). Flory-Huggins theory is used to predict the partitioning of furfural into diverse polymer absorbents and is useful for predicting these results. We show that Flory-Huggins theory can be adapted to guide the selection of polymer adsorbents for the separation of low molecular weight organic species from aqueous solutions. This work lays the groundwork for the general design of polymers for the separation of a wide range of inhibitory compounds in biomass pretreatment streams.
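A minimal sketch of how Flory-Huggins theory can rank candidate absorbents is given below, assuming the infinite-dilution limit and a high-molecular-weight polymer; the interaction parameters and molar volume are illustrative placeholders rather than values from this work.

```python
import numpy as np

def log10_K_polymer_water(chi_solute_water, chi_solute_polymer, v_solute, v_water=18.0):
    """Volume-fraction-based polymer/water partition coefficient of a dilute solute
    from Flory-Huggins theory (infinite-dilution limit, polymer molar volume -> infinity):
        ln K = chi_solute_water - chi_solute_polymer - v_solute / v_water
    """
    lnK = chi_solute_water - chi_solute_polymer - v_solute / v_water
    return lnK / np.log(10.0)

# Placeholder chi parameters and molar volume for furfural, not the paper's values.
v_furfural = 83.0            # cm3/mol, approximate
chi_water = 2.5              # assumed solute-water interaction parameter
candidates = {"polymer A (more polar)": 0.4, "polymer B (less polar)": 1.2}

for name, chi_poly in candidates.items():
    print(f"{name}: log10 K = {log10_K_polymer_water(chi_water, chi_poly, v_furfural):.2f}")
# The ranking illustrates the design idea: the lower the solute-polymer chi,
# the more favourable the partitioning of the inhibitor into the absorbent.
```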
Predicting crystal growth via a unified kinetic three-dimensional partition model
NASA Astrophysics Data System (ADS)
Anderson, Michael W.; Gebbie-Rayet, James T.; Hill, Adam R.; Farida, Nani; Attfield, Martin P.; Cubillas, Pablo; Blatov, Vladislav A.; Proserpio, Davide M.; Akporiaye, Duncan; Arstad, Bjørnar; Gale, Julian D.
2017-04-01
Understanding and predicting crystal growth is fundamental to the control of functionality in modern materials. Despite investigations for more than one hundred years, it is only recently that the molecular intricacies of these processes have been revealed by scanning probe microscopy. To organize and understand this large amount of new information, new rules for crystal growth need to be developed and tested. However, because of the complexity and variety of different crystal systems, attempts to understand crystal growth in detail have so far relied on developing models that are usually applicable to only one system. Such models cannot be used to achieve the wide scope of understanding that is required to create a unified model across crystal types and crystal structures. Here we describe a general approach to understanding and, in theory, predicting the growth of a wide range of crystal types, including the incorporation of defect structures, by simultaneous molecular-scale simulation of crystal habit and surface topology using a unified kinetic three-dimensional partition model. This entails dividing the structure into ‘natural tiles’ or Voronoi polyhedra that are metastable and, consequently, temporally persistent. As such, these units are then suitable for re-construction of the crystal via a Monte Carlo algorithm. We demonstrate our approach by predicting the crystal growth of a diverse set of crystal types, including zeolites, metal-organic frameworks, calcite, urea and L-cystine.
Zamora, William J; Curutchet, Carles; Campanera, Josep M; Luque, F Javier
2017-10-26
Hydrophobicity is a key physicochemical descriptor used to understand the biological profile of (bio)organic compounds as well as a broad variety of biochemical, pharmacological, and toxicological processes. This property is estimated from the partition coefficient between aqueous and nonaqueous environments for neutral compounds (PN) and corrected for the pH-dependence of ionizable compounds as the distribution coefficient (D). Here, we have extended the parametrization of the Miertus-Scrocco-Tomasi continuum solvation model in n-octanol to nitrogen-containing heterocyclic compounds, as they are present in many biologically relevant molecules (e.g., purine and pyrimidine bases, amino acids, and drugs), to obtain accurate log PN values for these molecules. This refinement also includes solvation calculations for ionic species in n-octanol with the aim of reproducing the experimental partition of ionic compounds (PI). Finally, the suitability of different formalisms to estimate the distribution coefficient for a wide range of pH values has been examined for a set of small acidic and basic compounds. The results indicate that in general the simple pH-dependence model of the ionizable compound in water suffices to predict the partitioning at or around physiological pH. However, at extreme pH values, where ionic species are predominant, more elaborate models provide a better prediction of the n-octanol/water distribution coefficient, especially for amino acid analogues. Finally, the results also show that these formalisms are better suited to reproduce the experimental pH-dependent distribution curves of log D for both acidic and basic compounds as well as for amino acid analogues.
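The "simple" pH-dependence model mentioned above corresponds, for a monoprotic acid, to log D = log PN - log(1 + 10^(pH - pKa)), while the more elaborate formalism also lets the ionic species partition with its own coefficient PI (for a base the ionization term is 10^(pKa - pH)). A short sketch with illustrative numbers only:

```python
import numpy as np

def log_D_acid(pH, pKa, logP_N, logP_I=None):
    """n-Octanol/water distribution coefficient of a monoprotic acid.
    If logP_I is None, only the neutral species is assumed to partition
    (the 'simple' pH-dependence model); otherwise the ionic species with
    partition coefficient P_I is included as well."""
    f_ion = 10.0 ** (pH - pKa)                      # [A-]/[HA] ratio in water
    P_N = 10.0 ** logP_N
    if logP_I is None:
        D = P_N / (1.0 + f_ion)
    else:
        D = (P_N + 10.0 ** logP_I * f_ion) / (1.0 + f_ion)
    return np.log10(D)

# Illustrative numbers for a generic weak acid (not values from the study)
for pH in (2.0, 7.4, 12.0):
    simple = log_D_acid(pH, pKa=4.8, logP_N=1.9)
    with_ion = log_D_acid(pH, pKa=4.8, logP_N=1.9, logP_I=-1.3)
    print(f"pH {pH:4.1f}  log D (neutral only) = {simple:6.2f}   log D (with P_I) = {with_ion:6.2f}")
```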
Evaluation of Hierarchical Clustering Algorithms for Document Datasets
2002-06-03
link, complete-link, and group average (UPGMA)) and a new set of merging criteria derived from the six partitional criterion functions. Overall, we...used the single-link, complete-link, and UPGMA schemes, as well as the various partitional criterion functions described in Section 3.1. The single-link...other (complete-link approach). The UPGMA scheme [16] (also known as group average) overcomes these problems by measuring the similarity of two clusters
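For reference, the agglomerative schemes named in this excerpt (single-link, complete-link and UPGMA, i.e. average linkage) can be reproduced on a toy document set with SciPy; the data below are synthetic stand-ins for tf-idf vectors, not a document collection from the report.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy tf-idf-like document vectors: three "topics", each loading on its own terms.
rng = np.random.default_rng(42)
blocks = []
for topic in range(3):
    X = rng.random((5, 9)) * 0.2          # small background term weights
    X[:, 3 * topic:3 * topic + 3] += 1.0  # topic-specific term weights
    blocks.append(X)
docs = np.vstack(blocks)

# Cosine distances between documents, then three linkage schemes.
dist = pdist(docs, metric="cosine")
for method in ("single", "complete", "average"):   # "average" is the UPGMA scheme
    Z = linkage(dist, method=method)
    labels = fcluster(Z, t=3, criterion="maxclust")
    print(method, labels)
```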
Research on Crack Formation in Gypsum Partitions with Doorway by Means of FEM and Fracture Mechanics
NASA Astrophysics Data System (ADS)
Kania, Tomasz; Stawiski, Bohdan
2017-10-01
Cracking damage in non-loadbearing internal partition walls is a serious problem that frequently occurs in new buildings within the short term after putting them into service or even before completion of construction. Damage in partition walls is sometimes so great that they cannot be accepted by their occupiers. This problem was illustrated by the example of damage in a gypsum partition wall with doorway attributed to deflection of the slabs beneath and above it. In searching for the deflection which causes damage in masonry walls, fracture mechanics applied to the Finite Element Method (FEM) have been used. For a description of gypsum behaviour, the smeared cracking material model has been selected, where stresses are transferred across the narrowly opened crack until its width reaches the ultimate value. Cracks in the Finite Element models overlapped the real damage observed in the buildings. In order to avoid cracks under the deflection of large floor slabs, the model of a wall with reinforcement in the doorstep zone and a 40 mm thick elastic junction between the partition and ceiling has been analysed.
Ju, Yun-Ru; Yang, Ying-Fei; Tsai, Jeng-Wei; Cheng, Yi-Hsien; Chen, Wei-Yu; Liao, Chung-Min
2017-07-01
Fluctuation exposure of trace metal copper (Cu) is ubiquitous in aquatic environments. The purpose of this study was to investigate the impacts of chronically pulsed exposure on biodynamics and subcellular partitioning of Cu in freshwater tilapia (Oreochromis mossambicus). Long-term 28-day pulsed Cu exposure experiments were performed to explore subcellular partitioning and toxicokinetics/toxicodynamics of Cu in tilapia. Subcellular partitioning linking with a metal influx scheme was used to estimate detoxification and elimination rates. A biotic ligand model-based damage assessment model was used to take into account environmental effects and biological mechanisms of Cu toxicity. We demonstrated that the probability causing 50% of susceptibility risk in response to pulse Cu exposure in generic Taiwan aquaculture ponds was ~33% of Cu in adverse physiologically associated, metabolically active pool, implicating no significant susceptibility risk for tilapia. We suggest that our integrated ecotoxicological models linking chronic exposure measurements with subcellular partitioning can facilitate a risk assessment framework that provides a predictive tool for preventive susceptibility reduction strategies for freshwater fish exposed to pulse metal stressors.
Hahus, Ian; Migliaccio, Kati; Douglas-Mankin, Kyle; Klarenberg, Geraldine; Muñoz-Carpena, Rafael
2018-04-27
Hierarchical and partitional cluster analyses were used to compartmentalize Water Conservation Area 1, a managed wetland within the Arthur R. Marshall Loxahatchee National Wildlife Refuge in southeast Florida, USA, based on physical, biological, and climatic geospatial attributes. Single, complete, average, and Ward's linkages were tested during the hierarchical cluster analyses, with average linkage providing the best results. In general, the partitional method, partitioning around medoids, found clusters that were more evenly sized and more spatially aggregated than those resulting from the hierarchical analyses. However, hierarchical analysis appeared to be better suited to identify outlier regions that were significantly different from other areas. The clusters identified by geospatial attributes were similar to clusters developed for the interior marsh in a separate study using water quality attributes, suggesting that similar factors have influenced variations in both the set of physical, biological, and climatic attributes selected in this study and water quality parameters. However, geospatial data allowed further subdivision of several interior marsh clusters identified from the water quality data, potentially indicating zones with important differences in function. Identification of these zones can be useful to managers and modelers by informing the distribution of monitoring equipment and personnel as well as delineating regions that may respond similarly to future changes in management or climate.
NASA Astrophysics Data System (ADS)
Gan, Chee Kwan; Challacombe, Matt
2003-05-01
Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center of mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 out of 64 processors, while for a 110 molecule water cluster at standard density it achieved a speedup of 113 out of 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26 atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.
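The load-balancing idea can be illustrated with a greedy longest-processing-time heuristic that redistributes boxes according to the per-box times measured in the previous self-consistent-field iteration; this is a simplified stand-in for the partitioning actually used in the paper, with all names and numbers illustrative.

```python
import heapq
import numpy as np

def equal_time_partition(box_times, n_procs):
    """Assign boxes to processors so that the measured per-box times from the
    previous SCF iteration are balanced (greedy LPT heuristic). Returns one
    list of box indices per processor."""
    order = np.argsort(box_times)[::-1]                 # largest workloads first
    heap = [(0.0, p, []) for p in range(n_procs)]       # (accumulated time, proc id, boxes)
    heapq.heapify(heap)
    for b in order:
        t, p, boxes = heapq.heappop(heap)               # least-loaded processor
        boxes.append(int(b))
        heapq.heappush(heap, (t + box_times[b], p, boxes))
    return [boxes for _, _, boxes in sorted(heap, key=lambda x: x[1])]

# Strongly non-uniform box times, as produced by an adaptive Cartesian grid.
rng = np.random.default_rng(1)
times = rng.lognormal(mean=0.0, sigma=1.2, size=512)
parts = equal_time_partition(times, n_procs=8)
loads = [times[p].sum() for p in parts]
print("max/min load ratio:", max(loads) / min(loads))
```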
NASA Astrophysics Data System (ADS)
Huang, Ding Wei; Yen, Edward
1989-08-01
We propose a detailed model, combining the concepts from a partition temperature model and wounded nucleon model, to describe high-energy nucleus-nucleus collisions. One partition temperature is associated with collisions at a fixed wounded nucleon number. The (pseudo-) rapidity distributions are calculated and compared with experimental data. Predictions at higher energy are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmonds, M. J.; Yu, J. H.; Wang, Y. Q.
Simulating the implantation and thermal desorption evolution in a reaction-diffusion model requires solving a set of coupled differential equations that describe the trapping and release of atomic species in Plasma Facing Materials (PFMs). These fundamental equations are well outlined by the Tritium Migration Analysis Program (TMAP) which can model systems with no more than three active traps per atomic species. To overcome this limitation, we have developed a Pseudo Trap and Temperature Partition (PTTP) scheme allowing us to lump multiple inactive traps into one pseudo trap, simplifying the system of equations to be solved. For all temperatures, we show the trapping of atoms from solute is exactly accounted for when using a pseudo trap. However, a single effective pseudo trap energy cannot well replicate the release from multiple traps, each with its own detrapping energy. Atoms held in a high energy trap will remain trapped at relatively low temperatures, and thus there is a temperature range in which release from high energy traps is effectively inactive. By partitioning the temperature range into segments, a pseudo trap can be defined for each segment to account for multiple high energy traps that are actively trapping but are effectively not releasing atoms. With increasing temperature, as in controlled thermal desorption, the lowest energy trap is nearly emptied and can be removed from the set of coupled equations, while the next higher energy trap becomes an actively releasing trap. Each segment is thus calculated sequentially, with the last time step of a given segment solution being used as an initial input for the next segment, as only the pseudo and actively releasing traps are modeled. This PTTP scheme is then applied to experimental thermal desorption data for tungsten (W) samples damaged with heavy ions, which display six distinct release peaks during thermal desorption. Without modifying the TMAP7 source code, the PTTP scheme is shown to successfully model the D retention in all six traps. In conclusion, we demonstrate the full reconstruction from the plasma implantation phase through the controlled thermal desorption phase with detrapping energies near 0.9, 1.1, 1.4, 1.7, 1.9 and 2.1 eV for a W sample damaged at room temperature.
Simmonds, M. J.; Yu, J. H.; Wang, Y. Q.; ...
2018-06-04
Simulating the implantation and thermal desorption evolution in a reaction-diffusion model requires solving a set of coupled differential equations that describe the trapping and release of atomic species in Plasma Facing Materials (PFMs). These fundamental equations are well outlined by the Tritium Migration Analysis Program (TMAP) which can model systems with no more than three active traps per atomic species. To overcome this limitation, we have developed a Pseudo Trap and Temperature Partition (PTTP) scheme allowing us to lump multiple inactive traps into one pseudo trap, simplifying the system of equations to be solved. For all temperatures, we show the trapping of atoms from solute is exactly accounted for when using a pseudo trap. However, a single effective pseudo trap energy cannot well replicate the release from multiple traps, each with its own detrapping energy. Atoms held in a high energy trap will remain trapped at relatively low temperatures, and thus there is a temperature range in which release from high energy traps is effectively inactive. By partitioning the temperature range into segments, a pseudo trap can be defined for each segment to account for multiple high energy traps that are actively trapping but are effectively not releasing atoms. With increasing temperature, as in controlled thermal desorption, the lowest energy trap is nearly emptied and can be removed from the set of coupled equations, while the next higher energy trap becomes an actively releasing trap. Each segment is thus calculated sequentially, with the last time step of a given segment solution being used as an initial input for the next segment, as only the pseudo and actively releasing traps are modeled. This PTTP scheme is then applied to experimental thermal desorption data for tungsten (W) samples damaged with heavy ions, which display six distinct release peaks during thermal desorption. Without modifying the TMAP7 source code, the PTTP scheme is shown to successfully model the D retention in all six traps. In conclusion, we demonstrate the full reconstruction from the plasma implantation phase through the controlled thermal desorption phase with detrapping energies near 0.9, 1.1, 1.4, 1.7, 1.9 and 2.1 eV for a W sample damaged at room temperature.
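A much-simplified illustration of why temperature partitioning works is sketched below: first-order release from six traps during a linear temperature ramp, with retrapping and diffusion ignored and an assumed typical attempt frequency, showing that low-energy traps empty long before high-energy traps become active. This is not the full TMAP equation set, only the Arrhenius release part of it.

```python
import numpy as np

kB = 8.617e-5                                        # Boltzmann constant, eV/K
nu0 = 1.0e13                                         # assumed attempt frequency, s^-1
E_traps = np.array([0.9, 1.1, 1.4, 1.7, 1.9, 2.1])   # detrapping energies, eV
beta, T = 0.5, np.linspace(300.0, 1500.0, 4001)      # ramp rate (K/s), temperature grid (K)

# First-order release (retrapping and diffusion neglected):
#   dn/dT = -(nu0 / beta) * exp(-E / (kB * T)) * n   ->   n(T) = exp(-integral of the rate)
rates = (nu0 / beta) * np.exp(-E_traps[:, None] / (kB * T[None, :]))
integral = np.concatenate(
    [np.zeros((len(E_traps), 1)),
     np.cumsum(0.5 * (rates[:, 1:] + rates[:, :-1]) * np.diff(T), axis=1)], axis=1)
n = np.exp(-integral)

# Low-energy traps empty while high-energy traps are still full, which is what
# allows the inactive high-energy traps to be lumped into one pseudo trap per segment.
for E, ni in zip(E_traps, n):
    T_half = np.interp(0.5, ni[::-1], T[::-1])
    print(f"E = {E:.1f} eV -> ~50% released near {T_half:4.0f} K")
```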
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
NASA Astrophysics Data System (ADS)
Dasgupta, R.; Jego, S.; Ding, S.; Li, Y.; Lee, C. T.
2015-12-01
The behavior of chalcophile elements during mantle melting, melt extraction, and basalt differentiation is critical for the formation of ore deposits and for the geochemical modeling and evolution of the crust-mantle system. While chalcophile elements are strongly partitioned into sulfides, their behavior with different extents of melting, in particular in the absence of sulfides, can only be modeled with complete knowledge of the partitioning behavior of these elements between dominant mantle minerals and basaltic melt with or without dissolved sulfide (S2-). However, experimental data on mineral-melt partitioning are lacking for many chalcophile elements. Crystallization experiments were conducted at 3 GPa and 1450-1600 °C using a piston cylinder and synthetic silicate melt compositions similar to low-degree partial melts of peridotite. Starting silicate mixes doped with 100-300 ppm of each of various chalcophile elements were loaded into Pt/graphite double capsules. To test the effect of dissolved sulfur in silicate melt on mineral-melt partitioning of chalcophile elements, experiments were conducted on both sulfur-free and sulfur-bearing (1100-1400 ppm S in melt) systems. Experimental phases were analyzed by EPMA (for major elements and S) and LA-ICP-MS (for trace elements). All experiments produced an assemblage of cpx + melt ± garnet ± olivine ± spinel and yielded new partition coefficients (D) for Sn, Zn, Mo, Sb, Bi, Pb, and Se for cpx/melt, olivine/melt, and garnet/melt pairs. Derived Ds (mineral/basalt) reveal little effect of S2- in the melt on mineral-melt partition coefficients of the measured chalcophile elements, with Ds for Zn, Mo, Bi, Pb decreasing by less than a factor of 2 from S-free to S-bearing melt systems or remaining similar, within error, between S-free and S-bearing melt systems. By combining our data with existing partitioning data between sulfide phases and silicate melt, we model the fractionation of these elements during mantle melting and basalt crystallization. The model results are compared with the chalcophile element abundances in oceanic basalts. We will discuss the implications of our new partitioning data and model results on sulfur and chalcophile element geochemistry of mantle source regions of ocean floor basalts and the fate of sulfides during mantle melting.
NASA Astrophysics Data System (ADS)
Panagoulia, D.; Trichakis, I.
2012-04-01
Considering the growing interest in simulating hydrological phenomena with artificial neural networks (ANNs), it is useful to figure out the potential and limits of these models. In this study, the main objective is to examine how to improve the ability of an ANN model to simulate extreme values of flow utilizing a priori knowledge of threshold values. A three-layer feedforward ANN was trained by using the back propagation algorithm and the logistic function as activation function. By using the thresholds, the flow was partitioned into low (x < μ), medium (μ ≤ x ≤ μ + 2σ) and high (x > μ + 2σ) values. The ANN model was trained both on the high-flow partition and on all flow data. The developed methodology was implemented over a mountainous river catchment (the Mesochora catchment in northwestern Greece). The ANN model received as inputs pseudo-precipitation (rain plus melt) and previous observed flow data. After the training was completed, the bootstrapping methodology was applied to calculate the ANN confidence intervals (CIs) for a 95% nominal coverage. The calculated CIs included only the uncertainty that comes from the calibration procedure. The results showed that an ANN model trained specifically for high flows, with a priori knowledge of the thresholds, can simulate these extreme values much better (RMSE is 31.4% less) than an ANN model trained with all data of the available time series and using a posteriori threshold values. On the other hand, the width of the CIs increases by 54.9% with a simultaneous increase by 64.4% of the actual coverage for the high flows (a priori partition). The narrower CIs of the high flows trained with all data may be attributed to the smoothing effect produced by the use of the full data sets. Overall, the results suggest that an ANN model trained with a priori knowledge of the threshold values has an increased ability to simulate extreme values compared with an ANN model trained with all the data and a posteriori knowledge of the thresholds.
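The a priori flow partition itself is straightforward; a sketch of the thresholding step, on synthetic flows standing in for the calibration series, is shown below.

```python
import numpy as np

def partition_flows(flow):
    """Split a flow series into low, medium and high classes using the
    a priori thresholds mu and mu + 2*sigma from the calibration data."""
    mu, sigma = flow.mean(), flow.std()
    low = flow < mu
    high = flow > mu + 2.0 * sigma
    medium = ~(low | high)
    return {"low": flow[low], "medium": flow[medium], "high": flow[high]}, mu, sigma

# Synthetic daily flows standing in for the calibration period (illustrative only).
rng = np.random.default_rng(7)
flows = rng.lognormal(mean=3.0, sigma=0.6, size=3650)
parts, mu, sigma = partition_flows(flows)
print({k: v.size for k, v in parts.items()}, f"mu={mu:.1f}", f"mu+2s={mu + 2 * sigma:.1f}")
# The "high" subset would then be used to train a dedicated ANN for extreme flows.
```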
Short-term carbon partitioning fertilizer responses vary among two full-sib loblolly pine clones
Jeremy P. Stovall; John R. Seiler; Thomas R. Fox
2012-01-01
We investigated the effects of fertilizer application on the partitioning of gross primary productivity (GPP) between contrasting full-sib clones of Pinus taeda (L.). Our objective was to determine if fertilizer growth responses resulted from similar short-term changes to partitioning. A modeling approach incorporating respiratory carbon (C) fluxes,...
NASA Astrophysics Data System (ADS)
Cartier, Camille; Hammouda, Tahar; Doucelance, Régis; Boyet, Maud; Devidal, Jean-Luc; Moine, Bertrand
2014-04-01
In order to investigate the influence of very reducing conditions, we report enstatite-melt trace element partition coefficients (D) obtained on enstatite chondrite material at 5 GPa and under oxygen fugacities (fO2) ranging between 0.8 and 8.2 log units below the iron-wustite (IW) buffer. Experiments were conducted in a multianvil apparatus between 1580 and 1850 °C, using doped (Sc, V, REE, HFSE, U, Th) starting materials. We used a two-site lattice strain model and a Monte-Carlo-type approach to model experimentally determined partition coefficient data. The model can fit our partitioning data, i.e. trace element repartition in enstatite, which provides evidence for the attainment of equilibrium in our experiments. The precision on the lattice strain model parameters obtained from modelling does not enable determination of the influence of intensive parameters on crystal chemical partitioning, within our range of conditions (fO2, P, T, composition). We document the effect of variable oxygen fugacity on the partitioning of multivalent elements. Cr and V, which are trivalent in the pyroxene at around IW - 1, are reduced to the 2+ state with increasingly reducing conditions, thus affecting their partition coefficients. In our range of redox conditions, Ti is always present as a mixture of 4+ and 3+ states. However, the Ti3+-Ti4+ ratio increases strongly with increasingly reducing conditions. Moreover, in highly reducing conditions, Nb and Ta, which are usually pentavalent in magmatic systems, appear to be reduced to lower valence species, which may be Nb2+ and Ta3+. We propose a new proxy for fO2 based on D(Cr)/D(V). Our new data extend the redox range covered by previous studies and allow this proxy to be used across the whole range of redox conditions of solar system objects. We selected trace-element literature data for six chondrules on the criterion of their equilibrium. Applying the proxy to opx-matrix systems, we estimated that three type I chondrules have equilibrated at IW - 7 ± 1, one type I chondrule at IW - 4 ± 1, and two type II chondrules at IW + 3 ± 1. This first accurate estimation of enstatite-melt fO2 for type I chondrules is very close to CAI values.
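For context, the lattice strain approach fits partition coefficients to the Blundy-Wood expression D_i = D0 exp{-4πE N_A [r0/2 (r_i - r0)^2 + 1/3 (r_i - r0)^3]/(RT)}; the short sketch below evaluates it for a single site with placeholder parameters, not the fitted values of this study.

```python
import numpy as np

N_A, R = 6.022e23, 8.314          # Avogadro constant (1/mol), gas constant (J/mol/K)

def lattice_strain_D(r_i, D0, r0, E_gpa, T):
    """Blundy-Wood lattice strain model: partition coefficient of a cation of
    radius r_i (Angstrom) entering a site with optimum radius r0 (Angstrom),
    apparent Young's modulus E (GPa) and temperature T (K)."""
    dr = (r_i - r0) * 1e-10                      # Angstrom -> m
    r0_m = r0 * 1e-10
    strain = 4.0 * np.pi * E_gpa * 1e9 * N_A * (r0_m / 2.0 * dr**2 + dr**3 / 3.0)
    return D0 * np.exp(-strain / (R * T))

# Illustrative parameters for trivalent cations on a pyroxene M1-like site
# (placeholders, not the fitted values of the study). Radii are VI-fold Shannon radii.
radii = {"Sc3+": 0.745, "Lu3+": 0.861, "Gd3+": 0.938, "La3+": 1.032}
for name, r in radii.items():
    print(name, round(lattice_strain_D(r, D0=1.2, r0=0.70, E_gpa=250.0, T=1900.0), 4))
```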
An in situ approach to study trace element partitioning in the laser heated diamond anvil cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petitgirard, S.; Mezouar, M.; Borchert, M.
2012-01-15
Data on the partitioning behavior of elements between different phases at in situ conditions are crucial for the understanding of element mobility, especially for geochemical studies. Here, we present results of in situ partitioning of trace elements (Zr, Pd, and Ru) between silicate and iron melts, up to 50 GPa and 4200 K, using a modified laser heated diamond anvil cell (DAC). This new experimental setup allows simultaneous collection of x-ray fluorescence (XRF) and x-ray diffraction (XRD) data as a function of time using the high pressure beamline ID27 (ESRF, France). The technique enables the simultaneous detection of sample melting, based on the appearance of diffuse scattering in the XRD pattern, characteristic of the structure factor of liquids, and measurements of elemental partitioning of the sample using XRF, before, during and after laser heating in the DAC. We were able to detect element concentrations as low as the few-ppm level (2-5 ppm) in standard solutions. In situ measurements are complemented by mapping of the chemical partitioning of the trace elements after laser heating on the quenched samples to constrain the partitioning data. Our first results indicate a strong partitioning of Pd and Ru into the metallic phase, while Zr remains clearly incompatible with iron. This novel approach extends the pressure and temperature range of partitioning experiments derived from quenched samples from large volume presses and could bring new insight to the early history of Earth.
Fisicaro, E; Braibanti, A; Lamb, J D; Oscarson, J L
1990-05-01
The relationships between the chemical properties of a system and the partition function algorithm as applied to the description of multiple equilibria in solution are explained. The partition functions Z_M, Z_A, and Z_H are obtained from powers of the binary generating functions J_j = (1 + κ_j γ_{j,i} [Y])^(iτ_j), where iτ_j = pτ_j, qτ_j, or rτ_j represents the maximum number of sites in class j, for Y = M, A, or H, respectively. Each term of the generating function can be considered an element (i_j) of a vector J_j, and each power of the cooperativity factor γ_{j,i} can be considered an element of a diagonal cooperativity matrix Γ_j. The vectors J_j are combined in tensor product matrices L_τ = J_1 ⊗ J_2 ⊗ ... ⊗ J_j ⊗ ..., thus representing different receptor-ligand combinations. The partition functions are obtained by summing elements of the tensor matrices. The relationship of the partition functions with the total chemical amounts T_M, T_A, and T_H has been found. The aim is to describe the total chemical amounts T_M, T_A, and T_H as functions of the site affinity constants κ_j and cooperativity coefficients b_j. The total amounts are calculated from the sum of elements of the tensor matrices L_τ. Each set of indices (p_j..., q_j..., r_j...) represents one element of a tensor matrix L_τ and defines each term of the summation. Each term corresponds to the concentration of a chemical microspecies. The distinction between microspecies M_{pj}A_{qj}H_{rj} with ligands bound on specific sites and macrospecies M_pA_qH_r corresponding to a chemical stoichiometric composition is shown. The translation of the properties of chemical model schemes into the algorithms for the generation of partition functions is illustrated with reference to a series of examples of gradually increasing complexity. The equilibria examined concern: (1) a unique class of sites; (2) the protonation of a base with two classes of sites; (3) the simultaneous binding of ligand A and proton H to a macromolecule or receptor M with four classes of sites; and (4) the binding to a macromolecule M of ligand A which is in turn a receptor for proton H. With reference to a specific example, it is shown how a computer program for least-squares refinement of the variables κ_j and b_j can be organized. The chemical model, from the free components M, A, and H to the saturated macrospecies M_pA_qH_r, with possible complex macrospecies M_pA_q and AH_r, is defined first.(ABSTRACT TRUNCATED AT 250 WORDS)
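A minimal single-class illustration of the generating-function idea, with independent or weakly cooperative sites, is sketched below; it is a toy version of the formalism, not the full tensor-product construction described in the abstract, and all numbers are illustrative.

```python
import numpy as np
from math import comb

def species_fractions(kappa, free_ligand, n_sites, gamma=1.0):
    """Populations of the microspecies M·A_i for one class of n_sites sites with
    site constant kappa and a simple cooperativity factor gamma applied per pair
    of occupied sites (gamma = 1 means independent sites). The binding polynomial
    (partition function) is the sum of all terms."""
    terms = np.array([comb(n_sites, i) * (kappa * free_ligand) ** i
                      * gamma ** (i * (i - 1) / 2) for i in range(n_sites + 1)])
    Z = terms.sum()
    return terms / Z, Z

# Illustrative numbers: 4 equivalent sites, kappa = 1e4 M^-1, [A]free = 2e-4 M.
fracs, Z = species_fractions(kappa=1.0e4, free_ligand=2.0e-4, n_sites=4, gamma=1.0)
for i, f in enumerate(fracs):
    print(f"M·A_{i}: {f:.3f}")
print("average number of bound ligands:", round(sum(i * f for i, f in enumerate(fracs)), 3))
```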
NASA Astrophysics Data System (ADS)
Padrón, Ryan S.; Gudmundsson, Lukas; Greve, Peter; Seneviratne, Sonia I.
2017-11-01
The long-term surface water balance over land is described by the partitioning of precipitation (P) into runoff and evapotranspiration (ET), and is commonly characterized by the ratio ET/P. The ratio between potential evapotranspiration (PET) and P is explicitly considered to be the primary control of ET/P within the Budyko framework, whereas all other controls are often integrated into a single parameter, ω. Although the joint effect of these additional controlling factors of ET/P can be significant, a detailed understanding of them is yet to be achieved. This study therefore introduces a new global data set for the long-term mean partitioning of P into ET and runoff in 2,733 catchments, which is based on in situ observations and assembled from a systematic examination of peer-reviewed studies. A total of 26 controls of ET/P that are proposed in the literature are assessed using the new data set. Results reveal that: (i) factors controlling ET/P vary between regions with different climate types; (ii) controls other than PET/P explain at least 35% of the ET/P variance in all regions, and up to ˜90% in arid climates; (iii) among these, climate factors and catchment slope dominate over other landscape characteristics; and (iv) despite the high attention that vegetation-related indices receive as controls of ET/P, they are found to play a minor and often nonsignificant role. Overall, this study provides a comprehensive picture on factors controlling the partitioning of P, with valuable insights for model development, watershed management, and the assessment of water resources around the globe.
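One widely used single-parameter form of the Budyko framework, in which all controls other than PET/P are folded into ω, is Fu's equation ET/P = 1 + PET/P - [1 + (PET/P)^ω]^(1/ω); the sketch below evaluates it for illustrative ω values, which are assumptions rather than values fitted to the new catchment data set.

```python
import numpy as np

def evaporative_index_fu(aridity, omega):
    """Fu's single-parameter Budyko curve: ET/P as a function of the aridity
    index PET/P and the catchment parameter omega (omega >= 1)."""
    return 1.0 + aridity - (1.0 + aridity ** omega) ** (1.0 / omega)

aridity = np.array([0.5, 1.0, 2.0, 4.0])      # PET/P values spanning humid to arid
for w in (1.8, 2.6, 3.6):                     # illustrative omega values, not fitted ones
    print(f"omega={w}:", np.round(evaporative_index_fu(aridity, w), 2))
# Larger omega means a larger fraction of P goes to ET at the same aridity,
# which is how the non-PET/P controls discussed above enter the framework.
```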
Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua
2013-01-01
Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies start partitioning links to find overlapping communities straightforwardly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem into an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks. PMID:24386268
Live cell interferometry quantifies dynamics of biomass partitioning during cytokinesis.
Zangle, Thomas A; Teitell, Michael A; Reed, Jason
2014-01-01
The equal partitioning of cell mass between daughters is the usual and expected outcome of cytokinesis for self-renewing cells. However, most studies of partitioning during cell division have focused on daughter cell shape symmetry or segregation of chromosomes. Here, we use live cell interferometry (LCI) to quantify the partitioning of daughter cell mass during and following cytokinesis. We use adherent and non-adherent mouse fibroblast and mouse and human lymphocyte cell lines as models and show that, on average, mass asymmetries present at the time of cleavage furrow formation persist through cytokinesis. The addition of multiple cytoskeleton-disrupting agents leads to increased asymmetry in mass partitioning which suggests the absence of active mass partitioning mechanisms after cleavage furrow positioning.
MODFLOW-CDSS, a version of MODFLOW-2005 with modifications for Colorado Decision Support Systems
Banta, Edward R.
2011-01-01
MODFLOW-CDSS is a three-dimensional, finite-difference groundwater-flow model based on MODFLOW-2005, with two modifications. The first modification is the introduction of a Partition Stress Boundaries capability, which enables the user to partition a selected subset of MODFLOW's stress-boundary packages, with each partition defined by a separate input file. Volumetric water-budget components of each partition are tracked and listed separately in the volumetric water-budget tables. The second modification enables the user to specify that execution of a simulation should continue despite failure of the solver to satisfy convergence criteria. This modification is particularly intended to be used in conjunction with automated model-analysis software; its use is not recommended for other purposes.
Novel naïve Bayes classification models for predicting the chemical Ames mutagenicity.
Zhang, Hui; Kang, Yan-Li; Zhu, Yuan-Yuan; Zhao, Kai-Xia; Liang, Jun-Yu; Ding, Lan; Zhang, Teng-Guo; Zhang, Ji
2017-06-01
Prediction of the mutagenicity of drug candidates is a regulatory requirement since mutagenic compounds could pose a toxic risk to humans. The aim of this investigation was to develop a novel prediction model of mutagenicity by using a naïve Bayes classifier. The established model was validated by internal 5-fold cross validation and external test sets. For comparison, a recursive partitioning classifier prediction model was also established and various other reported prediction models of mutagenicity were collected. Among these methods, the naïve Bayes classifier established here performed well and was stable, yielding average overall prediction accuracies of 89.1±0.4% for the internal 5-fold cross validation of the training set and 77.3±1.5% for external test set I. The concordance of the external test set II with 446 marketed drugs was 90.9±0.3%. In addition, four simple molecular descriptors (e.g., Apol, No. of H donors, Num-Rings and Wiener) related to mutagenicity and five representative substructures of mutagens (e.g., aromatic nitro, hydroxyl amine, nitroso, aromatic amine and N-methyl-N-methylenemethanaminum) produced by ECFP_14 fingerprints were identified. We hope the established naïve Bayes prediction model can be applied to risk assessment processes, and that the important information obtained on mutagenic chemicals can guide the design of chemical libraries for hit and lead optimization. Copyright © 2017 Elsevier B.V. All rights reserved.
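As a schematic of the modeling setup (not a reproduction of the study, which used curated Ames data and ECFP-style fingerprints), a Bernoulli naïve Bayes classifier can be cross-validated on binary substructure features as follows; the data here are synthetic placeholders.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

# Synthetic binary "fingerprint" bits standing in for ECFP-style features;
# the labels pretend that a few substructure bits drive mutagenicity.
rng = np.random.default_rng(0)
n_samples, n_bits = 400, 128
X = rng.integers(0, 2, size=(n_samples, n_bits))
informative = X[:, :5].sum(axis=1)
y = (informative + rng.normal(0, 0.8, n_samples) > 2.5).astype(int)

clf = BernoulliNB(alpha=1.0)                         # Laplace smoothing
scores = cross_val_score(clf, X, y, cv=5)            # 5-fold cross validation, as in the study design
print("mean 5-fold accuracy:", scores.mean().round(3))
```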
NASA Astrophysics Data System (ADS)
Li, Wei; Shen, Guofeng; Yuan, Chenyi; Wang, Chen; Shen, Huizhong; Jiang, Huai; Zhang, Yanyan; Chen, Yuanchen; Su, Shu; Lin, Nan; Tao, Shu
2016-05-01
The gas/particle partitioning of nitro-polycyclic aromatic hydrocarbons (nPAHs) and oxy-PAHs (oPAHs) is pivotal to estimate their environmental fate. Simultaneously measured atmospheric concentrations of nPAHs and oPAHs in both gaseous and particulate phases at 18 sites in northern China make it possible to investigate their partitioning process in a large region. The gas/particle partitioning coefficients (Kp) in this study were higher than those measured in the emission exhausts. The Kp for most individual nPAHs was higher than those for their corresponding parent PAHs. Generally higher Kp values were found at rural field sites compared to values in the rural villages and cities. Temperature, subcooled liquid-vapor pressure (Pl0) and octanol-air partition coefficient (Koa) were all significantly correlated with Kp. The slope values between log Kp and log Pl0, ranging from - 0.54 to - 0.34, indicate that the equilibrium of gas/particle partitioning might not be reached, which could be also revealed from a positive correlation between log Kp and particulate matter (PM) concentrations. Underestimation commonly exists in all three partitioning models, but the predicted values of Kp from the dual model are closer to the measured Kp for derivative PAHs in northern China.
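The quantities discussed here follow the usual definitions Kp = (Cp/PM)/Cg, together with the log Kp versus log Pl0 regression whose slope is diagnostic of equilibrium partitioning; the sketch below uses synthetic numbers purely to illustrate the calculation, not the measured data from the 18 sites.

```python
import numpy as np
from scipy.stats import linregress

def gas_particle_Kp(c_particle, c_gas, pm):
    """Gas/particle partitioning coefficient Kp = (Cp / PM) / Cg, with Cp and Cg
    in the same concentration units and PM the particulate matter concentration."""
    return (c_particle / pm) / c_gas

# Synthetic compound set: log Kp is assumed to fall roughly linearly with log Pl0,
# with scatter; the numbers are illustrative, not the measured data.
rng = np.random.default_rng(3)
log_pl0 = rng.uniform(-6.0, -1.0, size=40)             # subcooled liquid vapour pressure (Pa)
log_kp = -0.45 * log_pl0 - 4.0 + rng.normal(0, 0.3, 40)

fit = linregress(log_pl0, log_kp)
print(f"slope = {fit.slope:.2f}; a slope shallower than -1 suggests non-equilibrium partitioning")
```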
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data. © 2017 Yufei Gao et al.
Handling Data Skew in MapReduce Cluster by Using Partition Tuning.
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data.
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Zhou, Yanjie; Zhou, Bing; Shi, Lei
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data. PMID:29065568
Menéndez, Cammie Chaumont; Amandus, Harlan; Damadi, Parisa; Wu, Nan; Konda, Srinivas; Hendricks, Scott
2014-05-01
Driving a taxicab remains one of the most dangerous occupations in the United States, with leading homicide rates. Although safety equipment designed to reduce robberies exists, it is not clear what effect it has on reducing taxicab driver homicides. Taxicab driver homicide crime reports for 1996 through 2010 were collected from 20 of the largest cities (>200,000) in the United States: 7 cities with cameras installed in cabs, 6 cities with partitions installed, and 7 cities with neither cameras nor partitions. Poisson regression modeling using generalized estimating equations provided city taxicab driver homicide rates while accounting for serial correlation and clustering of data within cities. Two separate models were constructed to compare (1) cities with cameras installed in taxicabs versus cities with neither cameras nor partitions and (2) cities with partitions installed in taxicabs versus cities with neither cameras nor partitions. Cities with cameras installed in cabs experienced a significant reduction in homicides after cameras were installed (adjRR = 0.11, CL 0.06-0.24) and compared to cities with neither cameras nor partitions (adjRR = 0.32, CL 0.15-0.67). Cities with partitions installed in taxicabs experienced a reduction in homicides (adjRR = 0.78, CL 0.41-1.47) compared to cities with neither cameras nor partitions, but it was not statistically significant. The findings suggest cameras installed in taxicabs are highly effective in reducing homicides among taxicab drivers. Although not statistically significant, the findings suggest partitions installed in taxicabs may be effective.
NASA Astrophysics Data System (ADS)
Pepiot, Perrine; Liang, Youwen; Newale, Ashish; Pope, Stephen
2016-11-01
A pre-partitioned adaptive chemistry (PPAC) approach recently developed and validated in the simplified framework of a partially-stirred reactor is applied to the simulation of turbulent flames using a LES/particle PDF framework. The PPAC approach was shown to simultaneously provide significant savings in CPU and memory requirements, two major limiting factors in LES/particle PDF. The savings are achieved by providing each particle in the PDF method with a specialized reduced representation and kinetic model adjusted to its changing composition. Both representation and model are identified efficiently from a pre-determined list using a low-dimensional binary-tree search algorithm, thereby keeping the run-time overhead associated with the adaptive strategy to a minimum. The Sandia D flame is used as benchmark to quantify the performance of the PPAC algorithm in a turbulent combustion setting. In particular, the CPU and memory benefits, the distribution of the various representations throughout the computational domain, and the relationship between the user-defined error tolerances used to derive the reduced representations and models and the actual errors observed in LES/PDF are characterized. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences under Award Number DE-FG02-90ER14128.
Adaptively loaded IM/DD optical OFDM based on set-partitioned QAM formats.
Zhao, Jian; Chen, Lian-Kuan
2017-04-17
We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256(SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
Missel, P J
2000-01-01
Four methods are proposed for modeling diffusion in heterogeneous media where diffusion and partition coefficients take on differing values in each subregion. The exercise was conducted to validate finite element modeling (FEM) procedures in anticipation of modeling drug diffusion with regional partitioning into ocular tissue, though the approach can be useful for other organs, or for modeling diffusion in laminate devices. Partitioning creates a discontinuous value in the dependent variable (concentration) at an intertissue boundary that is not easily handled by available general-purpose FEM codes, which allow for only one value at each node. The discontinuity is handled using a transformation on the dependent variable based upon the region-specific partition coefficient. Methods were evaluated by their ability to reproduce a known exact result, for the problem of the infinite composite medium (Crank, J. The Mathematics of Diffusion, 2nd ed. New York: Oxford University Press, 1975, pp. 38-39.). The most physically intuitive method is based upon the concept of chemical potential, which is continuous across an interphase boundary (method III). This method makes the equation of the dependent variable highly nonlinear. This can be linearized easily by a change of variables (method IV). Results are also given for a one-dimensional problem simulating bolus injection into the vitreous, predicting time disposition of drug in vitreous and retina.
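A minimal sketch of the change-of-variables idea (method IV) is given below: the solver advances u = C/K, which is continuous across the interface, on a 1-D two-region domain with illustrative diffusion and partition coefficients, and the solute accumulates in the high-K region at equilibrium. The numbers and geometry are placeholders, not the ocular-tissue case treated in the paper.

```python
import numpy as np

# 1-D diffusion across an interface with region-wise diffusion (D) and partition (K)
# coefficients. The solver works with u = C / K, which is continuous across the
# interface, and recovers C = K * u at the end.
nx, L = 60, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
region2 = x > 0.5                         # second "tissue" region

D = np.where(region2, 0.2, 1.0)           # diffusivities (illustrative units)
K = np.where(region2, 5.0, 1.0)           # partition coefficients relative to region 1
w = D * K                                 # conductivity of the u-equation: K du/dt = d/dx(w du/dx)

# Face conductivities via harmonic means (conservative flux across the interface).
w_face = 2.0 * w[:-1] * w[1:] / (w[:-1] + w[1:])

C = np.where(region2, 0.0, 1.0)           # all solute initially in region 1
u = C / K
dt = 0.2 * dx**2 * K.min() / w.max()      # explicit stability limit with margin

for _ in range(int(2.0 / dt)):            # march to a late time with no-flux boundaries
    flux = w_face * (u[1:] - u[:-1]) / dx
    u[1:-1] += dt / K[1:-1] * (flux[1:] - flux[:-1]) / dx
    u[0] += dt / K[0] * flux[0] / dx
    u[-1] -= dt / K[-1] * flux[-1] / dx

C = K * u
print("mean C in region 1:", C[~region2].mean().round(3))   # about 0.167
print("mean C in region 2:", C[region2].mean().round(3))    # about 0.833 (= K * 0.167)
```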
Efficient partitioning and assignment on programs for multiprocessor execution
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1993-01-01
The general problem studied is that of segmenting or partitioning programs for distribution across a multiprocessor system. Efficient partitioning and the assignment of program elements are of great importance since the time consumed in this overhead activity may easily dominate the computation, effectively eliminating any gains made by the use of the parallelism. In this study, the partitioning of sequentially structured programs (written in FORTRAN) is evaluated. Heuristics, developed for similar applications are examined. Finally, a model for queueing networks with finite queues is developed which may be used to analyze multiprocessor system architectures with a shared memory approach to the problem of partitioning. The properties of sequentially written programs form obstacles to large scale (at the procedure or subroutine level) parallelization. Data dependencies of even the minutest nature, reflecting the sequential development of the program, severely limit parallelism. The design of heuristic algorithms is tied to the experience gained in the parallel splitting. Parallelism obtained through the physical separation of data has seen some success, especially at the data element level. Data parallelism on a grander scale requires models that accurately reflect the effects of blocking caused by finite queues. A model for the approximation of the performance of finite queueing networks is developed. This model makes use of the decomposition approach combined with the efficiency of product form solutions.
Acceleration of Binding Site Comparisons by Graph Partitioning.
Krotzky, Timo; Klebe, Gerhard
2015-08-01
The comparison of protein binding sites is a prominent task in computational chemistry and has been studied in many different ways. For the automatic detection and comparison of putative binding cavities, the Cavbase system has been developed, which uses a coarse-grained set of pseudocenters to represent the physicochemical properties of a binding site and employs a graph-based procedure to calculate similarities between two binding sites. However, the comparison of two graphs is computationally quite demanding, which makes large-scale studies such as the rapid screening of entire databases hardly feasible. In a recent work, we proposed the method Local Cliques (LC) for the efficient comparison of Cavbase binding sites. It employs a clique heuristic to detect the maximum common subgraph of two binding sites and an extended graph model to additionally compare the shape of individual surface patches. In this study, we present an alternative to further accelerate the LC method by partitioning the binding-site graphs into disjoint components prior to their comparisons. The pseudocenter sets are split with regard to their assigned physicochemical type, which leads to seven much smaller graphs than the original one. Applying this approach to the same test scenarios as the original, comprehensive comparison results in a significant speed-up without sacrificing accuracy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
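A rough illustration of the partitioning step only (the data layout and the distance cutoff below are assumptions, not Cavbase's data model): the pseudocenter set is split by physicochemical type before any graph matching, so each type yields its own, much smaller graph.

```python
from collections import defaultdict
from itertools import combinations
import math

# Hypothetical distance threshold for connecting pseudocenters into graph edges (Angstrom).
EDGE_CUTOFF = 11.0

def split_by_type(pseudocenters):
    """pseudocenters: iterable of (ptype, (x, y, z)).
    Returns {ptype: (coords, edges)}, one small graph per physicochemical type."""
    groups = defaultdict(list)
    for ptype, xyz in pseudocenters:
        groups[ptype].append(xyz)
    subgraphs = {}
    for ptype, coords in groups.items():
        edges = [(i, j) for i, j in combinations(range(len(coords)), 2)
                 if math.dist(coords[i], coords[j]) <= EDGE_CUTOFF]
        subgraphs[ptype] = (coords, edges)
    return subgraphs

centers = [("donor", (0.0, 0.0, 0.0)), ("donor", (3.0, 1.0, 0.0)),
           ("acceptor", (10.0, 0.0, 0.0)), ("aromatic", (5.0, 5.0, 5.0))]
print({t: len(e) for t, (c, e) in split_by_type(centers).items()})
```

Each per-type subgraph can then be compared independently, which is where the speed-up over matching one large mixed graph comes from.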
Partitioning-based mechanisms under personalized differential privacy.
Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian
2017-05-01
Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms.
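A minimal sketch of the underlying idea, not the paper's mechanisms: for a count query, users can be sorted by their personal epsilon and split into a few groups, each group answered at its smallest epsilon (so no individual's budget is exceeded); the unused budget in each group is the "waste" that privacy-aware partitioning tries to minimize.

```python
import numpy as np

rng = np.random.default_rng(0)

def personalized_dp_count(epsilons, k=3):
    """Count query under personalized DP via a simple contiguous partition by epsilon."""
    order = np.argsort(epsilons)
    parts = np.array_split(order, k)
    total, waste = 0.0, 0.0
    for part in parts:
        eps_grp = epsilons[part].min()                   # conservative budget for the group
        total += len(part) + rng.laplace(scale=1.0 / eps_grp)   # count query has sensitivity 1
        waste += float(np.sum(epsilons[part] - eps_grp))          # unused personal budget
    return total, waste

eps = np.array([0.1, 0.1, 0.5, 0.5, 1.0, 1.0, 2.0, 2.0])
print(personalized_dp_count(eps, k=3))
```

Choosing k (and where to cut) trades noise against waste, which is the optimization the paper's privacy-aware partitioning addresses more carefully.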
Partitioning-based mechanisms under personalized differential privacy
Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian
2017-01-01
Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms. PMID:28932827
NASA Astrophysics Data System (ADS)
Nielsen, R. L.; Ghiorso, M. S.; Trischman, T.
2015-12-01
The database traceDs is designed to provide a transparent and accessible resource of experimental partitioning data. It now includes ~90% of all the experimental trace element partitioning data (~4000 experiments) produced over the past 45 years, and is accessible through a web-based interface (using the portal lepr.ofm-research.org). We set a minimum standard for inclusion, with the threshold criteria being the inclusion of: experimental conditions (temperature, pressure, device, container, time, etc.); major element compositions of the phases; and trace element analyses of the phases. Data sources that did not report these minimum components were not included. The rationale for excluding such data is that the degree of equilibration is unknown and, more importantly, no rigorous approach to modeling the behavior of trace elements is possible without knowledge of the composition of the phases and the temperature and pressure of formation/equilibration. The data are stored using a schema derived from that of the Library of Experimental Phase Relations (LEPR), modified to account for additional metadata, and restructured to permit multiple analytical entries for various element/technique/standard combinations. In the process of populating the database, we have learned a number of things about the existing published experimental partitioning data. Most important are the following: ~20% of the papers do not satisfy one or more of the threshold criteria; the standard format for presenting data is the average, a convention developed when publication space was constrained, even though all the information can now be published as electronic supplements; and the uncertainties published with the compositional data are often not adequately explained (e.g., 1 or 2 sigma, standard deviation of the average, etc.). We propose a new set of publication standards for experimental data that include the minimum criteria described above, the publication of all analyses with errors based on peak count rates and background, plus information on the structural state of the mineral (e.g., orthopyroxene vs. pigeonite).
FDATMOS16 non-linear partitioning and organic volatility distributions in urban aerosols
Madronich, Sasha; Kleinman, Larry; Conley, Andrew; ...
2015-12-17
Gas-to-particle partitioning of organic aerosols (OA) is represented in most models by Raoult’s law, and depends on the existing mass of particles into which organic gases can dissolve. This raises the possibility of non-linear response of particle-phase OA to the emissions of precursor volatile organic compounds (VOCs) that contribute to this partitioning mass. Implications for air quality management are evident: A strong non-linear dependence would suggest that reductions in VOC emission would have a more-than-proportionate benefit in lowering ambient OA concentrations. Chamber measurements on simple VOC mixtures generally confirm the non-linear scaling between OA and VOCs, usually stated as a mass-dependence of the measured OA yields. However, for realistic ambient conditions including urban settings, no single component dominates the composition of the organic particles, and deviations from linearity are presumed to be small. Here we re-examine the linearity question using volatility spectra from several sources: (1) chamber studies of selected aerosols, (2) volatility inferred for aerosols sampled in two megacities, Mexico City and Paris, and (3) an explicit chemistry model (GECKO-A). These few available volatility distributions suggest that urban OA may be only slightly super-linear, with most values of the sensitivity exponent in the range 1.1-1.3, and substantially lower than seen in chambers for some specific aerosols. Furthermore, the rather low values suggest that OA concentrations in megacities are not an inevitable convergence of non-linear effects, but can be addressed (much like in smaller urban areas) by proportionate reductions in emissions.
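The non-linearity in question follows directly from absorptive (Raoult's-law) partitioning. A small sketch with an illustrative volatility-basis-set distribution (not the chamber or megacity spectra from the study) shows how a 10% increase in precursor mass can yield a more-than-10% increase in particle-phase OA:

```python
import numpy as np

def particle_mass(c_tot, c_star, n_iter=200):
    """Self-consistent absorptive partitioning: C_OA = sum_i C_tot,i / (1 + C*_i / C_OA)."""
    c_oa = c_tot.sum()                                   # initial guess: everything condensed
    for _ in range(n_iter):
        c_oa = np.sum(c_tot / (1.0 + c_star / c_oa))     # fixed-point iteration
    return c_oa

c_star = np.array([0.1, 1.0, 10.0, 100.0])   # saturation concentrations (ug/m3), illustrative bins
c_tot = np.array([1.0, 2.0, 4.0, 8.0])        # total gas + particle mass per bin (ug/m3), illustrative

base = particle_mass(c_tot, c_star)
scaled = particle_mass(1.1 * c_tot, c_star)   # 10% more precursor mass in every bin
sensitivity = np.log(scaled / base) / np.log(1.1)
print(base, scaled, sensitivity)              # sensitivity exponent comes out greater than 1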
NASA Astrophysics Data System (ADS)
Smith, A. A.; Welch, C.; Stadnyk, T. A.
2018-05-01
Evapotranspiration (ET) partitioning is a growing field of research in hydrology due to the significant fraction of watershed water loss it represents. The use of tracer-aided models has improved understanding of watershed processes, and has significant potential for identifying time-variable partitioning of evaporation (E) from ET. A tracer-aided model was used to establish a time-series of E/ET using differences in riverine δ18O and δ2H in four northern Canadian watersheds (lower Nelson River, Manitoba, Canada). On average E/ET follows a parabolic trend ranging from 0.7 in the spring and autumn to 0.15 (three watersheds) and 0.5 (fourth watershed) during the summer growing season. In the fourth watershed wetlands and shrubs dominate land cover. During the summer, E/ET ratios are highest in wetlands for three watersheds (10% higher than unsaturated soil storage), while lowest for the fourth watershed (20% lower than unsaturated soil storage). Uncertainty of the ET partition parameters is strongly influenced by storage volumes, with large storage volumes increasing partition uncertainty. In addition, higher simulated soil moisture increases estimated E/ET. Although unsaturated soil storage accounts for larger surface areas in these watersheds than wetlands, riverine isotopic composition is more strongly affected by E from wetlands. Comparisons of E/ET to measurement-intensive studies in similar ecoregions indicate that the methodology proposed here adequately partitions ET.
Evaluating abundance and trends in a Hawaiian avian community using state-space analysis
Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.
2016-01-01
Estimating population abundances and patterns of change over time are important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate population trajectory. However, changes in abundance estimates from year-to-year across time are due to both true variation in population size (process variation) and variation due to imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates provide a better representation of the real world biological processes of interest because they are partitioning process variation (environmental and demographic variation) and observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.
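A hedged sketch of the kind of model described above, using synthetic data rather than the Hakalau counts: a local-linear-trend state-space model splits year-to-year variation in (log) abundance into process variance (level and slope) and observation variance, and the smoothed slope is the trend based on process variation only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = np.arange(1990, 2015)
# Synthetic log-abundance: slow decline plus process noise, observed with sampling error.
true_level = 5.0 - 0.02 * (years - years[0]) + rng.normal(0, 0.05, years.size).cumsum()
y = true_level + rng.normal(0, 0.15, years.size)

mod = sm.tsa.UnobservedComponents(y, level='local linear trend')
res = mod.fit(disp=False)

print(res.params)                    # sigma2.irregular (observation) vs sigma2.level / sigma2.trend (process)
print(res.smoothed_state[1].mean())  # mean smoothed slope: the trend after removing observation noise
```

The partitioning shows up in the fitted variances: the observation variance absorbs sampling error, so the trend estimate reflects the biological process rather than survey noise, at the cost of wider credible intervals.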
Several models were used to describe the partitioning of ammonia, water, and organic compounds between the gas and particle phases for conditions in the southeastern US during summer 2013. Existing equilibrium models and frameworks were found to be sufficient, although additional...
Solubility properties of siloxane polymers for chemical sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grate, J.W.; Abraham, M.H.
1995-05-01
This paper discusses the factors governing the sorption of vapors by organic polymers. The principles have been applied in the past for designing and selecting polymers for acoustic wave sensors; however, they apply equally well to the sorption of vapors by polymers used on optical chemical sensors. A set of solvation parameters (a table is presented for various organic vapors) has been developed that describes the particular solubility properties of individual solute molecules; they are used in linear solvation energy relationships (LSER) that model the sorption process. LSER coefficients are tabulated for five polysiloxanes; so are individual interaction terms for each of the 5 polymers. Dispersion interactions play a major role in determining overall partition coefficients; the log L^16 (gas-liquid partition coefficient of the solute on hexadecane) values of vapors are important in determining overall sorption. For the detection of basic vapors such as organophosphates, a hydrogen-bond acidic polymer will be most effective at sorbing them. Currently, fiber optic sensors are being developed where the cladding serves as a sorbent layer to collect and concentrate analyte vapors, which will be detected and identified spectroscopically. These solubility models will be used to design the polymers for the cladding for particular vapors.
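The LSER form referred to above can be written as log K = c + r·R2 + s·pi2H + a·alpha2H + b·beta2H + l·log L16. A small sketch is given below; the coefficient values are illustrative placeholders, not the tabulated polysiloxane coefficients from the paper.

```python
# Hedged sketch of an LSER evaluation. Coefficients (c, r, s, a, b, l) characterize the
# polymer; the remaining arguments are solvation parameters of the vapor.
def lser_log_k(R2, pi2H, alpha2H, beta2H, logL16,
               c=-0.2, r=0.1, s=0.6, a=1.5, b=0.3, l=0.85):
    return c + r * R2 + s * pi2H + a * alpha2H + b * beta2H + l * logL16

# Example: a hypothetical vapor with modest polarity and logL16 = 3.5
print(lser_log_k(R2=0.2, pi2H=0.4, alpha2H=0.0, beta2H=0.1, logL16=3.5))
```

The dominance of the l·log L16 term for most vapors is what the abstract means by dispersion interactions controlling the overall partition coefficient.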
The oceanic origin of path-independent carbon budgets.
MacDougall, Andrew H
2017-09-04
Virtually all Earth system models (ESM) show a near proportional relationship between cumulative emissions of CO2 and change in global mean temperature, a relationship which is independent of the emissions pathway taken to reach a cumulative emissions total. The relationship, which has been named the Transient Climate Response to Cumulative CO2 Emissions (TCRE), gives rise to the concept of a 'carbon budget'. That is, a finite amount of carbon that can be burnt whilst remaining below some chosen global temperature change threshold, such as the 2.0 °C target set by the Paris Agreement. Here we show that the path-independence of TCRE arises from the partitioning ratio of anthropogenic carbon between the ocean and the atmosphere being almost the same as the partitioning ratio of enhanced radiative forcing between the ocean and space. That these ratios are so close in value is a coincidence unique to CO2. The simple model used here is underlain by many assumptions and simplifications but does reproduce key aspects of the climate system relevant to the path-independence of carbon budgets. Our results place TCRE and carbon budgets on firm physical foundations and therefore help validate the use of these metrics for climate policy.
Intracochlear Scala Media Pressure Measurement: Implications for Models of Cochlear Mechanics.
Kale, Sushrut S; Olson, Elizabeth S
2015-12-15
Models of the active cochlea build upon the underlying passive mechanics. Passive cochlear mechanics is based on physical and geometrical properties of the cochlea and the fluid-tissue interaction between the cochlear partition and the surrounding fluid. Although the fluid-tissue interaction between the basilar membrane and the fluid in scala tympani (ST) has been explored in both active and passive cochleae, there was no experimental data on the fluid-tissue interaction on the scala media (SM) side of the partition. To this aim, we measured sound-evoked intracochlear pressure in SM close to the partition using micropressure sensors. All the SM pressure data are from passive cochleae, likely because the SM cochleostomy led to loss of endocochlear potential. Thus, these experiments are studies of passive cochlear mechanics. SM pressure close to the tissue showed a pattern of peaks and notches, which could be explained as an interaction between fast and slow (i.e., traveling wave) pressure modes. In several animals SM and ST pressure were measured in the same cochlea. Similar to previous studies, ST-pressure was dominated by a slow, traveling wave mode at stimulus frequencies in the vicinity of the best frequency of the measurement location, and by a fast mode above best frequency. Antisymmetric pressure between SM and ST supported the classic single-partition cochlear models, or a dual-partition model with tight coupling between partitions. From the SM and ST pressure we calculated slow and fast modes, and from active ST pressure we extrapolated the passive findings to the active case. The passive slow mode estimated from SM and ST data was low-pass in nature, as predicted by cochlear models. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Intracochlear Scala Media Pressure Measurement: Implications for Models of Cochlear Mechanics
Kale, Sushrut S.; Olson, Elizabeth S.
2015-01-01
Models of the active cochlea build upon the underlying passive mechanics. Passive cochlear mechanics is based on physical and geometrical properties of the cochlea and the fluid-tissue interaction between the cochlear partition and the surrounding fluid. Although the fluid-tissue interaction between the basilar membrane and the fluid in scala tympani (ST) has been explored in both active and passive cochleae, there was no experimental data on the fluid-tissue interaction on the scala media (SM) side of the partition. To this aim, we measured sound-evoked intracochlear pressure in SM close to the partition using micropressure sensors. All the SM pressure data are from passive cochleae, likely because the SM cochleostomy led to loss of endocochlear potential. Thus, these experiments are studies of passive cochlear mechanics. SM pressure close to the tissue showed a pattern of peaks and notches, which could be explained as an interaction between fast and slow (i.e., traveling wave) pressure modes. In several animals SM and ST pressure were measured in the same cochlea. Similar to previous studies, ST-pressure was dominated by a slow, traveling wave mode at stimulus frequencies in the vicinity of the best frequency of the measurement location, and by a fast mode above best frequency. Antisymmetric pressure between SM and ST supported the classic single-partition cochlear models, or a dual-partition model with tight coupling between partitions. From the SM and ST pressure we calculated slow and fast modes, and from active ST pressure we extrapolated the passive findings to the active case. The passive slow mode estimated from SM and ST data was low-pass in nature, as predicted by cochlear models. PMID:26682824
NASA Astrophysics Data System (ADS)
Zhao, Hongshan; Li, Wei; Wang, Li; Zhou, Shu; Jin, Xuejun
2016-08-01
Two types of multiphase steels containing blocky or fine martensite have been used to study the phase interaction and the TRIP effect. These steels were obtained by step-quenching and partitioning (S-QP820) or intercritical-quenching and partitioning (I-QP800 & I-QP820). The retained austenite (RA) in the S-QP820 specimen containing blocky martensite transformed too early to prevent local failure at high strain due to local strain concentration. In contrast, plentiful RA in the I-QP800 specimen containing finely dispersed martensite transformed uniformly at high strain, which led to optimized strength and elongation. By applying a coordinate conversion method to the microhardness test, the load partitioning between ferrite and partitioned martensite was proved to follow the linear mixture law. The mechanical behavior of multiphase S-QP820 steel can be modeled based on the Mecking-Kocks theory, Bouquerel's spherical assumption, and the Gladman-type mixture law. Finally, the transformation-induced martensite hardening effect has been studied on a bake-hardened specimen.
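For reference, the linear (rule-of-mixtures) load partitioning mentioned above has the simple form shown in the sketch below; the phase fractions and flow stresses are illustrative numbers, not the measured values from the study.

```python
# Hedged sketch of rule-of-mixtures load partitioning between ferrite and martensite.
def mixture_flow_stress(f_martensite, sigma_ferrite, sigma_martensite):
    """Composite flow stress (MPa) from phase fraction and phase flow stresses."""
    return (1.0 - f_martensite) * sigma_ferrite + f_martensite * sigma_martensite

# Example: 30% martensite with illustrative phase strengths.
print(mixture_flow_stress(0.3, sigma_ferrite=450.0, sigma_martensite=1400.0))
```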
Sun, Zhiqian; Song, Gian; Sisneros, Thomas A.; Clausen, Bjørn; Pu, Chao; Li, Lin; Gao, Yanfei; Liaw, Peter K.
2016-01-01
An understanding of load sharing among constituent phases aids in designing mechanical properties of multiphase materials. Here we investigate load partitioning between the body-centered-cubic iron matrix and NiAl-type precipitates in a ferritic alloy during uniaxial tensile tests at 364 and 506 °C on multiple length scales by in situ neutron diffraction and crystal plasticity finite element modeling. Our findings show that the macroscopic load-transfer efficiency is not as high as that predicted by the Eshelby model; moreover, it depends on the matrix strain-hardening behavior. We explain the grain-level anisotropic load-partitioning behavior by considering the plastic anisotropy of the matrix and elastic anisotropy of precipitates. We further demonstrate that the partitioned load on NiAl-type precipitates relaxes at 506 °C, most likely through thermally-activated dislocation rearrangement on the microscopic scale. The study contributes to further understanding of load-partitioning characteristics in multiphase materials. PMID:26979660
Sun, Zhiqian; Song, Gian; Sisneros, Thomas A.; ...
2016-03-16
An understanding of load sharing among constituent phases aids in designing mechanical properties of multiphase materials. Here we investigate load partitioning between the body-centered-cubic iron matrix and NiAl-type precipitates in a ferritic alloy during uniaxial tensile tests at 364 and 506 °C on multiple length scales by in situ neutron diffraction and crystal plasticity finite element modeling. Our findings show that the macroscopic load-transfer efficiency is not as high as that predicted by the Eshelby model; moreover, it depends on the matrix strain-hardening behavior. We explain the grain-level anisotropic load-partitioning behavior by considering the plastic anisotropy of the matrix and elastic anisotropy of precipitates. We further demonstrate that the partitioned load on NiAl-type precipitates relaxes at 506 °C, most likely through thermally-activated dislocation rearrangement on the microscopic scale. Furthermore, the study contributes to further understanding of load-partitioning characteristics in multiphase materials.
The relationship between leaf area growth and biomass accumulation in Arabidopsis thaliana
Weraduwage, Sarathi M.; Chen, Jin; Anozie, Fransisca C.; Morales, Alejandro; Weise, Sean E.; Sharkey, Thomas D.
2015-01-01
Leaf area growth determines the light interception capacity of a crop and is often used as a surrogate for plant growth in high-throughput phenotyping systems. The relationship between leaf area growth and growth in terms of mass will depend on how carbon is partitioned among new leaf area, leaf mass, root mass, reproduction, and respiration. A model of leaf area growth in terms of photosynthetic rate and carbon partitioning to different plant organs was developed and tested with Arabidopsis thaliana L. Heynh. ecotype Columbia (Col-0) and a mutant line, gigantea-2 (gi-2), which develops very large rosettes. Data obtained from growth analysis and gas exchange measurements was used to train a genetic programming algorithm to parameterize and test the above model. The relationship between leaf area and plant biomass was found to be non-linear and variable depending on carbon partitioning. The model output was sensitive to the rate of photosynthesis but more sensitive to the amount of carbon partitioned to growing thicker leaves. The large rosette size of gi-2 relative to that of Col-0 resulted from relatively small differences in partitioning to new leaf area vs. leaf thickness. PMID:25914696
The relationship between leaf area growth and biomass accumulation in Arabidopsis thaliana
Weraduwage, Sarathi M.; Chen, Jin; Anozie, Fransisca C.; ...
2015-04-09
Leaf area growth determines the light interception capacity of a crop and is often used as a surrogate for plant growth in high-throughput phenotyping systems. The relationship between leaf area growth and growth in terms of mass will depend on how carbon is partitioned among new leaf area, leaf mass, root mass, reproduction, and respiration. A model of leaf area growth in terms of photosynthetic rate and carbon partitioning to different plant organs was developed and tested with Arabidopsis thaliana L. Heynh. ecotype Columbia (Col-0) and a mutant line, gigantea-2 (gi-2), which develops very large rosettes. Data obtained from growth analysis and gas exchange measurements was used to train a genetic programming algorithm to parameterize and test the above model. The relationship between leaf area and plant biomass was found to be non-linear and variable depending on carbon partitioning. The model output was sensitive to the rate of photosynthesis but more sensitive to the amount of carbon partitioned to growing thicker leaves. The large rosette size of gi-2 relative to that of Col-0 resulted from relatively small differences in partitioning to new leaf area vs. leaf thickness.
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy
2016-11-01
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
NASA Astrophysics Data System (ADS)
Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy
2016-11-01
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
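The conversion from per-solvent solvation free energies to a partition coefficient is the standard thermodynamic relation logP = (ΔG_water − ΔG_organic) / (2.303 RT). A minimal sketch follows; the free energy values are illustrative, not 3D-RISM outputs.

```python
import math

R_KCAL = 1.987204e-3   # gas constant in kcal/(mol*K)

def log_p(dg_water, dg_cyclohexane, T=298.15):
    """Cyclohexane/water log partition coefficient from solvation free energies (kcal/mol)."""
    return (dg_water - dg_cyclohexane) / (math.log(10) * R_KCAL * T)

# A solute that is 2 kcal/mol better solvated in cyclohexane than in water -> positive logP.
print(log_p(dg_water=-5.0, dg_cyclohexane=-7.0))
```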
A partition-limited model for the plant uptake of organic contaminants from soil and water
Chiou, C.T.; Sheng, G.; Manes, M.
2001-01-01
In dealing with the passive transport of organic contaminants from soils to plants (including crops), a partition-limited model is proposed in which (i) the maximum (equilibrium) concentration of a contaminant in any location in the plant is determined by partition equilibrium with its concentration in the soil interstitial water, which in turn is determined essentially by the concentration in the soil organic matter (SOM) and (ii) the extent of approach to partition equilibrium, as measured by the ratio of the contaminant concentrations in plant water and soil interstitial water, αpt (≤ 1), depends on the transport rate of the contaminant in soil water into the plant and the volume of soil water solution that is required for the plant contaminant level to reach equilibrium with the external soil-water phase. Through reasonable estimates of plant organic-water compositions and of contaminant partition coefficients with various plant components, the model accounts for calculated values of αpt in several published crop-contamination studies, including near-equilibrium values (i.e., αpt ≈ 1) for relatively water-soluble contaminants and lower values for much less soluble contaminants; the differences are attributed to the much higher partition coefficients of the less soluble compounds between plant lipids and plant water, which necessitates much larger volumes of the plant water transport for achieving the equilibrium capacities. The model analysis indicates that for plants with high water contents the plant-water phase acts as the major reservoir for highly water-soluble contaminants. By contrast, the lipid in a plant, even at small amounts, is usually the major reservoir for highly water-insoluble contaminants.
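One hedged reading of the partition-limited picture, in code (the parameter names and values below are illustrative, not the paper's parameterization): the interstitial water concentration is set by the SOM-controlled partition, the whole-plant quasi-equilibrium capacity is a composition-weighted sum, and αpt ≤ 1 scales the approach to equilibrium.

```python
def plant_concentration(c_soil, f_om, k_om, f_water, f_lipid, k_lipid_water, alpha_pt):
    """Quasi-equilibrium plant concentration under a partition-limited model (sketch)."""
    c_w = c_soil / (f_om * k_om)                        # soil interstitial water, SOM-controlled
    k_plant_water = f_water + f_lipid * k_lipid_water   # whole-plant/water partition capacity
    return alpha_pt * k_plant_water * c_w               # alpha_pt <= 1: degree of approach to equilibrium

# Water-soluble contaminant (small K values): alpha_pt is near 1 and the plant-water phase dominates.
print(plant_concentration(c_soil=1.0, f_om=0.02, k_om=50.0, f_water=0.9,
                          f_lipid=0.01, k_lipid_water=100.0, alpha_pt=0.9))
```

For a strongly hydrophobic contaminant, k_lipid_water becomes very large, so the lipid term dominates the capacity and much larger plant-water throughput is needed before αpt approaches 1, as the abstract describes.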
Tannenbaum, David; Doctor, Jason N; Persell, Stephen D; Friedberg, Mark W; Meeker, Daniella; Friesema, Elisha M; Goldstein, Noah J; Linder, Jeffrey A; Fox, Craig R
2015-03-01
Healthcare professionals are rapidly adopting electronic health records (EHRs). Within EHRs, seemingly innocuous menu design configurations can influence provider decisions for better or worse. The purpose of this study was to examine whether the grouping of menu items systematically affects prescribing practices among primary care providers. We surveyed 166 primary care providers in a research network of practices in the greater Chicago area, of whom 84 responded (51% response rate). Respondents and non-respondents were similar on all observable dimensions except that respondents were more likely to work in an academic setting. The questionnaire consisted of seven clinical vignettes. Each vignette described typical signs and symptoms for acute respiratory infections, and providers chose treatments from a menu of options. For each vignette, providers were randomly assigned to one of two menu partitions. For antibiotic-inappropriate vignettes, the treatment menu either listed over-the-counter (OTC) medications individually while grouping prescriptions together, or displayed the reverse partition. For antibiotic-appropriate vignettes, the treatment menu either listed narrow-spectrum antibiotics individually while grouping broad-spectrum antibiotics, or displayed the reverse partition. The main outcome was provider treatment choice. For antibiotic-inappropriate vignettes, we categorized responses as prescription drugs or OTC-only options. For antibiotic-appropriate vignettes, we categorized responses as broad- or narrow-spectrum antibiotics. Across vignettes, there was an 11.5 percentage point reduction in choosing aggressive treatment options (e.g., broad-spectrum antibiotics) when aggressive options were grouped compared to when those same options were listed individually (95% CI: 2.9 to 20.1%; p = .008). Provider treatment choice appears to be influenced by the grouping of menu options, suggesting that the layout of EHR order sets is not an arbitrary exercise. The careful crafting of EHR order sets can serve as an important opportunity to improve patient care without constraining physicians' ability to prescribe what they believe is best for their patients.
Vecchiarelli, Anthony G.; Hwang, Ling Chin; Mizuuchi, Kiyoshi
2013-01-01
Increasingly diverse types of cargo are being found to be segregated and positioned by ParA-type ATPases. Several minimalistic systems described in bacteria are self-organizing and are known to affect the transport of plasmids, protein machineries, and chromosomal loci. One well-studied model is the F plasmid partition system, SopABC. In vivo, SopA ATPase forms dynamic patterns on the nucleoid in the presence of the ATPase stimulator, SopB, which binds to the sopC site on the plasmid, demarcating it as the cargo. To understand the relationship between nucleoid patterning and plasmid transport, we established a cell-free system to study plasmid partition reactions in a DNA-carpeted flowcell. We observed depletion zones of the partition ATPase on the DNA carpet surrounding partition complexes. The findings favor a diffusion-ratchet model for plasmid motion whereby partition complexes create an ATPase concentration gradient and then climb up this gradient toward higher concentrations of the ATPase. Here, we report on the dynamic properties of the Sop system on a DNA-carpet substrate, which further support the proposed diffusion-ratchet mechanism. PMID:23479605
Whelan, Nathan V; Halanych, Kenneth M
2017-03-01
As phylogenetic datasets have increased in size, site-heterogeneous substitution models such as CAT-F81 and CAT-GTR have been advocated in favor of other models because they purportedly suppress long-branch attraction (LBA). These models are two of the most commonly used models in phylogenomics, and they have been applied to a variety of taxa, ranging from Drosophila to land plants. However, many arguments in favor of CAT models have been based on tenuous assumptions about the true phylogeny, rather than rigorous testing with known trees via simulation. Moreover, CAT models have not been compared to other approaches for handling substitutional heterogeneity such as data partitioning with site-homogeneous substitution models. We simulated amino acid sequence datasets with substitutional heterogeneity on a variety of tree shapes including those susceptible to LBA. Data were analyzed with both CAT models and partitioning to explore model performance; in total over 670,000 CPU hours were used, of which over 97% was spent running analyses with CAT models. In many cases, all models recovered branching patterns that were identical to the known tree. However, CAT-F81 consistently performed worse than other models in inferring the correct branching patterns, and both CAT models often overestimated substitutional heterogeneity. Additionally, reanalysis of two empirical metazoan datasets supports the notion that CAT-F81 tends to recover less accurate trees than data partitioning and CAT-GTR. Given these results, we conclude that partitioning and CAT-GTR perform similarly in recovering accurate branching patterns. However, computation time can be orders of magnitude less for data partitioning, with commonly used implementations of CAT-GTR often failing to reach completion in a reasonable time frame (i.e., for Bayesian analyses to converge). Practices such as removing constant sites and parsimony uninformative characters, or using CAT-F81 when CAT-GTR is deemed too computationally expensive, cannot be logically justified. Given clear problems with CAT-F81, phylogenies previously inferred with this model should be reassessed. [Data partitioning; phylogenomics, simulation, site-heterogeneity, substitution models.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.
2016-11-01
With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
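A hedged sketch of the regionalization step described above, with random placeholder data rather than FLUXNET sites: site-calibrated Noah parameters (rs,min, Czil, fxexp) are related to environmental covariates with an Extra-Trees regressor, and skill is checked with leave-one-out cross validation before the fitted model is used to map parameters elsewhere.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_sites = 85
X = rng.random((n_sites, 4))   # e.g., mean temperature, precipitation, LAI, sand fraction (placeholders)

# Calibrated parameter sets (synthetic, loosely tied to the covariates for illustration).
Y = np.column_stack([
    40 + 260 * X[:, 0] + 10 * rng.normal(size=n_sites),       # rs_min
    0.05 + 0.9 * X[:, 1] + 0.05 * rng.normal(size=n_sites),   # Czil
    1 + 3 * X[:, 2] + 0.2 * rng.normal(size=n_sites),         # fxexp
])

model = ExtraTreesRegressor(n_estimators=500, random_state=0)
Y_loo = cross_val_predict(model, X, Y, cv=LeaveOneOut())       # leave-one-out predicted parameters
print(np.corrcoef(Y[:, 0], Y_loo[:, 0])[0, 1])                 # LOO skill for rs_min
model.fit(X, Y)                                                # final model for mapping parameters globally
```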
A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method
NASA Astrophysics Data System (ADS)
Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.
2007-09-01
Trigonometrically fitted symplectic Partitioned Runge-Kutta (EFSPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the functions sin(wx) and cos(wx), w ∈ R. We modify a fifth-order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.
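For readers unfamiliar with partitioned symplectic updates, a much simpler second-order example (Störmer-Verlet, not the fifth-order trigonometrically fitted method of the paper) shows how position and momentum are advanced in separate, partitioned stages while the energy of the harmonic oscillator H = p²/2 + w²q²/2 stays bounded.

```python
def verlet(q, p, w, h, n_steps):
    """Symplectic, partitioned update for the harmonic oscillator (unit mass)."""
    for _ in range(n_steps):
        p -= 0.5 * h * w**2 * q   # half kick (momentum stage)
        q += h * p                # drift (position stage)
        p -= 0.5 * h * w**2 * q   # half kick (momentum stage)
    return q, p

w, h = 2.0, 0.01
q, p = 1.0, 0.0
q, p = verlet(q, p, w, h, 10000)
energy = 0.5 * p**2 + 0.5 * w**2 * q**2
print(q, p, energy)   # energy stays close to the initial value 0.5 * w**2 = 2.0
```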
Ufuk, Ayşe; Assmus, Frauke; Francis, Laura; Plumb, Jonathan; Damian, Valeriu; Gertz, Michael; Houston, J Brian; Galetin, Aleksandra
2017-04-03
Accumulation of respiratory drugs in human alveolar macrophages (AMs) has not been extensively studied in vitro and in silico despite its potential impact on therapeutic efficacy and/or occurrence of phospholipidosis. The current study aims to characterize the accumulation and subcellular distribution of drugs with respiratory indication in human AMs and to develop an in silico mechanistic AM model to predict lysosomal accumulation of investigated drugs. The data set included 9 drugs previously investigated in the rat AM cell line NR8383. The cell-to-unbound medium concentration ratio (Kp,cell) of all drugs (5 μM) was determined to assess the magnitude of intracellular accumulation. The extent of lysosomal sequestration in freshly isolated human AMs from multiple donors (n = 5) was investigated for clarithromycin and imipramine (positive control) using an indirect in vitro method (±20 mM ammonium chloride, NH4Cl). The AM cell parameters and drug physicochemical data were collated to develop an in silico mechanistic AM model. Three in silico models differing in their description of drug membrane partitioning were evaluated; model (1) relied on octanol-water partitioning of drugs, model (2) used in vitro data to account for this process, and model (3) predicted membrane partitioning by incorporating AM phospholipid fractions. In vitro Kp,cell ranged >200-fold for respiratory drugs, with the highest accumulation seen for clarithromycin. A good agreement in Kp,cell was observed between human AMs and NR8383 (2.45-fold bias), highlighting NR8383 as a potentially useful in vitro surrogate tool to characterize drug accumulation in AMs. The mean Kp,cell values of clarithromycin (81, CV = 51%) and imipramine (963, CV = 54%) were reduced in the presence of NH4Cl by up to 67% and 81%, respectively, suggesting substantial contribution of lysosomal sequestration and intracellular binding in the accumulation of these drugs in human AMs. The in vitro data showed variability in drug accumulation between individual human AM donors due to possible differences in lysosomal abundance, volume, and phospholipid content, which may have important clinical implications. Consideration of drug-acidic phospholipid interactions significantly improved the performance of the in silico models; use of in vitro Kp,cell obtained in the presence of NH4Cl as a surrogate for membrane partitioning (model (2)) captured the variability in clarithromycin and imipramine Kp,cell observed in vitro and showed the best ability to correctly predict positive and negative lysosomotropic properties. The developed mechanistic AM model represents a useful in silico tool to predict lysosomal and cellular drug concentrations based on drug physicochemical data and system-specific properties, with potential application to other cell types.
Partitioning of Nanoparticles into Organic Phases and Model Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Posner, J.D.; Westerhoff, P.; Hou, W-C.
2011-08-25
There is a recognized need to understand and predict the fate, transport and bioavailability of engineered nanoparticles (ENPs) in aquatic and soil ecosystems. Recent research focuses on either collection of empirical data (e.g., removal of a specific NP through water or soil matrices under variable experimental conditions) or precise NP characterization (e.g. size, degree of aggregation, morphology, zeta potential, purity, surface chemistry, and stability). However, it is almost impossible to transition from these precise measurements to models suitable to assess the NP behavior in the environment with complex and heterogeneous matrices. For decades, the USEPA has developed and applied basic partitioning parameters (e.g., octanol-water partition coefficients) and models (e.g., EPI Suite, ECOSAR) to predict the environmental fate, bioavailability, and toxicity of organic pollutants (e.g., pesticides, hydrocarbons, etc.). In this project we have investigated the hypothesis that NP partition coefficients between water and organic phases (octanol or lipid bilayer) are highly dependent on their physicochemical properties, aggregation, and presence of natural constituents in aquatic environments (salts, natural organic matter), which may impact their partitioning into biological matrices (bioaccumulation) and human exposure (bioavailability) as well as the eventual usage in modeling the fate and bioavailability of ENPs. In this report, we use the terminology "partitioning" to operationally define the fraction of ENPs distributed among different phases. The mechanisms leading to this partitioning probably involve both chemical force interactions (hydrophobic association, hydrogen bonding, ligand exchange, etc.) and physical forces that bring the ENPs in close contact with the phase interfaces (diffusion, electrostatic interactions, mixing turbulence, etc.). Our work focuses on partitioning, but also provides insight into the relative behavior of ENPs as either "more like dissolved substances" or "more like colloids", as the division between behaviors of macromolecules versus colloids remains ill-defined. Below we detail our work on two broadly defined objectives: (i) partitioning of ENPs into octanol, lipid bilayer, and water, and (ii) disruption of lipid bilayers by ENPs. We have found that the partitioning of NPs reaches pseudo-equilibrium distributions between water and organic phases. The equilibrium partitioning most strongly depends on the particle surface charge, which leads us to the conclusion that electrostatic interactions are critical to understanding the fate of NPs in the environment. We also show that the kinetic rate at which particles partition is a function of their size (small particles partition faster by number), as can be predicted from simple DLVO models. We have found that particle number density is the most effective dosimetry to present our results and provide quantitative comparison across experiments and experimental platforms. Cumulatively, our work shows that lipid bilayers are a more effective organic phase than octanol because of the definable surface area and ease of interpretation of the results. Our early comparison of NP partitioning between water and lipids suggests that this measurement can be predictive of bioaccumulation in aquatic organisms. We have shown that nanoparticles disrupt lipid bilayer membranes and detail how the NP-bilayer interaction leads to the malfunction of lipid bilayers in regulating the fluxes of ionic charges and molecules.
Our results show that the disruption of the lipid membranes is similar to that caused by the toxin melittin, except that single particles can disrupt a bilayer. We show that only a single particle is required to disrupt a 150 nm DOPC liposome. The equilibrium leakage of membranes is a function of the particle number density and particle surface charge, consistent with results from our partitioning experiments. Our disruption experiments with varying surface functionality show that positively charged particles (polyamine) are most disruptive, consistent with in vitro toxicity panels using cell cultures. Overall, this project has resulted in 8 published or submitted archival papers and has been presented 12 times. We have trained five students and provided growth opportunities for a postdoc.
Partitioning medical image databases for content-based queries on a Grid.
Montagnat, J; Breton, V; E Magnin, I
2005-01-01
In this paper we study the impact of executing a medical image database query application on the grid. To lower the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
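A simplified sketch of the partitioning idea (not the paper's complexity model; node rates, overhead, and per-image cost below are illustrative): split the database in proportion to node processing rates so that, with a fixed per-node overhead, all nodes finish at roughly the same time.

```python
def partition_database(n_images, rates, overhead, t_per_image):
    """Split n_images across nodes with given relative rates; return sizes and makespan."""
    denom = sum(rates)
    sizes = [round(n_images * r / denom) for r in rates]
    sizes[-1] += n_images - sum(sizes)   # absorb any rounding remainder on the last node
    makespan = max(overhead + s * t_per_image / r for s, r in zip(sizes, rates))
    return sizes, makespan

# Three nodes with relative speeds 1:2:4, 30 s of scheduling overhead, 0.5 s per image.
print(partition_database(10000, rates=[1.0, 2.0, 4.0], overhead=30.0, t_per_image=0.5))
```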
Intersecting surface defects and instanton partition functions
NASA Astrophysics Data System (ADS)
Pan, Yiwen; Peelaers, Wolfger
2017-07-01
We analyze intersecting surface defects inserted in interacting four-dimensional N=2 supersymmetric quantum field theories. We employ the realization of a class of such systems as the infrared fixed points of renormalization group flows from larger theories, triggered by perturbed Seiberg-Witten monopole-like configurations, to compute their partition functions. These results are cast into the form of a partition function of 4d/2d/0d coupled systems. Our computations provide concrete expressions for the instanton partition function in the presence of intersecting defects and we study the corresponding ADHM model.
Study of VOCs transport and storage in porous media and assemblies
NASA Astrophysics Data System (ADS)
Xu, Jing
Indoor VOC concentrations are strongly influenced by the transport and storage of VOCs in building and furnishing materials, the majority of which are porous media. The transport and storage ability of a porous medium for a given VOC can be characterized by its diffusion coefficient and partition coefficient, respectively, and such data are currently lacking. In addition, environmental conditions are another important factor affecting VOC emissions. The main purposes of this dissertation are to: (1) validate the similarity hypothesis between the transport of water vapor and VOCs in porous materials, and help build a database of VOC transport and storage properties with the assistance of the similarity hypothesis; (2) investigate the effect of relative humidity on the diffusion and partition coefficients; and (3) develop a numerical multilayer model to simulate VOC emission characteristics in both the short and long term. To better understand the similarities and differences between moisture and volatile organic compound (VOC) diffusion through porous media, a dynamic dual-chamber experimental system was developed. The diffusion coefficients and partition coefficients of moisture and selected VOCs in materials were compared. Based on the developed similarity theory, the diffusion behavior of each particular VOC in porous media is predictable as long as the similarity coefficient of the VOC is known. Experimental results showed that a relative humidity of 80% RH led to a higher partition coefficient for formaldehyde than 50% RH. However, between 25% and 50% RH, there was no significant difference in partition coefficient. The partition coefficient of toluene decreased with increasing humidity due to competition with water molecules for pore surface area and the non-soluble nature of toluene. The solubility of VOCs was found to correlate well with their partition coefficients: the partition coefficient of a VOC was not simply inversely proportional to the vapor pressure of the compound, but also increased with increasing Henry's law constant. Experimental results also showed that a higher relative humidity led to a larger effective diffusion coefficient for both conventional wallboard and green wallboard. The partition coefficient (Kma) of formaldehyde in conventional wallboard was larger at 50% RH than at 20% RH, while the difference in partition coefficient between 50% RH and 70% RH was insignificant. For the green wallboard and green carpet, the partition coefficient increased slightly as relative humidity increased from 20% to 50% and 70%. Engineered wood products such as particleboard have been widely used with wood veneer and laminate to form multilayer assembly work surfaces or panels. The multilayer model study in this dissertation comprised both numerical and experimental investigation of VOC emissions from such an assembly. A coupled 1D multilayer model based on CHAMPS (coupled heat, air, moisture and pollutant simulations) was first described. Then, the transport properties of each material layer were determined. Several emission cases from a three-layered heterogeneous work assembly were modeled using the developed simulation model. Finally, the numerical model was verified against experimental data for both hexanal and acetaldehyde emissions in a 50 L standard small-scale chamber. The model is promising for predicting VOC emissions from multilayered porous materials in emission tests.
Adaptive hybrid simulations for multiscale stochastic reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
Adaptive hybrid simulations for multiscale stochastic reaction networks.
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
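A hedged sketch of the partitioning step only (not the adaptive hybrid simulator itself; the threshold and the toy network are illustrative): species with high copy numbers are marked continuous, and a reaction is moved to the continuous block only if every species it changes is continuous. Re-running such a classification as copy numbers evolve is what "adapting the partition during the simulation" refers to.

```python
def partition_network(copy_numbers, reactions, threshold=100):
    """copy_numbers: {species: count}; reactions: list of dicts {species: stoichiometric change}.
    Returns (continuous species, continuous reaction indices, discrete reaction indices)."""
    continuous_species = {s for s, n in copy_numbers.items() if n >= threshold}
    continuous_rxns, discrete_rxns = [], []
    for i, stoich in enumerate(reactions):
        if all(s in continuous_species for s in stoich):
            continuous_rxns.append(i)
        else:
            discrete_rxns.append(i)
    return continuous_species, continuous_rxns, discrete_rxns

# Toy gene-expression network: transcription/degradation of mRNA, translation/degradation of protein.
x = {"DNA": 1, "mRNA": 8, "Protein": 2500}
rxns = [{"mRNA": +1}, {"mRNA": -1}, {"Protein": +1}, {"Protein": -1}]
print(partition_network(x, rxns))   # protein reactions become continuous, mRNA reactions stay discrete
```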
Multi-stage mixing in subduction zone: Application to Merapi volcano, Indonesia
NASA Astrophysics Data System (ADS)
Debaille, V.; Doucelance, R.; Weis, D.; Schiano, P.
2003-04-01
Basalts sampling subduction zone volcanism (IAB) often show binary mixing relationships in classical Sr-Nd, Pb-Pb, Sr-Pb isotopic diagrams, generally interpreted as reflecting the involvement of two components in their source. However, several authors have highlighted the presence of at least three components in such a geodynamical context: mantle wedge, subducted and altered oceanic crust, and subducted sediments. The overlying continental crust can also contribute by contamination and assimilation in magma chambers and/or during magma ascent. Here we present a multi-stage model to obtain two-end-member mixing from three components (mantle wedge, altered oceanic crust and sediments). The first stage of the model considers the metasomatism of the mantle wedge by fluids and/or melts released by subducted materials (altered oceanic crust and associated sediments), taking into account the mobility and partition coefficients of trace elements in hydrated fluids and silicate melts. This results in the generation of two distinct end-members, reducing the number of components (mantle wedge, oceanic crust, sediments) from three to two. The second stage of the model concerns the binary mixing of the two end-members thus defined: mantle wedge metasomatized by slab-derived fluids and mantle wedge metasomatized by sediment-derived fluids. This model has been applied to a new isotopic data set (Sr, Nd and Pb, analyzed by TIMS and MC-ICP-MS) of Merapi volcano (Java Island, Indonesia). Previous studies have suggested three distinct components in the source of Indonesian lavas: mantle wedge, subducted sediments and altered oceanic crust. Moreover, it has been shown that crustal contamination does not significantly affect isotopic ratios of lavas. The multi-stage model proposed here is able to reproduce the binary mixing observed in lavas of Merapi, and a set of numerical values of bulk partition coefficients is given that accounts for the genesis of the lavas.
Fayyoumi, Ebaa; Oommen, B John
2009-10-01
We consider the microaggregation problem (MAP), which involves partitioning a set of individual records in a microdata file into a number of mutually exclusive and exhaustive groups. This problem, which seeks the best partition of the microdata file, is known to be NP-hard and has been tackled using many heuristic solutions. In this paper, we present the first reported fixed-structure-stochastic-automata-based solution to this problem. The newly proposed method leads to a lower value of the information loss (IL), obtains a better tradeoff between the IL and the disclosure risk (DR) when compared with state-of-the-art methods, and leads to a superior value of the scoring index, which is a criterion involving a combination of the IL and the DR. The scheme has been implemented, tested, and evaluated for different real-life and simulated data sets. The results clearly demonstrate the applicability of learning automata to the MAP and its ability to yield a solution that obtains the best tradeoff between IL and DR when compared with the state of the art.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, which is O(n^2). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
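For readers unfamiliar with partition refinement, the sketch below shows the classical Moore-style refinement that such minimization algorithms improve upon; it is not the backward-depth/hash-table algorithm of the paper, and the example DFA is made up.

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style partition refinement: states with identical acceptance and
    transition behaviour (with respect to the current partition) end up in one block."""
    # initial coarse partition: accepting vs non-accepting states
    block_of = {q: (q in accepting) for q in states}
    while True:
        # signature = (current block, blocks reached on each input symbol)
        sig = {q: (block_of[q], tuple(block_of[delta[q][a]] for a in alphabet))
               for q in states}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values()), key=repr))}
        refined = {q: ids[sig[q]] for q in states}
        if len(set(refined.values())) == len(set(block_of.values())):
            return refined                 # partition stable: blocks are the minimal states
        block_of = refined

# Example: q1 and q2 are equivalent accepting states and get merged into one block
delta = {'q0': {'a': 'q1', 'b': 'q2'},
         'q1': {'a': 'q1', 'b': 'q1'},
         'q2': {'a': 'q2', 'b': 'q2'}}
blocks = minimize_dfa(['q0', 'q1', 'q2'], ['a', 'b'], delta, {'q1', 'q2'})
```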
USDA-ARS's Scientific Manuscript database
The thermal-based Two Source Energy Balance (TSEB) model partitions the water and energy fluxes from vegetation and soil components, thus providing the ability to estimate soil evaporation (E) and canopy transpiration (T) separately. However, it is crucial for ET partitioning to retrieve reliable ...
THE DEVELOPMENT OF THE TEACHING SPACE DIVIDER.
ERIC Educational Resources Information Center
BELLOMY, CLEON C.; CAUDILL, WILLIAM W.
TYPES OF VERTICAL WORK SURFACES AND THE DEVELOPMENT OF A MODEL TEACHING SPACE DIVIDER ARE DISCUSSED IN THIS REPORT. THIS DESIGN IS BASED ON THE EXPRESSED NEED FOR MORE TACKBOARD AND SHELVING SPACE, AND FOR MOVABLE PARTITIONS. THE MODEL PANELS WHICH SERVE DIRECTLY AS PARTITIONS RATHER THAN BEING OVERLAID ON A PLASTERED SURFACE, INCLUDE THE…
Modeling Free Energies of Solvation in Olive Oil
Chamberlin, Adam C.; Levitt, David G.; Cramer, Christopher J.; Truhlar, Donald G.
2009-01-01
Olive oil partition coefficients are useful for modeling the bioavailability of drug-like compounds. We have recently developed an accurate solvation model called SM8 for aqueous and organic solvents (Marenich, A. V.; Olson, R. M.; Kelly, C. P.; Cramer, C. J.; Truhlar, D. G. J. Chem. Theory Comput. 2007, 3, 2011) and a temperature-dependent solvation model called SM8T for aqueous solution (Chamberlin, A. C.; Cramer, C. J.; Truhlar, D. G. J. Phys. Chem. B 2008, 112, 3024). Here we describe an extension of SM8T to predict air–olive oil and water–olive oil partitioning for drug-like solutes as functions of temperature. We also describe the database of experimental partition coefficients used to parameterize the model; this database includes 371 entries for 304 compounds spanning the 291–310 K temperature range. PMID:19434923
NASA Astrophysics Data System (ADS)
Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.
2017-07-01
Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication of arbitrary size, because they assume a square and symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented by the GPU (graphics processing unit).
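A minimal illustration of why partitioning matters for parallel sparse matrix-vector multiplication is a 1D row partition that balances nonzeros across workers. The sketch below is a CPU-side Python/SciPy illustration only, not the hypergraph-partitioned CUDA implementation discussed in the abstract.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def partition_rows_by_nnz(A_csr, n_parts):
    """Split the rows of a CSR matrix into contiguous blocks with roughly equal
    numbers of nonzeros, so each worker does similar work in y = A @ x."""
    nnz_per_row = np.diff(A_csr.indptr)
    target = nnz_per_row.sum() / n_parts
    parts, start, acc = [], 0, 0
    for i, n in enumerate(nnz_per_row):
        acc += n
        if acc >= target and len(parts) < n_parts - 1:
            parts.append((start, i + 1))
            start, acc = i + 1, 0
    parts.append((start, A_csr.shape[0]))
    return parts

A = sparse_random(1000, 800, density=0.01, format='csr', random_state=1)
x = np.ones(800)
blocks = partition_rows_by_nnz(A, 4)
y = np.concatenate([A[r0:r1] @ x for r0, r1 in blocks])   # piecewise SpMV
assert np.allclose(y, A @ x)
```

Hypergraph partitioning generalizes this by also accounting for which entries of x each block needs, and hence the communication volume between workers.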
Kumaresan, S; Radhakrishnan, S
1996-01-01
A head injury model consisting of the skull, the CSF, the brain and its partitioning membranes and the neck region is simulated by considering its near actual geometry. Three-dimensional finite-element analysis is carried out to investigate the influence of the partitioning membranes of the brain and the neck in head injury analysis through free-vibration analysis and transient analysis. In free-vibration analysis, the first five modal frequencies are calculated, and in transient analysis intracranial pressure and maximum shear stress in the brain are determined for a given occipital impact load.
Prenatal stress and ethanol exposure produces inversion of sexual partner preference in mice.
Popova, Nina K; Morozova, Maryana V; Amstislavskaya, Tamara G
2011-02-01
The presence of a sexually receptive female behind a perforated transparent partition induced sexual arousal and specific behavior in male mice, so that they spent more time near the partition in an attempt to make their way to the female. A three-chambered free-choice model was used to evaluate sexual partner preference. The main measure of sexual preference was the time spent by a male mouse at the partition dividing it from a female (F-partition time) versus at the partition dividing it from a male (M-partition time). Pregnant mice were given ethanol (11 vol.%) on gestational days 1-21 and were exposed to restraint stress (2 h daily on days 15-21 of gestation). Control pregnant mice had free access to water and food and were not stressed. Adult male offspring of ethanol- and stress-exposed dams (E+S) showed decreased F-partition time and increased M-partition time. Whereas F-partition time prevailed over M-partition time in all control mice, 78% of E+S mice showed a prevailing M-partition time. E+S mice were more active in social interaction with a juvenile male. No significant differences between E+S and control mice were revealed in the open field and novelty tests. Therefore, E+S exposure during gestation inverted sexual partner preference in male offspring, suggesting that stress and alcohol in pregnancy produce a predisposition to homosexuality. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Jaime-González, Carlos; Acebes, Pablo; Mateos, Ana; Mezquida, Eduardo T
2017-01-01
LiDAR technology has contributed substantially to strengthening our knowledge of habitat structure-wildlife relationships, though there is an evident bias towards flying vertebrates. To bridge this gap, we investigated and compared the performance of LiDAR and field data to model habitat preferences of the wood mouse (Apodemus sylvaticus) in a Mediterranean high mountain pine forest (Pinus sylvestris). We recorded nine field and 13 LiDAR variables that were summarized by means of Principal Component Analyses (PCA). We then analyzed wood mouse habitat preferences using three different models based on: (i) field PC predictors, (ii) LiDAR PC predictors, and (iii) both sets of predictors in a combined model, including a variance partitioning analysis. Elevation was also included as a predictor in the three models. Our results indicate that LiDAR-derived variables were better predictors than field-based variables. The model combining both data sets slightly improved the predictive power. Field-derived variables indicated that the wood mouse was positively influenced by the gradient of increasing shrub cover and negatively affected by elevation. Regarding LiDAR data, two LiDAR PCs, i.e. gradients in canopy openness and complexity in forest vertical structure, positively influenced the wood mouse, although elevation interacted negatively with the complexity in vertical structure, indicating wood mouse preferences for plots with lower elevations but with complex forest vertical structure. The combined model was similar to the LiDAR-based model and included the gradient of shrub cover measured in the field. Variance partitioning showed that LiDAR-based variables, together with elevation, were the most important predictors and that part of the variation explained by shrub cover was shared. LiDAR-derived variables were good surrogates of environmental characteristics explaining habitat preferences of the wood mouse. Our LiDAR metrics represented structural features of the forest patch, such as the presence and cover of shrubs, as well as other characteristics likely including time since perturbation, food availability and predation risk. Our results suggest that LiDAR is a promising technology for further exploring habitat preferences of small mammal communities.
Boundaries on Range-Range Constrained Admissible Regions for Optical Space Surveillance
NASA Astrophysics Data System (ADS)
Gaebler, J. A.; Axelrad, P.; Schumacher, P. W., Jr.
We propose a new type of admissible-region analysis for track initiation in multi-satellite problems when apparent angles measured at known stations are the only observable. The goal is to create an efficient and parallelizable algorithm for computing initial candidate orbits for a large number of new targets. It takes at least three angles-only observations to establish an orbit by traditional means. Thus one is faced with a problem that requires N-choose-3 sets of calculations to test every possible combination of the N observations. An alternative approach is to reduce the number of combinations by making hypotheses of the range to a target along the observed line-of-sight. If realistic bounds on the range are imposed, consistent with a given partition of the space of orbital elements, a pair of range possibilities can be evaluated via Lambert's method to find candidate orbits for that partition, which then requires N-choose-2 times M-choose-2 combinations, where M is the average number of range hypotheses per observation. The contribution of this work is a set of constraints that establish bounds on the range-range hypothesis region for a given element-space partition, thereby minimizing M. Two effective constraints were identified, which together constrain the hypothesis region in range-range space to nearly that of the true admissible region based on an orbital partition. The first constraint is based on the geometry of the vacant orbital focus. The second constraint is based on time-of-flight and Lagrange's form of Kepler's equation. A complete and efficient parallelization of the problem is possible with this approach because the element partitions can be arbitrary and can be handled independently of each other.
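The combinatorial gain described above is easy to quantify; the snippet below compares the two counts for illustrative values of N and M (the numbers are hypothetical, chosen only to show the scaling).

```python
from math import comb

def candidate_orbit_evaluations(N, M):
    """Compare the work of all-triples angles-only initial orbit determination
    with the range-hypothesis pairing described above (N observations,
    roughly M range hypotheses per observation)."""
    triples = comb(N, 3)                      # classical N-choose-3 approach
    pairs = comb(N, 2) * comb(M, 2)           # Lambert solves over range-range pairs
    return triples, pairs

# e.g. 200 observations, 5 range hypotheses each: constraining M keeps pairing tractable
print(candidate_orbit_evaluations(N=200, M=5))
```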
Influence of Silicate Melt Composition on Metal/Silicate Partitioning of W, Ge, Ga and Ni
NASA Technical Reports Server (NTRS)
Singletary, S. J.; Domanik, K.; Drake, M. J.
2005-01-01
The depletion of the siderophile elements in the Earth's upper mantle relative to the chondritic meteorites is a geochemical imprint of core segregation. Therefore, metal/silicate partition coefficients (Dm/s) for siderophile elements are essential to investigations of core formation when used in conjunction with the pattern of elemental abundances in the Earth's mantle. The partitioning of siderophile elements is controlled by temperature, pressure, oxygen fugacity, and by the compositions of the metal and silicate phases. Several recent studies have shown the importance of silicate melt composition for the partitioning of siderophile elements between silicate and metallic liquids. It has been demonstrated that many elements display increased solubility in less polymerized (mafic) melts. However, the importance of silicate melt composition was believed to be minor compared to the influence of oxygen fugacity until studies showed that melt composition is an important factor at high pressures and temperatures. It was found that melt composition is also important for partitioning of high-valency siderophile elements. Experiments at atmospheric pressure were conducted, varying only silicate melt composition, to assess its importance for the partitioning of W, Co and Ga; these showed that the valence of the dissolving species plays an important role in determining the effect of composition on solubility. In this study, we extend the data set to higher pressures and investigate the role of silicate melt composition on the partitioning of the siderophile elements W, Ge, Ga and Ni between metallic and silicate liquid.
Spatial coding-based approach for partitioning big spatial data in Hadoop
NASA Astrophysics Data System (ADS)
Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai
2017-09-01
Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, the skewed distribution of spatial data and the varying volume of spatial vector objects make it a significant challenge to ensure both optimal performance of spatial operations and data balance in the cluster. To tackle this problem, we proposed a spatial coding-based approach for partitioning big spatial data in Hadoop. This approach first compresses the whole body of big spatial data based on a spatial coding matrix to create a sensing information set (SIS), including spatial code, size, count and other information. The SIS is then employed to build a spatial partitioning matrix, which is finally used to split all spatial objects into different partitions in the cluster. Based on our approach, neighbouring spatial objects can be partitioned into the same block. At the same time, it can also minimize the data skew in the Hadoop distributed file system (HDFS). The presented approach is compared, in a case study, against random-sampling-based partitioning using three measures: spatial index quality, data skew in HDFS, and range query performance. The experimental results show that our method based on the spatial coding technique can improve the query performance of big spatial data, as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it can also efficiently support any other distributed big spatial data system.
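As a concrete, simplified example of spatial coding used for partitioning, the sketch below uses a Z-order (Morton) code and equal-size cuts. The paper's spatial coding matrix and SIS construction are more elaborate, so treat this only as an illustration of the underlying idea; the object attributes and grid resolution are invented.

```python
def morton_code(x, y, bits=16):
    """Interleave the bits of integer grid coordinates (x, y) into a Z-order code,
    so that objects close in space tend to receive nearby codes."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def partition_by_code(objects, n_partitions, bits=16):
    """Sort objects by spatial code and cut into equally sized partitions,
    keeping neighbouring objects together while balancing partition sizes."""
    coded = sorted(objects, key=lambda o: morton_code(o['gx'], o['gy'], bits))
    size = -(-len(coded) // n_partitions)          # ceiling division
    return [coded[i:i + size] for i in range(0, len(coded), size)]

objs = [{'id': i, 'gx': i * 37 % 1024, 'gy': i * 91 % 1024} for i in range(10000)]
parts = partition_by_code(objs, 8)
```

Cutting a sorted code sequence into equal chunks is what keeps partition sizes balanced even when the raw coordinates are heavily skewed.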
Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara
2016-01-01
Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487
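A toy in-silico version of the damage-partitioning comparison is sketched below: deterministic asymmetric splitting versus symmetric-on-average stochastic splitting, with the variance of a simple fitness proxy as the quantity of interest. All parameter values (split fraction, damage input, fitness penalty) are invented for illustration and are not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(2)

def daughter_damage(d_mother, mode):
    """Split the mother's damage between two daughters: deterministic 70/30
    ('asymmetric') or symmetric on average but noisy ('stochastic')."""
    a = 0.7 if mode == 'asymmetric' else rng.beta(5, 5)   # beta(5,5): mean 0.5, wide spread
    return d_mother * a, d_mother * (1 - a)

def fitness_variance(mode, generations=8, new_damage=1.0):
    """Grow a full lineage tree, tracking damage, and report the variance of a
    fitness proxy that declines linearly with damage."""
    cells = [0.0]
    for _ in range(generations):
        nxt = []
        for d in cells:
            nxt.extend(daughter_damage(d + new_damage, mode))
        cells = nxt
    fitness = 1.0 - 0.1 * np.array(cells)
    return fitness.var()

print(fitness_variance('asymmetric'), fitness_variance('stochastic'))
```

Both strategies spread damage unevenly across the population, which is the fitness-variance effect the abstract argues selection can act on.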
Kim, Hee Seok; Lee, Dong Soo
2017-11-01
SimpleBox is an important multimedia model used to estimate the predicted environmental concentration for screening-level exposure assessment. The main objectives were (i) to quantitatively assess how the magnitude and nature of prediction bias of SimpleBox vary with the selection of observed concentration data set for optimization and (ii) to present the prediction performance of the optimized SimpleBox. The optimization was conducted using a total of 9604 observed multimedia data for 42 chemicals of four groups (i.e., polychlorinated dibenzo-p-dioxins/furans (PCDDs/Fs), polybrominated diphenyl ethers (PBDEs), phthalates, and polycyclic aromatic hydrocarbons (PAHs)). The model performance was assessed based on the magnitude and skewness of prediction bias. Monitoring data selection in terms of number of data and kind of chemicals plays a significant role in optimization of the model. The coverage of the physicochemical properties was found to be very important to reduce the prediction bias. This suggests that selection of observed data should be made such that the physicochemical property (such as vapor pressure, octanol-water partition coefficient, octanol-air partition coefficient, and Henry's law constant) range of the selected chemical groups be as wide as possible. With optimization, about 55%, 90%, and 98% of the total number of the observed concentration ratios were predicted within factors of three, 10, and 30, respectively, with negligible skewness. Copyright © 2017 Elsevier Ltd. All rights reserved.
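The "within a factor of 3/10/30" skill statistic quoted above can be computed as follows; the predicted and observed values in the example are placeholders, not data from the study.

```python
import numpy as np

def fraction_within_factor(predicted, observed, factors=(3, 10, 30)):
    """Share of predictions whose ratio to the observation lies within a factor of k."""
    log_ratio = np.abs(np.log10(np.asarray(predicted) / np.asarray(observed)))
    return {k: float(np.mean(log_ratio <= np.log10(k))) for k in factors}

pred = np.array([1.2e-3, 4.0e-2, 9.0e-1, 2.0e1])
obs = np.array([1.0e-3, 1.0e-1, 1.0e0, 1.0e0])
print(fraction_within_factor(pred, obs))
```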
The importance of having an appropriate relational data segmentation in ATLAS
NASA Astrophysics Data System (ADS)
Dimitrov, G.
2015-12-01
In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at LHC where we make use of several partitioning techniques available in Oracle 11g. With the broadly used range partitioning and its option of automatic interval partitioning we add our own logic in PLSQL procedures and scheduler jobs to sustain data sliding windows in order to enforce various data retention policies. We also make use of the new Oracle 11g reference partitioning in the Nightly Build System to achieve uniform data segmentation. However the most challenging issue was to segment the data of the new ATLAS Distributed Data Management system (Rucio), which resulted in tens of thousands list type partitions and sub-partitions. Partition and sub-partition management, index strategy, statistics gathering and queries execution plan stability are important factors when choosing an appropriate physical model for the application data management. The so-far accumulated knowledge and analysis on the new Oracle 12c version features that could be beneficial will be shared with the audience.
Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)
2002-01-01
The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes, that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For experimental performance study, we have considered both a realistic mesh problem from NASA as well as synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
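The objective of mapping partitions so as to minimize the maximum execution time can be illustrated with a simple greedy heuristic: assign the heaviest partition to the processor that would finish it earliest. This sketch ignores communication costs and the multilevel structure of MiniMax; the loads and speeds are made up.

```python
def minimax_map(partition_loads, proc_speeds):
    """Greedy longest-processing-time mapping onto heterogeneous processors,
    aiming to keep the maximum finish time small."""
    finish = [0.0] * len(proc_speeds)
    assignment = {}
    for pid, load in sorted(enumerate(partition_loads), key=lambda t: -t[1]):
        best = min(range(len(proc_speeds)),
                   key=lambda p: finish[p] + load / proc_speeds[p])
        finish[best] += load / proc_speeds[best]
        assignment[pid] = best
    return assignment, max(finish)

# four partitions of unequal work mapped onto three processors of unequal speed
print(minimax_map([40, 25, 20, 15], [1.0, 2.0, 0.5]))
```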
Takeshi Ise; Creighton M. Litton; Christian P. Giardina; Akihiko Ito
2010-01-01
Partitioning of gross primary production (GPP) to aboveground versus belowground, to growth versus respiration, and to short versus long-lived tissues exerts a strong influence on ecosystem structure and function, with potentially large implications for the global carbon budget. A recent meta-analysis of forest ecosystems suggests that carbon partitioning...
Zhu, Tianqi; Dos Reis, Mario; Yang, Ziheng
2015-03-01
Genetic sequence data provide information about the distances between species or branch lengths in a phylogeny, but not about the absolute divergence times or the evolutionary rates directly. Bayesian methods for dating species divergences estimate times and rates by assigning priors on them. In particular, the prior on times (node ages on the phylogeny) incorporates information in the fossil record to calibrate the molecular tree. Because times and rates are confounded, our posterior time estimates will not approach point values even if an infinite amount of sequence data are used in the analysis. In a previous study we developed a finite-sites theory to characterize the uncertainty in Bayesian divergence time estimation in analysis of large but finite sequence data sets under a strict molecular clock. As most modern clock dating analyses use more than one locus and are conducted under relaxed clock models, here we extend the theory to the case of relaxed clock analysis of data from multiple loci (site partitions). Uncertainty in posterior time estimates is partitioned into three sources: Sampling errors in the estimates of branch lengths in the tree for each locus due to limited sequence length, variation of substitution rates among lineages and among loci, and uncertainty in fossil calibrations. Using a simple but analogous estimation problem involving the multivariate normal distribution, we predict that as the number of loci (L) goes to infinity, the variance in posterior time estimates decreases and approaches the infinite-data limit at the rate of 1/L, and the limit is independent of the number of sites in the sequence alignment. We then confirmed the predictions by using computer simulation on phylogenies of two or three species, and by analyzing a real genomic data set for six primate species. Our results suggest that with the fossil calibrations fixed, analyzing multiple loci or site partitions is the most effective way for improving the precision of posterior time estimation. However, even if a huge amount of sequence data is analyzed, considerable uncertainty will persist in time estimates. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society of Systematic Biologists.
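A compact way to state the scaling result described above, in our own notation rather than the paper's, is

```latex
\operatorname{Var}\!\left(t \mid \text{data}\right) \;\approx\; V_{\infty} \;+\; \frac{c}{L}, \qquad L \to \infty,
```

where L is the number of loci (site partitions), V_infinity is the infinite-data limit set by the fossil calibrations and among-lineage rate variation, and the sampling contribution c/L vanishes at rate 1/L regardless of the alignment length at each locus.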
An efficient approach for treating composition-dependent diffusion within organic particles
O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.; ...
2017-09-07
Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.
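To make the contrast between constant and composition-dependent diffusivity concrete, here is a small explicit finite-difference sketch of dc/dt = d/dx( D(c) dc/dx ) with a plasticising (composition-dependent) D and a constant D. The grid, time step and diffusivity values are arbitrary illustrative choices, not those of the study.

```python
import numpy as np

def diffuse_1d(c0, D_of_c, dx, dt, steps):
    """Explicit finite-difference diffusion with a composition-dependent coefficient,
    zero-flux (closed) boundaries: dc/dt = d/dx( D(c) dc/dx )."""
    c = c0.astype(float).copy()
    for _ in range(steps):
        D = D_of_c(c)
        D_face = 0.5 * (D[1:] + D[:-1])              # diffusivity at cell faces
        flux = -D_face * np.diff(c) / dx              # Fickian flux between cells
        flux = np.concatenate(([0.0], flux, [0.0]))   # no flux through the boundaries
        c -= dt * np.diff(flux) / dx                  # dc/dt = -d(flux)/dx
    return c

x = np.linspace(0, 1, 101)
c0 = np.where(x < 0.1, 1.0, 0.0)                      # condensing component near the surface
plasticising = lambda c: 1e-3 * (1e-3) ** (1 - c)     # D rises steeply with composition c
constant = lambda c: np.full_like(c, 1e-6)
c_var = diffuse_1d(c0, plasticising, dx=0.01, dt=0.01, steps=2000)
c_fix = diffuse_1d(c0, constant, dx=0.01, dt=0.01, steps=2000)
```

Comparing c_var and c_fix shows how a plasticising component deepens its own penetration, which is the behaviour a constant-diffusivity analytical solution cannot reproduce without correction.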
Trinh, Minh Man; Tsai, Ching Lan; Hien, To Thi; Thuan, Ngo Thi; Chi, Kai Hsien; Lien, Chien Guo; Chang, Moo Been
2018-07-01
Atmospheric PCDD/F and dl-PCB samples were collected in Ho Chi Minh City, Vietnam, to address the effect of meteorological parameters, especially rainfall, on the occurrence and gas/particle partitioning of these persistent organic pollutants. The results indicate that PCDD/F and dl-PCB concentrations at the industrial site are higher than those measured at the commercial and rural sites during both rainy and dry seasons. In terms of mass concentration, ambient PCDD/F levels measured in the dry season are significantly higher than those measured in the rainy season, while dl-PCB levels do not vary significantly between rainy and dry seasons. The difference could be attributed to different gas/particle partitioning characteristics of PCDD/Fs and dl-PCBs. PCDD/Fs are found to be mainly distributed in the particle phase, while dl-PCBs are predominantly distributed in the gas phase in both rainy and dry seasons. Additionally, the Junge-Pankow and Harner-Bidleman models are applied to better understand the gas/particle partitioning of these pollutants in the atmosphere. As a result, both PCDD/Fs and dl-PCBs are under non-equilibrium gas/particle partitioning conditions; PCDD/Fs tend to reach equilibrium more easily in the rainy season, while there is no clear trend for dl-PCBs. The Harner-Bidleman model performs better in evaluating the gas/particle partitioning of PCDD/Fs, while the Junge-Pankow model gives better predictions for dl-PCBs. Copyright © 2018 Elsevier Ltd. All rights reserved.
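For reference, the two partitioning models named above have simple closed forms; a hedged sketch follows, using commonly quoted default constants (c of about 17.2 Pa cm for Junge-Pankow; log Kp = log Koa + log f_OM - 11.91 for Harner-Bidleman) and purely illustrative compound properties.

```python
import math

def phi_junge_pankow(p_L, theta=1.1e-5, c=17.2):
    """Junge-Pankow particle-bound fraction from sub-cooled liquid vapour pressure p_L (Pa).
    theta: aerosol surface area per air volume (cm^2/cm^3, an urban-like default);
    c: empirical constant (Pa*cm). Both are generic literature values, not site-specific."""
    return c * theta / (p_L + c * theta)

def phi_harner_bidleman(log_koa, f_om=0.2, tsp_ug_m3=50.0):
    """Harner-Bidleman Koa absorption model: log Kp = log Koa + log f_om - 11.91,
    then phi = Kp*TSP / (1 + Kp*TSP), with Kp in m^3/ug and TSP in ug/m^3."""
    kp = 10 ** (log_koa + math.log10(f_om) - 11.91)
    return kp * tsp_ug_m3 / (1 + kp * tsp_ug_m3)

# illustrative property values for a heavy, particle-favouring congener
print(phi_junge_pankow(p_L=1e-6), phi_harner_bidleman(log_koa=12.2))
```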
Lohse, Christian; Bassett, Danielle S; Lim, Kelvin O; Carlson, Jean M
2014-10-01
Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution in community detection, that enable us to identify and isolate structures associated with different weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
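A minimal version of the windowed/soft-thresholding idea, using off-the-shelf community detection from networkx rather than the authors' pipeline, could look like the sketch below; the random symmetric matrix stands in for a weighted connectivity matrix.

```python
import networkx as nx
import numpy as np
from networkx.algorithms import community

def modularity_vs_threshold(W, thresholds):
    """Threshold a weighted connectivity matrix at several levels and record the
    modularity of the communities found at each level (a multi-resolution curve)."""
    curve = []
    for tau in thresholds:
        A = np.where(W >= tau, W, 0.0)                 # keep only edges above the threshold
        G = nx.from_numpy_array(A)
        comms = community.greedy_modularity_communities(G, weight='weight')
        curve.append(community.modularity(G, comms, weight='weight'))
    return curve

rng = np.random.default_rng(0)
W = rng.random((30, 30))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
print(modularity_vs_threshold(W, thresholds=[0.3, 0.5, 0.7]))
```

Sweeping the threshold (and, analogously, the community-detection resolution) produces the kind of diagnostic curve the abstract describes; bipartivity can be profiled over the same sweep.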
NASA Astrophysics Data System (ADS)
Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.
2018-04-01
Upcoming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a rich laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible to the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data with image source density peaks over 170,000 per field of view (500,000 deg^-2). Our analysis demonstrates that horizontal table partitions with one-degree declination widths control the query run times. Use of an index strategy in which the partitions are densely sorted according to source declination yields a further improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that, for this logical database partitioning schema, the limiting cadence the pipeline achieved when processing IPHAS data is 25 s.
Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection
NASA Astrophysics Data System (ADS)
Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang
2017-07-01
It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by the localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and to establish a weighted low-rank sparse model for bearing fault detection. The proposed model consists of three basic components: an adaptive partition window, a nuclear norm regularization, and a weighted sequence. Firstly, owing to the periodic repetition of the impulsive feature, an adaptive partition window can be designed to transform the impulsive feature into a data matrix. The strength of the partition window is that it accumulates all local feature information and aligns it. All columns of the data matrix then share similar waveforms, and a core physical phenomenon arises: the singular values of the data matrix exhibit a sparse distribution pattern. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores the basic fact that larger singular values carry more information about the impulsive features and should be preserved as much as possible. Therefore, a weighted sequence with adaptively tuned weights inversely proportional to singular value amplitude is adopted to preserve the contribution of the large singular values. On the other hand, the proposed model is difficult to solve due to its non-convexity, and thus a new algorithm is developed to find a satisfactory stationary solution by alternately applying a proximal operator step and least-squares fitting. Moreover, the sensitivity and selection principles of the algorithmic parameters are comprehensively investigated through a set of numerical experiments, which show that the proposed method is robust and has only a few adjustable parameters. Lastly, the proposed model is applied to wind turbine (WT) bearing fault detection and its effectiveness is verified. Compared with currently popular bearing fault diagnosis techniques, wavelet analysis and spectral kurtosis, our model achieves higher diagnostic accuracy.
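The core operation implied by a weighted nuclear norm penalty is a weighted singular-value shrinkage; a simplified sketch is below. The weighting rule, threshold value and synthetic signal are illustrative, and the paper's actual solver alternates proximal steps with least-squares fitting rather than applying a single shrinkage.

```python
import numpy as np

def weighted_svt(X, base_tau=1.0, eps=1e-6):
    """Weighted singular-value thresholding: shrink each singular value by a weight
    inversely proportional to its magnitude, so large (feature-bearing) singular
    values are preserved and small (noise-dominated) ones are suppressed."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = base_tau / (s + eps)                 # large singular value -> small shrinkage
    s_shrunk = np.maximum(s - w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# periodic impulses folded into a matrix (one period per column), plus noise
rng = np.random.default_rng(0)
period, n_periods = 64, 32
impulse = np.zeros(period)
impulse[5:10] = [0.5, 1.0, -0.8, 0.4, -0.2]
X = np.tile(impulse[:, None], (1, n_periods)) + 0.3 * rng.standard_normal((period, n_periods))
X_denoised = weighted_svt(X, base_tau=3.0)
```

Because the impulse repeats in every column, the signal occupies only a few large singular values, which the weighted shrinkage leaves nearly intact while removing the noise floor.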
Predicate Oriented Pattern Analysis for Biomedical Knowledge Discovery
Shen, Feichen; Liu, Hongfang; Sohn, Sunghwan; Larson, David W.; Lee, Yugyung
2017-01-01
In the current biomedical data movement, numerous efforts have been made to convert and normalize large amounts of traditional structured and unstructured data (e.g., EHRs, reports) to semi-structured data (e.g., RDF, OWL). With the increasing amount of semi-structured data coming into the biomedical community, data integration and knowledge discovery from heterogeneous domains have become important research problems. At the application level, detection of related concepts among medical ontologies is an important goal of life science research. It is even more crucial to figure out how different concepts are related within a single ontology or across multiple ontologies by analysing predicates in different knowledge bases. However, in today's climate of information explosion, it is extremely difficult for biomedical researchers to find existing or potential predicates for linking cross-domain concepts without support from schema pattern analysis. Therefore, there is a need for a mechanism that performs predicate-oriented pattern analysis to partition heterogeneous ontologies into smaller, closer topics and generates queries to discover cross-domain knowledge from each topic. In this paper, we present such a model, which analyses predicate-oriented patterns based on their close relationships and generates a similarity matrix. Based on this similarity matrix, we apply a novel unsupervised learning algorithm to partition large data sets into smaller and closer topics and to generate meaningful queries that fully discover knowledge over a set of interlinked data sources. We have implemented a prototype system named BmQGen and evaluated the proposed model with a colorectal surgical cohort from the Mayo Clinic. PMID:28983419
A closer look at cross-validation for assessing the accuracy of gene regulatory networks and models.
Tabe-Bordbar, Shayan; Emad, Amin; Zhao, Sihai Dave; Sinha, Saurabh
2018-04-26
Cross-validation (CV) is a technique to assess the generalizability of a model to unseen data. This technique relies on assumptions that may not be satisfied when studying genomics datasets. For example, random CV (RCV) assumes that a randomly selected set of samples, the test set, well represents unseen data. This assumption doesn't hold true where samples are obtained from different experimental conditions, and the goal is to learn regulatory relationships among the genes that generalize beyond the observed conditions. In this study, we investigated how the CV procedure affects the assessment of supervised learning methods used to learn gene regulatory networks (or in other applications). We compared the performance of a regression-based method for gene expression prediction estimated using RCV with that estimated using a clustering-based CV (CCV) procedure. Our analysis illustrates that RCV can produce over-optimistic estimates of the model's generalizability compared to CCV. Next, we defined the 'distinctness' of test set from training set and showed that this measure is predictive of performance of the regression method. Finally, we introduced a simulated annealing method to construct partitions with gradually increasing distinctness and showed that performance of different gene expression prediction methods can be better evaluated using this method.
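A minimal sketch of the clustering-based CV idea is to define held-out groups with k-means and then hold out whole clusters at a time; the regression model, cluster count and synthetic data below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import GroupKFold

def clustered_cv_score(X, y, n_clusters=5):
    """Clustering-based CV: cluster the samples (e.g. by condition or expression
    profile) and hold out whole clusters, so test folds are 'distinct' from training."""
    groups = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    scores = []
    for tr, te in GroupKFold(n_splits=n_clusters).split(X, y, groups):
        model = Ridge().fit(X[tr], y[tr])
        scores.append(r2_score(y[te], model.predict(X[te])))
    return float(np.mean(scores))

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = X[:, 0] + 0.5 * rng.standard_normal(200)
print(clustered_cv_score(X, y))
```

Comparing this score against an ordinary shuffled-fold CV score on the same data illustrates the over-optimism the abstract attributes to random CV.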
Solving Multi-variate Polynomial Equations in a Finite Field
2013-06-01
Algebraic background: some algebraic and graph-theoretic definitions and basics are discussed as they pertain to this research; for a more detailed treatment, consult a graph theory text such as [10]. A graph G is a k-partite graph if V(G) can be partitioned into k subsets V1, V2, ..., Vk such that uv is an edge of G only if u and v belong to different partite sets. ...
Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation
NASA Astrophysics Data System (ADS)
Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.
2018-05-01
Typical quantifiers of Tsallis' statistical mechanics, such as the partition function Z and the mean energy 〈U〉, exhibit poles. The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a countable set of rational numbers on the q-line. These poles are dealt with using dimensional regularization techniques. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitational potential.
Composition of the core from gallium metal–silicate partitioning experiments
Blanchard, I.; Badro, J.; Siebert, J.; ...
2015-07-24
The gallium concentration (normalized to CI chondrites) in the mantle is at the same level as that of lithophile elements with similar volatility, implying that there must be little to no gallium in Earth's core. Metal-silicate partitioning experiments, however, have shown that gallium is a moderately siderophile element and should therefore be depleted in the mantle by core formation. Moreover, gallium concentrations in the mantle (4 ppm) are too high to have been brought only by the late veneer, and neither pressure, nor temperature, nor silicate composition has a large enough effect on gallium partitioning to make it lithophile. We therefore systematically investigated the effect of core composition (light element content) on the partitioning of gallium by carrying out metal-silicate partitioning experiments in a piston-cylinder press at 2 GPa between 1673 K and 2073 K. Four light elements (Si, O, S, C) were considered, and their effect was found to be sufficiently strong to make gallium lithophile. The partitioning of gallium was then modeled and parameterized as a function of pressure, temperature, redox and core composition. A continuous core formation model was used to track the evolution of gallium partitioning during core formation, for various magma ocean depths, geotherms, core light element contents, and magma ocean composition (redox) during accretion. The only model for which the final gallium concentration in the silicate Earth matched the observed value is the one involving a light-element-rich core equilibrating in a FeO-rich deep magma ocean (>1300 km) with a final pressure of at least 50 GPa. More specifically, the incorporation of S and C in the core provided successful models only for concentrations that lie far beyond their allowable cosmochemical or geophysical limits, whereas realistic O and Si amounts (less than 5 wt.%) in the core provided successful models for magma oceans deeper than 1300 km. In conclusion, these results offer a strong argument for an O- and Si-rich core, formed in a deep terrestrial magma ocean, along with oxidizing conditions.
Qu, Yanfei; Ma, Yongwen; Wan, Jinquan; Wang, Yan
2018-06-01
The silicone oil-air partition coefficients (K_SiO/A) of hydrophobic compounds are vital parameters for applying silicone oil as a non-aqueous-phase liquid in partitioning bioreactors. Due to the limited number of K_SiO/A values determined by experiment for hydrophobic compounds, there is an urgent need to model K_SiO/A values for unknown chemicals. In the present study, we developed a universal quantitative structure-activity relationship (QSAR) model using a sequential approach with macro-constitutional and micromolecular descriptors for the silicone oil-air partition coefficients (K_SiO/A) of hydrophobic compounds with large structural variance. The geometry optimization and vibrational frequencies of each chemical were calculated using hybrid density functional theory at the B3LYP/6-311G** level. Several quantum chemical parameters that reflect various intermolecular interactions as well as hydrophobicity were selected to develop the QSAR model. The result indicates that a regression model derived from log K_SiO/A, the number of non-hydrogen atoms (#nonHatoms) and the energy gap between E_LUMO and E_HOMO (E_LUMO - E_HOMO) could explain the partitioning mechanism of hydrophobic compounds between silicone oil and air. The correlation coefficient R2 of the model is 0.922, and the internal and external validation coefficients, Q2_LOO and Q2_ext, are 0.91 and 0.89 respectively, implying that the model has satisfactory goodness-of-fit, robustness, and predictive ability and thus provides a robust predictive tool to estimate log K_SiO/A values for chemicals in the applicability domain. The applicability domain of the model was visualized by the Williams plot.
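The reported model is a two-descriptor multiple linear regression; a generic sketch of fitting such a model and computing a leave-one-out Q2 is shown below. The descriptor and response values are placeholders, not the study's data set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def fit_qsar(descriptors, log_k):
    """MLR of the form log K = b0 + b1*#nonHatoms + b2*(E_LUMO - E_HOMO),
    with leave-one-out Q2 as an internal validation statistic."""
    X = np.asarray(descriptors, dtype=float)
    y = np.asarray(log_k, dtype=float)
    model = LinearRegression().fit(X, y)
    y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    return model, r2_score(y, model.predict(X)), r2_score(y, y_loo)   # R2, Q2_LOO

# columns: [#nonHatoms, E_LUMO - E_HOMO in eV]; responses: log K values (all illustrative)
X = [[8, 9.8], [10, 9.1], [12, 8.7], [14, 8.2], [7, 10.3], [16, 7.9]]
y = [2.9, 3.6, 4.1, 4.8, 2.5, 5.3]
model, r2, q2_loo = fit_qsar(X, y)
```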
NASA Astrophysics Data System (ADS)
Shope, C. L.; Maharjan, G. R.; Tenhunen, J.; Seo, B.; Kim, K.; Riley, J.; Arnhold, S.; Koellner, T.; Ok, Y. S.; Peiffer, S.; Kim, B.; Park, J.-H.; Huwe, B.
2014-02-01
Watershed-scale modeling can be a valuable tool to aid in quantification of water quality and yield; however, several challenges remain. In many watersheds, it is difficult to adequately quantify hydrologic partitioning. Data scarcity is prevalent, accuracy of spatially distributed meteorology is difficult to quantify, forest encroachment and land use issues are common, and surface water and groundwater abstractions substantially modify watershed-based processes. Our objective is to assess the capability of the Soil and Water Assessment Tool (SWAT) model to capture event-based and long-term monsoonal rainfall-runoff processes in complex mountainous terrain. To accomplish this, we developed a unique quality-control, gap-filling algorithm for interpolation of high-frequency meteorological data. We used a novel multi-location, multi-optimization calibration technique to improve estimations of catchment-wide hydrologic partitioning. The interdisciplinary model was calibrated to a unique combination of statistical, hydrologic, and plant growth metrics. Our results indicate scale-dependent sensitivity of hydrologic partitioning and substantial influence of engineered features. The addition of hydrologic and plant growth objective functions identified the importance of culverts in catchment-wide flow distribution. While this study shows the challenges of applying the SWAT model to complex terrain and extreme environments, it also demonstrates that by incorporating anthropogenic features into modeling scenarios we can enhance our understanding of the hydroecological impact.
NASA Astrophysics Data System (ADS)
William, Peter
In this dissertation several two dimensional statistical systems exhibiting discrete Z(n) symmetries are studied. For this purpose a newly developed algorithm to compute the partition function of these models exactly is utilized. The zeros of the partition function are examined in order to obtain information about the observable quantities at the critical point. This occurs in the form of critical exponents of the order parameters which characterize phenomena at the critical point. The correlation length exponent is found to agree very well with those computed from strong coupling expansions for the mass gap and with Monte Carlo results. In Feynman's path integral formalism the partition function of a statistical system can be related to the vacuum expectation value of the time ordered product of the observable quantities of the corresponding field theoretic model. Hence a generalization of ordinary scale invariance in the form of conformal invariance is focussed upon. This principle is very suitably applicable, in the case of two dimensional statistical models undergoing second order phase transitions at criticality. The conformal anomaly specifies the universality class to which these models belong. From an evaluation of the partition function, the free energy at criticality is computed, to determine the conformal anomaly of these models. The conformal anomaly for all the models considered here are in good agreement with the predicted values.
Multivariate regression model for partitioning tree volume of white oak into round-product classes
Daniel A. Yaussy; David L. Sonderman
1984-01-01
Describes the development of multivariate equations that predict the expected cubic volume of four round-product classes from independent variables composed of individual tree-quality characteristics. Although the model has limited application at this time, it does demonstrate the feasibility of partitioning total tree cubic volume into round-product classes based on...
A distributed model predictive control scheme for leader-follower multi-agent systems
NASA Astrophysics Data System (ADS)
Franzè, Giuseppe; Lucia, Walter; Tedesco, Francesco
2018-02-01
In this paper, we present a novel receding horizon control scheme for solving the formation problem of leader-follower configurations. The algorithm is based on set-theoretic ideas and is tuned for agents described by linear time-invariant (LTI) systems subject to input and state constraints. The novelty of the proposed framework relies on the capability to jointly use sequences of one-step controllable sets and polyhedral piecewise state-space partitions in order to online apply the 'better' control action in a distributed receding horizon fashion. Moreover, we prove that the design of both robust positively invariant sets and one-step-ahead controllable regions is achieved in a distributed sense. Simulations and numerical comparisons with respect to centralised and local-based strategies are finally performed on a group of mobile robots to demonstrate the effectiveness of the proposed control strategy.
Sharing the cell's bounty - organelle inheritance in yeast.
Knoblach, Barbara; Rachubinski, Richard A
2015-02-15
Eukaryotic cells replicate and partition their organelles between the mother cell and the daughter cell at cytokinesis. Polarized cells, notably the budding yeast Saccharomyces cerevisiae, are well suited for the study of organelle inheritance, as they facilitate an experimental dissection of organelle transport and retention processes. Much progress has been made in defining the molecular players involved in organelle partitioning in yeast. Each organelle uses a distinct set of factors - motor, anchor and adaptor proteins - that ensures its inheritance by future generations of cells. We propose that all organelles, regardless of origin or copy number, are partitioned by the same fundamental mechanism involving division and segregation. Thus, the mother cell keeps, and the daughter cell receives, their fair and equitable share of organelles. This mechanism of partitioning moreover facilitates the segregation of organelle fragments that are not functionally equivalent. In this Commentary, we describe how this principle of organelle population control affects peroxisomes and other organelles, and outline its implications for yeast life span and rejuvenation. © 2015. Published by The Company of Biologists Ltd.
Li, Ji; Gray, B.R.; Bates, D.M.
2008-01-01
Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions for variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multi-level logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of the methods used for estimating VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPCs in scientific data analysis.
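One of Goldstein's definitions, the latent-variable VPC for a two-level logistic model, fixes the level-1 residual variance at pi^2/3, so the calculation is a one-liner once the cluster-level variance has been estimated; the variance value used here is illustrative.

```python
import math

def vpc_latent(sigma_u2):
    """Latent-variable variance partitioning coefficient for a two-level logistic
    model: share of latent-scale variance attributable to the cluster level."""
    return sigma_u2 / (sigma_u2 + math.pi ** 2 / 3)

print(vpc_latent(0.5))   # e.g. a cluster variance of 0.5 gives roughly 13%
```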
SH^c realization of minimal model CFT: triality, poset and Burge condition
NASA Astrophysics Data System (ADS)
Fukuda, M.; Nakamura, S.; Matsuo, Y.; Zhu, R.-D.
2015-11-01
Recently an orthogonal basis of the W_N-algebra (AFLT basis) labeled by N-tuple Young diagrams was found in the context of 4D/2D duality. Recursion relations among the basis are summarized in the form of an algebra SH^c which is universal for any N. We show that it has an S_3 automorphism which is referred to as triality. We study the level-rank duality between minimal models, which is a special example of the automorphism. It is shown that the nonvanishing states in both systems are described by N or M Young diagrams with the rows of boxes appropriately shuffled. The reshuffling of rows implies there exists a partial ordering of the set which labels them. For the simplest example, one can compute the partition functions for the partially ordered set (poset) explicitly, which reproduces the Rogers-Ramanujan identities. We also study the description of minimal models by SH^c. Simple analysis reproduces some known properties of minimal models, the structure of singular vectors and the N-Burge condition in the Hilbert space.
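The first Rogers-Ramanujan identity mentioned above can be checked order by order with truncated power series; the short script below does this up to q^39. It is purely a numerical illustration of the identity itself, unrelated to the SH^c machinery.

```python
import numpy as np

N = 40  # truncate every power series at order q^N

def geom(k, n=N):
    """Coefficients of 1/(1 - q^k) as a truncated power series."""
    g = np.zeros(n)
    g[::k] = 1.0
    return g

def mul(a, b, n=N):
    """Product of two truncated power series."""
    return np.convolve(a, b)[:n]

# Left side: sum over n >= 0 of q^{n^2} / ((1-q)(1-q^2)...(1-q^n))
lhs = np.zeros(N)
lhs[0] = 1.0                      # n = 0 term
n = 1
while n * n < N:
    term = np.zeros(N)
    term[n * n] = 1.0
    for k in range(1, n + 1):
        term = mul(term, geom(k))
    lhs += term
    n += 1

# Right side: product over m >= 1 with m = 1 or 4 (mod 5) of 1/(1 - q^m)
rhs = np.zeros(N)
rhs[0] = 1.0
for m in range(1, N):
    if m % 5 in (1, 4):
        rhs = mul(rhs, geom(m))

assert np.allclose(lhs, rhs)      # the identity holds coefficient by coefficient
```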
Solute partitioning under continuous cooling conditions as a cooling rate indicator. [in lunar rocks]
NASA Technical Reports Server (NTRS)
Onorato, P. I. K.; Hopper, R. W.; Yinnon, H.; Uhlmann, D. R.; Taylor, L. A.; Garrison, J. R.; Hunter, R.
1981-01-01
A model of solute partitioning in a finite body under conditions of continuous cooling is developed for the determination of cooling rates from concentration profile data, and applied to the partitioning of zirconium between ilmenite and ulvospinel in the Apollo 15 Elbow Crater rocks. Partitioning in a layered composite solid is described numerically in terms of concentration profiles and diffusion coefficients which are functions of time and temperature, respectively; a program based on the model can be used to calculate concentration profiles for various assumed cooling rates given the diffusion coefficients in the two phases and the equilibrium partitioning ratio over a range of temperatures. In the case of the Elbow Rock gabbros, the cooling rates are calculated from measured concentration ratios 10 microns from the interphase boundaries under the assumptions of uniform and equilibrium initial conditions at various starting temperatures. It is shown that the specimens could not have had uniform concentration profiles at the previously suggested initial temperature of 1350 K. It is concluded that even under conditions where the initial temperature, grain sizes and solute diffusion coefficients are not well characterized, the model can be used to estimate the cooling rate of a grain assemblage to within an order of magnitude.
Coordinated platooning with multiple speeds
Luo, Fengqiao; Larson, Jeffrey; Munson, Todd
2018-03-22
In a platoon, vehicles travel one after another with small intervehicle distances; trailing vehicles in a platoon save fuel because they experience less aerodynamic drag. This work presents a coordinated platooning model with multiple speed options that integrates scheduling, routing, speed selection, and platoon formation/dissolution in a mixed-integer linear program that minimizes the total fuel consumed by a set of vehicles while traveling between their respective origins and destinations. The performance of this model is numerically tested on a grid network and the Chicago-area highway network. We find that the fuel-savings factor of a multivehicle system significantly depends on the time each vehicle is allowed to stay in the network; this time affects vehicles' available speed choices, possible routes, and the amount of time for coordinating platoon formation. For problem instances with a large number of vehicles, we propose and test a heuristic decomposed approach that applies a clustering algorithm to partition the set of vehicles and then routes each group separately. When the set of vehicles is large and the available computational time is small, the decomposed approach finds significantly better solutions than does the full model.
Intersecting surface defects and instanton partition functions
Pan, Yiwen; Peelaers, Wolfger
2017-07-14
We analyze intersecting surface defects inserted in interacting four-dimensional N = 2 supersymmetric quantum field theories. We employ the realization of a class of such systems as the infrared fixed points of renormalization group flows from larger theories, triggered by perturbed Seiberg-Witten monopole-like configurations, to compute their partition functions. These results are cast into the form of a partition function of 4d/2d/0d coupled systems. In conclusion, our computations provide concrete expressions for the instanton partition function in the presence of intersecting defects and we study the corresponding ADHM model.
Partitioning in Avionics Architectures: Requirements, Mechanisms, and Assurance
NASA Technical Reports Server (NTRS)
Rushby, John
1999-01-01
Automated aircraft control has traditionally been divided into distinct "functions" that are implemented separately (e.g., autopilot, autothrottle, flight management); each function has its own fault-tolerant computer system, and dependencies among different functions are generally limited to the exchange of sensor and control data. A by-product of this "federated" architecture is that faults are strongly contained within the computer system of the function where they occur and cannot readily propagate to affect the operation of other functions. More modern avionics architectures contemplate supporting multiple functions on a single, shared, fault-tolerant computer system where natural fault containment boundaries are less sharply defined. Partitioning uses appropriate hardware and software mechanisms to restore strong fault containment to such integrated architectures. This report examines the requirements for partitioning, mechanisms for their realization, and issues in providing assurance for partitioning. Because partitioning shares some concerns with computer security, security models are reviewed and compared with the concerns of partitioning.
Ferrari, Thomas; Lombardo, Anna; Benfenati, Emilio
2018-05-14
Several methods exist to develop QSAR models automatically. Some are based on indices of the presence of atoms, others on the most similar compounds, and others on molecular descriptors. Here we introduce QSARpy v1.0, a new QSAR modeling tool based on a different approach: dissimilarity. This tool fragments the molecules of the training set to extract fragments that can be associated with a difference in the property/activity value, called modulators. If the target molecule shares part of its structure with a molecule of the training set and the differences can be explained by one or more modulators, the property/activity value of the training-set molecule is adjusted using the value associated with the modulator(s). This tool is tested here on the n-octanol/water partition coefficient (Kow, usually expressed in logarithmic units as log Kow). It is a key parameter in risk assessment since it is a measure of hydrophobicity. Its widespread use makes such estimation methods very useful for reducing testing costs. Using QSARpy v1.0, we obtained a new model to predict log Kow with accurate performance (RMSE 0.43 and R^2 0.94 for the external test set), comparing favorably with other programs. QSARpy is freely available on request. Copyright © 2018 Elsevier B.V. All rights reserved.
Gas-particle partitioning of alcohol vapors on organic aerosols.
Chan, Lap P; Lee, Alex K Y; Chan, Chak K
2010-01-01
Single particle levitation using an electrodynamic balance (EDB) has been found to give accurate and direct hygroscopic measurements (gas-particle partitioning of water) for a number of inorganic and organic aerosol systems. In this paper, we extend the use of an EDB to examine the gas-particle partitioning of volatile to semivolatile alcohols, including methanol, n-butanol, n-octanol, and n-decanol, on levitated oleic acid particles. The measured Kp agreed with Pankow's absorptive partitioning model. At high n-butanol vapor concentrations (10^3 ppm), the uptake of n-butanol reduced the average molecular weight of the oleic acid particle appreciably and hence increased the Kp according to Pankow's equation. Moreover, the hygroscopicity of mixed oleic acid/n-butanol particles was higher than the predictions given by the UNIFAC model (molecular group contribution method) and the ZSR equation (additive rule), presumably due to molecular interactions between the chemical species in the mixed particles. Despite the high vapor concentrations used, these findings warrant further research on the partitioning of atmospheric organic vapors (Kp) near sources and how collectively they affect the hygroscopic properties of organic aerosols.
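For reference, Pankow's absorptive partitioning coefficient mentioned above is commonly written in the following form (a standard textbook expression, not reproduced from this abstract):

```latex
K_p \;=\; \frac{760\,R\,T\,f_{\mathrm{om}}}{\mathrm{MW}_{\mathrm{om}}\;10^{6}\;\zeta_{\mathrm{om}}\;p^{\circ}_{L}}
```

Here f_om is the mass fraction of absorbing organic matter in the particle, MW_om its mean molecular weight, zeta_om the activity coefficient of the solute in that phase, and p°_L the solute's liquid vapour pressure (in the usual units, K_p is in m^3 µg^-1 with p°_L in torr and R = 8.2 × 10^-5 m^3 atm mol^-1 K^-1). Lowering MW_om, as described above for n-butanol uptake into oleic acid, raises K_p.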
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which first partitions the nodes in catchments between processes, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Platts, J.A.; Abraham, M.H.
The partitioning of organic compounds between air and foliage and between water and foliage is of considerable environmental interest. The purpose of this work is to show that partitioning into the cuticular matrix of one particular species can be satisfactorily modeled by general equations the authors have previously developed and, hence, that the same general equations could be used to model partitioning into other plant materials of the same or different species. The general equations are linear free energy relationships that employ descriptors for polarity/polarizability, hydrogen bond acidity and basicity, dispersive effects, and volume. They have been applied to the partition of 62 very varied organic compounds between cuticular matrix of the tomato fruit, Lycopersicon esculentum, and either air (MX_a) or water (MX_w). Values of log MX_a covering a range of 12.4 log units are correlated with a standard deviation of 0.232 log unit, and values of log MX_w covering a range of 7.6 log units are correlated with an SD of 0.236 log unit. Possibilities are discussed for the prediction of new air-plant cuticular matrix and water-plant cuticular matrix partition values on the basis of the equations developed.
Global and Local Partitioning of the Charge Transferred in the Parr-Pearson Model.
Orozco-Valencia, Angel Ulises; Gázquez, José L; Vela, Alberto
2017-05-25
Through a simple proposal, the charge transfer obtained from the cornerstone theory of Parr and Pearson is partitioned, for each reactant, into two channels: an electrophilic channel, through which the species accepts electrons, and a nucleophilic channel, through which it donates electrons. It is shown that this global model allows us to determine unambiguously the charge-transfer mechanism prevailing in a given reaction. The partitioning is extended to include local effects through the Fukui functions of the reactants. This local model is applied to several emblematic reactions in organic and inorganic chemistry, and we show that, besides improving the correlations obtained with the global model, it provides valuable information on the atoms in the reactants that play the most important roles in the reaction, thus improving our understanding of the reaction under study.
NASA Astrophysics Data System (ADS)
van Westrenen, W.; Allan, N. L.; Blundy, J. D.; Purton, J. A.; Wood, B. J.
2000-05-01
We have studied the energetics of trace element incorporation into pure almandine (Alm), grossular (Gros), pyrope (Py) and spessartine (Spes) garnets (X3Al2Si3O12, with X = Fe, Ca, Mg, Mn, respectively), by means of computer simulations of perfect and defective lattices in the static limit. The simulations use a consistent set of interatomic potentials to describe the non-Coulombic interactions between the ions, and take explicit account of lattice relaxation associated with trace element incorporation. The calculated relaxation (strain) energies Urel are compared to those obtained using the Brice (1975) model of lattice relaxation, and the results compared to experimental garnet-melt trace element partitioning data interpreted using the same model. Simulated Urel associated with a wide range of homovalent (Ni, Mg, Co, Fe, Mn, Ca, Eu, Sr, Ba) and charge-compensated heterovalent (Sc, Lu, Yb, Ho, Gd, Eu, Nd, La, Li, Na, K, Rb) substitutions onto the garnet X-sites show a near-parabolic dependence on trace element radius, in agreement with the Brice model. From application of the Brice model we derived apparent X-site Young's moduli EX(1+, 2+, 3+) and the 'ideal' ionic radii r0(1+, 2+, 3+), corresponding to the minima in plots of Urel vs. radius. For both homovalent and heterovalent substitutions r0 increases in the order Py-Alm-Spes-Gros, consistent with crystallographic data on the size of garnet X-sites and with the results of garnet-melt partitioning studies. Each end-member also shows a marked increase in both the apparent EX and r0 with increasing trace element charge (Zc). The increase in EX is consistent with values obtained by fitting the Brice model to experimental garnet-melt partitioning data. However, the increase in r0 with increasing Zc is contrary to experimental observation. To estimate the influence of melt on the energetics of trace element incorporation, solution energies (Usol) were calculated for appropriate exchange reactions between garnet and melt, using binary and other oxides to simulate the cation co-ordination environment in the melt. Usol also shows a parabolic dependence on trace element radius, with inter-garnet trends in EX and r0 similar to those found for the relaxation energies. However, the r0(i+) obtained from minima in plots of Usol vs. radius are located at markedly different positions, especially for heterovalent substitutions (i = 1, 3). For each end-member garnet, r0 now decreases with increasing Zc, consistent with experiment. Furthermore, although different assumptions for the trace element environment in the melt, e.g., REE3+ (VI) vs. REE3+ (VIII), lead to parabolae with differing curvatures and minima, relative differences between end-members are always preserved. We conclude that: 1. The simulated variation in r0 and EX between garnets is largely governed by the solid phase. This stresses the overriding influence of the crystal's local environment on trace element partitioning. 2. Simulations suggest r0 in garnets varies with trace element charge, as experimentally observed. 3. Absolute values of r0 and EX can be influenced by the presence and structure of a coexisting melt. Thus, quantitative relations between r0, E and crystal chemistry should be derived from well-constrained systematic mineral-melt partitioning studies, and cannot be predicted from crystal-structural data alone.
The Partition Function in the Four-Dimensional Schwarz-Type Topological Half-Flat Two-Form Gravity
NASA Astrophysics Data System (ADS)
Abe, Mitsuko
We derive the partition functions of the Schwarz-type four-dimensional topological half-flat two-form gravity model on a K3 surface or T^4 up to on-shell one-loop corrections. In this model the bosonic moduli spaces describe an equivalence class of a trio of the Einstein-Kähler forms (the hyper-Kähler forms). The integrand of the partition function is represented by the product of some $\bar{\partial}$-torsions. The $\bar{\partial}$-torsion is the extension of the R-torsion for the de Rham complex to that for the $\bar{\partial}$-complex of a complex analytic manifold.
Time-partitioning simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Milner, Edward J.; Blech, Richard A.; Chima, Rodrick V.
1987-01-01
A technique allowing time-staggered solution of partial differential equations is presented in this report. Using this technique, called time-partitioning, simulation execution speedup is proportional to the number of processors used because all processors operate simultaneously, with each updating of the solution grid at a different time point. The technique is limited by neither the number of processors available nor by the dimension of the solution grid. Time-partitioning was used to obtain the flow pattern through a cascade of airfoils, modeled by the Euler partial differential equations. An execution speedup factor of 1.77 was achieved using a two processor Cray X-MP/24 computer.
NASA Technical Reports Server (NTRS)
Irving, A. J.; Merrill, R. B.; Singleton, D. E.
1978-01-01
An experimental study was carried out to measure partition coefficients for two rare-earth elements (Sm and Tm) and Sc among armalcolite, ilmenite, olivine and liquid coexisting in a system modeled on high-Ti mare basalt 74275. This 'primitive' sample was chosen for study because its major and trace element chemistry as well as its equilibrium phase relations at atmospheric pressure are known from previous studies. Beta-track analytical techniques were used so that partition coefficients could be measured in an environment whose bulk trace element composition is similar to that of the natural basalt. Partition coefficients for Cr and Mn were determined in the same experiments by microprobe analysis. The only equilibrium partial melting model appears to be one in which ilmenite is initially present in the source region but is consumed by melting before segregation of the high-Ti mare basalt liquid from the residue.
Copula-based analysis of rhythm
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Viola, M. L. Lanfredi
2016-06-01
In this paper we establish stochastic profiles of the rhythm for three languages: English, Japanese and Spanish. We model the increase or decrease of the acoustical energy, collected into three bands coming from the acoustic signal. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achieved using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the three marginal processes, one for each energy band, and the partition coming from the multivariate Markov chain. Then, all the partitions are linked using a copula in order to estimate the transition probabilities.
Software forecasting as it is really done: A study of JPL software engineers
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.
1993-01-01
This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is very suggestive of a forecasting life cycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include: the identification of a core set of well defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.
On the relationships among cloud cover, mixed-phase partitioning, and planetary albedo in GCMs
McCoy, Daniel T.; Tan, Ivy; Hartmann, Dennis L.; ...
2016-05-06
In this study, it is shown that CMIP5 global climate models (GCMs) that convert supercooled water to ice at relatively warm temperatures tend to have a greater mean-state cloud fraction and more negative cloud feedback in the middle and high latitude Southern Hemisphere. We investigate possible reasons for these relationships by analyzing the mixed-phase parameterizations in 26 GCMs. The atmospheric temperature where ice and liquid are equally prevalent (T5050) is used to characterize the mixed-phase parameterization in each GCM. Liquid clouds have a higher albedo than ice clouds, so, all else being equal, models with more supercooled liquid water would also have a higher planetary albedo. The lower cloud fraction in these models compensates for the higher cloud reflectivity and results in clouds that reflect shortwave radiation (SW) in reasonable agreement with observations, but gives clouds that are too bright and too few. The temperature at which supercooled liquid can remain unfrozen is strongly anti-correlated with cloud fraction in the climate mean state across the model ensemble, but we know of no robust physical mechanism to explain this behavior, especially because this anti-correlation extends through the subtropics. A set of perturbed physics simulations with the Community Atmospheric Model Version 4 (CAM4) shows that, if its temperature-dependent phase partitioning is varied and the critical relative humidity for cloud formation in each model run is also tuned to bring reflected SW into agreement with observations, then cloud fraction increases and liquid water path (LWP) decreases with T5050, as in the CMIP5 ensemble.
Modeling and control of a hybrid-electric vehicle for drivability and fuel economy improvements
NASA Astrophysics Data System (ADS)
Koprubasi, Kerem
The gradual decline of oil reserves and the increasing demand for energy over the past decades has resulted in automotive manufacturers seeking alternative solutions to reduce the dependency on fossil-based fuels for transportation. A viable technology that enables significant improvements in the overall tank-to-wheel vehicle energy conversion efficiencies is the hybridization of electrical and conventional drive systems. Sophisticated hybrid powertrain configurations require careful coordination of the actuators and the onboard energy sources for optimum use of the energy saving benefits. The term optimality is often associated with fuel economy, although other measures such as drivability and exhaust emissions are also equally important. This dissertation focuses on the design of hybrid-electric vehicle (HEV) control strategies that aim to minimize fuel consumption while maintaining good vehicle drivability. In order to facilitate the design of controllers based on mathematical models of the HEV system, a dynamic model that is capable of predicting longitudinal vehicle responses in the low-to-mid frequency region (up to 10 Hz) is developed for a parallel HEV configuration. The model is validated using experimental data from various driving modes including electric only, engine only and hybrid. The high fidelity of the model makes it possible to accurately identify critical drivability issues such as time lags, shunt, shuffle, torque holes and hesitation. Using the information derived from the vehicle model, an energy management strategy is developed and implemented on a test vehicle. The resulting control strategy has a hybrid structure in the sense that the main mode of operation (the hybrid mode) is occasionally interrupted by event-based rules to enable the use of the engine start-stop function. The changes in the driveline dynamics during this transition further contribute to the hybrid nature of the system. To address the unique characteristics of the HEV drivetrain and to ensure smooth vehicle operation during mode changes, a special control method is developed. This method is generalized to a broad class of switched systems in which the switching conditions are state dependent or are supervised. The control approach involves partitioning the state-space such that the control law is modified as the state trajectory approaches a switching set and the state is steered to a location within the partition with low transitioning cost. Away from the partitions that contain switching sets, the controller is designed to achieve any suitable control objective. In the case of the HEV control problem, this objective generally involves minimizing fuel consumption. Finally, the experimental verification of this control method is illustrated using the application that originally motivated the development of this approach: the control of a HEV driveline during the transition from electric only to hybrid mode.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, I.; Badro, J.; Siebert, J.
The gallium concentration (normalized to CI chondrites) in the mantle is at the same level as that of lithophile elements with similar volatility, implying that there must be little to no gallium in Earth's core. Metal-silicate partitioning experiments, however, have shown that gallium is a moderately siderophile element and should therefore be depleted in the mantle by core formation. Moreover, gallium concentrations in the mantle (4 ppm) are too high to be brought only by the late veneer; and neither pressure, nor temperature, nor silicate composition has a large enough effect on gallium partitioning to make it lithophile. We therefore systematically investigated the effect of core composition (light element content) on the partitioning of gallium by carrying out metal-silicate partitioning experiments in a piston-cylinder press at 2 GPa between 1673 K and 2073 K. Four light elements (Si, O, S, C) were considered, and their effect was found to be sufficiently strong to make gallium lithophile. The partitioning of gallium was then modeled and parameterized as a function of pressure, temperature, redox and core composition. A continuous core formation model was used to track the evolution of gallium partitioning during core formation, for various magma ocean depths, geotherms, core light element contents, and magma ocean compositions (redox) during accretion. The only model for which the final gallium concentration in the silicate Earth matched the observed value is the one involving a light-element-rich core equilibrating in a FeO-rich deep magma ocean (>1300 km) with a final pressure of at least 50 GPa. More specifically, the incorporation of S and C in the core provided successful models only for concentrations that lie far beyond their allowable cosmochemical or geophysical limits, whereas realistic O and Si amounts (less than 5 wt.%) in the core provided successful models for magma oceans deeper than 1300 km. In conclusion, these results offer a strong argument for an O- and Si-rich core, formed in a deep terrestrial magma ocean, along with oxidizing conditions.
NASA Astrophysics Data System (ADS)
Pagonis, Demetrios; Krechmer, Jordan E.; de Gouw, Joost; Jimenez, Jose L.; Ziemann, Paul J.
2017-12-01
Recent studies have demonstrated that organic compounds can partition from the gas phase to the walls in Teflon environmental chambers and that the process can be modeled as absorptive partitioning. Here these studies were extended to investigate gas-wall partitioning of organic compounds in Teflon tubing and inside a proton-transfer-reaction mass spectrometer (PTR-MS) used to monitor compound concentrations. Rapid partitioning of C8-C14 2-ketones and C11-C16 1-alkenes was observed for compounds with saturation concentrations (c*) in the range of 3 × 10^4 to 1 × 10^7 µg m^-3, causing delays in instrument response to step-function changes in the concentration of compounds being measured. These delays vary proportionally with tubing length and diameter and inversely with flow rate and c*. The gas-wall partitioning process that occurs in tubing is similar to what occurs in a gas chromatography column, and the measured delay times (analogous to retention times) were accurately described using a linear chromatography model where the walls were treated as an equivalent absorbing mass that is consistent with values determined for Teflon environmental chambers. The effect of PTR-MS surfaces on delay times was also quantified and incorporated into the model. The model predicts delays of an hour or more for semivolatile compounds measured under commonly employed conditions. These results and the model can enable better quantitative design of sampling systems, in particular when fast response is needed, such as for rapid transients, aircraft, or eddy covariance measurements. They may also allow estimation of c* values for unidentified organic compounds detected by mass spectrometry and could be employed to introduce differences in time series of compounds for use with factor analysis methods. Best practices are suggested for sampling organic compounds through Teflon tubing.
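A minimal sketch of the chromatography-style scaling described above: the delay grows with the tubing residence time and with the ratio of an equivalent absorbing wall mass to the compound's c*. The function name, the areal wall capacity, and all numerical values are illustrative assumptions, not parameters from the study.

```python
import math

def tubing_delay_s(length_m, inner_diam_m, flow_lpm, c_star_ug_m3,
                   wall_capacity_ug_m2=10.0):
    """Approximate gas-wall partitioning delay in Teflon tubing, treating the
    wall as an equivalent absorbing mass (the linear-chromatography picture
    described in the abstract). The delay scales with tubing length and
    diameter and inversely with flow rate and volatility c*, as reported.
    The areal wall capacity is an illustrative assumption."""
    radius = inner_diam_m / 2.0
    volume_m3 = math.pi * radius ** 2 * length_m
    surface_m2 = math.pi * inner_diam_m * length_m
    c_wall_ug_m3 = wall_capacity_ug_m2 * surface_m2 / volume_m3  # ~ 1/diameter
    flow_m3_s = flow_lpm * 1e-3 / 60.0                           # L/min -> m3/s
    residence_s = volume_m3 / flow_m3_s
    return residence_s * (1.0 + c_wall_ug_m3 / c_star_ug_m3)

# 2 m of 4 mm ID tubing sampled at 1 L/min, for a compound with c* = 1e5 ug/m3
print(tubing_delay_s(2.0, 0.004, 1.0, 1e5))
```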
DNA denaturation through a model of the partition points on a one-dimensional lattice
NASA Astrophysics Data System (ADS)
Mejdani, R.; Huseini, H.
1994-08-01
We have shown that, by using a model of a partition-point gas on a one-dimensional lattice, we can study not only the saturation curves obtained earlier for enzyme kinetics but also the denaturation process of DNA under heating, i.e. the breaking of the hydrogen bonds connecting the two strands. We think that this model, being very simple and mathematically transparent, can be advantageous for pedagogic goals or other theoretical investigations in chemistry or modern biology.
Exact deconstruction of the 6D (2,0) theory
NASA Astrophysics Data System (ADS)
Hayling, J.; Papageorgakis, C.; Pomoni, E.; Rodríguez-Gómez, D.
2017-06-01
The dimensional-deconstruction prescription of Arkani-Hamed, Cohen, Kaplan, Karch and Motl provides a mechanism for recovering the A-type (2,0) theories on T 2, starting from a four-dimensional N=2 circular-quiver theory. We put this conjecture to the test using two exact-counting arguments: in the decompactification limit, we compare the Higgs-branch Hilbert series of the 4D N=2 quiver to the "half-BPS" limit of the (2,0) superconformal index. We also compare the full partition function for the 4D quiver on S 4 to the (2,0) partition function on S 4 × T 2. In both cases we find exact agreement. The partition function calculation sets up a dictionary between exact results in 4D and 6D.
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
Metatranscriptome analyses indicate resource partitioning between diatoms in the field.
Alexander, Harriet; Jenkins, Bethany D; Rynearson, Tatiana A; Dyhrman, Sonya T
2015-04-28
Diverse communities of marine phytoplankton carry out half of global primary production. The vast diversity of the phytoplankton has long perplexed ecologists because these organisms coexist in an isotropic environment while competing for the same basic resources (e.g., inorganic nutrients). Differential niche partitioning of resources is one hypothesis to explain this "paradox of the plankton," but it is difficult to quantify and track variation in phytoplankton metabolism in situ. Here, we use quantitative metatranscriptome analyses to examine pathways of nitrogen (N) and phosphorus (P) metabolism in diatoms that cooccur regularly in an estuary on the east coast of the United States (Narragansett Bay). Expression of known N and P metabolic pathways varied between diatoms, indicating apparent differences in resource utilization capacity that may prevent direct competition. Nutrient amendment incubations skewed N/P ratios, elucidating nutrient-responsive patterns of expression and facilitating a quantitative comparison between diatoms. The resource-responsive (RR) gene sets deviated in composition from the metabolic profile of the organism, being enriched in genes associated with N and P metabolism. Expression of the RR gene set varied over time and differed significantly between diatoms, resulting in opposite transcriptional responses to the same environment. Apparent differences in metabolic capacity and the expression of that capacity in the environment suggest that diatom-specific resource partitioning was occurring in Narragansett Bay. This high-resolution approach highlights the molecular underpinnings of diatom resource utilization and how cooccurring diatoms adjust their cellular physiology to partition their niche space.
Thermodynamics of Anharmonic Systems: Uncoupled Mode Approximations for Molecules
Li, Yi-Pei; Bell, Alexis T.; Head-Gordon, Martin
2016-05-26
The partition functions, heat capacities, entropies, and enthalpies of selected molecules were calculated using uncoupled mode (UM) approximations, where the full-dimensional potential energy surface for internal motions was modeled as a sum of independent one-dimensional potentials for each mode. The computational cost of such approaches scales the same with molecular size as standard harmonic oscillator vibrational analysis using harmonic frequencies (HO hf). To compute thermodynamic properties, a computational protocol for obtaining the energy levels of each mode was established. The accuracy of the UM approximation depends strongly on how the one-dimensional potentials of each mode are defined. If the potentials are determined by the energy as a function of displacement along each normal mode (UM-N), the accuracies of the calculated thermodynamic properties are not significantly improved versus the HO hf model. Significant improvements can be achieved by constructing potentials for internal rotations and vibrations using the energy surfaces along the torsional coordinates and the remaining vibrational normal modes, respectively (UM-VT). For hydrogen peroxide and its isotopologs at 300 K, UM-VT captures more than 70% of the partition functions on average. By contrast, the HO hf model and UM-N can capture no more than 50%. For a selected test set of C2 to C8 linear and branched alkanes and species with different moieties, the enthalpies calculated using the HO hf model, UM-N, and UM-VT are all quite accurate compared with reference values, though the RMS errors of the HO model and UM-N are slightly higher than UM-VT. However, the accuracies in entropy calculations differ significantly between these three models. For the same test set, the RMS error of the standard entropies calculated by UM-VT is 2.18 cal mol^-1 K^-1 at 1000 K. By contrast, the RMS errors obtained using the HO model and UM-N are 6.42 and 5.73 cal mol^-1 K^-1, respectively. For a test set composed of nine alkanes ranging from C5 to C8, the heat capacities calculated with the UM-VT model agree with the experimental values to within an RMS error of 0.78 cal mol^-1 K^-1, which is less than one-third of the RMS error of the HO hf (2.69 cal mol^-1 K^-1) and UM-N (2.41 cal mol^-1 K^-1) models.
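A minimal sketch of the uncoupled-mode bookkeeping described above: each internal mode contributes an independent one-dimensional partition function built from its own energy levels, and the molecular total is their product. The energy levels below are placeholders; in the actual protocol they would come from solving each one-dimensional potential.

```python
import numpy as np

K_B_CM = 0.69503476  # Boltzmann constant in cm^-1 / K

def mode_partition_function(levels_cm1, temperature_k):
    """1D partition function of a single uncoupled mode from its energy
    levels (cm^-1), measured from the mode's own ground state."""
    levels = np.asarray(levels_cm1, dtype=float)
    return np.exp(-(levels - levels.min()) / (K_B_CM * temperature_k)).sum()

def total_partition_function(all_mode_levels, temperature_k):
    """Uncoupled-mode total: product of the per-mode partition functions."""
    q = 1.0
    for levels in all_mode_levels:
        q *= mode_partition_function(levels, temperature_k)
    return q

# Illustrative example: two harmonic-like modes and one anharmonic torsion
modes = [
    np.arange(20) * 1500.0,             # stiff stretch, 1500 cm^-1 spacing
    np.arange(40) * 300.0,              # soft bend, 300 cm^-1 spacing
    [0, 15, 40, 90, 160, 250, 360],     # torsion-like, unevenly spaced levels
]
print(total_partition_function(modes, 300.0))
```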
PAH concentrations simulated with the AURAMS-PAH chemical transport model over Canada and the USA
NASA Astrophysics Data System (ADS)
Galarneau, E.; Makar, P. A.; Zheng, Q.; Narayan, J.; Zhang, J.; Moran, M. D.; Bari, M. A.; Pathela, S.; Chen, A.; Chlumsky, R.
2013-07-01
The off-line Eulerian AURAMS chemical transport model was adapted to simulate the atmospheric fate of seven PAHs: phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene + triphenylene, and benzo[a]pyrene. The model was then run for the year 2002 with hourly output on a grid covering southern Canada and the continental USA with 42 km horizontal grid spacing. Model predictions were compared to ~ 5000 24 h average PAH measurements from 45 sites, eight of which also provided data on particle/gas partitioning which had been modelled using two alternative schemes. This is the first known regional modelling study for PAHs over a North American domain and the first modelling study at any scale to compare alternative particle/gas partitioning schemes against paired field measurements. Annual average modelled total (gas + particle) concentrations were statistically indistinguishable from measured values for fluoranthene, pyrene and benz[a]anthracene whereas the model underestimated concentrations of phenanthrene, anthracene and chrysene + triphenylene. Significance for benzo[a]pyrene performance was close to the statistical threshold and depended on the particle/gas partitioning scheme employed. On a day-to-day basis, the model simulated total PAH concentrations to the correct order of magnitude the majority of the time. Model performance differed substantially between measurement locations and the limited available evidence suggests that the model spatial resolution was too coarse to capture the distribution of concentrations in densely populated areas. A more detailed analysis of the factors influencing modelled particle/gas partitioning is warranted based on the findings in this study.
Chiou, C.T.
1985-01-01
Triolein-water partition coefficients (Ktw) have been determined for 38 slightly water-soluble organic compounds, and their magnitudes have been compared with the corresponding octanol-water partition coefficients (Kow). In the absence of major solvent-solute interaction effects in the organic solvent phase, the conventional treatment (based on Raoult's law) predicts sharply lower partition coefficients for most of the solutes in triolein because of its considerably higher molecular weight, whereas the Flory-Huggins treatment predicts higher partition coefficients with triolein. The data are in much better agreement with the Flory-Huggins model. As expected from the similarity in the partition coefficients, the water solubility (which was previously found to be the major determinant of the Kow) is also the major determinant for the Ktw. When the published BCF values (bioconcentration factors) of organic compounds in fish are based on the lipid content rather than on total mass, they are approximately equal to the Ktw, which suggests at least near equilibrium for solute partitioning between water and fish lipid. The close correlation between Ktw and Kow suggests that Kow is also a good predictor for lipid-water partition coefficients and bioconcentration factors.
NASA Astrophysics Data System (ADS)
Wei, Z.; Lee, X.; Wen, X.; Xiao, W.
2017-12-01
Quantification of the contribution of transpiration (T) to evapotranspiration (ET) is a requirement for understanding changes in carbon assimilation and water cycling in a changing environment. So far, few studies have examined seasonal variability of T/ET and compared different ET partitioning methods under natural conditions across diverse agro-ecosystems. In this study, we apply a two-source model to partition ET for three agro-ecosystems (rice, wheat and corn). The model-estimated T/ET ranges from 0 to 1, with a near continuous increase over time in the early growing season when leaf area index (LAI) is less than 2.5 and then convergence towards a stable value beyond LAI of 2.5. The seasonal change in T/ET can be described well as a function of LAI, implying that LAI is a first-order factor affecting ET partitioning. The two-source model results show that the growing-season (May - September for rice, April - June for wheat and June to September for corn) T/ET is 0.50, 0.84 and 0.64, while an isotopic approach shows that T/ET is 0.74, 0.93 and 0.81 for rice, wheat and maize, respectively. The two-source model results are supported by soil lysimeter and eddy covariance measurements made during the same time period for wheat (0.87). Uncertainty analysis suggests that further improvements to the Craig-Gordon model prediction of the evaporation isotope composition and to measurement of the isotopic composition of ET are necessary to achieve accurate flux partitioning at the ecosystem scale using water isotopes as tracers.
NASA Astrophysics Data System (ADS)
Wang, F.; Annable, M. D.; Jawitz, J. W.
2012-12-01
The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a PCE-contaminated dry cleaner site, located in Jacksonville, Florida. The EST is an analytical solution with field-measurable input parameters. Here, measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ alcohol (ethanol) flood. In addition, a simulated partitioning tracer test from a calibrated spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The ethanol prediction based on both the field partitioning tracer test and the UTCHEM tracer test simulation closely matched the field data. The PCE EST prediction showed a peak shift to an earlier arrival time that was concluded to be caused by well screen interval differences between the field tracer test and alcohol flood. This observation was based on a modeling assessment of potential factors that may influence predictions by using UTCHEM simulations. The imposed injection and pumping flow pattern at this site for both the partitioning tracer test and alcohol flood was more complex than the natural gradient flow pattern (NGFP). Both the EST model and UTCHEM were also used to predict PCE dissolution under natural gradient conditions, with much simpler flow patterns than the forced-gradient double five spot of the alcohol flood. The NGFP predictions based on parameters determined from tracer tests conducted with complex flow patterns underestimated PCE concentrations and total mass removal. This suggests that the flow patterns influence aqueous dissolution and that the aqueous dissolution under the NGFP is more efficient than dissolution under complex flow patterns.
ERIC Educational Resources Information Center
Chhabra, Meenakshi
2017-01-01
This article examines singular historical narratives of the 1947 British India Partition in four history textbooks from India, Pakistan, Bangladesh, and Britain, respectively. Drawing on analysis and work in the field, this study proposes a seven-module "integrated snail model" with a human rights orientation that can be applied to…
K-Partite RNA Secondary Structures
NASA Astrophysics Data System (ADS)
Jiang, Minghui; Tejada, Pedro J.; Lasisi, Ramoni O.; Cheng, Shanhong; Fechser, D. Scott
RNA secondary structure prediction is a fundamental problem in structural bioinformatics. The prediction problem is difficult because RNA secondary structures may contain pseudoknots formed by crossing base pairs. We introduce k-partite secondary structures as a simple classification of RNA secondary structures with pseudoknots. An RNA secondary structure is k-partite if it is the union of k pseudoknot-free sub-structures. Most known RNA secondary structures are either bipartite or tripartite. We show that there exists a constant number k such that any secondary structure can be modified into a k-partite secondary structure with approximately the same free energy. This offers a partial explanation of the prevalence of k-partite secondary structures with small k. We give a complete characterization of the computational complexities of recognizing k-partite secondary structures for all k ≥ 2, and show that this recognition problem is essentially the same as the k-colorability problem on circle graphs. We present two simple heuristics, iterated peeling and first-fit packing, for finding k-partite RNA secondary structures. For maximizing the number of base pair stackings, our iterated peeling heuristic achieves a constant approximation ratio of at most k for 2 ≤ k ≤ 5, and at most $\frac{6}{1-(1-6/k)^k} \le \frac{6}{1-e^{-6}} < 6.01491$ for k ≥ 6. Experiment on sequences from PseudoBase shows that our first-fit packing heuristic outperforms the leading method HotKnots in predicting RNA secondary structures with pseudoknots. Source code, data set, and experimental results are available at
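A minimal sketch of a first-fit packing heuristic of the kind named above, under the usual definition of crossing base pairs; the toy input is illustrative and not taken from PseudoBase.

```python
def crosses(p, q):
    """Two base pairs (i, j) and (k, l) cross if exactly one endpoint of one
    pair lies strictly between the endpoints of the other."""
    (i, j), (k, l) = sorted(p), sorted(q)
    return (i < k < j < l) or (k < i < l < j)

def first_fit_packing(base_pairs):
    """Greedy first-fit packing of base pairs into pseudoknot-free layers:
    each pair goes into the first layer it does not cross; the number of
    layers used is the k of the resulting k-partite structure."""
    layers = []
    for pair in base_pairs:
        for layer in layers:
            if not any(crosses(pair, other) for other in layer):
                layer.append(pair)
                break
        else:
            layers.append([pair])
    return layers

# Toy structure with one crossing (a simple pseudoknot): two layers suffice
print(first_fit_packing([(1, 10), (2, 9), (5, 15), (6, 14)]))
```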
NASA Astrophysics Data System (ADS)
Yuan, Quan; Ma, Guangcai; Xu, Ting; Serge, Bakire; Yu, Haiying; Chen, Jianrong; Lin, Hongjun
2016-10-01
Poly-/perfluoroalkyl substances (PFASs) are a class of synthetic fluorinated organic substances that raise increasing concern because of their environmental persistence, bioaccumulation and widespread presence in various environmental media and organisms. PFASs can be released into the atmosphere through both direct and indirect sources, and the gas/particle partition coefficient (KP) is an important parameter that helps us to understand their atmospheric behavior. In this study, we developed a temperature-dependent predictive model for log KP of PFASs and analyzed the molecular mechanism that governs their partitioning equilibrium between the gas phase and the particle phase. All theoretical computation was carried out at the B3LYP/6-31G(d,p) level based on neutral molecular structures with the Gaussian 09 program package. The regression model has good statistical performance and robustness. The application domain has also been defined according to OECD guidance. The mechanism analysis shows that electrostatic interaction and dispersion interaction play the most important roles in the partitioning equilibrium. The developed model can be used to predict log KP values of neutral fluorotelomer alcohols and perfluoroalkane sulfonamides/sulfonamidoethanols with different substitutions at nitrogen atoms, providing basic data for their ecological risk assessment.
A comparison of two methods for determining copper partitioning in oxidized sediments
Luoma, S.N.
1986-01-01
Model estimations of the proportion of Cu in oxidized sediments associated with extractable organic materials show some agreement with the proportion of Cu extracted from those sediments with ammonium hydroxide. Data were from 17 estuaries of widely differing sediment chemistry. The modelling and extraction methods agreed best where organic materials were either in very high concentrations, relative to other sediment components, or in very low concentrations. In the range of component concentrations where the model predicted Cu should be distributed among a variety of components, agreement between the methods was poor. Both approaches indicated that Cu was predominantly partitioned to organic materials in some sediments, and predominantly partitioned to other components (most probably iron oxides and manganese oxides) in other sediments, and that these differences were related to the relative abundances of the specific components in the sediment. Although the results of the two methods of estimating Cu partitioning to organics correlated significantly among 24 stations from the 17 estuaries, the variability in the relationship suggested refinement of parameter values and verification of some important assumptions were essential to the further development of a reasonable model. © 1986.
Copula-based prediction of economic movements
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Hirsh, I. D.
2016-06-01
In this paper we model the discretized returns of two paired time series, the BM&FBOVESPA Dividend Index and the BM&FBOVESPA Public Utilities Index, using multivariate Markov models. The discretization corresponds to three categories: high losses, high profits and the complementary periods of the series. In technical terms, the maximal memory that can be considered for a Markov model can be derived from the size of the alphabet and dataset. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achieved using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the two marginal processes and the partition corresponding to the multivariate Markov chain. In order to estimate the transition probabilities, all the partitions are linked using a copula. In our application this strategy provides a significant improvement in the movement predictions.
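A minimal sketch of the copula-linking idea: two marginal distributions over the discretized categories are joined into a single joint distribution through a copula. The Gaussian copula family, the correlation value, and the marginal probabilities below are illustrative assumptions, not the construction actually used in the paper.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_joint(p_x, p_y, rho):
    """Join two marginal pmfs over ordered categories into a joint pmf using
    a Gaussian copula with correlation rho: rectangle probabilities of the
    bivariate normal evaluated at the marginal quantile cut points."""
    eps = 1e-12
    zx = norm.ppf(np.clip(np.concatenate(([0.0], np.cumsum(p_x))), eps, 1 - eps))
    zy = norm.ppf(np.clip(np.concatenate(([0.0], np.cumsum(p_y))), eps, 1 - eps))
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    joint = np.zeros((len(p_x), len(p_y)))
    for i in range(len(p_x)):
        for j in range(len(p_y)):
            joint[i, j] = (mvn.cdf([zx[i + 1], zy[j + 1]]) - mvn.cdf([zx[i], zy[j + 1]])
                           - mvn.cdf([zx[i + 1], zy[j]]) + mvn.cdf([zx[i], zy[j]]))
    return joint

# Marginal "high loss / normal / high profit" probabilities for two indices
print(gaussian_copula_joint([0.1, 0.8, 0.1], [0.15, 0.7, 0.15], rho=0.6).round(3))
```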
Vijver, Martina G; Spijker, Job; Vink, Jos P M; Posthuma, Leo
2008-12-01
Metals in floodplain soils and sediments (deposits) can originate from lithogenic and anthropogenic sources, and their availability for uptake in biota is hypothesized to depend on both origin and local sediment conditions. In criteria-based environmental risk assessments, these issues are often neglected, implying that local risks are often over-estimated. Current problem definitions in river basin management tend to require a refined, site-specific focus, resulting in a need to address both aspects. This paper focuses on the determination of local environmental availabilities of metals in fluvial deposits by addressing both the origins of the metals and their partitioning over the solid and solution phases. The environmental availability of metals is assumed to be a key force influencing exposure levels in field soils and sediments. Anthropogenic enrichments of Cu, Zn and Pb in top layers could be distinguished from lithogenic background concentrations and described using an aluminium proxy. Cd in top layers was attributed almost fully to anthropogenic enrichment. Anthropogenic enrichments of Cu and Zn further appeared to be represented well by cold 2 M HNO3 extraction of site samples. For Pb the extractions over-estimated the enrichments. Metal partitioning was measured, and measurements were compared to predictions generated by an empirical regression model and by a mechanistic-kinetic model. The partitioning models predicted metal partitioning in floodplain deposits to within about one order of magnitude, though a large inter-sample variability was found for Pb.
Aggregation models on hypergraphs
NASA Astrophysics Data System (ADS)
Alberici, Diego; Contucci, Pierluigi; Mingione, Emanuele; Molari, Marco
2017-01-01
Following a newly introduced approach by Rasetti and Merelli, we investigate the possibility of extracting topological information about the space in which interacting systems are modelled. From statistical data on their observable quantities, such as the correlation functions, we show how to reconstruct the activities of their constitutive parts, which embed the topological information. The procedure is implemented on a class of polymer models on hypergraphs with hard-core interactions. We show that the model fulfils a set of iterative relations for the partition function that generalise those introduced by Heilmann and Lieb for the monomer-dimer case. After translating those relations into structural identities for the correlation functions we use them to test the precision and the robustness of the inverse problem. Finally, the possible presence of a further interaction of peer-to-peer type is considered and a criterion to discover it is identified.
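For orientation, the Heilmann-Lieb relations mentioned above reduce, in the ordinary monomer-dimer (graph) case, to a simple deletion recursion for the matching partition function; the sketch below implements that classical case only, not the hypergraph generalisation developed in the paper.

```python
from functools import lru_cache

def monomer_dimer_Z(adj, dimer_weight=1.0):
    """Monomer-dimer partition function of a graph via the classic
    Heilmann-Lieb recursion: pick a vertex v, then either leave it as a
    monomer (remove v) or match it to a neighbour u (remove v and u).
    `adj` maps each vertex to the set of its neighbours."""
    vertices = frozenset(adj)

    @lru_cache(maxsize=None)
    def z(remaining):
        if not remaining:
            return 1.0
        v = min(remaining)                 # any deterministic pivot choice
        rest = remaining - {v}
        total = z(rest)                    # v stays a monomer
        for u in adj[v]:
            if u in rest:                  # edge v-u occupied by a dimer
                total += dimer_weight * z(rest - {u})
        return total

    return z(vertices)

# 4-cycle: 1 empty matching + 4 single edges + 2 perfect matchings = 7
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(monomer_dimer_Z(square))  # 7.0
```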
IHC-TM connect-disconnect and efferent control V.
Crane, H D
1982-07-01
Four previous papers in this series have explored how the idea of a set of disconnected inner hair cells (IHCs) that can "impact" the tectorial membrane (TM) is consistent with psychophysical data. This paper extends the model and explores the potential for mechanical interaction between the IHCs and outer hair cells (OHCs). In particular, it is speculated that the advantage of IHC-TM disconnect is extended dynamic range, and that movement of the OHCs and TM, under efferent control, constitutes a mechanical servo system for adjusting IHC-TM spacing along the cochlear partition to achieve this extended range.
Revisiting the choice of the driving temperature for eddy covariance CO2 flux partitioning
Wohlfahrt, Georg; Galvagno, Marta
2017-01-01
So-called CO2 flux partitioning algorithms are widely used to partition the net ecosystem CO2 exchange into the two component fluxes, gross primary productivity and ecosystem respiration. Common CO2 flux partitioning algorithms conceptualize ecosystem respiration to originate from a single source, requiring the choice of a corresponding driving temperature. Using a conceptual dual-source respiration model, consisting of an above- and a below-ground respiration source each driven by a corresponding temperature, we demonstrate that the typical phase shift between air and soil temperature gives rise to a hysteresis relationship between ecosystem respiration and temperature. The hysteresis proceeds in a clockwise fashion if soil temperature is used to drive ecosystem respiration, while a counter-clockwise response is observed when ecosystem respiration is related to air temperature. As a consequence, nighttime ecosystem respiration is smaller than daytime ecosystem respiration when referenced to soil temperature, while the reverse is true for air temperature. We confirm these qualitative modelling results using measurements of day and night ecosystem respiration made with opaque chambers in a short-statured mountain grassland. Inferring daytime from nighttime ecosystem respiration or vice versa, as attempted by CO2 flux partitioning algorithms, using a single-source respiration model is thus an oversimplification resulting in biased estimates of ecosystem respiration. We discuss the likely magnitude of the bias, options for minimizing it and conclude by emphasizing that the systematic uncertainty of gross primary productivity and ecosystem respiration inferred through CO2 flux partitioning needs to be better quantified and reported. PMID:28439145
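A toy version of the conceptual dual-source picture described above: an above-ground source driven by air temperature and a below-ground source driven by a phase-lagged soil temperature, each with a Q10 response. The sinusoidal forcing and all parameter values are illustrative assumptions; plotting R_eco against either temperature alone traces the hysteresis loops discussed in the abstract.

```python
import numpy as np

def dual_source_respiration(hours, phase_lag_h=6.0, q10=2.0, t_ref=10.0,
                            r_above_ref=1.0, r_below_ref=1.0):
    """Toy dual-source ecosystem respiration: an above-ground component driven
    by air temperature and a below-ground component driven by soil temperature,
    which lags air temperature by phase_lag_h hours. All values are
    illustrative assumptions."""
    t_air = 15.0 + 8.0 * np.sin(2 * np.pi * (hours - 9.0) / 24.0)
    t_soil = 15.0 + 4.0 * np.sin(2 * np.pi * (hours - 9.0 - phase_lag_h) / 24.0)
    r_above = r_above_ref * q10 ** ((t_air - t_ref) / 10.0)
    r_below = r_below_ref * q10 ** ((t_soil - t_ref) / 10.0)
    return t_air, t_soil, r_above + r_below

hours = np.arange(0.0, 24.0, 0.5)
t_air, t_soil, r_eco = dual_source_respiration(hours)

# The same air temperature occurs in the morning and the evening, but soil
# temperature (and hence total respiration) differs: a hysteresis loop.
i_am, i_pm = list(hours).index(9.0), list(hours).index(21.0)
print(t_air[i_am], t_air[i_pm])   # equal air temperatures
print(r_eco[i_am], r_eco[i_pm])   # different ecosystem respiration
```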
Weights and topology: a study of the effects of graph construction on 3D image segmentation.
Grady, Leo; Jolly, Marie-Pierre
2008-01-01
Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
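A minimal sketch of the graph-construction step that the study examines is given below: voxels are nodes, edges connect 6-neighbours, and edge weights follow the commonly used Gaussian weighting of intensity differences, w_ij = exp(-beta * (I_i - I_j)^2). The 6-connectivity and the value of beta are exactly the kind of assumptions whose influence the paper evaluates; the random volume stands in for a real 3D image.

    import numpy as np

    def build_weighted_graph(volume, beta=10.0):
        """Return edge list (i, j) and Gaussian weights for a 3D intensity volume."""
        idx = np.arange(volume.size).reshape(volume.shape)
        edges, weights = [], []
        for axis in range(3):  # 6-connectivity: one offset per axis
            a = np.take(idx, np.arange(volume.shape[axis] - 1), axis=axis).ravel()
            b = np.take(idx, np.arange(1, volume.shape[axis]), axis=axis).ravel()
            diff = volume.ravel()[a] - volume.ravel()[b]
            edges.append(np.stack([a, b], axis=1))
            weights.append(np.exp(-beta * diff ** 2))
        return np.concatenate(edges), np.concatenate(weights)

    volume = np.random.rand(8, 8, 8)  # stand-in for a 3D medical image
    edges, weights = build_weighted_graph(volume)
    print(edges.shape, weights.min(), weights.max())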
NASA Astrophysics Data System (ADS)
Mitchell, K.; Xia, Y.; Ek, M. B.; Mocko, D. M.; Kumar, S.; Peters-Lidard, C. D.
2016-12-01
NLDAS is a multi-institutional collaborative project sponsored by NOAA's Climate Program Office and NASA's Terrestrial Hydrological Program. NLDAS has a long successful history of producing soil moisture, snow cover, total runoff and streamflow products via application of surface meteorology and precipitation datasets to drive four land-surface models (i.e., Noah, Mosaic, SAC, VIC). The purpose of the NLDAS system is to support numerous research and operational applications in the land modeling and water resources management communities. Since the operational NLDAS version was successfully implemented at NCEP in August 2014, NLDAS products have been used by over 5000 users annually worldwide, including academia, governmental agencies, and private enterprises. Over 71 million files and 144 TB of data were downloaded in 2015. As we endeavor to increase the quality and breadth of NLDAS products, a joint effort between NASA and NCEP is underway to enable the assimilation of hydrology-relevant remote sensing datasets within NLDAS through the NASA Land Information System (LIS). The use of LIS will also enable easier transition of newly upgraded land surface models into NCEP NLDAS operations. Cold season processes significantly affect water and energy cycles and their partitioning. As such, in the evaluation of NLDAS systems it is important to assess water and energy exchanges and partitioning processes over high elevations. The Rocky Mountain region of the western U.S. is chosen as such a region to analyze and compare snow water equivalent (SWE), snow cover, snow melt, snow sublimation, total runoff, and sensible and latent heat fluxes. Reference data sets (observation-based and reanalysis) of monthly SWE, streamflow, evapotranspiration, GRACE-based total water storage change, and energy fluxes are used to evaluate model-simulated results. The results show several key factors that affect model simulations: (1) forcing errors such as precipitation partitioning into snowfall and rainfall, (2) snow albedo, (3) refreezing of melted snow, (4) boundary layer stability, and (5) freezing and thawing of soil. Though the anomaly correlations indicate good agreement with the observations or reanalysis products, large quantitative differences are evident in certain cases.
Andrić, Filip; Šegan, Sandra; Dramićanin, Aleksandra; Majstorović, Helena; Milojković-Opsenica, Dušanka
2016-08-05
The soil-water partition coefficient normalized to organic carbon content (KOC) is one of the crucial properties influencing the fate of organic compounds in the environment. Chromatographic methods are a well-established alternative to the direct sorption techniques used for KOC determination. The present work proposes reversed-phase thin-layer chromatography (RP-TLC) as a simpler, yet equally accurate, method compared with the officially recommended HPLC technique. Several TLC systems were studied, including octadecyl-(RP18) and cyano-(CN) modified silica layers in combination with methanol-water and acetonitrile-water mixtures as mobile phases. In total 50 compounds of different molecular shape and size and of varying ability to establish specific interactions were selected (phenols, benzodiazepines, triazine herbicides, and polyaromatic hydrocarbons). A calibration set of 29 compounds with known logKOC values determined by sorption experiments was used to build simple univariate calibrations as well as Principal Component Regression (PCR) and Partial Least Squares (PLS) models between logKOC and TLC retention parameters. The models exhibit good statistical performance, indicating that CN-layers contribute better to logKOC modeling than RP18-silica. The most promising TLC methods, the officially recommended HPLC method, and four in silico estimation approaches were compared by the non-parametric Sum of Ranking Differences (SRD) approach. The best estimates of logKOC values were achieved by simple univariate calibration of TLC retention data involving CN-silica layers and a moderate content of methanol (40-50% v/v); they ranked far better than the officially recommended HPLC method, which placed in the middle. The worst estimates were obtained from in silico computations based on the octanol-water partition coefficient. A Linear Solvation Energy Relationship study revealed that the increased polarity of CN-layers over RP18 in combination with methanol-water mixtures is the key to better modeling of logKOC, through a significant reduction of the dipolar and proton-accepting influence of the mobile phase as well as an enhanced contribution of excess molar refraction in the chromatographic systems. Copyright © 2016 Elsevier B.V. All rights reserved.
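A minimal sketch of the univariate calibration favoured above is shown below: a straight line is fitted between the logKOC values of the calibration compounds (known from sorption experiments) and their TLC retention parameters (e.g. RM values on CN-silica with 40-50% v/v methanol), and the line is then used to estimate logKOC for new compounds. All numbers below are invented placeholders, not data from the study.

    import numpy as np

    rm_calibration = np.array([0.12, 0.35, 0.48, 0.71, 0.95, 1.20])  # hypothetical RM values
    logkoc_known   = np.array([1.8, 2.4, 2.7, 3.3, 3.9, 4.5])        # hypothetical sorption logKOC

    slope, intercept = np.polyfit(rm_calibration, logkoc_known, 1)

    def predict_logkoc(rm):
        """Estimate logKOC of a test compound from its RM value via the calibration line."""
        return slope * rm + intercept

    print(f"logKOC = {slope:.2f} * RM + {intercept:.2f}")
    print("predicted logKOC for RM = 0.6:", round(predict_logkoc(0.6), 2))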
Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.
Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka
2014-02-01
In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping typically needs to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior and we use conjugate priors for the Markov chain parameters, which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM algorithm with a greedy search. This is demonstrated to be faster and to yield highly accurate results compared to previously suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain.
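The conjugacy that makes partition comparison analytical can be sketched as follows: with a Dirichlet prior on each row of a first-order Markov transition matrix, the marginal likelihood of the sequences assigned to one cluster depends only on the pooled transition counts. The first-order restriction, the DNA alphabet and the symmetric prior alpha below are illustrative assumptions, not the paper's full Dirichlet-process formulation.

    import numpy as np
    from scipy.special import gammaln

    def transition_counts(seqs, alphabet="ACGT"):
        """Pool first-order transition counts over all sequences in a cluster."""
        index = {s: i for i, s in enumerate(alphabet)}
        counts = np.zeros((len(alphabet), len(alphabet)))
        for seq in seqs:
            for a, b in zip(seq[:-1], seq[1:]):
                counts[index[a], index[b]] += 1
        return counts

    def log_marginal_likelihood(counts, alpha=1.0):
        """Dirichlet-multinomial log marginal likelihood, summed over transition contexts."""
        k = counts.shape[1]
        per_row = (gammaln(k * alpha) - gammaln(k * alpha + counts.sum(axis=1))
                   + (gammaln(alpha + counts) - gammaln(alpha)).sum(axis=1))
        return per_row.sum()

    cluster = ["ACGTACGT", "ACGTTGCA", "ACCCGTGT"]  # toy reads assigned to one cluster
    print(log_marginal_likelihood(transition_counts(cluster)))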
Arp, Hans Peter H; Lundstedt, Staffan; Josefsson, Sarah; Cornelissen, Gerard; Enell, Anja; Allard, Ann-Sofie; Kleja, Dan Berggren
2014-10-07
Soil quality standards are based on partitioning and toxicity data for laboratory-spiked reference soils, instead of real-world, historically contaminated soils, which would be more representative. Here 21 diverse historically contaminated soils from Sweden, Belgium, and France were obtained, and the soil-porewater partitioning along with the bioaccumulation in exposed worms (Enchytraeus crypticus) of native polycyclic aromatic compounds (PACs) were quantified. The native PACs investigated were polycyclic aromatic hydrocarbons (PAHs) and, for the first time to be included in such a study, oxygenated PAHs (oxy-PAHs) and nitrogen-containing heterocyclic PACs (N-PACs). The passive sampler polyoxymethylene (POM) was used to measure the equilibrium freely dissolved porewater concentration, Cpw, of all PACs. The obtained organic carbon normalized partitioning coefficients, KTOC, show that sorption of these native PACs is much stronger than observed in laboratory-spiked soils (typically by factors of 10 to 100), which has been reported previously for PAHs but here for the first time for oxy-PAHs and N-PACs. A recently developed KTOC model for historically contaminated sediments predicted the 597 unique, native KTOC values in this study within a factor of 30 for 100% of the data and within a factor of 3 for 58% of the data, without calibration. This model assumes that TOC in pyrogenic-impacted areas sorbs similarly to coal tar, rather than to octanol as typically assumed. Black carbon (BC) inclusive partitioning models exhibited substantially poorer performance. Regarding bioaccumulation, Cpw combined with liposome-water partition coefficients corresponded better with measured worm lipid concentrations, Clipid (within a factor of 10 for 85% of all PACs and soils), than Cpw combined with octanol-water partition coefficients (within a factor of 10 for 76% of all PACs and soils). E. crypticus mortality and reproduction were also quantified. No enhanced mortality was observed in the 21 historically contaminated soils, despite expectations based on PAH-spiked reference soils. Worm reproduction correlated weakly with Clipid of PACs, though the contributing influence of metal concentrations and soil texture could not be taken into account. The good agreement of POM-derived Cpw with independent soil and lipid partitioning models further supports that soil risk assessments would improve by accounting for bioavailability. Strategies for including bioavailability in soil risk assessment are presented.
NASA Astrophysics Data System (ADS)
Zuend, A.; Marcolli, C.; Peter, T.; Seinfeld, J. H.
2010-08-01
Semivolatile organic and inorganic aerosol species partition between the gas and aerosol particle phases to maintain thermodynamic equilibrium. Liquid-liquid phase separation into an organic-rich and an aqueous electrolyte phase can occur in the aerosol as a result of the salting-out effect. Such liquid-liquid equilibria (LLE) affect the gas/particle partitioning of the different semivolatile compounds and might significantly alter both particle mass and composition as compared to a one-phase particle. We present a new liquid-liquid equilibrium and gas/particle partitioning model, using as a basis the group-contribution model AIOMFAC (Zuend et al., 2008). This model allows the reliable computation of the liquid-liquid coexistence curve (binodal), corresponding tie-lines, the limit of stability/metastability (spinodal), and further thermodynamic properties of multicomponent systems. Calculations for ternary and multicomponent alcohol/polyol-water-salt mixtures suggest that LLE are a prevalent feature of organic-inorganic aerosol systems. A six-component polyol-water-ammonium sulphate system is used to simulate effects of relative humidity (RH) and the presence of liquid-liquid phase separation on the gas/particle partitioning. RH, salt concentration, and hydrophilicity (water-solubility) are identified as key features in defining the region of a miscibility gap and govern the extent to which compound partitioning is affected by changes in RH. The model predicts that liquid-liquid phase separation can lead to either an increase or decrease in total particulate mass, depending on the overall composition of a system and the particle water content, which is related to the hydrophilicity of the different organic and inorganic compounds. Neglecting non-ideality and liquid-liquid phase separations by assuming an ideal mixture leads to an overestimation of the total particulate mass by up to 30% for the composition and RH range considered in the six-component system simulation. For simplified partitioning parametrizations, we suggest a modified definition of the effective saturation concentration, Cj*, by including water and other inorganics in the absorbing phase. Such a Cj* definition reduces the RH-dependency of the gas/particle partitioning of semivolatile organics in organic-inorganic aerosols by an order of magnitude as compared to the currently accepted definition, which considers the organic species only.
NASA Astrophysics Data System (ADS)
Zuend, A.; Marcolli, C.; Peter, T.; Seinfeld, J. H.
2010-05-01
Semivolatile organic and inorganic aerosol species partition between the gas and aerosol particle phases to maintain thermodynamic equilibrium. Liquid-liquid phase separation into an organic-rich and an aqueous electrolyte phase can occur in the aerosol as a result of the salting-out effect. Such liquid-liquid equilibria (LLE) affect the gas/particle partitioning of the different semivolatile compounds and might significantly alter both particle mass and composition as compared to a one-phase particle. We present a new liquid-liquid equilibrium and gas/particle partitioning model, using as a basis the group-contribution model AIOMFAC (Zuend et al., 2008). This model allows the reliable computation of the liquid-liquid coexistence curve (binodal), corresponding tie-lines, the limit of stability/metastability (spinodal), and further thermodynamic properties of the phase diagram. Calculations for ternary and multicomponent alcohol/polyol-water-salt mixtures suggest that LLE are a prevalent feature of organic-inorganic aerosol systems. A six-component polyol-water-ammonium sulphate system is used to simulate effects of relative humidity (RH) and the presence of liquid-liquid phase separation on the gas/particle partitioning. RH, salt concentration, and hydrophilicity (water-solubility) are identified as key features in defining the region of a miscibility gap and govern the extent to which compound partitioning is affected by changes in RH. The model predicts that liquid-liquid phase separation can lead to either an increase or decrease in total particulate mass, depending on the overall composition of a system and the particle water content, which is related to the hydrophilicity of the different organic and inorganic compounds. Neglecting non-ideality and liquid-liquid phase separations by assuming an ideal mixture leads to an overestimation of the total particulate mass by up to 30% for the composition and RH range considered in the six-component system simulation. For simplified partitioning parametrizations, we suggest a modified definition of the effective saturation concentration, C*j, by including water and other inorganics in the absorbing phase. Such a C*j definition reduces the RH-dependency of the gas/particle partitioning of semivolatile organics in organic-inorganic aerosols by an order of magnitude as compared to the currently accepted definition, which considers the organic species only.
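The role of the effective saturation concentration discussed in the two records above can be illustrated with the standard absorptive partitioning relation, in which the particle-phase fraction of species j is F_j = (1 + C*_j / C_abs)^-1 and C_abs is the mass concentration of the absorbing phase. The sketch below simply evaluates this relation; the point that including water and inorganics raises C_abs and shifts semivolatiles toward the particle phase follows directly. All numerical values are illustrative.

    import numpy as np

    def particle_fraction(c_star, c_abs):
        """Fraction of a semivolatile compound residing in the particle phase (ug m-3 inputs)."""
        return 1.0 / (1.0 + c_star / c_abs)

    c_star = np.array([0.1, 1.0, 10.0, 100.0])  # effective saturation concentrations, ug m-3
    c_org_only = 5.0                            # absorbing mass: organics only, ug m-3
    c_with_water = 15.0                         # absorbing mass including water and inorganics

    for cs in c_star:
        print(cs, round(particle_fraction(cs, c_org_only), 3),
              round(particle_fraction(cs, c_with_water), 3))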
NASA Astrophysics Data System (ADS)
Attia, S.; Paterson, S. R.; Jiang, D.; Miller, R. B.
2017-12-01
Structural studies of orogenic deformation fields are mostly based on small-scale structures ubiquitous in field exposures, hand samples, and under microscopes. Relating deformation histories derived from such structures to changing lithospheric-scale deformation and boundary conditions is not trivial due to the vast scale separation (10^-6 to 10^7 m) between the characteristic lengths of small-scale structures and lithospheric plates. Rheological heterogeneity over the range of orogenic scales will lead to deformation partitioning throughout intervening scales of structural development. Spectacular examples of structures documenting deformation partitioning are widespread within hot (i.e., magma-rich) orogens such as the well-studied central Sierra Nevada and Cascades core of western North America: (1) deformation partitioned into localized, narrow, triclinic shear zones separated by broad domains of distributed pure shear at micro- to 10 km scales; (2) deformation partitioned between plutons and surrounding metamorphic host rocks, as shown by pluton-wide magmatic fabrics consistently oriented differently than coeval host rock fabrics; (3) partitioning recorded by different fabric intensities, styles, and orientations established from meter-scale grid mapping to 100 km scale domainal analyses; and (4) variations in the causes of strain and kinematics within fold-dominated domains. These complex, partitioned histories require synthesized mapping, geochronology, and structural data at all scales to evaluate partitioning and, in the absence of correct scaling, can lead to incorrect interpretations of histories. Forward modeling capable of addressing deformation partitioning in materials containing multiple scales of rheologically heterogeneous elements of varying characteristic lengths provides the ability to upscale the large synthesized datasets described above to plate-scale tectonic processes and boundary conditions. By comparing modeling predictions from the recently developed self-consistent Multi-Order Power-Law Approach (MOPLA) to multi-scale field observations, we constrain likely paleo-tectonic controls on orogenic structural evolution rather than predicting a unique, but likely incorrect, deformation history.
NASA Technical Reports Server (NTRS)
Harrison, W. J.
1981-01-01
An experimental investigation of Ce, Sm and Tm rare earth element (REE) partition coefficients between coexisting garnets (both natural and synthetic) and hydrous liquids shows that Henry's Law may not be obeyed over a range of REE concentrations of geological relevance. Systematic differences between the three REE and the two garnet compositions may be explained in terms of the differences between REE ionic radii and those of the dodecahedral site into which they substitute, substantiating the Harrison and Wood (1980) model of altervalent substitution. Model calculations demonstrate that significant variation can occur in the rare earth contents of melts produced from a garnet lherzolite, if Henry's Law partition coefficients do not apply for the garnet phase.
Bidleman, Terry F; Nygren, Olle; Tysklind, Mats
2016-09-01
Partition coefficients of gaseous semivolatile organic compounds (SVOCs) between polyurethane foam (PUF) and air (KPA) are needed in the estimation of sampling rates for PUF disk passive air samplers. We determined KPA in field experiments by conducting long-term (24-48 h) air sampling to saturate PUF traps and shorter runs (2-4 h) to measure air concentrations. Sampling events were done at daily mean temperatures ranging from 1.9 to 17.5 °C. Target compounds were hexachlorobenzene (HCB), alpha-hexachlorocyclohexane (α-HCH), 2,4-dibromoanisole (2,4-DiBA) and 2,4,6-tribromoanisole (2,4,6-TriBA). KPA (mL g-1) was calculated from quantities on the PUF traps at saturation (ng g-1) divided by air concentrations (ng mL-1). Enthalpies of PUF-to-air transfer (ΔHPA, kJ mol-1) were determined from the slopes of log KPA/mL g-1 versus 1/T(K) for HCB and the bromoanisoles; KPA of α-HCH was measured only at 14.3 to 17.5 °C and its ΔHPA was not determined. Experimental log KPA/mL g-1 at 15 °C were HCB = 7.37; α-HCH = 8.08; 2,4-DiBA = 7.26 and 2,4,6-TriBA = 7.26. Experimental log KPA/mL g-1 were compared with predictions based on an octanol-air partition coefficient (log KOA) model (Shoeib and Harner, 2002a) and a polyparameter linear free energy relationship (pp-LFER) model (Kamprad and Goss, 2007) using different sets of solute parameters. Predicted KPA values varied by factors of 3 to over 30, depending on the compound and the model. Such discrepancies provide incentive for experimental measurements of KPA for other SVOCs. Copyright © 2016 Elsevier Ltd. All rights reserved.
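The two quantities described above can be sketched numerically: KPA from a saturated PUF trap and a short air-concentration run, and a PUF-to-air transfer enthalpy from a van't Hoff style regression of log10 KPA on 1/T. The regression form used here (log10 KPA = ΔHPA / (2.303 R T) + b) and all numerical inputs are illustrative assumptions, not the study's data.

    import numpy as np

    R = 8.314  # J mol-1 K-1

    def k_pa(puf_saturation_ng_per_g, air_conc_ng_per_ml):
        """PUF-air partition coefficient in mL g-1."""
        return puf_saturation_ng_per_g / air_conc_ng_per_ml

    temps_c = np.array([1.9, 7.0, 12.0, 17.5])    # daily mean temperatures, degC
    log_kpa = np.array([7.80, 7.62, 7.48, 7.30])  # hypothetical log10 KPA values

    slope, intercept = np.polyfit(1.0 / (temps_c + 273.15), log_kpa, 1)
    dH_pa = slope * 2.303 * R / 1000.0            # kJ mol-1

    print("KPA example:", k_pa(5.0e4, 4.0e-3), "mL g-1")
    print("dH_PA ~", round(dH_pa, 1), "kJ mol-1")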
Thermodynamic limit of random partitions and dispersionless Toda hierarchy
NASA Astrophysics Data System (ADS)
Takasaki, Kanehisa; Nakatsu, Toshio
2012-01-01
We study the thermodynamic limit of random partition models for the instanton sum of 4D and 5D supersymmetric U(1) gauge theories deformed by some physical observables. The physical observables correspond to external potentials in the statistical model. The partition function is reformulated in terms of the density function of Maya diagrams. The thermodynamic limit is governed by a limit shape of Young diagrams associated with dominant terms in the partition function. The limit shape is characterized by a variational problem, which is further converted to a scalar-valued Riemann-Hilbert problem. This Riemann-Hilbert problem is solved with the aid of a complex curve, which may be thought of as the Seiberg-Witten curve of the deformed U(1) gauge theory. This solution of the Riemann-Hilbert problem is identified with a special solution of the dispersionless Toda hierarchy that satisfies a pair of generalized string equations. The generalized string equations for the 5D gauge theory are shown to be related to hidden symmetries of the statistical model. The prepotential and the Seiberg-Witten differential are also considered.
NASA Astrophysics Data System (ADS)
Kiseeva, Ekaterina S.; Wood, Bernard J.
2015-08-01
We develop a comprehensive model to describe trace and minor element partitioning between sulphide liquids and anhydrous silicate liquids of approximately basaltic composition. We are able thereby to account completely for the effects of temperature and sulphide composition on the partitioning of Ag, Cd, Co, Cr, Cu, Ga, Ge, In, Mn, Ni, Pb, Sb, Ti, Tl, V and Zn. The model was developed from partitioning experiments performed in a piston-cylinder apparatus at 1.5 GPa and 1300 to 1700 °C with sulphide compositions covering the quaternary FeS-NiS-CuS0.5-FeO. Partitioning of most elements is a strong function of the oxygen (or FeO) content of the sulphide. This increases linearly with the FeO content of the silicate melt and decreases with the Ni content of the sulphide. As expected, lithophile elements partition more strongly into sulphide as its oxygen content increases, while chalcophile elements enter sulphide less readily with increasing oxygen. We parameterised these effects using the ε-model of non-ideal interactions in metallic liquids, which yields an expression for the partition coefficient of an element M between sulphide and silicate liquids in terms of temperature and sulphide composition. We used our model to calculate the amount of sulphide liquid precipitated along the liquid line of descent of MORB melts and find that 70% of silicate crystallisation is accompanied by ∼0.23% of sulphide precipitation. The latter is sufficient to control the melt concentrations of chalcophile elements such as Cu, Ag and Pb. Our partition coefficients and observed chalcophile element concentrations in MORB glasses were used to estimate sulphur solubility in MORB liquids. We obtained between ∼800 ppm (for primitive MORB) and ∼2000 ppm (for evolved MORB), values in reasonable agreement with experimentally-derived models. The experimental data also enable us to reconsider Ce/Pb and Nd/Pb ratios in MORB. We find that constant Ce/Pb and Nd/Pb ratios of 25 and 20, respectively, can be achieved during fractional crystallisation of magmas generated by 10% melting of depleted mantle provided the latter contains >100 ppm S and about 650 ppm Ce, 550 ppm Nd and 27.5 ppb Pb. Finally, we investigated the hypothesis that the pattern of chalcophile element abundances in the mantle was established by segregation of a late sulphide matte. Taking the elements Cu, Ag, Pb and Zn as examples, we find that the Pb/Zn and Cu/Ag ratios of the mantle can, in principle, be explained by segregation of ∼0.4% sulphide matte to the core.
Impacts of environmental conditions on the sorption of volatile organic compounds onto tire powder.
Oh, Dong I; Nam, Kyongphile; Park, Jae W; Khim, Jee H; Kim, Yong K; Kim, Jae Y
2008-05-01
A series of batch tests was performed and the impacts of environmental conditions and phase change on the sorption of volatile organic compounds (VOCs) were investigated. Benzene, trichloroethylene, tetrachloroethylene, and ethylbenzene were selected as target VOCs. Sorption of VOCs onto tire powder was well described by a linear-partitioning model. Water-tire partition coefficients of VOCs not tested in this study could be estimated using a logarithmic relationship between the observed water-tire partition coefficients and the octanol-water partition coefficients of the VOCs tested. The target VOCs did not appear to compete significantly with other VOCs when sorbed onto the tire powder over the range of concentrations tested. The influence of environmental conditions, such as pH and ionic strength, also did not appear to be significant. Water-tire partition coefficients of benzene, trichloroethylene, tetrachloroethylene, and ethylbenzene decreased as the sorbent dosage increased. However, they showed stable values when the sorbent dosage was greater than 10 g/L. Air-tire partition coefficients could be extrapolated from Henry's law constants and the water-tire partition coefficients of the VOCs.
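The two relationships used above, a linear partitioning isotherm and a log-log correlation between water-tire and octanol-water partition coefficients, can be sketched as follows. The log Kow values are approximate literature values for the four target VOCs, while the water-tire coefficients and the regression are invented placeholders, not the study's data.

    import numpy as np

    def sorbed_concentration(k_tw, c_water):
        """Linear partitioning: q (mg/kg tire) = K_tw (L/kg) * C_w (mg/L)."""
        return k_tw * c_water

    log_kow = np.array([2.13, 2.42, 3.15, 3.40])  # approx. benzene, TCE, ethylbenzene, PCE
    log_ktw = np.array([1.20, 1.50, 2.10, 2.30])  # hypothetical observed water-tire coefficients

    a, b = np.polyfit(log_kow, log_ktw, 1)
    print(f"log K_tw = {a:.2f} * log K_ow + {b:.2f}")
    print("estimated K_tw for log K_ow = 2.8:", round(10 ** (a * 2.8 + b), 1), "L/kg")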
NASA Astrophysics Data System (ADS)
Tomaschitz, Roman
2013-10-01
A statistical description of the all-particle cosmic-ray spectrum is given in the 10^14 eV to 10^20 eV interval. The high-energy cosmic-ray flux is modeled as an ultra-relativistic multi-component plasma, whose components constitute a mixture of nearly ideal but nonthermal gases of low density and high temperature. Each plasma component is described by an ultra-relativistic power-law density manifested as a spectral peak in the wideband fit. The "knee" and "ankle" features of the high- and ultra-high-energy spectrum turn out to be the global and local extrema of the double-logarithmic E^3-scaled flux representation in which the spectral fit is performed. The all-particle spectrum is covered by recent data sets from several air shower arrays, and can be modeled as a three-component plasma in the indicated energy range extending over six decades. The temperature, specific number density, internal energy and entropy of each plasma component are extracted from the partial fluxes in the broadband fit. The grand partition function and the extensive entropy functional of a non-equilibrated gas mixture with power-law components are derived in phase space by ensemble averaging.
Zhang, X; Patel, L A; Beckwith, O; Schneider, R; Weeden, C J; Kindt, J T
2017-11-14
Micelle cluster distributions from molecular dynamics simulations of a solvent-free coarse-grained model of sodium octyl sulfate (SOS) were analyzed using an improved method to extract equilibrium association constants from small-system simulations containing one or two micelle clusters at equilibrium with free surfactants and counterions. The statistical-thermodynamic and mathematical foundations of this partition-enabled analysis of cluster histograms (PEACH) approach are presented. A dramatic reduction in computational time for analysis was achieved through a strategy similar to the selector variable method to circumvent the need for exhaustive enumeration of the possible partitions of surfactants and counterions into clusters. Using statistics from a set of small-system (up to 60 SOS molecules) simulations as input, equilibrium association constants for micelle clusters were obtained as a function of both number of surfactants and number of associated counterions through a global fitting procedure. The resulting free energies were able to accurately predict micelle size and charge distributions in a large (560 molecule) system. The evolution of micelle size and charge with SOS concentration as predicted by the PEACH-derived free energies and by a phenomenological four-parameter model fit, along with the sensitivity of these predictions to variations in cluster definitions, are analyzed and discussed.
Rayne, Sierra; Forest, Kaya
2014-09-19
The air-water partition coefficient (Kaw) of perfluoro-2-methyl-3-pentanone (PFMP) was estimated using the G4MP2/G4 levels of theory and the SMD solvation model. A suite of 31 fluorinated compounds was employed to calibrate the theoretical method. Excellent agreement between experimental and directly calculated Kaw values was obtained for the calibration compounds. The PCM solvation model was found to yield unsatisfactory Kaw estimates for fluorinated compounds at both levels of theory. The HENRYWIN Kaw estimation program also exhibited poor Kaw prediction performance on the training set. Based on the resulting regression equation for the calibration compounds, the G4MP2-SMD method constrained the estimated Kaw of PFMP to the range 5-8 × 10^-6 M atm^-1. The magnitude of this Kaw range indicates almost all PFMP released into the atmosphere or near the land-atmosphere interface will reside in the gas phase, with only minor quantities dissolved in the aqueous phase as the parent compound and/or its hydrate/hydrate conjugate base. Following discharge into aqueous systems not at equilibrium with the atmosphere, significant quantities of PFMP will be present as the dissolved parent compound and/or its hydrate/hydrate conjugate base.
Grain size evolution and convection regimes of the terrestrial planets
NASA Astrophysics Data System (ADS)
Rozel, A.; Golabek, G. J.; Boutonnet, E.
2011-12-01
A new model of grain size evolution has recently been proposed by Rozel et al. (2010). This approach stipulates that grain size dynamics is governed by two additive and simultaneous processes: grain growth and dynamic recrystallization. We use the usual normal grain growth laws for the growth part. For dynamic recrystallization, reducing the mean grain size increases the total area of grain boundaries. Grain boundaries carry surface tension, so energy is required to decrease the mean grain size. We consider that this energy is available during mechanical work, which is usually assumed to be dissipated entirely as heat. A partitioning parameter f is therefore required to specify what fraction of the mechanical work is dissipated as heat and what fraction is converted into surface energy. This study gives a new calibration of the partitioning parameter for the major Earth materials involved in the dynamics of the terrestrial planets. Our calibration is consistent with the published piezometric relations available in the literature (equilibrium grain size versus shear stress). We test this new model of grain size evolution in a set of numerical computations of the dynamics of the Earth using StagYY. We show that grain size evolution has a major effect on the convection regimes of terrestrial planets.
Wang, Shuangquan; Sun, Huiyong; Liu, Hui; Li, Dan; Li, Youyong; Hou, Tingjun
2016-08-01
Blockade of human ether-à-go-go related gene (hERG) channel by compounds may lead to drug-induced QT prolongation, arrhythmia, and Torsades de Pointes (TdP), and therefore reliable prediction of hERG liability in the early stages of drug design is quite important to reduce the risk of cardiotoxicity-related attritions in the later development stages. In this study, pharmacophore modeling and machine learning approaches were combined to construct classification models to distinguish hERG active from inactive compounds based on a diverse data set. First, an optimal ensemble of pharmacophore hypotheses that had good capability to differentiate hERG active from inactive compounds was identified by the recursive partitioning (RP) approach. Then, the naive Bayesian classification (NBC) and support vector machine (SVM) approaches were employed to construct classification models by integrating multiple important pharmacophore hypotheses. The integrated classification models showed improved predictive capability over any single pharmacophore hypothesis, suggesting that the broad binding polyspecificity of hERG can only be well characterized by multiple pharmacophores. The best SVM model achieved the prediction accuracies of 84.7% for the training set and 82.1% for the external test set. Notably, the accuracies for the hERG blockers and nonblockers in the test set reached 83.6% and 78.2%, respectively. Analysis of significant pharmacophores helps to understand the multimechanisms of action of hERG blockers. We believe that the combination of pharmacophore modeling and SVM is a powerful strategy to develop reliable theoretical models for the prediction of potential hERG liability.
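The integration step described above, encoding each compound by whether it matches each pharmacophore hypothesis in the selected ensemble and training an SVM on that binary fingerprint, can be sketched as follows. The random match matrix and labels below stand in for real pharmacophore-screening output; the kernel and hyperparameters are illustrative, not the settings reported in the study.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_compounds, n_hypotheses = 200, 12
    X = rng.integers(0, 2, size=(n_compounds, n_hypotheses)).astype(float)  # hypothesis matches
    y = (X[:, :4].sum(axis=1) + rng.normal(0, 0.5, n_compounds) > 2).astype(int)  # toy blocker labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
    print("test accuracy:", round(model.score(X_test, y_test), 3))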
Jin, Xiaochen; Fu, Zhiqiang; Li, Xuehua; Chen, Jingwen
2017-03-22
The octanol-air partition coefficient (KOA) is a key parameter describing the partition behavior of organic chemicals between air and environmental organic phases. As the experimental determination of KOA is costly, time-consuming and sometimes limited by the availability of authentic chemical standards for the compounds to be determined, it becomes necessary to develop credible predictive models for KOA. In this study, a polyparameter linear free energy relationship (pp-LFER) model for predicting KOA at 298.15 K and a novel model incorporating pp-LFERs with temperature (pp-LFER-T model) were developed from 795 log KOA values for 367 chemicals at different temperatures (263.15-323.15 K), and were evaluated with the OECD guidelines on QSAR model validation and applicability domain description. Statistical results show that both models are well-fitted, robust and have good predictive capabilities. Particularly, the pp-LFER model shows a strong predictive ability for polyfluoroalkyl substances and organosilicon compounds, and the pp-LFER-T model maintains a high predictive accuracy within a wide temperature range (263.15-323.15 K).
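A pp-LFER of the kind referred to above is commonly written with Abraham-type solute descriptors as log KOA = c + eE + sS + aA + bB + lL, with the system coefficients obtained by multiple linear regression on compounds with known log KOA. The sketch below fits such a regression; the descriptor matrix and log KOA entries are invented placeholders, and the specific descriptor set is an assumption, so only the model form is taken from the record above.

    import numpy as np

    descriptors = np.array([        # columns: E, S, A, B, L for eight hypothetical compounds
        [0.61, 0.52, 0.00, 0.14, 3.94],
        [0.82, 0.99, 0.26, 0.41, 5.80],
        [1.34, 1.60, 0.00, 0.47, 7.72],
        [0.87, 0.73, 0.57, 0.32, 5.19],
        [0.20, 0.38, 0.00, 0.45, 3.11],
        [1.05, 1.10, 0.10, 0.50, 6.35],
        [0.45, 0.60, 0.30, 0.60, 4.20],
        [0.95, 0.85, 0.00, 0.20, 5.05],
    ])
    log_koa = np.array([3.0, 5.1, 7.4, 5.0, 2.6, 6.0, 3.8, 4.4])  # hypothetical values

    X = np.hstack([np.ones((len(log_koa), 1)), descriptors])      # intercept c plus e, s, a, b, l
    coef, *_ = np.linalg.lstsq(X, log_koa, rcond=None)
    print("c, e, s, a, b, l =", np.round(coef, 2))
    print("fitted log KOA of compound 1:", round(float(X[0] @ coef), 2))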
Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo
2014-01-01
We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005–0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162
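The genomic relationship matrices underlying additive-plus-dominance GBLUP can be sketched as below, using the common VanRaden-style additive coding and a heterozygosity-based dominance coding, followed by a toy BLUP of total genetic values. The small random genotype matrix, the variance components and these particular codings are illustrative assumptions and are not the CE/QM formulations of the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n_ind, n_snp = 30, 200
    M = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)  # genotypes coded 0/1/2
    p = M.mean(axis=0) / 2.0                                    # allele frequencies

    W_a = M - 2 * p                                             # centred additive coding
    G_a = W_a @ W_a.T / np.sum(2 * p * (1 - p))                 # additive genomic relationships

    H = (M == 1).astype(float)                                  # heterozygosity indicator
    W_d = H - 2 * p * (1 - p)                                   # centred dominance coding
    G_d = W_d @ W_d.T / np.sum((2 * p * (1 - p)) ** 2)          # dominance genomic relationships

    # Toy GBLUP of total genetic values g = u_a + u_d from phenotypes y with overall mean mu.
    y = rng.normal(size=n_ind)
    va, vd, ve = 0.3, 0.1, 0.6                                  # assumed variance components
    V = va * G_a + vd * G_d + ve * np.eye(n_ind)
    Vinv = np.linalg.inv(V)
    mu = float(np.ones(n_ind) @ Vinv @ y / (np.ones(n_ind) @ Vinv @ np.ones(n_ind)))
    g_hat = (va * G_a + vd * G_d) @ Vinv @ (y - mu)
    print("predicted total genetic values (first 5):", np.round(g_hat[:5], 3))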
Simulating crop growth with Expert-N-GECROS under different site conditions in Southwest Germany
NASA Astrophysics Data System (ADS)
Poyda, Arne; Ingwersen, Joachim; Demyan, Scott; Gayler, Sebastian; Streck, Thilo
2016-04-01
When feedbacks between the land surface and the atmosphere are investigated with Atmosphere-Land surface-Crop Models (ALCMs), it is fundamental to accurately simulate crop growth dynamics, as plants directly influence the energy partitioning at the plant-atmosphere interface. To study both the response and the effect of intensive agricultural crop production systems on regional climate change in Southwest Germany, the crop growth model GECROS (Yin & van Laar, 2005) was calibrated on multi-year field data from typical crop rotations in the Kraichgau and Swabian Alb regions. Additionally, the SOC (soil organic carbon) model DAISY (Müller et al., 1998) was implemented in the Expert-N model tool (Engel & Priesack, 1993) and combined with GECROS. The model was calibrated on a set of plant data (BBCH stage, LAI, plant height, aboveground biomass, N content of biomass) and weather data for the years 2010-2013 and validated with the data of 2014. As GECROS adjusts root-shoot partitioning in response to external conditions (water, nitrogen, CO2), it is suitable for simulating crop growth dynamics under changing climate conditions and potentially more frequent stress situations. As C and N pools and turnover rates in soil, as well as preceding crop effects, were expected to considerably influence crop growth, the model was run in a multi-year, dynamic way. Crop residues and soil mineral N (nitrate, ammonium) available for the subsequent crop were accounted for. The model simulates the growth dynamics of winter wheat, winter rape, silage maize and summer barley at the Kraichgau and Swabian Alb sites well. The Expert-N-GECROS model is currently being parameterized for crops with potentially increasing shares in future crop rotations. First results will be shown.
Record, M Thomas; Guinn, Emily; Pegram, Laurel; Capp, Michael
2013-01-01
Understanding how Hofmeister salt ions and other solutes interact with proteins, nucleic acids, other biopolymers and water, and thereby affect protein and nucleic acid processes as well as model processes (e.g. solubility of model compounds) in aqueous solution, is a longstanding goal of biophysical research. Empirical Hofmeister salt and solute "m-values" (derivatives of the observed standard free energy change for a model or biopolymer process with respect to solute or salt concentration m3) are equal to differences in chemical potential derivatives: m-value = Δ(∂μ2/∂m3) = Δμ23, which quantify the preferential interactions of the solute or salt with the surface of the biopolymer or model system (component 2) exposed or buried in the process. Using the solute partitioning model (SPM), we dissect μ23 values for interactions of a solute or Hofmeister salt with a set of model compounds displaying the key functional groups of biopolymers to obtain interaction potentials (called α-values) that quantify the interaction of the solute or salt per unit area of each functional group or type of surface. Interpreted using the SPM, these α-values provide quantitative information about both the hydration of functional groups and the competitive interaction of water and the solute or salt with functional groups. The analysis corroborates and quantifies previous proposals that the Hofmeister anion and cation series for biopolymer processes are determined by ion-specific, mostly unfavorable interactions with hydrocarbon surfaces; the balance between these unfavorable nonpolar interactions and often-favorable interactions of ions with polar functional groups determines the series null points. The placement of urea and glycine betaine (GB) at opposite ends of the corresponding series of nonelectrolytes results from the favorable interactions of urea, and unfavorable interactions of GB, with many (but not all) biopolymer functional groups. Interaction potentials and local-bulk partition coefficients quantifying the distribution of solutes (e.g. urea, glycine betaine) and Hofmeister salt ions in the vicinity of each functional group make good chemical sense when interpreted in terms of competitive noncovalent interactions. These interaction potentials allow solute and Hofmeister (noncoulombic) salt effects on protein and nucleic acid processes to be interpreted or predicted, and allow the use of solutes and salts as probes of
A knowledge based system for scientific data visualization
NASA Technical Reports Server (NTRS)
Senay, Hikmet; Ignatius, Eve
1992-01-01
A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques applicable in the earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.
Partitioning in parallel processing of production systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oflazer, K.
1987-01-01
This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity, each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, which is O(n^2). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
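A sketch in the spirit of the two-phase scheme above is given below: states are first grouped by "backward depth" (shortest distance to an accepting state over reversed transitions), and the blocks are then iteratively refined by each state's signature of successor-block ids until the partition stabilises. This is a Moore-style refinement seeded with depth information, not the exact algorithm or hash-table implementation of the paper; the toy DFA is an invented example.

    from collections import deque

    def minimize_blocks(states, alphabet, delta, accepting):
        # Phase 1: backward depths via breadth-first search on the reversed transition graph.
        reverse = {s: [] for s in states}
        for s in states:
            for a in alphabet:
                reverse[delta[(s, a)]].append(s)
        depth = {s: None for s in states}
        queue = deque(accepting)
        for s in accepting:
            depth[s] = 0
        while queue:
            s = queue.popleft()
            for t in reverse[s]:
                if depth[t] is None:
                    depth[t] = depth[s] + 1
                    queue.append(t)
        # Coarse partition: one block per depth; states that never reach acceptance share a block.
        block = {s: -1 if depth[s] is None else depth[s] for s in states}

        # Phase 2: refine by each state's (own block, successor blocks) signature until stable.
        while True:
            signature = {s: (block[s],) + tuple(block[delta[(s, a)]] for a in alphabet) for s in states}
            ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
            new_block = {s: ids[signature[s]] for s in states}
            if len(set(new_block.values())) == len(set(block.values())):
                return new_block
            block = new_block

    # Toy DFA over {a, b} with accepting state 4; equivalent states end up in the same block.
    states = [0, 1, 2, 3, 4]
    alphabet = "ab"
    delta = {(0, "a"): 1, (0, "b"): 2, (1, "a"): 1, (1, "b"): 3, (2, "a"): 1, (2, "b"): 2,
             (3, "a"): 1, (3, "b"): 4, (4, "a"): 1, (4, "b"): 2}
    print(minimize_blocks(states, alphabet, delta, accepting={4}))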
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Sloma, Michael F.; Mathews, David H.
2016-01-01
RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924
Exact partition functions for gauge theories on Rλ3
NASA Astrophysics Data System (ADS)
Wallet, Jean-Christophe
2016-11-01
The noncommutative space Rλ3, a deformation of R3, supports a 3-parameter family of gauge theory models with gauge-invariant harmonic term, stable vacuum and which are perturbatively finite to all orders. Properties of this family are discussed. The partition function factorizes as an infinite product of reduced partition functions, each one corresponding to the reduced gauge theory on one of the fuzzy spheres entering the decomposition of Rλ3. For a particular sub-family of gauge theories, each reduced partition function is exactly expressible as a ratio of determinants. A relation with integrable 2-D Toda lattice hierarchy is indicated.
Predicting Salt Permeability Coefficients in Highly Swollen, Highly Charged Ion Exchange Membranes.
Kamcev, Jovan; Paul, Donald R; Manning, Gerald S; Freeman, Benny D
2017-02-01
This study presents a framework for predicting salt permeability coefficients in ion exchange membranes in contact with an aqueous salt solution. The model, based on the solution-diffusion mechanism, was tested using experimental salt permeability data for a series of commercial ion exchange membranes. Equilibrium salt partition coefficients were calculated using a thermodynamic framework (i.e., Donnan theory), incorporating Manning's counterion condensation theory to calculate ion activity coefficients in the membrane phase and the Pitzer model to calculate ion activity coefficients in the solution phase. The model predicted NaCl partition coefficients in a cation exchange membrane and two anion exchange membranes, as well as MgCl2 partition coefficients in a cation exchange membrane, remarkably well at higher external salt concentrations (>0.1 M) and reasonably well at lower external salt concentrations (<0.1 M) with no adjustable parameters. Membrane ion diffusion coefficients were calculated using a combination of the Mackie and Meares model, which assumes ion diffusion in water-swollen polymers is affected by a tortuosity factor, and a model developed by Manning to account for electrostatic effects. Agreement between experimental and predicted salt diffusion coefficients was good with no adjustable parameters. Calculated salt partition and diffusion coefficients were combined within the framework of the solution-diffusion model to predict salt permeability coefficients. Agreement between model and experimental data was remarkably good. Additionally, a simplified version of the model was used to elucidate connections between membrane structure (e.g., fixed charge group concentration) and salt transport properties.
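The solution-diffusion composition above, a salt permeability as the product of a partition coefficient and a membrane-phase diffusion coefficient, can be sketched as follows. Here the partition coefficient comes from ideal Donnan exclusion (activity coefficients set to 1, i.e. without the Manning and Pitzer corrections used in the study) and the diffusion coefficient from the Mackie-Meares tortuosity factor; all numerical values are illustrative.

    import numpy as np

    def donnan_coion_concentration(c_ext, c_fixed):
        """Ideal Donnan co-ion concentration in the membrane for a 1:1 salt (mol/L)."""
        # c_m * (c_m + c_fixed) = c_ext**2  ->  positive root of the quadratic
        return (-c_fixed + np.sqrt(c_fixed ** 2 + 4 * c_ext ** 2)) / 2.0

    def mackie_meares_diffusivity(d_water, phi_w):
        """Membrane-phase salt diffusivity reduced by the Mackie-Meares tortuosity factor."""
        return d_water * (phi_w / (2.0 - phi_w)) ** 2

    c_ext = 0.5             # external NaCl concentration, mol/L
    c_fixed = 3.0           # fixed charge concentration in the swollen membrane, mol/L
    phi_w = 0.4             # water volume fraction of the membrane
    d_nacl_water = 1.6e-5   # aqueous NaCl diffusivity, cm2/s

    K = donnan_coion_concentration(c_ext, c_fixed) / c_ext
    D = mackie_meares_diffusivity(d_nacl_water, phi_w)
    print("partition coefficient K:", round(K, 3))
    print("membrane diffusivity D:", f"{D:.2e}", "cm2/s")
    print("salt permeability P = K*D:", f"{K * D:.2e}", "cm2/s")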
NASA Astrophysics Data System (ADS)
Nicolis, John S.; Katsikas, Anastassis A.
Collective parameters such as the Zipf's law-like statistics, the Transinformation, the Block Entropy and the Markovian character are compared for natural, genetic, musical and artificially generated long texts obtained from generating partitions (alphabets) on homogeneous as well as on multifractal chaotic maps. It appears that minimal requirements for a language at the syntactical level, such as memory, selectivity of a few keywords and broken symmetry in one dimension (polarity), are more or less met by dynamically iterating simple maps or flows, e.g. very simple chaotic hardware. The same selectivity is observed at the semantic level, where the aim refers to partitioning a set of environmental impinging stimuli onto coexisting attractors-categories. Under the regime of pattern recognition and classification, a few key features of a pattern or a few categories claim the lion's share of the information stored in this pattern and, practically, only these key features are persistently scanned by the cognitive processor. A multifractal attractor model can in principle explain this high selectivity, both at the syntactical and the semantic levels.
Singular perturbations with boundary conditions and the Casimir effect in the half space
NASA Astrophysics Data System (ADS)
Albeverio, S.; Cognola, G.; Spreafico, M.; Zerbini, S.
2010-06-01
We study the self-adjoint extensions of a class of nonmaximal multiplication operators with boundary conditions. We show that these extensions correspond to singular rank 1 perturbations (in the sense of Albeverio and Kurasov [Singular Perturbations of Differential Operators (Cambridge University Press, Cambridge, 2000)]) of the Laplace operator, namely, the formal Laplacian with a singular delta potential, on the half space. This construction is the appropriate setting to describe the Casimir effect related to a massless scalar field in the flat space-time with an infinite conducting plate and in the presence of a pointlike "impurity." We use the relative zeta determinant (as defined in the works of Müller ["Relative zeta functions, relative determinants and scattering theory," Commun. Math. Phys. 192, 309 (1998)] and Spreafico and Zerbini ["Finite temperature quantum field theory on noncompact domains and application to delta interactions," Rep. Math. Phys. 63, 163 (2009)]) in order to regularize the partition function of this model. We study the analytic extension of the associated relative zeta function, and we present explicit results for the partition function and for the Casimir force.
Wiechers, Dirk; Kahlen, Katrin; Stützel, Hartmut
2011-01-01
Background and Aims Growth imbalances between individual fruits are common in indeterminate plants such as cucumber (Cucumis sativus). In this species, these imbalances can be related to differences in two growth characteristics, fruit growth duration until reaching a given size and fruit abortion. Both are related to distribution, and environmental factors as well as canopy architecture play a key role in their differentiation. Furthermore, events leading to a fruit reaching its harvestable size before or simultaneously with a prior fruit can be observed. Functional–structural plant models (FSPMs) allow for interactions between environmental factors, canopy architecture and physiological processes. Here, we tested hypotheses which account for these interactions by introducing dominance and abortion thresholds for the partitioning of assimilates between growing fruits. Methods Using the L-System formalism, an FSPM was developed which combined a model for architectural development, a biochemical model of photosynthesis and a model for assimilate partitioning, the last including a fruit growth model based on a size-related potential growth rate (RP). Starting from a distribution proportional to RP, the model was extended by including abortion and dominance. Abortion was related to source strength and dominance to sink strength. Both thresholds were varied to test their influence on fruit growth characteristics. Simulations were conducted for a dense row and a sparse isometric canopy. Key Results The simple partitioning models failed to simulate individual fruit growth realistically. The introduction of abortion and dominance thresholds gave the best results. Simulations of fruit growth durations and abortion rates were in line with measurements, and events in which a fruit was harvestable earlier than an older fruit were reproduced. Conclusions Dominance and abortion events need to be considered when simulating typical fruit growth traits. By integrating environmental factors, the FSPM can be a valuable tool to analyse and improve existing knowledge about the dynamics of assimilates partitioning. PMID:21715366
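A toy sketch of threshold-based assimilate partitioning among fruits, in the spirit of the rules described above, is given below: assimilates are first shared in proportion to a size-dependent potential growth rate (RP), a fruit whose relative sink strength exceeds a dominance threshold is served first, and a fruit whose supply/demand ratio falls below an abortion threshold is aborted. The logistic RP form, the threshold values and the daily-supply numbers are illustrative assumptions, not the calibrated FSPM of the study.

    def potential_growth_rate(size, max_size=400.0, r=0.12):
        # Logistic-type potential growth rate (g per day) as a function of fruit size (g).
        return r * size * (1.0 - size / max_size)

    def partition_assimilates(fruit_sizes, supply, dominance_threshold=0.45, abortion_threshold=0.2):
        demands = [potential_growth_rate(s) for s in fruit_sizes]
        total_demand = sum(demands) or 1.0
        shares = [d / total_demand for d in demands]
        allocations = [0.0] * len(fruit_sizes)
        remaining = supply
        # Dominance: a fruit whose relative sink strength exceeds the threshold is served first.
        dominant = [i for i, share in enumerate(shares) if share > dominance_threshold]
        for i in dominant:
            allocations[i] = min(demands[i], remaining)
            remaining -= allocations[i]
        # The rest of the supply is shared among the remaining fruits in proportion to demand.
        rest = [i for i in range(len(fruit_sizes)) if i not in dominant]
        rest_demand = sum(demands[i] for i in rest) or 1.0
        for i in rest:
            allocations[i] = remaining * demands[i] / rest_demand
        # Abortion: a fruit whose supply/demand ratio falls below the threshold is aborted.
        aborted = [demands[i] > 0 and allocations[i] / demands[i] < abortion_threshold
                   for i in range(len(fruit_sizes))]
        return allocations, aborted

    print(partition_assimilates([20.0, 150.0, 300.0], supply=15.0))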
Golgi apparatus partitioning during cell division.
Rabouille, Catherine; Jokitalo, Eija
2003-01-01
This review discusses the mitotic segregation of the Golgi apparatus. The results from classical biochemical and morphological studies have suggested that in mammalian cells this organelle remains distinct during mitosis, although highly fragmented through the formation of mitotic Golgi clusters of small tubules and vesicles. Shedding of free Golgi-derived vesicles would consume Golgi clusters and disperse this organelle throughout the cytoplasm. Vesicles could be partitioned in a stochastic and passive way between the two daughter cells and act as a template for the reassembly of this key organelle. This model has recently been modified by results obtained using GFP- or HRP-tagged Golgi resident enzymes, live cell imaging and electron microscopy. Results obtained with these techniques show that the mitotic Golgi clusters are stable entities throughout mitosis that partition in a microtubule spindle-dependent fashion. Furthermore, a newer model proposes that at the onset of mitosis, the Golgi apparatus completely loses its identity and is reabsorbed into the endoplasmic reticulum. This suggests that the partitioning of the Golgi apparatus is entirely dependent on the partitioning of the endoplasmic reticulum. We critically discuss both models and summarize what is known about the molecular mechanisms underlying the Golgi disassembly and reassembly during and after mitosis. We will also review how the study of the Golgi apparatus during mitosis in other organisms can answer current questions and perhaps reveal novel mechanisms.
The simultaneous evolution of author and paper networks
Börner, Katy; Maru, Jeegar T.; Goldstone, Robert L.
2004-01-01
There has been a long history of research into the structure and evolution of mankind's scientific endeavor. However, recent progress in applying the tools of science to understand science itself has been unprecedented because only recently has there been access to high-volume and high-quality data sets of scientific output (e.g., publications, patents, grants) and computers and algorithms capable of handling this enormous stream of data. This article reviews major work on models that aim to capture and recreate the structure and dynamics of scientific evolution. We then introduce a general process model that simultaneously grows coauthor and paper citation networks. The statistical and dynamic properties of the networks generated by this model are validated against a 20-year data set of articles published in PNAS. Systematic deviations from a power law distribution of citations to papers are well fit by a model that incorporates a partitioning of authors and papers into topics, a bias for authors to cite recent papers, and a tendency for authors to cite papers cited by papers that they have read. In this TARL model (for topics, aging, and recursive linking), the number of topics is linearly related to the clustering coefficient of the simulated paper citation network. PMID:14976254