Sample records for set partition sizes

  1. A Novel Coarsening Method for Scalable and Efficient Mesh Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, A; Hysom, D; Gunney, B

    2010-12-02

    In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. This method reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a similar way to conventional brick laying, and then assigning each node of the original mesh to an appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing a simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large-scale scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure a well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in different iterative methods to solve a sparse system of linear equations.
Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry in the matrix, to allocate groups of vertices to processors in such a way that much of the matrix-vector multiplication can be performed locally on each processor, and hence to minimize communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation is performed on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed-memory machines like the IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high-quality partitions. This is an extremely challenging task, as to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) to develop a new scalable graph partitioning method with good load balancing and communication reduction capability; (2) to study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size.
For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a way similar to the conventional brick-laying technique, which reduces the number of neighboring blocks each block needs to communicate with. The contributions of this research are as follows: (1) we have developed a novel method that scales to a very large problem size while producing high-quality mesh partitions; (2) we measured the performance and scalability of the proposed method on a machine of massive size using a set of actual large complex data sets, where we have scaled to a mesh with 110 million zones using our method. To the best of our knowledge, this is the largest complex mesh to which a partitioning method has been successfully applied; and (3) we have shown that the proposed method can reduce the number of edge cuts by as much as 65%.
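    The two objectives defined in this abstract, minimizing edge cut while keeping group sizes balanced, can be sketched in a few lines (an illustrative toy example, not the authors' code; all names are made up):

```python
from collections import Counter

def edge_cut(edges, part):
    """Count edges whose endpoints fall in different groups."""
    return sum(1 for u, v in edges if part[u] != part[v])

def balance(part, k):
    """Ratio of the largest group size to the ideal size |V|/k."""
    sizes = Counter(part.values())
    return max(sizes.values()) / (len(part) / k)

# Toy graph: a 4-cycle with one diagonal, split into k = 2 groups.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(edge_cut(edges, part))   # 3 edges cross the cut
print(balance(part, 2))        # 1.0: perfectly balanced
```

A partitioner searches for a `part` that minimizes `edge_cut` while keeping `balance` close to 1.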

  2. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the EWB accuracy of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
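    As a concrete reference point, the non-optimised EWB baseline mentioned above can be sketched as follows (a hypothetical illustration; the function names are ours, not from the paper):

```python
def ewb_cuts(values, k):
    """Interior cut points for equal-width-bin (EWB) partitioning."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [lo + i * width for i in range(1, k)]

def assign_bin(x, cuts):
    """Index of the bin that x falls into."""
    return sum(x >= c for c in cuts)

vals = [1.0, 2.0, 4.0, 9.0]
cuts = ewb_cuts(vals, 4)                      # cuts at 3.0, 5.0, 7.0
print([assign_bin(v, cuts) for v in vals])    # [0, 0, 1, 3]
```

The GA, HC and SA variants in the paper instead search over the cut points themselves, scoring each candidate partition by the resulting rough set classification accuracy.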

  3. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
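    For contrast with the quantum approach, the classical decision version of the set partition problem used as the example instance can be checked with a standard subset-sum sweep (an illustrative sketch, unrelated to the adiabatic algorithm itself):

```python
def can_partition(nums):
    """True if nums splits into two subsets with equal sums."""
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                 # subset sums seen so far
    for n in nums:
        reachable |= {s + n for s in reachable if s + n <= target}
    return target in reachable

print(can_partition([3, 1, 4, 2, 2]))   # True, e.g. {3, 1, 2} vs {4, 2}
print(can_partition([1, 2, 5]))         # False
```

This sweep runs in pseudo-polynomial time; the hardness the abstract refers to appears when the numbers carry many significant bits.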

  4. Does History Repeat Itself? Wavelets and the Phylodynamics of Influenza A

    PubMed Central

    Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.

    2012-01-01

    Unprecedented global surveillance of viruses will result in massive sequence data sets that require new statistical methods. These data sets press the limits of Bayesian phylogenetics as the high-dimensional parameters that comprise a phylogenetic tree increase the already sizable computational burden of these techniques. This burden often results in partitioning the data set, for example, by gene, and inferring the evolutionary dynamics of each partition independently, a compromise that results in stratified analyses that depend only on data within a given partition. However, parameter estimates inferred from these stratified models are likely strongly correlated, considering they rely on data from a single data set. To overcome this shortfall, we exploit the existing Monte Carlo realizations from stratified Bayesian analyses to efficiently estimate a nonparametric hierarchical wavelet-based model and learn about the time-varying parameters of effective population size that reflect levels of genetic diversity across all partitions simultaneously. Our methods are applied to complete genome influenza A sequences that span 13 years. We find that broad peaks and trends, as opposed to seasonal spikes, in the effective population size history distinguish individual segments from the complete genome. We also address hypotheses regarding intersegment dynamics within a formal statistical framework that accounts for correlation between segment-specific parameters. PMID:22160768

  5. Shell use and partitioning of two sympatric species of hermit crabs on a tropical mudflat

    NASA Astrophysics Data System (ADS)

    Teoh, Hong Wooi; Chong, Ving Ching

    2014-02-01

Shell use and partitioning of two sympatric hermit crab species (Diogenes moosai and Diogenes lopochir), as determined by shell shape, size and availability, were examined from August 2009 to March 2011 in a tropical mudflat (Malaysia). Shells of 14 gastropod species were used but > 85% comprised shells of Cerithidea cingulata, Nassarius cf. olivaceus, Nassarius jacksonianus, and Thais malayensis. Shell partitioning between hermit crab species, sexes, and developmental stages was evident from occupied shells of different species, shapes, and sizes. Extreme bias in the shell use patterns of males and females of both species of hermit crabs suggests that shell shape, which depends on shell species, is the major determinant of shell use. The hermit crab must, however, fit well into the shell, so compatibility between crab size and shell size becomes crucial. Although shell availability possibly influenced shell use and hermit crab distribution, this is not critical in a tropical setting of high gastropod diversity and abundance.

  6. An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems

    PubMed Central

    Dawson, Kevin J.; Belkhir, Khalid

    2009-01-01

Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals (the sample partition). In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
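    A much-simplified sketch of a maximin-style agglomeration on a posterior co-assignment matrix (illustrative only; the exact linkage algorithm in Partition View has additional structure beyond this greedy loop):

```python
import itertools

def maximin_merge(p):
    """Greedily merge the pair of clusters with the highest minimum
    pairwise co-assignment probability; return the merge history."""
    clusters = [{i} for i in range(len(p))]
    history = []
    while len(clusters) > 1:
        def score(a, b):            # worst pairwise probability
            return min(p[i][j] for i in a for j in b)
        a, b = max(itertools.combinations(clusters, 2),
                   key=lambda ab: score(*ab))
        clusters.remove(a)
        clusters.remove(b)
        clusters.append(a | b)
        history.append((sorted(a | b), score(a, b)))
    return history

# Toy symmetric co-assignment matrix for three individuals.
p = [[1.0, 0.9, 0.2],
     [0.9, 1.0, 0.3],
     [0.2, 0.3, 1.0]]
print(maximin_merge(p))   # [([0, 1], 0.9), ([0, 1, 2], 0.2)]
```

Each merge probability plays the role of the node height in the output forest described above.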

  7. Evolving bipartite authentication graph partitions

    DOE PAGES

    Pope, Aaron Scott; Tauritz, Daniel Remy; Kent, Alexander D.

    2017-01-16

As large-scale enterprise computer networks become more ubiquitous, finding the appropriate balance between user convenience and user access control is an increasingly challenging proposition. Suboptimal partitioning of users’ access and available services contributes to the vulnerability of enterprise networks. Previous edge-cut partitioning methods unduly restrict users’ access to network resources. This paper introduces a novel method of network partitioning, superior to the current state of the art, which minimizes user impact by providing alternate avenues for access that reduce vulnerability. Networks are modeled as bipartite authentication access graphs, and a multi-objective evolutionary algorithm is used to simultaneously minimize the size of large connected components while minimizing overall restrictions on network users. Lastly, results are presented on a real-world data set that demonstrate the effectiveness of the introduced method compared to previous naive methods.

  9. Partitioning an object-oriented terminology schema.

    PubMed

    Gu, H; Perl, Y; Halper, M; Geller, J; Kuo, F; Cimino, J J

    2001-07-01

    Controlled medical terminologies are increasingly becoming strategic components of various healthcare enterprises. However, the typical medical terminology can be difficult to exploit due to its extensive size and high density. The schema of a medical terminology offered by an object-oriented representation is a valuable tool in providing an abstract view of the terminology, enhancing comprehensibility and making it more usable. However, schemas themselves can be large and unwieldy. We present a methodology for partitioning a medical terminology schema into manageably sized fragments that promote increased comprehension. Our methodology has a refinement process for the subclass hierarchy of the terminology schema. The methodology is carried out by a medical domain expert in conjunction with a computer. The expert is guided by a set of three modeling rules, which guarantee that the resulting partitioned schema consists of a forest of trees. This makes it easier to understand and consequently use the medical terminology. The application of our methodology to the schema of the Medical Entities Dictionary (MED) is presented.

  10. Multiphase complete exchange: A theoretical analysis

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1993-01-01

Complete Exchange requires each of N processors to send a unique message to each of the remaining N-1 processors. For a circuit-switched hypercube with N = 2^d processors, the Direct and Standard algorithms for Complete Exchange are optimal for very large and very small message sizes, respectively. For intermediate sizes, a hybrid Multiphase algorithm is better. This carries out Direct exchanges on a set of subcubes whose dimensions are a partition of the integer d. The best such algorithm for a given message size m could hitherto only be found by enumerating all partitions of d. The Multiphase algorithm is analyzed assuming a high performance communication network. It is proved that only algorithms corresponding to equipartitions of d (partitions in which the maximum and minimum elements differ by at most 1) can possibly be optimal. The run times of these algorithms plotted against m form a hull of optimality. It is proved that, although there is an exponential number of partitions, (1) the number of faces on this hull is Theta(sqrt(d)), (2) the hull can be found in Theta(sqrt(d)) time, and (3) once it has been found, the optimal algorithm for any given m can be found in Theta(log d) time. These results provide a very fast technique for minimizing communication overhead in many important applications, such as matrix transpose, Fast Fourier Transform, and ADI.
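    The equipartitions singled out by the analysis are easy to enumerate: for each number of parts j there is exactly one partition of d whose parts differ by at most 1 (an illustrative sketch, not the paper's code):

```python
def equipartitions(d):
    """Yield the unique equipartition of d into j parts, j = 1..d."""
    for j in range(1, d + 1):
        q, r = divmod(d, j)    # r parts of size q+1, j-r parts of size q
        yield [q + 1] * r + [q] * (j - r)

for parts in equipartitions(5):
    print(parts)   # [5], [3, 2], [2, 2, 1], [2, 1, 1, 1], [1, 1, 1, 1, 1]
```

Restricting attention to these d candidates, rather than the exponentially many partitions of d, is what makes the fast hull construction described above possible.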

  11. Mass budget partitioning during explosive eruptions: insights from the 2006 paroxysm of Tungurahua volcano, Ecuador

    NASA Astrophysics Data System (ADS)

    Bernard, Julien; Eychenne, Julia; Le Pennec, Jean-Luc; Narváez, Diego

    2016-08-01

How and how much the mass of juvenile magma is split between vent-derived tephra, PDC deposits and lavas (i.e., mass partition) is related to eruption dynamics and style. Estimating such mass partitioning budgets may prove important for hazard evaluation purposes. We calculated the volume of each product emplaced during the August 2006 paroxysmal eruption of Tungurahua volcano (Ecuador) and converted it into masses using high-resolution grainsize, componentry and density data. This data set is one of the first complete descriptions of mass partitioning associated with a VEI 3 andesitic event. The scoria fall deposit, near-vent agglutinate and lava flow include 28, 16 and 12 wt. % of the erupted juvenile mass, respectively. Much (44 wt. %) of the juvenile material fed Pyroclastic Density Currents (i.e., dense flows, dilute surges and co-PDC plumes), highlighting that tephra fall deposits do not adequately depict the size and fragmentation processes of moderate PDC-forming events. The main parameters controlling the mass partitioning are the type of magmatic fragmentation, conditions of magma ascent, and crater area topography. Comparisons of our data set with other PDC-forming eruptions of different style and magma composition suggest that moderate andesitic eruptions are more prone to produce PDCs, in proportion, than any other eruption type. This finding may be explained by the relatively low magmatic fragmentation efficiency of moderate andesitic eruptions. These mass partitioning data reveal important trends that may be critical for hazard assessment, notably at frequently active andesitic edifices.

  12. Linear Chord Diagrams with Long Chords

    NASA Astrophysics Data System (ADS)

    Sullivan, Everett

    A linear chord diagram of size n is a partition of the first 2n integers into sets of size two. These diagrams appear in many different contexts in combinatorics and other areas of mathematics, particularly knot theory. We explore various constraints that produce diagrams which have no short chords. A number of patterns appear from the results of these constraints which we can prove using techniques ranging from explicit bijections to non-commutative algebra.
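    For small n, the objects in question can be enumerated directly (a brute-force sketch; the "no short chords" constraint is modeled here as every chord having length at least k, which may differ from the paper's exact convention):

```python
def diagrams(points):
    """Yield all linear chord diagrams (pairings) of a point list."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i, second in enumerate(rest):
        for tail in diagrams(rest[:i] + rest[i + 1:]):
            yield [(first, second)] + tail

def count_long(n, k):
    """Diagrams on {1, ..., 2n} whose chords all have length >= k."""
    return sum(1 for d in diagrams(list(range(1, 2 * n + 1)))
               if all(b - a >= k for a, b in d))

print(count_long(2, 1))   # 3 = (2n-1)!! diagrams for n = 2
print(count_long(2, 2))   # 1: only {(1, 3), (2, 4)} survives
```

The unconstrained count is the double factorial (2n-1)!!; the constrained counts are the sequences whose patterns the paper studies.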

  13. Efficient bulk-loading of gridfiles

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Nicol, David M.

    1994-01-01

This paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and on data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient, and is able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. Utilizing a sort of the entire data set, it creates a gridfile without incurring any overflows.

  14. A mechanism-mediated model for carcinogenicity: Model content and prediction of the outcome of rodent carcinogenicity bioassays currently being conducted on 25 organic chemicals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purdy, R.

A hierarchical model consisting of quantitative structure-activity relationships based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model is comprised of quantitative structure-activity relationships (QSARs) based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizibility, atomic radical superdelocalizibility, the lowest unoccupied molecular orbital (LUMO) energy of the hypothesized intermediate nitrenium ion of primary aromatic amines, difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals, and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thio-containing chemicals. A QSAR for these chemicals was developed. This post-test-set modified model correctly predicted 94% of the carcinogens in the test set. This model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the time this article was written. 12 refs., 3 tabs.

  15. Kelly et al. (2016): Simulating the phase partitioning of NH3, HNO3, and HCl with size-resolved particles over northern Colorado in winter

    EPA Pesticide Factsheets

In this study, modeled gas- and aerosol-phase ammonia, nitric acid, and hydrogen chloride are compared to measurements taken during a field campaign conducted in northern Colorado in February and March 2011. We compare the modeled and observed gas-particle partitioning, and assess potential reasons for discrepancies between the model and measurements. This data set contains scripts and data used for each figure in the associated manuscript. Figures are generated using the R project statistical programming language. Data files are in either comma-separated value (CSV) format or netCDF, a standard self-describing binary data format commonly used in the earth and atmospheric sciences. This dataset is associated with the following publication: Kelly, J., K. Baker, C. Nolte, S. Napelenok, W.C. Keene, and A.A.P. Pszenny. Simulating the phase partitioning of NH3, HNO3, and HCl with size-resolved particles over northern Colorado in winter. ATMOSPHERIC ENVIRONMENT. Elsevier Science Ltd, New York, NY, USA, 131: 67-77, (2016).

  16. Cluster formation and drag reduction-proposed mechanism of particle recirculation within the partition column of the bottom spray fluid-bed coater.

    PubMed

    Wang, Li Kun; Heng, Paul Wan Sia; Liew, Celine Valeria

    2015-04-01

    Bottom spray fluid-bed coating is a common technique for coating multiparticulates. Under the quality-by-design framework, particle recirculation within the partition column is one of the main variability sources affecting particle coating and coat uniformity. However, the occurrence and mechanism of particle recirculation within the partition column of the coater are not well understood. The purpose of this study was to visualize and define particle recirculation within the partition column. Based on different combinations of partition gap setting, air accelerator insert diameter, and particle size fraction, particle movements within the partition column were captured using a high-speed video camera. The particle recirculation probability and voidage information were mapped using a visiometric process analyzer. High-speed images showed that particles contributing to the recirculation phenomenon were behaving as clustered colonies. Fluid dynamics analysis indicated that particle recirculation within the partition column may be attributed to the combined effect of cluster formation and drag reduction. Both visiometric process analysis and particle coating experiments showed that smaller particles had greater propensity toward cluster formation than larger particles. The influence of cluster formation on coating performance and possible solutions to cluster formation were further discussed. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  17. Minimum Sample Size Requirements for Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas

    2014-01-01

    An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…

  18. Grain size evolution and convection regimes of the terrestrial planets

    NASA Astrophysics Data System (ADS)

    Rozel, A.; Golabek, G. J.; Boutonnet, E.

    2011-12-01

A new model of grain size evolution has recently been proposed in Rozel et al. 2010. This new approach stipulates that the grain size dynamics is governed by two additive and simultaneous processes: grain growth and dynamic recrystallization. We use the usual normal grain growth laws for the growth part. For dynamic recrystallization, reducing the mean grain size increases the total area of grain boundaries. Grain boundaries carry some surface tension, so some energy is required to decrease the mean grain size. We consider that this energy is available during mechanical work, which is usually considered to produce heat via viscous dissipation. A partitioning parameter f is then required to know what amount of energy is dissipated and what part is converted into surface tension. This study gives a new calibration of the partitioning parameter for major Earth materials involved in the dynamics of the terrestrial planets. Our calibration is in agreement with the published piezometric relations available in the literature (equilibrium grain size versus shear stress). We test this new model of grain size evolution in a set of numerical computations of the dynamics of the Earth using StagYY. We show that the grain size evolution has a major effect on the convection regimes of terrestrial planets.

  19. The Development of the Speaker Independent ARM Continuous Speech Recognition System

    DTIC Science & Technology

    1992-01-01

spoken airborne reconnaissance reports using a speech recognition system based on phoneme-level hidden Markov models (HMMs). Previous versions of the ARM... will involve automatic selection from multiple model sets, corresponding to different speaker types, and that the most rudimentary partition of a... The vocabulary size for the ARM task is 497 words. These words are related to the phoneme-level symbols corresponding to the models in the model set

  20. On models of the genetic code generated by binary dichotomic algorithms.

    PubMed

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
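    A minimal model of a BDA as used above is a yes/no question about a codon; applying m such questions in sequence refines the 64 codons into up to 2^m classes (the example questions below are illustrative, not the BDAs analysed in the paper or in Beady-A):

```python
from collections import Counter
from itertools import product

codons = [''.join(c) for c in product('UCAG', repeat=3)]

bdas = [
    lambda c: c[0] in 'AG',   # is the first base a purine?
    lambda c: c[1] in 'GC',   # is the second base G or C?
]

def classify(codon, questions):
    """Answer vector of all questions: the codon's class label."""
    return tuple(int(q(codon)) for q in questions)

sizes = Counter(classify(c, bdas) for c in codons)
print(sorted(sizes.values()))   # [16, 16, 16, 16]: four classes
```

A single balanced question reproduces the 32/32 dichotomy described in the abstract; sequences of questions yield the finer class counts the search explores.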

  1. A Voxel-Based Filtering Algorithm for Mobile LiDAR Data

    NASA Astrophysics Data System (ADS)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points, in xy-plane, are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
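    The first, block-then-voxel partitioning step can be sketched as follows (a flat dictionary stands in for the paper's octree; the block and voxel sizes are hypothetical):

```python
from collections import defaultdict

def voxelize(points, block_size, voxel_size):
    """Map 2-D block index -> 3-D voxel index -> list of points."""
    blocks = defaultdict(lambda: defaultdict(list))
    for x, y, z in points:
        b = (int(x // block_size), int(y // block_size))      # xy block
        v = (int(x // voxel_size), int(y // voxel_size),
             int(z // voxel_size))                            # 3-D voxel
        blocks[b][v].append((x, y, z))
    return blocks

pts = [(0.2, 0.3, 0.1), (0.25, 0.31, 0.12), (5.0, 5.0, 2.0)]
blocks = voxelize(pts, block_size=2.0, voxel_size=0.5)
print(len(blocks))                      # 2 occupied blocks
print(len(blocks[(0, 0)][(0, 0, 0)]))  # 2 points share one voxel
```

The subsequent upward-growing and curvature-refinement steps then operate per voxel rather than per point, which is where the computational savings come from.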

  2. Partition resampling and extrapolation averaging: approximation methods for quantifying gene expression in large numbers of short oligonucleotide arrays.

    PubMed

    Goldstein, Darlene R

    2006-10-01

    Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.

  3. Thermodynamic factors in partitioning and rejection of organic compounds by polyamide composite membranes.

    PubMed

    Ben-David, Adi; Oren, Yoram; Freger, Viatcheslav

    2006-11-15

    The paper analyzes the mechanism of partitioning and rejection of organic solutes by polyamide membranes for reverse osmosis and nanofiltration. The partitioning of homologous series of alcohols and polyols, in which polarity changes with size in opposite ways, was measured using attenuated total reflection IR spectroscopy. The results show that the partitioning of polyols decreases monotonically with size, whereas for alcohols it is non-monotonic: partitioning decreases slightly for small C1-C3 alcohols and then increases sharply for larger alcohols. These results may be explained by assuming a heterogeneous structure of polyamide, comprising a hydrophobic polyamide matrix and a polar internal aqueous phase. The partitioning data could consistently explain the results of rejection in standard filtration experiments. They clearly demonstrate that high/low partitioning may play a significant role in achieving a low/high rejection of organics. In particular, this points to the need to account for the partitioning effect when using molecular probes such as polyols or sugars to estimate the effective "pore" size or molecular weight cutoff of a membrane, and when choosing or developing organic-rejecting membranes.

  4. On the star partition dimension of comb product of cycle and path

    NASA Astrophysics Data System (ADS)

    Alfarisi, Ridho; Darmaji

    2017-08-01

    Let G = (V, E) be a connected graph with vertex set V(G), edge set E(G), and S ⊆ V(G). Given an ordered partition Π = {S1, S2, S3, …, Sk} of the vertex set V of G, the representation of a vertex v ∈ V with respect to Π is the vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Si) denotes the distance between the vertex v and the set Si, i.e., d(v, Si) = min{d(v, x)|x ∈ Si}. A partition Π of V(G) is a resolving partition if distinct vertices of G have distinct representations, i.e., for every pair of vertices u, v ∈ V(G), r(u|Π) ≠ r(v|Π). The minimum k for which V(G) admits a resolving k-partition is the partition dimension of G, denoted by pd(G). A resolving partition Π = {S1, S2, S3, …, Sk} is called a star resolving partition of G if each subgraph induced by Si, 1 ≤ i ≤ k, is a star. The minimum k for which there exists a star resolving partition of V(G) is the star partition dimension of G, denoted by spd(G). Finding the star partition dimension of G is NP-hard. In this paper, we determine the star partition dimension of the comb products of a cycle and a path, namely Cm⊳Pn and Pn⊳Cm, for n ≥ 2 and m ≥ 3.
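    The definitions above translate directly into code: d(v, Si) is a multi-source BFS distance, and Π resolves G when the representation vectors r(v|Π) are pairwise distinct. A sketch for unweighted graphs given as adjacency lists (names are illustrative, not from the paper):

```python
from collections import deque

def distances_from_set(adj, S):
    """Multi-source BFS giving d(v, S) = min over x in S of d(v, x)."""
    dist = {v: None for v in adj}
    queue = deque(S)
    for s in S:
        dist[s] = 0
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if dist[w] is None:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_resolving_partition(adj, partition):
    """True if the representations r(v|Pi) are pairwise distinct."""
    per_set = [distances_from_set(adj, S) for S in partition]
    reps = [tuple(d[v] for d in per_set) for v in adj]
    return len(set(reps)) == len(reps)

# Path P3: the partition {{1}, {2, 3}} resolves it; the trivial one does not.
path3 = {1: [2], 2: [1, 3], 3: [2]}
```

    Exhaustively testing all k-partitions this way gives a brute-force pd(G) check for small graphs, consistent with the problem being NP-hard in general.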

  5. Spatial coding-based approach for partitioning big spatial data in Hadoop

    NASA Astrophysics Data System (ADS)

    Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai

    2017-09-01

    Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, the skewed distribution of spatial data and the varying volume of spatial vector objects make it a significant challenge to ensure both optimal spatial-operation performance and data balance in the cluster. To tackle this problem, we propose a spatial coding-based approach for partitioning big spatial data in Hadoop. The approach first compresses the whole body of big spatial data using a spatial coding matrix to create a sensing information set (SIS), including the spatial code, size, count, and other information. The SIS is then employed to build a spatial partitioning matrix, which is finally used to split all spatial objects into the different partitions in the cluster. With this approach, neighbouring spatial objects can be partitioned into the same block, while data skew in the Hadoop distributed file system (HDFS) is minimized. The approach is compared against random-sampling-based partitioning in a case study, using three measures: spatial index quality, data skew in HDFS, and range query performance. The experimental results show that our method based on the spatial coding technique improves the query performance on big spatial data, as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it can also efficiently support any other distributed big spatial data system.

  6. Partitioning heritability by functional annotation using genome-wide association summary statistics.

    PubMed

    Finucane, Hilary K; Bulik-Sullivan, Brendan; Gusev, Alexander; Trynka, Gosia; Reshef, Yakir; Loh, Po-Ru; Anttila, Verneri; Xu, Han; Zang, Chongzhi; Farh, Kyle; Ripke, Stephan; Day, Felix R; Purcell, Shaun; Stahl, Eli; Lindstrom, Sara; Perry, John R B; Okada, Yukinori; Raychaudhuri, Soumya; Daly, Mark J; Patterson, Nick; Neale, Benjamin M; Price, Alkes L

    2015-11-01

    Recent work has demonstrated that some functional categories of the genome contribute disproportionately to the heritability of complex diseases. Here we analyze a broad set of functional elements, including cell type-specific elements, to estimate their polygenic contributions to heritability in genome-wide association studies (GWAS) of 17 complex diseases and traits with an average sample size of 73,599. To enable this analysis, we introduce a new method, stratified LD score regression, for partitioning heritability from GWAS summary statistics while accounting for linked markers. This new method is computationally tractable at very large sample sizes and leverages genome-wide information. Our findings include a large enrichment of heritability in conserved regions across many traits, a very large immunological disease-specific enrichment of heritability in FANTOM5 enhancers and many cell type-specific enrichments, including significant enrichment of central nervous system cell types in the heritability of body mass index, age at menarche, educational attainment and smoking behavior.

  7. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Parallelization is therefore needed to speed up a calculation that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
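    The advantage of the hypergraph view can be made concrete with the column-net model for row-wise matrix-vector partitioning: each column whose rows are spread over several parts costs one communicated vector entry per extra part, a cut metric that counts communication volume exactly, which a plain graph edge cut cannot. A small CPU-side sketch (illustrative only; the paper's CUDA implementation is not reproduced):

```python
from collections import defaultdict

def spmv_comm_volume(rows, part):
    """Column-net communication volume of a row-wise partition of y = A x:
    each column touched by p > 1 parts contributes p - 1 words."""
    touched_by = defaultdict(set)
    for r, cols in rows.items():
        for c in cols:
            touched_by[c].add(part[r])
    return sum(len(p) - 1 for p in touched_by.values())

# Nonzero column indices per row of a small 4x4 sparse matrix.
rows = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0]}
good = spmv_comm_volume(rows, {0: 0, 1: 0, 2: 1, 3: 1})  # contiguous rows
bad = spmv_comm_volume(rows, {0: 0, 1: 1, 2: 0, 3: 1})   # interleaved rows
```

    A hypergraph partitioner chooses the row partition that minimizes exactly this quantity subject to load balance.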

  8. Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution

    PubMed Central

    Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen

    2014-01-01

    Polydimethylsiloxane (PDMS) is commonly used as the coated polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and the aqueous solution were compiled from the literature sources. The correlation analysis for partition coefficients was conducted to interpret the effect of their physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated to the polarizability of organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to appropriately predict the partition coefficients of 61 organic compounds for the training set. The predictive ability of the empirical model was demonstrated by using it on a test set of 26 chemicals not included in the training set. The empirical model, applying the straightforward calculated molecular descriptors, for estimating the PDMS-water partition coefficient will contribute to the practical applications of the SPME technique. PMID:24534804

  9. Study of energy conversion and partitioning in the magnetic reconnection layer of a laboratory plasma

    DOE PAGES

    Yamada, Masaaki; Yoo, Jongsoo; Jara-Almonte, Jonathan; ...

    2015-05-15

    The most important feature of magnetic reconnection is that it energizes plasma particles by converting magnetic energy to particle energy; the exact mechanisms by which this happens are yet to be determined despite a long history of reconnection research. Recently, we reported our results on energy conversion and partitioning in a laboratory reconnection layer in a short communication [Yamada et al., Nat. Commun. 5, 4474 (2014)]. The present paper is a detailed elaboration of that report, together with an additional dataset with different boundary sizes. Our experimental study of the reconnection layer is carried out in the two-fluid physics regime, where ions and electrons move quite differently. We have observed that the conversion of magnetic energy occurs across a region significantly larger than the narrow electron diffusion region. A saddle-shaped electrostatic potential profile exists in the reconnection plane, and ions are accelerated by the resulting electric field at the separatrices. These accelerated ions are then thermalized by re-magnetization in the downstream region. A quantitative inventory of the converted energy is presented for a reconnection layer with a well-defined, variable boundary. We also carried out a systematic study of the effects of boundary conditions on the energy inventory. This study concludes that about 50% of the inflowing magnetic energy is converted to particle energy, 2/3 of which is ultimately transferred to ions and 1/3 to electrons. When assisted by another set of magnetic reconnection experiment data and by numerical simulations with different sizes of monitoring box, it is also observed that these features of energy conversion and partitioning do not depend on the size of the monitoring boundary across the range tested, from 1.5 to 4 ion skin depths.

  10. Hardware Index to Set Partition Converter

    DTIC Science & Technology

    2013-01-01

    Published in: Brisk, J.G. de Figueiredo Coutinho, P.C. Diniz (Eds.): ARC 2013, LNCS 7806, pp. 72–83, © Springer-Verlag Berlin Heidelberg 2013. Fragments of the cited reference list include: "… Report … 374 (1990)"; 13. Orlov, M.: Efficient generation of set partitions (March 2002), http://www.cs.bgu.ac.il/~orlovm/papers/partitions.pdf; 14. Reingold, E. …

  11. A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Steinley, Douglas

    2007-01-01

    Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
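    For reference, the WCSS criterion and the most familiar heuristic for it, Lloyd's k-means, can be sketched as follows (a generic illustration; the specific heuristic procedures compared in the article are not reproduced here):

```python
import random

def wcss(data, labels, k):
    """Within-cluster sum of squared deviations from cluster centroids."""
    total = 0.0
    for j in range(k):
        members = [x for x, l in zip(data, labels) if l == j]
        if not members:
            continue
        c = [sum(col) / len(members) for col in zip(*members)]
        total += sum(sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for x in members)
    return total

def kmeans(data, k, iters=100, seed=0):
    """Lloyd's heuristic for WCSS: alternate assignment and centroid update."""
    rng = random.Random(seed)
    centroids = rng.sample(data, k)
    labels = [0] * len(data)
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda j: sum((xi - ci) ** 2
                                        for xi, ci in zip(x, centroids[j])))
                  for x in data]
        new = []
        for j in range(k):
            members = [x for x, l in zip(data, labels) if l == j]
            new.append(tuple(sum(col) / len(members) for col in zip(*members))
                       if members else centroids[j])
        if new == centroids:
            break
        centroids = new
    return labels, centroids

# Two well-separated clusters; restarting from several seeds is the usual
# guard against poor local minima.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels, centroids = kmeans(data, 2)
```

    Each iteration can only decrease WCSS, so the heuristic converges, but only to a local minimum, which is why the article compares alternative procedures.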

  12. On the partition dimension of comb product of path and complete graph

    NASA Astrophysics Data System (ADS)

    Darmaji, Alfarisi, Ridho

    2017-08-01

    Let v be a vertex of a connected graph G = (V, E) with vertex set V(G), edge set E(G), and let S ⊆ V(G). Given an ordered partition Π = {S1, S2, S3, …, Sk} of the vertex set V of G, the representation of a vertex v ∈ V with respect to Π is the vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Si) denotes the distance between the vertex v and the set Si, i.e., d(v, Si) = min{d(v, x)|x ∈ Si}. A partition Π of V(G) is a resolving partition if distinct vertices of G have distinct representations, i.e., for every pair of vertices u, v ∈ V(G), r(u|Π) ≠ r(v|Π). The minimum k for which V(G) admits a resolving k-partition is the partition dimension of G, denoted by pd(G). Finding the partition dimension of G is NP-hard. In this paper, we determine the partition dimension of the comb product of a path and a complete graph. The results show that, for the comb products of the complete graph Km and the path Pn, pd(Km⊳Pn) = m for m ≥ 3 and n ≥ 2, and pd(Pn⊳Km) = m for m ≥ 3, n ≥ 2 and m ≥ n.

  13. Inter-method Performance Study of Tumor Volumetry Assessment on Computed Tomography Test-retest Data

    PubMed Central

    Buckler, Andrew J.; Danagoulian, Jovanna; Johnson, Kjell; Peskin, Adele; Gavrielides, Marios A.; Petrick, Nicholas; Obuchowski, Nancy A.; Beaumont, Hubert; Hadjiiski, Lubomir; Jarecha, Rudresh; Kuhnigk, Jan-Martin; Mantri, Ninad; McNitt-Gray, Michael; Moltz, Jan Hendrik; Nyiri, Gergely; Peterson, Sam; Tervé, Pierre; Tietjen, Christian; von Lavante, Etienne; Ma, Xiaonan; Pierre, Samantha St.; Athelogou, Maria

    2015-01-01

    Rationale and objectives: Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semi-automated lung tumor volume measurement algorithms on clinical thoracic CT datasets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) CT Volumetry Profile. Materials and methods: Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. Relative magnitudes of the various sources of variability were estimated using a linear mixed effects model. Segmentation boundaries were compared to give developers a basis on which to optimize algorithm performance. Results: Intra-algorithm repeatability ranged from 13% (best performing) to 100% (worst performing), with most algorithms demonstrating improved repeatability as tumor size increased. Inter-algorithm reproducibility was determined in three partitions and found to be 58% for the four best performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the worst performer were included. The best performing partition performed markedly better on tumors with equivalent diameters above 40 mm. Larger tumors benefited from human editing but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not just in overall volume but in detail. Conclusions: Nine of the twelve participating algorithms pass precision requirements similar to those indicated in the QIBA Profile, with the caveat that the current study was not designed to explicitly evaluate algorithm Profile conformance. Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes above 10 mm. No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, though the partition comprising the best performing algorithms did meet this requirement above a tumor size of approximately 40 mm. PMID:26376841

  14. Standardized Sky Partitioning for the Next Generation Astronomy and Space Science Archives

    NASA Technical Reports Server (NTRS)

    Lal, Nand (Technical Monitor); McLean, Brian

    2004-01-01

    The Johns Hopkins University and the Space Telescope Science Institute worked together on this project to develop a library of standard software for data archives that will benefit the wider astronomical community. The goal was to develop and distribute a software library providing a common system for partitioning and indexing the sky into manageably sized regions and supporting complex queries on the objects stored in this system. Whilst ongoing maintenance work will continue, the primary goal has been completed. Most of the next-generation sky surveys in the different wavelengths, such as 2MASS, GALEX, SDSS, GSC-II, DPOSS and FIRST, have agreed on this common set of utilities. In this final report, we summarize the work elements assigned to the STScI project team.

  15. Normalized Cut Algorithm for Automated Assignment of Protein Domains

    NASA Technical Reports Server (NTRS)

    Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a novel computational method for automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful in image-partitioning applications. This graph-theory-based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly connected components. A computer implementation of our method, tested on the standard comparison set of proteins from the literature, shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as its reliance on few adjustable parameters, linear run time with respect to the size of the protein, and reduced complexity compared to other graph-theory-based algorithms, make it an attractive tool for structural biologists.

  16. Genetic variation in tree structure and its relation to size in Douglas-fir: I. Biomass partitioning, foliage efficiency, stem form, and wood density.

    Treesearch

    J.B. St. Clair

    1994-01-01

    Genetic variation and covariation among traits of tree size and structure were assessed in an 18-year-old Douglas-fir (Pseudotsuga menziesii var. menziesii (Mirb.) Franco) genetic test in the Coast Range of Oregon. Considerable genetic variation was found in size, biomass partitioning, and wood density, and genetic gains may be...

  17. Allan Variance Calculation for Nonuniformly Spaced Input Data

    DTIC Science & Technology

    2015-01-01

    … τ (tau). First, the set of gyro values is partitioned into bins of duration τ. For example, if the sampling duration τ is 2 sec and there are 4,000 … [Allan] Variance Calculation: For each value of τ, the conventional AV calculation partitions the gyro data sets into bins with approximately τ/Δt … value of Δt. Therefore, a new way must be found to partition the gyro data sets into bins. The basic concept behind the modified AV calculation is …
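    The conventional binning computation described in the excerpt can be sketched for uniformly spaced samples (the report's modified calculation for nonuniform spacing is not reproduced; names are illustrative):

```python
def allan_variance(rates, dt, tau):
    """Conventional non-overlapping Allan variance: partition the rate
    samples into bins of duration tau (m = tau/dt samples each), average
    each bin, and halve the mean squared successive-bin difference."""
    m = int(round(tau / dt))
    nbins = len(rates) // m
    means = [sum(rates[i * m:(i + 1) * m]) / m for i in range(nbins)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(nbins - 1)]
    return 0.5 * sum(diffs) / len(diffs)
```

    The dependence on a fixed Δt in `m = tau/dt` is exactly what breaks for nonuniformly spaced input, motivating the modified partitioning the report develops.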

  18. Scheduling multicore workload on shared multipurpose clusters

    NASA Astrophysics Data System (ADS)

    Templon, J. A.; Acosta-Silva, C.; Flix Molina, J.; Forti, A. C.; Pérez-Calero Yzquierdo, A.; Starink, R.

    2015-12-01

    With the advent of workloads containing explicit requests for multiple cores in a single grid job, grid sites faced a new set of challenges in workload scheduling. The most common batch schedulers deployed at HEP computing sites do a poor job at multicore scheduling when using only the native capabilities of those schedulers. This paper describes how efficient multicore scheduling was achieved at the sites the authors represent, by implementing dynamically-sized multicore partitions via a minimalistic addition to the Torque/Maui batch system already in use at those sites. The paper further includes example results from use of the system in production, as well as measurements on the dependence of performance (especially the ramp-up in throughput for multicore jobs) on node size and job size.

  19. Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael

    2015-05-15

    We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession, thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures, thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures, including the mean out degree and the variance of out degrees, can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
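    The transformation can be sketched as follows: each delay vector is reduced to its ordinal (rank-order) pattern, the patterns become nodes, and temporal succession defines weighted directed edges (a minimal illustration, not the authors' code):

```python
from collections import defaultdict

def ordinal_pattern(window):
    """Rank-order symbol of a window, e.g. (1.0, 3.0, 2.0) -> (0, 2, 1)."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def ordinal_partition_network(series, dim=3, lag=1):
    """Map each delay vector to its ordinal pattern (a node) and join
    temporally successive patterns with weighted directed edges, giving a
    Markov-chain-like representation of the time series."""
    span = (dim - 1) * lag
    symbols = [ordinal_pattern(tuple(series[i + j * lag] for j in range(dim)))
               for i in range(len(series) - span)]
    edges = defaultdict(int)
    for a, b in zip(symbols, symbols[1:]):
        edges[(a, b)] += 1
    return set(symbols), dict(edges)

nodes, edges = ordinal_partition_network([1, 2, 3, 2, 1, 2, 3, 2], dim=3, lag=1)
```

    Normalizing each node's outgoing edge weights yields the Markov transition probabilities, from which measures such as mean out degree follow directly.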

  20. Recurrence relations in one-dimensional Ising models.

    PubMed

    da Conceição, C M Silva; Maia, R N P

    2017-09-01

    The exact finite-size partition function for the nonhomogeneous one-dimensional (1D) Ising model is found through an approach using algebra operators. Specifically, in this paper we show that the partition function can be computed through a trace from a linear second-order recurrence relation with nonconstant coefficients in matrix form. A relation between the finite-size partition function and the generalized Lucas polynomials is found for the simple homogeneous model, thus establishing a recursive formula for the partition function. This is an important property and it might indicate the possible existence of recurrence relations in higher-dimensional Ising models. Moreover, assuming quenched disorder for the interactions within the model, the quenched averaged magnetic susceptibility displays a nontrivial behavior due to changes in the ferromagnetic concentration probability.
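    The trace structure of the finite-size partition function can be illustrated with the standard 2×2 transfer-matrix product for a zero-field Ising ring, whose trace reproduces the closed form Z_N = (2 cosh βJ)^N + (2 sinh βJ)^N in the homogeneous case (a textbook sketch, not the paper's operator-algebra recurrence):

```python
import math

def ising_ring_Z(couplings, beta):
    """Z = Tr(prod_i T_i) for a 1-D Ising ring in zero field, where
    T_i = [[exp(beta*J_i), exp(-beta*J_i)], [exp(-beta*J_i), exp(beta*J_i)]];
    nonhomogeneous couplings J_i are allowed."""
    T = [[1.0, 0.0], [0.0, 1.0]]
    for J in couplings:
        a, b = math.exp(beta * J), math.exp(-beta * J)
        Ti = [[a, b], [b, a]]
        T = [[sum(T[i][k] * Ti[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return T[0][0] + T[1][1]

N, J, beta = 5, 1.0, 0.3
Z = ising_ring_Z([J] * N, beta)
exact = (2 * math.cosh(beta * J)) ** N + (2 * math.sinh(beta * J)) ** N
```

    Unrolling the matrix product row by row gives precisely a linear second-order recurrence with nonconstant coefficients, the structure the paper exploits.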

  1. Unsupervised segmentation of MRI knees using image partition forests

    NASA Astrophysics Data System (ADS)

    Marčan, Marija; Voiculescu, Irina

    2016-03-01

    Nowadays many people are affected by arthritis, a condition of the joints with limited prevention measures, but with various options of treatment the most radical of which is surgical. In order for surgery to be successful, it can make use of careful analysis of patient-based models generated from medical images, usually by manual segmentation. In this work we show how to automate the segmentation of a crucial and complex joint -- the knee. To achieve this goal we rely on our novel way of representing a 3D voxel volume as a hierarchical structure of partitions which we have named Image Partition Forest (IPF). The IPF contains several partition layers of increasing coarseness, with partitions nested across layers in the form of adjacency graphs. On the basis of a set of properties (size, mean intensity, coordinates) of each node in the IPF we classify nodes into different features. Values indicating whether or not any particular node belongs to the femur or tibia are assigned through node filtering and node-based region growing. So far we have evaluated our method on 15 MRI knee images. Our unsupervised segmentation compared against a hand-segmented gold standard has achieved an average Dice similarity coefficient of 0.95 for femur and 0.93 for tibia, and an average symmetric surface distance of 0.98 mm for femur and 0.73 mm for tibia. The paper also discusses ways to introduce stricter morphological and spatial conditioning in the bone labelling process.

  2. Certificate Revocation Using Fine Grained Certificate Space Partitioning

    NASA Astrophysics Data System (ADS)

    Goyal, Vipul

    A new certificate revocation system is presented. The basic idea is to divide the certificate space into several partitions, the number of partitions being dependent on the PKI environment. Each partition contains the status of a set of certificates. A partition may either expire or be renewed at the end of a time slot. This is done efficiently using hash chains.
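    The hash-chain mechanism can be sketched generically (an illustration of the idea, not the paper's exact protocol): the CA embeds the chain anchor x_n = H^n(x_0) in the certificate and, at time slot i, releases x_{n-i} for partitions that remain valid; a verifier hashes the token i times and compares with the anchor.

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def make_chain(seed, n):
    """x_0 .. x_n with x_{k+1} = H(x_k); x_n is the certificate anchor."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

def still_valid(anchor, token, i):
    """Accept if hashing the slot-i renewal token i times gives the anchor."""
    for _ in range(i):
        token = H(token)
    return token == anchor

chain = make_chain(b"ca-secret-seed", 365)  # e.g. one slot per day
anchor = chain[-1]                          # published in the certificate
```

    Because H is one-way, releasing x_{n-i} proves validity through slot i without letting anyone forge a token for a later slot.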

  3. Pesticides in the atmosphere: a comparison of gas-particle partitioning and particle size distribution of legacy and current-use pesticides

    NASA Astrophysics Data System (ADS)

    Degrendele, C.; Okonski, K.; Melymuk, L.; Landlová, L.; Kukučka, P.; Audy, O.; Kohoutek, J.; Čupr, P.; Klánová, J.

    2016-02-01

    This study presents a comparison of seasonal variation, gas-particle partitioning, and particle-phase size distribution of organochlorine pesticides (OCPs) and current-use pesticides (CUPs) in air. Two years (2012/2013) of weekly air samples were collected at a background site in the Czech Republic using a high-volume air sampler. To study the particle-phase size distribution, air samples were also collected at an urban and a rural site in the area of Brno, Czech Republic, using a cascade impactor separating atmospheric particulates into six size fractions. Major differences were found in the atmospheric distribution of OCPs and CUPs. The atmospheric concentrations of CUPs were driven by agricultural activities, while secondary sources such as volatilization from surfaces governed the atmospheric concentrations of OCPs. Moreover, clear differences were observed in gas-particle partitioning; CUP partitioning was influenced by adsorption onto mineral surfaces while OCPs were mainly partitioning to aerosols through absorption. A predictive method for estimating the gas-particle partitioning has been derived and is proposed for polar and non-polar pesticides. Finally, while OCPs and the majority of CUPs were largely found on fine particles, four CUPs (carbendazim, isoproturon, prochloraz, and terbuthylazine) had higher concentrations on coarse particles (> 3.0 µm), which may be related to the pesticide application technique. This finding is particularly important and should be further investigated, given that large particles result in lower risks from inhalation (regardless of the toxicity of the pesticide) and lower potential for long-range atmospheric transport.

  4. Allometry, sexual size dimorphism, and niche partitioning in the Mediterranean gecko (Hemidactylus turcicus)

    Treesearch

    James B. Johnson; Lance D. McBrayer; Daniel Saenz

    2005-01-01

    Hemidactylus turcicus is a small gekkonid lizard native to the Middle East and Asia that is known to exhibit sexual dimorphism in head size. Several potential explanations exist for the evolution and maintenance of sexual dimorphism in lizards. We tested 2 of these competing hypotheses concerning diet partitioning and differential growth. We found no...

  5. The partition dimension of cycle books graph

    NASA Astrophysics Data System (ADS)

    Santoso, Jaya; Darmaji

    2018-03-01

    Let G be a nontrivial connected graph with vertex set V(G) and edge set E(G). For S ⊆ V(G) and v ∈ V(G), the distance between v and S is d(v, S) = min{d(v, x)|x ∈ S}. For an ordered partition Π = {S1, S2, S3, …, Sk} of V(G), the representation of v with respect to Π is defined by r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)). The partition Π is called a resolving partition of G if all representations of vertices are distinct. The partition dimension pd(G) is the smallest integer k such that G has a resolving partition with k members. In this research, we determine the partition dimension of the cycle books graph B(Cr, m), the graph consisting of m copies of the cycle Cr sharing a common path P2. It is shown that pd(B(C3, m)) is 3 for m = 2, 3, and m for m ≥ 4; pd(B(C4, m)) is 3 + 2k for m = 3k + 2, 4 + 2(k - 1) for m = 3k + 1, and 3 + 2(k - 1) for m = 3k; and pd(B(C5, m)) is m + 1.

  6. Direct optimization, affine gap costs, and node stability.

    PubMed

    Aagesen, Lone

    2005-09-01

    The outcome of a phylogenetic analysis based on DNA sequence data is highly dependent on the homology-assignment step and may vary with alignment parameter costs. Robustness to changes in parameter costs is therefore a desired quality of a data set because the final conclusions will be less dependent on selecting a precise optimal cost set. Here, node stability is explored in relationship to separate versus combined analysis in three different data sets, all including several data partitions. Robustness to changes in cost sets is measured as number of successive changes that can be made in a given cost set before a specific clade is lost. The changes are in all cases base change cost, gap penalties, and adding/removing/changing affine gap costs. When combining data partitions, the number of clades that appear in the entire parameter space is not remarkably increased, in some cases this number even decreased. However, when combining data partitions the trees from cost sets including affine gap costs were always more similar than the trees were from cost sets without affine gap costs. This was not the case when the data partitions were analyzed independently. When data sets were combined approximately 80% of the clades found under cost sets including affine gap costs resisted at least one change to the cost set.

  7. QSPR modeling of octanol/water partition coefficient for vitamins by optimal descriptors calculated with SMILES.

    PubMed

    Toropov, A A; Toropova, A P; Raska, I

    2008-04-01

    Simplified molecular input line entry system (SMILES) has been utilized in constructing quantitative structure-property relationships (QSPR) for octanol/water partition coefficient of vitamins and organic compounds of different classes by optimal descriptors. Statistical characteristics of the best model (vitamins) are the following: n=17, R(2)=0.9841, s=0.634, F=931 (training set); n=7, R(2)=0.9928, s=0.773, F=690 (test set). Using this approach for modeling octanol/water partition coefficient for a set of organic compounds gives a model that is statistically characterized by n=69, R(2)=0.9872, s=0.156, F=5184 (training set) and n=70, R(2)=0.9841, s=0.179, F=4195 (test set).

  8. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. 
Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
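
    The taxon coverage density defined above (the proportion of taxon-by-gene positions with any data present) reduces to a one-line calculation. A minimal sketch, using an illustrative presence/absence matrix rather than any of the surveyed data sets:

```python
# Sketch: taxon coverage density of a taxon-by-gene presence/absence
# matrix (proportion of taxon-by-gene positions with any data present).
# The matrix below is illustrative only.
def coverage_density(matrix):
    """matrix: list of rows (taxa), each a list of 0/1 gene flags."""
    cells = [flag for row in matrix for flag in row]
    return sum(cells) / len(cells)

# 3 taxa x 4 genes; the second taxon is missing half its genes.
presence = [
    [1, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
]
print(coverage_density(presence))  # 9 of 12 cells filled -> 0.75
```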

  9. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa

    2012-02-01

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g., sweeping procedures) which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of n_svd small subcluster vectors using singular value decomposition. For low entanglement entropy S_ee (satisfied by short-range Hamiltonians), the truncation error is bounded by exp(-n_svd^(1/S_ee)). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15 GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007
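
    The compression step described above can be sketched in a few lines. This is an illustrative reconstruction of the idea (reshape the vector over the two subclusters, truncate the SVD), not the authors' implementation:

```python
import numpy as np

# Sketch (assumption, not the paper's code): compressing a state vector on
# a lattice split into subclusters A and B. Reshaping the length-(dA*dB)
# vector into a dA x dB matrix and truncating its SVD to n_svd terms
# yields two small sets of subcluster vectors, as in the Lanczos-SVD idea.
def compress(psi, dA, dB, n_svd):
    M = psi.reshape(dA, dB)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = min(n_svd, len(s))
    return U[:, :k] * s[:k], Vt[:k]      # two subcluster vector sets

rng = np.random.default_rng(0)
psi = rng.normal(size=16)
psi /= np.linalg.norm(psi)               # normalized state on 4 x 4 split
A, B = compress(psi, 4, 4, n_svd=2)
err = np.linalg.norm(psi - (A @ B).ravel())   # truncation error
print(err < 1.0)                         # True
```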

  10. Entanglement, replicas, and Thetas

    NASA Astrophysics Data System (ADS)

    Mukhi, Sunil; Murthy, Sameer; Wu, Jie-Qiang

    2018-01-01

    We compute the single-interval Rényi entropy (replica partition function) for free fermions in 1+1d at finite temperature and finite spatial size by two methods: (i) using the higher-genus partition function on the replica Riemann surface, and (ii) using twist operators on the torus. We compare the two answers for a restricted set of spin structures, leading to a non-trivial proposed equivalence between higher-genus Siegel Θ-functions and Jacobi θ-functions. We exhibit this proposal and provide substantial evidence for it. The resulting expressions can be elegantly written in terms of Jacobi forms. Thereafter we argue that the correct Rényi entropy for modular-invariant free-fermion theories, such as the Ising model and the Dirac CFT, is given by the higher-genus computation summed over all spin structures. The result satisfies the physical checks of modular covariance, the thermal entropy relation, and Bose-Fermi equivalence.
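
    As a small numerical companion to the replica quantity Tr ρ^n used above, the sketch below evaluates the textbook Rényi entropy S_n = log(Tr ρ^n)/(1 - n) for one spin of a Bell pair. It illustrates the definition only, not the paper's CFT computation:

```python
import numpy as np

# Sketch (generic definition, not the paper's calculation): the Renyi
# entropy S_n = log(Tr rho^n) / (1 - n), where Tr rho^n plays the role
# of the replica partition function.
def renyi(rho, n):
    return np.log(np.trace(np.linalg.matrix_power(rho, n)).real) / (1 - n)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
rho_full = np.outer(bell, bell)
# Partial trace over the second spin: rho_A[a, a'] = sum_b rho[2a+b, 2a'+b]
rho_A = np.array(
    [[rho_full[0, 0] + rho_full[1, 1], rho_full[0, 2] + rho_full[1, 3]],
     [rho_full[2, 0] + rho_full[3, 1], rho_full[2, 2] + rho_full[3, 3]]])
print(renyi(rho_A, 2))   # log(2) ~ 0.693 for a maximally mixed qubit
```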

  11. Size-dependent forced PEG partitioning into channels: VDAC, OmpC, and α-hemolysin

    DOE PAGES

    Aksoyoglu, M. Alphan; Podgornik, Rudolf; Bezrukov, Sergey M.; ...

    2016-07-27

    Nonideal polymer mixtures of PEGs of different molecular weights partition differently into nanosize protein channels. Here, we assess the validity of the recently proposed theoretical approach of forced partitioning for three structurally different beta-barrel channels: voltage-dependent anion channel from outer mitochondrial membrane VDAC, bacterial porin OmpC (outer membrane protein C), and bacterial channel-forming toxin alpha-hemolysin. Our interpretation is based on the idea that relatively less-penetrating polymers push the more easily penetrating ones into nanosize channels in excess of their bath concentration. Comparison of the theory with experiments is excellent for VDAC. Polymer partitioning data for the other two channels are consistent with theory if additional assumptions regarding the energy penalty of pore penetration are included. In conclusion, the obtained results demonstrate that the general concept of "polymers pushing polymers" is helpful in understanding and quantification of concrete examples of size-dependent forced partitioning of polymers into protein nanopores.

  12. Size-dependent forced PEG partitioning into channels: VDAC, OmpC, and α-hemolysin

    PubMed Central

    Aksoyoglu, M. Alphan; Podgornik, Rudolf; Bezrukov, Sergey M.; Gurnev, Philip A.; Muthukumar, Murugappan; Parsegian, V. Adrian

    2016-01-01

    Nonideal polymer mixtures of PEGs of different molecular weights partition differently into nanosize protein channels. Here, we assess the validity of the recently proposed theoretical approach of forced partitioning for three structurally different β-barrel channels: voltage-dependent anion channel from outer mitochondrial membrane VDAC, bacterial porin OmpC (outer membrane protein C), and bacterial channel-forming toxin α-hemolysin. Our interpretation is based on the idea that relatively less-penetrating polymers push the more easily penetrating ones into nanosize channels in excess of their bath concentration. Comparison of the theory with experiments is excellent for VDAC. Polymer partitioning data for the other two channels are consistent with theory if additional assumptions regarding the energy penalty of pore penetration are included. The obtained results demonstrate that the general concept of “polymers pushing polymers” is helpful in understanding and quantification of concrete examples of size-dependent forced partitioning of polymers into protein nanopores. PMID:27466408

  13. Pesticides in the atmosphere: a comparison of gas-particle partitioning and particle size distribution of legacy and current-use pesticides

    NASA Astrophysics Data System (ADS)

    Degrendele, C.; Okonski, K.; Melymuk, L.; Landlová, L.; Kukučka, P.; Audy, O.; Kohoutek, J.; Čupr, P.; Klánová, J.

    2015-09-01

    This study presents a comparison of seasonal variation, gas-particle partitioning and particle-phase size distribution of organochlorine pesticides (OCPs) and current-use pesticides (CUPs) in air. Two years (2012/2013) of weekly air samples were collected at a background site in the Czech Republic using a high-volume air sampler. To study the particle-phase size distribution, air samples were also collected at an urban and rural site in the area of Brno, Czech Republic, using a cascade impactor separating atmospheric particulates according to six size fractions. The timing and frequencies of detection of CUPs related to their legal status, usage amounts and their environmental persistence, while OCPs were consistently detected throughout the year. Two different seasonal trends were noted: certain compounds had higher concentrations only during the growing season (April-September) and other compounds showed two peaks, the first in the growing season and the second in the plowing season (October-November). In general, gas-particle partitioning of pesticides was governed by physicochemical properties, with higher vapor pressure leading to higher gas phase fractions, and associated seasonality in gas-particle partitioning was observed in nine pesticides. However, some anomalous partitioning was observed for fenpropimorph and chlorpyrifos, suggesting the influence of current pesticide application on gas-particle distributions. Nine pesticides had the highest particle-phase concentrations on fine particles (< 0.95 μm) and four pesticides on coarser (> 1.5 μm) particles.

  14. Model-based recursive partitioning to identify risk clusters for metabolic syndrome and its components: findings from the International Mobility in Aging Study

    PubMed Central

    Pirkle, Catherine M; Wu, Yan Yan; Zunzunegui, Maria-Victoria; Gómez, José Fernando

    2018-01-01

    Objective Conceptual models underpinning much epidemiological research on ageing acknowledge that environmental, social and biological systems interact to influence health outcomes. Recursive partitioning is a data-driven approach that allows for concurrent exploration of distinct mixtures, or clusters, of individuals that have a particular outcome. Our aim is to use recursive partitioning to examine risk clusters for metabolic syndrome (MetS) and its components, in order to identify vulnerable populations. Study design Cross-sectional analysis of baseline data from a prospective longitudinal cohort called the International Mobility in Aging Study (IMIAS). Setting IMIAS includes sites from three middle-income countries—Tirana (Albania), Natal (Brazil) and Manizales (Colombia)—and two from Canada—Kingston (Ontario) and Saint-Hyacinthe (Quebec). Participants Community-dwelling male and female adults, aged 64–75 years (n=2002). Primary and secondary outcome measures We apply recursive partitioning to investigate social and behavioural risk factors for MetS and its components. Model-based recursive partitioning (MOB) was used to cluster participants into age-adjusted risk groups based on variability in: study site, sex, education, living arrangements, childhood adversities, adult occupation, current employment status, income, perceived income sufficiency, smoking status and weekly minutes of physical activity. Results 43% of participants had MetS. Using MOB, the primary partitioning variable was participant sex. Among women from middle-income sites, the predicted proportion with MetS ranged from 58% to 68%. Canadian women with limited physical activity had elevated predicted proportions of MetS (49%, 95% CI 39% to 58%). Among men, MetS ranged from 26% to 41% depending on childhood social adversity and education. Clustering for MetS components differed from the syndrome and across components. 
Study site was a primary partitioning variable for all components except HDL cholesterol. Sex was important for most components. Conclusion MOB is a promising technique for identifying disease risk clusters (eg, vulnerable populations) in modestly sized samples. PMID:29500203
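
    Model-based recursive partitioning refits a parametric model in each node and tests for parameter instability before splitting. The simplified sketch below substitutes a plain proportion-based split criterion and hypothetical data, so it illustrates the clustering idea rather than the MOB algorithm itself:

```python
# Simplified sketch (assumption: plain recursive partitioning on outcome
# proportions, a stand-in for model-based recursive partitioning, which
# refits a parametric model in every node). Data are hypothetical.
def split(rows, depth=0, min_size=2):
    """rows: list of (features_dict, outcome in {0, 1})."""
    n = len(rows)
    p = sum(y for _, y in rows) / n
    if depth >= 2 or n < 2 * min_size or p in (0.0, 1.0):
        return {"leaf": True, "n": n, "rate": p}
    best = None
    for key in rows[0][0]:                      # candidate variables
        for val in {f[key] for f, _ in rows}:   # candidate cut points
            left = [r for r in rows if r[0][key] == val]
            right = [r for r in rows if r[0][key] != val]
            if len(left) < min_size or len(right) < min_size:
                continue
            gap = abs(sum(y for _, y in left) / len(left)
                      - sum(y for _, y in right) / len(right))
            if best is None or gap > best[0]:
                best = (gap, key, val, left, right)
    if best is None:
        return {"leaf": True, "n": n, "rate": p}
    _, key, val, left, right = best
    return {"leaf": False, "split": (key, val),
            "yes": split(left, depth + 1), "no": split(right, depth + 1)}

data = [({"sex": "F", "site": "mid"}, 1), ({"sex": "F", "site": "mid"}, 1),
        ({"sex": "F", "site": "can"}, 1), ({"sex": "F", "site": "can"}, 0),
        ({"sex": "M", "site": "mid"}, 0), ({"sex": "M", "site": "mid"}, 0),
        ({"sex": "M", "site": "can"}, 0), ({"sex": "M", "site": "can"}, 1)]
tree = split(data)
print(tree["split"][0])   # the primary partitioning variable
```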

  15. Inference of boundaries in causal sets

    NASA Astrophysics Data System (ADS)

    Cunningham, William J.

    2018-05-01

    We investigate the extrinsic geometry of causal sets in (1+1) -dimensional Minkowski spacetime. The properties of boundaries in an embedding space can be used not only to measure observables, but also to supplement the discrete action in the partition function via discretized Gibbons–Hawking–York boundary terms. We define several ways to represent a causal set using overlapping subsets, which then allows us to distinguish between null and non-null bounding hypersurfaces in an embedding space. We discuss algorithms to differentiate between different types of regions, consider when these distinctions are possible, and then apply the algorithms to several spacetime regions. Numerical results indicate the volumes of timelike boundaries can be measured to within 0.5% accuracy for flat boundaries and within 10% accuracy for highly curved boundaries for medium-sized causal sets with N = 2^14 spacetime elements.

  16. Calibration sets and the accuracy of vibrational scaling factors: A case study with the X3LYP hybrid functional

    NASA Astrophysics Data System (ADS)

    Teixeira, Filipe; Melo, André; Cordeiro, M. Natália D. S.

    2010-09-01

    A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
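
    The least-squares scaling factor underlying such calibrations has a closed form, c = sum(omega_i * nu_i) / sum(omega_i^2). A sketch with illustrative frequencies (not the paper's calibration set), reporting the factor to the two digits the study finds significant:

```python
import math

# Sketch (hedged): the linear least-squares scaling factor c minimizing
# sum_i (c*omega_i - nu_i)^2, where omega are computed harmonic
# frequencies and nu are reference fundamentals. The frequencies below
# are illustrative, not the paper's calibration set.
def scaling_factor(omega, nu):
    c = sum(w * v for w, v in zip(omega, nu)) / sum(w * w for w in omega)
    rms = math.sqrt(sum((c * w - v) ** 2
                        for w, v in zip(omega, nu)) / len(omega))
    return c, rms

omega = [1650.0, 3120.0, 1210.0, 980.0]   # computed (cm^-1)
nu    = [1595.0, 2990.0, 1178.0, 955.0]   # observed (cm^-1)
c, rms = scaling_factor(omega, nu)
print(round(c, 2))   # 0.96, i.e. two significant digits
```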

  17. Calibration sets and the accuracy of vibrational scaling factors: a case study with the X3LYP hybrid functional.

    PubMed

    Teixeira, Filipe; Melo, André; Cordeiro, M Natália D S

    2010-09-21

    A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.

  18. On the star partition dimension of comb product of cycle and complete graph

    NASA Astrophysics Data System (ADS)

    Alfarisi, Ridho; Darmaji; Dafik

    2017-06-01

    Let G = (V, E) be a connected graph with vertex set V(G), edge set E(G) and S ⊆ V(G). For an ordered partition Π = {S1, S2, S3, …, Sk} of V(G), the representation of a vertex v ∈ V(G) with respect to Π is the k-vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Sk) represents the distance between the vertex v and the set Sk, defined by d(v, Sk) = min{d(v, x)|x ∈ Sk}. The partition Π of V(G) is a resolving partition if the k-vectors r(v|Π), v ∈ V(G), are distinct. The minimum k for which V(G) has a resolving partition is the partition dimension of G, denoted by pd(G). The resolving partition Π = {S1, S2, S3, …, Sk} is called a star resolving partition for G if it is a resolving partition and each subgraph induced by Si, 1 ≤ i ≤ k, is a star. The minimum k for which there exists a star resolving partition of V(G) is the star partition dimension of G, denoted by spd(G). Finding the star partition dimension of G is known to be an NP-hard problem. Furthermore, the comb product of G and H, denoted by G ⊲ H, is the graph obtained by taking one copy of G and |V(G)| copies of H and grafting the i-th copy of H at the vertex o to the i-th vertex of G. By definition of the comb product, V(G ⊲ H) = {(a, u)|a ∈ V(G), u ∈ V(H)} and (a, u)(b, v) ∈ E(G ⊲ H) whenever a = b and uv ∈ E(H), or ab ∈ E(G) and u = v = o. In this paper, we study the star partition dimension of comb products of a cycle and a complete graph, namely Cn ⊲ Km and Km ⊲ Cn for n ≥ 3 and m ≥ 3.
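
    The definitions above translate directly into code. A sketch checking whether a partition of a small illustrative graph (a 4-cycle, not one of the comb products studied) is resolving:

```python
from collections import deque

# Sketch (illustrative graph): computing the representation
# r(v|Pi) = (d(v, S1), ..., d(v, Sk)) and checking whether a partition
# of the vertex set is resolving, per the definitions in the abstract.
def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def is_resolving(adj, partition):
    dists = {v: bfs_dist(adj, v) for v in adj}
    rep = {v: tuple(min(dists[v][x] for x in S) for S in partition)
           for v in adj}
    return len(set(rep.values())) == len(adj)   # all representations distinct

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # cycle C4
print(is_resolving(adj, [{0}, {1}, {2, 3}]))  # True
print(is_resolving(adj, [{0, 2}, {1, 3}]))    # False: r(0) == r(2)
```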

  19. Partition of nonionic organic compounds in aquatic systems

    USGS Publications Warehouse

    Smith, James A.; Witkowski, Patrick J.; Chiou, Cary T.

    1988-01-01

    In aqueous systems, the distribution of many nonionic organic solutes in soil-sediment, aquatic organisms, and dissolved organic matter can be explained in terms of a partition model. The nonionic organic solute is distributed between water and different organic phases that behave as bulk solvents. Factors such as polarity, composition, and molecular size of the solute and organic phase determine the relative importance of partition to the environmental distribution of the solute. This chapter reviews these factors in the context of a partition model and also examines several environmental applications of the partition model for surface- and ground-water systems.
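
    For a single solute and two phases, the bulk-solvent partition model reduces to a mass-balance calculation. A sketch with illustrative numbers:

```python
# Sketch (illustrative numbers): a bulk-solvent partition model in which
# the solute distributes between water and an organic phase with a fixed
# partition coefficient K = C_org / C_water; the fraction in each phase
# then follows from mass balance.
def fraction_in_organic(K, v_water, v_org):
    """K: dimensionless partition coefficient; volumes in the same units."""
    return K * v_org / (K * v_org + v_water)

K = 1000.0   # strongly hydrophobic solute (hypothetical value)
print(round(fraction_in_organic(K, v_water=1.0, v_org=0.001), 2))  # 0.5
```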

  20. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
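
    The underlying combinatorial problem can be stated compactly: with spins s_i = ±1 assigning numbers a_i to the two subsets, the partition quality is the energy E(s) = |Σ s_i a_i|. A brute-force sketch for a tiny instance (illustrative; unrelated to the quantum algorithm's implementation):

```python
from itertools import product

# Sketch (standard Ising-type mapping): Number Partitioning with energy
# E(s) = |sum_i s_i * a_i| over spin configurations s_i = +/-1; the
# optimal partition minimizes E. Brute force is fine for tiny instances.
def min_partition_energy(a):
    return min(abs(sum(s * x for s, x in zip(signs, a)))
               for signs in product((1, -1), repeat=len(a)))

print(min_partition_energy([4, 5, 6, 7, 8]))  # 4+5+6 vs 7+8 -> 0
```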

  1. The swiss army knife of job submission tools: grid-control

    NASA Astrophysics Data System (ADS)

    Stober, F.; Fischer, M.; Schleper, P.; Stadie, H.; Garbers, C.; Lange, J.; Kovalchuk, N.

    2017-10-01

    grid-control is a lightweight and highly portable open source submission tool that supports all common workflows in high energy physics (HEP). It has been used by a sizeable number of HEP analyses to process tasks that sometimes consist of up to 100k jobs. grid-control is built around a powerful plugin and configuration system, that allows users to easily specify all aspects of the desired workflow. Job submission to a wide range of local or remote batch systems or grid middleware is supported. Tasks can be conveniently specified through the parameter space that will be processed, which can consist of any number of variables and data sources with complex dependencies on each other. Dataset information is processed through a configurable pipeline of dataset filters, partition plugins and partition filters. The partition plugins can take the number of files, size of the work units, metadata or combinations thereof into account. All changes to the input datasets or variables are propagated through the processing pipeline and can transparently trigger adjustments to the parameter space and the job submission. While the core functionality is completely experiment independent, full integration with the CMS computing environment is provided by a small set of plugins.
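
    As an illustration of the partitioning stage described above, the hypothetical sketch below groups dataset files into work units capped by total size. It mimics the behavior of a size-based partition plugin and is not grid-control's actual API:

```python
# Hypothetical sketch (not grid-control's plugin API): a partition step
# that groups dataset files into work units capped by total size, one of
# the criteria the abstract says partition plugins can take into account.
def partition_by_size(files, max_size):
    """files: list of (name, size); yields lists of file names."""
    part, used = [], 0
    for name, size in files:
        if part and used + size > max_size:
            yield part
            part, used = [], 0
        part.append(name)
        used += size
    if part:
        yield part

files = [("a.root", 3), ("b.root", 2), ("c.root", 4), ("d.root", 1)]
print(list(partition_by_size(files, max_size=5)))
# [['a.root', 'b.root'], ['c.root', 'd.root']]
```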

  2. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  3. Canopy gap size influences niche partitioning of the ground-layer plant community in a northern temperate forest

    Treesearch

    Christel C. Kern; Rebecca A. Montgomery; Peter B. Reich; Terry F. Strong

    2013-01-01

    The Gap Partitioning Hypothesis (GPH) posits that gaps create heterogeneity in resources crucial for tree regeneration in closed-canopy forests, allowing trees with contrasting strategies to coexist along resource gradients. Few studies have examined gap partitioning of temperate, ground-layer vascular plants. We used a ground-layer plant community of a temperate...

  4. Orthogonal recursive bisection data decomposition for high performance computing in cardiac model simulations: dependence on anatomical geometry.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J

    2009-01-01

    The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute a large data set of a cardiac model across a distributed memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to achieve optimization in load balancing. The anatomical data set was given by both ventricles of the Visible Female data set in a 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or by simply taking the magnitude of the resulting negative coordinates we were able to create 14 data sets of the same anatomy with different orientation and position in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100 to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512-processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. 
Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
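
    A minimal, unweighted variant of the ORB decomposition itself can be sketched as follows (illustrative only; the study's implementation weights elements by computational load):

```python
# Sketch (assumption: a minimal unweighted variant of orthogonal
# recursive bisection): points are split recursively at the median of
# the longest coordinate axis, producing balanced orthogonal blocks.
def orb(points, max_leaf=2):
    if len(points) <= max_leaf:
        return [points]
    dims = len(points[0])
    spans = [max(p[d] for p in points) - min(p[d] for p in points)
             for d in range(dims)]
    axis = spans.index(max(spans))      # cut across the longest extent
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                 # median -> balanced halves
    return orb(pts[:mid], max_leaf) + orb(pts[mid:], max_leaf)

pts = [(0, 0), (1, 0), (2, 0), (9, 0), (9, 1), (9, 2), (0, 9), (9, 9)]
blocks = orb(pts)
print(len(blocks), max(len(b) for b in blocks))  # 4 blocks of 2 points
```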

  5. Processing scalar implicature: a Constraint-Based approach

    PubMed Central

    Degen, Judith; Tanenhaus, Michael K.

    2014-01-01

    Three experiments investigated the processing of the implicature associated with some using a “gumball paradigm”. On each trial participants saw an image of a gumball machine with an upper chamber with 13 gumballs and an empty lower chamber. Gumballs then dropped to the lower chamber and participants evaluated statements, such as “You got some of the gumballs”. Experiment 1 established that some is less natural for reference to small sets (1, 2 and 3 of the 13 gumballs) and unpartitioned sets (all 13 gumballs) compared to intermediate sets (6–8). Partitive some of was less natural than simple some when used with the unpartitioned set. In Experiment 2, including exact number descriptions lowered naturalness ratings for some with small sets but not for intermediate size sets and the unpartitioned set. In Experiment 3 the naturalness ratings from Experiment 2 predicted response times. The results are interpreted as evidence for a Constraint-Based account of scalar implicature processing and against both two-stage, Literal-First models and pragmatic Default models. PMID:25265993

  6. Impact of Surface Roughness and Soil Texture on Mineral Dust Emission Fluxes Modeling

    NASA Technical Reports Server (NTRS)

    Menut, Laurent; Perez, Carlos; Haustein, Karsten; Bessagnet, Bertrand; Prigent, Catherine; Alfaro, Stephane

    2013-01-01

    Dust production models (DPM) used to estimate vertical fluxes of mineral dust aerosols over arid regions need accurate data on soil and surface properties. The Laboratoire Inter-Universitaire des Systemes Atmospheriques (LISA) data set was developed for Northern Africa, the Middle East, and East Asia. This regional data set was built through dedicated field campaigns and includes, among other properties, the aerodynamic roughness length, the smooth roughness length of the erodible fraction of the surface, and the dry (undisturbed) soil size distribution. Recently, satellite-derived roughness length and high-resolution soil texture data sets at the global scale have emerged and provide the opportunity for the use of advanced schemes in global models. This paper analyzes the behavior of the ERS satellite-derived global roughness length and the State Soil Geographic database-Food and Agriculture Organization of the United Nations (STATSGO-FAO) soil texture data set (based on wet techniques) using an advanced DPM in comparison to the LISA data set over Northern Africa and the Middle East. We explore the sensitivity of the drag partition scheme (a critical component of the DPM) and of the dust vertical fluxes (intensity and spatial patterns) to the roughness length and soil texture data sets. We also compare the use of the drag partition scheme to a widely used preferential source approach in global models. Idealized experiments with prescribed wind speeds show that the ERS and STATSGO-FAO data sets provide realistic spatial patterns of dust emission and friction velocity thresholds in the region. Finally, we evaluate a dust transport model for the period of March to July 2011 with observed aerosol optical depths from Aerosol Robotic Network sites. Results show that ERS and STATSGO-FAO provide realistic simulations in the region.

  7. New Linear Partitioning Models Based on Experimental Water: Supercritical CO2 Partitioning Data of Selected Organic Compounds.

    PubMed

    Burant, Aniela; Thompson, Christopher; Lowry, Gregory V; Karamalidis, Athanasios K

    2016-05-17

    Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch-reactor system with dual spectroscopic detectors: a near-infrared spectrometer for measuring the organic analyte in the CO2 phase and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published polyparameter linear free-energy relationship and to develop five new linear free-energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models targeted a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use vapor pressure and aqueous solubility of the organic compound at 25 °C and CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound class models provide better estimates of partitioning behavior for compounds in that class than does the model built for the entire data set.
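
    A linear free-energy relationship of this general shape can be fitted by ordinary least squares. The sketch below uses hypothetical predictors and coefficients (vapor pressure, aqueous solubility, CO2 density) purely to show the fitting step, not the paper's regression:

```python
import numpy as np

# Sketch (hypothetical predictors and coefficients, not the paper's fit):
# a linear free-energy relationship of the form
#   log K = a + b*log(Psat) + c*log(Sw) + d*rho_CO2
# fitted by ordinary least squares, mirroring the kind of model the
# abstract describes for water/sc-CO2 partitioning coefficients.
rng = np.random.default_rng(1)
n = 50
log_psat = rng.uniform(-2, 2, n)        # vapor pressure term
log_sw = rng.uniform(-4, 0, n)          # aqueous solubility term
rho = rng.uniform(0.2, 0.9, n)          # CO2 density (g/cm^3)
true = np.array([0.5, 0.8, -0.6, 1.2])  # assumed a, b, c, d
X = np.column_stack([np.ones(n), log_psat, log_sw, rho])
log_k = X @ true                        # noise-free for the sketch
coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
print(np.allclose(coef, true))          # True: coefficients recovered
```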

  8. Cell-autonomous-like silencing of GFP-partitioned transgenic Nicotiana benthamiana.

    PubMed

    Sohn, Seong-Han; Frost, Jennifer; Kim, Yoon-Hee; Choi, Seung-Kook; Lee, Yi; Seo, Mi-Suk; Lim, Sun-Hyung; Choi, Yeonhee; Kim, Kook-Hyung; Lomonossoff, George

    2014-08-01

    We previously reported the novel partitioning of regional GFP-silencing on leaves of 35S-GFP transgenic plants, coining the term "partitioned silencing". We set out to delineate the mechanism of partitioned silencing. Here, we report that the partitioned plants were hemizygous for the transgene, possessing two direct-repeat copies of 35S-GFP. The detection of both siRNA expression (21 and 24 nt) and DNA methylation enrichment specifically at silenced regions indicated that both post-transcriptional gene silencing (PTGS) and transcriptional gene silencing (TGS) were involved in the silencing mechanism. Using in vivo agroinfiltration of 35S-GFP/GUS and inoculation of TMV-GFP RNA, we demonstrate that PTGS, not TGS, plays a dominant role in the partitioned silencing, concluding that the underlying mechanism of partitioned silencing is analogous to RNA-directed DNA methylation (RdDM). The initial pattern of partitioned silencing was tightly maintained in a cell-autonomous manner, although partitioned-silenced regions possess a potential for systemic spread. Surprisingly, transcriptome profiling through next-generation sequencing demonstrated that expression levels of most genes involved in the silencing pathway were similar in both GFP-expressing and silenced regions, although a diverse set of region-specific transcripts were detected. This suggests that partitioned silencing can be triggered and regulated by genes other than the genes involved in the silencing pathway.

  9. Mg-perovskite/silicate melt and magnesiowuestite/silicate melt partition coefficients for KLB-1 at 250 Kbars

    NASA Technical Reports Server (NTRS)

    Drake, Michael J.; Rubie, David C.; Mcfarlane, Elisabeth A.

    1992-01-01

    The partitioning of elements amongst lower mantle phases and silicate melts is of interest in unraveling the early thermal history of the Earth. Because of the technical difficulty in carrying out such measurements, only one direct set of measurements was reported previously, and these results as well as interpretations based on them have generated controversy. Here we report what is, to our knowledge, only the second set of directly measured trace element partition coefficients for a natural system (KLB-1).

  10. The nature and barium partitioning between immiscible melts - A comparison of experimental and natural systems with reference to lunar granite petrogenesis

    NASA Technical Reports Server (NTRS)

    Neal, C. R.; Taylor, L. A.

    1989-01-01

    Elemental partitioning between immiscible melts has been studied using experimental liquid-liquid Kds and those determined by analysis of immiscible glasses in basalt mesostases in order to investigate lunar granite petrogenesis. Experimental data show that Ba is partitioned into the basic immiscible melt, while probe analysis results show that Ba is partitioned into the granitic immiscible melt. It is concluded that lunar granite of significant size can only occur in a plutonic or deep hypabyssal environment.

  11. Matrix partitioning and EOF/principal component analysis of Antarctic Sea ice brightness temperatures

    NASA Technical Reports Server (NTRS)

    Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.

    1984-01-01

    A field of measured anomalies of some physical variable relative to their time averages is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller-dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The space-time distribution of Nimbus-5 ESMR sea ice measurements is analyzed.
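The partition-then-join eigenanalysis can be sketched as follows (an illustrative NumPy example on synthetic anomaly data, with an assumed two-way spatial partition and five local EOFs retained per partition; not the original implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic anomaly field: 100 time steps x 40 spatial points
X = rng.standard_normal((100, 40))
X -= X.mean(axis=0)                      # anomalies relative to time averages

# Partition the space domain into two halves
parts = [X[:, :20], X[:, 20:]]

# Local EOFs: eigenvectors of each smaller covariance matrix
pcs = []
for Xp in parts:
    C = Xp.T @ Xp / (len(Xp) - 1)        # local 20 x 20 covariance
    w, V = np.linalg.eigh(C)             # eigenvalues ascending
    order = np.argsort(w)[::-1]
    k = 5                                # number of local EOFs retained
    pcs.append(Xp @ V[:, order[:k]])     # local principal components

# Join the local PCs and diagonalize the much smaller joined covariance
Z = np.hstack(pcs)                       # 100 x 10 instead of 100 x 40
Cz = Z.T @ Z / (len(Z) - 1)
wz, _ = np.linalg.eigh(Cz)

# Fraction of the total variance captured by the joined approximation
frac = Z.var(axis=0, ddof=1).sum() / X.var(axis=0, ddof=1).sum()
print(round(float(frac), 3))
```

The retained fraction of variance plays the role of the approximation-accuracy measure described in the abstract.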

  12. Set Partitions and the Multiplication Principle

    ERIC Educational Resources Information Center

    Lockwood, Elise; Caughman, John S., IV

    2016-01-01

    To further understand student thinking in the context of combinatorial enumeration, we examine student work on a problem involving set partitions. In this context, we note some key features of the multiplication principle that were often not attended to by students. We also share a productive way of thinking that emerged for several students who…
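The counting at issue can be made concrete with the standard Stirling-number recurrence, in which the k·S(n−1, k) term is precisely an application of the multiplication principle (a small illustrative Python sketch, not taken from the article):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Number of partitions of an n-set into k non-empty blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    # Element n is either a singleton block, or joins one of the k blocks
    # of a partition of the remaining n-1 elements (multiplication principle).
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def bell(n):
    """Total number of set partitions of an n-set."""
    return sum(stirling2(n, k) for k in range(n + 1))

print(stirling2(4, 2))  # 7 ways to split {1,2,3,4} into two blocks
print(bell(4))          # 15 set partitions of a 4-element set
```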

  13. Anionic Pt in Silicate Melts at Low Oxygen Fugacity: Speciation, Partitioning and Implications for Core Formation Processes on Asteroids

    NASA Technical Reports Server (NTRS)

    Medard, E.; Martin, A. M.; Righter, K.; Malouta, A.; Lee, C.-T.

    2017-01-01

    Most siderophile element concentrations in planetary mantles can be explained by metal/silicate equilibration at high temperature and pressure during core formation. Highly siderophile elements (HSE = Au, Re, and the Pt-group elements), however, usually have higher mantle abundances than predicted by partitioning models, suggesting that their concentrations have been set by late accretion of material that did not equilibrate with the core. The partitioning of HSE at the low oxygen fugacities relevant for core formation is, however, poorly constrained, due to the lack of sufficient experimental constraints to describe the variations of partitioning with key variables such as temperature, pressure, and oxygen fugacity. To better understand the relative roles of metal/silicate partitioning and late accretion, we performed a self-consistent set of experiments that parameterizes the influence of oxygen fugacity, temperature and melt composition on the partitioning of Pt, one of the HSE, between metal and silicate melts. The major outcome of this project is the finding that Pt dissolves in an anionic form in silicate melts, causing a dependence of partitioning on oxygen fugacity opposite to that reported in previous studies.

  14. Comparison of modeling approaches for carbon partitioning: Impact on estimates of global net primary production and equilibrium biomass of woody vegetation from MODIS GPP

    NASA Astrophysics Data System (ADS)

    Ise, Takeshi; Litton, Creighton M.; Giardina, Christian P.; Ito, Akihiko

    2010-12-01

    Partitioning of gross primary production (GPP) to aboveground versus belowground, to growth versus respiration, and to short versus long-lived tissues exerts a strong influence on ecosystem structure and function, with potentially large implications for the global carbon budget. A recent meta-analysis of forest ecosystems suggests that carbon partitioning to leaves, stems, and roots varies consistently with GPP and that the ratio of net primary production (NPP) to GPP is conservative across environmental gradients. To examine influences of carbon partitioning schemes employed by global ecosystem models, we used this meta-analysis-based model and a satellite-based (MODIS) terrestrial GPP data set to estimate global woody NPP and equilibrium biomass, and then compared it to two process-based ecosystem models (Biome-BGC and VISIT) using the same GPP data set. We hypothesized that different carbon partitioning schemes would result in large differences in global estimates of woody NPP and equilibrium biomass. Woody NPP estimated by Biome-BGC and VISIT was 25% and 29% higher than the meta-analysis-based model for boreal forests, with smaller differences in temperate and tropics. Global equilibrium woody biomass, calculated from model-specific NPP estimates and a single set of tissue turnover rates, was 48 and 226 Pg C higher for Biome-BGC and VISIT compared to the meta-analysis-based model, reflecting differences in carbon partitioning to structural versus metabolically active tissues. In summary, we found that different carbon partitioning schemes resulted in large variations in estimates of global woody carbon flux and storage, indicating that stand-level controls on carbon partitioning are not yet accurately represented in ecosystem models.

  15. How to share underground reservoirs

    NASA Astrophysics Data System (ADS)

    Schrenk, K. J.; Araújo, N. A. M.; Herrmann, H. J.

    2012-10-01

    Many resources, such as oil, gas, or water, are extracted from porous soils and their exploration is often shared among different companies or nations. We show that the effective shares can be obtained by invading the porous medium simultaneously with various fluids. Partitioning a volume in two parts requires one division surface while the simultaneous boundary between three parts consists of lines. We identify and characterize these lines, showing that they form a fractal set consisting of a single thread spanning the medium and a surrounding cloud of loops. While the spanning thread has fractal dimension 1.55 +/- 0.03, the set of all lines has dimension 1.69 +/- 0.02. The size distribution of the loops follows a power law and the evolution of the set of lines exhibits a tricritical point described by a crossover with a negative dimension at criticality.

  16. Partition coefficients of organic compounds between water and imidazolium-, pyridinium-, and phosphonium-based ionic liquids.

    PubMed

    Padró, Juan M; Pellegrino Vidal, Rocío B; Reta, Mario

    2014-12-01

    The partition coefficients, PIL/w, of several compounds, some of biological and pharmacological interest, between water and room-temperature ionic liquids based on the imidazolium, pyridinium, and phosphonium cations, namely 1-octyl-3-methylimidazolium hexafluorophosphate, N-octylpyridinium tetrafluoroborate, trihexyl(tetradecyl)phosphonium chloride, trihexyl(tetradecyl)phosphonium bromide, trihexyl(tetradecyl)phosphonium bis(trifluoromethylsulfonyl)imide, and trihexyl(tetradecyl)phosphonium dicyanamide, were accurately measured. In this way, we extended our previously reported database of partition coefficients in room-temperature ionic liquids. We employed the solvation parameter model with different probe molecules (the training set) to elucidate the chemical interactions involved in the partition process and discussed the most relevant differences among the three types of ionic liquids. The multiparametric equations obtained with this model were used to predict the partition coefficients of compounds (the test set) not present in the training set, most of biological and pharmacological interest. Excellent agreement between calculated and experimental log PIL/w values was obtained. Thus, the equations can be used to predict, a priori, the extraction efficiency of any compound using these ionic liquids as extraction solvents in liquid-liquid extractions.
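As a rough illustration of how such a multiparametric (Abraham-type) solvation parameter equation is fitted on a training set and then used to predict log P for test compounds, here is a hedged NumPy sketch; the descriptors, coefficients, and data are synthetic stand-ins, not the measured values from this study:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical solute descriptors (E, S, A, B, V) for 20 training probes
X = rng.uniform(0, 2, size=(20, 5))
# Illustrative "true" system constants: c, e, s, a, b, v
true_coefs = np.array([0.3, -0.5, 1.2, -2.0, -1.1, 2.4])

A = np.hstack([np.ones((20, 1)), X])              # design matrix with intercept
logP = A @ true_coefs + rng.normal(0, 0.05, 20)   # "measured" log P values

# Fit the multiparametric equation by least squares
coefs, *_ = np.linalg.lstsq(A, logP, rcond=None)

# Predict log P for a test-set compound not used in the fit
x_new = np.array([1.0, 0.5, 0.3, 0.8, 1.5])       # hypothetical descriptors
pred = coefs[0] + coefs[1:] @ x_new
print(np.round(coefs, 2), round(float(pred), 2))
```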

  17. The iron-nickel-phosphorus system: Effects on the distribution of trace elements during the evolution of iron meteorites

    NASA Astrophysics Data System (ADS)

    Corrigan, Catherine M.; Chabot, Nancy L.; McCoy, Timothy J.; McDonough, William F.; Watson, Heather C.; Saslow, Sarah A.; Ash, Richard D.

    2009-05-01

    To better understand the partitioning behavior of elements during the formation and evolution of iron meteorites, two sets of experiments were conducted at 1 atm in the Fe-Ni-P system. The first set examined the effect of P on the solid metal/liquid metal partitioning behavior of 22 elements, while the other set explored the effect of the crystal structures of body-centered cubic (α)- and face-centered cubic (γ)-solid Fe alloys on partitioning behavior. Overall, the effect of P on the partition coefficients for the majority of the elements was minimal. As, Au, Ga, Ge, Ir, Os, Pt, Re, and Sb showed slightly increasing partition coefficients with increasing P-content of the metallic liquid. Co, Cu, Pd, and Sn showed constant partition coefficients. Rh, Ru, W, and Mo showed phosphorophile (P-loving) tendencies. Parameterization models were applied to solid metal/liquid metal results for 12 elements. As, Au, Pt, and Re failed to match previous parameterization models, requiring the determination of separate parameters for the Fe-Ni-S and Fe-Ni-P systems. Experiments with coexisting α and γ Fe alloy solids produced partitioning ratios close to unity, indicating that an α versus γ Fe alloy crystal structure has only a minor influence on the partitioning behaviors of the trace elements studied. A simple relationship between an element's natural crystal structure and its α/γ partitioning ratio was not observed. If an iron meteorite crystallizes from a single metallic liquid that contains both S and P, the effect of P on the distribution of elements between the crystallizing solids and the residual liquid will be minor in comparison to the effect of S. This indicates that, to a first order, fractional crystallization models of the Fe-Ni-S-P system that do not take P into account are appropriate for interpreting the evolution of iron meteorites, provided the effects of S are appropriately included.

  18. HARP: A Dynamic Inertial Spectral Partitioner

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Sohn, Andrew; Biswas, Rupak

    1997-01-01

    Partitioning unstructured graphs is central to the parallel solution of computational science and engineering problems. Spectral partitioners, such as recursive spectral bisection (RSB), have proven effective in generating high-quality partitions of realistically sized meshes. The major problem that hindered their widespread use was their long execution times. This paper presents a new inertial spectral partitioner called HARP. The main objective of the proposed approach is to quickly partition meshes at runtime in a manner that works efficiently for real applications in the context of distributed-memory machines. The underlying principle of HARP is to find the eigenvectors of the unpartitioned vertices and then project them onto the eigenvectors of the original mesh. Results for various meshes ranging in size from 1,000 to 100,000 vertices indicate that HARP can indeed partition meshes rapidly at runtime. Experimental results show that our largest mesh can be partitioned sequentially in only a few seconds on an SP2, which is several times faster than other spectral partitioners, while maintaining the solution quality of the proven RSB method. A parallel MPI version of HARP has also been implemented on the IBM SP2 and Cray T3E. Parallel HARP, running on 64 processors of the SP2 and T3E, can partition a mesh containing more than 100,000 vertices into 64 subgrids in about half a second. These results indicate that graph partitioning can now be truly embedded in dynamically changing real-world applications.
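The core of any spectral partitioner, RSB included, is a bisection step driven by the Fiedler vector, the eigenvector of the second-smallest eigenvalue of the graph Laplacian. A minimal sketch of that generic step (not HARP's inertial variant) on a graph with an obvious cut:

```python
import numpy as np

def spectral_bisect(adj):
    """One spectral bisection step: split a graph in two using the
    Fiedler vector of its Laplacian L = D - A."""
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj               # graph Laplacian
    w, V = np.linalg.eigh(L)             # eigenvalues in ascending order
    fiedler = V[:, 1]                    # eigenvector of 2nd-smallest eigenvalue
    return fiedler <= np.median(fiedler) # boolean mask: one side of the cut

# Two 4-node cliques joined by a single edge: the cut should separate them
n = 8
adj = np.zeros((n, n))
adj[:4, :4] = 1 - np.eye(4)
adj[4:, 4:] = 1 - np.eye(4)
adj[3, 4] = adj[4, 3] = 1
mask = spectral_bisect(adj)
print(mask)
```

Splitting at the median balances the two halves, the "roughly equal number of vertices" requirement of the k-way partitioning problem.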

  19. The photochemical formation and gas-particle partitioning of oxidation products of decamethyl cyclopentasiloxane and decamethyl tetrasiloxane in the atmosphere

    NASA Astrophysics Data System (ADS)

    Chandramouli, Bharadwaj; Kamens, Richard M.

    Decamethylcyclopentasiloxane (D5) and decamethyltetrasiloxane (MD2M) were injected into a smog chamber containing fine Arizona road dust particles (95% of surface area <2.6 μm) and an urban smog atmosphere in the daytime. A coupled photochemical reaction and gas-particle partitioning scheme was implemented to simulate the formation and gas-particle partitioning of hydroxyl oxidation products of D5 and MD2M. This scheme incorporated the reactions of D5 and MD2M into an existing urban smog chemical mechanism (Carbon Bond IV) and partitioned the products between the gas and particle phases by treating gas-particle partitioning as a kinetic process with specified uptake and off-gassing rates. A photochemical model, PKSS, was used to simulate this set of reactions. A Langmuirian partitioning model was used to convert the measured and estimated mass-based partitioning coefficients (Kp) to a molar or volume-based form. The model simulations indicated that >99% of all product silanols formed in the gas phase partition immediately to the particle phase, and the experimental data agreed with model predictions. One product, D4TOH, was observed and confirmed for the D5 reaction, and this system was modeled successfully. Experimental data were inadequate for the MD2M reaction products, and it is likely that more than one product formed. The model sets up a framework into which more reaction and partitioning steps can easily be added.

  20. Subcellular compartmentalization of Cd and Zn in two bivalves. II. Significance of trophically available metal (TAM)

    USGS Publications Warehouse

    Wallace, W.G.; Luoma, S.N.

    2003-01-01

    This paper examines how the subcellular partitioning of Cd and Zn in the bivalves Macoma balthica and Potamocorbula amurensis may affect the trophic transfer of metal to predators. Results show that the partitioning of metals to organelles, 'enzymes' and metallothioneins (MT) comprises a subcellular compartment containing trophically available metal (TAM; i.e. metal trophically available to predators), and that because this partitioning varies with species, animal size and metal, TAM is similarly influenced. Clams from San Francisco Bay, California, were exposed for 14 d to 3.5 μg l-1 Cd and 20.5 μg l-1 Zn, including 109Cd and 65Zn as radiotracers, and were used in feeding experiments with the grass shrimp Palaemon macrodactylus, or used to investigate the subcellular partitioning of metal. Grass shrimp fed Cd-contaminated P. amurensis absorbed ~60% of ingested Cd, which was in accordance with the partitioning of Cd to the bivalve's TAM compartment (i.e. Cd associated with organelles, 'enzymes' and MT); a similar relationship was found in previous studies with grass shrimp fed Cd-contaminated oligochaetes. Thus, TAM may be used as a tool to predict the trophic transfer of at least Cd. Subcellular fractionation revealed that ~34% of both the Cd and Zn accumulated by M. balthica was associated with TAM, while partitioning to TAM in P. amurensis was metal-dependent (~60% for TAM-Cd%, ~73% for TAM-Zn%). The greater TAM-Cd% of P. amurensis than of M. balthica is due to preferential binding of Cd to MT and 'enzymes', while the enhanced TAM-Zn% of P. amurensis results from a greater binding of Zn to organelles. TAM for most species-metal combinations was size-dependent, decreasing with increased clam size. Based on field data, it is estimated that of the two bivalves, P. amurensis poses the greater threat of Cd exposure to predators because of higher tissue concentrations and greater partitioning as TAM; exposure of Zn to predators would be similar between these species.

  1. Extracting Aggregation Free Energies of Mixed Clusters from Simulations of Small Systems: Application to Ionic Surfactant Micelles.

    PubMed

    Zhang, X; Patel, L A; Beckwith, O; Schneider, R; Weeden, C J; Kindt, J T

    2017-11-14

    Micelle cluster distributions from molecular dynamics simulations of a solvent-free coarse-grained model of sodium octyl sulfate (SOS) were analyzed using an improved method to extract equilibrium association constants from small-system simulations containing one or two micelle clusters at equilibrium with free surfactants and counterions. The statistical-thermodynamic and mathematical foundations of this partition-enabled analysis of cluster histograms (PEACH) approach are presented. A dramatic reduction in computational time for analysis was achieved through a strategy similar to the selector variable method to circumvent the need for exhaustive enumeration of the possible partitions of surfactants and counterions into clusters. Using statistics from a set of small-system (up to 60 SOS molecules) simulations as input, equilibrium association constants for micelle clusters were obtained as a function of both number of surfactants and number of associated counterions through a global fitting procedure. The resulting free energies were able to accurately predict micelle size and charge distributions in a large (560 molecule) system. The evolution of micelle size and charge with SOS concentration as predicted by the PEACH-derived free energies and by a phenomenological four-parameter model fit, along with the sensitivity of these predictions to variations in cluster definitions, are analyzed and discussed.

  2. The multi-reference retaining the excitation degree perturbation theory: A size-consistent, unitary invariant, and rapidly convergent wavefunction based ab initio approach

    NASA Astrophysics Data System (ADS)

    Fink, Reinhold F.

    2009-02-01

    The retaining the excitation degree (RE) partitioning [R.F. Fink, Chem. Phys. Lett. 428 (2006) 461] is reformulated and applied to multi-reference cases with complete active space (CAS) reference wave functions. The generalised van Vleck perturbation theory is employed to set up the perturbation equations. It is demonstrated that this leads to a consistent and well defined theory which fulfils all important criteria of a generally applicable ab initio method: The theory is proven numerically and analytically to be size-consistent and invariant with respect to unitary orbital transformations within the inactive, active and virtual orbital spaces. In contrast to most previously proposed multi-reference perturbation theories, the necessary condition for a proper perturbation theory to fulfil the zeroth order perturbation equation is exactly satisfied with the RE partitioning itself, without additional projectors on configurational spaces. The theory is applied to several excited states of the benchmark systems CH2, SiH2, and NH2, as well as to the lowest states of the carbon, nitrogen and oxygen atoms. In all cases comparisons are made with full configuration interaction results. The multi-reference (MR)-RE method is shown to provide very rapidly converging perturbation series. Energy differences between states of similar configurations converge even faster.

  3. Using Cluster Analysis to Compartmentalize a Large Managed Wetland Based on Physical, Biological, and Climatic Geospatial Attributes.

    PubMed

    Hahus, Ian; Migliaccio, Kati; Douglas-Mankin, Kyle; Klarenberg, Geraldine; Muñoz-Carpena, Rafael

    2018-04-27

    Hierarchical and partitional cluster analyses were used to compartmentalize Water Conservation Area 1, a managed wetland within the Arthur R. Marshall Loxahatchee National Wildlife Refuge in southeast Florida, USA, based on physical, biological, and climatic geospatial attributes. Single, complete, average, and Ward's linkages were tested during the hierarchical cluster analyses, with average linkage providing the best results. In general, the partitional method, partitioning around medoids, found clusters that were more evenly sized and more spatially aggregated than those resulting from the hierarchical analyses. However, hierarchical analysis appeared to be better suited to identify outlier regions that were significantly different from other areas. The clusters identified by geospatial attributes were similar to clusters developed for the interior marsh in a separate study using water quality attributes, suggesting that similar factors have influenced variations in both the set of physical, biological, and climatic attributes selected in this study and water quality parameters. However, geospatial data allowed further subdivision of several interior marsh clusters identified from the water quality data, potentially indicating zones with important differences in function. Identification of these zones can be useful to managers and modelers by informing the distribution of monitoring equipment and personnel as well as delineating regions that may respond similarly to future changes in management or climate.
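For readers unfamiliar with the workflow, a minimal average-linkage hierarchical clustering of hypothetical standardized geospatial attributes might look like this (SciPy sketch; the attributes, cell counts, and cluster number are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# Hypothetical attributes for 30 wetland cells, three of them per cell
# (e.g. elevation, vegetation density, mean temperature), standardized
cells = np.vstack([
    rng.normal([0, 0, 0], 0.3, size=(10, 3)),
    rng.normal([3, 3, 0], 0.3, size=(10, 3)),
    rng.normal([0, 3, 3], 0.3, size=(10, 3)),
])

# Agglomerative clustering with average linkage, the linkage the study
# found to give the best results; cut the tree into three compartments
Z = linkage(pdist(cells), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

A partitional alternative such as partitioning around medoids would replace the `linkage`/`fcluster` pair with a k-medoids fit on the same attribute matrix.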

  4. Gas-particle partitioning of semi-volatile organics on organic aerosols using a predictive activity coefficient model: analysis of the effects of parameter choices on model performance

    NASA Astrophysics Data System (ADS)

    Chandramouli, Bharadwaj; Jang, Myoseon; Kamens, Richard M.

    The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the α-pinene-O3 reaction was augmented by carrying out smog chamber partitioning experiments on aerosols from meat cooking, and catalyzed and uncatalyzed gasoline engine exhaust. Model compositions for aerosols from meat cooking and gasoline combustion emissions were used to calculate activity coefficients for the SOCs in the organic aerosols, and the Pankow absorptive gas/particle partitioning model was used to calculate the partitioning coefficient Kp and quantitate the predictive improvements of using the activity coefficient. The slope of the log Kp vs. log pL0 correlation for partitioning on aerosols from meat cooking improved from -0.81 to -0.94 after incorporation of activity coefficients γom. A stepwise regression analysis of the partitioning model revealed that for the data set used in this study, partitioning predictions on α-pinene-O3 secondary aerosol and wood combustion aerosol showed statistically significant improvement after incorporation of γom, which can be attributed to their overall polarity. The partitioning model was sensitive to changes in aerosol composition when updated compositions for α-pinene-O3 aerosol and wood combustion aerosol were used. The effectiveness of the octanol-air partitioning coefficient (KOA) as a partitioning correlator over a variety of aerosol types was evaluated. The slope of the log Kp-log KOA correlation was not constant over the aerosol types and SOCs used in the study, and the use of KOA for partitioning correlations can potentially lead to significant deviations, especially for polar aerosols.

  5. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    PubMed

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
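The procedure can be sketched for a normal model as follows (an illustrative NumPy/SciPy version: the equiprobable bin construction, the normal MLEs, and the degrees of freedom are assumptions for this sketch, following the paper's argument only schematically):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=2.0, scale=1.5, size=500)   # original data

# Partition the original data into 10 equiprobable bins and record observed counts
edges = np.quantile(x, np.linspace(0, 1, 11))
observed, _ = np.histogram(x, bins=edges)

# Key modification: take the MLE from a bootstrap resample, not the original data
xb = rng.choice(x, size=x.size, replace=True)
mu_b, sigma_b = xb.mean(), xb.std(ddof=0)      # normal-model MLEs

# Expected counts in the same bins under the bootstrap-sample MLE
cdf = stats.norm.cdf(edges, loc=mu_b, scale=sigma_b)
expected = x.size * np.diff(cdf)

chi2 = ((observed - expected) ** 2 / expected).sum()
# Per the paper's argument, the statistic recovers a chi-squared reference
pval = stats.chi2.sf(chi2, df=len(observed) - 1)
print(round(float(chi2), 2), round(float(pval), 3))
```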

  6. Programmable partitioning for high-performance coherence domains in a multiprocessor system

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Salapura, Valentina [Chappaqua, NY

    2011-01-25

    A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.

  7. The role of viscous fluid flow in active cochlear partition vibration

    NASA Astrophysics Data System (ADS)

    Svobodny, Thomas

    2001-11-01

    Sound transduction occurs via the forcing of the basilar membrane by a traveling wave set up in the cochlear chamber. At the threshold of hearing the amplitude of the vibrations is on the nanometer scale. Fluid flow in this chamber is at very low Reynolds number (because of the tiny size). The actual transduction occurs through the mechanism of stereocilia of hair cells. Analysis and simulation of the interaction between the microhydrodynamical flow and the basilar membrane vibration will be presented in this talk. We will describe the three-dimensional distribution of energy and how fluid flow affects stereociliar deflection.

  8. Raster Data Partitioning for Supporting Distributed GIS Processing

    NASA Astrophysics Data System (ADS)

    Nguyen Thai, B.; Olasz, A.

    2015-08-01

    Big data concepts have already made an impact in the geospatial sector. Several studies apply techniques that originated in computer science to the GIS processing of huge amounts of geospatial data, while other studies treat geospatial data as if it had always been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, increasing not only the amount of raw data but also its spectral, spatial and temporal resolution. A significant portion of big data is geospatial data, and the size of such data is growing rapidly, by at least 20% every year (Dasgupta, 2013). Of the growing volume of raw data produced in different formats and representations and for different purposes, only the wealth of information derived from these data sets represents valuable results. However, computing capability and processing speed face limitations, even when semi-automatic or automatic procedures are applied to complex geospatial data (Kristóf et al., 2014). Recently, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing. Cloud computing increasingly requires processing algorithms that can be distributed and can handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their ability to process non-GIS big data, but it is sometimes inconvenient or inefficient to rewrite existing algorithms to the Map-Reduce model, and GIS data cannot be partitioned like text-based data, by lines or by bytes. Hence, we would like to find an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting, or with only minor modifications.
This paper gives a technical overview of currently available distributed computing environments, as well as of GIS (raster) data partitioning, distribution and distributed processing of GIS algorithms. A proof-of-concept implementation has been made for raster data partitioning, distribution and processing, and its first performance results have been compared against the commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods depend heavily on application areas, so data partitioning may be considered a preprocessing step before applying processing services to the data. As a proof of concept, we implemented a simple tile-based partitioning method that splits an image into smaller grids (N x M tiles) and compared the processing time to existing methods using NDVI calculation. The concept is demonstrated using our own open-source processing framework.
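A tile-based partitioning of the kind described, with per-tile NDVI and reassembly, can be sketched as follows (illustrative NumPy code on synthetic bands, not the authors' framework):

```python
import numpy as np

def tile(raster, n, m):
    """Split a raster into an n x m grid of tiles (simple block partitioning)."""
    return [np.array_split(row, m, axis=1)
            for row in np.array_split(raster, n, axis=0)]

def ndvi(red, nir):
    """Normalized difference vegetation index, computable per tile."""
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(0)
red = rng.uniform(0.0, 1.0, size=(100, 120))    # synthetic red band
nir = rng.uniform(0.0, 1.0, size=(100, 120))    # synthetic near-infrared band

# Process each tile independently (in principle, on different workers),
# then reassemble; NDVI is pixel-wise, so the mosaic matches the
# whole-image result exactly
tiles = [[ndvi(r, nr) for r, nr in zip(rr, nn)]
         for rr, nn in zip(tile(red, 4, 3), tile(nir, 4, 3))]
mosaic = np.block(tiles)
assert np.allclose(mosaic, ndvi(red, nir))
print(mosaic.shape)
```

Pixel-wise operations such as NDVI partition trivially; operations with spatial neighborhoods would additionally require overlapping tile borders.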

  9. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance sampling technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free energy, and the discrete-valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze its efficiency numerically on a toy example.

  10. A Colorful Laboratory Investigation of Hydrophobic Interactions, the Partition Coefficient, Gibbs Energy of Transfer, and the Effect of Hofmeister Salts

    ERIC Educational Resources Information Center

    McCain, Daniel F.; Allgood, Ottie E.; Cox, Jacob T.; Falconi, Audrey E.; Kim, Michael J.; Shih, Wei-Yu

    2012-01-01

    Only a few pedagogical experiments have been published dealing specifically with the hydrophobic interaction though it plays a central role in biochemistry. A set of experiments is presented in which students partition a variety of colorful indicator dyes in biphasic water/organic solvent mixtures. Students monitor the partitioning visually and…

  11. Lost in the supermarket: Quantifying the cost of partitioning memory sets in hybrid search.

    PubMed

    Boettcher, Sage E P; Drew, Trafton; Wolfe, Jeremy M

    2018-01-01

    The items on a memorized grocery list are not relevant in every aisle; for example, it is useless to search for the cabbage in the cereal aisle. It might be beneficial if one could mentally partition the list so only the relevant subset was active, so that vegetables would be activated in the produce section. In four experiments, we explored observers' abilities to partition memory searches. For example, if observers held 16 items in memory, but only eight of the items were relevant, would response times resemble a search through eight or 16 items? In Experiments 1a and 1b, observers were not faster for the partition set; however, they suffered relatively small deficits when "lures" (items from the irrelevant subset) were presented, indicating that they were aware of the partition. In Experiment 2 the partitions were based on semantic distinctions, and again, observers were unable to restrict search to the relevant items. In Experiments 3a and 3b, observers attempted to remove items from the list one trial at a time but did not speed up over the course of a block, indicating that they also could not limit their memory searches. Finally, Experiments 4a, 4b, 4c, and 4d showed that observers were able to limit their memory searches when a subset was relevant for a run of trials. Overall, observers appear to be unable or unwilling to partition memory sets from trial to trial, yet they are capable of restricting search to a memory subset that remains relevant for several trials. This pattern is consistent with a cost to switching between currently relevant memory items.

  12. Generic pure quantum states as steady states of quasi-local dissipative dynamics

    NASA Astrophysics Data System (ADS)

    Karuvade, Salini; Johnson, Peter D.; Ticozzi, Francesco; Viola, Lorenza

    2018-04-01

    We investigate whether a generic pure state on a multipartite quantum system can be the unique asymptotic steady state of locality-constrained purely dissipative Markovian dynamics. In the tripartite setting, we show that the problem is equivalent to characterizing the solution space of a set of linear equations and establish that the set of pure states obeying the above property has either measure zero or measure one, depending solely on the subsystems' dimensions. A complete analytical characterization is given when the central subsystem is a qubit. In the N-partite case, we provide conditions on the subsystems' sizes and the nature of the locality constraint under which random pure states cannot generically be quasi-locally stabilized. By also allowing for the possibility of approximately stabilizing entangled pure states that cannot be exact steady states in settings where stabilizability is generic, our results offer insights into the extent to which random pure states may arise as unique ground states of frustration-free parent Hamiltonians. We further argue that, with high probability, pure quantum states sampled from a t-design enjoy the same stabilizability properties as Haar-random ones, as long as suitable dimension constraints are obeyed and t is sufficiently large. Lastly, we demonstrate a connection between the tasks of quasi-local state stabilization and unique state reconstruction from local tomographic information, and provide a constructive procedure for determining a generic N-partite pure state based only on knowledge of the support of any two of the reduced density matrices of about half the parties, improving over existing results.

  13. PROCEDURE FOR DETERMINATION OF SEDIMENT PARTICLE SIZE (GRAIN SIZE)

    EPA Science Inventory

    Sediment quality and sediment remediation projects have become a high priority for USEPA. Sediment particle size determinations are used in environmental assessments for habitat characterization, chemical normalization, and partitioning potential of chemicals. The accepted met...

  14. An Unequal Secure Encryption Scheme for H.264/AVC Video Compression Standard

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Wang, Jidong; Ikenaga, Takeshi; Tsunoo, Yukiyasu; Goto, Satoshi

    H.264/AVC is the newest video coding standard, and many of its new features can readily be exploited for video encryption. In this paper, we propose a new video encryption scheme for the H.264/AVC video compression standard. We define Unequal Secure Encryption (USE) as an approach that applies different encryption schemes (with different security strengths) to different parts of the compressed video data. The USE scheme consists of two parts: video data classification and unequal secure video data encryption. First, we classify the video data into two partitions: an important data partition and an unimportant data partition. The important data partition is small in size and receives strong protection, while the unimportant data partition is large and receives lighter protection. Second, we use AES as a block cipher to encrypt the important data partition and LEX as a stream cipher to encrypt the unimportant data partition. AES is the most widely used symmetric cipher and ensures high security; LEX is a newer stream cipher based on AES whose computational cost is much lower. In this way, our scheme achieves both high security and low computational cost. Besides the USE scheme, we propose a low-cost design for a hybrid AES/LEX encryption module. Our experimental results show that the computational cost of the USE scheme is low (about 25% of naive encryption at Level 0 with VEA used). The hardware cost of the hybrid AES/LEX module is 4678 gates, and the AES encryption throughput is about 50 Mbps.
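
A toy sketch of the USE structure described above: classify data into an important and an unimportant partition, then encrypt each with a cipher of matching strength. The paper pairs AES with LEX; neither is in the Python standard library, so the two keystreams below are insecure stand-ins used only to illustrate the two-tier dispatch (all names are ours).

```python
import hashlib

def strong_keystream(key: bytes, n: int) -> bytes:
    """SHA-256 in counter mode; placeholder for the AES (high-security) tier."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def cheap_keystream(key: bytes, n: int) -> bytes:
    """Repeating-key stream; placeholder for the lightweight LEX tier (NOT secure)."""
    return (key * (n // len(key) + 1))[:n]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def use_encrypt(important: bytes, unimportant: bytes, key: bytes):
    """Encrypt each partition with its tier's cipher; decryption is the same XOR."""
    return (xor_bytes(important, strong_keystream(key, len(important))),
            xor_bytes(unimportant, cheap_keystream(key, len(unimportant))))
```

The asymmetry in cost is the point: the expensive keystream covers only the small important partition, while the bulk of the data gets the cheap one.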

  15. Visualizing phylogenetic tree landscapes.

    PubMed

    Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A

    2017-02-02

    Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once, in two or three dimensions, the relationships among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in two and three dimensions the relationships among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in two and three dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear components analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperform the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit to the original tree-to-tree distances and can facilitate the interpretation of the relationships among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationships among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationships among the trees being compared.

  16. Geochemical heterogeneity in a sand and gravel aquifer: Effect of sediment mineralogy and particle size on the sorption of chlorobenzenes

    USGS Publications Warehouse

    Barber, L.B.; Thurman, E.M.; Runnells, D.R.; ,

    1992-01-01

    The effect of particle size, mineralogy and sediment organic carbon (SOC) on sorption of tetrachlorobenzene and pentachlorobenzene was evaluated using batch-isotherm experiments on sediment particle-size and mineralogical fractions from a sand and gravel aquifer, Cape Cod, Massachusetts. Concentration of SOC and sorption of chlorobenzenes increase with decreasing particle size. For a given particle size, the magnetic fraction has a higher SOC content and sorption capacity than the bulk or non-magnetic fractions. Sorption appears to be controlled by the magnetic minerals, which comprise only 5-25% of the bulk sediment. Although the SOC content of the bulk sediment is <0.1%, the observed sorption of chlorobenzenes is consistent with a partition mechanism and is adequately predicted by models relating sorption to the octanol/water partition coefficient of the solute and the SOC content. A conceptual model based on preferential association of dissolved organic matter with positively-charged mineral surfaces is proposed to describe micro-scale, intergranular variability in the sorption properties of the aquifer sediments.

  17. Particle size distribution and gas-particle partitioning of polychlorinated biphenyls in the atmosphere in Beijing, China.

    PubMed

    Zhu, Qingqing; Zheng, Minghui; Liu, Guorui; Zhang, Xian; Dong, Shujun; Gao, Lirong; Liang, Yong

    2017-01-01

    Size-fractionated samples of urban particulate matter (PM; ≤1.0, 1.0-2.5, 2.5-10, and >10 μm) and gaseous samples were simultaneously obtained to study the distribution of polychlorinated biphenyls (PCBs) in the atmosphere in Beijing, China. Most recent investigations focused on the analysis of gaseous PCBs, and much less attention has been paid to the occurrence of PCBs among different PM fractions. In the present study, the gas-particle partitioning and size-specific distribution of PCBs in atmosphere were investigated. The total concentrations (gas + particle phase fractions) of Σ 12 dioxin-like PCBs, Σ 7 indicator PCBs, and ΣPCBs were 1.68, 42.1, and 345 pg/m 3 , respectively. PCBs were predominantly in the gas phase (86.8-99.0 % of the total concentrations). The gas-particle partition coefficients (K p ) of PCBs were found to be a significant linear correlated with the subcooled liquid vapor pressures (P L 0 ) (R 2  = 0.83, P < 0.01). The slope (m r ) implied that the gas-particle partitioning of PCBs was affected both by the mechanisms of adsorption and absorption. In addition, the concentrations of PCBs increased as the particle size decreased (>10, 2.5-10, 1.0-2.5, and ≤1.0 μm), with most of the PCBs contained in the fraction of ≤1.0 μm (53.4 % of the total particulate concentrations). Tetra-CBs were the main homolog in the air samples in the gas phase and PM fractions, followed by tri-CBs. This work will contribute to the knowledge of PCBs among different PM fractions and fill the gap of the size distribution of particle-bound dioxin-like PCBs in the air.
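
The quantities in this record follow the standard gas-particle partitioning relations: Kp = (F/TSP)/A, with log Kp regressed against log PL0 to obtain the slope mr. A minimal sketch (function names and values are ours, for illustration):

```python
def gas_particle_kp(c_particle, tsp, c_gas):
    """Kp = (F/TSP)/A: particulate-phase concentration normalized by total
    suspended particles, divided by the gaseous concentration."""
    return (c_particle / tsp) / c_gas

def ols_slope_intercept(xs, ys):
    """Ordinary least squares fit of y = m*x + b (here: log Kp vs. log PL0)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx
```

A slope mr near -1 is conventionally read as equilibrium partitioning; intermediate values, as in the record, suggest a mix of adsorption and absorption.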

  18. GAS-PARTICLE PARTITIONING OF SEMI-VOLATILE ORGANICS ON ORGANIC AEROSOLS USING A PREDICTIVE ACTIVITY COEFFICIENT MODEL: ANALYSIS OF THE EFFECTS OF PARAMETER CHOICES ON MODEL PERFORMANCE. (R826771)

    EPA Science Inventory

    The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and ...

  The Gliding Box method applied to trace element distribution of a geochemical data set

    NASA Astrophysics Data System (ADS)

    Paz González, Antonio; Vidal Vázquez, Eva; Rosario García Moreno, M.; Paz Ferreiro, Jorge; Saa Requejo, Antonio; María Tarquis, Ana

    2010-05-01

    The application of fractal theory to geochemical prospecting data can provide useful information for evaluating mineralization potential. A geochemical survey was carried out in the western area of Coruña province (NW Spain). Major and trace elements were determined by standard analytical techniques. It is well known that specific elements, or arrays of elements, are associated with specific types of mineralization. Arsenic has been used to evaluate the metallogenetic importance of the studied zone; moreover, As can be considered a pathfinder for Au, as these two elements are genetically associated. The main objective of this study was to use multifractal analysis to characterize the distribution of three trace elements, namely Au, As, and Sb. Concerning the local geology, the study area comprises predominantly acid rocks, mainly alkaline and calc-alkaline granites, gneiss and migmatites. The most significant structural feature of this zone is the presence of a mylonitic band with an approximate NE-SW orientation. The data set used in this study comprises 323 samples collected, with standard geochemical criteria, preferentially in the B horizon of the soil. Occasionally, where this horizon was not present, samples were collected from the C horizon. Samples were taken on a rectilinear grid, with sampling lines perpendicular to the NE-SW tectonic structures. Frequency distributions of the studied elements departed from normal. Coefficients of variation ranked as follows: Sb < As < Au. Significant, albeit low, correlation coefficients between Au, Sb, and As were found. The so-called 'gliding box' (GB) algorithm, proposed originally for lacunarity analysis, has been extended to multifractal modelling and provides an alternative to the 'box-counting' method for implementing multifractal analysis. The partitioning method applied in the GB algorithm constructs samples by gliding a box of a given size (a) over the grid map in all possible directions. An 'up-scaling' partitioning process begins with a box of minimum size or area (amin) and increases the box size up to some size smaller than the total area A. An advantage of the GB method is the large sample size, which usually leads to better statistical results for the Dq values, particularly for negative values of q. Because the gliding boxes overlap, the measures defined on them are not statistically independent, and the definition of the measure in the gliding boxes differs from that in box counting. In order to show the advantages of the GB method, the spatial distributions of As, Sb, and Au in the studied area were analyzed. We discuss the usefulness of this method for the numerical characterization of anomalies and their differentiation from the background using the available data of the geochemical survey.

  19. Using Patient Demographics and Statistical Modeling to Predict Knee Tibia Component Sizing in Total Knee Arthroplasty.

    PubMed

    Ren, Anna N; Neher, Robert E; Bell, Tyler; Grimm, James

    2018-06-01

    Preoperative planning is important to achieve successful implantation in primary total knee arthroplasty (TKA). However, traditional TKA templating techniques are not accurate enough to predict the component size to a very close range. With the goal of developing a general predictive statistical model using patient demographic information, ordinal logistic regression was applied to build a proportional odds model to predict the tibia component size. The study retrospectively collected the data of 1992 primary Persona Knee System TKA procedures. Of them, 199 procedures were randomly selected as testing data and the rest of the data were randomly partitioned between model training data and model evaluation data with a ratio of 7:3. Different models were trained and evaluated on the training and validation data sets after data exploration. The final model had patient gender, age, weight, and height as independent variables and predicted the tibia size within 1 size difference 96% of the time on the validation data, 94% of the time on the testing data, and 92% on a prospective cadaver data set. The study results indicated the statistical model built by ordinal logistic regression can increase the accuracy of tibia sizing information for Persona Knee preoperative templating. This research shows statistical modeling may be used with radiographs to dramatically enhance the templating accuracy, efficiency, and quality. In general, this methodology can be applied to other TKA products when the data are applicable. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. The partitioning behavior of persistent toxicant organic contaminants in eutrophic sediments: Coefficients and effects of fluorescent organic matter and particle size.

    PubMed

    He, Wei; Yang, Chen; Liu, Wenxiu; He, Qishuang; Wang, Qingmei; Li, Yilong; Kong, Xiangzhen; Lan, Xinyu; Xu, Fuliu

    2016-12-01

    In shallow lakes, the partitioning of organic contaminants from the solid phase into the water phase might pose a potential hazard to both benthic and planktonic organisms, further damaging aquatic ecosystems. This study determined the concentrations of polycyclic aromatic hydrocarbons (PAHs), organochlorine pesticides (OCPs), and phthalate esters (PAEs) in both the sediment and the pore water of Lake Chaohu, calculated the sediment-pore water partition coefficient (KD) and the organic carbon normalized sediment-pore water partition coefficient (KOC), and explored the effects of particle size, organic matter content, and parallel factor fluorescent organic matter (PARAFAC-FOM) on KD. The results showed that the log KD values of PAHs (2.61-3.94) and OCPs (1.75-3.05) were significantly lower than those of PAEs (4.13-5.05) (p < 0.05). The chemicals ranked by log KOC as follows: PAEs (6.05-6.94) > PAHs (4.61-5.86) > OCPs (3.62-4.97). A modified MCI model could predict KOC values within 1.5 log units at a high frequency, especially for PAEs. Significantly positive correlations between KOC and the octanol-water partition coefficient (KOW) were observed for PAHs and OCPs; for PAEs, a significant correlation was found only when PAEs with lower KOW were excluded. Sediments with smaller particle sizes (clay and silt) and their organic matter would affect the distributions of PAHs and OCPs between the sediment and the pore water. Protein-like fluorescent organic matter (C2) was associated with the KD of PAEs. Furthermore, the partitioning of PARAFAC-FOM between the sediment and the pore water could potentially affect the distribution of organic pollutants. The partitioning mechanism of PAEs between the sediment and the pore water might differ from that of PAHs and OCPs, as indicated by their associations with the influencing factors and KOW. Copyright © 2016 Elsevier Ltd. All rights reserved.
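
The two coefficients reported in this record are related by KOC = KD / fOC, where fOC is the organic-carbon fraction of the sediment. A minimal arithmetic sketch (values illustrative, not from the study):

```python
import math

def k_d(c_sediment, c_porewater):
    """Sediment-pore water partition coefficient, e.g. (ng/g) / (ng/mL)."""
    return c_sediment / c_porewater

def log_k_oc(kd, f_oc):
    """Organic-carbon-normalized coefficient: KOC = KD / fOC
    (f_oc is a mass fraction, e.g. 0.02 for 2% organic carbon)."""
    return math.log10(kd / f_oc)
```

Normalizing by fOC is what makes KOC comparable across sediments of differing organic-matter content, which is why the KOC ranking (PAEs > PAHs > OCPs) is the mechanistically meaningful one.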

  2. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
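
The approximation at the heart of the analysis, equating the expected maximum of independent random variables with the maximum of their expectations, can be probed by simulation. A sketch assuming exponentially distributed module execution times (the distribution choice is ours, for illustration):

```python
import random

def expected_max_vs_max_expectation(means, trials=20000, seed=1):
    """Monte-Carlo estimate of E[max_i X_i] for independent exponential module
    execution times with the given means, returned alongside max_i E[X_i]
    (the quantity the approximation substitutes for it)."""
    rng = random.Random(seed)
    e_max = sum(max(rng.expovariate(1.0 / m) for m in means)
                for _ in range(trials)) / trials
    return e_max, max(means)
```

For two i.i.d. unit-mean exponentials, E[max] = 1.5 while max of expectations is 1, so the approximation systematically underestimates, which is the kind of gap the approximation-free proof in the record addresses.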

  3. Predicting the partitioning of biological compounds between room-temperature ionic liquids and water by means of the solvation-parameter model.

    PubMed

    Padró, Juan M; Ponzinibbio, Agustín; Mesa, Leidy B Agudelo; Reta, Mario

    2011-03-01

    The partition coefficients, P(IL/w), for different probe molecules as well as for compounds of biological interest between the room-temperature ionic liquids (RTILs) 1-butyl-3-methylimidazolium hexafluorophosphate, [BMIM][PF6], 1-hexyl-3-methylimidazolium hexafluorophosphate, [HMIM][PF6], and 1-octyl-3-methylimidazolium tetrafluoroborate, [OMIM][BF4], and water were accurately measured. [BMIM][PF6] and [OMIM][BF4] were synthesized by adapting a procedure from the literature to a simpler, single-vessel and faster methodology, with much lower consumption of organic solvent. We employed the solvation-parameter model to elucidate the general chemical interactions involved in RTIL/water partitioning. For this purpose, we selected solute descriptor parameters that measure polarity, polarizability, hydrogen-bond-donor and hydrogen-bond-acceptor interactions, and cavity formation for a set of specifically selected probe molecules (the training set). The obtained multiparametric equations were used to predict the partition coefficients for compounds not present in the training set (the test set), most of biological interest. Partial solubility of the ionic liquid in water (and of water in the ionic liquid) was taken into account to explain the obtained results; this factor has not been considered in depth to date. Solute descriptors were obtained from the literature when available, or else calculated with commercial software. An excellent agreement between calculated and experimental log P(IL/w) values was obtained, demonstrating that the resulting multiparametric equations are robust and allow partitioning to be predicted for any organic molecule in the biphasic systems studied.
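
The solvation-parameter model used in this record is an Abraham-type linear free-energy relationship, log P = c + eE + sS + aA + bB + vV, where E, S, A, B, V are solute descriptors and the lowercase coefficients are fitted per system. A sketch with hypothetical coefficient values (not the fitted ones from the study):

```python
def log_p_lfer(desc, coef):
    """Abraham-type solvation-parameter model:
    log P = c + e*E + s*S + a*A + b*B + v*V.
    `desc` maps descriptor names (E, S, A, B, V) to solute values;
    `coef` maps the same names, plus "c", to system coefficients."""
    return coef["c"] + sum(coef[k] * desc[k] for k in "ESABV")
```

Once the coefficients are fitted on the training set, predicting log P for a test-set compound is a single evaluation of this function on its descriptors.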

  4. High Performance Computing Based Parallel Hierarchical Modal Association Clustering (HPAR HMAC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patlolla, Dilip R; Surendran Nair, Sujithkumar; Graves, Daniel A.

    For many applications, clustering is a crucial step in order to gain insight into the makeup of a dataset. The best approach to a given problem often depends on a variety of factors, such as the size of the dataset, time restrictions, and soft clustering requirements. The HMAC algorithm seeks to combine the strengths of two particular clustering approaches: model-based and linkage-based clustering. One particular weakness of HMAC is its computational complexity; HMAC is not practical for mega-scale data clustering. For high-definition imagery, a user would have to wait months or years for a result; for a 16-megapixel image, the estimated runtime skyrockets to over a decade! To improve the execution time of HMAC, it is reasonable to consider a multi-core implementation that utilizes available system resources. An existing implementation (Ray and Cheng 2014) divides the dataset into N partitions, one for each thread, prior to executing the HMAC algorithm. This implementation benefits from two types of optimization: parallelization and divide-and-conquer. By running each partition in parallel, the program is able to accelerate computation by utilizing more system resources. Although the parallel implementation provides considerable improvement over the serial HMAC, it still suffers from poor computational complexity, O(N²). Once the maximum number of cores on a system is exhausted, the program exhibits slower behavior. We now consider a modification to HMAC that involves a recursive partitioning scheme. Our modification aims to exploit the divide-and-conquer benefits seen in the parallel HMAC implementation. At each level in the recursion tree, partitions are divided into two sub-partitions until a threshold size is reached. When a partition can no longer be divided without falling below the threshold size, the base HMAC algorithm is applied. This results in a significant speedup over the parallel HMAC.
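
The recursive partitioning scheme described above can be sketched as follows. The base HMAC step is stubbed out (leaves are simply returned), and the stopping rule is simplified: this sketch stops once a partition fits within the threshold, whereas the scheme above stops when a partition can no longer be halved without falling below it.

```python
def recursive_partition(data, threshold):
    """Recursively split the data into 2 sub-partitions until each leaf is at
    or below the threshold size; the base clustering algorithm would then run
    on each leaf independently (and in parallel across cores)."""
    if len(data) <= threshold:
        return [data]
    mid = len(data) // 2
    return (recursive_partition(data[:mid], threshold)
            + recursive_partition(data[mid:], threshold))
```

Because each leaf has bounded size, the quadratic base algorithm runs on small inputs only, which is the source of the speedup over running it on full per-thread partitions.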

  5. Mode entanglement of Gaussian fermionic states

    NASA Astrophysics Data System (ADS)

    Spee, C.; Schwaiger, K.; Giedke, G.; Kraus, B.

    2018-04-01

    We investigate the entanglement of n-mode n-partite Gaussian fermionic states (GFS). First, we identify a reasonable definition of separability for GFS and derive a standard form for mixed states, to which any state can be mapped via Gaussian local unitaries (GLU). As the standard form is unique, two GFS are equivalent under GLU if and only if their standard forms coincide. Then, we investigate the important class of local operations assisted by classical communication (LOCC). These are central in entanglement theory, as they allow one to partially order the entanglement contained in states. We show, however, that there are no nontrivial Gaussian LOCC (GLOCC) among pure n-partite (fully entangled) states. That is, any such GLOCC transformation can also be accomplished via GLU. To obtain further insight into the entanglement properties of such GFS, we investigate the richer class of Gaussian stochastic local operations assisted by classical communication (SLOCC). We characterize Gaussian SLOCC classes of pure n-mode n-partite states and derive them explicitly for few-mode states. Furthermore, we consider certain fermionic LOCC and show how to identify the maximally entangled set of pure n-mode n-partite GFS, i.e., the minimal set of states having the property that any other state can be obtained from one state inside this set via fermionic LOCC. We generalize these findings also to the pure m-mode n-partite (for m > n) case.

  6. Fishing and temperature effects on the size structure of exploited fish stocks.

    PubMed

    Tu, Chen-Yi; Chen, Kuan-Ting; Hsieh, Chih-Hao

    2018-05-08

    Size structure of a fish stock plays an important role in maintaining the sustainability of the population. The size distribution of an exploited stock is predicted to shift toward small individuals under size-selective fishing and/or warming; however, their relative contributions remain relatively unexplored. In addition, existing analyses of size structure have focused on univariate size-based indicators (SBIs), such as mean length, evenness of size classes, or the upper 95th percentile of the length-frequency distribution; these approaches may not capture the full information of size structure. To bridge the gap, we used the variation partitioning approach to examine how the size structure (composition of size classes) responded to fishing, warming, and their interaction. We analyzed 28 exploited stocks in the western US, Alaska, and the North Sea. Our results show that fishing has the most prominent effect on the size structure of the exploited stocks. In addition, fish stocks that experienced higher variability in fishing are more responsive to the temperature effect in their size structure, suggesting that fishing may elevate the sensitivity of exploited stocks in responding to environmental effects. The variation partitioning approach provides complementary information to univariate SBIs in analyzing size structure.
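
Variation partitioning, as used in this record, decomposes explained variance (R²) into fractions unique to each predictor, shared between them, and unexplained. A sketch with illustrative R² values (the study obtains its R² from constrained ordinations of the size-class composition):

```python
def variation_partition(r2_full, r2_fishing, r2_temperature):
    """Classic two-predictor variation partitioning:
    unique fishing     = R2(full) - R2(temperature alone)
    unique temperature = R2(full) - R2(fishing alone)
    shared             = R2(fishing) + R2(temperature) - R2(full)
    unexplained        = 1 - R2(full)."""
    return {"fishing": r2_full - r2_temperature,
            "temperature": r2_full - r2_fishing,
            "shared": r2_fishing + r2_temperature - r2_full,
            "unexplained": 1.0 - r2_full}
```

The four fractions sum to 1 by construction, which makes the relative contributions of fishing and temperature directly comparable.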

  7. Task-specific image partitioning.

    PubMed

    Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D

    2013-02-01

    Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used to perform high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework to produce a region-based image representation that leads to higher task performance than that achieved using task-oblivious partitioning frameworks or the few existing supervised partitioning frameworks. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function that define task-specific similarity/dissimilarity among superpixels are estimated by a structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to better generalization ability, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.

  8. QSPR modeling of octanol/water partition coefficient of antineoplastic agents by balance of correlations.

    PubMed

    Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio

    2010-04-01

    Three different splits of 55 antineoplastic agents into a subtraining set (n = 22), a calibration set (n = 21), and a test set (n = 12) have been examined. Using the correlation balance of SMILES-based optimal descriptors, quite satisfactory models for the octanol/water partition coefficient were obtained for all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both maximal correlation coefficients for the subtraining and calibration sets and a minimal difference between these correlation coefficients. Thus, the calibration set serves as a preliminary test set. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.

  9. Equivalence of partition properties and determinacy

    PubMed Central

    Kechris, Alexander S.; Woodin, W. Hugh

    1983-01-01

    It is shown that, within L(ℝ), the smallest inner model of set theory containing the reals, the axiom of determinacy is equivalent to the existence of arbitrarily large cardinals below Θ with the strong partition property κ → (κ)^κ. PMID:16593299

  10. Lieb-Robinson bounds on n -partite connected correlation functions

    NASA Astrophysics Data System (ADS)

    Tran, Minh Cong; Garrison, James R.; Gong, Zhe-Xuan; Gorshkov, Alexey V.

    2017-11-01

    Lieb and Robinson provided bounds on how fast bipartite connected correlations can arise in systems with only short-range interactions. We generalize Lieb-Robinson bounds on bipartite connected correlators to multipartite connected correlators. The bounds imply that an n -partite connected correlator can reach unit value in constant time. Remarkably, the bounds also allow for an n -partite connected correlator to reach a value that is exponentially large with system size in constant time, a feature which stands in contrast to bipartite connected correlations. We provide explicit examples of such systems.

  11. Efficient Construction of Mesostate Networks from Molecular Dynamics Trajectories.

    PubMed

    Vitalis, Andreas; Caflisch, Amedeo

    2012-03-13

    The coarse-graining of data from molecular simulations yields conformational space networks that may be used for predicting the system's long-time-scale behavior, for discovering structural pathways connecting free energy basins in the system, or simply for representing accessible phase space regions of interest and their connectivities in a two-dimensional plot. In this contribution, we present a tree-based algorithm to partition conformations of biomolecules into sets of similar microstates, i.e., to coarse-grain trajectory data into mesostates. On account of utilizing an architecture similar to that of established tree-based algorithms, the proposed scheme operates in near-linear time with data set size. We derive expressions needed for the fast evaluation of mesostate properties and distances when employing typical choices for measures of similarity between microstates. Using both a pedagogically useful and a real-world application, the algorithm is shown to be robust with respect to tree height, which in addition to mesostate threshold size is the main adjustable parameter. It is demonstrated that the derived mesostate networks can preserve information regarding the free energy basins and barriers by which the system is characterized.

  12. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    PubMed

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm and, in addition, reduces the number of bits in the stored or transmitted bit stream. The author applied it to the compression of multichannel ECG data and presented a specific procedure, based on the modified algorithm, for more efficient compression of multichannel ECG data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for the compression of multichannel ECG data. Furthermore, to compress a signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.

  13. A novel partitioning method for block-structured adaptive meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Lin, E-mail: lin.fu@tum.de; Litvinov, Sergej, E-mail: sergej.litvinov@aer.mw.tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  14. A novel partitioning method for block-structured adaptive meshes

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-07-01

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  15. PBDE emission from E-wastes during the pyrolytic process: Emission factor, compositional profile, size distribution, and gas-particle partitioning.

    PubMed

    Cai, ChuanYang; Yu, ShuangYu; Liu, Yu; Tao, Shu; Liu, WenXin

    2018-04-01

    Polybrominated diphenyl ether (PBDE) pollution in E-waste recycling areas has garnered great concern among scientists, the government and the public. In the current study, two typical kinds of E-wastes (printed wiring boards and plastic casings of household or office appliances) were selected to investigate the emission behaviors of individual PBDEs during the pyrolysis process. Emission factors (EFs), compositional profiles, particle size distributions and gas-particle partitioning of PBDEs were explored. The mean EF values of the total PBDEs were determined to be 8.1 ± 4.6 μg/g and 10.4 ± 11.3 μg/g for printed wiring boards and plastic casings, respectively. Significantly positive correlations were observed between EFs and the original addition contents of PBDEs. BDE209 was the most abundant in the E-waste materials, while lowly brominated and highly brominated components (excluding BDE209) were predominant in the exhaust fumes. The distribution of total PBDEs across particle sizes was concentrated in finer particles with an aerodynamic diameter between 0.4 μm and 2.1 μm, followed by those less than 0.4 μm. Similarly, the distribution of individual species was dominated by finer particles. Most of the freshly emitted PBDEs (via pyrolysis) were liable to exist in the particulate phase rather than the gaseous phase, particularly for finer particles. In addition, a linear relationship between the partitioning coefficient (K_P) and the subcooled liquid vapor pressure (P_L^0) of the different components indicated non-equilibrium gas-particle partitioning during the pyrolysis process and suggested that absorption by particulate organic carbon, rather than surface adsorption, governed gas-particle partitioning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Optical and Gravimetric Partitioning of Coastal Ocean Suspended Particulate Inorganic Matter (PIM)

    NASA Astrophysics Data System (ADS)

    Stavn, R. H.; Zhang, X.; Falster, A. U.; Gray, D. J.; Rick, J. J.; Gould, R. W., Jr.

    2016-02-01

    Recent work on the composition of suspended particulates of estuarine and coastal waters increases our capabilities to investigate the biogeochemical processes occurring in these waters. The biogeochemical properties associated with the particulates involve primarily sorption/desorption of dissolved matter onto the particle surfaces, which vary with the types of particulates. Therefore, the breakdown of suspended matter into chemical components will greatly expand the study of coastal ocean biogeochemistry. The gravimetric techniques for these studies are here expanded and refined. In addition, new optical inversions greatly expand our capabilities to study the spatial extent of the components of suspended particulate matter. The partitioning of a gravimetric PIM determination into clay minerals and amorphous silica is aided by electron microprobe analysis. The amorphous silica is further partitioned into contributions by detrital material and by the tests of living diatoms, based on an empirical formula relating the chlorophyll content of cultured living diatoms in log phase growth to their frustules determined after gravimetric analysis of the ashed diatom residue. The optical inversion of the composition of suspended particulates is based on the entire volume scattering function (VSF) measured in the field with a Multispectral Volume Scattering Meter and a LISST-100 meter. The VSF is partitioned into an optimal combination of contributions by particle subpopulations, each of which is uniquely represented by a refractive index and a log-normal size distribution. These subpopulations are aggregated to represent the two components of PIM using the corresponding refractive indices and sizes, which also yield a particle size distribution for the two components. The gravimetric results of partitioning PIM into clay minerals and amorphous silica confirm the optical inversions from the VSF.

  17. Convex Regression with Interpretable Sharp Partitions

    PubMed Central

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120

  18. Monkey search algorithm for ECE components partitioning

    NASA Astrophysics Data System (ADS)

    Kuliev, Elmar; Kureichik, Vladimir; Kureichik, Vladimir, Jr.

    2018-05-01

    The paper considers one of the important design problems – the partitioning of electronic computer equipment (ECE) components (blocks). The problem belongs to the NP-hard class and has a combinatorial and logic nature. In the paper, the partitioning problem is formulated as the partition of a graph into parts. To solve the given problem, the authors suggest a bioinspired approach based on a monkey search algorithm. Computational experiments carried out with the developed software show the algorithm's efficiency, as well as its recommended settings for obtaining more effective solutions in comparison with a genetic algorithm.

  19. Canonical partition functions: ideal quantum gases, interacting classical gases, and interacting quantum gases

    NASA Astrophysics Data System (ADS)

    Zhou, Chi-Chun; Dai, Wu-Sheng

    2018-02-01

    In statistical mechanics, the thermodynamic quantities of a system with a fixed number of particles, e.g. a finite-size system, strictly speaking need to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of symmetric functions, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases, given by the classical and quantum cluster expansion methods, in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than from the grand canonical potential.
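
    A minimal sketch of how an exact canonical partition function can be computed for ideal quantum gases: the recursion below is a standard identity (equivalent to Newton's formulas for symmetric functions), not the authors' Bell-polynomial formulation, and the finite single-particle spectrum `energies` is an illustrative assumption.

```python
import numpy as np

def canonical_Z(energies, beta, N, sign=+1):
    """Canonical partition function of N ideal bosons (sign=+1) or
    fermions (sign=-1) over a finite single-particle spectrum, via the
    standard recursion  Z_N = (1/N) * sum_{k=1..N} sign^(k+1) z_k Z_{N-k},
    where z_k = sum_i exp(-k * beta * e_i)."""
    e = np.asarray(energies, dtype=float)
    z = [np.exp(-k * beta * e).sum() for k in range(1, N + 1)]
    Z = [1.0]  # Z_0 = 1 by convention
    for n in range(1, N + 1):
        Z.append(sum(sign ** (k + 1) * z[k - 1] * Z[n - k]
                     for k in range(1, n + 1)) / n)
    return Z[N]
```

    For two fermions in two levels (e = 0, 1) the only allowed state has both levels filled, so Z_2 = exp(-beta), which the recursion reproduces exactly.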

  20. Data compression of discrete sequence: A tree based approach using dynamic programming

    NASA Technical Reports Server (NTRS)

    Shivaram, Gurusrasad; Seetharaman, Guna; Rao, T. R. N.

    1994-01-01

    A dynamic programming based approach for data compression of a 1D sequence is presented. The compression of an input sequence of size N to a smaller size k is achieved by dividing the input sequence into k subsequences and replacing the subsequences by their respective average values. The partitioning of the input sequence is carried out with the intention of minimizing the mean squared error in the reconstructed sequence. The complexity involved in finding the partitions which result in such an optimal compressed sequence is reduced by using the presented dynamic programming approach.
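
    The scheme described above can be sketched with a standard O(kN^2) dynamic program; this is a generic reconstruction of the idea (names and interface are illustrative), not necessarily the paper's exact recurrence.

```python
import numpy as np

def compress_sequence(x, k):
    """Partition x into k contiguous subsequences, each replaced by its
    mean, minimizing the total squared reconstruction error."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.concatenate([[0.0], np.cumsum(x)])        # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])   # prefix sums of squares

    def sse(i, j):  # squared error of replacing x[i:j] by its mean, O(1)
        return s2[j] - s2[i] - (s[j] - s[i]) ** 2 / (j - i)

    dp = np.full((k + 1, n + 1), np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for p in range(1, k + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                c = dp[p - 1, i] + sse(i, j)
                if c < dp[p, j]:
                    dp[p, j], back[p, j] = c, i
    cuts, j = [], n                                  # recover the partition
    for p in range(k, 0, -1):
        i = back[p, j]
        cuts.append((int(i), int(j)))
        j = i
    cuts.reverse()
    means = [float(x[i:j].mean()) for i, j in cuts]
    return cuts, means, float(dp[k, n])
```

    For a sequence with two flat plateaus, e.g. [1, 1, 1, 5, 5, 5] with k = 2, the optimal partition recovers the plateaus with zero reconstruction error.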

  1. Geographical and Temporal Body Size Variation in a Reptile: Roles of Sex, Ecology, Phylogeny and Ecology Structured in Phylogeny

    PubMed Central

    Aragón, Pedro; Fitze, Patrick S.

    2014-01-01

    Geographical body size variation has long interested evolutionary biologists, and a range of mechanisms have been proposed to explain the observed patterns. It is considered to be more puzzling in ectotherms than in endotherms, and integrative approaches are necessary for testing non-exclusive alternative mechanisms. Using lacertid lizards as a model, we adopted an integrative approach, testing different hypotheses for both sexes while incorporating temporal, spatial, and phylogenetic autocorrelation at the individual level. We used data on the Spanish Sand Racer species group from a field survey to disentangle different sources of body size variation through environmental and individual genetic data, while accounting for temporal and spatial autocorrelation. A variation partitioning method was applied to separate independent and shared components of ecology and phylogeny, and to estimate their significance. Then, we fed back our models by controlling for relevant independent components. The pattern was consistent with the geographical Bergmann's cline and the experimental temperature-size rule: adults were larger at lower temperatures (and/or higher elevations). This result was confirmed with an additional multi-year independent data set derived from the literature. Variation partitioning showed no sex differences in phylogenetic inertia but showed sex differences in the independent component of ecology, primarily due to growth differences. Interestingly, only after controlling for independent components did primary productivity also emerge as an important predictor explaining size variation in both sexes. This study highlights the importance of integrating individual-based genetic information, relevant ecological parameters, and temporal and spatial autocorrelation in sex-specific models to detect potentially important hidden effects. 
    Our individual-based approach, devoted to extracting and controlling for independent components, was useful for revealing hidden effects linked with alternative non-exclusive hypotheses, such as that of primary productivity. Also, including the measurement date allowed disentangling and controlling for short-term temporal autocorrelation reflecting sex-specific growth plasticity. PMID:25090025

  2. Potential and kinetic energetic analysis of phonon modes in varied molecular solids

    NASA Astrophysics Data System (ADS)

    Kraczek, Brent

    2015-03-01

    We calculate partitioned kinetic and potential energies of the phonon modes in molecular solids to illuminate the dynamical behavior of the constituent molecules. This enables analysis of the relationship between the characteristics of sets of phonon modes, molecular structure and chemical reactivity by partitioning the kinetic energy into the translational, rotational and vibrational motions of groups of atoms (including molecules), and the potential energy into the energy contained within interatomic interactions. We consider three solids of differing size and rigidity: naphthalene (C10H8), nitromethane (CH3NO2) and α-HMX (C4H8N8O8). Naphthalene and nitromethane mostly act in the semi-rigid manner often expected in molecular solids. HMX exhibits behavior that is significantly less rigid. While there are definite correlations between the kinetic and potential energetic analyses, there are also differences, particularly in the excitation of chemical bonds by low-frequency lattice modes. This suggests that in many cases computational and experimental methods dependent on atomic displacements may not identify phonon modes active in chemical reactivity.

  3. Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming

    NASA Astrophysics Data System (ADS)

    Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders

    The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V |, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.

  4. Partitioning Rectangular and Structurally Nonsymmetric Sparse Matrices for Parallel Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    B. Hendrickson; T.G. Kolda

    1998-09-01

    A common operation in scientific computing is the multiplication of a sparse, rectangular or structurally nonsymmetric matrix and a vector. In many applications the matrix-transpose-vector product is also required. This paper addresses the efficient parallelization of these operations. We show that the problem can be expressed in terms of partitioning bipartite graphs. We then introduce several algorithms for this partitioning problem and compare their performance on a set of test matrices.
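
    The bipartite-graph formulation can be illustrated concretely: the rows and the columns of the matrix form the two vertex sets, every nonzero is an edge, and an edge whose row and column land on different processors requires communication. The sketch below counts cut edges for a given assignment; it is a simplified proxy (the paper's algorithms and exact metric may differ, and all names are illustrative).

```python
import numpy as np

def bipartite_cut_edges(A, row_part, col_part):
    """Count edges of the bipartite graph of A whose row vertex and
    column vertex are assigned to different processors."""
    rows, cols = np.nonzero(A)  # each nonzero A[i, j] is an edge (row i, col j)
    return int(np.sum(np.asarray(row_part)[rows] != np.asarray(col_part)[cols]))
```

    For a diagonal matrix, assigning matching rows and columns to the same processor cuts no edges, while a mismatched assignment cuts every nonzero.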

  5. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    NASA Astrophysics Data System (ADS)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a wealthy laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys, one that is accessible to the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data with image source density peaks over 170,000 per field of view (500,000 deg^-2). Our analysis demonstrates that horizontal table partitions with declination widths of one degree control the query run times. Usage of an index strategy where the partitions are densely sorted according to source declination yields a further improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that for this logical database partitioning schema, the limiting cadence the pipeline achieved when processing IPHAS data is 25 s.

  6. Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.

    PubMed

    Vera, J Fernando; Macías, Rodrigo

    2017-06-01

    One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters, in the general situation in which the available information for clustering is a one-mode N × N dissimilarity matrix describing the objects. In this framework, p and the coordinates of points are usually unknown, and the application of criteria originally formulated for two-mode data sets is dependent on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained, in general, for unequal-sized clusters and in low-dimensionality situations, when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.
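
    The within-block/between-block decomposition can be illustrated with the standard identity that, for squared Euclidean dissimilarities, the scatter of a cluster about its (unknown) centroid equals the sum of its pairwise dissimilarities divided by twice the cluster size. A minimal sketch assuming a matrix `D2` of squared dissimilarities, not the paper's exact criteria:

```python
import numpy as np

def within_between(D2, labels):
    """Decompose the total point scatter encoded in a one-mode matrix of
    squared dissimilarities D2 into within-block and between-block parts,
    using  sum_{i in C} ||x_i - mean_C||^2 = sum_{i,j in C} d_ij^2 / (2|C|)."""
    N = D2.shape[0]
    total = D2.sum() / (2 * N)          # scatter about the grand centroid
    within = 0.0
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        within += D2[np.ix_(idx, idx)].sum() / (2 * len(idx))
    return within, total - within
```

    When the dissimilarities really are squared Euclidean distances, this reproduces the centroid-based within-cluster sum of squares without ever needing the coordinates.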

  7. Additional support for Afrotheria and Paenungulata, the performance of mitochondrial versus nuclear genes, and the impact of data partitions with heterogeneous base composition.

    PubMed

    Springer, M S; Amrine, H M; Burk, A; Stanhope, M J

    1999-03-01

    We concatenated sequences for four mitochondrial genes (12S rRNA, tRNA valine, 16S rRNA, cytochrome b) and four nuclear genes [aquaporin, alpha 2B adrenergic receptor (A2AB), interphotoreceptor retinoid-binding protein (IRBP), von Willebrand factor (vWF)] into a multigene data set representing 11 eutherian orders (Artiodactyla, Hyracoidea, Insectivora, Lagomorpha, Macroscelidea, Perissodactyla, Primates, Proboscidea, Rodentia, Sirenia, Tubulidentata). Within this data set, we recognized nine mitochondrial partitions (both stems and loops, for each of 12S rRNA, tRNA valine, and 16S rRNA; and first, second, and third codon positions of cytochrome b) and 12 nuclear partitions (first, second, and third codon positions, respectively, of each of the four nuclear genes). Four of the 21 partitions (third positions of cytochrome b, A2AB, IRBP, and vWF) showed significant heterogeneity in base composition across taxa. Phylogenetic analyses (parsimony, minimum evolution, maximum likelihood) based on sequences for all 21 partitions provide 99-100% bootstrap support for Afrotheria and Paenungulata. With the elimination of the four partitions exhibiting heterogeneity in base composition, there is also high bootstrap support (89-100%) for cow + horse. Statistical tests reject Altungulata, Anagalida, and Ungulata. Data set heterogeneity between mitochondrial and nuclear genes is most evident when all partitions are included in the phylogenetic analyses. Mitochondrial-gene trees associate cow with horse, whereas nuclear-gene trees associate cow with hedgehog and these two with horse. However, after eliminating third positions of A2AB, IRBP, and vWF, nuclear data agree with mitochondrial data in supporting cow + horse. Nuclear genes provide stronger support for both Afrotheria and Paenungulata. Removal of third positions of cytochrome b results in improved performance for the mitochondrial genes in recovering these clades.

  8. Methods for selecting fixed-effect models for heterogeneous codon evolution, with comments on their application to gene and genome data.

    PubMed

    Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P

    2007-02-08

    Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. 
    They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selecting models by backward elimination rather than AIC or AICc, (ii) using a stringent cut-off, e.g., p = 0.0001, and (iii) conducting a sensitivity analysis of the results. With thoughtful application, fixed-effect codon models should provide a useful tool for large scale multi-gene analyses.

  9. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of a long-order ANC system using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms are associated with some disadvantages such as large block delay, quantization error due to computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, the partitioned block ANC algorithm is newly proposed, where the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is much reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both of the proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
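
    The partitioned-block idea can be sketched as uniformly partitioned overlap-save convolution: the long filter is split into equal partitions, each input block is transformed once, and the partition spectra are combined through a frequency-delay line. This is a generic illustration of the filtering core only, not the FPBFXLMS adaptive update; all names are illustrative.

```python
import numpy as np

def partitioned_fft_convolve(x, h, B):
    """Uniformly partitioned overlap-save convolution with block size B.

    The filter h is split into P length-B partitions; each input block is
    FFT'd once (size 2B) and pushed into a frequency-delay line, so long
    filters need only short transforms."""
    P = -(-len(h) // B)                     # number of partitions
    hp = np.zeros(P * B)
    hp[:len(h)] = h
    # spectrum of each length-B partition, zero-padded to 2B
    H = np.array([np.fft.rfft(hp[p*B:(p+1)*B], n=2*B) for p in range(P)])
    n_out = len(x) + len(h) - 1
    n_blocks = -(-n_out // B)
    xp = np.zeros(n_blocks * B)
    xp[:len(x)] = x
    fdl = np.zeros((P, B + 1), dtype=complex)   # frequency-delay line
    prev = np.zeros(B)
    y = np.zeros(n_blocks * B)
    for b in range(n_blocks):
        cur = xp[b*B:(b+1)*B]
        fdl = np.roll(fdl, 1, axis=0)
        fdl[0] = np.fft.rfft(np.concatenate([prev, cur]))
        acc = (fdl * H).sum(axis=0)             # combine all partitions
        y[b*B:(b+1)*B] = np.fft.irfft(acc)[B:]  # keep the valid half
        prev = cur
    return y[:n_out]
```

    The result matches direct linear convolution, while each iteration only ever transforms length-2B blocks regardless of the total filter length.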

  10. Twig-leaf size relationships in woody plants vary intraspecifically along a soil moisture gradient

    NASA Astrophysics Data System (ADS)

    Yang, Xiao-Dong; Yan, En-Rong; Chang, Scott X.; Wang, Xi-Hua; Zhao, Yan-Tao; Shi, Qing-Ru

    2014-10-01

    Understanding scaling relationships between twig size and leaf size along environmental gradients is important for revealing strategies of plant biomass allocation with changing environmental constraints. However, it remains poorly understood how variations in the slope and y-intercept in the twig-leaf size relationship partition among individual, population and species levels across communities. Here, we determined the scaling relationships between twig cross-sectional area (twig size) and total leaf area per twig (leaf size) among individual, population and species levels along a soil moisture gradient in subtropical forests in eastern China. Twig and leaf tissues from 95 woody plant species were collected from three sites that form a soil moisture gradient: a wet site (W), a mesophytic site (M), and a dry site (D). The variance in scaling slope and y-intercept was partitioned among individual, population and species levels using a nested ANOVA. In addition, the change in the twig-leaf size relationship over the soil moisture gradient was determined for each of overlapping and turnover species. Twig size was positively related to leaf size across the three levels, with 98 and 90% of the variance in the scaling slope and y-intercept, respectively, partitioned at the individual level. Along the soil moisture gradient, the twig-leaf size relationship differed inter- and intraspecifically. At the species and population levels, there were homogeneous slopes but the y-intercept was W > M = D. In contrast, at the individual level, the regression slopes were heterogeneous among the three sites. More remarkably, the twig-leaf size relationships changed from negative allometry for overlapping species to isometry for turnover species. This study provides strong evidence for the twig-leaf size relationship to be intraspecific, particularly at the individual level. 
Our findings suggest that whether or not species have overlapping habitats is crucial for shaping the deployment pattern between twigs and leaves.

  11. Ion size effects upon ionic exclusion from dielectric interfaces and slit nanopores

    NASA Astrophysics Data System (ADS)

    Buyukdagli, Sahin; Achim, C. V.; Ala-Nissila, T.

    2011-05-01

    A previously developed field-theoretic model (Coalson et al 1995 J. Chem. Phys. 102 4584) that treats core collisions and Coulomb interactions on the same footing is investigated in order to understand ion size effects on the partition of neutral and charged particles at planar interfaces and the ionic selectivity of slit nanopores. We introduce a variational scheme that can go beyond the mean-field (MF) regime and couple in a consistent way pore-modified core interactions, steric effects, electrostatic solvation and image-charge forces, and surface charge induced electrostatic potential. Density profiles of neutral particles in contact with a neutral hard wall, obtained from Monte Carlo (MC) simulations are compared with the solutions of mean-field and variational equations. A recently proposed random-phase approximation (RPA) method is tested as well. We show that in the dilute limit, the MF and the variational theories agree well with simulation results, in contrast to the RPA method. The partition of charged Yukawa particles at a neutral dielectric interface (e.g. an air-water or protein-water interface) is investigated. It is shown that as a result of the competition between core collisions that push the ions toward the surface, and repulsive solvation and image forces that exclude them from the interface, a concentration peak of finite size ions sets in close to the dielectric interface. This effect is amplified with increasing ion size and bulk concentration. An integral expression for the surface tension that accounts for excluded volume effects is computed and the decrease of the surface tension with increasing ion size is illustrated. We also characterize the role played by the ion size in the ionic selectivity of neutral slit nanopores. 
We show that the complex interplay between electrostatic forces, excluded volume effects induced by core collisions and steric effects leads to an unexpected reversal in the ionic selectivity of the pore with varying pore size: while large pores exhibit a higher conductivity for large ions, narrow pores exclude large ions more efficiently than small ones.

  12. Temperature and composition dependencies of trace element partitioning - Olivine/melt and low-Ca pyroxene/melt

    NASA Technical Reports Server (NTRS)

    Colson, R. O.; Mckay, G. A.; Taylor, L. A.

    1988-01-01

    This paper presents a systematic thermodynamic analysis of the effects of temperature and composition on olivine/melt and low-Ca pyroxene/melt partitioning. Experiments were conducted in several synthetic basalts with a wide range of Fe/Mg, determining partition coefficients for Eu, Ca, Mn, Fe, Ni, Sm, Cd, Y, Yb, Sc, Al, Zr, and Ti and modeling accurately the changes in free energy for trace element exchange between crystal and melt as functions of the trace element size and charge. On the basis of this model, partition coefficients for olivine/melt and low-Ca pyroxene/melt can be predicted for a wide range of elements over a variety of basaltic bulk compositions and temperatures. Moreover, variations in partition coefficients during crystallization or melting can be modeled on the basis of changes in temperature and major element chemistry.

  13. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    NASA Technical Reports Server (NTRS)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.

  14. Copula-based prediction of economic movements

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Hirsh, I. D.

    2016-06-01

    In this paper we model the discretized returns of two paired time series BM&FBOVESPA Dividend Index and BM&FBOVESPA Public Utilities Index using multivariate Markov models. The discretization corresponds to three categories: high losses, high profits and the complementary periods of the series. In technical terms, the maximal memory that can be considered for a Markov model can be derived from the size of the alphabet and dataset. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achieved using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the two marginal processes and the partition corresponding to the multivariate Markov chain. In order to estimate the transition probabilities, all the partitions are linked using a copula. In our application this strategy provides a significant improvement in the movement predictions.
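    As a toy illustration of the discretization and joint-chain estimation described above (the thresholds, category labels and return values below are hypothetical, not the BM&FBOVESPA data or the paper's copula step):

```python
from collections import Counter, defaultdict

def discretize(returns, lo=-0.01, hi=0.01):
    """Map each return to one of three categories:
    high loss (L), middle (M), high profit (H)."""
    return ["L" if r <= lo else "H" if r >= hi else "M" for r in returns]

def joint_transitions(a, b):
    """First-order transition probabilities for the paired (bivariate) chain,
    whose state is the pair of marginal categories."""
    states = list(zip(a, b))
    counts = defaultdict(Counter)
    for prev, nxt in zip(states, states[1:]):
        counts[prev][nxt] += 1
    probs = {}
    for s, row in counts.items():
        total = sum(row.values())
        probs[s] = {t: c / total for t, c in row.items()}
    return probs

# toy paired return series (hypothetical, not the BM&FBOVESPA indices)
x = [0.02, -0.02, 0.0, 0.02, -0.02, 0.0]
y = [0.02, -0.02, 0.0, 0.02, -0.02, 0.0]
P = joint_transitions(discretize(x), discretize(y))
```

    The point of the paper's strategy is that, when such a joint transition table has too many sparsely observed states, states can be merged via partitions of the marginal chains linked by a copula.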

  15. Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems

    DTIC Science & Technology

    2015-05-01

    of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9...presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and...sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC

  16. Biogeochemical regions of the Mediterranean Sea: An objective multidimensional and multivariate environmental approach

    NASA Astrophysics Data System (ADS)

    Reygondeau, Gabriel; Guieu, Cécile; Benedetti, Fabio; Irisson, Jean-Olivier; Ayata, Sakina-Dorothée; Gasparini, Stéphane; Koubbi, Philippe

    2017-02-01

    When dividing the ocean, the aim is generally to summarise a complex system into a representative number of units, each representing a specific environment, a biological community or a socio-economical specificity. Recently, several geographical partitions of the global ocean have been proposed using statistical approaches applied to remote sensing or observations gathered during oceanographic cruises. Such geographical frameworks defined at a macroscale appear hardly applicable to characterise the biogeochemical features of semi-enclosed seas that are driven by smaller-scale chemical and physical processes. Following Longhurst's biogeochemical partitioning of the pelagic realm, this study investigates the environmental divisions of the Mediterranean Sea using a large set of environmental parameters. These parameters were resolved in both the horizontal and vertical dimensions to provide a 3D spatial framework for environmental management (12 regions found for the epipelagic, 12 for the mesopelagic, 13 for the bathypelagic and 26 for the seafloor). We show that: (1) the contribution of the longitudinal environmental gradient to the biogeochemical partitions decreases with depth; (2) the partition of the surface layer cannot be extrapolated to other vertical layers as the partition is driven by a different set of environmental variables. This new partitioning of the Mediterranean Sea has strong implications for conservation as it highlights that management must account for the differences in zoning with depth at a regional scale.

  17. Topics

    ERIC Educational Resources Information Center

    Mathematics Teaching, 1972

    1972-01-01

    Topics discussed in this column include patterns of inverse multipliers in modular arithmetic; diagrams for product sets, set intersection, and set union; function notation; patterns in the number of partitions of positive integers; and tessellations. (DT)

  18. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  19. A Dynamic Laplacian for Identifying Lagrangian Coherent Structures on Weighted Riemannian Manifolds

    NASA Astrophysics Data System (ADS)

    Froyland, Gary; Kwok, Eric

    2017-06-01

    Transport and mixing in dynamical systems are important properties for many physical, chemical, biological, and engineering processes. The detection of transport barriers for dynamics with general time dependence is a difficult, but important problem, because such barriers control how rapidly different parts of phase space (which might correspond to different chemical or biological agents) interact. The key factor is the growth of interfaces that partition phase space into separate regions. The paper Froyland (Nonlinearity 28(10):3587-3622, 2015) introduced the notion of dynamic isoperimetry: the study of sets with persistently small boundary size (the interface) relative to enclosed volume, when evolved by the dynamics. Sets with this minimal boundary size to volume ratio were identified as level sets of dominant eigenfunctions of a dynamic Laplace operator. In this present work we extend the results of Froyland (Nonlinearity 28(10):3587-3622, 2015) to the situation where the dynamics (1) is not necessarily volume preserving, (2) acts on initial agent concentrations different from uniform concentrations, and (3) occurs on a possibly curved phase space. Our main results include generalised versions of the dynamic isoperimetric problem, the dynamic Laplacian, Cheeger's inequality, and the Federer-Fleming theorem. We illustrate the computational approach with some simple numerical examples.

  20. Benefits Assessment of the Interaction Between Traffic Flow Management Delay and Airspace Partitions in the Presence of Weather

    NASA Technical Reports Server (NTRS)

    Palopo, Kee; Lee, Hak-Tae; Chatterji, Gano

    2011-01-01

    The concept of re-partitioning the airspace into a new set of sectors for allocating capacity rather than delaying flights to comply with the capacity constraints of a static set of sectors is being explored. The reduction in delay, a benefit, achieved by this concept needs to be greater than the cost of controllers and equipment needed for the additional sectors. Therefore, tradeoff studies are needed for benefits assessment of this concept.

  1. Architecture Aware Partitioning Algorithms

    DTIC Science & Technology

    2006-01-19

    follows: Given a graph G = (V, E ), where V is the set of vertices, n = |V | is the number of vertices, and E is the set of edges in the graph, partition the...communication link l(pi, pj) is associated with a graph edge weight e ∗(pi, pj) that represents the communication cost per unit of communication between...one that is local for each one. For our model we assume that communication in either direction across a given link is the same, therefore e ∗(pi, pj

  2. A mesh partitioning algorithm for preserving spatial locality in arbitrary geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nivarti, Girish V., E-mail: g.nivarti@alumni.ubc.ca; Salehi, M. Mahdi; Bushe, W. Kendal

    2015-01-15

    Highlights: •An algorithm for partitioning computational meshes is proposed. •The Morton order space-filling curve is modified to achieve improved locality. •A spatial locality metric is defined to compare results with existing approaches. •Results indicate improved performance of the algorithm in complex geometries. -- Abstract: A space-filling curve (SFC) is a proximity preserving linear mapping of any multi-dimensional space and is widely used as a clustering tool. Equi-sized partitioning of an SFC ignores the loss in clustering quality that occurs due to inaccuracies in the mapping. Often, this results in poor locality within partitions, especially for the conceptually simple Morton order curves. We present a heuristic that improves partition locality in arbitrary geometries by slicing a Morton order curve at points where spatial locality is sacrificed. In addition, we develop algorithms that evenly distribute points to the extent possible while maintaining spatial locality. A metric is defined to estimate relative inter-partition contact as an indicator of communication in parallel computing architectures. Domain partitioning tests have been conducted on geometries relevant to turbulent reactive flow simulations. The results obtained highlight the performance of our method as an unsupervised and computationally inexpensive domain partitioning tool.
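    The equi-sized Morton-order baseline that this paper improves upon can be sketched as follows (a minimal 2D illustration; the proposed heuristic additionally slices the curve where locality is lost, which is not shown here):

```python
def interleave(x, y, bits=8):
    """Morton (Z-order) key: interleave the bits of integer coordinates x, y."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return key

def equi_partition(points, k):
    """Sort points along the Morton curve, then cut into k equal-sized slices."""
    ordered = sorted(points, key=lambda p: interleave(*p))
    n = len(ordered)
    return [ordered[i * n // k:(i + 1) * n // k] for i in range(k)]

# a 4x4 grid of mesh points split into 4 partitions along the Z-order curve
pts = [(x, y) for x in range(4) for y in range(4)]
parts = equi_partition(pts, 4)
```

    On this regular grid the first slice is exactly the lower-left 2x2 quadrant; on irregular geometries the same cut points can separate spatially close nodes, which is the locality loss the paper's slicing heuristic targets.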

  3. Feeding ecology and niche overlap of Lake Ontario offshore forage fish assessed with stable isotopes

    USGS Publications Warehouse

    Mumby, James; Johson, Timothy; Stewart, Thomas; Halfyard, Edward; Walsh, Maureen; Weidel, Brian C.; Lantry, Jana; Fisk, Aarron

    2017-01-01

    The forage fish communities of the Laurentian Great Lakes continue to experience changes that have altered ecosystem structure, yet little is known about how they partition resources. Seasonal, spatial and body size variation in δ13C and δ15N was used to assess isotopic niche overlap and resource and habitat partitioning among the five common offshore Lake Ontario forage fish species (n = 2037) [Alewife (Alosa pseudoharengus), Rainbow Smelt (Osmerus mordax), Round Goby (Neogobius melanostomus), and Deepwater (Myoxocephalus thompsonii) and Slimy (Cottus cognatus) Sculpin]. Round Goby had the largest isotopic niche (6.1‰2, standard ellipse area (SEAC)), followed by Alewife (3.4‰2) while Rainbow Smelt, Slimy Sculpin and Deepwater Sculpin had the smallest and similar niche size (1.7-1.8‰2), with only the Sculpin species showing significant isotopic niche overlap (>63%). Stable isotopes in Alewife, Round Goby and Rainbow Smelt varied with location, season and size, but did not in the Sculpin spp. Lake Ontario forage fish species have partitioned food and habitat resources, and non-native Alewife and Round Goby have the largest isotopic niches, suggestive of broader ecological niches, and this may contribute to their current high abundance.
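    The standard ellipse area statistic (SEAc) used above can be computed from the covariance of paired δ13C/δ15N values. The sketch below follows the common convention (SEA = π·a·b, with the semi-axes from the covariance eigenvalues, and an (n-1)/(n-2) small-sample correction); the isotope values are made up, not the Lake Ontario data:

```python
import math

def seac(d13c, d15n):
    """Small-sample-corrected standard ellipse area from paired isotope values."""
    n = len(d13c)
    mx, my = sum(d13c) / n, sum(d15n) / n
    sxx = sum((a - mx) ** 2 for a in d13c) / (n - 1)
    syy = sum((b - my) ** 2 for b in d15n) / (n - 1)
    sxy = sum((a - mx) * (b - my) for a, b in zip(d13c, d15n)) / (n - 1)
    # eigenvalues of the 2x2 covariance matrix give the squared semi-axes
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    l1 = tr / 2 + math.sqrt(tr ** 2 / 4 - det)
    l2 = tr / 2 - math.sqrt(tr ** 2 / 4 - det)
    sea = math.pi * math.sqrt(l1 * l2)      # area = pi * a * b
    return sea * (n - 1) / (n - 2)          # small-sample correction (SEAc)

# four symmetric points: variances 4/3, no covariance -> SEAc = 2*pi exactly
area = seac([-1.0, 1.0, -1.0, 1.0], [-1.0, -1.0, 1.0, 1.0])
```

    Larger SEAc values, as reported for Round Goby and Alewife, correspond to more scattered δ13C/δ15N points and hence a wider isotopic niche.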

  4. Estimation of octanol/water partition coefficients using LSER parameters

    USGS Publications Warehouse

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

    The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated by LSER parameters without elaborate software, but only moderate accuracy should be expected.
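    A regression of this general shape, ordinary least squares of logKow against descriptor columns, can be sketched as below. The descriptor names (E, S, A, B, V) follow the usual LSER convention, but the numeric values and logKow entries are purely illustrative, not the study's 981-chemical training set:

```python
import numpy as np

# hypothetical LSER-style rows: descriptors (E, S, A, B, V), then observed logKow
data = np.array([
    [0.60, 0.52, 0.00, 0.14, 0.72, 2.13],
    [0.80, 0.60, 0.26, 0.33, 0.92, 1.95],
    [0.40, 0.40, 0.00, 0.10, 0.60, 1.80],
    [1.00, 0.90, 0.30, 0.50, 1.10, 2.40],
    [0.20, 0.30, 0.00, 0.05, 0.50, 1.50],
    [0.70, 0.70, 0.10, 0.25, 0.85, 2.05],
])
X = np.hstack([np.ones((len(data), 1)), data[:, :-1]])  # intercept + descriptors
y = data[:, -1]

# ordinary least squares: coefficients minimising ||X b - y||^2
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

    The fitted `coef` vector plays the role of the study's regression equation: predicting logKow for a new chemical is just a dot product of its descriptors with these coefficients.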

  5. DNA metabarcoding illuminates dietary niche partitioning by African large herbivores.

    PubMed

    Kartzinel, Tyler R; Chen, Patricia A; Coverdale, Tyler C; Erickson, David L; Kress, W John; Kuzmina, Maria L; Rubenstein, Daniel I; Wang, Wei; Pringle, Robert M

    2015-06-30

    Niche partitioning facilitates species coexistence in a world of limited resources, thereby enriching biodiversity. For decades, biologists have sought to understand how diverse assemblages of large mammalian herbivores (LMH) partition food resources. Several complementary mechanisms have been identified, including differential consumption of grasses versus nongrasses and spatiotemporal stratification in use of different parts of the same plant. However, the extent to which LMH partition food-plant species is largely unknown because comprehensive species-level identification is prohibitively difficult with traditional methods. We used DNA metabarcoding to quantify diet breadth, composition, and overlap for seven abundant LMH species (six wild, one domestic) in semiarid African savanna. These species ranged from almost-exclusive grazers to almost-exclusive browsers: Grass consumption inferred from mean sequence relative read abundance (RRA) ranged from >99% (plains zebra) to <1% (dik-dik). Grass RRA was highly correlated with isotopic estimates of % grass consumption, indicating that RRA conveys reliable quantitative information about consumption. Dietary overlap was greatest between species that were similar in body size and proportional grass consumption. Nonetheless, diet composition differed between all species-even pairs of grazers matched in size, digestive physiology, and location-and dietary similarity was sometimes greater across grazing and browsing guilds than within them. Such taxonomically fine-grained diet partitioning suggests that coarse trophic categorizations may generate misleading conclusions about competition and coexistence in LMH assemblages, and that LMH diversity may be more tightly linked to plant diversity than is currently recognized.

  6. DNA metabarcoding illuminates dietary niche partitioning by African large herbivores

    PubMed Central

    Kartzinel, Tyler R.; Chen, Patricia A.; Coverdale, Tyler C.; Erickson, David L.; Kress, W. John; Kuzmina, Maria L.; Rubenstein, Daniel I.; Wang, Wei; Pringle, Robert M.

    2015-01-01

    Niche partitioning facilitates species coexistence in a world of limited resources, thereby enriching biodiversity. For decades, biologists have sought to understand how diverse assemblages of large mammalian herbivores (LMH) partition food resources. Several complementary mechanisms have been identified, including differential consumption of grasses versus nongrasses and spatiotemporal stratification in use of different parts of the same plant. However, the extent to which LMH partition food-plant species is largely unknown because comprehensive species-level identification is prohibitively difficult with traditional methods. We used DNA metabarcoding to quantify diet breadth, composition, and overlap for seven abundant LMH species (six wild, one domestic) in semiarid African savanna. These species ranged from almost-exclusive grazers to almost-exclusive browsers: Grass consumption inferred from mean sequence relative read abundance (RRA) ranged from >99% (plains zebra) to <1% (dik-dik). Grass RRA was highly correlated with isotopic estimates of % grass consumption, indicating that RRA conveys reliable quantitative information about consumption. Dietary overlap was greatest between species that were similar in body size and proportional grass consumption. Nonetheless, diet composition differed between all species—even pairs of grazers matched in size, digestive physiology, and location—and dietary similarity was sometimes greater across grazing and browsing guilds than within them. Such taxonomically fine-grained diet partitioning suggests that coarse trophic categorizations may generate misleading conclusions about competition and coexistence in LMH assemblages, and that LMH diversity may be more tightly linked to plant diversity than is currently recognized. PMID:26034267

  7. PAQ: Partition Analysis of Quasispecies.

    PubMed

    Baccam, P; Thompson, R J; Fedrigo, O; Carpenter, S; Cornette, J L

    2001-01-01

    The complexities of genetic data may not be accurately described by any single analytical tool. Phylogenetic analysis is often used to study the genetic relationship among different sequences. Evolutionary models and assumptions are invoked to reconstruct trees that describe the phylogenetic relationship among sequences. Genetic databases are rapidly accumulating large amounts of sequences. Newly acquired sequences, which have not yet been characterized, may require preliminary genetic exploration in order to build models describing the evolutionary relationship among sequences. There are clustering techniques that rely less on models of evolution, and thus may provide nice exploratory tools for identifying genetic similarities. Some of the more commonly used clustering methods perform better when data can be grouped into mutually exclusive groups. Genetic data from viral quasispecies, which consist of closely related variants that differ by small changes, however, may best be partitioned by overlapping groups. We have developed an intuitive exploratory program, Partition Analysis of Quasispecies (PAQ), which utilizes a non-hierarchical technique to partition sequences that are genetically similar. PAQ was used to analyze a data set of human immunodeficiency virus type 1 (HIV-1) envelope sequences isolated from different regions of the brain and another data set consisting of the equine infectious anemia virus (EIAV) regulatory gene rev. Analysis of the HIV-1 data set by PAQ was consistent with phylogenetic analysis of the same data, and the EIAV rev variants were partitioned into two overlapping groups. PAQ provides an additional tool which can be used to glean information from genetic data and can be used in conjunction with other tools to study genetic similarities and genetic evolution of viral quasispecies.
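    A minimal sketch of the overlapping-group idea is given below. This is not the PAQ algorithm itself (which selects partitions via a diameter criterion); the sequences, centre choices and radius are hypothetical, and a variant close to two centres simply joins both groups:

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def overlapping_groups(seqs, centers, radius):
    """Group sequences around centre sequences; a variant within `radius` of
    several centres joins every such group, so groups may overlap."""
    return {c: [s for s in seqs if hamming(s, c) <= radius] for c in centers}

# toy quasispecies variants (hypothetical); "ACGA" is close to both centres
variants = ["AAAA", "AAAT", "ACGA", "CCGG", "CCGA"]
groups = overlapping_groups(variants, centers=["AAAA", "CCGG"], radius=2)
```

    Allowing membership in more than one group is what distinguishes this style of partitioning from the mutually exclusive clusters produced by most standard methods.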

  8. Modeling of adipose/blood partition coefficient for environmental chemicals.

    PubMed

    Papadaki, K C; Karakitsios, S P; Sarigiannis, D A

    2017-12-01

    A Quantitative Structure Activity Relationship (QSAR) model was developed in order to predict the adipose/blood partition coefficient of environmental chemical compounds. The first step of QSAR modeling was the collection of inputs. Input data included the experimental values of adipose/blood partition coefficient and two sets of molecular descriptors for 67 organic chemical compounds; a) the descriptors from Linear Free Energy Relationship (LFER) and b) the PaDEL descriptors. The datasets were split into training and prediction sets and were analysed using two statistical methods; Genetic Algorithm based Multiple Linear Regression (GA-MLR) and Artificial Neural Networks (ANN). The models with LFER and PaDEL descriptors, coupled with ANN, produced satisfying performance results. The fitting performance (R²) of the models, using LFER and PaDEL descriptors, was 0.94 and 0.96, respectively. The Applicability Domain (AD) of the models was assessed and then the models were applied to a large number of chemical compounds with unknown values of adipose/blood partition coefficient. In conclusion, the proposed models were checked for fitting, validity and applicability. It was demonstrated that they are stable, reliable and capable of predicting the values of adipose/blood partition coefficient of "data poor" chemical compounds that fall within the applicability domain. Copyright © 2017. Published by Elsevier Ltd.

  9. Dynamically heterogenous partitions and phylogenetic inference: an evaluation of analytical strategies with cytochrome b and ND6 gene sequences in cranes.

    PubMed

    Krajewski, C; Fain, M G; Buckley, L; King, D G

    1999-11-01

    Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogenous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that heterogeneity includes variation in base composition and transition bias as well as substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogenous data sets. Copyright 1999 Academic Press.

  10. Unsupervised hierarchical partitioning of hyperspectral images: application to marine algae identification

    NASA Astrophysics Data System (ADS)

    Chen, B.; Chehdi, K.; De Oliveria, E.; Cariou, C.; Charbonnier, B.

    2015-10-01

    In this paper a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground truth data constitute serious handicaps, especially over large areas which can be covered by airborne or satellite images. The developed classification approach allows i) a successive partitioning of data into several levels or partitions in which the main classes are first identified, ii) an estimation of the number of classes automatically at each level without any end user help, iii) a nonsystematic subdivision of all classes of a partition Pj to form a partition Pj+1, iv) a stable partitioning result of the same data set from one run of the method to another. The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised. It estimates at each level, the optimal number of classes and the final partition without any end user intervention.

  11. The prediction of blood-tissue partitions, water-skin partitions and skin permeation for agrochemicals.

    PubMed

    Abraham, Michael H; Gola, Joelle M R; Ibrahim, Adam; Acree, William E; Liu, Xiangli

    2014-07-01

    There is considerable interest in the blood-tissue distribution of agrochemicals, and a number of researchers have developed experimental methods for in vitro distribution. These methods involve the determination of saline-blood and saline-tissue partitions; not only are they indirect, but they do not yield the required in vivo distribution. The authors set out equations for gas-tissue and blood-tissue distribution, for partition from water into skin and for permeation from water through human skin. Together with Abraham descriptors for the agrochemicals, these equations can be used to predict values for all of these processes. The present predictions compare favourably with experimental in vivo blood-tissue distribution where available. The predictions require no more than simple arithmetic. The present method represents a much easier and much more economic way of estimating blood-tissue partitions than the method that uses saline-blood and saline-tissue partitions. It has the added advantages of yielding the required in vivo partitions and being easily extended to the prediction of partition of agrochemicals from water into skin and permeation from water through skin. © 2013 Society of Chemical Industry.

  12. 3d expansions of 5d instanton partition functions

    NASA Astrophysics Data System (ADS)

    Nieri, Fabrizio; Pan, Yiwen; Zabzine, Maxim

    2018-04-01

    We propose a set of novel expansions of Nekrasov's instanton partition functions. Focusing on 5d supersymmetric pure Yang-Mills theory with unitary gauge group on C^2_{q,t^{-1}} × S^1, we show that the instanton partition function admits expansions in terms of partition functions of unitary gauge theories living on the 3d subspaces C_q × S^1, C_{t^{-1}} × S^1 and their intersection along S^1. These new expansions are natural from the BPS/CFT viewpoint, as they can be matched with W_{q,t} correlators involving an arbitrary number of screening charges of two kinds. Our constructions generalize and interpolate existing results in the literature.

  13. An Effective Approach for Clustering InhA Molecular Dynamics Trajectory Using Substrate-Binding Cavity Features

    PubMed Central

    De Paris, Renata; Quevedo, Christian V.; Ruiz, Duncan D. A.; Norberto de Souza, Osmar

    2015-01-01

    Protein receptor conformations, obtained from molecular dynamics (MD) simulations, have become a promising treatment of their explicit flexibility in molecular docking experiments applied to drug discovery and development. However, incorporating the entire ensemble of MD conformations in docking experiments to screen large candidate compound libraries is currently an unfeasible task. Clustering algorithms have been widely used as a means to reduce such ensembles to a manageable size. Most studies investigate different algorithms using pairwise Root-Mean Square Deviation (RMSD) values for all, or part of the MD conformations. Nevertheless, the RMSD alone may not be the most appropriate gauge to cluster conformations when the target receptor has a plastic active site, since they are influenced by changes that occur on other parts of the structure. Hence, we have applied two partitioning methods (k-means and k-medoids) and four agglomerative hierarchical methods (Complete linkage, Ward’s, Unweighted Pair Group Method and Weighted Pair Group Method) to analyze and compare the quality of partitions between a data set composed of properties from an enzyme receptor substrate-binding cavity and two data sets created using different RMSD approaches. Ensembles of representative MD conformations were generated by selecting a medoid of each group from all partitions analyzed. We investigated the performance of our new method for evaluating binding conformations of drug candidates to the InhA enzyme by cross-docking experiments between a 20 ns MD trajectory and 20 different ligands. Statistical analyses showed that the novel ensemble, which is represented by only 0.48% of the MD conformations, was able to reproduce 75% of all dynamic behaviors within the binding cavity for the docking experiments performed.
Moreover, this new approach not only outperforms the other two RMSD-clustering solutions, but it also shows to be a promising strategy to distill biologically relevant information from MD trajectories, especially for docking purposes. PMID:26218832
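    A generic k-medoids pass of the kind applied here can be sketched on a toy precomputed distance matrix. The study clustered cavity-property vectors and RMSD matrices; this shows only the clustering skeleton, with made-up 1D "conformations":

```python
import random

def k_medoids(dist, k, iters=50, seed=0):
    """Plain k-medoids on a precomputed distance matrix: alternately assign
    items to the nearest medoid and re-pick each cluster's best medoid."""
    n = len(dist)
    medoids = random.Random(seed).sample(range(n), k)
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for i in range(n):
            clusters[min(medoids, key=lambda m: dist[i][m])].append(i)
        # the new medoid of a cluster minimises its total distance to members
        new = [min(c, key=lambda x: sum(dist[x][y] for y in c))
               for c in clusters.values() if c]
        if sorted(new) == sorted(medoids):
            break
        medoids = new
    return sorted(medoids)

# two well-separated blobs on a line; distances are absolute differences
pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
D = [[abs(a - b) for b in pts] for a in pts]
meds = k_medoids(D, 2)
```

    The returned medoid indices are the representative conformations; keeping one medoid per group is exactly how the representative MD ensembles above were formed.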

  14. An Effective Approach for Clustering InhA Molecular Dynamics Trajectory Using Substrate-Binding Cavity Features.

    PubMed

    De Paris, Renata; Quevedo, Christian V; Ruiz, Duncan D A; Norberto de Souza, Osmar

    2015-01-01

    Protein receptor conformations, obtained from molecular dynamics (MD) simulations, have become a promising treatment of their explicit flexibility in molecular docking experiments applied to drug discovery and development. However, incorporating the entire ensemble of MD conformations in docking experiments to screen large candidate compound libraries is currently an unfeasible task. Clustering algorithms have been widely used as a means to reduce such ensembles to a manageable size. Most studies investigate different algorithms using pairwise Root-Mean Square Deviation (RMSD) values for all, or part of the MD conformations. Nevertheless, the RMSD alone may not be the most appropriate gauge to cluster conformations when the target receptor has a plastic active site, since they are influenced by changes that occur on other parts of the structure. Hence, we have applied two partitioning methods (k-means and k-medoids) and four agglomerative hierarchical methods (Complete linkage, Ward's, Unweighted Pair Group Method and Weighted Pair Group Method) to analyze and compare the quality of partitions between a data set composed of properties from an enzyme receptor substrate-binding cavity and two data sets created using different RMSD approaches. Ensembles of representative MD conformations were generated by selecting a medoid of each group from all partitions analyzed. We investigated the performance of our new method for evaluating binding conformations of drug candidates to the InhA enzyme by cross-docking experiments between a 20 ns MD trajectory and 20 different ligands. Statistical analyses showed that the novel ensemble, which is represented by only 0.48% of the MD conformations, was able to reproduce 75% of all dynamic behaviors within the binding cavity for the docking experiments performed.
Moreover, this new approach not only outperforms the other two RMSD-based clustering solutions, but also shows promise as a strategy for distilling biologically relevant information from MD trajectories, especially for docking purposes.
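
    The ensembles above are built by taking the medoid of each cluster as its representative. A minimal sketch of that selection step in Python, on hypothetical 2-D feature vectors (not actual cavity properties or RMSD data):

    ```python
    # Minimal sketch: the medoid of a cluster is the member minimizing its
    # summed distance to all other members. Feature vectors below are
    # hypothetical 2-D points, not real MD-derived cavity properties.

    def medoid(cluster):
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return min(cluster, key=lambda a: sum(dist(a, b) for b in cluster))

    clusters = [
        [(0.0, 0.1), (0.2, 0.0), (0.1, 0.1)],  # toy cluster 1
        [(5.0, 5.2), (5.1, 5.0), (4.9, 5.1)],  # toy cluster 2
    ]
    # one representative "conformation" per cluster
    representatives = [medoid(c) for c in clusters]
    print(representatives)
    ```

    Picking medoids rather than centroids guarantees each representative is an actual member of the ensemble, which is what makes the selected structures usable for docking.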

  15. Increasing zooplankton size diversity enhances the strength of top-down control on phytoplankton through diet niche partitioning.

    PubMed

    Ye, Lin; Chang, Chun-Yi; García-Comas, Carmen; Gong, Gwo-Ching; Hsieh, Chih-Hao

    2013-09-01

    1. The biodiversity-ecosystem functioning debate is a central topic in ecology. Recently, there has been a growing interest in size diversity because body size is sensitive to environmental changes and is one of the fundamental characteristics of organisms linking many ecosystem properties. However, how size diversity affects ecosystem functioning is an important yet unclear issue. 2. To fill the gap, with large-scale field data from the East China Sea, we tested the novel hypothesis that increasing zooplankton size diversity enhances top-down control on phytoplankton (H1) and compared it with five conventional hypotheses explaining the top-down control: flatter zooplankton size spectrum enhances the strength of top-down control (H2); nutrient enrichment lessens the strength of top-down control (H3); increasing zooplankton taxonomic diversity enhances the strength of top-down control (H4); increasing fish predation decreases the strength of top-down control of zooplankton on phytoplankton through trophic cascade (H5); increasing temperature intensifies the strength of top-down control (H6). 3. The results of univariate analyses support the hypotheses based on zooplankton size diversity (H1), zooplankton size spectrum (H2), nutrient (H3) and zooplankton taxonomic diversity (H4), but not the hypotheses based on fish predation (H5) and temperature (H6). More in-depth analyses indicate that zooplankton size diversity is the most important factor in determining the strength of top-down control on phytoplankton in the East China Sea. 4. Our results suggest a new potential mechanism that increasing predator size diversity enhances the strength of top-down control on prey through diet niche partitioning. This mechanism can be explained by the optimal predator-prey body-mass ratio concept. 
If each size group of zooplankton predators has its own optimal phytoplankton prey size, increasing the size diversity of zooplankton would promote diet niche partitioning among predators and thus elevate the strength of top-down control. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.

  16. 47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...

  17. 47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...

  18. 47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...

  19. 47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...

  20. 47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...

  1. A Novel Method for Discovering Fuzzy Sequential Patterns Using the Simple Fuzzy Partition Method.

    ERIC Educational Resources Information Center

    Chen, Ruey-Shun; Hu, Yi-Chung

    2003-01-01

    Discusses sequential patterns, data mining, knowledge acquisition, and fuzzy sequential patterns described by natural language. Proposes a fuzzy data mining technique to discover fuzzy sequential patterns by using the simple partition method which allows the linguistic interpretation of each fuzzy set to be easily obtained. (Author/LRW)

  2. Finding and testing network communities by lumped Markov chains.

    PubMed

    Piccardi, Carlo

    2011-01-01

Identifying communities (or clusters), namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. Yet, there is a lack of formal criteria for defining communities and for testing their significance. We propose a sharp definition that is based on a quality threshold. By means of a lumped Markov chain model of a random walker, a quality measure called "persistence probability" is associated with a cluster, which is then defined as an "α-community" if such a probability is not smaller than α. Consistently, a partition composed of α-communities is an "α-partition." These definitions turn out to be very effective for finding and testing communities. If a set of candidate partitions is available, setting the desired α-level allows one to immediately select the α-partition with the finest decomposition. Simultaneously, the persistence probabilities quantify the quality of each single community. Given its ability to assess each single cluster individually, this approach can also disclose single well-defined communities even in networks that overall do not possess a definite clusterized structure.
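
    The persistence probability described above can be computed directly from a graph's random-walk structure. A minimal sketch, assuming an unweighted undirected toy graph (two triangles joined by an edge) rather than any real network:

    ```python
    # Sketch: persistence probability of a cluster = stationary probability
    # that a random walker currently inside the cluster is still inside it
    # after one step. Toy graph: two triangles joined by the edge (2, 3).

    def persistence(adj, cluster):
        cluster = set(cluster)
        deg = {v: len(ns) for v, ns in adj.items()}
        two_m = sum(deg.values())  # twice the edge count; pi_v = deg_v / two_m
        stay = sum((deg[v] / two_m) *
                   sum(1 for w in adj[v] if w in cluster) / deg[v]
                   for v in cluster)
        inside = sum(deg[v] / two_m for v in cluster)
        return stay / inside

    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    print(persistence(adj, [0, 1, 2]))  # a good cluster: close to 1
    print(persistence(adj, [2, 3]))     # a poor cluster: much lower
    ```

    Setting α = 0.8 would accept {0, 1, 2} (persistence 6/7) as an α-community and reject {2, 3}.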

  3. Dry matter partitioning models for the simulation of individual fruit growth in greenhouse cucumber canopies

    PubMed Central

    Wiechers, Dirk; Kahlen, Katrin; Stützel, Hartmut

    2011-01-01

    Background and Aims Growth imbalances between individual fruits are common in indeterminate plants such as cucumber (Cucumis sativus). In this species, these imbalances can be related to differences in two growth characteristics, fruit growth duration until reaching a given size and fruit abortion. Both are related to distribution, and environmental factors as well as canopy architecture play a key role in their differentiation. Furthermore, events leading to a fruit reaching its harvestable size before or simultaneously with a prior fruit can be observed. Functional–structural plant models (FSPMs) allow for interactions between environmental factors, canopy architecture and physiological processes. Here, we tested hypotheses which account for these interactions by introducing dominance and abortion thresholds for the partitioning of assimilates between growing fruits. Methods Using the L-System formalism, an FSPM was developed which combined a model for architectural development, a biochemical model of photosynthesis and a model for assimilate partitioning, the last including a fruit growth model based on a size-related potential growth rate (RP). Starting from a distribution proportional to RP, the model was extended by including abortion and dominance. Abortion was related to source strength and dominance to sink strength. Both thresholds were varied to test their influence on fruit growth characteristics. Simulations were conducted for a dense row and a sparse isometric canopy. Key Results The simple partitioning models failed to simulate individual fruit growth realistically. The introduction of abortion and dominance thresholds gave the best results. Simulations of fruit growth durations and abortion rates were in line with measurements, and events in which a fruit was harvestable earlier than an older fruit were reproduced. Conclusions Dominance and abortion events need to be considered when simulating typical fruit growth traits. 
By integrating environmental factors, the FSPM can be a valuable tool to analyse and improve existing knowledge about the dynamics of assimilate partitioning. PMID:21715366

  4. Can Supersaturation Affect Protein Crystal Quality?

    NASA Technical Reports Server (NTRS)

    Gorti, Sridhar

    2013-01-01

In quiescent environments (microgravity, capillary tubes, gels), formation of a depletion zone is to be expected, due either to limited sedimentation, density-driven convection, or a combination of both. The formation of a depletion zone can modify solution supersaturation near the crystal and give rise to impurity partitioning. It is conjectured that both supersaturation and impurity partitioning affect protein crystal quality and size. Further detailed investigations on various proteins are needed to assess the above hypothesis.

  5. Dynamic connectivity regression: Determining state-related changes in brain connectivity

    PubMed Central

    Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.

    2014-01-01

    Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408

  6. Combined spectroscopic imaging and chemometric approach for automatically partitioning tissue types in human prostate tissue biopsies

    NASA Astrophysics Data System (ADS)

    Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil

    2001-07-01

We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array detector (FPA) and a Michelson step-scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA used consists of 64x64 pixels, each 61 μm², and has a spectral range of 2-10.5 microns. Each imaging data set was collected at 16 cm-1 resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, which demonstrate the successful partitioning of distinct tissue-type domains. Additionally, analysis of differences in the centroid spectra corresponding to different tissue types provides an insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue-type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.
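
    The Fuzzy C-Means algorithm used above alternates between membership and centroid updates. A minimal one-dimensional sketch with fuzziness exponent m = 2 and two clusters (scalars stand in for per-pixel spectra; the data are illustrative only):

    ```python
    # One-dimensional Fuzzy C-Means sketch (fuzziness m = 2, two clusters).
    # Scalars stand in for high-dimensional pixel spectra; toy data only.

    def fuzzy_c_means(xs, m=2.0, iters=50):
        centers = [min(xs), max(xs)]  # simple two-cluster initialization
        for _ in range(iters):
            # membership of x in cluster k: 1 / sum_j (d_k / d_j)^(2/(m-1))
            u = [[1.0 / sum(((abs(x - ck) + 1e-12) / (abs(x - cj) + 1e-12))
                            ** (2 / (m - 1)) for cj in centers)
                  for ck in centers] for x in xs]
            # each center becomes the membership-weighted mean of the data
            centers = [sum(u[i][k] ** m * xs[i] for i in range(len(xs))) /
                       sum(u[i][k] ** m for i in range(len(xs)))
                       for k in range(2)]
        return centers, u

    xs = [0.1, 0.0, 0.2, 4.9, 5.0, 5.1]
    centers, u = fuzzy_c_means(xs)
    print(sorted(centers))  # near the two group centers, 0.1 and 5.0
    ```

    Unlike hard k-means, each pixel retains a graded membership in every cluster, which is what lets centroid spectra be compared across tissue-type domains.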

  7. Sensitivity of Aerosol Mass and Microphysics to Treatments of Condensational Growth of Secondary Organic Compounds in a Regional Model

    NASA Astrophysics Data System (ADS)

    Topping, D. O.; Lowe, D.; McFiggans, G.; Zaveri, R. A.

    2016-12-01

Gas to particle partitioning of atmospheric compounds occurs through disequilibrium mass transfer rather than through instantaneous equilibrium. However, it is common to treat only the inorganic compounds as partitioning dynamically whilst organic compounds, represented by the Volatility Basis Set (VBS), are partitioned instantaneously. In this study we implement a more realistic dynamic partitioning of organic compounds in a regional framework and assess the impact on aerosol mass and microphysics. It is also common to assume condensed-phase water is only associated with inorganic components. We thus also assess sensitivity to assuming all organics are hygroscopic according to their prescribed molecular weight. For this study we use WRF-Chem v3.4.1, focusing on anthropogenically dominated North-Western Europe. Gas-phase chemistry is represented using CBM-Z whilst aerosol dynamics are simulated using the 8-section MOSAIC scheme, including a 9-bin volatility basis set (VBS) treatment of organic aerosol. Results indicate that predicted mass loadings can vary significantly. Without gas-phase ageing of higher-volatility compounds, dynamic partitioning always results in lower mass loadings downwind of emission sources. The inclusion of condensed-phase water in both partitioning models increases the predicted PM mass, resulting from a larger contribution from higher-volatility organics, if present. If gas-phase ageing of VBS compounds is allowed to occur in a dynamic model, this can often lead to higher predicted mass loadings, contrary to the expected behaviour from a simple non-reactive gas-phase box model. As descriptions of aerosol-phase processes improve within regional models, the baseline descriptions of partitioning should retain the ability to treat dynamic partitioning of organic compounds.
Using our simulations, we discuss whether derived sensitivities to aerosol processes in existing models may be inherently biased. This work was supported by the Natural Environment Research Council within the RONOCO (NE/F004656/1) and CCN-Vol (NE/L007827/1) projects.

  8. Surveillance system and method having parameter estimation and operating mode partitioning

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.

  9. Phase Partitioning of Soluble Trace Gases with Size-Resolved Aerosols during the Nitrogen, Aerosol Composition, and Halogens on a Tall Tower (NACHTT) Campaign

    NASA Astrophysics Data System (ADS)

    Young, A.; Keene, W. C.; Pszenny, A.; Sander, R.; Maben, J. R.; Warrick-Wriston, C.; Bearekman, R.

    2011-12-01

    During February and March 2011, size-resolved and bulk aerosol were sampled at 22 m above the surface over nominal 12-hour (daytime and nighttime) intervals from the Boulder Atmospheric Observatory tower (40.05 N, 105.01 W, 1584-m elevation). Samples were analyzed for major organic and inorganic ionic constituents by high performance ion chromatography (IC). Soluble trace gases (HCl, HNO3, NH3, HCOOH, and CH3COOH) were sampled in parallel over 2-hour intervals with tandem mist chambers and analyzed on site by IC. NH4+, NO3-, and SO42- were the major ionic components of aerosols (median values of 57.7, 34.5, and 7.3 nmol m-3 at STP, respectively, N = 45) with 86%, 82%, and 82%, respectively, associated with sub-μm size fractions. Cl- and Na+ were present at significant concentrations (median values of 6.8 and 6.6 nmol m-3, respectively) but were associated primarily with super-μm size fractions (75% and 78%, respectively). Median values (and ranges) for HCl, HNO3, and NH3 were 21 (<20-1257), 120 (<45-1638), and 5259 (<1432-48,583) pptv, respectively. Liquid water contents of size-resolved aerosols and activity coefficients for major ionic constituents were calculated with the Extended Aerosol Inorganic Model II and IV (E-AIM) based on the measured aerosol composition, RH, temperature, and pressure. Size-resolved aerosol pHs were inferred from the measured phase partitioning of HCl, HNO3, and NH3. Major controls of phase partitioning and associated chemical dynamics will be presented.

  10. S-HARP: A parallel dynamic spectral partitioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Simon, H.

    1998-01-01

Computational science problems with adaptive meshes involve dynamic load balancing when implemented on parallel machines. This dynamic load balancing requires fast partitioning of computational meshes at run time. The authors present in this report a fast parallel dynamic partitioner, called S-HARP. The underlying principles of S-HARP are the speed of inertial partitioning and the quality of spectral partitioning. S-HARP partitions a graph from scratch, requiring no partition information from previous iterations. Two types of parallelism have been exploited in S-HARP: fine-grain loop-level parallelism and coarse-grain recursive parallelism. The parallel partitioner has been implemented in the Message Passing Interface on the Cray T3E and IBM SP2 for portability. Experimental results indicate that S-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.2 seconds on a 64-processor Cray T3E. S-HARP is much more scalable than other dynamic partitioners, giving over 15-fold speedup on 64 processors while ParaMeTiS 1.0 gives only a few-fold speedup. Experimental results demonstrate that S-HARP is three to 10 times faster than the dynamic partitioners ParaMeTiS and Jostle on six computational meshes of size over 100,000 vertices.

  11. Molecular features determining different partitioning patterns of papain and bromelain in aqueous two-phase systems.

    PubMed

    Rocha, Maria Victoria; Nerli, Bibiana Beatriz

    2013-10-01

The partitioning patterns of papain (PAP) and bromelain (BR), two well-known cysteine proteases, in polyethyleneglycol/sodium citrate aqueous two-phase systems (ATPSs) were determined. Polyethyleneglycols of different molecular weights (600, 1000, 2000, 4600 and 8000) were assayed. Thermodynamic characterization of the partitioning process, spectroscopic measurements and computational calculations of protein surface properties were also carried out in order to explain their differential partitioning behavior. PAP was displaced to the salt-enriched phase in all the assayed systems, with partition coefficient (KpPAP) values between 0.2 and 0.9, while BR exhibited a high affinity for the polymer phase in systems formed by PEGs of low molecular weight (600 and 1000), with partition coefficient (KpBR) values close to 3. KpBR values were higher than KpPAP in all cases. This difference could be attributed neither to the charge nor to the size of the partitioned biomolecules, since PAP and BR possess similar molecular weights (23,000) and isoelectric points (9.60). The presence of highly exposed tryptophans and positively charged residues (Lys, Arg and His) in the BR molecule would be responsible for a charge-transfer interaction between PEG and the protein and, therefore, for the uneven distribution of BR in these systems. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Automatic reconstruction of fault networks from seismicity catalogs: Three-dimensional optimal anisotropic dynamic clustering

    NASA Astrophysics Data System (ADS)

    Ouillon, G.; Ducorbier, C.; Sornette, D.

    2008-01-01

We propose a new pattern recognition method that is able to reconstruct the three-dimensional structure of the active part of a fault network using the spatial locations of earthquakes. The method is a generalization of the so-called dynamic clustering (or k-means) method, which partitions a set of data points into clusters using a global minimization criterion on the variance of the hypocenter locations about their centers of mass. The new method improves on the original k-means method by taking into account the full spatial covariance tensor of each cluster in order to partition the data set into fault-like, anisotropic clusters. Given a catalog of seismic events, the output is the optimal set of plane segments that fits the spatial structure of the data. Each plane segment is fully characterized by its location, size, and orientation. The main tunable parameter is the accuracy of the earthquake locations, which fixes the resolution, i.e., the residual variance of the fit. The resolution determines the number of fault segments needed to describe the earthquake catalog: the better the resolution, the finer the structure of the reconstructed fault segments. The algorithm successfully reconstructs the fault segments of synthetic earthquake catalogs. Applied to the real catalog consisting of a subset of the aftershock sequence of the 28 June 1992 Landers earthquake in southern California, the reconstructed plane segments fully agree with faults already known on geological maps or with blind faults that appear quite obvious in longer-term catalogs. Future improvements of the method are discussed, as well as its potential use in the multiscale study of the inner structure of fault zones.

  13. Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning

    NASA Astrophysics Data System (ADS)

    Schumacher, André; Haanpää, Harri

    We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with ns2 simulations.
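
    The MWDS subroutine above is analyzed against Chvátal's set-cover greedy, which for dominating sets means: node v "covers" its closed neighborhood N[v], and one repeatedly picks the node with the best weight-per-newly-dominated-node ratio. A centralized toy sketch (hypothetical 5-node graph and unit weights; this only illustrates the subroutine, not the paper's distributed primal-dual algorithm):

    ```python
    # Greedy minimum-weight dominating set in the spirit of Chvatal's
    # set-cover analysis: repeatedly pick the node minimizing
    # weight / (number of newly dominated nodes in its closed neighborhood).
    # Graph and weights are hypothetical.

    def greedy_mwds(adj, weight):
        undominated = set(adj)
        chosen = []
        while undominated:
            v = min(adj, key=lambda u: weight[u] /
                    max(len(({u} | set(adj[u])) & undominated), 1e-9))
            chosen.append(v)
            undominated -= {v} | set(adj[v])
        return chosen

    adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
    weight = {v: 1.0 for v in adj}
    print(greedy_mwds(adj, weight))  # picks the hub 0 first, then covers 4
    ```

    The fractional domatic partition then time-shares several such dominating sets so each sensor sleeps whenever some awake set dominates it.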

  14. Generalization of multifractal theory within quantum calculus

    NASA Astrophysics Data System (ADS)

    Olemskoi, A.; Shuda, I.; Borisyuk, V.

    2010-03-01

    On the basis of the deformed series in quantum calculus, we generalize the partition function and the mass exponent of a multifractal, as well as the average of a random variable distributed over a self-similar set. For the partition function, such expansion is shown to be determined by binomial-type combinations of the Tsallis entropies related to manifold deformations, while the mass exponent expansion generalizes the known relation τq=Dq(q-1). We find the equation for the set of averages related to ordinary, escort, and generalized probabilities in terms of the deformed expansion as well. Multifractals related to the Cantor binomial set, exchange currency series, and porous-surface condensates are considered as examples.
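
    The relation τq=Dq(q-1) recalled above can be checked numerically on the binomial Cantor measure, for which the partition function has the closed form Zq=(p^q+(1-p)^q)^n over intervals of size 2^-n. A short sketch (the weight p = 0.3 is an arbitrary illustrative choice):

    ```python
    # Numerical check of tau_q = D_q (q - 1) for the binomial Cantor
    # measure: at generation n each interval of size 2^-n carries weights
    # built from p and 1-p, so Z_q = (p^q + (1-p)^q)^n and
    # tau_q = -log2(p^q + (1-p)^q). The weight p = 0.3 is illustrative.

    from math import log2

    def tau(q, p=0.3):
        return -log2(p ** q + (1 - p) ** q)

    # Sanity checks: D_0 = 1 (the support fills the unit interval), so
    # tau_0 = -1; normalization of the measure forces tau_1 = 0.
    print(tau(0), tau(1), tau(2))
    ```

    The deformed q-calculus expansions in the paper generalize exactly this τq, recovering the formula above in the undeformed limit.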

  15. Geochemical Constraints on the Size of the Moon-Forming Giant Impact

    NASA Astrophysics Data System (ADS)

    Piet, H.; Badro, J.; Gillet, P.

    2018-05-01

    We use the partitioning of siderophile trace elements to model the geochemical influence of the Moon-forming giant impact on Earth’s mantle during core formation. We find the size of the impactor to be 15% of Earth mass or smaller.

  16. A Parameterization for the Triggering of Landscape Generated Moist Convection

    NASA Technical Reports Server (NTRS)

    Lynn, Barry H.; Tao, Wei-Kuo; Abramopoulos, Frank

    1998-01-01

A set of relatively high-resolution three-dimensional (3D) simulations was produced to investigate the triggering of moist convection by landscape-generated mesoscale circulations. The local accumulated rainfall varied monotonically (linearly) with the size of individual landscape patches, demonstrating the need to develop a trigger function that is sensitive to the size of individual patches. A new triggering function that includes the effect of landscape-generated mesoscale circulations over patches of different sizes consists of a parcel's perturbations in vertical velocity (ν₀), temperature (θ₀), and moisture (q₀). Each variable in the triggering function was also sensitive to soil moisture gradients, atmospheric initial conditions, and moist processes. The parcel's vertical velocity, temperature, and moisture perturbations were partitioned into mesoscale and turbulent components. Budget equations were derived for θ₀ and q₀. Of the many terms in this set of budget equations, the turbulent vertical flux of the mesoscale temperature and moisture contributed most to the triggering of moist convection, through the impact of these fluxes on the parcel's temperature and moisture profile. These fluxes needed to be parameterized to obtain θ₀ and q₀. The mesoscale vertical velocity also affected the profile of ν₀. We used similarity theory to parameterize these fluxes as well as the parcel's mesoscale vertical velocity.

  17. Body size mediated coexistence of consumers competing for resources in space

    USGS Publications Warehouse

    Basset, A.; Angelis, D.L.

    2007-01-01

Body size is a major phenotypic trait of individuals that commonly differentiates co-occurring species. We analyzed inter-specific competitive interactions between a large consumer and smaller competitors, whose energetics, selection and giving-up behaviour on identical resource patches scaled with individual body size. The aim was to investigate whether pure metabolic constraints on the patch behaviour of vagile species can determine coexistence conditions consistent with existing theoretical and experimental evidence. We used an individual-based, spatially explicit simulation model at a spatial scale defined by the home range of the large consumer, which was assumed to be parthenogenic and semelparous. Under exploitative conditions, competitive coexistence occurred in a range of body size ratios between 2 and 10. Asymmetrical competition, and the mechanism underlying asymmetry, determined by the scaling of energetics and patch behaviour with consumer body size, were the proximate determinants of inter-specific coexistence. The small consumer exploited patches more efficiently, but searched for profitable patches less effectively than the larger competitor. Therefore, body-size related constraints induced niche partitioning, allowing competitive coexistence within a set of conditions where the large consumer maintained control over the small consumer and resource dynamics. The model summarises and extends the existing evidence of species coexistence on a limiting resource, and provides a mechanistic explanation for decoding the size-abundance distribution patterns commonly observed at guild and community levels. © Oikos.

  18. Partitioning error components for accuracy-assessment of near-neighbor methods of imputation

    Treesearch

    Albert R. Stage; Nicholas L. Crookston

    2007-01-01

    Imputation is applied for two quite different purposes: to supply missing data to complete a data set for subsequent modeling analyses or to estimate subpopulation totals. Error properties of the imputed values have different effects in these two contexts. We partition errors of imputation derived from similar observation units as arising from three sources:...

  19. Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.

    ERIC Educational Resources Information Center

    Goetschel, Roy; Voxman, William

    1987-01-01

    Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
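
    The first of the two partitioning methods above can be sketched as: build a maximal weighted spanning forest with Kruskal's algorithm on descending weights, then drop forest edges below a threshold and read off connected components as the clusters. The edge weights below are hypothetical relation strengths:

    ```python
    # Sketch of threshold clustering on a weighted relation: maximal
    # (maximum-weight) spanning forest via Kruskal, then cut forest edges
    # below the threshold; surviving components are the clusters.

    def threshold_clusters(n, edges, threshold):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        # Kruskal on descending weights yields a maximal spanning forest
        forest = []
        for w, u, v in sorted(edges, reverse=True):
            if find(u) != find(v):
                parent[find(u)] = find(v)
                forest.append((w, u, v))
        # keep only forest edges at or above the threshold
        parent = list(range(n))
        for w, u, v in forest:
            if w >= threshold:
                parent[find(u)] = find(v)
        groups = {}
        for x in range(n):
            groups.setdefault(find(x), []).append(x)
        return sorted(groups.values())

    edges = [(0.9, 0, 1), (0.8, 1, 2), (0.2, 2, 3), (0.9, 3, 4)]
    print(threshold_clusters(5, edges, 0.5))  # [[0, 1, 2], [3, 4]]
    ```

    Sweeping the threshold from high to low reproduces the nested family of partitions that the threshold-selection problem chooses among.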

  20. Plasmid DNA partitioning and separation using poly(ethylene glycol)/poly(acrylate)/salt aqueous two-phase systems.

    PubMed

    Johansson, Hans-Olof; Matos, Tiago; Luz, Juliana S; Feitosa, Eloi; Oliveira, Carla C; Pessoa, Adalberto; Bülow, Leif; Tjerneld, Folke

    2012-04-13

Phase diagrams of poly(ethylene glycol)/polyacrylate/Na2SO4 systems have been investigated with respect to polymer size and pH. Plasmid DNA from Escherichia coli can, depending on pH and polymer molecular weight, be directed to a poly(ethylene glycol)-rich or to a polyacrylate-rich phase in an aqueous two-phase system formed by these polymers. Bovine serum albumin (BSA) and E. coli homogenate proteins can be directed opposite to the plasmid partitioning in these systems. Two bioseparation processes have been developed in which, in the final step, the pDNA is partitioned to a salt-rich phase, giving a total process yield of 60-70%. In one of them the pDNA is partitioned between the polyacrylate and PEG phases in order to remove proteins. In a more simplified process the plasmid is partitioned to a PEG phase and back-extracted into a Na2SO4-rich phase. The novel polyacrylate/PEG system allows a strong change of the partitioning between the phases with relatively small changes in composition or pH. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Multi-A(ge)nt Graph Patrolling and Partitioning

    NASA Astrophysics Data System (ADS)

    Elor, Y.; Bruckstein, A. M.

    2012-12-01

We introduce a novel multi-agent patrolling algorithm inspired by the behavior of gas-filled balloons. Very low-capability, ant-like agents are considered, with the task of patrolling an unknown area modeled as a graph. While executing the proposed algorithm, the agents dynamically partition the graph between them using simple local interactions, every agent assuming responsibility for patrolling its subgraph. Balanced graph partition is an emergent behavior due to the local interactions between the agents in the swarm. Extensive simulations on various graphs (environments) showed that the average time to reach a balanced partition is linear in the graph size. The simulations yielded a convincing argument for conjecturing that if the graph being patrolled contains a balanced partition, the agents will find it; however, we could not prove this. Nevertheless, we have proved that if a balanced partition is reached, the maximum time lag between two successive visits to any vertex using the proposed strategy is at most twice the optimal, so the patrol quality is at least half the optimal. In the case of weighted graphs the patrol quality is at least (1/2)(lmin/lmax) of the optimal, where lmax (lmin) is the longest (shortest) edge in the graph.

  2. Compressible fluids with Maxwell-type equations, the minimal coupling with electromagnetic field and the Stefan–Boltzmann law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendes, Albert C.R., E-mail: albert@fisica.ufjf.br; Takakura, Flavio I., E-mail: takakura@fisica.ufjf.br; Abreu, Everton M.C., E-mail: evertonabreu@ufrrj.br

    In this work we have obtained a higher-derivative Lagrangian for a charged fluid coupled to the electromagnetic field, and the Dirac constraint analysis was carried out. A set of first-class constraints fixed by a noncovariant gauge condition was obtained. The path integral formalism was used to obtain the partition function for the corresponding higher-derivative Hamiltonian, and the Faddeev–Popov ansatz was used to construct an effective Lagrangian. Through the partition function, a Stefan–Boltzmann type law was obtained. - Highlights: • Higher-derivative Lagrangian for a charged fluid. • Electromagnetic coupling and Dirac constraint analysis. • Partition function through the path integral formalism. • Stefan–Boltzmann-type law through the partition function.

  3. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criterion is met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are the random sampling of observations and the restricted set of input variables available at each split. Finally, I introduce R functions to perform model-based recursive partitioning, which incorporates recursive partitioning into conventional parametric model building.
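
    The split-selection step of recursive binary partitioning can be sketched as follows. The article works in R with conditional inference trees; this Gini-impurity search is the classical CART-style analogue of that step, not the permutation-test procedure the article uses, and the data are made up.

```python
# Minimal sketch of one recursive-binary-partitioning split: scan
# candidate thresholds on a variable and pick the one minimizing the
# weighted Gini impurity of the two child nodes.

def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = sum(labels) / n                 # fraction of class-1 labels
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2

def best_split(xs, ys):
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:        # degenerate split, skip
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if best is None or score < best[0]:
            best = (score, t)
    return best

xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]                  # perfectly separable at x = 3
print(best_split(xs, ys))                # -> (0.0, 3)
```

A full tree learner applies `best_split` recursively to each child until a stopping criterion (depth, node size, or significance test) is met.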

  4. Geochemical heterogeneity in a sand and gravel aquifer: Effect of sediment mineralogy and particle size on the sorption of chlorobenzenes

    USGS Publications Warehouse

    Barber, Larry B.; Thurman, E. Michael; Runnells, Donald D.

    1992-01-01

    The effect of particle size, mineralogy and sediment organic carbon (SOC) on sorption of tetrachlorobenzene and pentachlorobenzene was evaluated using batch-isotherm experiments on sediment particle-size and mineralogical fractions from a sand and gravel aquifer, Cape Cod, Massachusetts. Concentration of SOC and sorption of chlorobenzenes increase with decreasing particle size. For a given particle size, the magnetic fraction has a higher SOC content and sorption capacity than the bulk or non-magnetic fractions. Sorption appears to be controlled by the magnetic minerals, which comprise only 5–25% of the bulk sediment. Although SOC content of the bulk sediment is <0.1%, the observed sorption of chlorobenzenes is consistent with a partition mechanism and is adequately predicted by models relating sorption to the octanol/water partition coefficient of the solute and SOC content. A conceptual model based on preferential association of dissolved organic matter with positively-charged mineral surfaces is proposed to describe micro-scale, intergranular variability in sorption properties of the aquifer sediments.
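
    The linear partition model invoked above can be sketched numerically. The Karickhoff-type relation Koc ≈ 0.41·Kow used here is one commonly cited approximation, not necessarily the regression used in this study, and the inputs are illustrative.

```python
# Hedged sketch of the partition model: sorption coefficient Kd
# estimated from the sediment organic-carbon fraction (f_oc) and a
# Koc approximated from the octanol/water partition coefficient.

def k_d(log_kow, f_oc):
    k_oc = 0.41 * 10 ** log_kow      # approximate Koc from Kow
    return k_oc * f_oc               # linear partition model

# Tetrachlorobenzene-like solute (log Kow ~ 4.6) in a sediment with
# 0.1% organic carbon (the bulk-sediment SOC cited above):
print(round(k_d(4.6, 0.001), 1))     # -> 16.3
```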

  5. Conformal partition functions of critical percolation from D3 thermodynamic Bethe Ansatz equations

    NASA Astrophysics Data System (ADS)

    Morin-Duchesne, Alexi; Klümper, Andreas; Pearce, Paul A.

    2017-08-01

    Using the planar Temperley-Lieb algebra, critical bond percolation on the square lattice can be reformulated as a loop model. In this form, it is incorporated as LM(2,3) in the Yang-Baxter integrable family of logarithmic minimal models LM(p,p′). We consider this model of percolation in the presence of boundaries and with periodic boundary conditions. Inspired by Kuniba, Sakai and Suzuki, we rewrite the recently obtained infinite Y-system of functional equations. In this way, we obtain nonlinear integral equations in the form of a closed finite set of TBA equations described by a D3 Dynkin diagram. Following the methods of Klümper and Pearce, we solve the TBA equations for the conformal finite-size corrections. For the ground states of the standard modules on the strip, these agree with the known central charge c = 0 and conformal weights Δ_{1,s} for s ∈ Z_{≥1}, with Δ_{r,s} = ((3r−2s)² − 1)/24. For the periodic case, the finite-size corrections agree with the conformal weights Δ_{0,s}, Δ_{1,s} with s ∈ (1/2)Z_{≥0}. These are obtained analytically using Rogers dilogarithm identities. We incorporate all finite excitations by formulating empirical selection rules for the patterns of zeros of all the eigenvalues of the standard modules. We thus obtain the conformal partition functions on the cylinder and the modular invariant partition function (MIPF) on the torus. By applying q-binomial and q-Narayana identities, it is shown that our refined finitized characters on the strip agree with those of Pearce, Rasmussen and Zuber. For percolation on the torus, the MIPF is a non-diagonal sesquilinear form in affine u(1) characters given by the u(1) partition function Z_{2,3}(q) = Z^{Circ}_{2,3}(q). The u(1) operator content is N_{Δ,Δ̄} = 1 for Δ = Δ̄ = −1/24, 35/24 and N_{Δ,Δ̄} = 2 for Δ = Δ̄ = 1/8, 1/3, 5/8 and (Δ,Δ̄) = (0,1), (1,0). This result is compatible with the general conjecture of Pearce and Rasmussen, namely Z_{p,p′}(q) = Z^{Proj}_{p,p′}(q) + n_{p,p′} Z^{Min}_{p,p′}(q) with n_{p,p′} ∈ Z, where the minimal partition function is Z^{Min}_{2,3}(q) = 1 and the lattice derivation fixes n_{2,3} = −1.

  6. Gravitational Instabilities associated with volcanic clouds: new insights from experimental investigations

    NASA Astrophysics Data System (ADS)

    Scollo, Simona; Bonadonna, Costanza; Manzella, Irene

    2016-04-01

    Gravitational instabilities are often observed at the bottom of volcanic plumes and clouds, generating fingers that propagate downward and enhance the sedimentation of fine ash. Despite their potential influence on tephra dispersal and deposition, their dynamics are not completely understood, undermining the accuracy of volcanic ash transport and dispersal models. Here we present new laboratory experiments that investigate the effects of particle size, composition and concentration on finger dynamics and generation. The experimental set-up consists of a Plexiglas tank of 50 x 30.3 x 7.5 cm equipped with a removable barrier partitioning it into two separate layers. The lower layer is a solution of water and sugar and is therefore characterized by a higher density than the upper layer, which is filled with water and particles. The upper layer is either quiescent (unmixed experiments) or continually mixed using a rotary stirrer (mixed experiments). After removing the horizontal barrier that separates the two fluids, particles are illuminated with a 2 W Nd-YAG laser (RayPower 2000) and filmed with an HD camera (1920x1080 pixels). Images are analysed with the Dynamic Studio software (DANTEC), a tool for the acquisition and analysis of velocity and related properties of particles inside the fluids. Each particle that follows the flow and scatters light captured by the camera is analysed on the basis of velocity vectors. Experiments are carried out in order to evaluate the main features of fingers (number, width and speed) as a function of particle type, size and initial concentration. Particles include glass beads (GB) with diameters < 32 μm, 45-63 μm, and 63-90 μm and andesitic, rhyolitic, and basaltic volcanic ash with diameters < 32 μm, 45-63 μm, 63-90 μm, 90-125 μm, 125-180 μm and > 180 μm. Three initial particle concentrations in the upper layer were employed: 3 g/l, 4 g/l and 5 g/l.
    Results show that the number and the speed of fingers increase with particle concentration, and that the speed increases with particle size while being independent of particle type. Finally, the experiments show that the development of the instability leads to particle aggregation inside the fingers.

  7. Subcellular compartmentalization of Cd and Zn in two bivalves. I. Significance of metal-sensitive fractions (MSF) and biologically detoxified metal (BDM)

    USGS Publications Warehouse

    Wallace, W.G.; Lee, B.-G.; Luoma, S.N.

    2003-01-01

    Many aspects of metal accumulation in aquatic invertebrates (i.e. toxicity, tolerance and trophic transfer) can be understood by examining the subcellular partitioning of accumulated metal. In this paper, we use a compartmentalization approach to interpret the significance of metal, species and size dependence in the subcellular partitioning of Cd and Zn in the bivalves Macoma balthica and Potamocorbula amurensis. Of special interest is the compartmentalization of metal as metal-sensitive fractions (MSF) (i.e. organelles and heat-sensitive proteins, termed 'enzymes' hereafter) and biologically detoxified metal (BDM) (i.e. metallothioneins [MT] and metal-rich granules [MRG]). Clams from San Francisco Bay, CA, were exposed for 14 d to seawater (20‰ salinity) containing 3.5 μg l-1 Cd and 20.5 μg l-1 Zn, including 109Cd and 65Zn as radiotracers. Uptake was followed by 21 d of depuration. The subcellular partitioning of metal within clams was examined following exposure and loss. P. amurensis accumulated ≈22× more Cd and ≈2× more Zn than M. balthica. MT played an important role in the storage of Cd in P. amurensis, while organelles were the major site of Zn accumulation. In M. balthica, Cd and Zn partitioned similarly, although the pathway of detoxification was metal-specific (MRG for Cd; MRG and MT for Zn). Upon loss, M. balthica depurated ≈40% of Cd with Zn being retained; P. amurensis retained Cd and depurated Zn (≈40%). During efflux, Cd and Zn concentrations in the MSF compartment of both clams declined, with metal either being lost from the animal or being transferred to the BDM compartment. Subcellular compartmentalization was also size-dependent, with the importance of BDM increasing with clam size; MSF decreased accordingly. We hypothesize that progressive retention of metal as BDM (i.e. MRG) with age may lead to the size dependency of metal concentrations often observed in some populations of M. balthica.

  8. Ion-pair partition of quaternary ammonium drugs: the influence of counter ions of different lipophilicity, size, and flexibility.

    PubMed

    Takács-Novák, K; Szász, G

    1999-10-01

    The ion-pair partition of quaternary ammonium (QA) pharmacons with organic counter ions of different lipophilicity, size, shape and flexibility was studied to elucidate relationships between ion-pair formation and chemical structure. The apparent partition coefficient (P') of 4 QAs was measured in an octanol/pH 7.4 phosphate buffer system by the shake-flask method as a function of the molar excess of ten counter ions (Y), namely: mesylate (MES), acetate (AC), pyruvate (PYRU), nicotinate (NIC), hydrogenfumarate (HFUM), hydrogenmaleate (HMAL), p-toluenesulfonate (PTS), caproate (CPR), deoxycholate (DOC) and prostaglandin E1 anion (PGE1). Based on 118 highly precise log P' values (SD < 0.05), the intrinsic lipophilicity (without external counter ions) and the ion-pair partition of the QAs (with different counter ions) were characterized. A linear correlation was found between the log P' of the ion-pairs and the size of the counter ions described by the solvent accessible surface area (SASA). The lipophilicity-increasing effect of the counter ions was quantified and the following order was established: DOC ≈ PGE1 > CPR ≈ PTS > NIC ≈ HMAL > PYRU ≈ AC ≈ MES ≈ HFUM. Analysis of the lipophilicity/molar ratio (QA:Y) profiles revealed differences in ion-pair formation, attributed to differences in the flexibility/rigidity and size of both QA and Y. Since the largest lipophilicity enhancement (on average, 300×) was found for DOC and PGE1, and a considerable increase (on average, 40×) was observed for CPR and PTS, it was concluded that bile acids and prostaglandin anions may play a significant role in the ion-pair transport of quaternary ammonium drugs, and that caproic acid and p-toluenesulfonic acid may be useful salt-forming agents for improving the pharmacokinetics of hydrophilic drugs.
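
    The reported linear correlation between log P' and counter-ion SASA is an ordinary least-squares fit; a minimal sketch on made-up (SASA, log P') pairs, not the paper's data:

```python
# Least-squares line through (x, y) points, as used to relate the
# log P' of ion-pairs to counter-ion surface area (SASA).

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical SASA (A^2) vs log P' pairs, for illustration only:
sasa = [150.0, 250.0, 400.0, 600.0]
logp = [-2.0, -1.4, -0.5, 0.7]
slope, intercept = linear_fit(sasa, logp)
print(round(slope, 4), round(intercept, 2))   # -> 0.006 -2.9
```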

  9. New Linear Partitioning Models Based on Experimental Water: Supercritical CO 2 Partitioning Data of Selected Organic Compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burant, Aniela; Thompson, Christopher; Lowry, Gregory V.

    2016-05-17

    Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch reactor system with dual spectroscopic detectors: a near infrared spectrometer for measuring the organic analyte in the CO2 phase, and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published poly-parameter linear free energy relationship and to develop five new linear free energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models targeted a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use the vapor pressure and aqueous solubility of the organic compound at 25 °C and the CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound class models provide better estimates of partitioning behavior for compounds in that class than the model built for the entire dataset.
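
    The form of such a relationship can be sketched as a linear model in log(vapor pressure), log(aqueous solubility), and CO2 density. The coefficients below are placeholders chosen for illustration, not the fitted values from the paper.

```python
import math

# Sketch of a linear free energy relationship for water/sc-CO2
# partitioning: log K predicted from vapor pressure and aqueous
# solubility at 25 C plus CO2 density.  Coefficients are hypothetical.

A, B, C, D = 1.0, 0.55, -0.60, 1.8

def log_k_water_co2(p_vap_pa, sol_mol_l, rho_co2_g_ml):
    return (A + B * math.log10(p_vap_pa)
              + C * math.log10(sol_mol_l)
              + D * rho_co2_g_ml)

# A volatile, sparingly soluble compound partitions more strongly into
# the CO2 phase, and denser (more solvating) CO2 raises K further:
print(log_k_water_co2(1e3, 1e-2, 0.7) > log_k_water_co2(1e1, 1e-1, 0.3))   # -> True
```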

  10. Multi-scale modularity and motif distributional effect in metabolic networks.

    PubMed

    Gao, Shang; Chen, Alan; Rahmani, Ali; Zeng, Jia; Tan, Mehmet; Alhajj, Reda; Rokne, Jon; Demetrick, Douglas; Wei, Xiaohui

    2016-01-01

    Metabolism is a set of fundamental processes that play important roles in a plethora of biological and medical contexts. It is understood that the topological information of reconstructed metabolic networks, such as modular organization, has crucial implications on biological functions. Recent interpretations of modularity in network settings provide a view of multiple network partitions induced by different resolution parameters. Here we ask the question: How do multiple network partitions affect the organization of metabolic networks? Since network motifs are often interpreted as the super families of evolved units, we further investigate their impact under multiple network partitions and investigate how the distribution of network motifs influences the organization of metabolic networks. We studied Homo sapiens, Saccharomyces cerevisiae and Escherichia coli metabolic networks; we analyzed the relationship between different community structures and motif distribution patterns. Further, we quantified the degree to which motifs participate in the modular organization of metabolic networks.
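
    The resolution-parameterized modularity that induces these multiple partitions can be computed directly. The sketch below uses the standard Reichardt-Bornholdt form Q(γ) = Σ_c [L_c/m − γ(d_c/2m)²]; the paper's exact formulation may differ, and the graph is a toy example.

```python
# Multi-scale modularity: L_c = edges inside community c, d_c = total
# degree of c, m = total edges, gamma = resolution parameter.

def modularity(edges, partition, gamma=1.0):
    m = len(edges)
    comm = {v: c for c, nodes in enumerate(partition) for v in nodes}
    internal = [0] * len(partition)
    degree = [0] * len(partition)
    for u, v in edges:
        degree[comm[u]] += 1
        degree[comm[v]] += 1
        if comm[u] == comm[v]:
            internal[comm[u]] += 1
    return sum(lc / m - gamma * (dc / (2 * m)) ** 2
               for lc, dc in zip(internal, degree))

# Two triangles joined by a bridge; communities = the two triangles.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = [{0, 1, 2}, {3, 4, 5}]
for g in (0.5, 1.0, 2.0):
    print(g, round(modularity(edges, part, g), 3))   # Q falls as gamma grows
```

Scanning γ and recording where the optimal partition changes is what yields the "multiple network partitions" analyzed in the record above.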

  11. Data Clustering

    NASA Astrophysics Data System (ADS)

    Wagstaff, Kiri L.

    2012-03-01

    On obtaining a new data set, the researcher is immediately faced with the challenge of obtaining a high-level understanding from the observations. What does a typical item look like? What are the dominant trends? How many distinct groups are included in the data set, and how is each one characterized? Which observable values are common, and which rarely occur? Which items stand out as anomalies or outliers from the rest of the data? This challenge is exacerbated by the steady growth in data set size [11] as new instruments push into new frontiers of parameter space, via improvements in temporal, spatial, and spectral resolution, or by the desire to "fuse" observations from different modalities and instruments into a larger-picture understanding of the same underlying phenomenon. Data clustering algorithms provide a variety of solutions for this task. They can generate summaries, locate outliers, compress data, identify dense or sparse regions of feature space, and build data models. It is useful to note up front that "clusters" in this context refer to groups of items within some descriptive feature space, not (necessarily) to "galaxy clusters" which are dense regions in physical space. The goal of this chapter is to survey a variety of data clustering methods, with an eye toward their applicability to astronomical data analysis. In addition to improving the individual researcher’s understanding of a given data set, clustering has led directly to scientific advances, such as the discovery of new subclasses of stars [14] and gamma-ray bursts (GRBs) [38]. All clustering algorithms seek to identify groups within a data set that reflect some observed, quantifiable structure. Clustering is traditionally an unsupervised approach to data analysis, in the sense that it operates without any direct guidance about which items should be assigned to which clusters. 
    There has been a recent trend in the clustering literature toward supporting semisupervised or constrained clustering, in which some partial information about item assignments or other components of the resulting output is already known and must be accommodated by the solution. Some algorithms seek a partition of the data set into distinct clusters, while others build a hierarchy of nested clusters that can capture taxonomic relationships. Some produce a single optimal solution, while others construct a probabilistic model of cluster membership. More formally, clustering algorithms operate on a data set X composed of items represented by one or more features (dimensions). These could include physical location, such as right ascension and declination, as well as other properties such as brightness, color, temporal change, size, texture, and so on. Let D be the number of dimensions used to represent each item, x_i ∈ R^D. The clustering goal is to produce an organization P of the items in X that optimizes an objective function f : P → R, which quantifies the quality of solution P. Often f is defined so as to maximize similarity within a cluster and minimize similarity between clusters. To that end, many algorithms make use of a measure d : X × X → R of the distance between two items. A partitioning algorithm produces a set of clusters P = {c_1, . . . , c_k} such that the clusters are nonoverlapping (c_i ∩ c_j = ∅ for i ≠ j) subsets of the data set (∪_i c_i = X). Hierarchical algorithms produce a series of partitions P = {p_1, . . . , p_n}. For a complete hierarchy, the number of partitions equals n, the number of items in the data set; the top partition is a single cluster containing all items, and the bottom partition contains n clusters, each containing a single item. For model-based clustering, each cluster c_j is represented by a model m_j, such as the cluster center or a Gaussian distribution.
    The wide array of available clustering algorithms may seem bewildering, and covering all of them is beyond the scope of this chapter. Choosing among them for a particular application involves considerations of the kind of data being analyzed, algorithm runtime efficiency, and how much prior knowledge is available about the problem domain, which can dictate the nature of clusters sought. Fundamentally, the clustering method and its representation of clusters carry with them a definition of what a cluster is, and it is important that this be aligned with the analysis goals for the problem at hand. In this chapter, I emphasize this point by identifying for each algorithm the cluster representation as a model, m_j , even for algorithms that are not typically thought of as creating a “model.” This chapter surveys a basic collection of clustering methods useful to any practitioner who is interested in applying clustering to a new data set. The algorithms include k-means (Section 25.2), EM (Section 25.3), agglomerative (Section 25.4), and spectral (Section 25.5) clustering, with side mentions of variants such as kernel k-means and divisive clustering. The chapter also discusses each algorithm’s strengths and limitations and provides pointers to additional in-depth reading for each subject. Section 25.6 discusses methods for incorporating domain knowledge into the clustering process. This chapter concludes with a brief survey of interesting applications of clustering methods to astronomy data (Section 25.7). The chapter begins with k-means because it is both generally accessible and so widely used that understanding it can be considered a necessary prerequisite for further work in the field. EM can be viewed as a more sophisticated version of k-means that uses a generative model for each cluster and probabilistic item assignments.
    Agglomerative clustering is the most basic form of hierarchical clustering and provides a basis for further exploration of algorithms in that vein. Spectral clustering permits a departure from feature-vector-based clustering and can operate on data sets instead represented as affinity, or similarity matrices—cases in which only pairwise information is known. The list of algorithms covered in this chapter is representative of those most commonly in use, but it is by no means comprehensive. There is an extensive collection of existing books on clustering that provide additional background and depth. Three early books that remain useful today are Anderberg’s Cluster Analysis for Applications [3], Hartigan’s Clustering Algorithms [25], and Gordon’s Classification [22]. The latter covers basics on similarity measures, partitioning and hierarchical algorithms, fuzzy clustering, overlapping clustering, conceptual clustering, validation methods, and visualization or data reduction techniques such as principal components analysis (PCA), multidimensional scaling, and self-organizing maps. More recently, Jain et al. provided a useful and informative survey [27] of a variety of different clustering algorithms, including those mentioned here as well as fuzzy, graph-theoretic, and evolutionary clustering. Everitt’s Cluster Analysis [19] provides a modern overview of algorithms, similarity measures, and evaluation methods.
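
    In the chapter's notation, k-means represents each cluster c_j by a model m_j that is simply its center, and Lloyd's iterations locally minimize the within-cluster sum of squared distances f. A minimal 2-D sketch on made-up points:

```python
import random

# Minimal k-means (Lloyd's algorithm): assign each point to the
# nearest center, recompute centers as cluster means, repeat.

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial models m_j
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assignment step
            j = min(range(k), key=lambda j: (p[0]-centers[j][0])**2 +
                                            (p[1]-centers[j][1])**2)
            clusters[j].append(p)
        new = [tuple(sum(c)/len(cl) for c in zip(*cl)) if cl else centers[j]
               for j, cl in enumerate(clusters)]
        if new == centers:                   # converged
            break
        centers = new
    return centers, clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))      # -> [3, 3]
```

The two well-separated groups are recovered regardless of which points seed the centers; on harder data, k-means is sensitive to initialization and is typically restarted several times.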

  12. Spectral partitioning in equitable graphs.

    PubMed

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
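
    The spectral approach underlying this analysis can be illustrated concretely (this is the generic sign-of-Fiedler-vector cut, not the paper's ensemble calculation): split a graph by the signs of the eigenvector for the Laplacian's second-smallest eigenvalue, computed here with deflated power iteration in pure Python.

```python
# Spectral bisection sketch: power-iterate B = cI - L (c chosen so B
# is positive semidefinite) while projecting out the constant vector,
# which converges to the Fiedler vector; its signs give the cut.

def fiedler_partition(n, edges, iters=2000):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    deg = [len(a) for a in adj]
    c = 2 * max(deg)                 # L's eigenvalues lie in [0, c]
    x = [float(i + 1) for i in range(n)]
    for _ in range(iters):
        mean = sum(x) / n            # deflate the constant eigenvector
        x = [xi - mean for xi in x]
        # y = B x = (c - deg_i) x_i + sum of neighbor values
        y = [(c - deg[i]) * x[i] + sum(x[j] for j in adj[i])
             for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return [i for i in range(n) if x[i] >= 0]

# Two triangles joined by a single edge: the sign cut recovers them.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
side = fiedler_partition(6, edges)
print(sorted(side))                  # -> one of the two triangles
```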

  14. Copula-based analysis of rhythm

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. Lanfredi

    2016-06-01

    In this paper we establish stochastic profiles of the rhythm of three languages: English, Japanese and Spanish. We model the increase or decrease of the acoustical energy, collected into three bands coming from the acoustic signal. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain, and in this case the size of the database is not large enough for a consistent estimation of the model. We therefore apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the three marginal processes, one for each band of energy, and the partition coming from the multivariate Markov chain. All the partitions are then linked using a copula in order to estimate the transition probabilities.

  15. Size distribution of rare earth elements in coal ash

    USGS Publications Warehouse

    Scott, Clinton T.; Deonarine, Amrika; Kolker, Allan; Adams, Monique; Holland, James F.

    2015-01-01

    Rare earth elements (REEs) are utilized in various applications that are vital to the automotive, petrochemical, medical, and information technology industries. As world demand for REEs increases, critical shortages are expected. Due to the retention of REEs during coal combustion, coal fly ash is increasingly considered a potential resource. Previous studies have demonstrated that coal fly ash is variably enriched in REEs relative to feed coal (e.g., Seredin and Dai, 2012) and that enrichment increases with decreasing size fraction (Blissett et al., 2014). In order to further explore the REE resource potential of coal ash, and to determine the partitioning behavior of REEs as a function of grain size, we studied whole coal and fly-ash size fractions collected from three U.S. commercial-scale coal-fired generating stations burning Appalachian or Powder River Basin coal. Whole fly ash was separated into <5 μm, 5-10 μm, and 10-100 μm particle-size fractions by mechanical shaking using trace-metal clean procedures. In these samples REE enrichment in whole fly ash ranges from 5.6 to 18.5 times that of the feed coals. Partitioning results for size separates relative to whole coal and whole fly ash will also be reported.

  16. Surveillance system and method having parameter estimation and operating mode partitioning

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2003-01-01

    A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
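
    The data flow described above can be sketched in miniature: partition training records by operating mode, train one submodel per partition (here just a per-signal mean, standing in for whatever model the patent trains), and select the submodel matching the asset's current mode at estimation time. All names and data are illustrative, not from the patent.

```python
from statistics import mean

# Toy sketch of mode-partitioned surveillance modeling.

def train(records):
    """records: list of (mode, signal_values).  Returns one submodel
    (per-signal mean vector) per operating mode."""
    by_mode = {}
    for mode, values in records:
        by_mode.setdefault(mode, []).append(values)
    return {mode: [mean(col) for col in zip(*rows)]
            for mode, rows in by_mode.items()}

def estimate(model, mode):
    return model[mode]          # select the submodel by operating mode

training = [("idle", [1.0, 10.0]), ("idle", [1.5, 10.5]),
            ("load", [5.0, 50.0]), ("load", [5.5, 49.5])]
model = train(training)
print(estimate(model, "idle"))   # -> [1.25, 10.25]
```

Surveillance then compares the estimated signal values against the observed ones; a persistent residual flags a potential fault.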

  17. Predatory birds and ants partition caterpillar prey by body size and diet breadth.

    PubMed

    Singer, Michael S; Clark, Robert E; Lichter-Marck, Issac H; Johnson, Emily R; Mooney, Kailen A

    2017-10-01

    The effects of predator assemblages on herbivores are predicted to depend critically on predator-predator interactions and the extent to which predators partition prey resources. The role of prey heterogeneity in generating such multiple predator effects has received limited attention. Vertebrate and arthropod insectivores constitute two co-dominant predatory taxa in many ecosystems, and the emergent properties of their joint effects on insect herbivores inform theory on multiple predator effects as well as biological control of insect herbivores. Here we use a large-scale factorial manipulation to assess the extent to which birds and ants engage in antagonistic predator-predator interactions and the consequences of heterogeneity in herbivore body size and diet breadth (i.e. the diversity of host plants used) for prey partitioning. We excluded birds and reduced ant density (by 60%) in the canopies of eight northeastern USA deciduous tree species during two consecutive years and measured the community composition and traits of lepidopteran larvae (caterpillars). Birds did not affect ant density, implying limited intraguild predation between these taxa in this system. Birds preyed selectively upon large-bodied caterpillars (reducing mean caterpillar length by 12%) and ants preyed selectively upon small-bodied caterpillars (increasing mean caterpillar length by 6%). Birds and ants also partitioned caterpillar prey by diet breadth. Birds reduced the frequency of dietary generalist caterpillars by 24%, while ants had no effect. In contrast, ants reduced the frequency of dietary specialists by 20%, while birds had no effect; however, these effects were non-additive: under bird exclusion, ants had no detectable effect, while in the presence of birds, they reduced the frequency of specialists by 40%.
    As a likely result of prey partitioning by body size and diet breadth, the combined effects of birds and ants on total caterpillar density were additive, with birds and ants reducing caterpillar density by 44% and 20%, respectively. These results show evidence for the role of prey heterogeneity in driving functional complementarity among predators and enhanced top-down control. Heterogeneity in herbivore body size and diet breadth, as well as other prey traits, may represent key predictors of the strength of top-down control by predator communities. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.

  18. Performance Comparison of a Set of Periodic and Non-Periodic Tridiagonal Solvers on SP2 and Paragon Parallel Computers

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Moitra, Stuti

    1996-01-01

    Various tridiagonal solvers have been proposed in recent years for different parallel platforms. In this paper, the performance of three tridiagonal solvers, namely the parallel partition LU algorithm, the parallel diagonal dominant algorithm, and the reduced diagonal dominant algorithm, is studied. These algorithms are designed for distributed-memory machines and are tested on Intel Paragon and IBM SP2 machines. Measured results are reported in terms of execution time and speedup. Analytical studies are conducted for different communication topologies and for different tridiagonal systems. The measured results match the analytical results closely. In addition to addressing implementation issues, performance considerations such as problem sizes and models of speedup are also discussed.
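    All three algorithms split the system into per-processor blocks and reduce each block with a serial sweep. As a minimal illustration (not the paper's parallel implementations), the following is the classic serial Thomas algorithm that partition-based solvers apply to each local block before solving a small reduced interface system:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system Ax = d.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), all length n.
    This serial sweep is the local kernel that partition-based
    parallel solvers run independently on each block.
    """
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

    For diagonally dominant systems, as assumed by the parallel diagonal dominant algorithm, this sweep is numerically stable without pivoting.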

  19. Size distribution and clothing-air partitioning of polycyclic aromatic hydrocarbons generated by barbecue.

    PubMed

    Lao, Jia-Yong; Wu, Chen-Chou; Bao, Lian-Jun; Liu, Liang-Ying; Shi, Lei; Zeng, Eddy Y

    2018-10-15

    Barbecue (BBQ) is one of the most popular charcoal cooking activities worldwide and produces abundant polycyclic aromatic hydrocarbons (PAHs) and particulate matter. Size distribution and clothing-air partitioning of particle-bound PAHs are significant for assessing potential health hazards to humans due to exposure to BBQ fumes, but have not been examined adequately. To address this issue, particle and gaseous samples were collected at 2-m and 10-m distances from a cluster of four BBQ stoves. Personal samplers and cotton clothes were carried by volunteers sitting near the BBQ stoves. Particle-bound PAHs (especially 4-6 rings) derived from BBQ fumes were mostly affiliated with fine particles in the size range of 0.18-1.8 μm. High molecular-weight PAHs were mostly unimodal, peaking in fine particles, and consequently had small geometric mean diameters and standard deviations. Source diagnostics indicated that particle-bound PAHs in BBQ fumes were generated primarily by combustion of charcoal, fat content in food, and oil. The influences of BBQ fumes on the occurrence of particle-bound PAHs decreased with increasing distance from the BBQ stoves, due to increased impacts of ambient sources, especially petrogenic sources, and to a lesser extent wind speed and direction. Octanol-air and clothing-air partition coefficients of PAHs obtained from personal air samples were significantly correlated with each other. High molecular-weight PAHs had higher area-normalized clothing-air partition coefficients in cotton clothes, i.e., cotton fabrics may be a significant reservoir of high molecular-weight PAHs. Particle-bound PAHs from barbecue fumes are generated largely from charcoal combustion and food-charred emissions and are mainly affiliated with fine particles. Copyright © 2018. Published by Elsevier B.V.

  20. Fast depth decision for HEVC inter prediction based on spatial and temporal correlation

    NASA Astrophysics Data System (ADS)

    Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi

    2016-07-01

    High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance compression accuracy, partition sizes in HEVC range from 4x4 to 64x64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation. Spatial correlation utilizes the coding tree unit (CTU) splitting information, and temporal correlation utilizes the CTU referenced by the motion vector predictor in inter prediction, to determine the maximum depth of each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.

  1. COMPARISON OF PARTICLE SIZE DISTRIBUTIONS AND ELEMENTAL PARTITIONING FROM THE COMBUSTION OF PULVERIZED COAL AND RESIDUAL FUEL OIL

    EPA Science Inventory

    The paper gives results of experimental efforts in which three coals and a residual fuel oil were combusted in three different systems simulating process and utility boilers. Particle size distributions (PSDs) were determined using atmospheric and low-pressure impaction, electr...

  2. Understanding the I/O Performance Gap Between Cori KNL and Haswell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jialin; Koziol, Quincey; Tang, Houjun

    2017-05-01

    The Cori system at NERSC has two compute partitions with different CPU architectures: a 2,004-node Haswell partition and a 9,688-node KNL partition, which ranked as the 5th most powerful supercomputer on the November 2016 Top 500 list. The compute partitions share a common storage configuration, and understanding the I/O performance gap between them is important not only to NERSC/LBNL users and other national labs, but also to the relevant hardware vendors and software developers. In this paper, we have comprehensively analyzed single-core and single-node I/O performance on the Haswell and KNL partitions, and have discovered the major bottlenecks, which include CPU frequencies and memory copy performance. We have also extended our performance tests to multi-node I/O and revealed the I/O cost differences caused by network latency, buffer size, and communication cost. Overall, we have developed a strong understanding of the I/O gap between Haswell and KNL nodes, and the lessons learned from this exploration will guide us in designing optimal I/O solutions in the many-core era.

  3. Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudin, Pablo, E-mail: baudin.pablo@gmail.com; qLEAP – Center for Theoretical Chemistry, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C; Marín, José Sánchez

    2014-03-14

    A new implementation of the approximate coupled cluster singles and doubles CC2 linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioned form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculations of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated, as well as the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform a complete basis set extrapolation study of the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free base porphin are also presented.

  4. Experimental investigation of As, Sb and Cs behavior during olivine serpentinization in hydrothermal alkaline systems

    NASA Astrophysics Data System (ADS)

    Lafay, Romain; Montes-Hernandez, German; Janots, Emilie; Munoz, Manuel; Auzende, Anne Line; Gehin, Antoine; Chiriac, Rodica; Proux, Olivier

    2016-04-01

    While Fluid-Mobile Elements (FMEs) such as B, Sb, Li, As or Cs are particularly concentrated in serpentinites, data on FME fluid-serpentine partitioning, distribution, and sequestration mechanisms are missing. In the present experimental study, the behavior of Sb, As and Cs during San Carlos olivine serpentinization was investigated using accurate mineralogical, geochemical, and spectroscopic characterization. Static-batch experiments were conducted at 200 °C, under saturated vapor pressure (≈1.6 MPa), for initial olivine grain sizes of <30 μm (As), 30-56 μm (As, Cs, Sb) and 56-150 μm (Cs), and for periods of between 3 and 90 days. A high-hydroxyl alkaline fluid enriched with 200 mg L-1 of a single FME was used, with a fluid/solid weight ratio of 15. Under these conditions, olivine is favorably replaced by a mixture of chrysotile, polygonal serpentine and brucite. The As, Cs or Sb content of the reaction products was determined as a function of reaction advancement for the different initial olivine grain sizes investigated. The results confirm that serpentinization products have a high FME uptake capacity, with the partition coefficients increasing in the order Dp/fl(Cs) = 1.5-1.6 < Dp/fl(As) = 3.5-4.5 < Dp/fl(Sb) = 28 after complete reaction of the 30-56 μm grain-sized olivine. The sequestration pathways of the three elements are, however, substantially different. While the As partition coefficient remains constant throughout the serpentinization reaction, the Cs partition coefficient decreases abruptly in the first stages of the reaction to reach a constant value after the reaction is 40-60% complete. Both As and Cs partitioning appear to decrease with increasing initial olivine grain size, but there is no significant difference in the partition coefficient between the 30-56 and 56-150 μm grain sizes after complete serpentinization. 
X-ray absorption spectroscopy (XAS) measurements combined with X-ray chemical measurements reveal that the As(V) is mainly adsorbed onto the serpentinization products, especially brucite. In contrast, mineralogical characterization combined with XAS spectroscopy reveal redox sensitivity for Sb sequestration within serpentine products, depending on the progress of the reaction. When serpentinization is <50%, initial Sb(III) is oxidized into Sb(V) and substantially adsorbed onto serpentine. For higher degrees of reaction, a decrease in Sb sequestration by serpentine products is observed and is attributed to a reduction of Sb(V) into Sb(III). This stage is characterized by the precipitation of Sb-Ni-rich phases and a lower bulk partitioning coefficient compared to that of the serpentine and brucite assemblage. Antimony reduction appears linked to water reduction accompanying the bulk iron oxidation, as half the initial Fe(II) is oxidized into Fe(III) and incorporated into the serpentine products once the reaction is over. The reduction of Sb implies a decrease of its solubility, but the type of secondary Sb-rich phases identified here might not be representative of natural systems where Sb concentrations are lower. These results bring new insights into the uptake of FME by sorption on serpentine products that may form in hydrothermal environments at low temperatures. FME sequestration here appears to be sensitive to various physicochemical parameters and more particularly to redox conditions that appear to play a preponderant role in the concentrations and mechanism of sequestration of redox-sensitive elements.

  5. Cylindric partitions, W_r characters and the Andrews-Gordon-Bressoud identities

    NASA Astrophysics Data System (ADS)

    Foda, O.; Welsh, T. A.

    2016-04-01

    We study the Andrews-Gordon-Bressoud (AGB) generalisations of the Rogers-Ramanujan q-series identities in the context of cylindric partitions. We recall the definition of r-cylindric partitions, and provide a simple proof of Borodin's product expression for their generating functions, which can be regarded as a limiting case of an unpublished proof by Krattenthaler. We also recall the relationships between the r-cylindric partition generating functions, the principal characters of the affine sl_r algebras, the M_r^{r,r+d} minimal model characters of the W_r algebras, and the r-string abaci generating functions, providing simple proofs for each. We then set r = 2, and use two-cylindric partitions to re-derive the AGB identities as follows. Firstly, we use Borodin's product expression for the generating functions of the two-cylindric partitions with infinitely long parts to obtain the product sides of the AGB identities, times a factor (q; q)_∞^{-1}, which is the generating function of ordinary partitions. Next, we obtain a bijection from the two-cylindric partitions, via two-string abaci, into decorated versions of Bressoud's restricted lattice paths. Extending Bressoud's method of transforming between restricted paths that obey different restrictions, we obtain sum expressions with manifestly non-negative coefficients for the generating functions of the two-cylindric partitions, which contain a factor (q; q)_∞^{-1}. Equating the product and sum expressions of the same two-cylindric partitions, and cancelling a factor of (q; q)_∞^{-1} on each side, we obtain the AGB identities.
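    The AGB identities generalise the Rogers-Ramanujan identities recalled above. As an illustrative sanity check (not part of the paper's bijective machinery), the first Rogers-Ramanujan identity can be verified coefficient-by-coefficient on truncated q-series:

```python
# Check the first Rogers-Ramanujan identity up to q^29:
#   sum_{n>=0} q^{n^2} / ((1-q)(1-q^2)...(1-q^n))
#     = prod_{k>=0} 1 / ((1-q^{5k+1})(1-q^{5k+4}))
N = 30

def mul(p, q):
    """Product of two power series truncated at order N."""
    r = [0] * N
    for i, ai in enumerate(p):
        if ai:
            for j, bj in enumerate(q[:N - i]):
                r[i + j] += ai * bj
    return r

def geom(k):
    """Series of 1/(1-q^k) truncated at order N."""
    r = [0] * N
    for i in range(0, N, k):
        r[i] = 1
    return r

lhs = [0] * N
term = [1] + [0] * (N - 1)          # running value of 1/(q;q)_n
n = 0
while n * n < N:
    if n > 0:
        term = mul(term, geom(n))   # divide by (1 - q^n)
    for i, ai in enumerate(term[:N - n * n]):
        lhs[i + n * n] += ai        # add the q^{n^2} * term contribution
    n += 1

rhs = [1] + [0] * (N - 1)
for k in range(N):
    for part in (5 * k + 1, 5 * k + 4):
        if part < N:
            rhs = mul(rhs, geom(part))

assert lhs == rhs                   # coefficients agree through q^29
```

    Both sides count partitions of n into parts differing by at least 2 (sum side) and into parts congruent to 1 or 4 mod 5 (product side).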

  6. Common y-intercept and single compound regressions of gas-particle partitioning data vs 1/T

    NASA Astrophysics Data System (ADS)

    Pankow, James F.

    Confidence intervals are placed around the log Kp vs 1/T correlation equations obtained using simple linear regressions (SLR) with the gas-particle partitioning data set of Yamasaki et al. [(1982) Env. Sci. Technol. 16, 189-194]. The compounds and groups of compounds studied include the polycyclic aromatic hydrocarbons phenanthrene + anthracene, methylphenanthrene + methylanthracene, fluoranthene, pyrene, benzo[a]fluorene + benzo[b]fluorene, chrysene + benz[a]anthracene + triphenylene, benzo[b]fluoranthene + benzo[k]fluoranthene, and benzo[a]pyrene + benzo[e]pyrene. For any given compound, at equilibrium, the partition coefficient Kp equals (F/TSP)/A, where F is the particulate-matter-associated concentration (ng m-3), A is the gas-phase concentration (ng m-3), and TSP is the concentration of particulate matter (μg m-3). At temperatures more than 10°C from the mean sampling temperature of 17°C, the confidence intervals are quite wide. Since theory predicts that similar compounds sorbing on the same particulate matter should possess very similar y-intercepts, the data set was also fitted using a special common y-intercept regression (CYIR). For most of the compounds, the CYIR equations fell inside the SLR 95% confidence intervals. The CYIR y-intercept value is -18.48, which is reasonably close to the type of value that can be predicted for PAH compounds. The set of CYIR regression equations is probably more reliable than the set of SLR equations. For example, the CYIR-derived desorption enthalpies are much more highly correlated with vaporization enthalpies than are the SLR-derived desorption enthalpies. It is recommended that the CYIR approach be considered whenever analysing temperature-dependent gas-particle partitioning data.
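    The SLR step described above is straightforward to reproduce. A minimal sketch with purely hypothetical numbers (these are not Yamasaki et al.'s measurements):

```python
import numpy as np

# Hypothetical illustration only -- not Yamasaki et al.'s data.
T   = np.array([268.0, 278.0, 290.0, 300.0])   # sampling temperature, K
F   = np.array([12.0, 8.0, 3.0, 1.2])          # particulate-associated conc., ng m-3
A   = np.array([5.0, 15.0, 60.0, 180.0])       # gas-phase conc., ng m-3
TSP = np.array([80.0, 75.0, 90.0, 85.0])       # particulate matter conc., ug m-3

Kp = (F / TSP) / A                              # partition coefficient, m3 ug-1
# SLR of log Kp against 1/T:  log Kp = slope * (1/T) + intercept
slope, intercept = np.polyfit(1.0 / T, np.log10(Kp), 1)
```

    Sorption being exothermic, Kp grows as T falls, so the fitted slope is positive and the intercept negative; the CYIR approach additionally constrains that intercept to be shared across similar compounds.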

  7. What are the structural features that drive partitioning of proteins in aqueous two-phase systems?

    PubMed

    Wu, Zhonghua; Hu, Gang; Wang, Kui; Zaslavsky, Boris Yu; Kurgan, Lukasz; Uversky, Vladimir N

    2017-01-01

    Protein partitioning in aqueous two-phase systems (ATPSs) represents a convenient, inexpensive, and easy-to-scale-up protein separation technique. Since the partition behavior of a protein depends dramatically on the ATPS composition, it would be highly beneficial to have reliable means for (even qualitative) prediction of the partitioning of a target protein under different conditions. Our aim was to understand which structural features of proteins contribute to the partitioning of a query protein in a given ATPS. We undertook a systematic empirical analysis of the relations between 57 numerical structural descriptors, derived from the corresponding amino acid sequences and crystal structures of 10 well-characterized proteins, and the partition behavior of these proteins in 29 different ATPSs. This analysis revealed that just a few structural characteristics of proteins can accurately determine the behavior of these proteins in a given ATPS. However, the partition behavior of proteins in different ATPSs relies on different structural features. In other words, we could not find a unique set of protein structural features derived from their crystal structures that could be used to describe the partition behavior of all proteins in all ATPSs analyzed in this study. We likely need to gain better insight into the relationships between protein-solvent interactions and the peculiarities of protein structure, in particular given the limitations of the crystal structures used here, to be able to construct a model that accurately predicts protein partition behavior across all ATPSs. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Framework for making better predictions by directly estimating variables' predictivity.

    PubMed

    Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa

    2016-12-13

    We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic's predictive performance on sample data. We conjecture that using partition retention and the I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired.

  9. Framework for making better predictions by directly estimating variables’ predictivity

    PubMed Central

    Chernoff, Herman; Lo, Shaw-Hwa

    2016-01-01

    We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic’s predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired. PMID:27911830

  10. The threshold bootstrap clustering: a new approach to find families or transmission clusters within molecular quasispecies.

    PubMed

    Prosperi, Mattia C F; De Luca, Andrea; Di Giambenedetto, Simona; Bracciale, Laura; Fabbiani, Massimiliano; Cauda, Roberto; Salemi, Marco

    2010-10-25

    Phylogenetic methods produce hierarchies of molecular species, inferring knowledge about taxonomy and evolution. However, there is not yet a consensus methodology that provides a crisp partition of taxa, which is desirable when considering the problem of intra-/inter-patient quasispecies classification or infection transmission event identification. We introduce threshold bootstrap clustering (TBC), a new methodology for partitioning molecular sequences that does not require estimation of a phylogenetic tree. TBC is an incremental partition algorithm, inspired by the stochastic Chinese restaurant process, that takes advantage of resampling techniques and models of sequence evolution. TBC uses as input a multiple alignment of molecular sequences, and its output is a crisp partition of the taxa into an automatically determined number of clusters. By varying initial conditions, the algorithm can produce different partitions. We describe a procedure that selects a prime partition among a set of candidates and calculates a measure of cluster reliability. TBC was successfully tested for the identification of human immunodeficiency virus type 1 and hepatitis C virus subtypes, and compared with previously established methodologies. It was also evaluated on the problem of HIV-1 intra-patient quasispecies clustering, and for transmission cluster identification, using a set of sequences from patients with known transmission event histories. TBC has been shown to be effective for the subtyping of HIV and HCV, and for identifying intra-patient quasispecies. To some extent, the algorithm was also able to infer clusters corresponding to events of infection transmission. The computational complexity of TBC is quadratic in the number of taxa, lower than that of other established methods; in addition, TBC has been enhanced with a measure of cluster reliability. TBC can be useful for characterising molecular quasispecies in a broad context.
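    As a hedged sketch of just the thresholding idea (not the authors' full TBC, which adds bootstrap resampling, the Chinese-restaurant-style incremental assignment, and models of sequence evolution), sequences can be grouped by single-linkage clustering under a distance cutoff using union-find:

```python
def threshold_clusters(dist, cutoff):
    """Group items whose pairwise distance falls below `cutoff`.

    dist: symmetric distance matrix (list of lists).
    Returns a list of clusters, each a set of item indices.
    Single linkage implemented via union-find.
    """
    n = len(dist)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dist[i][j] < cutoff:
                parent[find(i)] = find(j)   # merge the two clusters

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())
```

    The quadratic double loop mirrors the quadratic-in-taxa complexity noted in the abstract.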

  11. Heavy metal partitioning of suspended particulate matter-water and sediment-water in the Yangtze Estuary.

    PubMed

    Feng, Chenghong; Guo, Xiaoyu; Yin, Su; Tian, Chenhao; Li, Yangyang; Shen, Zhenyao

    2017-10-01

    The partitioning of ten heavy metals (As, Cd, Co, Cr, Cu, Hg, Ni, Pb, Sb, and Zn) between the water, suspended particulate matter (SPM), and sediments in seven channel sections during three hydrologic seasons in the Yangtze Estuary was comprehensively investigated. Special attention was paid to the role of tides, influential factors (concentrations of SPM and dissolved organic carbon, and particle size), and heavy metal speciation. The SPM-water and sediment-water partition coefficients (Kp) of the heavy metals exhibited similar changes along the channel sections, though the former were larger throughout the estuary. Because of the higher salinity, the Kp values of most of the metals were higher in the north branch than in the south branch. The Kp values of Cd, Co, and As generally decreased from the wet season to the dry season. Both the diagonal line method and paired-samples t-tests showed that no specific phase transfer of heavy metals occurred during the flood and ebb tides, but the sediment-water Kp was more concentrated for the diagonal line method, owing to the relatively smaller tidal influences on the sediment. The partition coefficients (especially the Kp for SPM-water) had negative correlations with dissolved organic carbon (DOC), while positive correlations with particle size were noted for most of the heavy metals in sediment. Two types of significant correlations were observed between Kp and metal speciation (i.e., exchangeable, carbonate, reducible, organic, and residual fractions), which can be used to identify the dominant phase-partition mechanisms (e.g., adsorption or desorption) of heavy metals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A Meinardus Theorem with Multiple Singularities

    NASA Astrophysics Data System (ADS)

    Granovsky, Boris L.; Stark, Dudley

    2012-09-01

    Meinardus proved a general theorem about the asymptotics of the number of weighted partitions, when the Dirichlet generating function for weights has a single pole on the positive real axis. Continuing (Granovsky et al., Adv. Appl. Math. 41:307-328, 2008), we derive asymptotics for the numbers of three basic types of decomposable combinatorial structures (or, equivalently, ideal gas models in statistical mechanics) of size n, when their Dirichlet generating functions have multiple simple poles on the positive real axis. Examples to which our theorem applies include ones related to vector partitions and quantum field theory. Our asymptotic formula for the number of weighted partitions disproves the belief accepted in the physics literature that the main term in the asymptotics is determined by the rightmost pole.

  13. Traditional and novel halogenated flame retardants in urban ambient air: Gas-particle partitioning, size distribution and health implications.

    PubMed

    de la Torre, A; Barbas, B; Sanz, P; Navarro, I; Artíñano, B; Martínez, M A

    2018-07-15

    Urban ambient air samples, including gas-phase (PUF), total suspended particulate (TSP), PM10, PM2.5 and PM1 airborne particle fractions, were collected to evaluate the gas-particle partitioning and particle size distribution of traditional and novel halogenated flame retardants. Simultaneously, passive air samplers (PAS) were deployed in the same location. Analytes included 33 polybrominated diphenyl ethers (PBDEs), 2,2',4,4',5,5'-hexabromobiphenyl (BB-153), hexabromobenzene (HBB), pentabromoethylbenzene (PBEB), 1,2-bis(2,4,6-tribromophenoxy)ethane (BTBPE), decabromodiphenyl ethane (DBDPE), dechloranes (Dec 602, 603, 604, 605 or Dechlorane Plus (DP)) and Chlordane Plus (CP). The Clausius-Clapeyron equation, the gas-particle partition coefficient (Kp), the fraction partitioned onto particles (φ), and a human respiratory risk assessment were used to evaluate local or long-distance transport sources, gas-particle partitioning sorption mechanisms, and implications for health, respectively. PBDEs were the flame retardants with the highest levels (13.9 pg m-3, median TSP+PUF), followed by DP (1.56 pg m-3), mirex (0.78 pg m-3), PBEB (0.05 pg m-3), and BB-153 (0.04 pg m-3). The PBDE congener pattern in particulate matter was dominated by BDE-209, while the contribution of the more volatile congeners BDE-28, -47, -99, and -100 was higher in the gas phase. Congener contribution increased with particle size and bromination degree, with BDE-47 mostly bound to particles ≤ PM1, BDE-99 to > PM1, and BDE-209 to > PM2.5. No significant differences were found between the PBDE and DP concentrations obtained with passive and active samplers, demonstrating the ability of the former to collect particulate material. Deposition efficiencies and fluxes of inhaled PBDEs and DP in the human respiratory tract were calculated. Contribution in the respiratory tract was dominated by the head airway (2.16 and 0.26 pg h-1 for PBDEs and DP), followed by the tracheobronchial (0.12 and 0.02 pg h-1) and alveolar (0.01-0.002 pg h-1) regions. Finally, hazard quotient values for inhalation were proposed (6.3×10-7 and 1.1×10-8 for PBDEs and DP), reflecting a low cancer risk through inhalation. Copyright © 2018 Elsevier B.V. All rights reserved.
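    The abstract does not spell out its formula for φ; assuming the standard equilibrium partitioning relationship, the particle-bound fraction follows directly from Kp and the TSP concentration:

```python
def particle_fraction(kp, tsp):
    """phi = Kp*TSP / (1 + Kp*TSP): fraction of a compound bound to particles.

    kp: gas-particle partition coefficient, m3/ug; tsp: particulate matter, ug/m3.
    Standard equilibrium form (an assumption here, not taken from the abstract).
    """
    x = kp * tsp
    return x / (1.0 + x)
```

    For example, a compound with Kp = 0.01 m3/ug at TSP = 100 ug/m3 is split evenly between phases, while less volatile congeners (larger Kp) sit almost entirely on particles, consistent with BDE-209 dominating the particulate pattern above.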

  14. The subtle intracapsular survival of the fittest: maternal investment, sibling conflict, or environmental effects?

    PubMed

    Smith, Kathryn E; Thatje, Sven

    2013-10-01

    Developmental resource partitioning and the consequent offspring size variations are of fundamental importance for marine invertebrates, in both an ecological and evolutionary context. Typically, differences are attributed to maternal investment and the environmental factors determining it; additional variables, such as environmental factors affecting development, are rarely discussed. During intracapsular development, for example, sibling conflict has the potential to affect resource partitioning. Here, we investigate encapsulated development in the marine gastropod Buccinum undatum. We examine the effects of maternal investment and temperature on intracapsular resource partitioning in this species. Reproductive output was positively influenced by maternal investment, but additionally, temperature and sibling conflict significantly affected offspring size, number, and quality during development. Increased temperature led to reduced offspring number, and a combination of high sibling competition and asynchronous early development resulted in a common occurrence of "empty" embryos, which received no nutrition at all. The proportion of empty embryos increased with both temperature and capsule size. Additionally, a novel example of a risk in sibling conflict was observed; embryos cannibalized by others during early development ingested nurse eggs from inside the consumer, killing it in a "Trojan horse" scenario. Our results highlight the complexity surrounding offspring fitness. Encapsulation should be considered as significant in determining maternal output. Considering predicted increases in ocean temperatures, this may impact offspring quality and consequently species distribution and abundance.

  15. Covariance Partition Priors: A Bayesian Approach to Simultaneous Covariance Estimation for Longitudinal Data.

    PubMed

    Gaskins, J T; Daniels, M J

    2016-01-02

    The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consist of multiple groups, it is often assumed that the covariance matrices are either equal across groups or completely distinct. We seek methodology that allows borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior that proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.

  16. Hydraulic geometry of the Platte River in south-central Nebraska

    USGS Publications Warehouse

    Eschner, T.R.

    1982-01-01

    At-a-station hydraulic geometry of the Platte River in south-central Nebraska is complex. The range of exponents of simple power-function relations is large, both between different reaches of the river and among different sections within a given reach. The at-a-station exponents plot in several fields of the b-f-m diagram, suggesting that morphologic and hydraulic changes with increasing discharge vary considerably. Systematic changes in the plotting positions of the exponents with time indicate that, in general, the width exponent has decreased, although trends are not readily apparent in the other exponents. Plots of the hydraulic-geometry relations indicate that simple power functions are not the proper model in all instances. For these sections, breaks in the slopes of the hydraulic-geometry relations serve to partition the data sets. Power functions fit separately to the partitioned data described the width-, depth-, and velocity-discharge relations more accurately than did a single power function. Plotting positions of the exponents from hydraulic-geometry relations of partitioned data sets on b-f-m diagrams indicate that much of the apparent variation in the plotting positions of single power functions arises because the single power functions compromise between both subsets of the partitioned data. For several sections, the shape of the channel primarily accounts for the better fit of two power functions to partitioned data than of a single power function over the entire range of data. These non-log-linear relations may have significance for channel maintenance. (USGS)
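    The at-a-station relations behind the b-f-m diagram are the power functions w = aQ^b, d = cQ^f, v = kQ^m, whose exponents satisfy b + f + m = 1 by continuity (Q = w·d·v). A minimal sketch of fitting them by log-log regression, using synthetic data (not the Platte River measurements):

```python
import numpy as np

# Synthetic single-power-law channel (illustrative exponents, not Platte data):
Q = np.logspace(0, 3, 50)        # discharge
w = 2.0 * Q**0.5                 # width    w = a * Q^b
d = 0.3 * Q**0.3                 # depth    d = c * Q^f
v = Q / (w * d)                  # velocity from continuity => v = k * Q^m

# Exponents are the slopes of the log-log regressions:
b = np.polyfit(np.log(Q), np.log(w), 1)[0]
f = np.polyfit(np.log(Q), np.log(d), 1)[0]
m = np.polyfit(np.log(Q), np.log(v), 1)[0]
```

    The partitioned-data approach described above simply repeats these fits separately on the subsets of Q above and below the observed break in slope.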

  17. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
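    The MST-style agglomeration with a dissimilarity cutoff can be sketched with a Kruskal-type pass over edges of the partition graph. The snippet below is a toy stand-in, not the paper's implementation: polygons are abstract node indices, the edge dissimilarities are invented, and merging simply stops once the grouping criterion is exceeded.

```python
def mst_segments(n, edges, max_dissim):
    """Agglomerate n seed polygons by Kruskal-style MST construction,
    skipping edges whose dissimilarity exceeds max_dissim.
    edges: list of (dissimilarity, i, j). Returns a segment label per polygon."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for w, i, j in sorted(edges):
        if w > max_dissim:        # grouping stops once the criterion is exceeded
            break
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj       # merge the two partitions
    roots = [find(i) for i in range(n)]
    relabel = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [relabel[r] for r in roots]

# Toy graph: polygons 0-1-2 are mutually similar, 3-4 similar, 2-3 dissimilar
edges = [(0.1, 0, 1), (0.2, 1, 2), (0.9, 2, 3), (0.15, 3, 4)]
print(mst_segments(5, edges, max_dissim=0.5))  # → [0, 0, 0, 1, 1]
```

    Because the high-dissimilarity edge (here 0.9, standing in for a detected image edge) is never merged, the two groups remain separate segments, mirroring how detected edges constrain agglomeration in the framework.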

  18. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit the volumetric capabilities of CT, providing complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (about 0.01 seconds per slice), which makes it attractive for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.

  19. A Bayesian partition modelling approach to resolve spatial variability in climate records from borehole temperature inversion

    NASA Astrophysics Data System (ADS)

    Hopcroft, Peter O.; Gallagher, Kerry; Pain, Christopher C.

    2009-08-01

    Collections of suitably chosen borehole profiles can be used to infer large-scale trends in ground-surface temperature (GST) histories for the past few hundred years. These reconstructions are based on a large database of carefully selected borehole temperature measurements from around the globe. Since non-climatic thermal influences are difficult to identify, representative temperature histories are derived by averaging individual reconstructions to minimize the influence of these perturbing factors. This may lead to three potentially important drawbacks: the net signal of non-climatic factors may not be zero, meaning that the average does not reflect the best estimate of past climate; the averaging over large areas restricts the useful amount of more local climate change information available; and the inversion methods used to reconstruct the past temperatures at each site must be mathematically identical and are therefore not necessarily best suited to all data sets. In this work, we avoid these issues by using a Bayesian partition model (BPM), which is computed using a trans-dimensional form of a Markov chain Monte Carlo algorithm. This then allows the number and spatial distribution of different GST histories to be inferred from a given set of borehole data by partitioning the geographical area into discrete partitions. Profiles that are heavily influenced by non-climatic factors will be partitioned separately. Conversely, profiles with climatic information, which is consistent with neighbouring profiles, will then be inferred to lie in the same partition. The geographical extent of these partitions then leads to information on the regional extent of the climatic signal. In this study, three case studies are described using synthetic and real data. The first demonstrates that the Bayesian partition model method is able to correctly partition a suite of synthetic profiles according to the inferred GST history. 
In the second, more realistic case, a series of temperature profiles are calculated using surface air temperatures of a global climate model simulation. In the final case, 23 real boreholes from the United Kingdom, previously used for climatic reconstructions, are examined and the results compared with a local instrumental temperature series and the previous estimate derived from the same borehole data. The results indicate that the majority (17) of the 23 boreholes are unsuitable for climatic reconstruction purposes, at least without including other thermal processes in the forward model.

  20. Crystal-chemistry and partitioning of REE in whitlockite

    NASA Technical Reports Server (NTRS)

    Colson, R. O.; Jolliff, B. L.

    1993-01-01

    Partitioning of Rare Earth Elements (REE) in whitlockite is complicated by the fact that two or more charge-balancing substitutions are involved and that concentrations of REE in natural whitlockites are sufficiently high that simple partition coefficients are not expected to be constant even if mixing in the system is completely ideal. The present study combines preexisting REE partitioning data in whitlockites with new experiments in the same compositional system and at the same temperature (approximately 1030 C) to place additional constraints on the complex variations of REE partition coefficients and to test theoretical models for how REE partitioning should vary with REE concentration and other compositional variables. With this data set, and by combining crystallographic and thermochemical constraints with a SAS simultaneous-equation best-fitting routine, it is possible to infer answers to the following questions: what is the speciation on the individual sites Ca(B), Mg, and Ca(IIA) (where the ideal structural formula is Ca(B)18 Mg2Ca(IIA)2P14O56); how are REEs charge-balanced in the crystal; and is mixing of REE in whitlockite ideal or non-ideal. This understanding is necessary to extrapolate derived partition coefficients to other compositional systems, and it provides a broadened understanding of the crystal chemistry of whitlockite.

  1. Physicochemical properties/descriptors governing the solubility and partitioning of chemicals in water-solvent-gas systems. Part 1. Partitioning between octanol and air.

    PubMed

    Raevsky, O A; Grigor'ev, V J; Raevskaja, O E; Schaper, K-J

    2006-06-01

    QSPR analyses of a data set containing experimental partition coefficients in the three systems octanol-water, water-gas, and octanol-gas for 98 chemicals have shown that it is possible to calculate any partition coefficient in the system 'gas phase/octanol/water' by three different approaches: (1) from experimental partition coefficients obtained in the two other subsystems (in many cases, however, these data may not be available); (2) by a traditional QSPR analysis based on, e.g., HYBOT descriptors (hydrogen-bond acceptor and donor factors, SigmaCa and SigmaCd, together with polarisability alpha, a steric bulk-effect descriptor) supplemented with substructural indicator variables; and (3) by a very promising approach that combines the similarity concept with QSPR based on HYBOT descriptors. In this approach, observed partition coefficients of the structurally nearest neighbours of a compound of interest are used, together with contributions arising from differences in alpha, SigmaCa, and SigmaCd values between the compound of interest and its nearest neighbour(s). In this investigation, highly significant relationships were obtained by approaches (1) and (3) for the octanol/gas-phase partition coefficient (log Log).
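    Approach (1) is a thermodynamic cycle: because the octanol-gas coefficient is a product of the octanol-water and water-gas coefficients, the logarithms simply add. The sketch below illustrates this with invented values for a hypothetical solute.

```python
def log_k_octanol_gas(log_k_octanol_water, log_k_water_gas):
    """Thermodynamic cycle: K_og = C_oct/C_gas
    = (C_oct/C_water) * (C_water/C_gas), so the logs add."""
    return log_k_octanol_water + log_k_water_gas

# Illustrative (not measured) values for a moderately hydrophobic solute
log_kow, log_kwg = 2.0, 1.5
print(log_k_octanol_gas(log_kow, log_kwg))  # → 3.5
```

    The same cycle can be rearranged to recover any one of the three coefficients from the other two, which is why measuring two subsystems suffices when data are available.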

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salama, A.; Mikhail, M.

    Comprehensive software packages have been developed at the Western Research Centre as tools to help coal preparation engineers analyze, evaluate, and control coal cleaning processes. The COal Preparation Software package (COPS) performs three functions: (1) data handling and manipulation, (2) data analysis, including the generation of washability data, performance evaluation and prediction, density and size modeling, evaluation of density and size partition characteristics and attrition curves, and (3) generation of graphics output. The Separation ChARacteristics Estimation software packages (SCARE) were developed to balance raw density or size separation data. The cases of density and size separation data are considered. The generated balanced data can take scaled or normalized forms. The scaled form is desirable for direct determination of the partition functions (curves). The raw and generated separation data are displayed in tabular and/or graphical forms. The software packages described in this paper are valuable tools for coal preparation plant engineers and operators for evaluating process performance, adjusting plant parameters, and balancing raw density or size separation data. These packages have been applied very successfully in many projects carried out by WRC for the Canadian coal preparation industry. The software packages are designed to run on a personal computer (PC).

  3. On the estimation of the domain of attraction for discrete-time switched and hybrid nonlinear systems

    NASA Astrophysics Data System (ADS)

    Kit Luk, Chuen; Chesi, Graziano

    2015-11-01

    This paper addresses the estimation of the domain of attraction for discrete-time nonlinear systems where the vector field is subject to changes. First, the paper considers the case of switched systems, where the vector field is allowed to arbitrarily switch among the elements of a finite family. Second, the paper considers the case of hybrid systems, where the state space is partitioned into several regions described by polynomial inequalities, and the vector field is defined on each region independently of the others. In both cases, the problem consists of computing the largest sublevel set of a Lyapunov function included in the domain of attraction. An approach is proposed for solving this problem based on convex programming, which provides a guaranteed inner estimate of the sought sublevel set. The conservatism of the provided estimate can be decreased by increasing the size of the optimisation problem. Some numerical examples illustrate the proposed approach.
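    The sublevel-set idea can be illustrated numerically. The sketch below is not the paper's convex-programming method: it samples the state space of a hypothetical two-mode discrete-time switched linear system and keeps the largest level c of a candidate Lyapunov function V such that V decreases under both modes at every sampled point of {V <= c}. The mode matrices and the quadratic V are invented for illustration.

```python
import numpy as np

# Two stable linear modes of a discrete-time switched system (illustrative)
A1 = np.array([[0.5, 0.2], [0.0, 0.6]])
A2 = np.array([[0.6, 0.0], [0.3, 0.5]])

def V(x):
    return float(x @ x)  # candidate Lyapunov function V(x) = ||x||^2

# Sample the state space; a level set {V <= c} is an inner estimate of the
# domain of attraction if V decreases under BOTH modes everywhere inside it.
rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(5000, 2))
bad = [V(x) for x in pts
       if V(A1 @ x) >= V(x) or V(A2 @ x) >= V(x)]  # decrease violated
c = min(bad) if bad else max(V(x) for x in pts)    # largest admissible level
print(c > 0)  # a positive invariant level was found on the sampled box
```

    A sampling check like this gives no guarantee, which is exactly why the paper's convex-programming formulation, with its certified inner estimate, is preferable in practice.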

  4. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
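    The hierarchy-building step can be sketched with a toy stand-in. The snippet below recursively splits samples in two, using a simple deterministic 2-means step in place of the Bayesian two-component mixture fit (Hamiltonian Monte Carlo sampling and the geologic consistency checks are beyond a sketch); the synthetic data, depth, and initialization are invented.

```python
import numpy as np

def two_means(X, iters=50):
    """Split samples into two clusters -- a simple stand-in for the paper's
    two-component Bayesian mixture fit at each level of the hierarchy."""
    centers = np.stack([X.min(axis=0), X.max(axis=0)]).astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None])**2).sum(-1), axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def hierarchy(X, idx, depth, out, path="r"):
    """Recursively partition the samples, recording each sample's cluster path."""
    for i in idx:
        out[i] = path
    if depth == 0 or len(idx) < 4:
        return
    labels = two_means(X[idx])
    for k in (0, 1):
        hierarchy(X, idx[labels == k], depth - 1, out, path + str(k))

# Four well-separated synthetic groups at two spatial scales
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.1, (20, 2))
               for m in ((0, 0), (0, 1), (5, 0), (5, 1))])
out = {}
hierarchy(X, np.arange(len(X)), depth=2, out=out)
print(len(set(out.values())))  # number of leaf clusters found
```

    Each level of recursion corresponds to one split in the paper's hierarchy, so different depths expose structure at different spatial scales.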

  5. Platinum Partitioning at Low Oxygen Fugacity: Implications for Core Formation Processes

    NASA Technical Reports Server (NTRS)

    Medard, E.; Martin, A. M.; Righter, K.; Lanziroti, A.; Newville, M.

    2016-01-01

    Highly siderophile elements (HSE = Au, Re, and the Pt-group elements) are tracers of silicate/metal interactions during planetary processes. Since most core-formation models involve some state of equilibrium between liquid silicate and liquid metal, understanding the partitioning of highly siderophile elements (HSE) between silicate and metallic melts is a key issue for models of core/mantle equilibria and for core-formation scenarios. However, partitioning models for HSE are still inaccurate due to the lack of sufficient experimental constraints describing how partitioning varies with key variables like temperature, pressure, and oxygen fugacity. In this abstract, we describe a self-consistent set of experiments aimed at determining the valence of platinum, one of the HSE, in silicate melts. This information is required to parameterize the evolution of platinum partitioning with oxygen fugacity.

  6. Ion distribution and selectivity of ionic liquids in microporous electrodes.

    PubMed

    Neal, Justin N; Wesolowski, David J; Henderson, Douglas; Wu, Jianzhong

    2017-05-07

    The energy density of an electric double layer capacitor, also known as a supercapacitor, depends on ion distributions in the micropores of its electrodes. Herein we study ion selectivity and partitioning of symmetric, asymmetric, and mixed ionic liquids among different pores using classical density functional theory. We find that a charged micropore in contact with mixed ions of the same valence is always selective to the smaller ions, and the ion selectivity, which is strongest when the pore size is comparable to the ion diameters, falls drastically as the pore size increases. The partitioning behavior in ionic liquids is fundamentally different from that of ion distributions in aqueous systems, where ion selectivity is dominated by surface energy and entropic effects insensitive to the degree of confinement.

  7. Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes

    NASA Astrophysics Data System (ADS)

    Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping

    2017-01-01

    Batch processes are characterized by nonlinear dynamics and system uncertainty; therefore, a conventional single model may be ill-suited. A local learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input-variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input-variable subset. When a new test sample arrives, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine these local GPR models into the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
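    The combination step can be sketched independently of the GPR details: each local model gets a posterior weight proportional to its Gaussian likelihood on recent labelled data, and the final prediction is the weighted sum. The snippet below is a simplified stand-in with precomputed predictions; the numbers, the noise level sigma, and the single validation sample are all invented.

```python
import numpy as np

def posterior_weights(preds, y_recent, sigma=1.0):
    """Bayesian inference over local models: the posterior weight of model m
    is proportional to its Gaussian likelihood on recent validation data."""
    loglik = -0.5 * ((preds - y_recent) / sigma) ** 2
    w = np.exp(loglik - loglik.max())  # subtract max for numerical stability
    return w / w.sum()

# Three hypothetical local models: their predictions at a recent labelled
# sample (true value y_recent = 2.0) and at the new query point
val_preds = np.array([1.9, 3.5, 0.2])   # accuracy on the validation sample
test_preds = np.array([4.1, 4.8, 3.0])  # predictions for the new query
w = posterior_weights(val_preds, y_recent=2.0)
y_hat = float(w @ test_preds)           # combined soft-sensor prediction
print(w.argmax())  # → 0: the locally most accurate model dominates
```

    In the paper's setting each local GPR would also supply its own predictive variance; using a shared sigma here is a deliberate simplification.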

  8. Testing and extension of a sea lamprey feeding model

    USGS Publications Warehouse

    Cochran, Philip A.; Swink, William D.; Kinziger, Andrew P.

    1999-01-01

    A previous model of feeding by sea lamprey Petromyzon marinus predicted energy intake and growth by lampreys as a function of lamprey size, host size, and duration of feeding attachments, but it was applicable only to lampreys feeding at 10°C and it was tested against only a single small data set of limited scope. We extended the model to other temperatures and tested it against an extensive data set (more than 700 feeding bouts) accumulated during experiments with captive sea lampreys. Model predictions of instantaneous growth were highly correlated with observed growth, and a partitioning of mean squared error between model predictions and observed results showed that 88.5% of the variance was due to random variation rather than to systematic errors. However, deviations between observed and predicted values varied substantially, especially for short feeding bouts. Predicted and observed growth trajectories of individual lampreys during multiple feeding bouts during the summer tended to correspond closely, but predicted growth was generally much higher than observed growth late in the year. This suggests the possibility that large overwintering lampreys reduce their feeding rates while attached to hosts. Seasonal or size-related shifts in the fate of consumed energy may provide an alternative explanation. The lamprey feeding model offers great flexibility in assessing growth of captive lampreys within various experimental protocols (e.g., different host species or thermal regimes) because it controls for individual differences in feeding history.
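    Partitioning mean squared error between systematic and random components, as in the model test above, can be done with a standard identity. The sketch below uses Theil's decomposition (bias, amplitude, and imperfect-correlation terms); whether the study used exactly this scheme is not stated here, so treat the snippet and its synthetic data as illustrative only.

```python
import numpy as np

def mse_partition(pred, obs):
    """Theil's decomposition of mean squared error into a bias term,
    a variance (amplitude) term, and a random (imperfect-correlation) term."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    sp, so = pred.std(), obs.std()          # population standard deviations
    r = np.corrcoef(pred, obs)[0, 1]
    bias = (pred.mean() - obs.mean()) ** 2
    amplitude = (sp - so) ** 2
    random = 2 * sp * so * (1 - r)
    return bias, amplitude, random

rng = np.random.default_rng(0)
obs = rng.normal(0, 1, 500)
pred = obs + rng.normal(0, 0.3, 500)        # unbiased predictions with scatter
b, a, rnd = mse_partition(pred, obs)
mse = np.mean((pred - obs) ** 2)
print(abs((b + a + rnd) - mse) < 1e-9)      # the components sum to the MSE
print(rnd / mse > 0.5)                      # error here is mostly random
```

    A high random fraction, as reported for the lamprey model (88.5%), indicates scatter rather than systematic over- or under-prediction.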

  9. Partitioning of 13C-photosynthate from Spur Leaves during Fruit Growth of Three Japanese Pear (Pyrus pyrifolia) Cultivars Differing in Maturation Date

    PubMed Central

    ZHANG, CAIXI; TANABE, KENJI; TAMURA, FUMIO; ITAI, AKIHIRO; WANG, SHIPING

    2005-01-01

    • Background and Aims In fruit crops, fruit size at harvest is an important aspect of quality. With Japanese pears (Pyrus pyrifolia), later maturing cultivars usually have larger fruits than earlier maturing cultivars. It is considered that the supply of photosynthate during fruit development is a critical determinant of size. To assess the interaction of assimilate supply and early/late maturity of cultivars and its effect on final fruit size, the pattern of carbon assimilate partitioning from spur leaves (source) to fruit and other organs (sinks) during fruit growth was investigated using three genotypes differing in maturation date. • Methods Partitioning of photosynthate from spur leaves during fruit growth was investigated by exposure of spurs to 13CO2 and measurement of the change in 13C abundance in dry matter with time. Leaf number and leaf area per spur, fresh fruit weight, cell number and cell size of the mesocarp were measured and used to model the development of the spur leaf and fruit. • Key Results Compared with the earlier-maturing cultivars ‘Shinsui’ and ‘Kousui’, the larger-fruited, later-maturing cultivar ‘Shinsetsu’ had a greater total leaf area per spur, greater source strength (source weight × source specific activity), with more 13C assimilated per spur and allocated to fruit, smaller loss of 13C in respiration and export over the season, and longer duration of cell division and enlargement. Histology shows that cultivar differences in final fruit size were mainly attributable to the number of cells in the mesocarp. • Conclusions Assimilate availability during the period of cell division was crucial for early fruit growth and closely correlated with final fruit size. Early fruit growth of the earlier-maturing cultivars, but not the later-maturing ones, was severely restrained by assimilate supply rather than by sink limitation. PMID:15655106

  10. Partitioning behavior of aromatic components in jet fuel into diverse membrane-coated fibers.

    PubMed

    Baynes, Ronald E; Xia, Xin-Rui; Barlow, Beth M; Riviere, Jim E

    2007-11-01

    Jet fuel components are known to partition into skin and produce occupational irritant contact dermatitis (OICD) and potentially adverse systemic effects. The purpose of this study was to determine how jet fuel components partition (1) from solvent mixtures into diverse membrane-coated fibers (MCFs) and (2) from biological media into MCFs to predict tissue distribution. Three diverse MCFs, polydimethylsiloxane (PDMS, lipophilic), polyacrylate (PA, polarizable), and carbowax (CAR, polar), were selected to simulate the physicochemical properties of skin in vivo. Following an appropriate equilibrium time between the MCF and dosing solutions, the MCF was injected directly into a gas chromatograph/mass spectrometer (GC-MS) to quantify the amount that partitioned into the membrane. Three vehicles (water, 50% ethanol-water, and albumin-containing media solution) were studied for selected jet fuel components. The more hydrophobic the component, the greater was the partitioning into the membranes across all MCF types, especially from water. The presence of ethanol as a surrogate solvent resulted in significantly reduced partitioning into the MCFs with discernible differences across the three fibers based on their chemistries. The presence of a plasma substitute (media) also reduced partitioning into the MCF, with the CAR MCF system being better correlated to the predicted partitioning of aromatic components into skin. This study demonstrated that a single or multiple set of MCF fibers may be used as a surrogate for octanol/water systems and skin to assess partitioning behavior of nine aromatic components frequently formulated with jet fuels. These diverse inert fibers were able to assess solute partitioning from a blood substitute such as media into a membrane possessing physicochemical properties similar to human skin. 
This information may be incorporated into physiologically based pharmacokinetic (PBPK) models to provide a more accurate assessment of tissue dosimetry of related toxicants.

  11. Seventy years of stream‐fish collections reveal invasions and native range contractions in an Appalachian (USA) watershed

    USGS Publications Warehouse

    Buckwalter, Joseph D.; Frimpong, Emmanuel A.; Angermeier, Paul L.; Barney, Jacob N.

    2018-01-01

    Aim: Knowledge of expanding and contracting ranges is critical for monitoring invasions and assessing conservation status, yet reliable data on distributional trends are lacking for most freshwater species. We developed a quantitative technique to detect the sign (expansion or contraction) and functional form of range-size changes for freshwater species based on collections data, while accounting for possible biases due to variable collection effort. We applied this technique to quantify stream-fish range expansions and contractions in a highly invaded river system. Location: Upper and middle New River (UMNR) basin, Appalachian Mountains, USA. Methods: We compiled a 77-year stream-fish collections dataset partitioned into ten time periods. To account for variable collection effort among time periods, we aggregated the collections into 100 watersheds and expressed a species' range size as detections per watershed (HUC) sampled (DPHS). We regressed DPHS against time by species and used an information-theoretic approach to compare linear and nonlinear functional forms fitted to the data points and to classify each species as spreader, stable or decliner. Results: We analysed changes in range size for 74 UMNR fishes, including 35 native and 39 established introduced species. We classified the majority (51%) of introduced species as spreaders, compared to 31% of natives. An exponential functional form fitted best for 84% of spreaders. Three natives were among the most rapid spreaders. All four decliners were New River natives. Main conclusions: Our DPHS-based approach facilitated quantitative analyses of distributional trends for stream fishes based on collections data. Partitioning the dataset into multiple time periods allowed us to distinguish long-term trends from population fluctuations and to examine nonlinear forms of spread.
Our framework sets the stage for further study of drivers of stream‐fish invasions and declines in the UMNR and is widely transferable to other freshwater taxa and geographic regions.
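    The model-comparison step can be sketched with an information criterion. The snippet below fits linear and exponential forms to a hypothetical DPHS series (all values invented, not from the study) and compares them with AIC computed from the residual sum of squares; the study's exact information-theoretic procedure may differ.

```python
import numpy as np

def aic(rss, n, k):
    """AIC for least-squares fits: n*ln(RSS/n) + 2k (constant terms dropped)."""
    return n * np.log(rss / n) + 2 * k

# Hypothetical DPHS (detections per HUC sampled) over ten time periods
t = np.arange(10, dtype=float)
dphs = np.array([0.02, 0.03, 0.03, 0.05, 0.08, 0.12, 0.18, 0.27, 0.40, 0.61])

# Linear model: DPHS = a + b*t
b, a = np.polyfit(t, dphs, 1)
rss_lin = float(np.sum((dphs - (a + b * t)) ** 2))

# Exponential model: DPHS = c*exp(d*t), fitted in log space
d, logc = np.polyfit(t, np.log(dphs), 1)
rss_exp = float(np.sum((dphs - np.exp(logc) * np.exp(d * t)) ** 2))

# Both models have two parameters; the lower AIC wins
lin_aic, exp_aic = aic(rss_lin, 10, 2), aic(rss_exp, 10, 2)
print(exp_aic < lin_aic)  # this spreader is better described as exponential
```

    A species whose exponential form wins, with d > 0, would be classified as a spreader; a declining or flat series would favour the other classes.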

  12. Reducing I/O variability using dynamic I/O path characterization in petascale storage systems

    DOE PAGES

    Son, Seung Woo; Sehrish, Saba; Liao, Wei-keng; ...

    2016-11-01

    In petascale systems with a million CPU cores, scalable and consistent I/O performance is becoming increasingly difficult to sustain, mainly because of I/O variability, which is caused by concurrently running processes/jobs competing for I/O or by a RAID rebuild when a disk drive fails. We present a mechanism that stripes across a selected subset of I/O nodes with the lightest workload at runtime to achieve the highest I/O bandwidth available in the system. In this paper, we propose a probing mechanism to enable application-level dynamic file striping to mitigate I/O variability. We also implement the proposed mechanism in the high-level I/O library that enables memory-to-file data layout transformation and allows transparent file partitioning using subfiling. Subfiling is a technique that partitions data into a set of smaller files and manages access to them, so that the data are treated as a single, normal file by users. Here, we demonstrate that our bandwidth-probing mechanism can successfully identify temporally slower I/O nodes without noticeable runtime overhead. Experimental results on NERSC's systems also show that our approach isolates I/O variability effectively on shared systems and improves overall collective I/O performance with less variation.
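    The node-selection idea reduces to picking the k nodes with the shortest probe times. The sketch below illustrates just that selection step with invented node names and latencies; the actual probing, striping, and subfiling machinery of the paper is not reproduced.

```python
import heapq

def select_stripe_nodes(probe_times, k):
    """Pick the k I/O nodes with the shortest probe time (lightest load)
    to stripe a file across."""
    return [node for _, node in heapq.nsmallest(
        k, ((t, n) for n, t in probe_times.items()))]

# Probe latencies (ms) measured per I/O node at runtime (illustrative)
probe = {"ion0": 12.1, "ion1": 3.4, "ion2": 55.0, "ion3": 4.2, "ion4": 8.9}
print(select_stripe_nodes(probe, 3))  # → ['ion1', 'ion3', 'ion4']
```

    Striping only across the selected subset avoids the temporally slow node (here "ion2"), which is the source of the variability the paper targets.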

  13. In silico screening of drug-membrane thermodynamics reveals linear relations between bulk partitioning and the potential of mean force

    NASA Astrophysics Data System (ADS)

    Menichetti, Roberto; Kanekal, Kiran H.; Kremer, Kurt; Bereau, Tristan

    2017-09-01

    The partitioning of small molecules in cell membranes—a key parameter for pharmaceutical applications—typically relies on experimentally available bulk partitioning coefficients. Computer simulations provide a structural resolution of the insertion thermodynamics via the potential of mean force but require significant sampling at the atomistic level. Here, we introduce high-throughput coarse-grained molecular dynamics simulations to screen thermodynamic properties. This application of physics-based models in a large-scale study of small molecules establishes linear relationships between partitioning coefficients and key features of the potential of mean force. This allows us to predict the structure of the insertion from bulk experimental measurements for more than 400 000 compounds. The potential of mean force hereby becomes an easily accessible quantity—already recognized for its high predictability of certain properties, e.g., passive permeation. Further, we demonstrate how coarse graining helps reduce the size of chemical space, enabling a hierarchical approach to screening small molecules.

  14. Rational design of polymer-based absorbents: application to the fermentation inhibitor furfural.

    PubMed

    Nwaneshiudu, Ikechukwu C; Schwartz, Daniel T

    2015-01-01

    Reducing the amount of water-soluble fermentation inhibitors like furfural is critical for downstream bio-processing steps to biofuels. A theoretical approach for tailoring absorption polymers to reduce these pretreatment contaminants would be useful for optimal bioprocess design. Experiments were performed to measure aqueous furfural partitioning into polymer resins of bisphenol A diglycidyl ether (epoxy) and polydimethylsiloxane (PDMS). Experimentally measured partitioning of furfural between water and PDMS, the more hydrophobic polymer, showed poor performance, with the logarithm of the PDMS-to-water partition coefficient falling between -0.62 and -0.24 (95% confidence). In contrast, the fast-setting epoxy was found to effectively partition furfural, with the logarithm of the epoxy-to-water partition coefficient falling between 0.41 and 0.81 (95% confidence). Flory-Huggins theory is used to predict the partitioning of furfural into diverse polymer absorbents and accounts well for these results. We show that Flory-Huggins theory can be adapted to guide the selection of polymer absorbents for the separation of low-molecular-weight organic species from aqueous solutions. This work lays the groundwork for the general design of polymers for the separation of a wide range of inhibitory compounds in biomass pretreatment streams.
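    The reported intervals on log K can be illustrated with a small calculation. The snippet below computes log10 partition coefficients from replicate concentration pairs and an approximate 95% interval; the concentrations are invented, and the normal approximation (z = 1.96) is an assumption, not the study's statistics.

```python
import math
import statistics

def log_k_interval(c_polymer, c_water, z=1.96):
    """Approximate 95% interval for the mean log10 partition coefficient
    from replicate (polymer, water) concentration pairs at equilibrium."""
    logk = [math.log10(cp / cw) for cp, cw in zip(c_polymer, c_water)]
    m = statistics.mean(logk)
    half = z * statistics.stdev(logk) / math.sqrt(len(logk))
    return m - half, m + half

# Invented replicate furfural concentrations (arbitrary units)
c_epoxy = [4.1, 3.8, 4.5, 4.0]
c_water = [1.0, 1.1, 0.9, 1.0]
lo, hi = log_k_interval(c_epoxy, c_water)
print(lo < hi and lo > 0)  # a positive log K: furfural favours the epoxy
```

    A positive interval, like the epoxy's reported 0.41 to 0.81, means furfural concentrates in the polymer; PDMS's negative interval means it stays mostly in the water.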

  15. Evaluation of Hierarchical Clustering Algorithms for Document Datasets

    DTIC Science & Technology

    2002-06-03

    link, complete-link, and group average (UPGMA)) and a new set of merging criteria derived from the six partitional criterion functions. Overall, we ... used the single-link, complete-link, and UPGMA schemes, as well as the various partitional criterion functions described in Section 3.1. The single-link ... other (complete-link approach). The UPGMA scheme [16] (also known as group average) overcomes these problems by measuring the similarity of two clusters

  16. Binary mesh partitioning for cache-efficient visualization.

    PubMed

    Tchiboukdjian, Marc; Danjean, Vincent; Raffin, Bruno

    2010-01-01

    One important bottleneck when visualizing large data sets is the data transfer between processor and memory. Cache-aware (CA) and cache-oblivious (CO) algorithms take the memory hierarchy into consideration to design cache-efficient algorithms. CO approaches have the advantage of adapting to unknown and varying memory hierarchies. Recent CA and CO algorithms developed for 3D mesh layouts significantly improve the performance of previous approaches, but they lack theoretical performance guarantees. We present in this paper an O(N log N) algorithm to compute a CO layout for unstructured but well-shaped meshes. We prove that a coherent traversal of an N-size mesh in dimension d induces fewer than N/B + O(N/M^{1/d}) cache misses, where B and M are the block size and the cache size, respectively. Experiments show that our layout computation is faster and significantly less memory-consuming than the best known CO algorithm. Performance is comparable to this algorithm for classical visualization-algorithm access patterns, or better when the BSP tree produced while computing the layout is used as an acceleration data structure adjusted to the layout. We also show that cache-oblivious approaches lead to significant performance increases on recent GPU architectures.

  17. Electronic structures of GeSi nanoislands grown on pit-patterned Si(001) substrate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Han, E-mail: Dabombyh@aliyun.com; Yu, Zhongyuan

    2014-11-15

Patterning pits on a Si(001) substrate prior to Ge deposition is an important approach to achieving GeSi nanoislands with high ordering and size uniformity. In the present work, the electronic structures of realistic uncapped pyramid, dome, barn, and cupola nanoislands grown in (105) pits are systematically investigated by solving the Schrödinger equation for heavy holes, incorporating the inhomogeneous strain distribution and nonlinear composition-dependent band parameters. Uniform, partitioned, and equilibrium composition profiles (CPs) in the nanoisland and the inverted pyramid structure are simulated separately. We demonstrate the huge impact of the composition profile on the localization of the heavy hole: the ground-state wave function is confined near the pit facets for the uniform CP, at the bottom of the nanoisland for the partitioned CP, and at the top of the nanoisland for the equilibrium CP. Moreover, this localization is gradually compromised by the size effect as the pit filling ratio or pit size decreases. The results provide a fundamental guideline for designing nanoislands on pit-patterned substrates for desired applications.

  18. Fully polynomial-time approximation scheme for a special case of a quadratic Euclidean 2-clustering problem

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Khandeev, V. I.

    2016-02-01

The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities), minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers, is considered. It is assumed that the center of one of the sought clusters is specified at a given (arbitrary) point of space (without loss of generality, at the origin), while the center of the other one is unknown and is determined as the mean value over all elements of that cluster. It is shown that, unless P = NP, no fully polynomial-time approximation scheme exists for this problem in the general case, while such a scheme is constructed for the case of a fixed space dimension.
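The objective in this abstract is concrete enough to state in code. Below is a minimal sketch (names hypothetical, 2-D points only): one cluster's center is pinned at the origin, the other's is the mean of its members, and a brute-force search over all splits of the required cardinality makes the cost explicit; the FPTAS itself is not reproduced here.

```python
from itertools import combinations

def two_cluster_cost(points, b_idx):
    """Objective from the abstract: one cluster is centered at the origin,
    the other at its own mean; sum of squared distances within each cluster."""
    b = [points[i] for i in b_idx]
    a = [p for i, p in enumerate(points) if i not in b_idx]
    cost_a = sum(x * x + y * y for x, y in a)      # center fixed at origin
    mx = sum(x for x, _ in b) / len(b)
    my = sum(y for _, y in b) / len(b)
    cost_b = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in b)
    return cost_a + cost_b

def best_partition(points, size_b):
    """Exhaustive search over all splits of the given cardinality; only
    feasible for tiny instances, shown here to make the objective concrete."""
    return min(((two_cluster_cost(points, set(c)), c)
                for c in combinations(range(len(points)), size_b)),
               key=lambda t: t[0])
```

For four points with two far from the origin, the search correctly sends the far pair to the mean-centered cluster.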

  19. Linear scaling computation of the Fock matrix. VI. Data parallel computation of the exchange-correlation matrix

    NASA Astrophysics Data System (ADS)

    Gan, Chee Kwan; Challacombe, Matt

    2003-05-01

    Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center of mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 out of 64 processors, while for a 110 molecule water cluster at standard density it achieved a speedup of 113 out of 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26 atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. 
For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.
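The equal time partitioning idea, rebalancing grid work using times measured in the previous self-consistent-field iteration, can be illustrated with a simple greedy sketch (a stand-in, not the authors' algorithm; `equal_time_partition` is a hypothetical name) that cuts a 1-D sequence of sub-volume timings into contiguous chunks of roughly equal total time:

```python
def equal_time_partition(times, nproc):
    """Assign a 1-D sequence of measured sub-volume timings to `nproc`
    processors so that each contiguous chunk carries roughly equal work.
    A greedy sketch of the load-balancing idea, not the paper's algorithm."""
    target = sum(times) / nproc
    chunks, current, acc = [], [], 0.0
    for t in times:
        current.append(t)
        acc += t
        if acc >= target and len(chunks) < nproc - 1:
            chunks.append(current)        # close this processor's chunk
            current, acc = [], 0.0
    chunks.append(current)                # remainder goes to the last processor
    return chunks
```

With uniform timings the cuts are even; with skewed timings the expensive sub-volumes get chunks of their own.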

  20. The relationship between leaf area growth and biomass accumulation in Arabidopsis thaliana

    PubMed Central

    Weraduwage, Sarathi M.; Chen, Jin; Anozie, Fransisca C.; Morales, Alejandro; Weise, Sean E.; Sharkey, Thomas D.

    2015-01-01

    Leaf area growth determines the light interception capacity of a crop and is often used as a surrogate for plant growth in high-throughput phenotyping systems. The relationship between leaf area growth and growth in terms of mass will depend on how carbon is partitioned among new leaf area, leaf mass, root mass, reproduction, and respiration. A model of leaf area growth in terms of photosynthetic rate and carbon partitioning to different plant organs was developed and tested with Arabidopsis thaliana L. Heynh. ecotype Columbia (Col-0) and a mutant line, gigantea-2 (gi-2), which develops very large rosettes. Data obtained from growth analysis and gas exchange measurements was used to train a genetic programming algorithm to parameterize and test the above model. The relationship between leaf area and plant biomass was found to be non-linear and variable depending on carbon partitioning. The model output was sensitive to the rate of photosynthesis but more sensitive to the amount of carbon partitioned to growing thicker leaves. The large rosette size of gi-2 relative to that of Col-0 resulted from relatively small differences in partitioning to new leaf area vs. leaf thickness. PMID:25914696

  1. The relationship between leaf area growth and biomass accumulation in Arabidopsis thaliana

    DOE PAGES

    Weraduwage, Sarathi M.; Chen, Jin; Anozie, Fransisca C.; ...

    2015-04-09

Leaf area growth determines the light interception capacity of a crop and is often used as a surrogate for plant growth in high-throughput phenotyping systems. The relationship between leaf area growth and growth in terms of mass will depend on how carbon is partitioned among new leaf area, leaf mass, root mass, reproduction, and respiration. A model of leaf area growth in terms of photosynthetic rate and carbon partitioning to different plant organs was developed and tested with Arabidopsis thaliana L. Heynh. ecotype Columbia (Col-0) and a mutant line, gigantea-2 (gi-2), which develops very large rosettes. Data obtained from growth analysis and gas exchange measurements was used to train a genetic programming algorithm to parameterize and test the above model. The relationship between leaf area and plant biomass was found to be non-linear and variable depending on carbon partitioning. The model output was sensitive to the rate of photosynthesis but more sensitive to the amount of carbon partitioned to growing thicker leaves. The large rosette size of gi-2 relative to that of Col-0 resulted from relatively small differences in partitioning to new leaf area vs. leaf thickness.

  2. Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL): adapting the Partial Phylogenetic Profiling algorithm to scan sequences for signatures that predict protein function

    PubMed Central

    2010-01-01

    Background Comparative genomics methods such as phylogenetic profiling can mine powerful inferences from inherently noisy biological data sets. We introduce Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL), a method that applies the Partial Phylogenetic Profiling (PPP) approach locally within a protein sequence to discover short sequence signatures associated with functional sites. The approach is based on the basic scoring mechanism employed by PPP, namely the use of binomial distribution statistics to optimize sequence similarity cutoffs during searches of partitioned training sets. Results Here we illustrate and validate the ability of the SIMBAL method to find functionally relevant short sequence signatures by application to two well-characterized protein families. In the first example, we partitioned a family of ABC permeases using a metabolic background property (urea utilization). Thus, the TRUE set for this family comprised members whose genome of origin encoded a urea utilization system. By moving a sliding window across the sequence of a permease, and searching each subsequence in turn against the full set of partitioned proteins, the method found which local sequence signatures best correlated with the urea utilization trait. Mapping of SIMBAL "hot spots" onto crystal structures of homologous permeases reveals that the significant sites are gating determinants on the cytosolic face rather than, say, docking sites for the substrate-binding protein on the extracellular face. In the second example, we partitioned a protein methyltransferase family using gene proximity as a criterion. In this case, the TRUE set comprised those methyltransferases encoded near the gene for the substrate RF-1. SIMBAL identifies sequence regions that map onto the substrate-binding interface while ignoring regions involved in the methyltransferase reaction mechanism in general. 
Neither method for training set construction requires any prior experimental characterization. Conclusions SIMBAL shows that, in functionally divergent protein families, selected short sequences often significantly outperform their full-length parent sequence for making functional predictions by sequence similarity, suggesting avenues for improved functional classifiers. When combined with structural data, SIMBAL affords the ability to localize and model functional sites. PMID:20102603
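The core PPP/SIMBAL scoring step described above, optimizing a similarity cutoff with binomial statistics, can be sketched as follows. This is a hedged illustration: `simbal_score` and its inputs are hypothetical stand-ins, with each window's ranked database matches reduced to a list of TRUE/FALSE flags.

```python
from math import comb

def binom_sf(k, n, p):
    """Survival function P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def simbal_score(window_hits, n_true_total, n_total):
    """Score one sliding window, PPP-style: walk down the ranked match list
    and find the depth (similarity cutoff) at which the enrichment of
    TRUE-set members is most surprising under a binomial null.
    `window_hits` is the ranked list of booleans (True = TRUE-set member)."""
    p = n_true_total / n_total          # background TRUE frequency
    best, k = 1.0, 0
    for depth, is_true in enumerate(window_hits, start=1):
        k += is_true
        best = min(best, binom_sf(k, depth, p))
    return best
```

A window whose top matches are all TRUE-set members scores far better (lower p-value) than the background frequency would suggest.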

  3. Partitioning CloudSat Ice Water Content for Comparison with Upper-Tropospheric Ice in Global Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Chen, W. A.; Woods, C. P.; Li, J. F.; Waliser, D. E.; Chern, J.; Tao, W.; Jiang, J. H.; Tompkins, A. M.

    2010-12-01

CloudSat provides important estimates of vertically resolved ice water content (IWC) on a global scale based on radar reflectivity. These estimates of IWC have proven beneficial in evaluating the representations of ice clouds in global models. An issue particularly germane to this investigation when performing model-data comparisons of IWC is the question of which component(s) of the frozen water mass are represented by retrieval estimates and how they relate to what is represented in models. The present study developed and applied a new technique to partition CloudSat total IWC into small and large ice hydrometeors, based on the CloudSat-retrieved ice particle size distribution (PSD) parameters. The new method allows one to make relevant model-data comparisons and provides new insights into the models' representation of atmospheric IWC. The partitioned CloudSat IWC suggests that small ice particles contribute 20-30% of the total IWC in the upper troposphere when a threshold size of 100 μm is used. Sensitivity measures with respect to the threshold size, the PSD parameters, and the retrieval algorithms are presented. The new dataset is compared to model estimates, pointing to areas for model improvement. Cloud ice analyses from the European Centre for Medium-Range Weather Forecasts model agree well with the small IWC from CloudSat. The finite-volume multi-scale modeling framework model underestimates total IWC at 147 and 215 hPa, while overestimating the fractional contribution from the small ice species. These results are discussed in terms of their applications to, and implications for, the evaluation of global atmospheric models, providing constraints on the representations of cloud feedback and precipitation in global models, which in turn can help reduce uncertainties associated with climate change projections. Figure 1. A sample lognormal ice number distribution (red curve), and the corresponding mass distribution (black curve).
The dotted line represents the cutoff size for IWC partitioning (Dc = 100 µm as an example). The partial integrals of the mass distribution for particles smaller and larger than Dc correspond to IWC<100 (green area) and IWC>100 (blue area), respectively.
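The partitioning step, integrating the mass distribution on either side of the cutoff Dc, can be sketched in a few lines. Assuming, purely for illustration (this is not the CloudSat retrieval itself), a lognormal number PSD with parameters (mu, sigma) in ln D and particle mass proportional to D³, the mass distribution is again lognormal with location mu + 3σ², so the small-particle IWC fraction reduces to a normal CDF:

```python
from math import erf, log, sqrt

def small_ice_fraction(dc, mu, sigma):
    """Fraction of ice water content carried by particles with D < dc, for a
    lognormal number PSD with parameters (mu, sigma) in ln D. Since mass goes
    as D**3, the mass distribution is lognormal with location mu + 3*sigma**2;
    the partial integral below Dc is then a Gaussian CDF in log-size."""
    mu_mass = mu + 3.0 * sigma ** 2
    z = (log(dc) - mu_mass) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + erf(z))
```

The large-particle fraction is simply one minus this value, mirroring the green/blue partial integrals in Figure 1.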

  4. An in situ approach to study trace element partitioning in the laser heated diamond anvil cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petitgirard, S.; Mezouar, M.; Borchert, M.

    2012-01-15

Data on the partitioning behavior of elements between different phases at in situ conditions are crucial for the understanding of element mobility, especially for geochemical studies. Here, we present results of in situ partitioning of trace elements (Zr, Pd, and Ru) between silicate and iron melts, up to 50 GPa and 4200 K, using a modified laser heated diamond anvil cell (DAC). This new experimental set up allows simultaneous collection of x-ray fluorescence (XRF) and x-ray diffraction (XRD) data as a function of time using the high pressure beamline ID27 (ESRF, France). The technique enables the simultaneous detection of sample melting, based on the appearance of diffuse scattering in the XRD pattern characteristic of the structure factor of liquids, and measurements of elemental partitioning in the sample using XRF, before, during, and after laser heating in the DAC. We were able to detect element concentrations as low as a few ppm (2-5 ppm) on standard solutions. In situ measurements are complemented by mapping of the chemical partitioning of the trace elements in the quenched samples after laser heating to constrain the partitioning data. Our first results indicate a strong partitioning of Pd and Ru into the metallic phase, while Zr remains clearly incompatible with iron. This novel approach extends the pressure and temperature range of partitioning experiments beyond those derived from quenched samples in large volume presses and could bring new insight into the early history of Earth.

  5. Partitioning of organic carbon among density fractions in surface sediments of Fiordland, New Zealand

    NASA Astrophysics Data System (ADS)

    Cui, Xingqian; Bianchi, Thomas S.; Hutchings, Jack A.; Savage, Candida; Curtis, Jason H.

    2016-03-01

Transport of particles plays a major role in redistributing organic carbon (OC) along coastal regions. In particular, fjords as sites of carbon burial have recently been shown to be even more important globally than previously thought. In this study, we used six surface sediments from Fiordland, New Zealand, to investigate the transport of particles and OC based on density fractionation. Bulk, biomarker, and principal component analyses were applied to density fractions with ranges of <1.6, 1.6-2.0, 2.0-2.5, and >2.5 g cm-3. Our results revealed various patterns of OC partitioning at different locations along fjords, likely due to selective transport of higher-density but smaller-size particles along fjord head-to-mouth transects. We also found preferential leaching of certain biomarkers (e.g., lignin) over others (e.g., fatty acids) during the density fractionation procedure, which altered lignin-based degradation indices. Finally, our results indicated various patterns of OC partitioning across density fractions among different coastal systems. We further propose that a combination of particle size-density fractionation is needed to better understand the transport and distribution of particles and OC.

  6. CD process control through machine learning

    NASA Astrophysics Data System (ADS)

    Utzny, Clemens

    2016-10-01

For the specific requirements of the 14nm and 20nm site applications, a new CD map approach was developed at the AMTC. This approach relies on a well-established machine learning technique called recursive partitioning. Recursive partitioning is a powerful technique that creates a decision tree by successively testing whether the quantity of interest can be explained by one of the supplied covariates. The test performed is generally a statistical test with a pre-supplied significance level. Once the test indicates a significant association between the variable of interest and a covariate, a split is performed at a threshold value which minimizes the variation within the newly attained groups. This partitioning recurses until either no significant association can be detected or the resulting subgroup size falls below a pre-supplied level.
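A minimal sketch of recursive partitioning as described above, with a variance-reduction stopping rule standing in for the statistical significance test (all names hypothetical, single covariate only):

```python
def best_split(x, y):
    """Find the threshold on covariate x that minimizes the within-group
    variation (sum of squared deviations) of y across the two groups."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    xs = [x[i] for i in order]
    ys = [y[i] for i in order]

    def sse(v):
        m = sum(v) / len(v)
        return sum((t - m) ** 2 for t in v)

    best = (float("inf"), None)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        cost = sse(ys[:i]) + sse(ys[i:])
        best = min(best, (cost, (xs[i - 1] + xs[i]) / 2))
    return best

def grow_tree(x, y, min_size=5, min_gain=0.1):
    """Recursive partitioning sketch: split while within-group variation drops
    by at least `min_gain` (a stand-in for the pre-supplied significance test)
    and the resulting subgroups stay above `min_size`."""
    def sse(v):
        m = sum(v) / len(v)
        return sum((t - m) ** 2 for t in v)

    if len(y) < 2 * min_size:
        return {"leaf": sum(y) / len(y)}
    cost, thr = best_split(x, y)
    if thr is None or sse(y) - cost < min_gain * sse(y):
        return {"leaf": sum(y) / len(y)}
    left = [i for i in range(len(x)) if x[i] <= thr]
    right = [i for i in range(len(x)) if x[i] > thr]
    if len(left) < min_size or len(right) < min_size:
        return {"leaf": sum(y) / len(y)}
    return {"split": thr,
            "lo": grow_tree([x[i] for i in left], [y[i] for i in left], min_size, min_gain),
            "hi": grow_tree([x[i] for i in right], [y[i] for i in right], min_size, min_gain)}
```

On a step function the tree recovers the jump location as its first split and stops once the subgroups are homogeneous.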

  7. Partitioning of Nanoparticles into Organic Phases and Model Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Posner, J.D.; Westerhoff, P.; Hou, W-C.

    2011-08-25

There is a recognized need to understand and predict the fate, transport and bioavailability of engineered nanoparticles (ENPs) in aquatic and soil ecosystems. Recent research focuses on either collection of empirical data (e.g., removal of a specific NP through water or soil matrices under variable experimental conditions) or precise NP characterization (e.g. size, degree of aggregation, morphology, zeta potential, purity, surface chemistry, and stability). However, it is almost impossible to transition from these precise measurements to models suitable to assess the NP behavior in the environment with complex and heterogeneous matrices. For decades, the USEPA has developed and applied basic partitioning parameters (e.g., octanol-water partition coefficients) and models (e.g., EPI Suite, ECOSAR) to predict the environmental fate, bioavailability, and toxicity of organic pollutants (e.g., pesticides, hydrocarbons, etc.). In this project we have investigated the hypothesis that NP partition coefficients between water and organic phases (octanol or lipid bilayer) are highly dependent on their physiochemical properties, aggregation, and the presence of natural constituents in aquatic environments (salts, natural organic matter), which may impact their partitioning into biological matrices (bioaccumulation) and human exposure (bioavailability) as well as their eventual usage in modeling the fate and bioavailability of ENPs. In this report, we use the terminology "partitioning" to operationally define the fraction of ENPs distributed among different phases. The mechanisms leading to this partitioning probably involve both chemical force interactions (hydrophobic association, hydrogen bonding, ligand exchange, etc.) and physical forces that bring the ENPs in close contact with the phase interfaces (diffusion, electrostatic interactions, mixing turbulence, etc.).
Our work focuses on partitioning, but also provides insight into the relative behavior of ENPs as either "more like dissolved substances" or "more like colloids", as the division between the behaviors of macromolecules versus colloids remains ill-defined. Below we detail our work on two broadly defined objectives: (i) partitioning of ENPs into octanol, lipid bilayer, and water, and (ii) disruption of lipid bilayers by ENPs. We have found that the partitioning of NPs reaches pseudo-equilibrium distributions between water and organic phases. The equilibrium partitioning most strongly depends on the particle surface charge, which leads us to the conclusion that electrostatic interactions are critical to understanding the fate of NPs in the environment. We also show that the kinetic rate at which particles partition is a function of their size (small particles partition faster by number), as can be predicted from simple DLVO models. We have found that particle number density is the most effective dosimetry to present our results and provide quantitative comparison across experiments and experimental platforms. Cumulatively, our work shows that lipid bilayers are a more effective organic phase than octanol because of their definable surface area and ease of interpretation of the results. Our early comparison of NP partitioning between water and lipids suggests that this measurement can be predictive of bioaccumulation in aquatic organisms. We have shown that nanoparticles disrupt lipid bilayer membranes and detail how the NP-bilayer interaction leads to the malfunction of lipid bilayers in regulating the fluxes of ionic charges and molecules. Our results show that the disruption of the lipid membranes is similar to that of the toxin melittin, except that single particles can disrupt a bilayer. We show that only a single particle is required to disrupt a 150 nm DOPC liposome.
The equilibrium leakage of membranes is a function of the particle number density and particle surface charge, consistent with results from our partitioning experiments. Our disruption experiments with varying surface functionality show that positively charged particles (polyamine) are most disruptive, consistent with in vitro toxicity panels using cell cultures. Overall, this project has resulted in 8 published or submitted archival papers and has been presented 12 times. We have trained five students and provided growth opportunities for a postdoc.

  8. Application-Controlled Demand Paging for Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Cox, Michael; Ellsworth, David; Kutler, Paul (Technical Monitor)

    1997-01-01

    In the area of scientific visualization, input data sets are often very large. In visualization of Computational Fluid Dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since the state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, paging, or by paged segments. In this paper we demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to poor performance. We then describe a paged segment system that we have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. We show that application control over some of these can significantly improve performance. We show that sparse traversal can be exploited by loading only those data actually required. We show also that application control over data loading can be exploited by 1) loading data from alternative storage format (in particular 3-dimensional data stored in sub-cubes), 2) controlling the page size. Both of these techniques effectively reduce the total memory required by visualization at run-time. We also describe experiments we have done on remote out-of-core visualization (when pages are read by demand from remote disk) whose results are promising.
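The paged-segment idea, application-controlled demand loading under a fixed memory budget, can be sketched as a small LRU page cache (a generic illustration, not the authors' system; `PageCache` and `read_page` are hypothetical names):

```python
from collections import OrderedDict

class PageCache:
    """Minimal application-level demand paging sketch: data lives in
    fixed-size pages on disk and is loaded on first touch; least-recently-used
    pages are evicted once the memory budget is exceeded. `read_page` stands
    in for whatever out-of-core I/O the application performs."""

    def __init__(self, read_page, max_pages, page_size):
        self.read_page = read_page
        self.max_pages = max_pages
        self.page_size = page_size
        self.pages = OrderedDict()   # page id -> page data, LRU order
        self.faults = 0

    def __getitem__(self, index):
        pid, off = divmod(index, self.page_size)
        if pid in self.pages:
            self.pages.move_to_end(pid)         # mark most recently used
        else:
            self.faults += 1
            if len(self.pages) >= self.max_pages:
                self.pages.popitem(last=False)  # evict the LRU page
            self.pages[pid] = self.read_page(pid)
        return self.pages[pid][off]
```

Sparse traversal then pays only for the pages actually touched, and the page size is a tunable knob exactly as the abstract suggests.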

  9. Partition-free approach to open quantum systems in harmonic environments: An exact stochastic Liouville equation

    NASA Astrophysics Data System (ADS)

    McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.

    2017-03-01

We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibrated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.

  10. Lake Michigan Diversion Accounting land cover change estimation by use of the National Land Cover Dataset and raingage network partitioning analysis

    USGS Publications Warehouse

    Sharpe, Jennifer B.; Soong, David T.

    2015-01-01

    This study used the National Land Cover Dataset (NLCD) and developed an automated process for determining the area of the three land cover types, thereby allowing faster updating of future models, and for evaluating land cover changes by use of historical NLCD datasets. The study also carried out a raingage partitioning analysis so that the segmentation of land cover and rainfall in each modeled unit is directly applicable to the HSPF modeling. Historical and existing impervious, grass, and forest land acreages partitioned by percentages covered by two sets of raingages for the Lake Michigan diversion SCAs, gaged basins, and ungaged basins are presented.

  11. Structural partitioning of complex structures in the medium-frequency range. An application to an automotive vehicle

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2011-02-01

In a recent work [Journal of Sound and Vibration 323 (2009) 849-863] the authors presented an energy-density field approach for the vibroacoustic analysis of complex structures in the low and medium frequency ranges. In this approach, a local vibroacoustic energy model as well as a simplification of this model were constructed. In this paper, firstly, an extension of the previous theory is performed in order to include the case of general input forces; secondly, a structural partitioning methodology is presented along with a set of tools used for the construction of a partitioning. Finally, an application is presented for an automotive vehicle.

  12. A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems

    DTIC Science & Technology

    2005-05-01

Tabu Search. Mathematical and Computer Modeling 39: 599-616. 107 Daskin, M.S., E. Stern. 1981. A Hierarchical Objective Set Covering Model for EMS... A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems by Gary W. Kinney Jr., B.G.S., M.S. Dissertation Presented to the...DISTRIBUTION STATEMENT A Approved for Public Release Distribution Unlimited The University of Texas at Austin May, 2005

  13. Adaptively loaded IM/DD optical OFDM based on set-partitioned QAM formats.

    PubMed

    Zhao, Jian; Chen, Lian-Kuan

    2017-04-17

    We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
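Adaptive bit and power loading in general (not the SP-QAM-specific rule derived in the paper) can be sketched with a greedy, Levin-Campello-style allocator that repeatedly grants the next bit to the subcarrier where it costs the least incremental power:

```python
def greedy_bit_loading(snr, total_bits, gap_db=6.0):
    """Greedy bit-loading sketch: give each successive bit to the subcarrier
    with the smallest incremental power cost. Doubling the constellation size
    roughly doubles the required power, so the cost of bit (b+1) on subcarrier
    k grows as 2**b / snr[k]. A hypothetical stand-in for the SP-QAM loading
    algorithm of the paper; `gap_db` is an assumed SNR gap to capacity."""
    gap = 10 ** (gap_db / 10)
    bits = [0] * len(snr)
    power = [0.0] * len(snr)
    for _ in range(total_bits):
        # incremental power needed to add one more bit on each subcarrier
        costs = [gap * (2 ** (bits[k] + 1) - 2 ** bits[k]) / snr[k]
                 for k in range(len(snr))]
        k = min(range(len(snr)), key=costs.__getitem__)
        bits[k] += 1
        power[k] += costs[k]
    return bits, power
```

Subcarriers sitting in dispersion-induced spectral nulls (low SNR) naturally end up carrying few or no bits.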

  14. Partitioning and packing mathematical simulation models for calculation on parallel computers

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.; Milner, E. J.

    1986-01-01

    The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
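The packing step, assigning equations to a minimum number of processors subject to a per-processor compute budget, is a bin-packing problem; a first-fit-decreasing heuristic gives the flavor (a generic sketch under that assumption, not the report's specific algorithm):

```python
def pack_equations(costs, capacity):
    """First-fit-decreasing sketch of the packing step: place each equation
    (with an estimated per-step compute cost) into the first processor whose
    remaining capacity fits it, opening a new processor only when necessary."""
    processors = []   # each entry is [load, [equation indices]]
    for eq in sorted(range(len(costs)), key=lambda i: -costs[i]):
        for proc in processors:
            if proc[0] + costs[eq] <= capacity:
                proc[0] += costs[eq]
                proc[1].append(eq)
                break
        else:
            processors.append([costs[eq], [eq]])
    return processors
```

Processor utilization, as reported for the turbojet engine model, is then each processor's load divided by the capacity.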

  15. Joint image encryption and compression scheme based on IWT and SPIHT

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-03-01

A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), via the addition of encryption in the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.

  16. Going from microscopic to macroscopic on nonuniform growing domains.

    PubMed

    Yates, Christian A; Baker, Ruth E; Erban, Radek; Maini, Philip K

    2012-08-01

Throughout development, chemical cues are employed to guide the functional specification of underlying tissues while the spatiotemporal distributions of such chemicals can be influenced by the growth of the tissue itself. These chemicals, termed morphogens, are often modeled using partial differential equations (PDEs). The connection between discrete stochastic and deterministic continuum models of particle migration on growing domains was elucidated by Baker, Yates, and Erban [Bull. Math. Biol. 72, 719 (2010)], in which the migration of individual particles was modeled as an on-lattice position-jump process. We build on this work by incorporating a more physically reasonable description of domain growth. Instead of allowing underlying lattice elements to instantaneously double in size and divide, we allow incremental element growth and splitting upon reaching a predefined threshold size. Such a description of domain growth necessitates a nonuniform partition of the domain. We first demonstrate that an individual-based stochastic model for particle diffusion on such a nonuniform domain partition is equivalent to a PDE model of the same phenomenon on a nongrowing domain, provided the transition rates (which we derive) are chosen correctly and we partition the domain in the correct manner. We extend this analysis to the case where the domain is allowed to change in size, altering the transition rates as necessary. Through application of the master equation formalism we derive a PDE for particle density on this growing domain and corroborate our findings with numerical simulations.
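The key technical ingredient, transition rates chosen so that the jump process on a nonuniform partition matches the PDE, can be illustrated with a finite-volume-style rate choice (an assumption made for this sketch, not necessarily the authors' derived rates): with element widths h_i, taking T(i→i+1) = 2D/(h_i(h_i+h_{i+1})) and its leftward mirror keeps a spatially uniform *density* stationary, as a quick mean-field check confirms.

```python
def jump_rates(h, D=1.0):
    """Per-particle jump rates on a nonuniform 1-D partition with element
    widths h[i]. The finite-volume-style choice below (an assumption for this
    sketch) makes a uniform density, counts proportional to h, stationary."""
    n = len(h)
    right = [2 * D / (h[i] * (h[i] + h[i + 1])) for i in range(n - 1)]
    left = [2 * D / (h[i + 1] * (h[i] + h[i + 1])) for i in range(n - 1)]
    return right, left

def mean_field_step(counts, h, dt=1e-3, D=1.0):
    """One forward-Euler step of the mean particle numbers per element."""
    right, left = jump_rates(h, D)
    flux = [counts[i] * right[i] - counts[i + 1] * left[i]
            for i in range(len(h) - 1)]
    new = list(counts)
    for i, f in enumerate(flux):
        new[i] -= dt * f
        new[i + 1] += dt * f
    return new
```

Net flux between elements i and i+1 vanishes exactly when counts[i]/h[i] equals counts[i+1]/h[i+1], i.e. when the density is uniform, while total particle number is conserved by construction.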

  17. The heavy metal partition in size-fractions of the fine particles in agricultural soils contaminated by waste water and smelter dust.

    PubMed

    Zhang, Haibo; Luo, Yongming; Makino, Tomoyuki; Wu, Longhua; Nanzyo, Masami

    2013-03-15

The partitioning of pollutants in size-fractions of fine particles is particularly important to their migration and bioavailability in the soil environment. However, the impact of pollution sources on this partitioning was seldom addressed in previous studies. In this study, a continuous-flow ultracentrifugation method was developed to separate three size fractions (<1 μm, <0.6 μm and <0.2 μm) of submicron particles from soils polluted by wastewater and smelter dust, respectively. The mineralogy and physicochemical properties of each size-fraction were characterized by X-ray diffraction, transmission electron microscopy, etc. The total content of the polluting metals and their chemical speciation were measured. A higher enrichment factor of the metals in the fractions of <1 μm or less was observed in the soil contaminated by wastewater than in that contaminated by smelter dust. Organic substances in the wastewater and calcite from lime application are assumed to play an important role in the metal accumulation in the fine particles of the wastewater-polluted soil, while the metal accumulation in the fine particles of the smelter-dust-polluted soil is mainly associated with Mn oxides. Cadmium speciation in both soils is dominated by the dilute-acid-soluble form, and lead speciation in the smelter-dust-polluted soil is dominated by the reducible form in all particles. This implies that the polluted soils might pose a high risk to human health and ecosystems due to the high bioaccessibility of the metals as well as the mobility of the fine particles in soil. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Food resource partitioning by nine sympatric darter species

    USGS Publications Warehouse

    van Snik, Gray E.; Boltz, J.M.; Kellogg, K.A.; Stauffer, J.R.

    1997-01-01

    We compared the diets among members of the diverse darter community of French Creek, Pennsylvania, in relation to seasonal prey availability, feeding ontogeny, and sex. Prey taxa and size attributes were characterized for nine syntopic darter species; taxon, size, and availability of macroinvertebrate prey were also analyzed from Surber samples. In general, darters fed opportunistically on immature insects; few taxa were consumed in greater proportions than they were found in the environment. Some variation in diet composition was expressed, however, among different life stages and species. Juvenile darters consumed smaller prey and more chironomids than did adults. Etheostoma blennioides and E. zonale consumed the fewest taxa (2-3), whereas E. maculatum, E. variatum, and Percina evides had the most diverse diets (7-10 taxa). Etheostoma maculatum, E. flabellare, E. variatum, and P. evides consumed larger prey (1-13 mm in standard length), whereas E. blennioides, E. caeruleum, E. camurum, E. tippecanoe, and E. zonale rarely consumed prey longer than 6 mm. Percina evides fed on larger prey, fewer chironomids, and more fish eggs than Etheostoma species. Females consumed more prey than males and overlapped less in diet composition with males during the spawning season than afterwards. Fish diets did not seem related to habitat use. Greater trophic partitioning was observed in April, when prey resources were scarce, than in July, when prey were abundant. Darter species fed opportunistically when prey were dense, whereas they partitioned food resources mainly through the prey size dimension when prey were less abundant. The divergence of darter diets during a period of low food availability may be attributed to interspecific competition. Alternatively, the greater abundance of large prey in April may have facilitated better prey size selectivity, resulting in less overlap among darter species.

  19. Evidence of Niche Partitioning under Ontogenetic Influences among Three Morphologically Similar Siluriformes in Small Subtropical Streams

    PubMed Central

    Bonato, Karine Orlandi; Fialho, Clarice Bernhardt

    2014-01-01

    Ontogenetic influences on patterns of niche breadth and feeding overlap were investigated in three species of Siluriformes (Heptapterus sp., Rhamdia quelen and Trichomycterus poikilos), aiming to understand species coexistence. Samplings were conducted bimonthly by electrofishing from June/2012 to June/2013 in ten streams of the northwestern state of Rio Grande do Sul, Brazil. The stomach contents of 1,948 individuals were analyzed by the volumetric method, with 59 food items identified. In general, Heptapterus sp. consumed a high proportion of Aegla sp., terrestrial plant remains and Megaloptera; R. quelen consumed fish and Oligochaeta, followed by Aegla sp.; while the diet of T. poikilos was based on Simuliidae, Ephemeroptera and Trichoptera. Species segregation was observed in the NMDS. PERMANOVA revealed feeding differences among species and among combinations of species and size classes. IndVal showed which items were indicators of these differences. Niche breadth values were high for all species; they were low only for the larger size classes of R. quelen and Heptapterus sp., while values for T. poikilos were more uniform across sizes. Overall, the species showed low feeding overlap values. High feeding overlap was most frequent for the interaction between Heptapterus sp. and T. poikilos. The null model confirmed the niche partitioning between the species. High and intermediate feeding overlap values were most frequent among the smaller size classes, and the null model showed resource sharing between the species/size classes. Therefore, overall, the species showed resource partitioning through the use of occasional items. However, these species share resources mainly in the early ontogenetic stages, until pronounced changes in morphological characteristics lead to trophic niche expansion and the apparent segregation observed. PMID:25340614
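    The abstract does not name the overlap index used, but pairwise feeding overlap in studies of this kind is commonly quantified with Pianka's (1973) symmetric index; the sketch below is therefore an assumed, illustrative choice rather than this paper's documented method.

```python
import numpy as np

def pianka_overlap(p, q):
    """Pianka's (1973) symmetric niche-overlap index for two diets given as
    (proportional) use of the same resource categories; 0 = no overlap,
    1 = complete overlap."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()  # normalize to proportions
    return float(np.sum(p * q) / np.sqrt(np.sum(p ** 2) * np.sum(q ** 2)))

# Identical diets overlap completely; disjoint diets not at all.
print(pianka_overlap([0.5, 0.3, 0.2], [0.5, 0.3, 0.2]))  # -> 1.0
print(pianka_overlap([1.0, 0.0], [0.0, 1.0]))            # -> 0.0
```

Null-model tests such as those reported above typically shuffle the resource-use matrix and compare the observed mean overlap against the randomized distribution.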

  20. Summer-winter concentrations and gas-particle partitioning of short chain chlorinated paraffins in the atmosphere of an urban setting.

    PubMed

    Wang, Thanh; Han, Shanlong; Yuan, Bo; Zeng, Lixi; Li, Yingming; Wang, Yawei; Jiang, Guibin

    2012-12-01

    Short chain chlorinated paraffins (SCCPs) are semi-volatile chemicals that are considered persistent in the environment, potentially toxic and subject to long-range transport. This study investigates the concentrations and gas-particle partitioning of SCCPs at an urban site in Beijing during summer and wintertime. The total atmospheric SCCP levels ranged from 1.9 to 33.0 ng/m(3) during wintertime. Significantly higher levels were found during the summer (range 112-332 ng/m(3)). The average fraction of total SCCPs in the particle phase (ϕ) was 0.67 during wintertime but decreased significantly during the summer (ϕ = 0.06). The ten- and eleven-carbon-chain homologues with five to eight chlorine atoms were the predominant SCCP formula groups in air. Significant linear correlations were found between the gas-particle partition coefficients and the predicted subcooled vapor pressures and octanol-air partition coefficients. The gas-particle partitioning of SCCPs was further investigated and compared with both the Junge-Pankow adsorption and K(oa)-based absorption models. Copyright © 2012 Elsevier Ltd. All rights reserved.
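    For reference, the Junge-Pankow adsorption model named above predicts the particle-bound fraction ϕ from the subcooled liquid vapour pressure. A minimal sketch follows; the default θ is a commonly cited urban-air figure, not a value from this study.

```python
def junge_pankow_phi(p_L, theta=1.1e-5, c=17.2):
    """Particle-bound fraction from the Junge-Pankow adsorption model:
    phi = c*theta / (p_L + c*theta), where p_L is the subcooled liquid
    vapour pressure (Pa), theta the aerosol surface area per air volume
    (cm2/cm3; 1.1e-5 is a typical urban value), and c ~ 17.2 Pa cm."""
    return c * theta / (p_L + c * theta)

# Less volatile compounds (lower p_L) partition more strongly to particles,
# consistent with the summer/winter phi contrast reported above.
print(junge_pankow_phi(1e-3) < junge_pankow_phi(1e-6))  # -> True
```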

  1. Food and disturbance effects on Arctic benthic biomass and production size spectra

    NASA Astrophysics Data System (ADS)

    Górska, Barbara; Włodarska-Kowalczuk, Maria

    2017-03-01

    Body size is a fundamental biological unit that is closely coupled to key ecological properties and processes. At the community level, changes in size distributions may influence energy transfer pathways in benthic food webs and ecosystem carbon cycling; nevertheless they remain poorly explored in benthic systems, particularly in the polar regions. Here, we present the first assessment of the patterns of benthic biomass size spectra in Arctic coastal sediments and explore the effects of glacial disturbance and food availability on the partitioning of biomass and secondary productivity among size-defined components of benthic communities. The samples were collected in two Arctic fjords off west Spitsbergen (76 and 79°N), at 6 stations that represent three regimes of varying food availability (indicated by chlorophyll a concentration in the sediments) and glacial sedimentation disturbance intensity (indicated by sediment accumulation rates). The organisms were measured using image analysis to assess the biovolume, biomass and the annual production of each individual. The shape of benthic biomass size spectra at most stations was bimodal, with the location of a trough and peaks similar to those previously reported in lower latitudes. In undisturbed sediments macrofauna comprised 89% of the total benthic biomass and 56% of the total production. The lower availability of food resources seemed to suppress the biomass and secondary production across the whole size spectra (a 6-fold decrease in biomass and a 4-fold decrease in production in total) rather than reshape the spectrum. At locations where poor nutritional conditions were coupled with disturbance, the biomass was strongly reduced in selected macrofaunal size classes (class 10 and 11), while meiofaunal biomass and production were much higher, most likely due to a release from macrofaunal predation and competition pressure. 
As a result, the partitioning of benthic biomass and production shifted towards meiofauna (39% of biomass and 83% of production), which took over the key metazoan role in processing organic matter in the sediments. Macrofaunal nematodes composed a considerable portion of the benthic community in terms of biomass (up to 9%) and production (up to 12%), but only in undisturbed sediments with high organic matter content. Our study indicates that food availability and disturbance control the total amount and the partitioning of biomass and production among the size classes in Arctic benthic communities.

  2. Emission characteristics for gaseous- and size-segregated particulate PAHs in coal combustion flue gas from circulating fluidized bed (CFB) boiler.

    PubMed

    Wang, Ruwei; Liu, Guijian; Sun, Ruoyu; Yousaf, Balal; Wang, Jizhong; Liu, Rongqiong; Zhang, Hong

    2018-07-01

    The partitioning behavior of polycyclic aromatic hydrocarbons (PAHs) between gaseous and particulate phases from coal-fired power plants (CFPPs) is critically important to predict PAH removal by dust control devices. In this study, 16 US-EPA priority PAHs in gaseous and size-segregated particulate phases at the inlet and outlet of the fabric filter unit (FFs) of a circulating fluidized bed (CFB) boiler were analyzed. The partitioning mechanisms of PAHs between gaseous and particulate phases and in particles of different size classes were investigated. We found that the removal efficiencies of PAHs are 45.59% and 70.67-89.06% for the gaseous and particulate phases, respectively. The gaseous phase mainly contains low molecular weight (LMW) PAHs (2- and 3-ring PAHs), which is quite different from the particulate phase that mainly contains medium and high molecular weight (MMW and HMW) PAHs (4- to 6-ring PAHs). The fractions of LMW PAHs show a declining trend with decreasing particle size. The gas-particle partitioning of PAHs is primarily controlled by organic carbon absorption; in addition, it shows a clear dependence on particle size. A plot of log(TPAH/PM) against log Dp shows that all slope values were below -1, suggesting that PAHs were mainly adsorbed to particulates. The adsorption effect in size-segregated PMs is more evident for HMW PAHs than for LMW PAHs. The particle size distributions (PSDs) of individual PAHs show that most PAHs exhibit bimodal structures, with one mode peaking in the accumulation size range (2.1-1.1 μm) and another peaking in the coarse size range (5.8-4.7 μm). The intensities of these two peaks vary as a function of the ring number of the PAHs, which is likely attributable to the Kelvin effect, whereby the less volatile HMW PAH species preferentially condense onto the finer particulates. 
The emission factor of PAHs was calculated as 3.53 mg/kg of coal burned, with overall mean EF PAH values of 0.55 and 2.98 mg/kg for the gaseous and particulate phases, respectively. Moreover, the average emission amounts of PAHs for the investigated CFPP were 1016.6 g/day and 371,073.6 g/year. Copyright © 2018 Elsevier Ltd. All rights reserved.
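    The slope diagnostic used in this abstract, regressing log(TPAH/PM) against log Dp and reading slopes at or below -1 as adsorption-dominated partitioning, can be sketched as follows. The particle-size data here are synthetic, for illustration only.

```python
import numpy as np

# Synthetic illustration of the slope diagnostic: PAH load per unit particle
# mass versus particle diameter D_p (values invented for demonstration).
d_p = np.array([0.65, 1.1, 2.1, 4.7, 5.8])           # stage cutpoints, um
tpah_per_pm = np.array([8.0, 4.2, 1.9, 0.75, 0.58])  # PAH mass / particle mass

# Fit log10(TPAH/PM) = slope * log10(D_p) + intercept.
slope, intercept = np.polyfit(np.log10(d_p), np.log10(tpah_per_pm), 1)
print(slope < -1)  # finer particles carry disproportionately more PAH
```

A slope of exactly -1 corresponds to PAH mass scaling with particle surface area rather than volume; steeper slopes indicate an even stronger enrichment on fine particles.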

  3. Size exclusion chromatographic analysis of refuse-derived fuel for mycotoxins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicking, M.K.; Kniseley, R.N.

    1980-11-01

    A Styragel packing material is characterized in several solvent systems by using a series of test solutes and mycotoxins. Differences in interpretation from other work are discussed. Three different separation modes are generated on one stationary phase. An improved separation of mycotoxins from a complicated matrix results from simultaneously using size exclusion and liquid-liquid partitioning. 4 figures, 3 tables.

  4. Nudging physician prescription decisions by partitioning the order set: results of a vignette-based study.

    PubMed

    Tannenbaum, David; Doctor, Jason N; Persell, Stephen D; Friedberg, Mark W; Meeker, Daniella; Friesema, Elisha M; Goldstein, Noah J; Linder, Jeffrey A; Fox, Craig R

    2015-03-01

    Healthcare professionals are rapidly adopting electronic health records (EHRs). Within EHRs, seemingly innocuous menu design configurations can influence provider decisions for better or worse. The purpose of this study was to examine whether the grouping of menu items systematically affects prescribing practices among primary care providers. We surveyed 166 primary care providers in a research network of practices in the greater Chicago area, of whom 84 responded (51% response rate). Respondents and non-respondents were similar on all observable dimensions except that respondents were more likely to work in an academic setting. The questionnaire consisted of seven clinical vignettes. Each vignette described typical signs and symptoms for acute respiratory infections, and providers chose treatments from a menu of options. For each vignette, providers were randomly assigned to one of two menu partitions. For antibiotic-inappropriate vignettes, the treatment menu either listed over-the-counter (OTC) medications individually while grouping prescriptions together, or displayed the reverse partition. For antibiotic-appropriate vignettes, the treatment menu either listed narrow-spectrum antibiotics individually while grouping broad-spectrum antibiotics, or displayed the reverse partition. The main outcome was provider treatment choice. For antibiotic-inappropriate vignettes, we categorized responses as prescription drugs or OTC-only options. For antibiotic-appropriate vignettes, we categorized responses as broad- or narrow-spectrum antibiotics. Across vignettes, there was an 11.5 percentage point reduction in choosing aggressive treatment options (e.g., broad-spectrum antibiotics) when aggressive options were grouped compared to when those same options were listed individually (95% CI: 2.9 to 20.1%; p = .008). Provider treatment choice appears to be influenced by the grouping of menu options, suggesting that the layout of EHR order sets is not an arbitrary exercise. 
The careful crafting of EHR order sets can serve as an important opportunity to improve patient care without constraining physicians' ability to prescribe what they believe is best for their patients.

  5. Controlled drug release from hydrogels for contact lenses: Drug partitioning and diffusion.

    PubMed

    Pimenta, A F R; Ascenso, J; Fernandes, J C S; Colaço, R; Serro, A P; Saramago, B

    2016-12-30

    Optimization of drug delivery from drug-loaded contact lenses requires understanding of the drug transport mechanisms through hydrogels, which relies on knowledge of drug partition and diffusion coefficients. We chose, as model systems, two materials used in contact lenses, a poly-hydroxyethylmethacrylate (pHEMA) based hydrogel and a silicone based hydrogel, and three drugs with different sizes and charges: chlorhexidine, levofloxacin and diclofenac. Equilibrium partition coefficients were determined at different ionic strengths and pH, using water (pH 5.6) and PBS (pH 7.4). The measured partition coefficients were related to the polymer volume fraction in the hydrogel through the introduction of an enhancement factor, following the approach developed by the group of C. J. Radke (Kotsmar et al., 2012; Liu et al., 2013). This factor may be decomposed into the product of three other factors, E_HS, E_el and E_ad, which account for, respectively, hard-sphere size exclusion, electrostatic interactions, and specific solute adsorption. While E_HS and E_el are close to 1, E_ad > 1 in all cases, suggesting strong specific interactions between the drugs and the hydrogels. Adsorption was maximal for chlorhexidine on the silicone based hydrogel, in water, due to strong hydrogen bonding. The effective diffusion coefficients, D_e, were determined from the drug release profiles. Estimates of the diffusion coefficients of the non-adsorbed solutes, D = D_e × E_ad, allowed comparison with theories for solute diffusion in the absence of specific interaction with the polymeric membrane. Copyright © 2016 Elsevier B.V. All rights reserved.
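    The enhancement-factor bookkeeping above amounts to two small relations. A sketch using the abstract's own notation (E = E_HS × E_el × E_ad, and D = D_e × E_ad), with purely illustrative numbers:

```python
def enhancement_factor(e_hs, e_el, e_ad):
    """Overall partition enhancement E = E_HS * E_el * E_ad
    (hard-sphere exclusion x electrostatics x specific adsorption)."""
    return e_hs * e_el * e_ad

def free_diffusivity(d_e, e_ad):
    """Diffusion coefficient of the non-adsorbed solute, D = D_e * E_ad,
    recovered from the effective value fitted to a release profile."""
    return d_e * e_ad

# With E_HS and E_el near 1 (as reported), E is dominated by adsorption;
# the numbers below are illustrative, not measured values from the paper.
print(enhancement_factor(1.0, 1.0, 5.0))  # -> 5.0
print(free_diffusivity(2.0e-12, 5.0))     # -> 1e-11
```

The practical point is that a large E_ad both raises drug loading (through E) and makes the effective diffusivity D_e an underestimate of the free-solute value D.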

  6. A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method

    NASA Astrophysics Data System (ADS)

    Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.

    2007-09-01

    Trigonometrically fitted symplectic partitioned Runge-Kutta (SPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the set of functions sin(wx) and cos(wx), w ∈ R. We modify a fifth-order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.
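    As a minimal illustration of the symplectic partitioned idea (first-order symplectic Euler, not the authors' fifth-order trigonometrically fitted method), the staggered kick/drift update below keeps the energy error of a harmonic oscillator bounded over many periods, in contrast to explicit Euler, whose energy drifts without bound.

```python
def symplectic_euler(q, p, dt, steps, omega=1.0):
    """First-order symplectic (partitioned) Euler for H = p^2/2 + omega^2 q^2/2.
    Update p with the current q, then q with the *new* p; this staggering is
    the simplest member of the partitioned Runge-Kutta family."""
    for _ in range(steps):
        p -= dt * omega ** 2 * q   # "kick": momentum update
        q += dt * p                # "drift": position update with new p
    return q, p

# ~16 oscillation periods; the energy error stays O(dt) rather than growing.
q, p = symplectic_euler(1.0, 0.0, dt=0.01, steps=10_000)
energy = 0.5 * p * p + 0.5 * q * q
print(abs(energy - 0.5) < 1e-2)  # -> True
```

Trigonometric fitting, as in the paper, additionally tunes the coefficients so that solutions built from sin(wx) and cos(wx) are integrated exactly for a known frequency w.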

  7. Comparative analyses of plastid genomes from fourteen Cornales species: inferences for phylogenetic relationships and genome evolution.

    PubMed

    Fu, Chao-Nan; Li, Hong-Tao; Milne, Richard; Zhang, Ting; Ma, Peng-Fei; Yang, Jing; Li, De-Zhu; Gao, Lian-Ming

    2017-12-08

    The Cornales is the basal lineage of the asterids, the largest angiosperm clade. Phylogenetic relationships within the order were previously not fully resolved. Fifteen plastid genomes representing 14 species, ten genera and seven families of Cornales were newly sequenced for comparative analyses of genome features, evolution, and phylogenomics based on different partitioning schemes and filtering strategies. All plastomes of the 14 Cornales species had the typical quadripartite structure, with a genome size ranging from 156,567 bp to 158,715 bp, comprising two inverted repeats (25,859-26,451 bp) separated by a large single-copy region (86,089-87,835 bp) and a small single-copy region (18,250-18,856 bp). These plastomes encoded the same set of 114 unique genes, including 31 transfer RNA genes, 4 ribosomal RNA genes and 79 protein-coding genes, in an identical gene order across all examined Cornales species. Two genes (rpl22 and ycf15) contained premature stop codons in seven and five species, respectively. The phylogenetic relationships among all sampled species were fully resolved with maximum support. Different filtering strategies (none, light and strict) of the sequence alignment did not have an effect on these relationships. The topology recovered from coding and noncoding data sets was the same as for the whole plastome, regardless of filtering strategy. Moreover, mutational hotspots and highly informative regions were identified. Phylogenetic relationships among families and intergeneric relationships within families of Cornales were well resolved. Different filtering strategies and partitioning schemes did not influence the relationships. Plastid genomes have great potential to resolve deep phylogenetic relationships of plants.

  8. National Land Cover Database 2001 (NLCD01)

    USGS Publications Warehouse

    LaMotte, Andrew E.

    2016-01-01

    This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  9. National Land Cover Database 2001 (NLCD01) Imperviousness Layer Tile 1, Northwest United States: IMPV01_1

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (browse graphic: nlcd01-partition). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones (browse graphic: nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  10. National Land Cover Database 2001 (NLCD01) Tree Canopy Layer Tile 2, Northeast United States: CNPY01_2

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (browse graphic: nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones (browse graphic: nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  11. National Land Cover Database 2001 (NLCD01) Imperviousness Layer Tile 4, Southeast United States: IMPV01_4

    USGS Publications Warehouse

    Wieczorek, Michael; LaMotte, Andrew E.

    2010-01-01

    This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (browse graphic: nlcd01-partition). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones (browse graphic: nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  12. National Land Cover Database 2001 (NLCD01) Tree Canopy Layer Tile 1, Northwest United States: CNPY01_1

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (browse graphic: nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones (browse graphic: nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  13. National Land Cover Database 2001 (NLCD01) Imperviousness Layer Tile 2, Northeast United States: IMPV01_2

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (browse graphic: nlcd01-partition). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones (browse graphic: nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  14. National Land Cover Database 2001 (NLCD01) Tree Canopy Layer Tile 4, Southeast United States: CNPY01_4

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (browse graphic: nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones (browse graphic: nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  15. National Land Cover Database 2001 (NLCD01) Imperviousness Layer Tile 3, Southwest United States: IMPV01_3

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition.jpg The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  16. National Land Cover Database 2001 (NLCD01) Tree Canopy Layer Tile 3, Southwest United States: CNPY01_3

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition.jpg The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  17. The final cut: cell polarity meets cytokinesis at the bud neck in S. cerevisiae.

    PubMed

    Juanes, Maria Angeles; Piatti, Simonetta

    2016-08-01

    Cell division is a fundamental but complex process that gives rise to two daughter cells. It includes an ordered set of events, altogether called "the cell cycle", that culminate with cytokinesis, the final stage of mitosis leading to the physical separation of the two daughter cells. Symmetric cell division equally partitions cellular components between the two daughter cells, which are therefore identical to one another and often share the same fate. In many cases, however, cell division is asymmetrical and generates two daughter cells that differ in specific protein inheritance, cell size, or developmental potential. The budding yeast Saccharomyces cerevisiae has proven to be an excellent system to investigate the molecular mechanisms governing asymmetric cell division and cytokinesis. Budding yeast is highly polarized during the cell cycle and divides asymmetrically, producing two cells with distinct sizes and fates. Many components of the machinery establishing cell polarization during budding are relocalized to the division site (i.e., the bud neck) for cytokinesis. In this review we recapitulate how budding yeast cells undergo polarized processes at the bud neck for cell division.

  18. Adaptive hybrid simulations for multiscale stochastic reaction networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of an SRN evolve, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.

  19. Adaptive hybrid simulations for multiscale stochastic reaction networks.

    PubMed

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of an SRN evolve, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
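    The discrete/continuous split that such hybrid methods rely on can be sketched in a few lines. This is a hypothetical illustration, not the authors' algorithm: species whose copy numbers exceed an arbitrary threshold are treated as continuous, and a reaction is simulated continuously only if every species it touches is continuous. The threshold, species names, and reaction encoding are all assumptions for the sketch.

```python
def partition_network(copy_numbers, reactions, threshold=100):
    """Split species and reactions into continuous/discrete subsets.

    copy_numbers: dict mapping species name -> current copy number.
    reactions: list of sets, each the species a reaction involves.
    threshold: illustrative copy-number cutoff for the continuous regime.
    """
    # Species with high copy numbers can be approximated continuously.
    continuous_species = {s for s, n in copy_numbers.items() if n >= threshold}
    discrete_species = set(copy_numbers) - continuous_species
    # A reaction is continuous only if all its species are continuous.
    continuous_rxns = [r for r in reactions if r <= continuous_species]
    discrete_rxns = [r for r in reactions if not r <= continuous_species]
    return continuous_species, discrete_species, continuous_rxns, discrete_rxns

# Toy gene-expression network: low-copy mRNA forces its reactions to stay discrete.
cs, ds, cr, dr = partition_network(
    {"mRNA": 5, "protein": 1200, "dimer": 800},
    [{"mRNA"}, {"protein", "dimer"}, {"mRNA", "protein"}],
)
```

    In an adaptive scheme like the one described above, this partitioning step would be re-run whenever copy numbers cross the regime boundary during simulation.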

  20. Cluster Free Energies from Simple Simulations of Small Numbers of Aggregants: Nucleation of Liquid MTBE from Vapor and Aqueous Phases.

    PubMed

    Patel, Lara A; Kindt, James T

    2017-03-14

    We introduce a global fitting analysis method to obtain free energies of association of noncovalent molecular clusters using equilibrated cluster size distributions from unbiased constant-temperature molecular dynamics (MD) simulations. Because the systems simulated are small enough that the law of mass action does not describe the aggregation statistics, the method relies on iteratively determining a set of cluster free energies that, using appropriately weighted sums over all possible partitions of N monomers into clusters, produces the best-fit size distribution. The quality of these fits can be used as an objective measure of self-consistency to optimize the cutoff distance that determines how clusters are defined. To showcase the method, we have simulated a united-atom model of methyl tert-butyl ether (MTBE) in the vapor phase and in explicit water solution over a range of system sizes (up to 95 MTBE in the vapor phase and 60 MTBE in the aqueous phase) and concentrations at 273 K. The resulting size-dependent cluster free energy functions follow a form derived from classical nucleation theory (CNT) quite well over the full range of cluster sizes, although deviations are more pronounced for small cluster sizes. The CNT fit to cluster free energies yielded surface tensions that were in both cases lower than those for the simulated planar interfaces. We use a simple model to derive a condition for minimizing non-ideal effects on cluster size distributions and show that the cutoff distance that yields the best global fit is consistent with this condition.
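    The "sums over all possible partitions of N monomers into clusters" mentioned above range over the integer partitions of N, one term per multiset of cluster sizes. A small recursive enumerator, shown purely as an illustration of the combinatorics rather than the authors' fitting code, makes clear how this space grows with N; in the actual method each partition's term would be weighted by the cluster free energies.

```python
def integer_partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples of cluster sizes."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    # Choose the largest cluster first, then partition the remainder.
    for first in range(min(n, max_part), 0, -1):
        for rest in integer_partitions(n - first, first):
            yield (first,) + rest

# All ways to split 5 monomers into clusters: (5,), (4,1), ..., (1,1,1,1,1).
parts5 = list(integer_partitions(5))
```

    The count grows rapidly (7 partitions for N = 5, 42 for N = 10, and over 10^8 for N = 95, the largest vapor-phase system above), which is why the paper's global fit is formulated over weighted partition sums rather than explicit enumeration at large N.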

  1. Does the oviparity-viviparity transition alter the partitioning of yolk in embryonic snakes?

    PubMed

    Wu, Yan-Qing; Qu, Yan-Fu; Wang, Xue-Ji; Gao, Jian-Fang; Ji, Xiang

    2017-11-29

    The oviparity-viviparity transition is a major evolutionary event, likely altering the reproductive process of the organisms involved. Residual yolk, a portion of yolk remaining unutilized at hatching or birth as parental investment in care, has been investigated in many oviparous amniotes but remained largely unknown in viviparous species. Here, we used data from 20 (12 oviparous and 8 viviparous) species of snakes to see if the oviparity-viviparity transition alters the partitioning of yolk in embryonic snakes. We used ANCOVA to test whether offspring size, mass and components at hatching or birth differed between the sexes in each species. We used both ordinary least squares and phylogenetic generalized least squares regressions to test whether relationships between selected pairs of offspring components were significant. We used phylogenetic ANOVA to test whether offspring components differed between oviparous and viviparous species and, more specifically, the hypothesis that viviparous snakes invest more in the yolk as parental investment in embryogenesis to produce more well developed offspring that are larger in linear size. In none of the 20 species was sex a significant source of variation in any offspring component examined. Newborn viviparous snakes on average contained proportionally more water and, after accounting for body dry mass, had larger carcasses but smaller residual yolks than did newly hatched oviparous snakes. The rates at which carcass dry mass (CDM) and fat body dry mass (FDM) increased with residual yolk dry mass (YDM) did not differ between newborn oviparous and viviparous snakes. Neither CDM nor FDM differed between newborn oviparous and viviparous snakes after accounting for YDM. 
Our results are not consistent with the hypothesis that the partitioning of yolk between embryonic and post-embryonic stages differs between snakes that differ in parity mode, but instead show that the partitioning of yolk in embryonic snakes is species-specific or phylogenetically related. We conclude that the oviparity-viviparity transition does not alter yolk partitioning in embryonic snakes.

  2. Achieving microaggregation for secure statistical databases using fixed-structure partitioning-based learning automata.

    PubMed

    Fayyoumi, Ebaa; Oommen, B John

    2009-10-01

    We consider the microaggregation problem (MAP), which involves partitioning a set of individual records in a microdata file into a number of mutually exclusive and exhaustive groups. This problem, which seeks the best partition of the microdata file, is known to be NP-hard and has been tackled using many heuristic solutions. In this paper, we present the first reported fixed-structure-stochastic-automata-based solution to this problem. The newly proposed method leads to a lower value of the information loss (IL), obtains a better tradeoff between the IL and the disclosure risk (DR) when compared with state-of-the-art methods, and leads to a superior value of the scoring index, which is a criterion involving a combination of the IL and the DR. The scheme has been implemented, tested, and evaluated for different real-life and simulated data sets. The results clearly demonstrate the applicability of learning automata to the MAP and its ability to yield a solution that obtains the best tradeoff between IL and DR when compared with the state of the art.
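    As a point of reference for what microaggregation does (not the learning-automata method of this record), a minimal univariate baseline can be sketched: sort the records, form groups of at least k, and publish each group's mean in place of its values. The grouping rule and the sum-of-squares information-loss measure here are illustrative assumptions.

```python
def microaggregate(values, k=3):
    """Naive univariate microaggregation with minimum group size k.

    Returns the masked values (group means) and the information loss,
    measured as the total squared deviation from group means.
    """
    ordered = sorted(values)
    groups = [ordered[i:i + k] for i in range(0, len(ordered), k)]
    # Merge an undersized tail group so every group has at least k records.
    if len(groups) > 1 and len(groups[-1]) < k:
        groups[-2].extend(groups.pop())
    masked, loss = [], 0.0
    for g in groups:
        mean = sum(g) / len(g)
        masked.extend([mean] * len(g))
        loss += sum((x - mean) ** 2 for x in g)
    return masked, loss

masked, loss = microaggregate([1, 2, 3, 10, 11, 12], k=3)
```

    The IL/DR tradeoff in the abstract corresponds to choosing k and the grouping: larger, more homogeneous groups lower disclosure risk but raise information loss, which is the objective the automata-based search optimizes.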

  3. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    PubMed

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automatons (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
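    The partition-refinement idea that all such minimizers share can be illustrated with a simple Moore-style loop (this is the classical baseline, not the paper's backward-depth/hash-table method): start from the accepting/non-accepting split and refine blocks by their transition signatures until the partition stabilizes.

```python
def minimize_blocks(states, alphabet, delta, accepting):
    """Return the coarsest partition of DFA states into equivalence classes.

    delta: dict mapping (state, symbol) -> state (must be total).
    """
    # Initial split: accepting vs. non-accepting states.
    block = {s: (s in accepting) for s in states}
    while True:
        # Signature of a state: its own block plus the blocks of its successors.
        sig = {s: (block[s],) + tuple(block[delta[(s, a)]] for a in alphabet)
               for s in states}
        groups = {}
        for s in states:
            groups.setdefault(sig[s], []).append(s)
        new_block = {s: i for i, g in enumerate(groups.values()) for s in g}
        # Fixed point: refinement produced no new blocks.
        if len(set(new_block.values())) == len(set(block.values())):
            return sorted(sorted(g) for g in groups.values())
        block = new_block

# A 4-state DFA over {"a"} in which states 1 and 2 are equivalent.
delta = {(0, "a"): 1, (1, "a"): 3, (2, "a"): 3, (3, "a"): 3}
partition = minimize_blocks([0, 1, 2, 3], ["a"], delta, accepting={3})
```

    The paper's contribution is to do most of this splitting cheaply up front from backward depth information, so only a few residual states need signature-based refinement.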

  4. Solute partitioning under continuous cooling conditions as a cooling rate indicator. [in lunar rocks]

    NASA Technical Reports Server (NTRS)

    Onorato, P. I. K.; Hopper, R. W.; Yinnon, H.; Uhlmann, D. R.; Taylor, L. A.; Garrison, J. R.; Hunter, R.

    1981-01-01

    A model of solute partitioning in a finite body under conditions of continuous cooling is developed for the determination of cooling rates from concentration profile data, and applied to the partitioning of zirconium between ilmenite and ulvospinel in the Apollo 15 Elbow Crater rocks. Partitioning in a layered composite solid is described numerically in terms of concentration profiles and diffusion coefficients which are functions of time and temperature, respectively; a program based on the model can be used to calculate concentration profiles for various assumed cooling rates given the diffusion coefficients in the two phases and the equilibrium partitioning ratio over a range of temperatures. In the case of the Elbow Rock gabbros, the cooling rates are calculated from measured concentration ratios 10 microns from the interphase boundaries under the assumptions of uniform and equilibrium initial conditions at various starting temperatures. It is shown that the specimens could not have had uniform concentration profiles at the previously suggested initial temperature of 1350 K. It is concluded that even under conditions where the initial temperature, grain sizes and solute diffusion coefficients are not well characterized, the model can be used to estimate the cooling rate of a grain assemblage to within an order of magnitude.

  5. Shift in Mass Transfer of Wastewater Contaminants from Microplastics in the Presence of Dissolved Substances.

    PubMed

    Seidensticker, Sven; Zarfl, Christiane; Cirpka, Olaf A; Fellenberg, Greta; Grathwohl, Peter

    2017-11-07

    In aqueous environments, hydrophobic organic contaminants are often associated with particles. Besides natural particles, microplastics have raised public concern. The release of pollutants from such particles depends on mass transfer, either in an aqueous boundary layer or by intraparticle diffusion. Which of these mechanisms controls the mass-transfer kinetics depends on partition coefficients, particle size, boundary conditions, and time. We have developed a semianalytical model accounting for both processes and performed batch experiments on the desorption kinetics of typical wastewater pollutants (phenanthrene, tonalide, and benzophenone) at different dissolved-organic-matter concentrations, which change the overall partitioning between microplastics and water. Initially, mass transfer is externally dominated, while finally, intraparticle diffusion controls release kinetics. Under boundary conditions typical for batch experiments (finite bath), desorption accelerates with increasing partition coefficients for intraparticle diffusion, while it becomes independent of partition coefficients if film diffusion prevails. On the contrary, under field conditions (infinite bath), the pollutant release controlled by intraparticle diffusion is not affected by partitioning of the compound while external mass transfer slows down with increasing sorption. Our results clearly demonstrate that sorption/desorption time scales observed in batch experiments may not be transferred to field conditions without an appropriate model accounting for both the mass-transfer mechanisms and the specific boundary conditions at hand.

  6. Understanding the Scalability of Bayesian Network Inference Using Clique Tree Growth Curves

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.

    2010-01-01

    One of the main approaches to performing computation in Bayesian networks (BNs) is clique tree clustering and propagation. The clique tree approach consists of propagation in a clique tree compiled from a Bayesian network, and while it was introduced in the 1980s, there is still a lack of understanding of how clique tree computation time depends on variations in BN size and structure. In this article, we improve this understanding by developing an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, and (ii) the expected number of moral edges in their moral graphs. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for the total size of each set. For the special case of bipartite BNs, there are two sets and two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, where random bipartite BNs generated using the BPART algorithm are studied, we systematically increase the out-degree of the root nodes in bipartite Bayesian networks, by increasing the number of leaf nodes. Surprisingly, root clique growth is well-approximated by Gompertz growth curves, an S-shaped family of curves that has previously been used to describe growth processes in biology, medicine, and neuroscience. We believe that this research improves the understanding of the scaling behavior of clique tree clustering for a certain class of Bayesian networks; presents an aid for trade-off studies of clique tree clustering using growth curves; and ultimately provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms.
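    The Gompertz family mentioned above has the closed form y(x) = a * exp(-b * exp(-c * x)), where a is the asymptote, b displaces the curve along x, and c sets the growth rate. It is easy to evaluate directly; the parameter values below are illustrative, not fitted to the clique-tree data of the article.

```python
import math

def gompertz(x, a, b, c):
    """Gompertz growth curve: asymptote a, displacement b, growth rate c."""
    return a * math.exp(-b * math.exp(-c * x))

# The curve starts near a*exp(-b) at x = 0 and saturates at the asymptote a,
# giving the S-shape used above to describe root-clique growth.
y0 = gompertz(0.0, a=100.0, b=5.0, c=1.0)
y_large = gompertz(50.0, a=100.0, b=5.0, c=1.0)
```

    Unlike the logistic curve, Gompertz growth is asymmetric: it approaches the asymptote more slowly than it leaves the initial plateau, which is one reason it fits many empirical growth processes.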

  7. Parallel fast multipole boundary element method applied to computational homogenization

    NASA Astrophysics Data System (ADS)

    Ptaszny, Jacek

    2018-01-01

    In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for 3D elasticity problems are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved by using the GMRES iterative solver. The boundary of the body is discretized by using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized by using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.

  8. Partitioning net ecosystem carbon exchange into net assimilation and respiration using 13CO2 measurements: A cost-effective sampling strategy

    NASA Astrophysics Data System (ADS)

    Ogée, J.; Peylin, P.; Ciais, P.; Bariac, T.; Brunet, Y.; Berbigier, P.; Roche, C.; Richard, P.; Bardoux, G.; Bonnefond, J.-M.

    2003-06-01

    The current emphasis on global climate studies has led the scientific community to set up a number of sites for measuring the long-term biosphere-atmosphere net CO2 exchange (net ecosystem exchange, NEE). Partitioning this flux into its elementary components, net assimilation (FA), and respiration (FR), remains necessary in order to get a better understanding of biosphere functioning and design better surface exchange models. Noting that FR and FA have different isotopic signatures, we evaluate the potential of isotopic 13CO2 measurements in the air (combined with CO2 flux and concentration measurements) to partition NEE into FR and FA on a routine basis. The study is conducted at a temperate coniferous forest where intensive isotopic measurements in air, soil, and biomass were performed in summer 1997. The multilayer soil-vegetation-atmosphere transfer model MuSICA is adapted to compute 13CO2 flux and concentration profiles. Using MuSICA as a "perfect" simulator and taking advantage of the very dense spatiotemporal resolution of the isotopic data set (341 flasks over a 24-hour period) enable us to test each hypothesis and estimate the performance of the method. The partitioning works better in midafternoon when isotopic disequilibrium is strong. With only 15 flasks, i.e., two 13CO2 nighttime profiles (to estimate the isotopic signature of FR) and five daytime measurements (to perform the partitioning) we get mean daily estimates of FR and FA that agree with the model within 15-20%. However, knowledge of the mesophyll conductance seems crucial and may be a limitation to the method.

  9. Boundaries on Range-Range Constrained Admissible Regions for Optical Space Surveillance

    NASA Astrophysics Data System (ADS)

    Gaebler, J. A.; Axelrad, P.; Schumacher, P. W., Jr.

    We propose a new type of admissible-region analysis for track initiation in multi-satellite problems when apparent angles measured at known stations are the only observable. The goal is to create an efficient and parallelizable algorithm for computing initial candidate orbits for a large number of new targets. It takes at least three angles-only observations to establish an orbit by traditional means. Thus one is faced with a problem that requires N-choose-3 sets of calculations to test every possible combination of the N observations. An alternative approach is to reduce the number of combinations by making hypotheses of the range to a target along the observed line-of-sight. If realistic bounds on the range are imposed, consistent with a given partition of the space of orbital elements, a pair of range possibilities can be evaluated via Lambert’s method to find candidate orbits for that partition, which then requires N-choose-2 times M-choose-2 combinations, where M is the average number of range hypotheses per observation. The contribution of this work is a set of constraints that establish bounds on the range-range hypothesis region for a given element-space partition, thereby minimizing M. Two effective constraints were identified, which together constrain the hypothesis region in range-range space to nearly that of the true admissible region based on an orbital partition. The first constraint is based on the geometry of the vacant orbital focus. The second constraint is based on time-of-flight and Lagrange’s form of Kepler’s equation. A complete and efficient parallelization of the problem is possible with this approach because the element partitions can be arbitrary and can be handled independently of each other.

  10. Influence of Silicate Melt Composition on Metal/Silicate Partitioning of W, Ge, Ga and Ni

    NASA Technical Reports Server (NTRS)

    Singletary, S. J.; Domanik, K.; Drake, M. J.

    2005-01-01

    The depletion of the siderophile elements in the Earth's upper mantle relative to the chondritic meteorites is a geochemical imprint of core segregation. Therefore, metal/silicate partition coefficients (Dm/s) for siderophile elements are essential to investigations of core formation when used in conjunction with the pattern of elemental abundances in the Earth's mantle. The partitioning of siderophile elements is controlled by temperature, pressure, oxygen fugacity, and by the compositions of the metal and silicate phases. Several recent studies have shown the importance of silicate melt composition on the partitioning of siderophile elements between silicate and metallic liquids. It has been demonstrated that many elements display increased solubility in less polymerized (mafic) melts. However, the importance of silicate melt composition was believed to be minor compared to the influence of oxygen fugacity until studies showed that melt composition is an important factor at high pressures and temperatures. It was found that melt composition is also important for partitioning of high valency siderophile elements. Atmospheric experiments were conducted, varying only silicate melt composition, to assess the importance of silicate melt composition for the partitioning of W, Co and Ga; these experiments found that the valence of the dissolving species plays an important role in determining the effect of composition on solubility. In this study, we extend the data set to higher pressures and investigate the role of silicate melt composition on the partitioning of the siderophile elements W, Ge, Ga and Ni between metallic and silicate liquids.

  11. Volcanic forcing for climate modeling: a new microphysics-based data set covering years 1600-present

    NASA Astrophysics Data System (ADS)

    Arfeuille, F.; Weisenstein, D.; Mack, H.; Rozanov, E.; Peter, T.; Brönnimann, S.

    2014-02-01

    As the understanding and representation of the impacts of volcanic eruptions on climate have improved in the last decades, uncertainties in the stratospheric aerosol forcing from large eruptions are now linked not only to visible optical depth estimates on a global scale but also to details on the size, latitude and altitude distributions of the stratospheric aerosols. Based on our understanding of these uncertainties, we propose a new model-based approach to generating a volcanic forcing for general circulation model (GCM) and chemistry-climate model (CCM) simulations. This new volcanic forcing, covering the 1600-present period, uses an aerosol microphysical model to provide a realistic, physically consistent treatment of the stratospheric sulfate aerosols. Twenty-six eruptions were modeled individually using the latest available ice-core aerosol mass estimates and historical data on the latitude and date of eruptions. The evolution of the aerosol spatial and size distributions after the sulfur dioxide discharge is hence characterized for each volcanic eruption. Large variations are seen in hemispheric partitioning and size distributions in relation to location/date of eruptions and injected SO2 masses. Results for recent eruptions show reasonable agreement with observations. By providing these new estimates of spatial distributions of shortwave and longwave radiative perturbations, this volcanic forcing may help to better constrain the climate model responses to volcanic eruptions in the 1600-present period. The final data set consists of 3-D values (with constant longitude) of spectrally resolved extinction coefficients, single scattering albedos and asymmetry factors calculated for different wavelength bands upon request. Surface area densities for heterogeneous chemistry are also provided.

  12. 7 CFR 3550.117 - WWD grant purposes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Construction and/or partitioning off a portion of the dwelling for a bathroom, not to exceed 4.6 square meters (48 square feet) in size. (f) Pay reasonable costs for closing abandoned septic tanks and water wells...

  13. 7 CFR 3550.117 - WWD grant purposes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Construction and/or partitioning off a portion of the dwelling for a bathroom, not to exceed 4.6 square meters (48 square feet) in size. (f) Pay reasonable costs for closing abandoned septic tanks and water wells...

  14. Parallel Clustering Algorithm for Large-Scale Biological Data Sets

    PubMed Central

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    Background: The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtimes are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity-matrix construction and the affinity propagation algorithm. A shared-memory architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene (microarray) data and detecting families in large protein superfamilies. PMID:24705246
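    The row-wise data partition described for the similarity-matrix construction can be sketched as follows. This is an illustrative assumption of how the work might be split, not the paper's exact scheme: each worker receives a contiguous, near-equal block of rows and computes the similarities of those rows against all points, so no inter-worker communication is needed during construction. The negative squared distance used here is a common similarity choice in the affinity propagation literature.

```python
def row_blocks(n_rows, n_workers):
    """Split row indices into n_workers contiguous blocks of near-equal size."""
    base, extra = divmod(n_rows, n_workers)
    blocks, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)  # spread the remainder evenly
        blocks.append(range(start, start + size))
        start += size
    return blocks

def similarity_block(data, rows):
    """Similarities (negative squared distance) of a row block vs. all points."""
    return {(i, j): -(data[i] - data[j]) ** 2
            for i in rows for j in range(len(data))}

blocks = row_blocks(10, 3)
```

    In a shared-memory setting each thread would fill its block of the matrix in place; the contiguous layout keeps writes disjoint, which is what minimizes the communication and synchronization cost mentioned above.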

  15. Role of single-point mutations and deletions on transition temperatures in ideal proteinogenic heteropolymer chains in the gas phase.

    PubMed

    Olivares-Quiroz, L

    2016-07-01

    A coarse-grained statistical mechanics-based model for ideal heteropolymer proteinogenic chains of non-interacting residues is presented in terms of the size K of the chain and the set of helical propensities [Formula: see text] associated with each residue j along the chain. For this model, we provide an algorithm to compute the degeneracy tensor [Formula: see text] associated with energy level [Formula: see text] where [Formula: see text] is the number of residues with a native contact in a given conformation. From these results, we calculate the equilibrium partition function [Formula: see text] and the characteristic temperature [Formula: see text] at which a transition from low- to high-entropy states is observed. The formalism is applied to analyze the effect on characteristic temperatures [Formula: see text] of single-point mutations and deletions of specific amino acids [Formula: see text] along the chain. Two probe systems are considered. First, we address the case of a random heteropolymer of size K and given helical propensities [Formula: see text] on a conformational phase space. Second, we focus our attention on a particular set of neuropentapeptides, [Met-5] and [Leu-5] enkephalins, whose thermodynamic stability is a key feature in their coupling to [Formula: see text] and [Formula: see text] receptors and the triggering of biochemical responses.

  16. Size-related bioconcentration kinetics of hydrophobic chemicals in fish

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sijm, D.T.H.M.; Linde, A. van der

    1994-12-31

    Uptake and elimination of hydrophobic chemicals by fish can be regarded as passive diffusive transport processes. Diffusion coefficients, lipid/water partitioning, diffusion pathlengths, concentration gradients and surface exchange areas are the key parameters describing this bioconcentration process. In the present study two of these parameters were examined: the influence of lipid/water partitioning was studied by using hydrophobic chemicals of different hydrophobicity, and that of the surface exchange area by using different sizes of fish. By using one species of fish it was assumed that all other parameters were kept constant. Seven age classes of fish were exposed to a series of hydrophobic chemicals for five days, followed by a depuration phase lasting up to 6 months. Bioconcentration parameters, such as uptake and elimination rate constants and bioconcentration factors, were determined. Uptake of the hydrophobic compounds was compared to that of oxygen. Uptake and elimination rates were compared to weight and estimated (gill) exchange areas. The role of weight and its implications for extrapolation of bioconcentration parameters to other species and sizes are discussed.
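
The uptake/depuration kinetics described above are commonly modelled as a one-compartment first-order process with uptake rate constant k1 and elimination rate constant k2, where the bioconcentration factor is BCF = k1/k2. The sketch below uses that textbook model with invented rate constants, not values from this study.

```python
import math

def fish_conc(t, k1, k2, c_w, t_exp):
    """One-compartment bioconcentration model: uptake follows
    dCf/dt = k1*Cw - k2*Cf for t <= t_exp (constant water concentration Cw),
    then pure first-order depuration afterwards. Closed-form solution."""
    if t <= t_exp:
        return (k1 / k2) * c_w * (1.0 - math.exp(-k2 * t))
    c_end = (k1 / k2) * c_w * (1.0 - math.exp(-k2 * t_exp))
    return c_end * math.exp(-k2 * (t - t_exp))

def bcf(k1, k2):
    # steady-state bioconcentration factor
    return k1 / k2
```

For example, with hypothetical k1 = 100/day and k2 = 0.1/day, the fish concentration rises toward BCF x Cw during the five-day exposure and decays exponentially in the depuration phase.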

  17. The Effect of Superior Semicircular Canal Dehiscence on Intracochlear Sound Pressures

    NASA Astrophysics Data System (ADS)

    Nakajima, Hideko Heidi; Pisano, Dominic V.; Merchant, Saumil N.; Rosowski, John J.

    2011-11-01

    Semicircular canal dehiscence (SCD) is a pathological opening in the bony wall of the inner ear that can result in conductive hearing loss. The hearing loss varies across patients, and the precise mechanism and source of this variability are not fully understood. We use intracochlear sound pressure measurements in cadaveric preparations to study the effects of SCD size. Simultaneous measurement of basal intracochlear sound pressures in scala vestibuli (SV) and scala tympani (ST) quantifies the complex differential pressure across the cochlear partition, the stimulus that excites the partition. Sound-induced pressures in SV and ST, as well as stapes velocity and ear-canal pressure, are measured simultaneously for various sizes of SCD, followed by SCD patching. At low frequencies (<600 Hz) our results show that SCD decreases the pressure in both SV and ST, as well as the differential pressure, and these effects become more pronounced as dehiscence size is increased. For frequencies above 1 kHz, even the smallest pinpoint dehiscence can have the largest effect on the differential pressure in some ears. These effects of SCD are reversible by patching the dehiscence.

  18. Sea urchins in a high-CO2 world: partitioned effects of body size, ocean warming and acidification on metabolic rate.

    PubMed

    Carey, Nicholas; Harianto, Januar; Byrne, Maria

    2016-04-15

    Body size and temperature are the major factors explaining metabolic rate, and the additional factor of pH is a major driver at the biochemical level. These three factors have frequently been found to interact, complicating the formulation of broad models predicting metabolic rates and hence ecological functioning. In this first study of the effects of warming and ocean acidification, and their potential interaction, on metabolic rate across a broad range in body size (two to three orders of magnitude difference in body mass), we addressed the impact of climate change on the sea urchin Heliocidaris erythrogramma in the context of climate projections for southeast Australia, an ocean warming hotspot. Urchins were gradually introduced to two temperatures (18 and 23°C) and two pH levels (7.5 and 8.0), at which they were maintained for 2 months. Identical experimental trials separated by several weeks confirmed that a new physiological steady state, i.e. acclimation, had been reached. The relationship between body size, temperature and acidification and the metabolic rate of H. erythrogramma was strikingly stable. Both stressors caused increases in metabolic rate: 20% for temperature and 19% for pH. Combined effects were additive: a 44% increase in metabolism. Body size had a highly stable relationship with metabolic rate regardless of temperature or pH. None of these diverse drivers of metabolism interacted with or modulated the effects of the others, highlighting the partitioned nature of how each influences metabolic rate, and the importance of achieving a full acclimation state. Despite these increases in energetic demand there was very limited capacity for compensatory modulation of feeding rate; food consumption increased only in the very smallest specimens, and only in response to temperature, not pH. Our data show that warming, acidification and body size all substantially affect metabolism and are highly consistent and partitioned in their effects, and that for H. erythrogramma, near-future climate change will incur a substantial energetic cost. © 2016. Published by The Company of Biologists Ltd.

  19. Detecting recurrence domains of dynamical systems by symbolic dynamics.

    PubMed

    beim Graben, Peter; Hutt, Axel

    2013-04-12

    We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
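
The first step of the approach, building a recurrence plot from pairwise distances, is easy to sketch. In the snippet below `eps` is a fixed, hypothetical ball radius; the paper instead chooses the optimal radius by a maximum entropy principle, and the subsequent grammar-based merging of balls into partition cells is not reproduced here.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Recurrence plot of a scalar time series: R[i, j] = 1 when samples i
    and j fall within eps of each other. The epsilon-balls encoded by the
    rows of R are what the paper's rewriting grammar merges into cells of
    a phase-space partition."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

# a periodic signal produces the characteristic texture of near-diagonal bands
t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_matrix(np.sin(t), eps=0.2)
```

By construction the matrix is symmetric with a unit diagonal, and for the sine wave above, points one full period apart recur while points a quarter period apart do not.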

  20. Scalability and Portability of Two Parallel Implementations of ADI

    NASA Technical Reports Server (NTRS)

    Phung, Thanh; VanderWijngaart, Rob F.

    1994-01-01

    Two domain decompositions for the implementation of the NAS Scalar Penta-diagonal Parallel Benchmark on MIMD systems are investigated, namely transposition and multi-partitioning. Hardware platforms considered are the Intel iPSC/860 and Paragon XP/S-15, and clusters of SGI workstations on ethernet, communicating through PVM. It is found that the multi-partitioning strategy offers the kind of coarse granularity that allows scaling up to hundreds of processors on a massively parallel machine. Moreover, efficiency is retained when the code is ported verbatim (save message passing syntax) to a PVM environment on a modest size cluster of workstations.

  1. Determination of zircon/melt trace element partition coefficients from SIMS analysis of melt inclusions in zircon

    NASA Astrophysics Data System (ADS)

    Thomas, J. B.; Bodnar, R. J.; Shimizu, N.; Sinha, A. K.

    2002-09-01

    Partition coefficients (zircon/melt D_M) for rare earth elements (REE) (La, Ce, Nd, Sm, Dy, Er and Yb) and other trace elements (Ba, Rb, B, Sr, Ti, Y and Nb) between zircon and melt have been calculated from secondary ion mass spectrometric (SIMS) analyses of zircon/melt inclusion pairs. The melt inclusion-mineral (MIM) technique shows that D_REE increase in compatibility with increasing atomic number, similar to results of previous studies. However, D_REE determined using the MIM technique are, in general, lower than previously reported values. Calculated D_REE indicate that light REE with atomic numbers less than Sm are incompatible in zircon and become more incompatible with decreasing atomic number. This behavior is in contrast to most previously published results, which indicate D > 1 and define a flat partitioning pattern for elements from La through Sm. The partition coefficients for the heavy REE determined using the MIM technique are lower than previously published results by factors of ≈15 to 20 but follow a similar trend. These differences are thought to reflect the effects of mineral and/or glass contaminants in samples from earlier studies, which employed bulk analysis techniques. D_REE determined using the MIM technique agree well with values predicted using the equations of Brice (1975), which are based on the size and elasticity of crystallographic sites. The presence of Ce^4+ in the melt results in elevated D_Ce compared to neighboring REE due to the similar valence and size of Ce^4+ and Zr^4+. Predicted zircon/melt D values for Ce^4+ and Ce^3+ indicate that the Ce^4+/Ce^3+ ratios of the melt ranged from about 10^-3 to 10^-2. Partition coefficients for other trace elements determined in this study increase in compatibility in the order Ba < Rb < B < Sr < Ti < Y < Nb, with Ba, Rb, B and Sr showing incompatible behavior (D_M < 1.0), and Ti, Y and Nb showing compatible behavior (D_M > 1.0).
    The effect of partition coefficients on melt evolution during petrogenetic modeling was examined using partition coefficients determined in this study and compared to trends obtained using published partition coefficients. The lower D_REE determined in this study result in smaller REE bulk distribution coefficients, for a given mineral assemblage, compared to those calculated using previously reported values. As an example, fractional crystallization of an assemblage composed of 35% hornblende, 64.5% plagioclase and 0.5% zircon produces a melt that becomes increasingly enriched in Yb using the D_Yb from this study. Using D_Yb from Fujimaki (1986) results in a melt that becomes progressively depleted in Yb during crystallization.
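
The hornblende/plagioclase/zircon comparison above can be sketched with the standard Rayleigh fractional-crystallization law, C = C0 * F**(D - 1), where D is the bulk distribution coefficient of the assemblage. The mineral/melt D_Yb values below are placeholders chosen only to reproduce the qualitative contrast between a low (MIM-style) and a high (literature-style) zircon D_Yb, not the paper's measured numbers.

```python
def bulk_d(fractions_and_ds):
    # bulk distribution coefficient: weighted sum over the crystallizing assemblage
    return sum(x * d for x, d in fractions_and_ds)

def rayleigh(c0, d, f_melt):
    # residual-melt concentration after fractional crystallization: C = C0 * F**(D - 1)
    return c0 * f_melt ** (d - 1.0)

# 35% hornblende, 64.5% plagioclase, 0.5% zircon; all D_Yb values hypothetical
assemblage = lambda d_zircon: [(0.35, 1.0), (0.645, 0.06), (0.005, d_zircon)]

d_low = bulk_d(assemblage(30.0))    # low zircon D_Yb, as found by the MIM technique
d_high = bulk_d(assemblage(500.0))  # much higher literature-style zircon D_Yb

# residual melt Yb (relative to initial) after 50% crystallization (F = 0.5)
yb_low = rayleigh(1.0, d_low, 0.5)   # bulk D < 1: melt becomes enriched in Yb
yb_high = rayleigh(1.0, d_high, 0.5) # bulk D > 1: melt becomes depleted in Yb
```

Even at only 0.5% modal zircon, the choice of zircon D_Yb flips the bulk D across 1 and hence reverses the direction of Yb evolution in the melt, which is the point the abstract makes.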

  2. Solving Multi-variate Polynomial Equations in a Finite Field

    DTIC Science & Technology

    2013-06-01

    Algebraic Background: In this section, some algebraic definitions and basics are discussed as they pertain to this research. For a more detailed treatment, consult a graph theory text such as [10]. A graph G is a k-partite graph if V(G) can be partitioned into k subsets V1, V2, ..., Vk such that uv is an edge of G only if u and v belong to different partite sets. If, in
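
For k = 2, the k-partite definition quoted above reduces to ordinary bipartiteness, which can be tested by BFS 2-colouring. A minimal sketch (our own illustration, not code from the cited report):

```python
from collections import deque

def is_bipartite(adj):
    """BFS 2-colouring test: a graph is bipartite (2-partite) iff its
    vertices split into two sets with edges only between them.
    adj maps each vertex to an iterable of its neighbours."""
    colour = {}
    for start in adj:                      # handle disconnected graphs
        if start in colour:
            continue
        colour[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]  # neighbour gets the other side
                    q.append(v)
                elif colour[v] == colour[u]:   # edge inside one partite set
                    return False
    return True
```

A 4-cycle is bipartite; a triangle (an odd cycle) is not.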

  3. Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation

    NASA Astrophysics Data System (ADS)

    Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.

    2018-05-01

    Typical quantifiers of Tsallis' statistical mechanics, such as the partition function Z and the mean energy 〈 U 〉, exhibit poles. The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a countable set of rational numbers on the q-line. These poles are dealt with using dimensional-regularization techniques. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitational potential.

  4. Analysis of Noise Mechanisms in Cell-Size Control.

    PubMed

    Modi, Saurabh; Vargas-Garcia, Cesar Augusto; Ghusinga, Khem Raj; Singh, Abhyudai

    2017-06-06

    At the single-cell level, noise arises from multiple sources, such as inherent stochasticity of biomolecular processes, random partitioning of resources at division, and fluctuations in cellular growth rates. How these diverse noise mechanisms combine to drive variations in cell size within an isoclonal population is not well understood. Here, we investigate the contributions of different noise sources in well-known paradigms of cell-size control, such as adder (division occurs after adding a fixed size from birth), sizer (division occurs after reaching a size threshold), and timer (division occurs after a fixed time from birth). Analysis reveals that variation in cell size is most sensitive to errors in partitioning of volume among daughter cells, and not surprisingly, this process is well regulated among microbes. Moreover, depending on the dominant noise mechanism, different size-control strategies (or a combination of them) provide efficient buffering of size variations. We further explore mixer models of size control, where a timer phase precedes/follows an adder, as has been proposed in Caulobacter crescentus. Although mixing a timer and an adder can sometimes attenuate size variations, it invariably leads to higher-order moments growing unboundedly over time. This results in a power-law distribution for the cell size, with an exponent that depends inversely on the noise in the timer phase. Consistent with theory, we find evidence of power-law statistics in the tail of C. crescentus cell-size distribution, although there is a discrepancy between the observed power-law exponent and that predicted from the noise parameters. The discrepancy, however, is removed after data reveal that the size added by individual newborns in the adder phase itself exhibits power-law statistics. 
Taken together, this study provides key insights into the role of noise mechanisms in size homeostasis, and suggests an inextricable link between timer-based models of size control and heavy-tailed cell-size distributions. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
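
The adder paradigm discussed above is simple to simulate: each cycle the cell grows by a fixed increment and then divides with a noisy partition fraction, the noise source the analysis finds cell size most sensitive to. All parameter values below are illustrative, not fitted to the paper's data.

```python
import random

def simulate_adder(n_div, delta=1.0, part_sd=0.05, seed=1):
    """Follow one cell lineage under the adder rule: each cycle the cell
    adds a fixed size delta (noise-free here, to isolate partitioning
    noise) and divides with a partition fraction drawn around 1/2.
    Returns the birth sizes after each division."""
    random.seed(seed)
    size = 1.0
    births = []
    for _ in range(n_div):
        size += delta                       # adder: grow by a fixed increment
        frac = random.gauss(0.5, part_sd)   # noisy volume partitioning at division
        frac = min(max(frac, 0.1), 0.9)     # keep the fraction physical
        size *= frac
        births.append(size)
    return births

births = simulate_adder(2000)
mean = sum(births) / len(births)
```

With symmetric partitioning noise, birth size fluctuates around the fixed point delta (here 1.0) rather than drifting, which is the homeostatic property that makes the adder a size-control strategy.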

  5. Two-lattice models of trace element behavior: A response

    NASA Astrophysics Data System (ADS)

    Ellison, Adam J. G.; Hess, Paul C.

    1990-08-01

    Two-lattice melt components of Bottinga and Weill (1972), Nielsen and Drake (1979), and Nielsen (1985) are applied to major and trace element partitioning between coexisting immiscible liquids studied by Ryerson and Hess (1978) and Watson (1976). The results show that (1) the set of components most successful in one system is not necessarily portable to another system; (2) solution non-ideality within a sublattice severely limits applicability of two-lattice models; (3) rigorous application of two-lattice melt components may yield effective partition coefficients for major element components with no physical interpretation; and (4) the distinction between network-forming and network-modifying components in the sense of the two-lattice models is not clear cut. The algebraic description of two-lattice models is such that they will most successfully limit the compositional dependence of major and trace element solution behavior when the effective partition coefficient of the component of interest is essentially the same as the bulk partition coefficient of all other components within its sublattice.

  6. Toward prediction of alkane/water partition coefficients.

    PubMed

    Toulmin, Anita; Wood, J Matthew; Kenny, Peter W

    2008-07-10

    Partition coefficients were measured for 47 compounds in the hexadecane/water (P_hxd) and 1-octanol/water (P_oct) systems. Some types of hydrogen bond acceptor presented by these compounds to the partitioning systems are not well represented in the literature of alkane/water partitioning. The difference, Δlog P, between log P_oct and log P_hxd is a measure of the hydrogen bonding potential of a molecule and is identified as a target for predictive modeling. The minimized molecular electrostatic potential (V_min) was shown to be an effective predictor of the contribution of hydrogen bond acceptors to Δlog P. Carbonyl oxygen atoms were found to be stronger hydrogen bond acceptors for their electrostatic potential than heteroaromatic nitrogen or oxygen bound to hypervalent sulfur or nitrogen. Values of V_min calculated for hydrogen-bonded complexes were used to explore polarization effects. Predicted log P_hxd and Δlog P were shown to be more effective than log P_oct for modeling brain penetration for a data set of 18 compounds.
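
The modelling target described above, Δlog P = log P_oct - log P_hxd regressed on V_min, can be sketched as a simple least-squares fit. The (V_min, Δlog P) pairs below are invented for illustration only; they are not the paper's 47 measured compounds, and the fitted slope has no chemical authority.

```python
import numpy as np

# hypothetical acceptor data: more negative V_min (kcal/mol) means a stronger
# electrostatic acceptor, which should raise Delta log P
v_min = np.array([-60.0, -50.0, -40.0, -30.0, -20.0])
dlogp = np.array([3.1, 2.6, 2.0, 1.4, 0.9])

# one-descriptor linear model: Delta log P ~ slope * V_min + intercept
slope, intercept = np.polyfit(v_min, dlogp, 1)
pred = slope * v_min + intercept
```

The negative slope encodes the expected trend: stronger (more negative) V_min corresponds to larger Δlog P, i.e. greater hydrogen-bonding potential.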

  7. Sharing the cell's bounty - organelle inheritance in yeast.

    PubMed

    Knoblach, Barbara; Rachubinski, Richard A

    2015-02-15

    Eukaryotic cells replicate and partition their organelles between the mother cell and the daughter cell at cytokinesis. Polarized cells, notably the budding yeast Saccharomyces cerevisiae, are well suited for the study of organelle inheritance, as they facilitate an experimental dissection of organelle transport and retention processes. Much progress has been made in defining the molecular players involved in organelle partitioning in yeast. Each organelle uses a distinct set of factors - motor, anchor and adaptor proteins - that ensures its inheritance by future generations of cells. We propose that all organelles, regardless of origin or copy number, are partitioned by the same fundamental mechanism involving division and segregation. Thus, the mother cell keeps, and the daughter cell receives, their fair and equitable share of organelles. This mechanism of partitioning moreover facilitates the segregation of organelle fragments that are not functionally equivalent. In this Commentary, we describe how this principle of organelle population control affects peroxisomes and other organelles, and outline its implications for yeast life span and rejuvenation. © 2015. Published by The Company of Biologists Ltd.

  8. Partitioning of polar and non-polar neutral organic chemicals into human and cow milk.

    PubMed

    Geisler, Anett; Endo, Satoshi; Goss, Kai-Uwe

    2011-10-01

    The aim of this work was to develop a predictive model for milk/water partition coefficients of neutral organic compounds. Batch experiments were performed for 119 diverse organic chemicals in human milk and raw and processed cow milk at 37°C. No differences (<0.3 log units) in the partition coefficients of these types of milk were observed. The polyparameter linear free energy relationship model fit the calibration data well (SD=0.22 log units). An experimental validation data set including hormones and hormone active compounds was predicted satisfactorily by the model. An alternative modelling approach based on log K(ow) revealed a poorer performance. The model presented here provides a significant improvement in predicting enrichment of potentially hazardous chemicals in milk. In combination with physiologically based pharmacokinetic modelling this improvement in the estimation of milk/water partitioning coefficients may allow a better risk assessment for a wide range of neutral organic chemicals. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    DOE PAGES

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-25

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  10. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krogel, Jaron T.; Reboredo, Fernando A.

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  11. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  12. The "p"-Median Model as a Tool for Clustering Psychological Data

    ERIC Educational Resources Information Center

    Kohn, Hans-Friedrich; Steinley, Douglas; Brusco, Michael J.

    2010-01-01

    The "p"-median clustering model represents a combinatorial approach to partition data sets into disjoint, nonhierarchical groups. Object classes are constructed around "exemplars", that is, manifest objects in the data set, with the remaining instances assigned to their closest cluster centers. Effective, state-of-the-art implementations of…

  13. A Novel Space Partitioning Algorithm to Improve Current Practices in Facility Placement

    PubMed Central

    Jimenez, Tamara; Mikler, Armin R; Tiwari, Chetan

    2012-01-01

    In the presence of naturally occurring and man-made public health threats, the feasibility of regional bio-emergency contingency plans plays a crucial role in the mitigation of such emergencies. While the analysis of in-place response scenarios provides a measure of quality for a given plan, it involves human judgment to identify improvements in plans that are otherwise likely to fail. Since resource constraints and government mandates limit the availability of service provided in case of an emergency, computational techniques can determine optimal locations for providing emergency response, assuming that the uniform distribution of demand across homogeneous resources will yield an optimal service outcome. This paper presents an algorithm that recursively partitions the geographic space into sub-regions while equally distributing the population across the partitions. For this method, we have proven the existence of an upper bound on the deviation from the optimal population size for sub-regions. PMID:23853502
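
The recursive equal-population idea can be sketched in one dimension: split the sorted population in half, then recurse on each half. Real planners split a two-dimensional region along alternating axes and respect geography; this sketch keeps only the equal-population invariant and is not the paper's algorithm.

```python
def split_by_population(points, depth):
    """Recursively bisect a set of (1-D) population points into 2**depth
    sub-regions of (near-)equal population count. Sketch of the
    equal-population partitioning invariant described above."""
    pts = sorted(points)
    if depth == 0:
        return [pts]
    mid = len(pts) // 2  # population median: equal counts on both sides
    return (split_by_population(pts[:mid], depth - 1)
            + split_by_population(pts[mid:], depth - 1))

# 100 individuals, two levels of bisection -> 4 sub-regions of 25 each
regions = split_by_population(list(range(100)), 2)
```

When the population count is not divisible by the number of sub-regions, the integer split bounds the per-region deviation, the quantity the paper's theorem controls.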

  14. Stability of coefficients in the Kronecker product of a hook and a rectangle

    NASA Astrophysics Data System (ADS)

    Ballantine, Cristina M.; Hallahan, William T.

    2016-02-01

    We use recent work of Jonah Blasiak (2012 arXiv:1209.2018) to prove a stability result for the coefficients in the Kronecker product of two Schur functions: one indexed by a hook partition and one indexed by a rectangle partition. We also give nearly sharp bounds for the size of the partition starting with which the Kronecker coefficients are stable. Moreover, we show that once the bound is reached, no new Schur functions appear in the decomposition of the Kronecker product. We call this property superstability. Thus, one can recover the Schur decomposition of the Kronecker product from the smallest case in which the superstability holds. The bound for superstability is sharp. Our study of this particular case of the Kronecker product is motivated by its usefulness for the understanding of the quantum Hall effect (Scharf T et al 1994 J. Phys. A: Math. Gen 27 4211-9).

  15. Finite-size effects for anisotropic 2D Ising model with various boundary conditions

    NASA Astrophysics Data System (ADS)

    Izmailian, N. Sh

    2012-12-01

    We analyze the exact partition function of the anisotropic Ising model on finite M × N rectangular lattices under four different boundary conditions (periodic-periodic (pp), periodic-antiperiodic (pa), antiperiodic-periodic (ap) and antiperiodic-antiperiodic (aa)) obtained by Kaufman (1949 Phys. Rev. 76 1232), Wu and Hu (2002 J. Phys. A: Math. Gen. 35 5189) and Kastening (2002 Phys. Rev. E 66 057103). We express the partition functions in terms of the partition functions Z_{α,β}(J, k) with (α, β) = (0, 0), (1/2, 0), (0, 1/2) and (1/2, 1/2), where J is an interaction coupling and k is an anisotropy parameter. Based on such expressions, we then extend the algorithm of Ivashkevich et al (2002 J. Phys. A: Math. Gen. 35 5543) to derive the exact asymptotic expansion of the logarithm of the partition function for all boundary conditions mentioned above. Our result is f = f_bulk + ∑_{p=0}^∞ f_p(ρ, k) S^{-p-1}, where f is the free energy of the system, f_bulk is the free energy of the bulk, S = MN is the area of the lattice and ρ = M/N is the aspect ratio. All coefficients in this expansion are expressed through analytical functions. We have introduced the effective aspect ratio ρ_eff = ρ/sinh 2J_c and show that for pp and aa boundary conditions all finite-size correction terms are invariant under the transformation ρ_eff → 1/ρ_eff. This article is part of 'Lattice models and integrability', a special issue of Journal of Physics A: Mathematical and Theoretical in honour of F Y Wu's 80th birthday.

  16. Exact deconstruction of the 6D (2,0) theory

    NASA Astrophysics Data System (ADS)

    Hayling, J.; Papageorgakis, C.; Pomoni, E.; Rodríguez-Gómez, D.

    2017-06-01

    The dimensional-deconstruction prescription of Arkani-Hamed, Cohen, Kaplan, Karch and Motl provides a mechanism for recovering the A-type (2,0) theories on T^2, starting from a four-dimensional N = 2 circular-quiver theory. We put this conjecture to the test using two exact-counting arguments: in the decompactification limit, we compare the Higgs-branch Hilbert series of the 4D N = 2 quiver to the "half-BPS" limit of the (2,0) superconformal index. We also compare the full partition function for the 4D quiver on S^4 to the (2,0) partition function on S^4 × T^2. In both cases we find exact agreement. The partition function calculation sets up a dictionary between exact results in 4D and 6D.

  17. Implementation of a partitioned algorithm for simulation of large CSI problems

    NASA Technical Reports Server (NTRS)

    Alvin, Kenneth F.; Park, K. C.

    1991-01-01

    The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.

  18. Metatranscriptome analyses indicate resource partitioning between diatoms in the field.

    PubMed

    Alexander, Harriet; Jenkins, Bethany D; Rynearson, Tatiana A; Dyhrman, Sonya T

    2015-04-28

    Diverse communities of marine phytoplankton carry out half of global primary production. The vast diversity of the phytoplankton has long perplexed ecologists because these organisms coexist in an isotropic environment while competing for the same basic resources (e.g., inorganic nutrients). Differential niche partitioning of resources is one hypothesis to explain this "paradox of the plankton," but it is difficult to quantify and track variation in phytoplankton metabolism in situ. Here, we use quantitative metatranscriptome analyses to examine pathways of nitrogen (N) and phosphorus (P) metabolism in diatoms that cooccur regularly in an estuary on the east coast of the United States (Narragansett Bay). Expression of known N and P metabolic pathways varied between diatoms, indicating apparent differences in resource utilization capacity that may prevent direct competition. Nutrient amendment incubations skewed N/P ratios, elucidating nutrient-responsive patterns of expression and facilitating a quantitative comparison between diatoms. The resource-responsive (RR) gene sets deviated in composition from the metabolic profile of the organism, being enriched in genes associated with N and P metabolism. Expression of the RR gene set varied over time and differed significantly between diatoms, resulting in opposite transcriptional responses to the same environment. Apparent differences in metabolic capacity and the expression of that capacity in the environment suggest that diatom-specific resource partitioning was occurring in Narragansett Bay. This high-resolution approach highlights the molecular underpinnings of diatom resource utilization and how cooccurring diatoms adjust their cellular physiology to partition their niche space.

  19. Electric-field-induced association of colloidal particles

    NASA Astrophysics Data System (ADS)

    Fraden, Seth; Hurd, Alan J.; Meyer, Robert B.

    1989-11-01

    Dilute suspensions of micron diameter dielectric spheres confined to two dimensions are induced to aggregate linearly by application of an electric field. The growth of the average cluster size agrees well with the Smoluchowski equation, but the evolution of the measured cluster size distribution exhibits significant departures from theory at large times due to the formation of long linear clusters which effectively partition space into isolated one-dimensional strips.
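
    The cluster-size growth described above can be illustrated numerically. The sketch below integrates the constant-kernel Smoluchowski coagulation equation with forward Euler; all parameter values are illustrative and this is a generic mean-field illustration, not the paper's field-driven two-dimensional system:

```python
# Minimal sketch: constant-kernel Smoluchowski coagulation, forward Euler,
# truncated at n_max. Parameter values are illustrative only.

def smoluchowski(n_max=50, dt=0.01, steps=200):
    n = [0.0] * (n_max + 1)   # n[k] = number density of k-mers
    n[1] = 1.0                # start from monomers only
    for _ in range(steps):
        total = sum(n)
        new = n[:]
        for k in range(1, n_max + 1):
            gain = 0.5 * sum(n[i] * n[k - i] for i in range(1, k))
            loss = n[k] * total
            new[k] += dt * (gain - loss)
        n = new
    mass = sum(k * n[k] for k in range(1, n_max + 1))
    number = sum(n[1:])
    return mass / number      # mean cluster size
```

    For a constant kernel with unit rate and unit initial monomer density, theory gives a mean cluster size of 1 + t/2, so the run above (t = 2) should return a value near 2.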

  20. K-Partite RNA Secondary Structures

    NASA Astrophysics Data System (ADS)

    Jiang, Minghui; Tejada, Pedro J.; Lasisi, Ramoni O.; Cheng, Shanhong; Fechser, D. Scott

    RNA secondary structure prediction is a fundamental problem in structural bioinformatics. The prediction problem is difficult because RNA secondary structures may contain pseudoknots formed by crossing base pairs. We introduce k-partite secondary structures as a simple classification of RNA secondary structures with pseudoknots. An RNA secondary structure is k-partite if it is the union of k pseudoknot-free sub-structures. Most known RNA secondary structures are either bipartite or tripartite. We show that there exists a constant number k such that any secondary structure can be modified into a k-partite secondary structure with approximately the same free energy. This offers a partial explanation of the prevalence of k-partite secondary structures with small k. We give a complete characterization of the computational complexities of recognizing k-partite secondary structures for all k ≥ 2, and show that this recognition problem is essentially the same as the k-colorability problem on circle graphs. We present two simple heuristics, iterated peeling and first-fit packing, for finding k-partite RNA secondary structures. For maximizing the number of base pair stackings, our iterated peeling heuristic achieves a constant approximation ratio of at most k for 2 ≤ k ≤ 5, and at most 6/(1 - (1 - 6/k)^k) ≤ 6/(1 - e^(-6)) < 6.01491 for k ≥ 6. Experiments on sequences from PseudoBase show that our first-fit packing heuristic outperforms the leading method HotKnots in predicting RNA secondary structures with pseudoknots. Source code, data set, and experimental results are available at http://www.cs.usu.edu/~mjiang/rna/kpartite/.
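
    The flavour of the first-fit packing idea can be sketched as follows: greedily assign each base pair to the first pseudoknot-free sub-structure ("page") in which it crosses nothing. This is a hedged illustration of the heuristic's general shape, not the authors' implementation:

```python
# Pairs (i, j) and (k, l) cross iff i < k < j < l after sorting; a
# pseudoknot-free set contains no crossing pairs.

def crosses(a, b):
    (i, j), (k, l) = sorted([a, b])
    return i < k < j < l

def first_fit_pages(pairs):
    pages = []                       # each page is a pseudoknot-free set
    for p in sorted(pairs):
        for page in pages:
            if not any(crosses(p, q) for q in page):
                page.append(p)
                break
        else:                        # no existing page fits: open a new one
            pages.append([p])
    return pages
```

    Three mutually crossing pairs need three pages, a nested structure needs one, and a simple pseudoknot needs two.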

  1. Gas/particle partitioning, particle-size distribution of atmospheric polybrominated diphenyl ethers in southeast Shanghai rural area and size-resolved predicting model.

    PubMed

    Su, Peng-Hao; Tomy, Gregg T; Hou, Chun-Yan; Yin, Fang; Feng, Dao-Lun; Ding, Yong-Sheng; Li, Yi-Fan

    2018-04-01

    A size-segregated gas/particle partitioning coefficient, K_Pi, was proposed and evaluated in predictive models on the basis of atmospheric polybrominated diphenyl ether (PBDE) field data, in comparison with the bulk coefficient K_P. Results revealed that the characteristics of atmospheric PBDEs in the southeast Shanghai rural area were generally consistent with previous investigations, suggesting that this investigation is representative of the present pollution status of atmospheric PBDEs. K_Pi was generally greater than the bulk K_P, indicating an overestimate of TSP (the mass concentration of total suspended particles) in the expression for bulk K_P. In predictive models, K_Pi led to a significant shift in regression lines as compared to K_P; caution is therefore needed when investigating sorption mechanisms using these regression lines. The differences between the performances of K_Pi and K_P help to explain some phenomena in predictive investigations, such as why the P_L^0 and K_OA models overestimate the particle fractions of PBDEs and why the models work better at high temperature than at low temperature. Our findings are important because they enable insight into the influence of particle size on predictive models. Copyright © 2018 Elsevier Ltd. All rights reserved.
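
    The bulk versus size-segregated distinction can be sketched with the standard partitioning definition K_P = (F / TSP) / A, where F is the particle-bound concentration, A the gas-phase concentration, and TSP the suspended-particle mass. All numbers below are invented for illustration, not data from the study:

```python
# K_P   = (sum of F over fractions / sum of TSP) / A      (bulk)
# K_Pi  = (F_i / TSP_i) / A                                (per size fraction)

def k_p(particle_conc, tsp, gas_conc):
    return (particle_conc / tsp) / gas_conc

f = [0.9, 0.4, 0.2]       # particle-bound PBDE per size fraction (made up)
tsp = [40.0, 30.0, 5.0]   # suspended-particle mass per size fraction
a = 2.0                   # gas-phase PBDE

k_bulk = k_p(sum(f), sum(tsp), a)                  # bulk coefficient
k_i = [k_p(fi, ti, a) for fi, ti in zip(f, tsp)]   # size-segregated
```

    Lumping every fraction into one TSP term can understate partitioning within individual size fractions, which is the overestimate-of-TSP effect noted in the abstract.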

  2. Application of New Partition Coefficients to Modeling Plagioclase

    NASA Technical Reports Server (NTRS)

    Fagan, A. L.; Neal, C. R.; Rapp, J. F.; Draper, D. S.; Lapen, T. J.

    2017-01-01

    Previously, studies that determined the partition coefficient for an element, i, between plagioclase and the residual basaltic melt (D_i plag) have been conducted using experimental conditions dissimilar from the Moon, and thus these values are not ideal for modeling plagioclase fractionation in a lunar system. However, recent work [1] has determined partition coefficients for plagioclase at lunar oxygen fugacities, and resulted in plagioclase with anorthite contents ≥ An90; these are significantly more calcic than plagioclase in previous studies, and the An content has a profound effect on partition coefficient values [2,3]. Plagioclase D-values, which are dependent on the An content of the crystal [e.g., 2-6], can be determined using published experimental data and the correlative An contents. Here, we examine new experimental data from [1] to ascertain their effect on the calculation of equilibrium liquids from Apollo 16 sample 60635,2. This sample is a coarse-grained, subophitic impact melt composed of 55% plagioclase laths with An94.4-98.7 [7,8], distinctly more calcic than those of previous partition coefficient studies (e.g., [3-6, 9-10]). Sample 60635,2 is notable in having several plagioclase trace element analyses containing a negative europium anomaly (-Eu) in the rare-earth element (REE) profile, rather than the typical positive Eu anomaly (+Eu) [7-8] (Fig. 1). The expected +Eu is due to the similarity of Eu2+ in size and charge with Ca2+, thereby allowing Eu2+ to be easily taken up by the plagioclase crystal structure, in contrast to the remaining REE3+. Some 60635,2 plagioclase crystals have only +Eu REE profiles, some have only -Eu REE profiles, and some have both +Eu and -Eu analyses in different areas of a single crystal [7, 8].
Moreover, there does not seem to be any core-rim association with the +Eu or -Eu analyses, nor does there appear to be a correlation between the size, shape, or location of a particular crystal within the sample and the sign of its Eu anomaly, which suggests a complex evolution. In order to investigate this sample further, we can calculate the equilibrium liquids, but with An contents distinct from previous experimental studies, we must calculate the appropriate partition coefficients for each trace element analysis.

  3. DEMONSTRATION BULLETIN: SOIL/SEDIMENT WASHING SYSTEM BERGMANN USA

    EPA Science Inventory

    The Bergmann USA Soil/Sediment Washing System is a waste minimization technique designed to separate or "partition" soils and sediments by grain size and density. In this water-based volume reduction process, hazardous contaminants are concentrated into a small residual portion...

  4. Weights and topology: a study of the effects of graph construction on 3D image segmentation.

    PubMed

    Grady, Leo; Jolly, Marie-Pierre

    2008-01-01

    Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process in each of these algorithms is to use the image content to generate a set of weights for the graph and then to set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
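
    The graph-construction step under study can be sketched minimally: pixels are nodes, 4-connected neighbours are joined by edges, and each weight decays with the intensity difference via the common Gaussian weighting w_ij = exp(-beta * (I_i - I_j)^2). This is a generic illustration (beta and the weighting choice are the kind of heuristic the paper examines), not the paper's specific configuration:

```python
import math

def build_graph(image, beta=1.0):
    # image: 2-D list of intensities; returns {(pixel, pixel): weight}
    rows, cols = len(image), len(image[0])
    edges = {}
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):   # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    diff = image[r][c] - image[rr][cc]
                    edges[((r, c), (rr, cc))] = math.exp(-beta * diff * diff)
    return edges
```

    Equal-intensity neighbours get weight 1; a strong intensity step gives a small weight, so an optimal partition prefers to cut there.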

  5. A discrete scattering series representation for lattice embedded models of chain cyclization

    NASA Astrophysics Data System (ADS)

    Fraser, Simon J.; Winnik, Mitchell A.

    1980-01-01

    In this paper we develop a lattice-based model of chain cyclization in the presence of a set of occupied sites V in the lattice. We show that within the approximation of a Markovian chain propagator the effect of V on the partition function for the system can be written as a time-ordered exponential series in which V behaves like a scattering potential and chainlength is the timelike parameter. The discrete and finite nature of this model allows us to obtain rigorous upper and lower bounds to the series limit. We adapt these formulas to calculation of the partition functions and cyclization probabilities of terminally and globally cyclizing chains. Two classes of cyclization are considered: in the first model the target set H may be visited repeatedly (the Markovian model); in the second case vertices in H may be visited at most once (the non-Markovian or taboo model). This formulation depends on two fundamental combinatorial structures, namely the inclusion-exclusion principle and the set of subsets of a set. We have tried to interpret these abstract structures with physical analogies throughout the paper.
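
    The Markovian lattice picture can be illustrated by brute force: enumerate all nearest-neighbour walks of length n on the square lattice that avoid a set V of occupied sites, and count the fraction that return to the origin (terminal cyclization). This is purely illustrative; the paper's scattering-series bounds are not computed here:

```python
from itertools import product

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def cyclization_probability(n, occupied=frozenset()):
    # fraction of allowed n-step walks from the origin that end at the origin
    returning = total = 0
    for walk in product(STEPS, repeat=n):
        x = y = 0
        blocked = False
        for dx, dy in walk:
            x, y = x + dx, y + dy
            if (x, y) in occupied:   # walk hits an occupied site: discard it
                blocked = True
                break
        if not blocked:
            total += 1
            if (x, y) == (0, 0):
                returning += 1
    return returning / total
```

    With no occupied sites, the closed-walk counts match the known values (4 of 16 two-step walks, 36 of 256 four-step walks return to the origin).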

  6. Fragment-based prediction of skin sensitization using recursive partitioning

    NASA Astrophysics Data System (ADS)

    Lu, Jing; Zheng, Mingyue; Wang, Yong; Shen, Qiancheng; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2011-09-01

    Skin sensitization is an important toxic endpoint in the risk assessment of chemicals. In this paper, structure-activity relationship analysis was performed on the skin sensitization potential of 357 compounds with local lymph node assay data. Structural fragments were extracted by GASTON (GrAph/Sequence/Tree extractiON) from the training set. Eight fragments with accuracy significantly higher than 0.73 (p < 0.1) were retained to make up an indicator fragment descriptor. The fragment descriptor and eight other physicochemical descriptors closely related to the endpoint were calculated to construct the recursive partitioning tree (RP tree) for classification. The balanced accuracies for the training set, test set I, and test set II in the leave-one-out model were 0.846, 0.800, and 0.809, respectively. The results highlight that the fragment-based RP tree is a preferable method for identifying skin sensitizers. Moreover, the selected fragments provide useful structural information for exploring sensitization mechanisms, and the RP tree creates a graphic tree to identify the most important properties associated with skin sensitization. Together they can provide guidance for the design of drugs with lower sensitization potential.
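
    The recursive-partitioning idea can be sketched with a toy tree grower: at each node choose the descriptor/threshold split that minimises weighted Gini impurity, then recurse. This is a generic stand-in for illustration, not the descriptors, data, or software of the study:

```python
def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)       # fraction of class 1
    return 2 * p * (1 - p)

def best_split(X, y):
    best = None
    for j in range(len(X[0])):          # try every descriptor j ...
        for t in sorted({row[j] for row in X}):   # ... and threshold t
            left = [y[i] for i, row in enumerate(X) if row[j] <= t]
            right = [y[i] for i, row in enumerate(X) if row[j] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, j, t)
    return best

def grow(X, y, depth=2):
    if depth == 0 or len(set(y)) == 1:
        return round(sum(y) / len(y))   # majority-class leaf
    _, j, t = best_split(X, y)
    lx = [(r, l) for r, l in zip(X, y) if r[j] <= t]
    rx = [(r, l) for r, l in zip(X, y) if r[j] > t]
    if not lx or not rx:
        return round(sum(y) / len(y))
    return (j, t, grow(*zip(*lx), depth - 1), grow(*zip(*rx), depth - 1))

def predict(tree, row):
    while isinstance(tree, tuple):
        j, t, left, right = tree
        tree = left if row[j] <= t else right
    return tree
```

    A depth-2 tree of this form already separates an XOR-like pattern that no single split can.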

  7. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.

  8. Water-soluble drug partitioning and adsorption in HEMA/MAA hydrogels.

    PubMed

    Dursch, Thomas J; Taylor, Nicole O; Liu, David E; Wu, Rong Y; Prausnitz, John M; Radke, Clayton J

    2014-01-01

    Two-photon confocal microscopy and back extraction with UV/Vis-absorption spectrophotometry quantify equilibrium partition coefficients, k, for six prototypical drugs in five soft-contact-lens-material hydrogels over a range of water contents from 40 to 92%. Partition coefficients were obtained for acetazolamide, caffeine, hydrocortisone, Oregon Green 488, sodium fluorescein, and theophylline in 2-hydroxyethyl methacrylate/methacrylic acid (HEMA/MAA, pKa≈5.2) copolymer hydrogels as functions of composition, aqueous pH (2 and 7.4), and salinity. At pH 2, the hydrogels are nonionic, whereas at pH 7.4, hydrogels are anionic due to MAA ionization. Solute adsorption on and nonspecific electrostatic interaction with the polymer matrix are pronounced. To express deviation from ideal partitioning, we define an enhancement or exclusion factor, E ≡ k/φ1, where φ1 is the hydrogel water volume fraction. All solutes exhibit E > 1 in 100 wt % HEMA hydrogels owing to strong specific adsorption to HEMA strands. For all solutes, E significantly decreases upon incorporation of anionic MAA into the hydrogel due to lack of adsorption onto charged MAA moieties. For dianionic sodium fluorescein and Oregon Green 488, and partially ionized monoanionic acetazolamide at pH 7.4, however, the decrease in E is more severe than that for similar-sized nonionic solutes. Conversely, at pH 2, E generally increases with addition of the nonionic MAA copolymer due to strong preferential adsorption to the uncharged carboxylic-acid group of MAA. For all cases, we quantitatively predict enhancement factors for the six drugs using only independently obtained parameters. In dilute solution, the enhancement factor for solute i, E_i, is conveniently expressed as a product of individual enhancement factors for size exclusion (E_i(ex)), electrostatic interaction (E_i(el)), and specific adsorption (E_i(ad)): E_i ≡ E_i(ex) · E_i(el) · E_i(ad).
    To obtain the individual enhancement factors, we employ an extended Ogston mesh-size distribution for E_i(ex); Donnan equilibrium for E_i(el); and Henry's law characterizing specific adsorption to the polymer chains for E_i(ad). Predicted enhancement factors are in excellent agreement with experiment. Copyright © 2013 Elsevier Ltd. All rights reserved.
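
    The multiplicative decomposition above reduces to simple arithmetic once the three factors are known. The factor values below are invented purely for illustration:

```python
# E_i = E_i(ex) * E_i(el) * E_i(ad), and k = E * phi1.

def enhancement(e_ex, e_el, e_ad):
    return e_ex * e_el * e_ad

def partition_coefficient(e_ex, e_el, e_ad, phi1):
    return enhancement(e_ex, e_el, e_ad) * phi1   # k = E * phi1

# e.g. mild size exclusion, electrostatic exclusion, strong adsorption:
E = enhancement(0.8, 0.5, 6.0)
k = partition_coefficient(0.8, 0.5, 6.0, phi1=0.6)
```

    Here adsorption dominates (E > 1), so the drug partitions into the gel more strongly than ideal water-volume partitioning would predict.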

  9. Partitioning in parallel processing of production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oflazer, K.

    1987-01-01

    This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
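
    One common way to approximate rule-to-processor partitioning is a greedy longest-processing-time heuristic: assign each production, heaviest estimated match cost first, to the currently least-loaded processor. This hedged sketch shows only that flavour; the thesis formulates the actual problem and algorithm differently:

```python
import heapq

def partition_rules(costs, k):
    # min-heap of (load, processor id, assigned rule costs)
    heap = [(0.0, p, []) for p in range(k)]
    heapq.heapify(heap)
    for c in sorted(costs, reverse=True):
        load, p, rules = heapq.heappop(heap)   # least-loaded processor
        rules.append(c)
        heapq.heappush(heap, (load + c, p, rules))
    return [rules for _, _, rules in sorted(heap, key=lambda e: e[1])]
```

    The unique processor id in each heap tuple breaks ties so the rule lists are never compared.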

  10. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information

    PubMed Central

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
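
    For context, the refinement phase can be sketched with classic Moore-style partition refinement on a complete DFA; the paper's contribution, a coarse initial partition built from backward depth information, is not reproduced here:

```python
def minimize(states, alphabet, delta, accepting):
    # delta maps state -> {symbol -> next state}
    part = {s: (s in accepting) for s in states}   # coarse: accepting or not
    while True:
        # a state's signature: its own block plus its successors' blocks
        sig = {s: (part[s], tuple(part[delta[s][a]] for a in alphabet))
               for s in states}
        index = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new = {s: index[sig[s]] for s in states}
        if len(set(new.values())) == len(set(part.values())):
            return new                             # no further refinement
        part = new
```

    States landing in the same final block are equivalent and can be merged in the minimal automaton.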

  11. Harnessing the Bethe free energy†

    PubMed Central

    Bapst, Victor

    2016-01-01

    ABSTRACT A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k‐SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [Krzakala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ω^n for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178

  12. Exact calculation of loop formation probability identifies folding motifs in RNA secondary structures

    PubMed Central

    Sloma, Michael F.; Mathews, David H.

    2016-01-01

    RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924

  13. Phobos MRO/CRISM visible and near-infrared (0.5-2.5 μm) spectral modeling

    NASA Astrophysics Data System (ADS)

    Pajola, Maurizio; Roush, Ted; Dalle Ore, Cristina; Marzo, Giuseppe A.; Simioni, Emanuele

    2018-05-01

    This paper focuses on the spectral modeling of the surface of Phobos in the wavelength range between 0.5 and 2.5 μm. We exploit the Phobos Mars Reconnaissance Orbiter/Compact Reconnaissance Imaging Spectrometer for Mars (MRO/CRISM) dataset and extend the study area presented by Fraeman et al. (2012), including spectra from nearly the entire surface observed. Without a priori selection of surface locations we use the unsupervised K-means partitioning algorithm developed by Marzo et al. (2006) to investigate the spectral variability across the surface of Phobos. The statistical partitioning identifies seven clusters. We investigate the compositional information contained within the average spectra of four clusters using the radiative transfer model of Shkuratov et al. (1999). We use optical constants of Tagish Lake meteorite (TL), from Roush (2003), and pyroxene glass (PM80), from Jaeger et al. (1994) and Dorschner et al. (1995), as previously suggested by Pajola et al. (2013) as inputs for the calculations. The model results show good agreement in slope when compared to the averages of the CRISM spectral clusters. In particular, the best fitting model of the cluster with the steepest spectral slope yields relative abundances that are equal to those of Pajola et al. (2013), i.e. 20% PM80 and 80% TL, but grain sizes that are 12 μm smaller for PM80 and 4 μm smaller for TL (the grain sizes are 11 μm for PM80 and 20 μm for TL in Pajola et al. (2013), respectively). This modest discrepancy may arise from the fact that the areas observed by CRISM and those analyzed in Pajola et al. (2013) are at opposite locations on Phobos and are characterized by different morphological and weathering settings. Instead, as the clusters' spectral slopes decrease, the best fits obtained show trends related to both relative abundance and grain size that are not observed for the cluster with the steepest spectral slope.
    With a decrease in slope there is a general increase in the relative percentage of PM80 from 12% to 18% and an associated decrease of TL from 88% to 82%. Simultaneously, the PM80 grain sizes decrease from 9 to 5 μm and the TL grain sizes increase from 13 to 16 μm. The best fitting models show relative abundances and grain sizes that partially overlap. This supports the hypothesis that from a compositional perspective the transition between the highest and lowest slopes on Phobos is subtle, and is characterized by a smooth change of relative abundances and grain sizes, instead of a distinct dichotomy between the areas.
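
    The unsupervised partitioning step can be sketched with plain K-means on toy 3-band spectra: assign each spectrum to its nearest centroid and recompute the means. Illustrative only; the study used the K-means variant of Marzo et al. (2006) on full CRISM spectra:

```python
def kmeans(spectra, k, iters=20):
    centroids = list(spectra[:k])          # deterministic seeding
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in spectra:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(s, centroids[c])))
            groups[j].append(s)            # nearest-centroid assignment
        centroids = [tuple(sum(col) / len(g) for col in zip(*g))
                     if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids, groups
```

    Two well-separated spectral populations end up in two clusters of the expected sizes.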

  14. Constraining Aggregate-Scale Solar Energy Partitioning in Arctic Sea Ice Through Synthesis of Remote Sensing and Autonomous In-Situ Observations.

    NASA Astrophysics Data System (ADS)

    Wright, N.; Polashenski, C. M.; Deeb, E. J.; Morriss, B. F.; Song, A.; Chen, J.

    2015-12-01

    One of the key processes controlling sea ice mass balance in the Arctic is the partitioning of solar energy between reflection back to the atmosphere and absorption into the ice and upper ocean. We investigate the solar energy balance in the ice-ocean system using in-situ data collected from Arctic Observing Network (AON) sea ice sites and imagery from high resolution optical satellites. AON assets, including ice mass balance buoys and ice tethered profilers, monitor the storage and fluxes of heat in the ice-ocean system. High resolution satellite imagery, processed using object-based image classification techniques, allows us to quantify the evolution of surrounding ice conditions, including melt pond coverage and floe size distribution, at aggregate scale. We present results from regionally representative sites that constrain the partitioning of absorbed solar energy between ice melt and ocean storage, and quantify the strength of the ice-albedo feedback. We further demonstrate how the results can be used to validate model representations of the physical processes controlling ice-albedo feedbacks. The techniques can be extended to understand solar partitioning across the Arctic basin using additional sites and model based data integration.

  15. Macroecology: A Primer for Biological Oceanography

    NASA Astrophysics Data System (ADS)

    Li, W. K. W.

    2016-02-01

    Macroecology is the study of ecological patterns discerned at a spatial, temporal, or organization scale higher than that at which the focal entities interact. Such patterns are statistical or emergent manifestations arising from the ensemble of component entities. Although macroecology is a neologism largely based in terrestrial and avian ecology, macroscopic patterns have long been recognised in biological oceanography. Familiar examples include Redfield elemental stoichiometry, Elton trophic pyramids, Sheldon biomass spectrum, and Margalef life-forms mandala. Macroecological regularities can often be found along various continua, such as along body size in power-law scaling or along habitat temperature in metabolic theory. Uniquely in oceanography, a partition of the world ocean continuum into Longhurst biogeochemical provinces provides a spatial organization well-suited for macroecological investigations. In this rational discrete approach, fundamental processes in physical and biological oceanography that differentiate a set of non-overlapping ocean regions also appear to shape the macroecological structure of phytoplankton communities.

  16. Forming an ad-hoc nearby storage, based on IKAROS and social networking services

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos; Cotronis, Yiannis; Markou, Christos

    2014-06-01

    We present an ad-hoc "nearby" storage system, based on IKAROS and social networking services such as Facebook. By design, IKAROS is capable of increasing or decreasing the number of nodes of the I/O system instance on the fly, without bringing everything down or losing data. IKAROS can decide the file partition distribution schema by taking into account requests from the user or an application, as well as a domain or Virtual Organization policy. In this way, it is possible to form multiple instances of smaller-capacity, higher-bandwidth storage utilities capable of responding in an ad-hoc manner. This approach, focusing on flexibility, can scale both up and down and so can provide more cost-effective infrastructures for both large-scale and smaller-size systems. A set of experiments is performed comparing IKAROS with PVFS2, using multiple client requests under the HPC IOR benchmark and MPICH2.

  17. Epidemic Reconstruction in a Phylogenetics Framework: Transmission Trees as Partitions of the Node Set

    PubMed Central

    Hall, Matthew; Woolhouse, Mark; Rambaut, Andrew

    2015-01-01

    The use of genetic data to reconstruct the transmission tree of infectious disease epidemics and outbreaks has been the subject of an increasing number of studies, but previous approaches have usually either made assumptions that are not fully compatible with phylogenetic inference, or, where they have based inference on a phylogeny, have employed a procedure that requires this tree to be fixed. At the same time, the coalescent-based models of the pathogen population that are employed in the methods usually used for time-resolved phylogeny reconstruction are a considerable simplification of the epidemic process, as they assume that pathogen lineages mix freely. Here, we contribute a new method that is simultaneously a phylogeny reconstruction method for isolates taken from an epidemic, and a procedure for transmission tree reconstruction. We observe that, if one or more samples are taken from each host in an epidemic or outbreak and these are used to build a phylogeny, a transmission tree is equivalent to a partition of the set of nodes of this phylogeny, such that each partition element is a set of nodes that is connected in the full tree and contains all the tips corresponding to samples taken from one and only one host. We then implement a Markov chain Monte Carlo (MCMC) procedure for simultaneous sampling from the spaces of both trees, utilising a newly-designed set of phylogenetic tree proposals that also respect node partitions. We calculate the posterior probability of these partitioned trees based on a model that acknowledges the population structure of an epidemic by employing an individual-based disease transmission model and a coalescent process taking place within each host. We demonstrate our method, first using simulated data, and then with sequences taken from the H7N7 avian influenza outbreak that occurred in the Netherlands in 2003.
We show that it is superior to established coalescent methods for reconstructing the topology and node heights of the phylogeny and performs well for transmission tree reconstruction when the phylogeny is well-resolved by the genetic data, but caution that this will often not be the case in practice and that existing genetic and epidemiological data should be used to configure such analyses whenever possible. This method is available for use by the research community as part of BEAST, one of the most widely-used packages for reconstruction of dated phylogenies. PMID:26717515
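
    The key observation above, that a transmission tree corresponds to a partition of the phylogeny's node set in which every block is connected, is easy to check programmatically. For a rooted tree given as child-to-parent links, a block is connected exactly when precisely one of its members has a parent outside the block (or is the root). The toy tree below is hypothetical:

```python
def blocks_connected(parent, partition):
    # parent: dict mapping each non-root node to its parent
    for block in partition:
        block = set(block)
        # members whose parent lies outside the block (or who are the root)
        roots = [n for n in block
                 if parent.get(n) is None or parent[n] not in block]
        if len(roots) != 1:
            return False
    return True
```

    A block with two such "local roots" is disconnected in the tree and therefore cannot represent the nodes attributed to a single host.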

  18. Modeling homeorhetic trajectories of milk component yields, body composition and dry-matter intake in dairy cows: Influence of parity, milk production potential and breed.

    PubMed

    Daniel, J B; Friggens, N C; van Laar, H; Ingvartsen, K L; Sauvant, D

    2018-06-01

    The control of nutrient partitioning is complex and affected by many factors, among them physiological state and production potential. Therefore, the current model aims to provide a dynamic framework for dairy cows to predict a consistent set of reference performance patterns (milk component yields, body composition change, dry-matter intake) sensitive to physiological status across a range of milk production potentials (within and between breeds). Flows and partition of net energy toward maintenance, growth, gestation, body reserves and milk components are described in the model. The structure of the model is characterized by two sub-models, a regulating sub-model of homeorhetic control which sets dynamic partitioning rules along the lactation, and an operating sub-model that translates this into animal performance. The regulating sub-model describes lactation as the result of three driving forces: (1) use of previously acquired resources through mobilization, (2) acquisition of new resources with a priority of partition towards milk and (3) subsequent use of resources towards body reserves gain. The dynamics of these three driving forces were adjusted separately for fat (milk and body), protein (milk and body) and lactose (milk). Milk yield is predicted from lactose and protein yields with an empirical equation developed from literature data. The model predicts desired dry-matter intake as an outcome of net energy requirements for a given dietary net energy content. The parameters controlling milk component yields and body composition changes were calibrated using two data sets in which the diet was the same for all animals. Weekly data from Holstein dairy cows were used to calibrate the model within-breed across milk production potentials.
A second data set was used to evaluate the model and to calibrate it for breed differences (Holstein, Danish Red and Jersey) on the mobilization/reconstitution of body composition and on the yield of individual milk components. These calibrations showed that the model framework was able to adequately simulate milk yield, milk component yields, body composition changes and dry-matter intake throughout lactation for primiparous and multiparous cows differing in their production level.

  19. Various forms of indexing HDMR for modelling multivariate classification problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksu, Çağrı; Tunga, M. Alper

    2014-12-10

    The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems with real-world data. In most cases we do not know all possible class values in the domain of the given problem; that is, we have a non-orthogonal data structure. However, Plain HDMR requires an orthogonal data structure in the problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real-life classification problems. To test these different forms, several well-known multivariate classification problems from the UCI Machine Learning Repository were used, and the observed accuracies lie between 80% and 95%, which is very satisfactory.
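To make the Plain HDMR idea above concrete, here is a minimal first-order, ANOVA-style HDMR decomposition of data tabulated on a full (orthogonal) grid. This is a generic sketch, not the authors' Indexing HDMR; the function name and the additive test function are ours:

```python
import itertools
import statistics

def first_order_hdmr(values, grids):
    """ANOVA-style first-order HDMR of data tabulated on a full grid.
    values: dict mapping coordinate tuples -> f(x); grids: list of axis value lists."""
    points = list(itertools.product(*grids))
    f0 = statistics.fmean(values[p] for p in points)      # constant term
    terms = []
    for i, axis in enumerate(grids):
        fi = {}
        for v in axis:
            slice_pts = [p for p in points if p[i] == v]  # fix x_i = v
            fi[v] = statistics.fmean(values[p] for p in slice_pts) - f0
        terms.append(fi)
    return f0, terms

# additive test function f(x, y) = 2x + 3y is reproduced exactly by
# the first-order expansion f0 + f1(x) + f2(y)
grids = [[0.0, 1.0, 2.0], [0.0, 1.0]]
vals = {(x, y): 2*x + 3*y for x, y in itertools.product(*grids)}
f0, (f1, f2) = first_order_hdmr(vals, grids)
approx = {(x, y): f0 + f1[x] + f2[y] for x, y in vals}
```

For a purely additive function the first-order expansion is exact; interaction terms would require higher-order components.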

  20. Resource partitioning of sonar frequency bands in rhinolophoid bats.

    PubMed

    Heller, Klaus-Gerhard; Helversen, Otto V

    1989-08-01

    In the constant-frequency portions of the orientation calls of various Rhinolophus and Hipposideros species, the frequency with the strongest amplitude was studied comparatively. (1) In the five European species of the genus Rhinolophus, call frequencies are either species-specific (R. ferrumequinum, R. blasii and R. euryale) or they overlap (R. hipposideros and R. mehelyi). The call frequency distributions are approximately 5-9 kHz wide; thus their ranges spread less than ±5% from the mean (Fig. 1). Frequency distributions are considerably narrower within smaller geographic areas. (2) As in other bat groups, call frequencies of the Rhinolophoidea are negatively correlated with body size (Fig. 3), with separate regression lines for the genera Rhinolophus and Hipposideros. (3) Species from drier climates have on average higher call frequencies than species from tropical rain forests. (4) The Krau Game Reserve, a still largely intact rain forest area in Malaysia, harbours at least 12 syntopic Rhinolophus and Hipposideros species. Their call frequencies lie between 40 and 200 kHz (Fig. 2). Distribution over the available frequency range is significantly more even than would be expected from chance alone. Two different null hypotheses to test for random character distribution were derived from frequency-size relations and by sampling species assemblages from a species pool (Monte Carlo method); both were rejected. In particular, call frequencies lying close together are avoided (Figs. 4, 5). Conversely, the distribution of size ratios complied with a corresponding null hypothesis. This even distribution may be a consequence of resource partitioning with respect to prey type. Alternatively, the importance of these calls as social signals (e.g. recognition of conspecifics) might have necessitated a partitioning of the communication channel.
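The Monte Carlo null-model test mentioned above can be sketched as follows. This is an illustrative re-implementation, not the authors' exact procedure: we score evenness by the minimum spacing of call frequencies and compare it against random draws of the same size from a hypothetical species pool:

```python
import random

def evenness_pvalue(observed, pool, trials=2000, seed=1):
    """Monte Carlo null-model test in the spirit of the abstract: are the
    observed call frequencies more evenly spread (larger minimum spacing)
    than random assemblages of the same size drawn from a species pool?
    Small p-values indicate spacing more even than expected by chance."""
    rng = random.Random(seed)

    def min_gap(freqs):
        s = sorted(freqs)
        return min(b - a for a, b in zip(s, s[1:]))

    obs = min_gap(observed)
    hits = sum(min_gap(rng.sample(pool, len(observed))) >= obs
               for _ in range(trials))
    return hits / trials

# hypothetical perfectly even assemblage drawn against a 40-200 kHz pool
p = evenness_pvalue([40, 80, 120, 160, 200], list(range(40, 201)))
```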

  1. A Parallel Pipelined Renderer for the Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Chiueh, Tzi-Cker; Ma, Kwan-Liu

    1997-01-01

    This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data occupy large amounts of storage, and visualizing them requires reading large files continuously or periodically throughout the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning the processors into groups that render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in a 40-50% saving in overall rendering time.
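A toy cost model can illustrate the trade-off between resource utilization and pipeline startup latency. All parameters below (render time, I/O time, per-volume overhead) are invented for illustration and do not come from the paper:

```python
def total_time(num_procs, num_vols, groups, t_render_serial, t_io, overhead):
    """Toy cost model for pipelined time-varying volume rendering. Each of
    `groups` groups holds num_procs//groups processors and renders every
    groups-th volume; volume loads (t_io) overlap with rendering.
    Illustrative parameters only, not the paper's measured costs."""
    gsize = num_procs // groups
    t_vol = t_render_serial / gsize + overhead * gsize  # render time per volume
    startup = groups * t_io                             # pipeline fill latency
    per_group = -(-num_vols // groups)                  # ceil: volumes per group
    return startup + per_group * max(t_vol, t_io)

# sweep group counts that evenly divide 16 processors for 64 time steps
best = min(range(1, 17),
           key=lambda g: total_time(16, 64, g, 8.0, 1.0, 0.05)
           if 16 % g == 0 else float("inf"))
```

With these assumed costs, an intermediate group count wins: one big group pays full parallel overhead per volume, while many small groups pay a long pipeline fill.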

  2. Sting, Carry and Stock: How Corpse Availability Can Regulate De-Centralized Task Allocation in a Ponerine Ant Colony

    PubMed Central

    Schmickl, Thomas; Karsai, Istvan

    2014-01-01

    We develop a model to produce plausible patterns of task partitioning in the ponerine ant Ectatomma ruidum based on the availability of living prey and prey corpses. The model is based on the organizational capabilities of a “common stomach” through which the colony uses the availability of a natural (food) substance as a major communication channel to regulate the income and expenditure of that same substance. This communication channel also has a central role in regulating task partitioning of collective hunting behavior in a supply-and-demand-driven manner. Our model shows that task partitioning of the collective hunting behavior in E. ruidum can be explained by regulation due to a common stomach system. The saturation of the common stomach provides accessible information to individual ants, which adjust their hunting behavior accordingly by engaging in or abandoning the stinging or transporting tasks. The common stomach is able to establish and stabilize an effective mix of workforce to exploit the prey population and to transport food into the nest. This system is also able to react to external perturbations, such as changes in prey density or accumulation of food in the nest, in a de-centralized, homeostatic way. Under stable conditions the system evolves towards an equilibrium in colony size and prey density. Our model shows that organization of work through a common stomach system can allow Ectatomma ruidum to collectively forage for food in a robust, reactive and reliable way. The model is compared to previously published models that followed a different modeling approach. Based on our model analysis we also suggest a series of experiments for which our model gives plausible predictions. These predictions are used to formulate a set of testable hypotheses that should be investigated empirically in future experimentation. PMID:25493558

  3. Algorithms for parallel flow solvers on message passing architectures

    NASA Technical Reports Server (NTRS)

    Vanderwijngaart, Rob F.

    1995-01-01

    The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique, even though it has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for concentrating on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these, the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process can be determined. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms. 
If grid blocks at boundaries are not at least as large in the wall-normal direction as those immediately adjacent to them, then the first processor in the pipeline will receive a computational load that is less than that of subsequent processors, magnifying the pipeline slowdown effect. Extra compensation is needed for grid boundary effects, even if all grid blocks are equally sized.

  4. Two-Dimensional Offline Chromatographic Fractionation for the Characterization of Humic-Like Substances in Atmospheric Aerosol Particles.

    PubMed

    Spranger, Tobias; van Pinxteren, Dominik; Herrmann, Hartmut

    2017-05-02

    Organic carbon in atmospheric particles comprises a large fraction of chromatographically unresolved compounds, often referred to as humic-like substances (HULIS), which influence particle properties and impact climate, human health, and ecosystems. To better understand their composition, a two-dimensional (2D) offline method combining size-exclusion (SEC) and reversed-phase liquid chromatography (RP-HPLC) using a new spiked gradient profile is presented. It separates HULIS into 55 fractions of different size and polarity, with estimated ranges of molecular weight and octanol/water partitioning coefficient (log P) of 160-900 g/mol and 0.2-3.3, respectively. The distribution of HULIS within the 2D size versus polarity space is illustrated with heat maps of ultraviolet absorption at 254 nm. It is found to differ strongly in a small example set of samples from a background site near Leipzig, Germany. In winter, the most intense signals were obtained for the largest molecules (>520 g/mol) with low polarity (log P ∼ 1.9), whereas in summer, smaller (225-330 g/mol) and more polar (log P ∼ 0.55) molecules dominated. The method reveals such differences in HULIS composition in a more detailed manner than previously possible and can therefore help to better elucidate the sources of HULIS in different seasons or at different sites. Analyzing Suwannee River fulvic acid as a common HULIS surrogate shows a similar polarity range, but the sizes are clearly larger than those of atmospheric HULIS.

  5. 47 CFR 101.1415 - Partitioning and disaggregation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... American Datum (NAD83). (d) Unjust enrichment. 12 GHz licensees that received a bidding credit and... be subject to the provisions concerning unjust enrichment as set forth in § 1.2111 of this chapter...

  6. Clustering, Seriation, and Subset Extraction of Confusion Data

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Steinley, Douglas

    2006-01-01

    The study of confusion data is a well established practice in psychology. Although many types of analytical approaches for confusion data are available, among the most common methods are the extraction of 1 or more subsets of stimuli, the partitioning of the complete stimulus set into distinct groups, and the ordering of the stimulus set. Although…

  7. Foreign Language Analysis and Recognition (FLARe)

    DTIC Science & Technology

    2016-10-08

    Character Error Rates (CERs) were obtained with each feature set: (1) 19.2%, (2) 17.3%, and (3) 15.3%. Based on these results, a GMM-HMM speech recognition system... These systems were evaluated on the HUB4 and HKUST test partitions. Table 7 shows the CER obtained on each test set. Whereas including the HKUST data

  8. Screening-level models to estimate partition ratios of organic chemicals between polymeric materials, air and water.

    PubMed

    Reppas-Chrysovitsinos, Efstathios; Sobek, Anna; MacLeod, Matthew

    2016-06-15

    Polymeric materials flowing through the technosphere are repositories of organic chemicals throughout their life cycle. Equilibrium partition ratios of organic chemicals between these materials and air (KMA) or water (KMW) are required for models of fate and transport, high-throughput exposure assessment and passive sampling. KMA and KMW have been measured for a growing number of chemical/material combinations, but significant data gaps still exist. We assembled a database of 363 KMA and 910 KMW measurements for 446 individual compounds and nearly 40 individual polymers and biopolymers, collected from 29 studies. We used the EPI Suite and ABSOLV software packages to estimate physicochemical properties of the compounds and we employed an empirical correlation based on Trouton's rule to adjust the measured KMA and KMW values to a standard reference temperature of 298 K. Then, we used a thermodynamic triangle with Henry's law constant to calculate a complete set of 1273 KMA and KMW values. Using simple linear regression, we developed a suite of single parameter linear free energy relationship (spLFER) models to estimate KMA from the EPI Suite-estimated octanol-air partition ratio (KOA) and KMW from the EPI Suite-estimated octanol-water (KOW) partition ratio. Similarly, using multiple linear regression, we developed a set of polyparameter linear free energy relationship (ppLFER) models to estimate KMA and KMW from ABSOLV-estimated Abraham solvation parameters. We explored the two LFER approaches to investigate (1) their performance in estimating partition ratios, and (2) uncertainties associated with treating all different polymers as a single "bulk" polymeric material compartment. The models we have developed are suitable for screening assessments of the tendency for organic chemicals to be emitted from materials, and for use in multimedia models of the fate of organic chemicals in the indoor environment. 
In screening applications we recommend that KMA and KMW be modeled as 0.06 × KOA and 0.06 × KOW, respectively, with an uncertainty range of a factor of 15.
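The recommended screening rule is simple enough to state in code. The sketch below encodes only the 0.06 factor and the factor-of-15 uncertainty band quoted above; the function name and example chemical are hypothetical:

```python
def screen_partition_ratio(k_octanol, factor=0.06, uncertainty=15.0):
    """Screening estimate of a material/air (from KOA) or material/water
    (from KOW) partition ratio, following the abstract's recommendation:
    central value 0.06 x K_octanol, uncertain to within a factor of 15."""
    central = factor * k_octanol
    return central, (central / uncertainty, central * uncertainty)

# hypothetical chemical with an EPI Suite-estimated KOA of 1e8
central, (lo, hi) = screen_partition_ratio(1e8)
```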

  9. Statistical mechanics of high-density bond percolation

    NASA Astrophysics Data System (ADS)

    Timonin, P. N.

    2018-05-01

    High-density (HD) percolation describes the percolation of specific κ-clusters: compact sets of filled sites in which each site is connected to at least κ nearest filled sites. It takes place in the classical patterns of independently distributed sites or bonds in which the ordinary percolation transition also exists. Hence, the study of the series of κ-type HD percolations amounts to describing the structure of classical clusters, for which κ-clusters constitute κ-cores nested one into another. Such data are needed to describe a number of physical, biological, and information properties of complex systems on random lattices, graphs, and networks. These range from magnetic properties of semiconductor alloys to anomalies in supercooled water and clustering in biological and social networks. Here we present a statistical mechanics approach to HD bond percolation on an arbitrary graph. It is shown that the generating function for the κ-clusters' size distribution can be obtained from the partition function of a specific q-state Potts-Ising model in the q → 1 limit. Using this approach we find exact κ-cluster size distributions for the Bethe lattice and the Erdős-Rényi graph. The application of the method to Euclidean lattices is also discussed.
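The nested κ-cores mentioned above are, on a given graph, the classical k-cores obtained by iteratively peeling vertices with too few neighbours. A minimal peeling sketch (our own illustration, not the paper's generating-function machinery):

```python
def k_core(adj, k):
    """Return the vertex set of the k-core: the maximal subgraph in which
    every remaining vertex keeps at least k neighbours.
    adj: dict node -> set of neighbouring nodes (undirected)."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if len(adj[v] & alive) < k:   # too few surviving neighbours
                alive.discard(v)
                changed = True
    return alive

# toy graph: a 4-clique with a pendant chain; the 3-core is the clique
adj = {
    1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5},
    5: {4, 6}, 6: {5},
}
```

Raising k peels away successively larger shells, which is the nesting of κ-cores the abstract refers to.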

  10. Fluid mechanical scaling of impact craters in unconsolidated granular materials

    NASA Astrophysics Data System (ADS)

    Miranda, Colin S.; Dowling, David R.

    2015-11-01

    A single scaling law is proposed for the diameter of simple low- and high-speed impact craters in unconsolidated granular materials where spall is not apparent. The scaling law is based on the assumption that gravity- and shock-wave effects set crater size, and is formulated in terms of a dimensionless crater diameter, and an empirical combination of Froude and Mach numbers. The scaling law involves the kinetic energy and speed of the impactor, the acceleration of gravity, and the density and speed of sound in the target material. The size of the impactor enters the formulation but divides out of the final empirical result. The scaling law achieves a 98% correlation with available measurements from drop tests, ballistic tests, missile impacts, and centrifugally-enhanced gravity impacts for a variety of target materials (sand, alluvium, granulated sugar, and expanded perlite). The available measurements cover more than 10 orders of magnitude in impact energy. For subsonic and supersonic impacts, the crater diameter is found to scale with the 1/4- and 1/6-power, respectively, of the impactor kinetic energy with the exponent crossover occurring near a Mach number of unity. The final empirical formula provides insight into how impact energy partitioning depends on Mach number.
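The reported power-law behaviour can be sketched as a piecewise scaling with a crossover near Mach 1. The coefficients below are placeholders, not the paper's fitted constants:

```python
def crater_diameter(energy, mach, c_sub=1.0, c_sup=1.0):
    """Illustrative piecewise power law for crater diameter versus impact
    kinetic energy, per the abstract: D ~ E**(1/4) for subsonic impacts and
    D ~ E**(1/6) for supersonic impacts, crossing over near Mach 1.
    c_sub and c_sup are placeholder prefactors, not fitted values."""
    if mach < 1.0:
        return c_sub * energy ** 0.25
    return c_sup * energy ** (1.0 / 6.0)

# doubling the impact energy grows a subsonic crater by 2**(1/4) ~ 19%,
# but a supersonic crater by only 2**(1/6) ~ 12%
ratio_sub = crater_diameter(2.0, 0.5) / crater_diameter(1.0, 0.5)
ratio_sup = crater_diameter(2.0, 2.0) / crater_diameter(1.0, 2.0)
```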

  11. Climatic and physiographic controls of spatial variability in surface water balance over the contiguous United States using the Budyko relationship

    NASA Astrophysics Data System (ADS)

    Abatzoglou, John T.; Ficklin, Darren L.

    2017-09-01

    The geographic variability in the partitioning of precipitation into surface runoff (Q) and evapotranspiration (ET) is fundamental to understanding regional water availability. The Budyko equation suggests this partitioning is strictly a function of aridity, yet observed deviations from this relationship for individual watersheds impede using the framework to model surface water balance in ungauged catchments and under future climate and land use scenarios. A set of climatic, physiographic, and vegetation metrics were used to model the spatial variability in the partitioning of precipitation for 211 watersheds across the contiguous United States (CONUS) within Budyko's framework through the free parameter ω. A generalized additive model found that four widely available variables, precipitation seasonality, the ratio of soil water holding capacity to precipitation, topographic slope, and the fraction of precipitation falling as snow, explained 81.2% of the variability in ω. The ω model applied to the Budyko equation explained 97% of the spatial variability in long-term Q for an independent set of watersheds. The ω model was also applied to estimate the long-term water balance across the CONUS for both contemporary and mid-21st century conditions. The modeled partitioning of observed precipitation to Q and ET compared favorably across the CONUS with estimates from more sophisticated land-surface modeling efforts. For mid-21st century conditions, the model simulated an increase in the fraction of precipitation used by ET across the CONUS with declines in Q for much of the eastern CONUS and mountainous watersheds across the western United States.
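The Budyko framework with the free parameter ω is commonly written in Fu's form; a sketch is below. The paper's generalized additive model for ω itself is not reproduced here, and the input values are invented:

```python
def budyko_fu(precip, pet, omega):
    """Fu's form of the Budyko curve: ET/P = 1 + phi - (1 + phi**omega)**(1/omega),
    with phi = PET/P the aridity index and omega the free parameter the paper
    models from climate and physiography. Returns (ET, Q) with Q = P - ET."""
    phi = pet / precip                      # aridity index
    et_frac = 1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega)
    et = et_frac * precip
    return et, precip - et

# hypothetical humid watershed: P = 1000 mm/yr, PET = 800 mm/yr, omega = 2.6
et, q = budyko_fu(precip=1000.0, pet=800.0, omega=2.6)
```

Larger ω pushes the partitioning toward ET (less runoff) at the same aridity, which is how the fitted ω captures the watershed-to-watershed deviations described above.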

  12. Distribution and grain-size partitioning of metals in bottom sediments of an experimentally acidified Wisconsin lake

    USGS Publications Warehouse

    Elder, John F.

    2007-01-01

    A study of concentrations and distribution of major and trace elements in surficial bottom sediments of Little Rock Lake in northern Wisconsin included examination of spatial variation and grain-size effects. No significant differences with respect to metal distribution in sediments were observed between the two basins of the lake, despite the experimental acidification of one of the basins from pH 6.1 to 4.6. The concentrations of most elements in the lake sediments were generally similar to soil concentrations in the area and were well below sediment quality criteria. Two exceptions were lead and zinc, whose concentrations in July 1990 exceeded the criteria of 50 μg/g and 100 μg/g, respectively, in both littoral and pelagic sediments. Concentrations of some elements, particularly Cu, Pb, and Zn, increased along transects from nearshore to midlake, following a similar gradient of sedimentary organic carbon. In contrast, Mn, Fe, and alkali/alkaline-earth elements were at maximum concentrations in nearshore sediments. These elements are less likely to partition to organic particles, and their distribution is more dependent on mineralogical composition, grain size, and other factors. Element concentrations varied among different sediment grain-size fractions, although a simple inverse relation to grain size was not observed. Fe, Mn, Pb, and Zn were more concentrated in a grain-size range 20–60 μm than in either the very fine or the coarse fractions, possibly because of the aggregation of smaller particles cemented together by organic and Fe/Mn hydrous-oxide coatings.

  13. Effects of plasma proteins on sieving of tracer macromolecules in glomerular basement membrane.

    PubMed

    Lazzara, M J; Deen, W M

    2001-11-01

    It was found previously that the sieving coefficients of Ficoll and Ficoll sulfate across isolated glomerular basement membrane (GBM) were greatly elevated when BSA was present at physiological levels, and it was suggested that most of this increase might have been the result of steric interactions between BSA and the tracers (5). To test this hypothesis, we extended the theory for the sieving of macromolecular tracers to account for the presence of a second, abundant solute. Increasing the concentration of an abundant solute is predicted to increase the equilibrium partition coefficient of a tracer in a porous or fibrous membrane, thereby increasing the sieving coefficient. The magnitude of this partitioning effect depends on solute size and membrane structure. The osmotic reduction in filtrate velocity caused by an abundant, mostly retained solute will also tend to elevate the tracer sieving coefficient. The osmotic effect alone explained only about one-third of the observed increase in the sieving coefficients of Ficoll and Ficoll sulfate, whereas the effect of BSA on tracer partitioning was sufficient to account for the remainder. At physiological concentrations, predictions for tracer sieving in the presence of BSA were found to be insensitive to the assumed shape of the protein (sphere or prolate spheroid). For protein mixtures, the theoretical effect of 6 g/dl BSA on the partitioning of spherical tracers was indistinguishable from that of 3 g/dl BSA and 3 g/dl IgG. This suggests that for partitioning and sieving studies in vitro, a good experimental model for plasma is a BSA solution with a mass concentration matching that of total plasma protein. The effect of plasma proteins on tracer partitioning is expected to influence sieving not only in isolated GBM but also in intact glomerular capillaries in vivo.
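For background, the classical steric (hard-sphere, cylindrical-pore) partition coefficient that such sieving theories build on is Φ = (1 − λ)², with λ the solute-to-pore radius ratio. The paper's extension to a second, abundant solute is not reproduced in this sketch:

```python
def steric_partition_coefficient(solute_radius, pore_radius):
    """Classical steric partition coefficient of a rigid sphere in a long
    cylindrical pore, Phi = (1 - lambda)**2, lambda = a_solute / a_pore.
    This is the dilute-limit baseline; the paper's theory adds the increase
    in tracer partitioning caused by an abundant solute such as BSA."""
    lam = solute_radius / pore_radius
    return max(0.0, 1.0 - lam) ** 2

# a tracer half the pore radius is excluded from 75% of the pore volume
phi = steric_partition_coefficient(1.0, 2.0)
```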

  14. Shear Stress Partitioning in Large Patches of Roughness in the Atmospheric Inertial Sublayer

    NASA Technical Reports Server (NTRS)

    Gillies, John A.; Nickling, William G.; King, James

    2007-01-01

    Drag partition measurements were made in the atmospheric inertial sublayer for six roughness configurations made up of solid elements in staggered arrays of different roughness densities. The roughness was in the form of a patch within a large open area, in the shape of an equilateral triangle with 60 m long sides. Measurements were obtained of the total shear stress (tau) acting on the surfaces, the surface shear stress on the ground between the elements (tau(sub S)), and the drag force on the elements for each roughness array. The measurements indicated that tau(sub S) decreased quickly near the leading edge of the roughness compared with tau, and a tau(sub S) minimum occurs at a normalized distance (x/h, where h is element height) of approx. -42 (distances downwind of the roughness leading edge are negative), after which it recovers to a relatively stable value. The location of the minimum appears to scale with element height and not roughness density. The force on the elements decreases exponentially with normalized downwind distance, and this rate of change scales with the roughness density, increasing as roughness density increases. Average tau(sub S):tau values for the six roughness surfaces scale predictably as a function of roughness density and in accordance with a shear stress partitioning model. The shear stress partitioning model performed very well in predicting the amount of surface shear stress, given knowledge of the stated input parameters for these patches of roughness. As the shear stress partitioning relationship within the roughness appears to come into equilibrium faster for smaller roughness elements, it would also appear that the model can be applied with confidence to smaller patches of smaller roughness elements than those used in this experiment.
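The kind of shear stress partitioning model referred to above is often written, following Raupach, as tau_S/tau = 1/((1 − σλ)(1 + βλ)), with λ the roughness (frontal area) density. A sketch with illustrative, not fitted, parameter values:

```python
def surface_stress_fraction(lam, beta=90.0, sigma=1.0):
    """Raupach-style shear stress partition: fraction of the total stress
    carried by the intervening surface, tau_S/tau = 1/((1 - sigma*lam)*(1 + beta*lam)).
    lam: roughness density (frontal area index, sigma*lam < 1);
    beta: element-to-surface drag coefficient ratio; sigma: basal/frontal
    area ratio. The defaults are illustrative, not this paper's values."""
    return 1.0 / ((1.0 - sigma * lam) * (1.0 + beta * lam))
```

As λ grows, the elements absorb an increasing share of the total drag, so the surface fraction falls from 1 toward 0, consistent with the density scaling reported above.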

  15. RNA Graph Partitioning for the Discovery of RNA Modularity: A Novel Application of Graph Partition Algorithm to Biology

    PubMed Central

    Elmetwaly, Shereef; Schlick, Tamar

    2014-01-01

    Graph representations have been widely used to analyze and design various economic, social, military, political, and biological networks. In systems biology, networks of cells and organs are useful for understanding disease and medical treatments and, in structural biology, structures of molecules can be described, including RNA structures. In our RNA-As-Graphs (RAG) framework, we represent RNA structures as tree graphs by translating unpaired regions into vertices and helices into edges. Here we explore the modularity of RNA structures by applying graph partitioning, known in graph theory, to divide an RNA graph into subgraphs. To our knowledge, this is the first application of graph partitioning to biology, and the results suggest a systematic approach for modular design in general. The graph partitioning algorithms utilize mathematical properties of the Laplacian eigenvector (µ2) corresponding to the second eigenvalue (λ2) of the topology matrix defining the graph: λ2 describes the overall topology, and the sum of µ2's components is zero. The three types of algorithms, termed median, sign, and gap cuts, divide a graph by placing the cut at the median, the zero crossing, or the largest gap of µ2's components, respectively. We apply these algorithms to 45 graphs corresponding to all solved RNA structures up through 11 vertices (∼220 nucleotides). While we observe that the median cut divides a graph into two similar-sized subgraphs, the sign and gap cuts partition a graph into two topologically-distinct subgraphs. We find that the gap cut produces the best biologically-relevant partitioning for RNA because it divides RNAs at less stable connections while maintaining junctions intact. The iterative gap cuts suggest basic modules and assembly protocols to design large RNA structures. Our graph substructuring thus suggests a systematic approach to explore the modularity of biological networks. 
In our applications to RNA structures, subgraphs also suggest design strategies for novel RNA motifs. PMID:25188578
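The three spectral cuts can be sketched directly from the eigenvector of the second-smallest Laplacian eigenvalue (the Fiedler vector, the paper's µ2). The adjacency matrix below is a toy graph, not one of the 45 RNA graphs:

```python
import numpy as np

def fiedler_cuts(adj):
    """Partition a graph with the three rules named in the abstract: median,
    sign, and largest-gap cuts on the components of mu2, the eigenvector of
    the second-smallest Laplacian eigenvalue (lambda_2).
    adj: symmetric 0/1 numpy array. Returns three candidate vertex sets."""
    lap = np.diag(adj.sum(axis=1)) - adj
    vals, vecs = np.linalg.eigh(lap)          # eigenvalues in ascending order
    mu2 = vecs[:, 1]                          # eigenvector for lambda_2
    order = np.argsort(mu2)
    median_cut = set(order[: len(mu2) // 2])  # split at the median component
    sign_cut = set(np.flatnonzero(mu2 < 0))   # split at the zero crossing
    gaps = np.diff(mu2[order])
    g = int(np.argmax(gaps)) + 1              # split at the largest gap
    gap_cut = set(order[:g])
    return median_cut, sign_cut, gap_cut

# two triangles joined by one edge: every rule should separate the triangles
adj = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1
median_cut, sign_cut, gap_cut = fiedler_cuts(adj)
```

On this toy graph the weak bridge (2, 3) is the least stable connection, so all three cuts recover the two triangles; on real RNA graphs the three rules can disagree, as the abstract describes.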

  16. Sensitivity of Aerosol Mass and Microphysics to varying treatments of Condensational Growth of Secondary Organic Compounds in a regional model

    NASA Astrophysics Data System (ADS)

    Lowe, Douglas; Topping, David; McFiggans, Gordon

    2017-04-01

    Gas to particle partitioning of atmospheric compounds occurs through disequilibrium mass transfer rather than through instantaneous equilibrium. However, it is common to treat only the inorganic compounds as partitioning dynamically, whilst organic compounds, represented by the Volatility Basis Set (VBS), are partitioned instantaneously. In this study we implement a more realistic dynamic partitioning of organic compounds in a regional framework and assess the impact on aerosol mass and microphysics. It is also common to assume condensed-phase water is only associated with inorganic components. We thus also assess sensitivity to assuming all organics are hygroscopic according to their prescribed molecular weight. For this study we use WRF-Chem v3.4.1, focusing on anthropogenically dominated North-Western Europe. Gas-phase chemistry is represented using CBM-Z, whilst aerosol dynamics are simulated using the 8-section MOSAIC scheme, including a 9-bin VBS treatment of organic aerosol. Results indicate that predicted mass loadings can vary significantly. Without gas-phase ageing of higher-volatility compounds, dynamic partitioning always results in lower mass loadings downwind of emission sources. The inclusion of condensed-phase water in both partitioning models increases the predicted PM mass, resulting from a larger contribution from higher-volatility organics, if present. If gas-phase ageing of VBS compounds is allowed to occur in a dynamic model, this can often lead to higher predicted mass loadings, contrary to the behaviour expected from a simple non-reactive gas-phase box model. As descriptions of aerosol-phase processes improve within regional models, the baseline descriptions of partitioning should retain the ability to treat dynamic partitioning of organic compounds. Using our simulations, we discuss whether derived sensitivities to aerosol processes in existing models may be inherently biased. 
This work was supported by the Natural Environment Research Council within the RONOCO (NE/F004656/1) and CCN-Vol (NE/L007827/1) projects.

  17. Effect of Q&P heat treatment on fine microstructure and mechanical properties of a low-alloy medium-carbon steel

    NASA Astrophysics Data System (ADS)

    Jafari, Rahim; Kheirandish, Shahram; Mirdamadi, Shamsoddin

    2018-01-01

    The current research investigates the effect of the ultrafine microstructure resulting from the Quench and Partitioning (Q&P) process on obtaining ultra-high strengths in a low-alloy steel with 0.4 wt.% carbon. The purpose of Q&P heat treatment is to enrich the austenite with carbon by partitioning of carbon from supersaturated martensite to austenite, in order to stabilize it to room temperature. The microstructure consequently consists of martensite, retained austenite and, in some conditions, bainite. Two-step Q&P heat treatment with quench and partitioning temperatures of 120°C and 300°C, respectively, was applied to the samples for different times. Mechanical behavior was studied by tensile testing. The microstructure of the samples was observed using SEM and TEM, and X-ray diffraction was used to quantify the amount of retained austenite. The retained austenite grain size was estimated to be about 0.5 µm, and the highest amount of retained austenite obtained was 10 vol%. All samples showed a yield strength above 900 MPa and a tensile strength above 1500 MPa. The yield strength increased with increasing partitioning time, whereas the tensile strength showed the inverse behavior. The elongation of the samples varied from 5% to 9%, which did not appear to be directly connected with the amount of retained austenite; instead it was related to the ferritic structures formed during partitioning, such as coalesced martensite, bainite and tempered martensite.

  18. A physically based catchment partitioning method for hydrological analysis

    NASA Astrophysics Data System (ADS)

    Menduni, Giovanni; Riboni, Vittoria

    2000-07-01

    We propose a partitioning method for the topographic surface which is particularly suitable for distributed hydrological modelling and distributed shallow-landslide modelling. The model provides variable mesh size and appears to be a natural evolution of contour-based digital terrain models. The proposed method allows the drainage network to be derived from the contour lines. Individual channels are calculated via a search for the steepest downslope lines. Then, for each network node, the contributing area is determined by means of a search for both the steepest upslope and downslope lines. This leads to the basin being partitioned into physically based finite elements delimited by irregular polygons. In particular, the distributed computation of local geomorphological parameters (aspect, average slope and elevation, main stream length, concentration time, etc.) can be performed easily for each single element. The contributing area system, together with the information on the distribution of geomorphological parameters, provides a useful tool for distributed hydrological modelling and the simulation of environmental processes such as erosion, sediment transport and shallow landslides.
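The search for steepest downslope lines described above can be illustrated on a gridded elevation model. The paper works with contour lines rather than a grid, so this discrete sketch and its tilted-plane example are illustrative only:

```python
def steepest_downslope_path(elev, start):
    """Trace a steepest-descent line on a gridded elevation model (a common
    discrete stand-in for the paper's contour-based search).
    elev: dict (row, col) -> elevation. Stops at a local minimum (pit/outlet)."""
    path = [start]
    while True:
        r, c = path[-1]
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0) and (r + dr, c + dc) in elev]
        down = min(neighbours, key=lambda p: elev[p], default=None)
        if down is None or elev[down] >= elev[(r, c)]:
            return path                 # no lower neighbour: local minimum
        path.append(down)

# tilted 4x4 plane: descent from the high corner runs down the diagonal
elev = {(r, c): r + c for r in range(4) for c in range(4)}
path = steepest_downslope_path(elev, (3, 3))
```

Running the same search uphill from each network node would bound the contributing area, mirroring the element construction described above.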

  19. Gauging Variational Inference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chertkov, Michael; Ahn, Sungsoo; Shin, Jinwoo

    Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used in practice, where mean-field (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation, which modifies factors of the GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments, on complete GMs of relatively small size and on large GMs (up to 300 variables), confirm that the newly proposed algorithms outperform and generalize MF and BP.
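    Setting the gauge-transformation machinery aside, the baseline fact, that mean-field gives a lower bound on the log partition function, can be checked by brute force on a tiny model. A sketch for a 4-spin Ising chain with my own toy parameters:

```python
import itertools, math

# Toy 4-spin Ising chain; x_i in {-1, +1}, coupling J on each edge.
n, J = 4, 0.5
edges = [(0, 1), (1, 2), (2, 3)]

def score(x):
    return sum(J * x[i] * x[j] for i, j in edges)

# Exact log partition function by enumerating all 2^n states.
logZ = math.log(sum(math.exp(score(x))
                    for x in itertools.product([-1, 1], repeat=n)))

# Naive mean-field: factorized q with spin means m_i, coordinate ascent.
m = [0.5] * n
for _ in range(100):
    for i in range(n):
        field = sum(J * m[j] for a, j in edges if a == i) \
              + sum(J * m[a] for a, j in edges if j == i)
        m[i] = math.tanh(field)

def entropy(p):                      # binary entropy in nats
    return 0.0 if p in (0.0, 1.0) else -p * math.log(p) - (1 - p) * math.log(1 - p)

# Gibbs inequality: E_q[score] + H(q) never exceeds log Z.
mf_bound = sum(J * m[i] * m[j] for i, j in edges) \
         + sum(entropy((1 + mi) / 2) for mi in m)
print(round(logZ, 4), round(mf_bound, 4))
```

For a chain the exact answer is also available in closed form, Z = 2(2 cosh J)^(n-1), which is a handy cross-check on the enumeration.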

  20. Atomistic simulation of mineral-melt trace-element partitioning

    NASA Astrophysics Data System (ADS)

    Allan, Neil L.; Du, Zhimei; Lavrentiev, Mikhail Yu.; Blundy, Jon D.; Purton, John A.; van Westrenen, Wim

    2003-09-01

    We discuss recent advances in computational approaches to trace-element incorporation in minerals and melts. It is crucial to take explicit account of the local structural environment of each ion in the solid and the change in this environment following the introduction of a foreign atom or atoms. Particular attention is paid to models using relaxation (strain) energies and solution energies, and the use of these different models for isovalent and heterovalent substitution in diopside and forsterite. Solution energies are also evaluated for pyrope and grossular garnets, and pyrope-grossular solid solutions. Unfavourable interactions between dodecahedral sites containing ions of the same size and connected by an intervening tetrahedron lead to larger solubilities of trace elements in the garnet solid solution than in either end member compound and to the failure of Goldschmidt's first rule. Our final two examples are the partitioning behaviour of noble gases, which behave as 'ions of zero charge' and the direct calculation of high-temperature partition coefficients between CaO solid and melt via Monte Carlo simulations.

  1. End-to-end QoS bounds for RTP-based service subnetworks

    NASA Astrophysics Data System (ADS)

    Pitts, Jonathan M.; Schormans, John A.

    1999-11-01

    With the increasing focus on traffic prioritization to support voice-data integration in corporate intranets, practical methods are needed to dimension and manage cost-efficient service partitions. This is particularly important for the provisioning of real-time, delay-sensitive services such as telephony and voice/video conferencing applications. Typically these can be provided over RTP/UDP/IP or ATM DBR/SBR bearers but, irrespective of the specific networking technology, the switches or routers need to implement some form of virtual buffer management with queue scheduling mechanisms to provide partitioning. The key requirement is for operators of such networks to be able to dimension the partitions and virtual buffer sizes for efficient resource utilization, instead of simply over-dimensioning. This paper draws on recent work at Queen Mary, University of London, supported by the UK Engineering and Physical Sciences Research Council, to investigate approximate analytical methods for assessing end-to-end delay variation bounds in cell-based and packet-based networks.

  2. A novel method for the determination of adsorption partition coefficients of minor gases in a shale sample by headspace gas chromatography.

    PubMed

    Zhang, Chun-Yun; Hu, Hui-Chao; Chai, Xin-Sheng; Pan, Lei; Xiao, Xian-Ming

    2013-10-04

    A novel method has been developed for the determination of the adsorption partition coefficient (Kd) of minor gases in shale. The method uses samples of two different sizes (masses) of the same material, from which the partition coefficient of the gas can be determined from two independent headspace gas chromatographic (HS-GC) measurements. Equilibrium for the model gas (ethane) was achieved in 5 h at 120 °C. The method also involves establishing an equation based on the Kd at a higher equilibrium temperature, from which the Kd at lower temperatures can be calculated. Although the HS-GC method requires some time and effort, it is simpler and quicker than the isothermal adsorption method in widespread use today. As a result, the method is simple and practical and can be a valuable tool for shale gas-related research and applications. Copyright © 2013 Elsevier B.V. All rights reserved.
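    One algebraic reading of the two-mass idea, assuming gas release proportional to sample mass and linear sorption; the symbols and mass balance here are my illustration, not the paper's exact derivation:

```python
# Two vials of the same shale, masses m1 and m2, measured headspace signals
# A1 and A2. Assumed balance per vial: c0*m_i = A_i*(Vg_i + Kd*m_i), where
# Vg_i = V - m_i/rho is the headspace volume; eliminating c0 isolates Kd.
def kd_from_two_vials(m1, A1, m2, A2, V=20.0, rho=2.5):
    Vg1 = V - m1 / rho            # headspace volume left by sample 1 (mL)
    Vg2 = V - m2 / rho
    return (m2 * A1 * Vg1 - m1 * A2 * Vg2) / (m1 * m2 * (A2 - A1))

# Self-check: synthesize the two signals from a known Kd, then recover it.
Kd_true, c0, V, rho = 4.0, 1.0, 20.0, 2.5
def signal(m):
    Vg = V - m / rho
    return c0 * m / (Vg + Kd_true * m)

Kd = kd_from_two_vials(2.0, signal(2.0), 8.0, signal(8.0))
print(round(Kd, 6))   # → 4.0
```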

  3. Field determination and QSPR prediction of equilibrium-status soil/vegetation partition coefficient of PCDD/Fs.

    PubMed

    Li, Li; Wang, Qiang; Qiu, Xinghua; Dong, Yian; Jia, Shenglan; Hu, Jianxin

    2014-07-15

    Characterizing the pseudo-equilibrium soil/vegetation partition coefficient KSV, the quotient of the respective concentrations of a substance in soil and vegetation at remote background areas, is essential in ecological risk assessment; however, few previous attempts have been made at field determination or at developing validated and reproducible structure-based estimates. In this study, KSV was calculated based on measurements of seventeen 2,3,7,8-substituted PCDD/F congeners in soil and moss (Dicranum angustum), and rouzi grass (Thylacospermum caespitosum), at two background sites, Ny-Ålesund in the Arctic and the Zhangmu-Nyalam region of the Tibet Plateau, respectively. By both fugacity modeling and stepwise regression of the field data, the air-water partition coefficient (KAW) and aqueous solubility (SW) were identified as the influential physicochemical properties. Furthermore, a validated quantitative structure-property relationship (QSPR) model was developed to extrapolate the KSV prediction to all 210 PCDD/F congeners. Molecular polarizability, molecular size and molecular energy demonstrated the leading effects on KSV. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Space and Time Partitioning with Hardware Support for Space Applications

    NASA Astrophysics Data System (ADS)

    Pinto, S.; Tavares, A.; Montenegro, S.

    2016-08-01

    Complex and critical systems like airplanes and spacecraft implement a fast-growing number of functions. Typically, such systems were implemented with fully federated architectures, but the number and complexity of the functions desired of today's systems led the aerospace industry to follow another strategy. Integrated Modular Avionics (IMA) arose as an attractive approach to consolidation, combining several applications on one single generic computing resource. The current approach moves toward the higher integration provided by the space and time partitioning (STP) of system virtualization. The problem is that existing virtualization solutions are not ready to fully provide what the future of aerospace demands: performance, flexibility, safety, and security, while simultaneously containing Size, Weight, Power and Cost (SWaP-C). This work describes a real-time hypervisor for space applications assisted by commercial off-the-shelf (COTS) hardware. ARM TrustZone technology is exploited to implement a secure virtualization solution with low overhead and a low memory footprint. This is demonstrated by running multiple guest partitions of the RODOS operating system on a Xilinx Zynq platform.

  5. Evaluating abundance and trends in a Hawaiian avian community using state-space analysis

    USGS Publications Warehouse

    Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.

    2016-01-01

    Estimating population abundance and patterns of change over time is important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate the population trajectory. However, changes in abundance estimates from year to year are due both to true variation in population size (process variation) and to variation due to imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates better represent the real-world biological processes of interest because they partition process variation (environmental and demographic variation) from observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance, which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand the processes that drive population trends.
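    The partition between process and observation variation can be illustrated with the simplest state-space model, a local level tracked by a Kalman filter; the variances below are toy values, not the refuge analysis:

```python
import random

random.seed(1)

# True log-abundance follows a random walk (process variation); counts are
# observed with extra sampling noise (observation variation).
q, r = 0.05, 0.5              # process and observation variances (assumed known)
x, xs, ys = 10.0, [], []
for t in range(50):
    x += random.gauss(0, q ** 0.5)                 # process step
    xs.append(x)
    ys.append(x + random.gauss(0, r ** 0.5))       # noisy observation

# Kalman filter for the local level model x_t = x_{t-1} + w, y_t = x_t + v.
mean, var, filtered = ys[0], 1.0, []
for y in ys:
    var += q                   # predict: process variance accumulates
    k = var / (var + r)        # Kalman gain
    mean += k * (y - mean)     # update toward the observation
    var *= (1 - k)
    filtered.append(mean)

rmse_raw = (sum((y - t) ** 2 for y, t in zip(ys, xs)) / len(xs)) ** 0.5
rmse_filt = (sum((f - t) ** 2 for f, t in zip(filtered, xs)) / len(xs)) ** 0.5
print(round(rmse_raw, 3), round(rmse_filt, 3))
```

The filtered series tracks the latent level more closely than the raw counts do, which is the sense in which the state-space model strips sampling noise before the trend is read off.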

  6. The How and Why of Chemical Reactions

    ERIC Educational Resources Information Center

    Schubert, Leo

    1970-01-01

    Presents a discussion of some of the fundamental concepts in thermodynamics and quantum mechanics including entropy, enthalpy, free energy, the partition function, chemical kinetics, transition state theory, the making and breaking of chemical bonds, electronegativity, ion sizes, intermolecular energies and their role in explaining the nature…

  7. USING CMAQ-AIM TO EVALUATE THE GAS-PARTICLE PARTITIONING TREATMENT IN CMAQ

    EPA Science Inventory

    The Community Multi-scale Air Quality model (CMAQ) aerosol component utilizes a modal representation, where the size distribution is represented as a sum of three lognormal modes. Though the aerosol treatment in CMAQ is quite advanced compared to other operational air quality mo...

  8. Effects of partitioning and scheduling sparse matrix factorization on communication and load balance

    NASA Technical Reports Server (NTRS)

    Venugopal, Sesh; Naik, Vijay K.

    1991-01-01

    A block-based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed-memory systems. Using experimental results, this technique is analyzed for communication and load-imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap-mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a tradeoff between communication and load balance: the block-based method results in lower communication cost, whereas the wrap-mapped scheme gives better load balance.
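    The load-balance side of the tradeoff between block and wrap-mapped column assignment is easy to see with a synthetic cost model; the per-column costs below are illustrative, not Harwell-Boeing measurements:

```python
# Per-column work grows to the right, as it often does under elimination
# orderings; 16 columns distributed over P = 4 processes.
P = 4
cost = [c + 1 for c in range(16)]

block = [sum(cost[p * 4:(p + 1) * 4]) for p in range(P)]          # contiguous blocks
wrap = [sum(cost[c] for c in range(16) if c % P == p) for p in range(P)]  # cyclic

def imbalance(loads):
    """Max load divided by average load; 1.0 is perfect balance."""
    return max(loads) / (sum(loads) / len(loads))

print(block, imbalance(block))   # block mapping overloads the last process
print(wrap, imbalance(wrap))     # wrap mapping spreads the heavy columns
```

Block mapping keeps each process's columns adjacent (hence less communication in the factorization), at the price of the imbalance shown; wrap mapping reverses the tradeoff.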

  9. TriageTools: tools for partitioning and prioritizing analysis of high-throughput sequencing data.

    PubMed

    Fimereli, Danai; Detours, Vincent; Konopka, Tomasz

    2013-04-01

    High-throughput sequencing is becoming a popular research tool but carries with it considerable costs in terms of computation time, data storage and bandwidth. Meanwhile, some research applications focusing on individual genes or pathways do not necessitate processing of a full sequencing dataset. Thus, it is desirable to partition a large dataset into smaller, manageable, but relevant pieces. We present a toolkit for partitioning raw sequencing data that includes a method for extracting reads that are likely to map onto pre-defined regions of interest. We show the method can be used to extract information about genes of interest from DNA or RNA sequencing samples in a fraction of the time and disk space required to process and store a full dataset. We report speedup factors between 2.6 and 96, depending on settings and samples used. The software is available at http://www.sourceforge.net/projects/triagetools/.
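    A minimal sketch of the triage idea, keeping only reads that share at least one k-mer with a region of interest; TriageTools' actual matching criteria may differ:

```python
def kmers(seq, k):
    """All length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def triage(reads, region, k=5):
    """Keep reads sharing at least one exact k-mer with the target region."""
    index = kmers(region, k)
    return [r for r in reads if kmers(r, k) & index]

region = "ACGTACGTTTGACCA"
reads = ["TTTTTTTTTT",         # unrelated: discarded
         "CGTACGTTT",          # overlaps the region: kept
         "GACCAGGGGG"]         # shares the 5-mer GACCA: kept
print(triage(reads, region))   # → ['CGTACGTTT', 'GACCAGGGGG']
```

Indexing the regions of interest once and streaming reads past the index is what makes this kind of pre-filter cheap compared with full alignment.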

  10. Anonymous quantum nonlocality.

    PubMed

    Liang, Yeong-Cherng; Curchod, Florian John; Bowles, Joseph; Gisin, Nicolas

    2014-09-26

    We investigate the phenomenon of anonymous quantum nonlocality, which refers to the existence of multipartite quantum correlations that are not local in the sense of being Bell-inequality-violating but where the nonlocality is--due to its biseparability with respect to all bipartitions--seemingly nowhere to be found. Such correlations can be produced by the nonlocal collaboration involving definite subset(s) of parties but to an outsider, the identity of these nonlocally correlated parties is completely anonymous. For all n≥3, we present an example of an n-partite quantum correlation exhibiting anonymous nonlocality derived from the n-partite Greenberger-Horne-Zeilinger state. An explicit biseparable decomposition of these correlations is provided for any partitioning of the n parties into two groups. Two applications of these anonymous Greenberger-Horne-Zeilinger correlations in the device-independent setting are discussed: multipartite secret sharing between any two groups of parties and bipartite quantum key distribution that is robust against nearly arbitrary leakage of information.

  11. Exact calculation of loop formation probability identifies folding motifs in RNA secondary structures.

    PubMed

    Sloma, Michael F; Mathews, David H

    2016-12-01

    RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. © 2016 Sloma and Mathews; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
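    The underlying idea, that the probability of any structural feature is a restricted sum over the Boltzmann ensemble divided by the full partition function, can be shown by brute force on a short toy sequence; the crude per-pair energy below stands in for the real nearest-neighbor parameters:

```python
import math

RT = 0.616                       # kcal/mol near 37 °C
PAIRS = {("A","U"), ("U","A"), ("G","C"), ("C","G"), ("G","U"), ("U","G")}

def structures(seq, lo, hi):
    """Yield every nested structure of seq[lo..hi] as a frozenset of (i, j)."""
    if hi - lo < 4:              # a hairpin needs at least 3 unpaired bases
        yield frozenset()
        return
    yield from structures(seq, lo + 1, hi)            # base lo unpaired
    for j in range(lo + 4, hi + 1):                   # base lo paired with j
        if (seq[lo], seq[j]) in PAIRS:
            for left in structures(seq, lo + 1, j - 1):
                for right in structures(seq, j + 1, hi):
                    yield left | right | {(lo, j)}

seq = "GGGAAAUCCC"
# Boltzmann weight with a crude -2 kcal/mol per base pair.
w = {s: math.exp(2.0 * len(s) / RT) for s in structures(seq, 0, len(seq) - 1)}
Z = sum(w.values())                                   # partition function
# Probability that bases 1 and 8 (0-based) pair: restricted sum over Z.
p_pair = sum(v for s, v in w.items() if (1, 8) in s) / Z
print(round(p_pair, 3))
```

The paper's dynamic-programming calculation computes the same kind of ratio without enumerating structures, which is what makes it tractable for real sequence lengths.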

  12. Vapor pressures of a homologous series of polyethylene glycols as a reference data set for validating vapor pressure measurement techniques.

    NASA Astrophysics Data System (ADS)

    Krieger, Ulrich; Marcolli, Claudia; Siegrist, Franziska

    2015-04-01

    The production of secondary organic aerosol (SOA) by gas-to-particle partitioning is generally represented by an equilibrium partitioning model. A key physical parameter governing gas-particle partitioning is the pure-component vapor pressure, which is difficult to measure for low- and semi-volatile compounds. For typical atmospheric compounds such as citric acid or tartaric acid, vapor pressures reported in the literature differ by up to six orders of magnitude [Huisman et al., 2013]. Here, we report vapor pressures of a homologous series of polyethylene glycols (triethylene glycol to octaethylene glycol) determined by measuring the evaporation rate of single, levitated aerosol particles in an electrodynamic balance. We propose to use these as a reference data set for validating different vapor pressure measurement techniques. With each addition of an (O-CH2-CH2) group the vapor pressure is lowered by about one order of magnitude, which makes it easy to detect the lower limit of vapor pressures accessible with a particular technique, down to a pressure of 10^-8 Pa at room temperature. Reference: Huisman, A. J., Krieger, U. K., Zuend, A., Marcolli, C., and Peter, T., Atmos. Chem. Phys., 13, 6647-6662, 2013.
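    The one-decade-per-unit trend makes log10(p) nearly linear in oligomer length, which is what makes the series a convenient ladder for probing a technique's floor. A sketch with a hypothetical anchor value, not a measured one:

```python
# log10 of vapor pressure in Pa; the n = 3 anchor (-3, i.e. 1e-3 Pa for
# triethylene glycol) is a made-up value, the slope is the reported trend.
def log10_p(n):                  # n = ethylene-oxide units, 3 (tri) .. 8 (octa)
    return -3 - (n - 3)          # one decade lower per added unit

# Which member first drops below a technique's 1e-8 Pa detection floor?
first_below = min(n for n in range(3, 9) if log10_p(n) <= -8)
print(first_below)               # → 8
```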

  13. Allometric biomass partitioning under nitrogen enrichment: Evidence from manipulative experiments around the world.

    PubMed

    Peng, Yunfeng; Yang, Yuanhe

    2016-06-28

    Allometric and optimal hypotheses have been widely used to explain biomass partitioning in response to resource changes for individual plants; however, little evidence has been reported from measurements at the community level across a broad geographic scale. This study assessed the nitrogen (N) effect on community-level root-to-shoot (R/S) ratios and biomass partitioning functions by synthesizing global manipulative experiments. Results showed that, in aggregate, N addition decreased the R/S ratios in various biomes. However, the scaling slopes of the allometric equations were not significantly altered by N enrichment, possibly indicating that the N-induced reduction of the R/S ratio is a consequence of allometric allocation as a function of increasing plant size rather than of an optimal partitioning model. To further illustrate this point, we developed power-function models to explore the relationships between aboveground and belowground biomass for various biomes; we then generated predicted root biomass from the observed shoot biomass and predicted R/S ratios. The comparison of predicted and observed N-induced changes in the R/S ratio revealed no significant differences, supporting the allometric allocation hypothesis. These results suggest that allometry, rather than optimal allocation, explains the N-induced reduction in the R/S ratio across global biomes.
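    The allometric argument is one line of algebra: if root biomass follows R = b·S^a with exponent a < 1, the R/S ratio equals b·S^(a-1) and falls as plants grow, with no change in the allometry itself. A sketch with synthetic data, not the meta-analysis values:

```python
import math, random

random.seed(0)

# Synthetic community data following R = b * S^a with lognormal noise.
a_true, b_true = 0.9, 1.2
S = [10 * random.random() + 1 for _ in range(200)]                 # shoot biomass
R = [b_true * s ** a_true * math.exp(random.gauss(0, 0.05)) for s in S]

# Least-squares slope (the allometric exponent) on the log-log scale.
xs, ys = [math.log(s) for s in S], [math.log(r) for r in R]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
a_hat = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
      / sum((x - mx) ** 2 for x in xs)
b_hat = math.exp(my - a_hat * mx)
print(round(a_hat, 3), round(b_hat, 3))
# With a_hat < 1 the fitted R/S ratio, b_hat * S**(a_hat - 1), declines as
# plants get larger, so size increases alone lower R/S without reallocation.
```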

  14. Software forecasting as it is really done: A study of JPL software engineers

    NASA Technical Reports Server (NTRS)

    Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.

    1993-01-01

    This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is very suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include: the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.

  15. Discuss Similarity Using Visual Intuition

    ERIC Educational Resources Information Center

    Cox, Dana C.; Lo, Jane-Jane

    2012-01-01

    The change in size from a smaller shape to a larger similar shape (or vice versa) is created through continuous proportional stretching or shrinking in every direction. Students cannot solve similarity tasks simply by iterating or partitioning a composed unit, strategies typically used on numerical proportional tasks. The transition to thinking…

  16. Recruiting and Advising Challenges in Actuarial Science

    ERIC Educational Resources Information Center

    Case, Bettye Anne; Guan, Yuanying Michelle; Paris, Stephen

    2014-01-01

    Some challenges to increasing actuarial science program size through recruiting broadly among potential students are identified. Possible solutions depend on the structures and culture of the school. Up to three student cohorts may result from partition of potential students by the levels of academic progress before program entry: students…

  17. 24 CFR 1007.20 - Eligible housing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... standards that: (1) Provide sufficient flexibility to permit the use of various designs and materials; and (2) Require each dwelling unit to: (i) Be decent, safe, sanitary, and modest in size and design; (ii... kitchen sink and a partitional bathroom with lavatory, toilet, and bath or shower; and (C) Uses water...

  18. Partitioning of Organic Ions to Muscle Protein: Experimental Data, Modeling, and Implications for in Vivo Distribution of Organic Ions.

    PubMed

    Henneberger, Luise; Goss, Kai-Uwe; Endo, Satoshi

    2016-07-05

    The in vivo partitioning behavior of ionogenic organic chemicals (IOCs) is of paramount importance for their toxicokinetics and bioaccumulation. Among other proteins, structural proteins such as muscle proteins could be an important sorption phase for IOCs, because of their high quantity in the bodies of humans and other animals and their polar nature. Binding data for IOCs to structural proteins are, however, severely limited. Therefore, in this study muscle protein-water partition coefficients (KMP/w) of 51 systematically selected organic anions and cations were determined experimentally. A comparison of the measured KMP/w with bovine serum albumin (BSA)-water partition coefficients showed that anionic chemicals sorb more strongly to BSA than to muscle protein (by up to 3.5 orders of magnitude), while cations sorb similarly to both proteins. Sorption isotherms of selected IOCs to muscle protein are linear (i.e., KMP/w is concentration-independent), and KMP/w is only marginally influenced by pH and salt concentration. Using the obtained data set of KMP/w values, a polyparameter linear free energy relationship (PP-LFER) model was established. The derived equation fits the data well (R² = 0.89, RMSE = 0.29). Finally, it was demonstrated that the in vitro KMP/w values measured in this study have the potential to be used to evaluate tissue-plasma partitioning of IOCs in vivo.
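    A PP-LFER of the usual form log K = c + eE + sS + aA + bB + vV is an ordinary least-squares fit. A sketch with synthetic descriptors and coefficients, stand-ins rather than the muscle-protein calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Design matrix [1, E, S, A, B, V] with synthetic Abraham-style descriptors.
n = 51
X = np.column_stack([np.ones(n), rng.normal(size=(n, 5))])
beta_true = np.array([0.5, 0.8, -1.1, 0.3, -2.0, 2.4])   # c, e, s, a, b, v
logK = X @ beta_true + rng.normal(scale=0.29, size=n)    # noise ~ reported RMSE

# Ordinary least squares recovers the PP-LFER coefficients.
beta_hat, *_ = np.linalg.lstsq(X, logK, rcond=None)
resid = logK - X @ beta_hat
rmse = float(np.sqrt(np.mean(resid ** 2)))
r2 = 1 - float(np.sum(resid ** 2) / np.sum((logK - logK.mean()) ** 2))
print(np.round(beta_hat, 2), round(rmse, 2), round(r2, 2))
```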

  19. Method of up-front load balancing for local memory parallel processors

    NASA Technical Reports Server (NTRS)

    Baffes, Paul Thomas (Inventor)

    1990-01-01

    In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balance load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of sixty to seventy-five percent.
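    A hedged reconstruction of the up-front merging step: greedily combine the lightest process sets until only as many sets as processors remain, rejecting any merge that would exceed the partition threshold. The threshold value and costs are illustrative, not the patent's.

```python
import heapq

def merge_sets(loads, n_procs, threshold):
    """Greedily merge the lightest process sets down to n_procs, capped by threshold."""
    heap = list(loads)
    heapq.heapify(heap)
    while len(heap) > n_procs:
        a = heapq.heappop(heap)          # two lightest process sets
        b = heapq.heappop(heap)
        if a + b > threshold:            # merge would break the balance cap
            heapq.heappush(heap, a)
            heapq.heappush(heap, b)
            break
        heapq.heappush(heap, a + b)
    return sorted(heap)

# Eight process sets onto three processors, generous threshold:
print(merge_sets([3, 1, 4, 1, 5, 9, 2, 6], 3, threshold=100))   # → [9, 9, 13]
# A tight threshold stops merging early rather than overloading one unit:
print(merge_sets([1, 1, 10], 1, threshold=5))                   # → [2, 10]
```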

  20. Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levnajić, Zoran; Department of Mechanical Engineering, University of California Santa Barbara, Santa Barbara, California 93106; Mezić, Igor

    We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of the ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of the Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)], and is realized as a computational algorithm for visualization of periodic and quasi-periodic sets in the phase space. The complement of the periodic phase space partition contains the chaotic zone, and we show how to identify it. The range of the method's applicability is illustrated using the well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.
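    The Fourier time average at the heart of the method can be demonstrated on the simplest quasi-periodic system, a circle rotation, where the result is exactly computable; the paper applies the same average to orbits of the standard map:

```python
import cmath, math

def fourier_avg(rho, w, x0=0.1, N=10000):
    """Average f(x_n) * exp(-i*w*n) along the orbit x_{n+1} = x_n + rho (mod 1),
    with observable f(x) = exp(2*pi*i*x)."""
    total, x = 0j, x0
    for n in range(N):
        total += cmath.exp(2j * math.pi * x) * cmath.exp(-1j * w * n)
        x = (x + rho) % 1.0
    return abs(total) / N

rho = (5 ** 0.5 - 1) / 2                            # golden-mean rotation number
resonant = fourier_avg(rho, w=2 * math.pi * rho)    # w matches the orbit frequency
off = fourier_avg(rho, w=0.0)                       # mismatched frequency
print(round(resonant, 4), round(off, 6))
```

When w hits the orbit's frequency the average has magnitude 1; when it misses, the sum is a bounded geometric series and the average collapses toward 0, which is the contrast the plots in the paper are built on.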

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivasseau, Vincent, E-mail: vincent.rivasseau@th.u-psud.fr; Tanasa, Adrian, E-mail: adrian.tanasa@ens-lyon.org

    The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.

  2. Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets.

    PubMed

    Levnajić, Zoran; Mezić, Igor

    2015-05-01

    We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of the ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of the Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)], and is realized as a computational algorithm for visualization of periodic and quasi-periodic sets in the phase space. The complement of the periodic phase space partition contains the chaotic zone, and we show how to identify it. The range of the method's applicability is illustrated using the well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.

  3. A Study of Energy Partitioning Using A Set of Related Explosive Formulations

    NASA Astrophysics Data System (ADS)

    Lieber, Mark; Foster, Joseph C., Jr.; Stewart, D. Scott

    2011-06-01

    Condensed phase high explosives convert potential energy stored in the electro-magnetic field structure of complex molecules to kinetic energy during the detonation process. This energy is manifest in the internal thermodynamic energy and the translational flow of the products. Historically, the explosive design problem has focused on intramolecular stoichiometry providing prompt reactions based on transport physics at the molecular scale. Modern material design has evolved to approaches that employ intermolecular ingredients to alter the spatial and temporal distribution of energy release. CHEETA has been used to produce data for a set of fictitious explosive formulations based on C-4 to study the partitioning of the available energy between internal and flow energy in the detonation. The equation of state information from CHEETA has been used in ALE3D to develop an understanding of the relationship between variations in the formulation parameters and the internal energy cycle in the products.

  4. National Land Cover Database 2001 (NLCD01) Tile 2, Northeast United States: NLCD01_2

    USGS Publications Warehouse

    LaMotte, Andrew

    2008-01-01

    This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004), (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg), were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  5. National Land Cover Database 2001 (NLCD01) Tile 3, Southwest United States: NLCD01_3

    USGS Publications Warehouse

    LaMotte, Andrew

    2008-01-01

    This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004), (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg), were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  6. National Land Cover Database 2001 (NLCD01) Tile 1, Northwest United States: NLCD01_1

    USGS Publications Warehouse

    LaMotte, Andrew

    2008-01-01

    This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion of MRLC and the NLCD 2001 products, refer to Homer and others (2004) (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  7. National Land Cover Database 2001 (NLCD01) Tile 4, Southeast United States: NLCD01_4

    USGS Publications Warehouse

    LaMotte, Andrew

    2008-01-01

    This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion of MRLC and the NLCD 2001 products, refer to Homer and others (2004) (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  8. No-infill 3D Printing

    NASA Astrophysics Data System (ADS)

    Wei, Xiao-Ran; Zhang, Yu-He; Geng, Guo-Hua

    2016-09-01

    In this paper, we examined how to print hollow objects without infill via fused deposition modeling, one of the most widely used 3D-printing technologies, by partitioning the objects into shell parts. More specifically, we linked the partition to the exact cover problem. Given an input watertight mesh shape S, we developed region-growing schemes to derive, as candidate parts, a set of surfaces on the mesh whose inside surfaces were printable without support. We then employed Monte Carlo tree search over the candidate parts to obtain the optimal set cover. All possible candidate subsets of exact cover from the optimal set cover were then obtained, and a bounded tree was used to search for the optimal exact cover. We oriented each shell part to the optimal position to guarantee that the inside surface was printed without support, while the outside surface was printed with minimum support. Our solution can be applied to a variety of models, closed-hollowed or semi-closed, with or without holes, as evidenced by experiments and performance evaluation of our proposed algorithm.
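    The exact-cover formulation at the heart of the shell-partitioning step can be illustrated with a toy backtracking search. This is only a sketch of the combinatorial problem; the paper's Monte Carlo tree search over candidate parts and bounded-tree optimisation are not reproduced, and the part names and surface-patch labels below are hypothetical.

```python
def exact_covers(universe, subsets):
    """Yield every selection of named subsets that partitions `universe`
    exactly (each element covered once). Plain backtracking, branching
    on one uncovered element at a time."""
    def search(remaining, chosen):
        if not remaining:
            yield list(chosen)
            return
        pivot = min(remaining)  # every exact cover must include a part containing it
        for name, part in subsets.items():
            if pivot in part and part <= remaining:
                chosen.append(name)
                yield from search(remaining - part, chosen)
                chosen.pop()
    yield from search(frozenset(universe), [])

# Four hypothetical shell parts covering surface patches 1-4:
parts = {"A": {1, 2}, "B": {3, 4}, "C": {2, 3}, "D": {1, 4}}
print(list(exact_covers({1, 2, 3, 4}, parts)))  # two exact covers
```

Here {A, B} and {C, D} are exact covers, while {A, C} is not (patch 2 would be covered twice).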

  9. Estimating the octanol/water partition coefficient for aliphatic organic compounds using semi-empirical electrotopological index.

    PubMed

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I(SET)). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the I(SET) in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds, in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P.

  10. Organic Carbon/Water and Dissolved Organic Carbon/Water Partitioning of Cyclic Volatile Methylsiloxanes: Measurements and Polyparameter Linear Free Energy Relationships.

    PubMed

    Panagopoulos, Dimitri; Jahnke, Annika; Kierkegaard, Amelie; MacLeod, Matthew

    2015-10-20

    The sorption of cyclic volatile methylsiloxanes (cVMS) to organic matter has a strong influence on their fate in the aquatic environment. We report new measurements of the partition ratios between freshwater sediment organic carbon and water (KOC) and between Aldrich humic acid dissolved organic carbon and water (KDOC) for three cVMS, and for three polychlorinated biphenyls (PCBs) that were used as reference chemicals. Our measurements were made using a purge-and-trap method that employs benchmark chemicals to calibrate mass transfer at the air/water interface in a fugacity-based multimedia model. The measured log KOC of octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), and dodecamethylcyclohexasiloxane (D6) were 5.06, 6.12, and 7.07, and log KDOC were 5.05, 6.13, and 6.79. To our knowledge, our measurements for KOC of D6 and KDOC of D4 and D6 are the first reported. Polyparameter linear free energy relationships (PP-LFERs) derived from training sets of empirical data that did not include cVMS generally did not predict our measured partition ratios of cVMS accurately (root-mean-square error (RMSE) of 0.76 for log KOC and 0.73 for log KDOC). We constructed new PP-LFERs that accurately describe partition ratios for the cVMS as well as for other chemicals by including our new measurements in the existing training sets (RMSE for cVMS: 0.09 for log KOC, 0.12 for log KDOC). The PP-LFERs we have developed here should be further evaluated and perhaps recalibrated when experimental data for other siloxanes become available.
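    A PP-LFER of the common Abraham form log K = c + eE + sS + aA + bB + vV is calibrated by ordinary least squares on a training set and judged by its RMSE. The sketch below shows only that fitting workflow; the descriptors, coefficients, and "measurements" are synthetic placeholders, not the paper's cVMS or PCB data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical Abraham-type descriptors [E, S, A, B, V] for eight solutes
# (placeholder values, not this study's training set).
D = rng.uniform(0.0, 2.0, size=(8, 5))
true_coef = np.array([0.6, -1.5, -0.2, -2.5, 3.0])        # e, s, a, b, v (illustrative)
log_koc = 0.1 + D @ true_coef + rng.normal(0.0, 0.05, 8)  # synthetic "measurements"

# Calibrate log K = c + e*E + s*S + a*A + b*B + v*V by least squares.
X = np.column_stack([np.ones(len(D)), D])
coef, *_ = np.linalg.lstsq(X, log_koc, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - log_koc) ** 2))
print(f"fitted coefficients: {np.round(coef, 2)}, RMSE = {rmse:.3f}")
```

Extending the training set with new measurements (as the authors did for cVMS) just means appending rows to `D` and `log_koc` before refitting.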

  11. Partitioning and interfacial tracers for differentiating NAPL entrapment configuration: column-scale investigation.

    PubMed

    Dai, D; Barranco, F T; Illangasekare, T H

    2001-12-15

    Research on the use of partitioning and interfacial tracers has led to the development of techniques for estimating subsurface NAPL amount and NAPL-water interfacial area. Although these techniques have been utilized with some success at field sites, current application is limited largely to NAPL at residual saturation, such as for the case of post-remediation settings where mobile NAPL has been removed through product recovery. The goal of this study was to fundamentally evaluate partitioning and interfacial tracer behavior in controlled column-scale test cells for a range of entrapment configurations varying in NAPL saturation, with the results serving as a determinant of technique efficacy (and design protocol) for use with complexly distributed NAPLs, possibly at high saturation, in heterogeneous aquifers. Representative end members of the range of entrapment configurations observed under conditions of natural heterogeneity (an occurrence with residual NAPL saturation [discontinuous blobs] and an occurrence with high NAPL saturation [continuous free-phase LNAPL lens]) were evaluated. Study results indicated accurate prediction (using measured tracer retardation and equilibrium-based computational techniques) of NAPL amount and NAPL-water interfacial area for the case of residual NAPL saturation. For the high-saturation LNAPL lens, results indicated that NAPL-water interfacial area, but not NAPL amount (underpredicted by 35%), can be reasonably determined using conventional computation techniques. Underprediction of NAPL amount led to an erroneous prediction of NAPL distribution, as indicated by the NAPL morphology index. In light of these results, careful consideration should be given to technique design and critical assumptions before applying equilibrium-based partitioning tracer methodology to settings where NAPLs are complexly entrapped, such as in naturally heterogeneous subsurface formations.
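    The equilibrium computation behind partitioning-tracer estimates is the standard retardation relation R = 1 + K_nw·S_n/(1 − S_n), inverted for the NAPL saturation S_n. A minimal sketch of that inversion, with illustrative numbers rather than values from these column experiments:

```python
def napl_saturation(retardation: float, k_nw: float) -> float:
    """Invert the equilibrium partitioning-tracer relation
        R = 1 + K_nw * S_n / (1 - S_n)
    to estimate NAPL saturation S_n from the measured tracer retardation
    factor R and the tracer's NAPL-water partition coefficient K_nw."""
    return (retardation - 1.0) / (retardation - 1.0 + k_nw)

# Illustrative: a tracer with K_nw = 50 retarded by a factor of 2
# implies roughly 2% NAPL saturation.
print(round(napl_saturation(2.0, 50.0), 4))
```

The study's point is that this equilibrium assumption holds for residual blobs but underpredicts NAPL amount for a high-saturation free-phase lens, where tracer access to the NAPL is rate-limited.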

  12. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
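    The telephone-system blocking model referred to above is classically the Erlang B formula. A minimal sketch of its numerically stable recursion shows the economy of scale the paper quantifies: at equal utilization, a large monolithic server blocks far fewer requests than a small partition. The stream counts below are illustrative, not the paper's configurations.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Probability that a stream request is blocked when `servers` stream
    slots face `offered_load` Erlangs of demand, computed with the
    numerically stable Erlang B recursion:
        B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Economy of scale at equal 80% utilization (illustrative sizes):
print(erlang_b(10, 8.0))    # small partitioned server
print(erlang_b(100, 80.0))  # large monolithic server image
```

Conversely, hitting a fixed blocking target on many small partitions requires proportionally more contingency capacity than on one large server, which is the partitioning cost the paper measures.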

  13. Computational solvent system screening for the separation of tocopherols with centrifugal partition chromatography using deep eutectic solvent-based biphasic systems.

    PubMed

    Bezold, Franziska; Weinberger, Maria E; Minceva, Mirjana

    2017-03-31

    Tocopherols are a class of molecules with vitamin E activity. Among those, α-tocopherol is the most important vitamin E source in the human diet. The purification of tocopherols involving biphasic liquid systems can be challenging since these vitamins are poorly soluble in water. Deep eutectic solvents (DES) can be used to form water-free biphasic systems and have already proven applicable for centrifugal partition chromatography separations. In this work, a computational solvent system screening was performed using the predictive thermodynamic model COSMO-RS. Liquid-liquid equilibria of solvent systems composed of alkanes, alcohols and DES, as well as partition coefficients of α-tocopherol, β-tocopherol, γ-tocopherol, and σ-tocopherol in these biphasic solvent systems were calculated. From the results the best suited biphasic solvent system, namely heptane/ethanol/choline chloride-1,4-butanediol, was chosen and a batch injection of a tocopherol mixture, mainly consisting of α- and γ-tocopherol, was performed using a centrifugal partition chromatography set up (SCPE 250-BIO). A separation factor of 1.74 was achieved for α- and γ-tocopherol. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Minimum nonuniform graph partitioning with unrelated weights

    NASA Astrophysics Data System (ADS)

    Makarychev, K. S.; Makarychev, Yu S.

    2017-12-01

    We give a bi-criteria approximation algorithm for the Minimum Nonuniform Graph Partitioning problem, recently introduced by Krauthgamer, Naor, Schwartz and Talwar. In this problem, we are given a graph G = (V, E) and k numbers ρ_1, …, ρ_k. The goal is to partition V into k disjoint sets (bins) P_1, …, P_k satisfying |P_i| ≤ ρ_i |V| for all i, so as to minimize the number of edges cut by the partition. Our bi-criteria algorithm gives an O(√(log |V| log k)) approximation for the objective function in general graphs and an O(1) approximation in graphs excluding a fixed minor. The approximate solution satisfies the relaxed capacity constraints |P_i| ≤ (5 + ε) ρ_i |V|. This algorithm is an improvement upon the O(log |V|)-approximation algorithm by Krauthgamer, Naor, Schwartz and Talwar. We extend our results to the case of 'unrelated weights' and to the case of 'unrelated d-dimensional weights'. A preliminary version of this work was presented at the 41st International Colloquium on Automata, Languages and Programming (ICALP 2014). Bibliography: 7 titles.
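    The objective and constraints of the problem are easy to check mechanically, even though finding good partitions is hard. The sketch below is only a feasibility checker for a candidate partition under relaxed capacities (slack playing the role of 5 + ε); it is not the bi-criteria approximation algorithm itself.

```python
def cut_size_and_feasible(edges, parts, rhos, slack=1.0):
    """Count edges cut by a candidate partition `parts` of the vertex set
    and check the nonuniform capacity constraints
        |P_i| <= slack * rho_i * |V|."""
    n = sum(len(p) for p in parts)
    bin_of = {v: i for i, p in enumerate(parts) for v in p}
    cut = sum(1 for u, v in edges if bin_of[u] != bin_of[v])
    feasible = all(len(p) <= slack * rho * n for p, rho in zip(parts, rhos))
    return cut, feasible

# A 4-cycle split into two bins, each allowed half the vertices:
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(cut_size_and_feasible(edges, [{0, 1}, {2, 3}], [0.5, 0.5]))  # (2, True)
```

With rhos = [0.25, 0.75] the same partition violates the first bin's capacity (2 > 0.25·4), which is exactly the kind of constraint the algorithm is allowed to exceed only by the (5 + ε) factor.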

  15. Transdermal delivery of diclofenac using water-in-oil microemulsion: formulation and mechanistic approach of drug skin permeation.

    PubMed

    Thakkar, Priyanka J; Madan, Parshotam; Lin, Senshang

    2014-05-01

    The objective of the present investigation was to enhance skin permeation of diclofenac using water-in-oil microemulsion and to elucidate its skin permeation mechanism. The w/o microemulsion formulations were selected based on constructed pseudoternary phase diagrams depending on water solubilization capacity and thermodynamic stability. These formulations were also subjected to physical characterization based on droplet size, viscosity, pH and conductivity. Permeation of diclofenac across rat skin from selected w/o microemulsion formulations, measured using side-by-side permeation cells, was evaluated and compared with control formulations. The selected w/o microemulsion formulations were thermodynamically stable, and incorporation of diclofenac sodium into microemulsion did not affect the phase behavior of the system. All microemulsion formulations had very low viscosity (11-17 cps) and droplet size range of 30-160 nm. Microemulsion formulations exhibited a statistically significant increase in diclofenac permeation compared to oily solution, aqueous solution and oil-Smix solution. Higher skin permeation of diclofenac was observed with low Smix concentration and smaller droplet size. Increase in diclofenac loading in aqueous phase decreased the partition of diclofenac. Diclofenac from the oil phase of microemulsion could directly partition into skin, while diclofenac from the aqueous droplets was carried through skin by carrier effect.

  16. Particle size distributions and gas-particle partitioning of polychlorinated dibenzo-p-dioxins and dibenzofurans in ambient air during haze days and normal days.

    PubMed

    Zhang, Xian; Zheng, Minghui; Liang, Yong; Liu, Guorui; Zhu, Qingqing; Gao, Lirong; Liu, Wenbin; Xiao, Ke; Sun, Xu

    2016-12-15

    Little information is available on the distributions of airborne polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) during haze days. In this study, PCDD/F concentrations, particle size distributions, and gas-particle partitioning in a Beijing suburban area during haze days and normal days were investigated. High PCDD/F concentrations, 3979-74,702 fg/m3 (173-3885 fg I-TEQ/m3), were found during haze days, and ~98% of the PCDD/Fs were associated with particles. Most PCDD/F congeners (>90%) were associated with particles. PCDD/F concentrations increased as particle sizes decreased, and 95% of the particle-bound PCDD/Fs were associated with inhalable fine particles with aerodynamic diameters < 2.5 μm. PCDD/Fs were mainly absorbed in the particles, and the Harner-Bidleman model predicted the particulate fractions of the PCDD/F congeners in the air samples well. The investigated PCDD/F concentrations and particle-bound distributions differed between normal days and haze days. Temporal airborne PCDD/F trends in a suburban area during haze conditions could support better understanding of the exposure risk posed by toxic PCDD/Fs associated with fine particles. Copyright © 2016 Elsevier B.V. All rights reserved.
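    The Harner-Bidleman model mentioned above predicts the particle-bound fraction of a semivolatile compound from its octanol-air partition coefficient (Koa), the organic-matter fraction of the particles, and total suspended particulate matter. A sketch with illustrative inputs (not values measured in this study):

```python
import math

def particle_bound_fraction(log_koa: float, f_om: float, tsp: float) -> float:
    """Particle-bound fraction via the Harner-Bidleman Koa absorption model:
        log Kp = log Koa + log10(f_om) - 11.91   (Kp in m3/ug)
        phi    = Kp * TSP / (1 + Kp * TSP)       (TSP in ug/m3)
    where f_om is the organic-matter fraction of the particles."""
    kp = 10.0 ** (log_koa + math.log10(f_om) - 11.91)
    return kp * tsp / (1.0 + kp * tsp)

# Illustrative: a congener with log Koa = 11.5, 20% organic matter,
# and hazy air carrying 300 ug/m3 of particulate matter.
print(particle_bound_fraction(11.5, 0.20, 300.0))
```

The model's structure makes the haze effect visible directly: raising TSP pushes the fraction toward 1, consistent with the ~98% particle association reported during haze days.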

  17. Finite-size analysis of the detectability limit of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Young, Jean-Gabriel; Desrosiers, Patrick; Hébert-Dufresne, Laurent; Laurence, Edward; Dubé, Louis J.

    2017-06-01

    It has been shown in recent years that the stochastic block model is sometimes undetectable in the sparse limit, i.e., that no algorithm can identify a partition correlated with the partition used to generate an instance, if the instance is sparse enough and infinitely large. In this contribution, we treat the finite case explicitly, using arguments drawn from information theory and statistics. We give a necessary condition for finite-size detectability in the general SBM. We then distinguish the concept of average detectability from the concept of instance-by-instance detectability and give explicit formulas for both definitions. Using these formulas, we prove that there exist large equivalence classes of parameters, where widely different network ensembles are equally detectable with respect to our definitions of detectability. In an extensive case study, we investigate the finite-size detectability of a simplified variant of the SBM, which encompasses a number of important models as special cases. These models include the symmetric SBM, the planted coloring model, and more exotic SBMs not previously studied. We conclude with three appendices, where we study the interplay of noise and detectability, establish a connection between our information-theoretic approach and random matrix theory, and provide proofs of some of the more technical results.
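    For the symmetric SBM, the sparse-limit (Kesten-Stigum) detectability condition that this finite-size analysis refines can be written down in one line. The sampler below is a hypothetical helper for experimenting with small instances, not code from the paper.

```python
import random

def sample_sbm(sizes, p_in, p_out, seed=0):
    """Sample an undirected SBM instance: nodes in the same block link with
    probability p_in, across blocks with p_out.
    Returns (edge list, block label per node)."""
    rng = random.Random(seed)
    block = [b for b, n in enumerate(sizes) for _ in range(n)]
    edges = [(u, v)
             for u in range(len(block)) for v in range(u + 1, len(block))
             if rng.random() < (p_in if block[u] == block[v] else p_out)]
    return edges, block

def kesten_stigum_detectable(q, c_in, c_out):
    """Sparse-limit detectability for the symmetric q-block SBM with average
    within/between degrees c_in, c_out: the planted partition is detectable
    when (c_in - c_out)**2 > q**2 * c, where c = (c_in + (q-1)*c_out)/q is
    the mean degree. Finite instances, the paper's subject, blur this
    sharp asymptotic threshold."""
    c = (c_in + (q - 1) * c_out) / q
    return (c_in - c_out) ** 2 > q ** 2 * c

print(kesten_stigum_detectable(2, 8.0, 2.0))  # well-separated blocks: True
print(kesten_stigum_detectable(2, 5.0, 4.0))  # blocks too similar: False
```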

  18. Characterization of fly ash from low-sulfur and high-sulfur coal sources: Partitioning of carbon and trace elements with particle size

    USGS Publications Warehouse

    Hower, J.C.; Trimble, A.S.; Eble, C.F.; Palmer, C.A.; Kolker, A.

    1999-01-01

    Fly ash samples were collected in November and December of 1994, from generating units at a Kentucky power station using high- and low-sulfur feed coals. The samples are part of a two-year study of the coal and coal combustion byproducts from the power station. The ashes were wet screened at 100, 200, 325, and 500 mesh (150, 75, 42, and 25 μm, respectively). The size fractions were then dried, weighed, split for petrographic and chemical analysis, and analyzed for ash yield and carbon content. The low-sulfur "heavy side" and "light side" ashes each have a similar size distribution in the November samples. In contrast, the December fly ashes showed the trend observed in later months, the light-side ash being finer (over 20% more ash in the -500 mesh [-25 μm] fraction) than the heavy-side ash. Carbon tended to be concentrated in the coarse fractions in the December samples. The dominance of the -325 mesh (-42 μm) fractions in the overall size analysis implies, though, that carbon in the fine sizes may be an important consideration in the utilization of the fly ash. Element partitioning follows several patterns. Volatile elements, such as Zn and As, are enriched in the finer sizes, particularly in fly ashes collected at cooler, light-side electrostatic precipitator (ESP) temperatures. The latter trend is a function of precipitation at the cooler ESP temperatures and of increasing concentration with the increased surface area of the finest fraction. Mercury concentrations are higher in high-carbon fly ashes, suggesting Hg adsorption on the fly ash carbon. Ni and Cr are associated, in part, with the spinel minerals in the fly ash. Copyright © 1999 Taylor & Francis.

  19. Size distribution and sorption of polychlorinated biphenyls during haze episodes

    NASA Astrophysics Data System (ADS)

    Zhu, Qingqing; Liu, Guorui; Zheng, Minghui; Zhang, Xian; Gao, Lirong; Su, Guijin; Liang, Yong

    2018-01-01

    There is a lack of studies on the size distribution of polychlorinated biphenyls (PCBs) during haze days, and their sorption mechanisms on aerosol particles remain unclear. In this study, PCBs in particle-sized aerosols from urban atmospheres of Beijing, China were investigated during haze and normal days. The concentrations, gas/particle partitioning, size distribution, and associated human daily intake of PCBs via inhalation were compared during haze days and normal days. Compared with normal days, higher particle mass-associated PCB levels were measured during haze days. The concentrations of ∑PCBs in particulate fractions were 11.9-134 pg/m3 and 6.37-14.9 pg/m3 during haze days and normal days, respectively. PCBs increased with decreasing particle size (>10 μm, 10-2.5 μm, 2.5-1.0 μm, and ≤1.0 μm). During haze days, PCBs were overwhelmingly associated with a fine particle fraction of ≤1.0 μm (64.6%), while during normal days the contribution was 33.7%. Tetra-CBs were the largest contributors (51.8%-66.7%) both in the gas and particle fractions during normal days. The profiles in the gas fraction were conspicuously different than those in the PM fractions during haze days, with di-CBs predominating in the gas fraction and higher homologues (tetra-CBs, penta-CBs, and hexa-CBs) concurrently accounting for most of the PM fractions. The mean-normalized size distributions of particulate mass and PCBs exhibited unimodal patterns, and a similar trend was observed for PCBs during both days. They all tended to be in the PM fraction of 1.0-2.5 μm. Adsorption might be the predominant mechanism for the gas-particle partitioning of PCBs during haze days, whereas absorption might be dominant during normal days.

  20. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    PubMed

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well-tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R = 0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R = 0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R = 0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R = 0.54 to R = 0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  1. Partitioning of hydrophobic organic contaminants between polymer and lipids for two silicones and low density polyethylene.

    PubMed

    Smedes, Foppe; Rusina, Tatsiana P; Beeltje, Henry; Mayer, Philipp

    2017-11-01

    Polymers are increasingly used for passive sampling of neutral hydrophobic organic substances (HOC) in environmental media including water, air, soil, sediment and even biological tissue. The equilibrium concentration of HOC in the polymer can be measured and then converted into equilibrium concentrations in other (defined) media, which however requires appropriate polymer to media partition coefficients. We thus determined polymer-lipid partition coefficients (KPL) of various PCB, PAH and organochlorine pesticides by equilibration of two silicones and low density polyethylene (LDPE) with fish oil and Triolein at 4 °C and 20 °C. We observed (i) that KPL was largely independent of lipid type and temperature, (ii) that lipid diffusion rates in the polymers were higher compared to predictions based on their molecular volume, (iii) that silicones showed higher lipid diffusion and lower lipid sorption compared to LDPE and (iv) that absorbed lipid behaved like a co-solute and did not affect the partitioning of HOC, at least for the smaller molecular size HOC. The obtained KPL can convert measured equilibrium concentrations in passive sampling polymers into equilibrium concentrations in lipid, which then can be used (1) for environmental quality monitoring and assessment, (2) for thermodynamic exposure assessment and (3) for assessing the linkage between passive sampling and the traditionally measured lipid-normalized concentrations in biota. LDPE-lipid partition coefficients may also be of use for a thermodynamically sound risk assessment of HOC contained in microplastics. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. An efficient approach for treating composition-dependent diffusion within organic particles

    DOE PAGES

    O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.; ...

    2017-09-07

    Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.
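    The numerical benchmark such corrections are tested against is diffusion with a concentration-dependent coefficient D(c). A schematic explicit finite-difference sketch for a 1-D slab is below; it is illustrative only, as the paper treats spherical particles coupled to gas-particle partitioning, and the D(c) function here is hypothetical.

```python
def diffuse(c, D_of_c, dr, dt, steps):
    """Explicit, flux-conservative finite-difference integration of 1-D
    diffusion with a composition-dependent coefficient D(c) and zero-flux
    boundaries. Requires dt * max(D) / dr**2 <= 0.5 for stability."""
    c = list(c)
    for _ in range(steps):
        new = c[:]
        for i in range(len(c)):
            cl = c[i - 1] if i > 0 else c[i]            # zero-flux at the ends
            cr = c[i + 1] if i < len(c) - 1 else c[i]
            Dp = 0.5 * (D_of_c(c[i]) + D_of_c(cr))      # interface diffusivities
            Dm = 0.5 * (D_of_c(c[i]) + D_of_c(cl))
            new[i] = c[i] + dt / dr ** 2 * (Dp * (cr - c[i]) - Dm * (c[i] - cl))
        c = new
    return c

# A plasticising component: diffusivity grows with its own concentration,
# so the leading edge of the profile spreads faster than a constant-D run.
profile = diffuse([1.0, 0.0, 0.0, 0.0, 0.0], lambda x: 0.1 + 0.9 * x, 1.0, 0.2, 40)
print([round(x, 3) for x in profile])
```

Because the scheme is flux-conservative with closed boundaries, total mass is preserved exactly, which is a useful sanity check when comparing against an analytical approximation.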

  3. An efficient approach for treating composition-dependent diffusion within organic particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.

    Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.

  4. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues), but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit when comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
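    For context, a cyclohexane/water partition coefficient follows from the difference between the two solvation free energies of the neutral solute. A minimal sketch (the function name and sign convention are illustrative assumptions, not part of the SAMPL5 or 3D-RISM tooling; energies in kJ/mol):

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol*K)
T = 298.15          # temperature, K

def log_p(dG_water, dG_cyclohexane):
    """log10 P (cyclohexane/water) from solvation free energies.

    Transfer free energy water -> cyclohexane:
        dG_transfer = dG_cyclohexane - dG_water
    and log P = -dG_transfer / (RT ln 10).
    """
    dG_transfer = dG_cyclohexane - dG_water
    return -dG_transfer / (R * T * math.log(10))
```

    A solute solvated 10 kJ/mol more favorably in cyclohexane than in water thus gets log P of about +1.75, i.e. it partitions preferentially into the organic phase.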

  5. Clustering algorithms for identifying core atom sets and for assessing the precision of protein structure ensembles.

    PubMed

    Snyder, David A; Montelione, Gaetano T

    2005-06-01

    An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains" (locally well-defined regions that are not aligned in global superimpositions) complicates RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition, we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to determine the partitioning of the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of the precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.

  6. Detecting Anomalies from End-to-End Internet Performance Measurements (PingER) Using Cluster Based Local Outlier Factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Saqib; Wang, Guojun; Cottrell, Roger Leslie

    PingER (Ping End-to-End Reporting) is a worldwide end-to-end Internet performance measurement framework. It was developed by the SLAC National Accelerator Laboratory, Stanford, USA, and has been running for the last 20 years. It has more than 700 monitoring agents and remote sites that monitor the performance of Internet links in around 170 countries of the world. At present, the size of the compressed PingER data set is about 60 GB, comprising 100,000 flat files. The data is publicly available for valuable Internet performance analyses. However, the data sets suffer from missing values and anomalies due to congestion, bottleneck links, queuing overflow, network software misconfiguration, hardware failure, cable cuts, and social upheavals. Therefore, the objective of this paper is to detect such performance drops or spikes, labeled as anomalies or outliers, in the PingER data set. In the proposed approach, the raw text files of the data set are transformed into a PingER dimensional model. The missing values are imputed using the k-NN algorithm. The data is partitioned into similar instances using the k-means clustering algorithm. Afterward, clustering is integrated with the Local Outlier Factor (LOF) using the Cluster Based Local Outlier Factor (CBLOF) algorithm to detect the anomalies or outliers in the PingER data. Lastly, anomalies are further analyzed to identify the time frame and location of the hosts generating the major percentage of the anomalies in the PingER data set, which ranges from 1998 to 2016.
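    The cluster-then-score stage can be sketched in a few lines. This is a minimal toy (plain k-means plus a CBLOF-style score; the point set, k, and the alpha cutoff separating "large" from "small" clusters are illustrative assumptions, not the paper's actual pipeline):

```python
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D points; returns (centroids, labels)."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist(p, centroids[c]))
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centroids, labels

def cblof_scores(points, k=2, alpha=0.9):
    """CBLOF-style scores: cluster first, then score each point by its
    cluster's size times the distance to the nearest *large* cluster
    centroid (its own centroid if it already sits in a large cluster)."""
    centroids, labels = kmeans(points, k)
    sizes = [labels.count(c) for c in range(k)]
    # clusters covering the top `alpha` fraction of points count as "large"
    order = sorted(range(k), key=lambda c: -sizes[c])
    large, covered = set(), 0
    for c in order:
        large.add(c)
        covered += sizes[c]
        if covered >= alpha * len(points):
            break
    scores = []
    for p, l in zip(points, labels):
        if l in large:
            scores.append(sizes[l] * dist(p, centroids[l]))
        else:
            scores.append(sizes[l] * min(dist(p, centroids[c]) for c in large))
    return scores
```

    On a toy set of five clustered points plus one distant point, the distant point receives the largest score, which is the behavior the anomaly-detection step relies on.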

  7. Detecting Anomalies from End-to-End Internet Performance Measurements (PingER) Using Cluster Based Local Outlier Factor

    DOE PAGES

    Ali, Saqib; Wang, Guojun; Cottrell, Roger Leslie; ...

    2018-05-28

    PingER (Ping End-to-End Reporting) is a worldwide end-to-end Internet performance measurement framework. It was developed by the SLAC National Accelerator Laboratory, Stanford, USA, and has been running for the last 20 years. It has more than 700 monitoring agents and remote sites that monitor the performance of Internet links in around 170 countries of the world. At present, the size of the compressed PingER data set is about 60 GB, comprising 100,000 flat files. The data is publicly available for valuable Internet performance analyses. However, the data sets suffer from missing values and anomalies due to congestion, bottleneck links, queuing overflow, network software misconfiguration, hardware failure, cable cuts, and social upheavals. Therefore, the objective of this paper is to detect such performance drops or spikes, labeled as anomalies or outliers, in the PingER data set. In the proposed approach, the raw text files of the data set are transformed into a PingER dimensional model. The missing values are imputed using the k-NN algorithm. The data is partitioned into similar instances using the k-means clustering algorithm. Afterward, clustering is integrated with the Local Outlier Factor (LOF) using the Cluster Based Local Outlier Factor (CBLOF) algorithm to detect the anomalies or outliers in the PingER data. Lastly, anomalies are further analyzed to identify the time frame and location of the hosts generating the major percentage of the anomalies in the PingER data set, which ranges from 1998 to 2016.

  8. Density of states, Potts zeros, and Fisher zeros of the Q-state Potts model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seung-Yeon; Creswick, Richard J.

    2001-06-01

    The Q-state Potts model can be extended to noninteger and even complex Q by expressing the partition function in the Fortuin-Kasteleyn (F-K) representation. In the F-K representation the partition function Z(Q,a) is a polynomial in Q and v = a - 1 (a = e^(βJ)), and the coefficients of this polynomial, Φ(b,c), are the numbers of graphs on the lattice consisting of b bonds and c connected clusters. We introduce the random-cluster transfer matrix to compute Φ(b,c) exactly on finite square lattices with several types of boundary conditions. Given the F-K representation of the partition function, we begin by studying the critical Potts model Z_CP = Z(Q, a_c(Q)), where a_c(Q) = 1 + √Q. We find a set of zeros in the complex w = √Q plane that map to (or close to) the Beraha numbers for real positive Q. We also identify Q̃_c(L), the value of Q for a lattice of width L above which the locus of zeros in the complex p = v/√Q plane lies on the unit circle. By finite-size scaling we find that 1/Q̃_c(L) → 0 as L → ∞. We then study zeros of the antiferromagnetic (AF) Potts model in the complex Q plane and determine Q_c(a), the largest value of Q for a fixed value of a below which there is AF order. We find excellent agreement with Baxter's conjecture Q_c^AF(a) = (1 - a)(a + 3). We also investigate the locus of zeros of the ferromagnetic Potts model in the complex Q plane and confirm that Q_c^FM(a) = (a - 1)^2. We show that the edge singularity in the complex Q plane approaches Q_c as Q_c(L) ~ Q_c + A·L^(-y_q), and determine the scaling exponent y_q for several values of Q. Finally, by finite-size scaling of the Fisher zeros near the antiferromagnetic critical point, we determine the thermal exponent y_t as a function of Q in the range 2 ≤ Q ≤ 3. Using data for lattices of size 3 ≤ L ≤ 8 we find that y_t is a smooth function of Q, well fitted by y_t = (1 + Au + Bu^2)/(C + Du), where u = -(2/π)·cos^(-1)(√Q/2). For Q = 3 we find y_t ≈ 0.6; however, if we include lattices up to L = 12 we find y_t ≈ 0.50(8), in rough agreement with a recent result of Ferreira and Sokal [J. Stat. Phys. 96, 461 (1999)].
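    The F-K representation lends itself to a brute-force check on tiny graphs. The sketch below (illustrative only; the paper uses a random-cluster transfer matrix, not enumeration) sums v^b · Q^c over all edge subsets, counting connected clusters with a small union-find:

```python
from itertools import combinations

def fk_partition(n, edges, Q, v):
    """Fortuin-Kasteleyn form of the Potts partition function:
    Z(Q, v) = sum over edge subsets A of v^|A| * Q^c(A),
    where c(A) is the number of connected components of (V, A).
    Exponential in |edges|, so only usable for tiny graphs."""
    def components(subset):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        for a, b in subset:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
        return len({find(x) for x in range(n)})
    Z = 0.0
    for r in range(len(edges) + 1):
        for A in combinations(edges, r):
            Z += v ** len(A) * Q ** components(A)
    return Z
```

    For a single edge with Q = 2 and v = 1 (i.e. a = 2), the F-K sum gives Q^2 + vQ = 6, matching the direct spin sum over the four configurations of two Ising-like spins.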

  9. A fuzzy neural network for intelligent data processing

    NASA Astrophysics Data System (ADS)

    Xie, Wei; Chu, Feng; Wang, Lipo; Lim, Eng Thiam

    2005-03-01

    In this paper, we describe an incrementally generated fuzzy neural network (FNN) for intelligent data processing. This FNN combines the features of initial fuzzy model self-generation, fast input selection, partition validation, parameter optimization, and rule-base simplification. A small FNN is created from scratch: there is no need to specify the initial network architecture, initial membership functions, or initial weights. Fuzzy IF-THEN rules are constantly combined and pruned to minimize the size of the network while maintaining accuracy; irrelevant inputs are detected and deleted, and membership functions and network weights are trained with a gradient descent algorithm, i.e., error backpropagation. Experimental studies on synthesized data sets demonstrate that the proposed FNN is able to achieve accuracy comparable to or higher than both a feedforward crisp neural network, i.e., NeuroRule, and a decision tree, i.e., C4.5, with more compact rule bases for most of the data sets used in our experiments. The FNN has achieved outstanding results for cancer classification based on microarray data, including an excellent classification result on the Small Round Blue Cell Tumors (SRBCTs) data set. Compared with other published methods, we used far fewer genes for perfect classification, which will help researchers focus their attention directly on specific genes and may lead to a deeper understanding of cancer development and to drug discovery.

  10. Simulating the phase partitioning of NH3, HNO3, and HCl with size-resolved particles over northern Colorado in winter

    EPA Science Inventory

    Numerical modeling of inorganic aerosol processes is useful in air quality management, but comprehensive evaluation of modeled aerosol processes is rarely possible due to the lack of comprehensive datasets. During the Nitrogen, Aerosol Composition, and Halogens on a Tall Tower (N...

  11. Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models.

    PubMed

    Elben, A; Vermersch, B; Dalmonte, M; Cirac, J I; Zoller, P

    2018-02-02

    We present a scheme for measuring Rényi entropies in generic atomic Hubbard and spin models using single copies of a quantum state and for partitions in arbitrary spatial dimensions. Our approach is based on the generation of random unitaries from random quenches, implemented using engineered time-dependent disorder potentials, and standard projective measurements, as realized by quantum gas microscopes. By analyzing the properties of the generated unitaries and the role of statistical errors, with respect to the size of the partition, we show that the protocol can be realized in existing quantum simulators and used to measure, for instance, area law scaling of entanglement in two-dimensional spin models or the entanglement growth in many-body localized systems.
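    For reference, the quantity being measured is the Rényi entropy of order n of the reduced state on the chosen partition (this is the standard definition, not notation taken from the paper):

```latex
S^{(n)}(\rho_A) \;=\; \frac{1}{1-n}\,\log \operatorname{Tr}\!\left(\rho_A^{\,n}\right),
\qquad
\rho_A \;=\; \operatorname{Tr}_{B}\,|\psi\rangle\langle\psi| ,
```

    so for n = 2 the protocol effectively estimates the purity Tr(ρ_A²) of the subsystem from randomized measurements.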

  12. Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models

    NASA Astrophysics Data System (ADS)

    Elben, A.; Vermersch, B.; Dalmonte, M.; Cirac, J. I.; Zoller, P.

    2018-02-01

    We present a scheme for measuring Rényi entropies in generic atomic Hubbard and spin models using single copies of a quantum state and for partitions in arbitrary spatial dimensions. Our approach is based on the generation of random unitaries from random quenches, implemented using engineered time-dependent disorder potentials, and standard projective measurements, as realized by quantum gas microscopes. By analyzing the properties of the generated unitaries and the role of statistical errors, with respect to the size of the partition, we show that the protocol can be realized in existing quantum simulators and used to measure, for instance, area law scaling of entanglement in two-dimensional spin models or the entanglement growth in many-body localized systems.

  13. Network immunization under limited budget using graph spectra

    NASA Astrophysics Data System (ADS)

    Zahedi, R.; Khansari, M.

    2016-03-01

    In this paper, we propose a new algorithm that minimizes the worst expected growth of an epidemic by reducing the size of the largest connected component (LCC) of the underlying contact network. The proposed algorithm is applicable to any level of available resources and, unlike the greedy approach of most immunization strategies, selects nodes simultaneously. In each iteration, the proposed method partitions the LCC into two groups that are the best candidates for communities in that component and that the available resources are sufficient to separate. Using Laplacian spectral partitioning, the proposed method performs community detection with a time complexity that rivals that of the best previous methods. Experiments show that our method outperforms targeted immunization approaches in both real and synthetic networks.
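    Laplacian spectral partitioning of a small component can be sketched in a few lines. This is a minimal illustration, not the authors' algorithm: power iteration on a shifted Laplacian approximates the Fiedler vector, whose sign pattern splits the nodes; the shift constant, starting vector, and iteration count are ad hoc choices.

```python
def fiedler_partition(n, edges, iters=500):
    """Split nodes 0..n-1 by the sign of an approximate Fiedler vector.

    Power iteration on B = cI - L (c > largest eigenvalue of L),
    repeatedly projected off the constant vector, converges to the
    eigenvector of the Laplacian L with the smallest nonzero eigenvalue.
    """
    deg = [0] * n
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b); adj[b].append(a)
        deg[a] += 1; deg[b] += 1
    c = 2 * max(deg) + 1                     # safe upper bound on eigenvalues of L
    x = [((i * 2654435761) % 97) / 97.0 for i in range(n)]  # fixed non-constant start
    for _ in range(iters):
        mean = sum(x) / n
        x = [xi - mean for xi in x]          # project off the constant eigenvector
        y = []
        for i in range(n):
            Lx = deg[i] * x[i] - sum(x[j] for j in adj[i])
            y.append(c * x[i] - Lx)          # apply B = cI - L
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return [{i for i in range(n) if x[i] < 0},
            {i for i in range(n) if x[i] >= 0}]
```

    On a barbell graph of two triangles joined by one bridge edge, the sign split recovers the two triangles, i.e. the natural two-community partition of the component.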

  14. Niche Partitioning in Three Sympatric Congeneric Species of Dragonfly, Orthetrum chrysostigma, O. coerulescens anceps, and O. nitidinerve: The Importance of Microhabitat

    PubMed Central

    Khelifa, Rassim; Zebsa, Rabah; Moussaoui, Abdelkrim; Kahalerras, Amin; Bensouilah, Soufyane; Mahdjoub, Hayat

    2013-01-01

    Habitat heterogeneity has been shown to promote the co-existence of closely related species. Based on this concept, a field study was conducted on the niche partitioning of three territorial congeneric species of skimmers (Anisoptera: Libellulidae) in Northeast Algeria during the breeding season of 2011. By size, there is a descending hierarchy from Orthetrum nitidinerve Sélys to O. chrysostigma (Burmeister) to O. coerulescens anceps (Schneider). Mark-resight surveys showed that the two latter species have the same breeding behavior sequence. Because they are almost the same size, these species should not be able to co-occur in the same habitat according to the competitive exclusion principle. The spatial distribution of the three species was investigated at two different microhabitats, and it was found that these two species were in fact isolated at this scale: O. chrysostigma and O. nitidinerve preferred open areas, while O. c. anceps occurred in highly vegetated waters. This study highlights the role of microhabitat in community structure as an important niche axis that maintains closely related species in the same habitat. PMID:24219357

  15. Polychlorinated biphenyls in the atmosphere of Taizhou, a major e-waste dismantling area in China.

    PubMed

    Han, Wenliang; Feng, Jialiang; Gu, Zeping; Wu, Minghong; Sheng, Guoying; Fu, Jiamo

    2010-01-01

    PM2.5, total suspended particle (TSP), and gas-phase samples were collected at two sites in Taizhou, a major e-waste dismantling area in China. Concentrations, seasonal variations, congener profiles, gas-particle partitioning, and size distributions of atmospheric polychlorinated biphenyls (PCBs) were studied to assess the current state of atmospheric PCBs after the phase-out of massive historical dismantling of PCB-containing e-wastes. The average Σ38PCBs concentration in the ambient air (TSP plus gas phase) near the e-waste dismantling area was (12,407 +/- 9592) pg/m3 in winter, which was substantially lower than that found one decade ago. However, the atmospheric PCB level near the e-waste dismantling area was 54 times that of the reference urban site, indicating that the impact of the historical dismantling of PCB-containing e-wastes was still significant. Tri- to penta-CBs were the dominant homologues, consistent with their dominance in global production. The size distribution of particle-bound PCBs showed that higher chlorinated CBs tended to partition more to the fine particles, facilitating their long-range atmospheric transport.

  16. In-situ ductile metal/bulk metallic glass matrix composites formed by chemical partitioning

    DOEpatents

    Kim, Choong Paul; Hays, Charles C.; Johnson, William L.

    2004-03-23

    A composite metal object comprises ductile crystalline metal particles in an amorphous metal matrix. An alloy is heated above its liquidus temperature. Upon cooling from the high temperature melt, the alloy chemically partitions, forming dendrites in the melt. Upon cooling the remaining liquid below the glass transition temperature it freezes to the amorphous state, producing a two-phase microstructure containing crystalline particles in an amorphous metal matrix. The ductile metal particles have a size in the range of from 0.1 to 15 micrometers and spacing in the range of from 0.1 to 20 micrometers. Preferably, the particle size is in the range of from 0.5 to 8 micrometers and spacing is in the range of from 1 to 10 micrometers. The volume proportion of particles is in the range of from 5 to 50% and preferably 15 to 35%. Differential cooling can produce oriented dendrites of ductile metal phase in an amorphous matrix. Examples are given in the Zr--Ti--Cu--Ni--Be alloy bulk glass forming system with added niobium.

  17. In-situ ductile metal/bulk metallic glass matrix composites formed by chemical partitioning

    DOEpatents

    Kim, Choong Paul [Northridge, CA; Hays, Charles C [Pasadena, CA; Johnson, William L [Pasadena, CA

    2007-07-17

    A composite metal object comprises ductile crystalline metal particles in an amorphous metal matrix. An alloy is heated above its liquidus temperature. Upon cooling from the high temperature melt, the alloy chemically partitions, forming dendrites in the melt. Upon cooling the remaining liquid below the glass transition temperature it freezes to the amorphous state, producing a two-phase microstructure containing crystalline particles in an amorphous metal matrix. The ductile metal particles have a size in the range of from 0.1 to 15 micrometers and spacing in the range of from 0.1 to 20 micrometers. Preferably, the particle size is in the range of from 0.5 to 8 micrometers and spacing is in the range of from 1 to 10 micrometers. The volume proportion of particles is in the range of from 5 to 50% and preferably 15 to 35%. Differential cooling can produce oriented dendrites of ductile metal phase in an amorphous matrix. Examples are given in the Zr--Ti--Cu--Ni--Be alloy bulk glass forming system with added niobium.

  18. A framework with Cucho algorithm for discovering regular plans in mobile clients

    NASA Astrophysics Data System (ADS)

    Tsiligaridis, John

    2017-09-01

    In a mobile computing system, broadcasting has become a very interesting and challenging research issue. The server continuously broadcasts data to mobile users; the data can be inserted into customized-size relations and broadcast as a Regular Broadcast Plan (RBP) over multiple channels. Given the data size for each provided service, two algorithms, the Basic Regular Algorithm (BRA) and the Partition Value Algorithm (PVA), provide static and dynamic RBP construction with multiple-constraint solutions, respectively. Servers have to define the data size of the services and can provide a feasible RBP using many broadcasting-plan operations. The operations become more complicated when there are many kinds of services and the sizes of the data sets are unknown to the server. To that end, a framework has been developed that also gives the ability to select low- or high-capacity channels for servicing. Theorems with new analytical results provide direct conditions for the existence of solutions to the RBP problem with the compound criterion. Two kinds of solutions are provided: the equal and the non-equal subrelation solutions. The Cucho Search Algorithm (CS), with its Lévy flight behavior, has been selected for the optimization. The CS for RBP (CSRP) is developed by applying the theorems to the discovery of RBPs. An additional change to CS has been made in order to strengthen the local search. The CS can also discover RBPs with the minimum number of channels. With the above, modern servers can be upgraded with these capabilities for discovering RBPs with fewer channels.

  19. Insights into the genetic architecture of morphological traits in two passerine bird species.

    PubMed

    Silva, C N S; McFarlane, S E; Hagen, I J; Rönnegård, L; Billing, A M; Kvalnes, T; Kemppainen, P; Rønning, B; Ringsby, T H; Sæther, B-E; Qvarnström, A; Ellegren, H; Jensen, H; Husby, A

    2017-09-01

    Knowledge about the underlying genetic architecture of phenotypic traits is needed to understand and predict evolutionary dynamics. The number of causal loci, the magnitude of their effects, and their location in the genome are, however, still largely unknown. Here, we use genome-wide single-nucleotide polymorphism (SNP) data from two large-scale data sets on house sparrows and collared flycatchers to examine the genetic architecture of different morphological traits (tarsus length, wing length, body mass, bill depth, bill length, total and visible badge size, and white wing patches). Genomic heritabilities were estimated using relatedness calculated from SNPs. The proportion of variance captured by the SNPs (SNP-based heritability) was lower in house sparrows compared with collared flycatchers, as expected given marker density (6348 SNPs in house sparrows versus 38,689 SNPs in collared flycatchers). Indeed, after downsampling to similar SNP density and sample size, this estimate was no longer markedly different between species. Chromosome-partitioning analyses demonstrated that the proportion of variance explained by each chromosome was significantly positively related to chromosome size for some traits and, generally, that larger chromosomes tended to explain proportionally more variation than smaller chromosomes. Finally, we found two genome-wide significant associations with very small effect sizes. One SNP on chromosome 20 was associated with bill length in house sparrows and explained 1.2% of phenotypic variation (V_P), and one SNP on chromosome 4 was associated with tarsus length in collared flycatchers (3% of V_P). Although we cannot exclude the possibility of undetected large-effect loci, our results indicate a polygenic basis for morphological traits.

  20. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Sherrill, C. David

    2014-07-01

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. 
A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.
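    As a sketch of the partition idea (standard SAPT0 component names; the atom-pairwise notation here is illustrative, not taken verbatim from the paper), each interaction component is re-expressed as a sum over atoms a of monomer A and b of monomer B:

```latex
E^{\mathrm{SAPT0}}_{\mathrm{int}}
  = E_{\mathrm{elst}} + E_{\mathrm{exch}} + E_{\mathrm{ind}} + E_{\mathrm{disp}},
\qquad
E_{X} = \sum_{a \in A}\,\sum_{b \in B} E_{X}^{ab},
\quad X \in \{\mathrm{elst}, \mathrm{exch}, \mathrm{ind}, \mathrm{disp}\},
```

    and summing the pairwise terms over the atoms of one monomer yields the per-atom contributions used for visualization.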

  1. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parrish, Robert M.; Sherrill, C. David, E-mail: sherrill@gatech.edu

    2014-07-28

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.

  2. [Analytic methods for seed models with genotype x environment interactions].

    PubMed

    Zhu, J

    1996-01-01

    Genetic models with genotype effects (G) and genotype x environment interaction effects (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into the seed direct genetic effect (G0), the cytoplasm genetic effect (C), and the maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can likewise be partitioned into the direct genetic by environment interaction effect (G0E), the cytoplasm genetic by environment interaction effect (CE), and the maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2, and backcrosses. A set of parents and their reciprocal F1 and F2 seeds is suitable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components, and unbiased estimates of covariance components between two traits can also be obtained by this method. Random genetic effects in the seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects, which can then be used in t-tests of the parameters. Unbiasedness and efficiency in estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.
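    The decomposition described above can be summarized compactly (notation as in the abstract):

```latex
\begin{align*}
G  &= G_0 + C + G_m, & G_0  &= A + D,   & G_m  &= A_m + D_m,\\
GE &= G_0E + CE + G_mE, & G_0E &= AE + DE, & G_mE &= A_mE + D_mE.
\end{align*}
```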

  3. A Locally Optimal Algorithm for Estimating a Generating Partition from an Observed Time Series and Its Application to Anomaly Detection.

    PubMed

    Ghalyan, Najah F; Miller, David J; Ray, Asok

    2018-06-12

    Estimation of a generating partition is critical for the symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system, and also for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points of the time series, and casts estimation of a generating partition as the minimization of this objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective; the difficulty lies in a heuristic nearest-neighbor symbol-assignment step. We instead develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating-partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application involving a polycrystalline alloy material.
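    The nearest-neighbor symbol-assignment idea can be illustrated with a much-simplified pointwise stand-in for the Hirata-style objective (the function name and the squared-error discrepancy are illustrative assumptions; the actual objective couples assignments across the entire time series rather than scoring samples independently):

```python
def symbolize(series, recon):
    """Map each sample to the index of its nearest reconstruction value,
    and report the total squared discrepancy between samples and their
    assigned reconstruction values (a toy, pointwise objective)."""
    symbols = [min(range(len(recon)), key=lambda s: abs(x - recon[s]))
               for x in series]
    disc = sum((x - recon[s]) ** 2 for x, s in zip(series, symbols))
    return symbols, disc
```

    Lowering the discrepancy by re-optimizing the reconstruction values and then re-assigning symbols, in alternation, is the flavor of descent scheme the paper makes locally optimal for the full, sequence-coupled objective.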

  4. Predation and landscape characteristics independently affect reef fish community organization.

    PubMed

    Stier, Adrian C; Hanson, Katharine M; Holbrook, Sally J; Schmitt, Russell J; Brooks, Andrew J

    2014-05-01

    Trophic island biogeography theory predicts that the effects of predators on prey diversity are context dependent in heterogeneous landscapes. Specifically, models predict that the positive effect of habitat area on prey diversity should decline in the presence of predators, and that predators should modify the partitioning of alpha and beta diversity across patchy landscapes. However, experimental tests of the predicted context dependency in top-down control remain limited. Using a factorial field experiment we quantify the effects of a focal predatory fish species (grouper) and habitat characteristics (patch size, fragmentation) on the partitioning of diversity and assembly of coral reef fish communities. We found independent effects of groupers and patch characteristics on prey communities. Groupers reduced prey abundance by 50% and gamma diversity by 45%, with a disproportionate removal of rare species relative to common species (64% and 36% reduction, respectively; an oddity effect). Further, there was a 77% reduction in beta diversity. Null model analysis demonstrated that groupers increased the importance of stochastic community assembly relative to patches without groupers. With regard to patch size, larger patches contained more fishes, but a doubling of patch size led to a modest (36%) increase in prey abundance. Patch size had no effect on prey diversity; however, fragmented patches had 50% higher species richness and modified species composition relative to unfragmented patches. Our findings suggest two different pathways (i.e., habitat or predator shifts) by which natural and/or anthropogenic processes can drive variation in fish biodiversity and community assembly.
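
The alpha/beta/gamma bookkeeping referred to above can be sketched with additive diversity partitioning; the species lists below are invented for illustration.

```python
# Toy illustration of additive diversity partitioning across patches:
# gamma (regional richness) = mean alpha (per-patch richness) + beta.
patches = {
    "patch_A": {"wrasse", "goby", "damselfish"},
    "patch_B": {"goby", "damselfish", "grouper"},
    "patch_C": {"goby", "parrotfish"},
}

gamma = len(set.union(*patches.values()))                     # regional richness
alpha = sum(len(s) for s in patches.values()) / len(patches)  # mean local richness
beta_additive = gamma - alpha                                 # among-patch component
print(gamma, round(alpha, 2), round(beta_additive, 2))
```

A predator that homogenizes patches would shrink the beta component, which is the kind of shift the experiment quantifies.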

  5. Partitioning Ocean Wave Spectra Obtained from Radar Observations

    NASA Astrophysics Data System (ADS)

    Delaye, Lauriane; Vergely, Jean-Luc; Hauser, Daniele; Guitton, Gilles; Mouche, Alexis; Tison, Celine

    2016-08-01

    2D wave spectra of ocean waves can be partitioned into several wave components to better characterize the scene. We present here two methods of component detection: one based on a watershed algorithm and the other based on a Bayesian approach. We tested both methods on a set of simulated data from SWIM, the Ku-band real aperture radar embarked on the CFOSAT (China-France Oceanography Satellite) mission, whose launch is planned for mid-2018. We present the results and the limits of both approaches and show that the Bayesian method can also be applied to other kinds of wave-spectra observations, such as those obtained with the radar KuROS, an airborne radar wave spectrometer.
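
For intuition, a toy steepest-ascent watershed on a gridded spectrum (a simplification of the watershed component detection described above, not the paper's implementation): each cell is assigned to the local maximum it reaches by hill-climbing, so each detected peak defines one wave-system partition.

```python
def watershed_partition(grid):
    """Label each cell of a 2-D array by the local maximum reached via
    steepest ascent; cells sharing a peak form one spectral partition."""
    rows, cols = len(grid), len(grid[0])

    def climb(r, c):
        while True:
            best = (grid[r][c], r, c)
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        best = max(best, (grid[rr][cc], rr, cc))
            if (best[1], best[2]) == (r, c):
                return (r, c)            # reached a local maximum
            r, c = best[1], best[2]

    return [[climb(r, c) for c in range(cols)] for r in range(rows)]

# Two spectral bumps -> two partitions (labels are the peak coordinates).
spectrum = [
    [1, 2, 1, 0, 1],
    [2, 5, 2, 1, 2],
    [1, 2, 1, 2, 4],
    [0, 1, 1, 2, 2],
]
labels = watershed_partition(spectrum)
print(len({lab for row in labels for lab in row}))
```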

  6. Metal biogeochemistry in surface-water systems; a review of principles and concepts

    USGS Publications Warehouse

    Elder, John F.

    1988-01-01

    Metals are ubiquitous in natural surface-water systems, both as dissolved constituents and as particulate constituents. Although concentrations of many metals are generally very low (hence the common term 'trace metals'), their effects on the water quality and the biota of surface-water systems are likely to be substantial. Biogeochemical partitioning of metals results in a diversity of forms, including hydrated or 'free' ions, colloids, precipitates, adsorbed phases, and various coordination complexes with dissolved organic and inorganic ligands. Much research has been dedicated to answering questions about the complexities of metal behavior and effects in aquatic systems. Voluminous literature on the subject has been produced. This paper synthesizes the findings of aquatic metal studies and describes some general concepts that emerge from such a synthesis. Emphasis is on sources, occurrence, partitioning, transport, and biological interactions of metals in freshwater systems of North America. Biological interactions, in this case, refer to bioavailability, effects of metals on ecological characteristics and functions of aquatic systems, and roles of biota in controlling metal partitioning. This discussion is devoted primarily to the elements aluminum, arsenic, cadmium, chromium, copper, iron, lead, manganese, mercury, nickel, and zinc and secondarily to cobalt, molybdenum, selenium, silver, and vanadium. Sources of these elements are both natural and anthropogenic. Significant anthropogenic sources are atmospheric deposition, discharges of municipal and industrial wastes, mine drainage, and urban and agricultural runoff. Biogeochemical partitioning of metals is controlled by various characteristics of the water and sediments in which the metals are found. 
Among the most important controlling factors are pH, oxidation-reduction potential, hydrologic features, sediment grain size, and the existence and nature of clay minerals, organic matter, and hydrous oxides of manganese and iron. Partitioning is also controlled by biological processes that provide mechanisms for detoxification of metals and for enhanced uptake of nutritive metals. Partitioning is important largely because availability to biota is highly variable among different phases. Hence, accumulation in biological tissues and toxicity of an element are dependent not only on total concentration of the element but also on the factors that control partitioning.

  7. Volatility of organic aerosol: evaporation of ammonium sulfate/succinic acid aqueous solution droplets.

    PubMed

    Yli-Juuti, Taina; Zardini, Alessandro A; Eriksson, Axel C; Hansen, Anne Maria K; Pagels, Joakim H; Swietlicki, Erik; Svenningsson, Birgitta; Glasius, Marianne; Worsnop, Douglas R; Riipinen, Ilona; Bilde, Merete

    2013-01-01

    Condensation and evaporation modify the properties and effects of atmospheric aerosol particles. We studied the evaporation of aqueous succinic acid and succinic acid/ammonium sulfate droplets to obtain insights on the effect of ammonium sulfate on the gas/particle partitioning of atmospheric organic acids. Droplet evaporation in a laminar flow tube was measured in a Tandem Differential Mobility Analyzer setup. A wide range of droplet compositions was investigated, and for some of the experiments the composition was tracked using an Aerosol Mass Spectrometer. The measured evaporation was compared to model predictions where the ammonium sulfate was assumed not to directly affect succinic acid evaporation. The model captured the evaporation rates for droplets with large organic content but overestimated the droplet size change when the molar concentration of succinic acid was similar to or lower than that of ammonium sulfate, suggesting that ammonium sulfate enhances the partitioning of dicarboxylic acids to aqueous particles more than currently expected from simple mixture thermodynamics. If extrapolated to the real atmosphere, these results imply enhanced partitioning of secondary organic compounds to particulate phase in environments dominated by inorganic aerosol.

  8. Volatility of Organic Aerosol: Evaporation of Ammonium Sulfate/Succinic Acid Aqueous Solution Droplets

    PubMed Central

    2013-01-01

    Condensation and evaporation modify the properties and effects of atmospheric aerosol particles. We studied the evaporation of aqueous succinic acid and succinic acid/ammonium sulfate droplets to obtain insights on the effect of ammonium sulfate on the gas/particle partitioning of atmospheric organic acids. Droplet evaporation in a laminar flow tube was measured in a Tandem Differential Mobility Analyzer setup. A wide range of droplet compositions was investigated, and for some of the experiments the composition was tracked using an Aerosol Mass Spectrometer. The measured evaporation was compared to model predictions where the ammonium sulfate was assumed not to directly affect succinic acid evaporation. The model captured the evaporation rates for droplets with large organic content but overestimated the droplet size change when the molar concentration of succinic acid was similar to or lower than that of ammonium sulfate, suggesting that ammonium sulfate enhances the partitioning of dicarboxylic acids to aqueous particles more than currently expected from simple mixture thermodynamics. If extrapolated to the real atmosphere, these results imply enhanced partitioning of secondary organic compounds to particulate phase in environments dominated by inorganic aerosol. PMID:24107221

  9. Natural Scales in Geographical Patterns

    NASA Astrophysics Data System (ADS)

    Menezes, Telmo; Roth, Camille

    2017-04-01

    Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.
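
The paper's parameter-free detector is not specified here; as a generic sketch, one can flag scales where the similarity between successive community partitions drops much faster than is typical, i.e., where the sequence has a discontinuity. The threshold factor and similarity values below are illustrative assumptions.

```python
def discontinuities(values, factor=3.0):
    """Flag indices where the jump between consecutive values exceeds
    `factor` times the median jump -- a simple, parameter-light detector
    of phase transitions in a sequence of partition similarities."""
    jumps = [abs(b - a) for a, b in zip(values, values[1:])]
    med = sorted(jumps)[len(jumps) // 2]
    return [i + 1 for i, j in enumerate(jumps) if med > 0 and j > factor * med]

# Partition similarity as the distance percentile grows: two abrupt
# drops suggest three natural scales.
similarity = [0.98, 0.97, 0.97, 0.70, 0.69, 0.68, 0.40, 0.39, 0.39]
print(discontinuities(similarity))
```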

  10. Evaluating the environmental fate of pharmaceuticals using a level III model based on poly-parameter linear free energy relationships.

    PubMed

    Zukowska, Barbara; Breivik, Knut; Wania, Frank

    2006-04-15

    We recently proposed how to expand the applicability of multimedia models towards polar organic chemicals by expressing environmental phase partitioning with the help of poly-parameter linear free energy relationships (PP-LFERs). Here we elaborate on this approach by applying it to three pharmaceutical substances. A PP-LFER-based version of a Level III fugacity model calculates overall persistence, concentrations and intermedia fluxes of polar and non-polar organic chemicals between air, water, soil and sediments at steady-state. Illustrative modeling results for the pharmaceuticals within a defined coastal region are presented and discussed. The model results are highly sensitive to the degradation rate in water and the equilibrium partitioning between organic carbon and water, suggesting that an accurate description of this particular partitioning equilibrium is essential in order to obtain reliable predictions of environmental fate. The PP-LFER based modeling approach furthermore illustrates that the greatest mobility in aqueous phases may be experienced by pharmaceuticals that combine a small molecular size with strong H-acceptor properties.
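
The general PP-LFER form is the Abraham-type equation log K = c + eE + sS + aA + bB + vV, where E, S, A, B, V are solute descriptors and the lowercase coefficients are fitted system parameters. A sketch with purely illustrative numbers (the coefficients and descriptors below are placeholders, not values from the paper):

```python
# Generic poly-parameter LFER: log K = c + e*E + s*S + a*A + b*B + v*V.
def pp_lfer_logK(system, solute):
    """Evaluate an Abraham-type PP-LFER for one partitioning system."""
    return system["c"] + sum(system[k] * solute[k.upper()] for k in "esabv")

# Hypothetical organic carbon / water system coefficients and solute
# descriptors for a polar drug-like molecule (illustrative only).
organic_carbon_water = {"c": 0.1, "e": 1.1, "s": -0.7, "a": -0.2, "b": -1.9, "v": 2.2}
drug = {"E": 1.5, "S": 1.2, "A": 0.3, "B": 1.0, "V": 1.8}
print(round(pp_lfer_logK(organic_carbon_water, drug), 2))
```

The strong negative b term is what drives the abstract's point: strong H-bond acceptors (large B) partition less into organic carbon and stay mobile in water.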

  11. A comparison of approaches for estimating bottom-sediment mass in large reservoirs

    USGS Publications Warehouse

    Juracek, Kyle E.

    2006-01-01

    Estimates of sediment and sediment-associated constituent loads and yields from drainage basins are necessary for the management of reservoir-basin systems to address important issues such as reservoir sedimentation and eutrophication. One method for the estimation of loads and yields requires a determination of the total mass of sediment deposited in a reservoir. This method involves a sediment volume-to-mass conversion using bulk-density information. A comparison of four computational approaches (partition, mean, midpoint, strategic) for using bulk-density information to estimate total bottom-sediment mass in four large reservoirs indicated that the differences among the approaches were not statistically significant. However, the lack of statistical significance may be a result of the small sample size. Compared to the partition approach, which was presumed to provide the most accurate estimates of bottom-sediment mass, the results achieved using the strategic, mean, and midpoint approaches differed by as much as ±4, ±20, and ±44 percent, respectively. It was concluded that the strategic approach may merit further investigation as a less time-consuming and less costly alternative to the partition approach.
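
The core computation is a volume-to-mass conversion. A sketch with invented numbers contrasting the partition approach (zone-by-zone bulk densities) with the mean approach (one average density applied to the whole deposit):

```python
# Two depositional zones (volume in m^3, bulk density in kg/m^3);
# the values are made up for illustration.
zones = [
    (1.0e6, 400.0),   # fine-grained lacustrine zone
    (5.0e5, 900.0),   # coarser deltaic zone
]

# "Partition" approach: sum volume x density over zones.
mass_partition = sum(v * rho for v, rho in zones)

# "Mean" approach: one average density times the total volume.
total_volume = sum(v for v, _ in zones)
mean_density = sum(rho for _, rho in zones) / len(zones)
mass_mean = total_volume * mean_density

print(mass_partition, mass_mean)
```

Because bulk density and volume are correlated across zones, the two approaches disagree, which is the discrepancy the paper quantifies.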

  12. ESTimating plant phylogeny: lessons from partitioning

    PubMed Central

    de la Torre, Jose EB; Egan, Mary G; Katari, Manpreet S; Brenner, Eric D; Stevenson, Dennis W; Coruzzi, Gloria M; DeSalle, Rob

    2006-01-01

    Background While Expressed Sequence Tags (ESTs) have proven a viable and efficient way to sample genomes, particularly those for which whole-genome sequencing is impractical, phylogenetic analysis using ESTs remains difficult. Sequencing errors and orthology determination are the major problems when using ESTs as a source of characters for systematics. Here we develop methods to incorporate EST sequence information in a simultaneous analysis framework to address controversial phylogenetic questions regarding the relationships among the major groups of seed plants. We use an automated, phylogenetically derived approach to orthology determination called OrthologID to generate a phylogeny based on 43 process partitions, many of which are derived from ESTs, and examine several measures of support to assess the utility of EST data for phylogenies. Results A maximum parsimony (MP) analysis resulted in a single tree with relatively high support at all nodes in the tree despite rampant conflict among trees generated from the separate analysis of individual partitions. In a comparison of broader-scale groupings based on cellular compartment (i.e., chloroplast, mitochondrial, or nuclear) or function, only the nuclear partition tree (based largely on EST data) was found to be topologically identical to the tree based on the simultaneous analysis of all data. Despite topological conflict among the broader-scale groupings examined, only the tree based on morphological data showed statistically significant differences. Conclusion Based on the amount of character support contributed by EST data which make up a majority of the nuclear data set, and the lack of conflict of the nuclear data set with the simultaneous analysis tree, we conclude that the inclusion of EST data does provide a viable and efficient approach to address phylogenetic questions within a parsimony framework on a genomic scale, if problems of orthology determination and potential sequencing errors can be overcome. 
In addition, approaches that examine conflict and support in a simultaneous analysis framework allow for a more precise understanding of the evolutionary history of individual process partitions and may be a novel way to understand functional aspects of different kinds of cellular classes of gene products. PMID:16776834

  13. The Effect of Superior Semicircular Canal Dehiscence on Intracochlear Sound Pressures

    PubMed Central

    Pisano, Dominic V.; Niesten, Marlien E.F.; Merchant, Saumil N.; Nakajima, Hideko Heidi

    2013-01-01

    Semicircular canal dehiscence (SCD) is a pathological opening in the bony wall of the inner ear that can result in conductive hearing loss. The hearing loss is variable across patients, and the precise mechanism and source of variability are not fully understood. Simultaneous measurements of basal intracochlear sound pressures in scala vestibuli (SV) and scala tympani (ST) enable quantification of the differential pressure across the cochlear partition, the stimulus that excites the cochlear partition. We used intracochlear sound pressure measurements in cadaveric preparations to study the effects of SCD size. Sound-induced pressures in SV and ST, as well as stapes velocity and ear-canal pressure, were measured simultaneously for various sizes of SCD followed by SCD patching. Our results showed that at low frequencies (<600 Hz), SCD decreased the pressure in both SV and ST, as well as the differential pressure, and these effects became more pronounced as dehiscence size was increased. Near 100 Hz, SV decreased about 10 dB for a 0.5 mm dehiscence and 20 dB for a 2 mm dehiscence, while ST decreased about 8 dB for a 0.5 mm dehiscence and 18 dB for a 2 mm dehiscence. Differential pressure decreased about 10 dB for a 0.5 mm dehiscence and about 20 dB for a 2 mm dehiscence at 100 Hz. In some ears, for frequencies above 1 kHz, the smallest pinpoint dehiscence had bigger effects on the differential pressure (10 dB decrease) than larger dehiscences (less than 10 dB decrease), suggesting larger hearing losses in this frequency range. These effects due to SCD were reversible by patching the dehiscence. We also showed that under certain circumstances such as SCD, stapes velocity is not related to how the ear can transduce sound across the cochlear partition because it is not directly related to the differential pressure, emphasizing that certain pathologies cannot be fully assessed by measurements such as stapes velocity. PMID:22814034

  14. Gas/particle partitioning and particle size distribution of PCDD/Fs and PCBs in urban ambient air.

    PubMed

    Barbas, B; de la Torre, A; Sanz, P; Navarro, I; Artíñano, B; Martínez, M A

    2018-05-15

    Urban ambient air samples, including the gas phase (PUF), total suspended particulates (TSP), and the PM10, PM2.5 and PM1 airborne particle fractions, were collected to evaluate the gas-particle partitioning and particle size distribution of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) and polychlorinated biphenyls (PCBs). The Clausius-Clapeyron equation, regressions of log Kp vs. log PL and log KOA, and a human respiratory risk assessment were used to evaluate local or long-distance transport sources, gas-particle partitioning sorption mechanisms, and implications for health. Total ambient air levels (gas phase + particulate phase) of total PCBs and total PCDD/Fs were 437 and 0.07 pg/m3 (median), respectively. Levels of PCDD/Fs in the gas phase (0.004-0.14 pg/m3, range) were significantly (p<0.05) lower than those found in the particulate phase (0.02-0.34 pg/m3). The concentrations of PCDD/Fs were higher in winter. In contrast, PCBs were mainly associated with the gas phase and displayed maximum levels in warm seasons, probably due to an increase in evaporation rates, supported by the significant and strong positive dependence on temperature observed for several congeners. No significant differences in PCDD/F and PCB concentrations were detected between the different particle size fractions considered (TSP, PM10, PM2.5 and PM1), reflecting that these chemicals are mainly bound to PM1. The toxic content of the samples was also evaluated. Total toxicity (PUF+TSP) attributable to dl-PCBs (13.4 fg-TEQ05/m3, median) was higher than that reported for PCDD/Fs (6.26 fg-TEQ05/m3). The inhalation risk assessment concluded that inhalation of PCDD/Fs and dl-PCBs poses a low cancer risk in the studied area. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Evapotranspiration partitioning in a semi-arid African savanna using stable isotopes of water vapor

    NASA Astrophysics Data System (ADS)

    Soderberg, K.; Good, S. P.; O'Connor, M.; King, E. G.; Caylor, K. K.

    2012-04-01

    Evapotranspiration (ET) represents a major flux of water out of semi-arid ecosystems. Thus, understanding ET dynamics is central to the study of African savanna health and productivity. At our study site in central Kenya (Mpala Research Centre), we have been using stable isotopes of water vapor to partition ET into its constituent parts of plant transpiration (T) and soil evaporation (E). This effort includes continuous measurement (1 Hz) of δ2H and δ18O in water vapor using a portable water vapor isotope analyzer mounted on a 22.5 m eddy covariance flux tower. The flux tower has been collecting data since early 2010. The isotopic end-member of δET is calculated using a Keeling Plot approach, whereas δT and δE are measured directly via a leaf chamber and tubing buried in the soil, respectively. Here we report on two recent sets of measurements for partitioning ET in the Kenya Long-term Exclosure Experiment (KLEE) and a nearby grassland. We combine leaf level measurements of photosynthesis and water use with canopy-scale isotope measurements for ET partitioning. In the KLEE experiment we compare ET partitioning in a 4 ha plot that has only seen cattle grazing for the past 15 years with an adjacent plot that has undergone grazing by both cattle and wild herbivores (antelope, elephants, giraffe). These results are compared with a detailed study of ET in an artificially watered grassland.
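
Given the three isotopic end-members, the partitioning itself reduces to two end-member mixing. A sketch with illustrative δ18O values (not measurements from the site):

```python
def transpiration_fraction(delta_et, delta_e, delta_t):
    """Two end-member isotope mixing: the transpired fraction of ET is
    f_T = (dET - dE) / (dT - dE)."""
    return (delta_et - delta_e) / (delta_t - delta_e)

# Illustrative d18O values (per mil): dET from a Keeling-plot intercept,
# dE from the buried soil tubing, dT from the leaf chamber.
f_t = transpiration_fraction(delta_et=-8.0, delta_e=-20.0, delta_t=-2.0)
print(round(f_t, 2))
```

With these assumed values, roughly two-thirds of the ET flux would be attributed to transpiration; the uncertainty of the Keeling intercept propagates directly into f_T.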

  16. Implementation of spectral clustering on microarray data of carcinoma using k-means algorithm

    NASA Astrophysics Data System (ADS)

    Frisca; Bustamam, Alhadi; Siswantining, Titin

    2017-03-01

    Clustering is one of the data analysis methods that aims to classify data with similar characteristics into the same group. Spectral clustering is one of the most popular modern clustering algorithms. As an effective clustering technique, the spectral clustering method emerged from the concepts of spectral graph theory. The spectral clustering method needs a partitioning algorithm. There are several partitioning methods, including PAM, SOM, Fuzzy c-means, and k-means. Based on research by Capital and Choudhury in 2013, the k-means algorithm with Euclidean distance provides better accuracy than the PAM algorithm. We therefore use k-means as our partitioning algorithm. The major advantage of spectral clustering is in reducing data dimension, especially in this case the dimension of a large microarray dataset. A microarray is a small glass chip containing thousands or even tens of thousands of kinds of genes in DNA fragments derived from cDNA. Microarray data are widely used to detect cancers such as carcinoma, in which cancer cells express abnormalities in their genes. The purpose of this research is to group data with high similarity into the same cluster and data with low similarity into others. This research uses carcinoma microarray data comprising 7457 genes. The result of partitioning with the k-means algorithm is two clusters.
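
The k-means partitioning step can be sketched as plain Lloyd iteration; here it runs directly on toy two-dimensional points standing in for the spectral (eigenvector) embedding, with deterministic seeding so the sketch is reproducible.

```python
def kmeans(points, centers, iters=20):
    """Plain k-means (Lloyd's algorithm): the partitioning step that
    spectral clustering applies to the eigenvector embedding."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Recompute each center as its cluster mean (keep it if empty).
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Two well-separated groups of toy "embedded" points, seeded with one
# point from each end of the data.
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
clusters = kmeans(data, centers=[data[0], data[-1]])
print(sorted(len(c) for c in clusters))
```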

  17. Release and Gas-Particle Partitioning Behaviors of Short-Chain Chlorinated Paraffins (SCCPs) During the Thermal Treatment of Polyvinyl Chloride Flooring.

    PubMed

    Zhan, Faqiang; Zhang, Haijun; Wang, Jing; Xu, Jiazhi; Yuan, Heping; Gao, Yuan; Su, Fan; Chen, Jiping

    2017-08-15

    Chlorinated paraffin (CP) mixtures are common additives in polyvinyl chloride (PVC) products, serving as plasticizers and flame retardants. During the PVC plastic life cycle, intentional or incidental thermal processes inevitably cause an abrupt release of short-chain CPs (SCCPs). In this study, the thermal processing of PVC plastics was simulated by heating PVC flooring at 100-200 °C in a chamber. The 1 h thermal treatment caused the release of 1.9-10.7% of the embedded SCCPs. A developed emission model indicated that SCCP release was mainly controlled by material-gas partitioning at 100 °C. However, above 150 °C release tended to be controlled by material-phase diffusion, especially for SCCP congeners with shorter carbon-chain lengths. A cascade impactor (NanoMoudi) was used to collect particles of different sizes and gas-phase SCCPs. Elevated temperature shifted the partitioning of SCCPs from the gas phase to the particle phase. SCCPs were not strongly inclined to form aerosol particles by nucleation and were less present in Aitken-mode particles. The Junge-Pankow adsorption model fitted the partitioning of SCCPs between the gas phase and accumulation-mode particles well. Inhalation exposure estimation indicated that PVC processing and recycling workers could face a considerably high risk of exposure to SCCPs.
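
The Junge-Pankow model estimates the particle-adsorbed fraction as φ = cθ/(pL + cθ). A sketch with illustrative inputs (the vapour pressure and aerosol surface area below are assumptions, not the study's measured values):

```python
def junge_pankow_phi(p_L, theta, c=17.2):
    """Junge-Pankow adsorbed fraction phi = c*theta / (p_L + c*theta),
    with p_L the subcooled-liquid vapour pressure (Pa), theta the aerosol
    surface area per volume of air (cm^2/cm^3), and c ~ 17.2 Pa cm."""
    return c * theta / (p_L + c * theta)

# Illustrative numbers: an urban-like aerosol surface area and a
# semivolatile vapour pressure of 1e-4 Pa.
phi = junge_pankow_phi(p_L=1e-4, theta=1.1e-5)
print(round(phi, 2))
```

Because pL drops steeply as temperature falls (Clausius-Clapeyron), φ rises in cold conditions, the same direction of shift the chamber experiments observed with temperature.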

  18. Electoral Susceptibility and Entropically Driven Interactions

    NASA Astrophysics Data System (ADS)

    Caravan, Bassir; Levine, Gregory

    2013-03-01

    In the United States electoral system the election is usually decided by the electoral votes cast by a small number of ``swing states'' where the two candidates historically have roughly equal probabilities of winning. The effective value of a swing state is determined not only by the number of its electoral votes but by the frequency of its appearance in the set of winning partitions of the electoral college. Since the electoral vote values of swing states are not identical, the presence or absence of a state in a winning partition is generally correlated with the frequency of appearance of other states and, hence, their effective values. We quantify the effective value of states by an electoral susceptibility, χj, the variation of the winning probability with the ``cost'' of changing the probability of winning state j. Associating entropy with the logarithm of the number of appearances of a state within the set of winning partitions, the entropy per state (in effect, the chemical potential) is not additive and the states may be said to ``interact.'' We study χj for a simple model with a Zipf's law type distribution of electoral votes. We show that the susceptibility for small states is largest in ``one-sided'' electoral contests and smallest in close contests. This research was supported by Department of Energy DE-FG02-08ER64623, Research Corporation CC6535 (GL) and HHMI Scholar Program (BC).
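
The counting at the heart of this can be sketched by brute force on a small Zipf-like electoral college (toy numbers, not the paper's model): a state's effective value tracks how often it appears in the set of winning partitions.

```python
from itertools import combinations

def winning_count(votes, j, quota):
    """Number of coalitions (subsets of states) that reach the quota and
    include state j -- a brute-force stand-in for counting the winning
    partitions of the electoral college that contain state j."""
    others = [v for i, v in enumerate(votes) if i != j]
    count = 0
    for r in range(len(others) + 1):
        for combo in combinations(others, r):
            if sum(combo) + votes[j] >= quota:
                count += 1
    return count

# Zipf-like toy vote distribution; the quota is a strict majority of 15.
votes = [5, 4, 3, 2, 1]
quota = 8
freqs = [winning_count(votes, j, quota) for j in range(len(votes))]
print(freqs)
```

Larger states appear in more winning coalitions, and the counts are not proportional to vote totals, which is why the per-state "entropy" is non-additive.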

  19. Making sense of metacommunities: dispelling the mythology of a metacommunity typology.

    PubMed

    Brown, Bryan L; Sokol, Eric R; Skelton, James; Tornwall, Brett

    2017-03-01

    Metacommunity ecology has rapidly become a dominant framework through which ecologists understand the natural world. Unfortunately, persistent misunderstandings regarding metacommunity theory and the methods for evaluating hypotheses based on the theory are common in the ecological literature. Since its beginnings, four major paradigms (species sorting, mass effects, neutrality, and patch dynamics) have been associated with metacommunity ecology. The Big 4 have been misconstrued to represent the complete set of metacommunity dynamics. As a result, many investigators attempt to evaluate community assembly processes as strictly belonging to one of the Big 4 types, rather than embracing the full scope of metacommunity theory. The Big 4 were never intended to represent the entire spectrum of metacommunity dynamics and were rather examples of historical paradigms that fit within the new framework. We argue that perpetuation of the Big 4 typology hurts community ecology and we encourage researchers to embrace the full inference space of metacommunity theory. A related, but distinct issue is that the technique of variation partitioning is often used to evaluate the dynamics of metacommunities. This methodology has produced its own set of misunderstandings, some of which are directly a product of the Big 4 typology and others which are simply the product of poor study design or statistical artefacts. However, variation partitioning is a potentially powerful technique when used appropriately and we identify several strategies for successful utilization of variation partitioning.
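
Variation partitioning decomposes the explained variation in community data into a pure environmental fraction [a], a shared (spatially structured environment) fraction [b], and a pure spatial fraction [c]. A sketch on invented data, using unadjusted R² for brevity (real analyses typically use adjusted R² and multivariate responses):

```python
def r2(y, X):
    """R^2 of an OLS fit of y on predictor columns X (intercept added),
    solved via the normal equations with Gaussian elimination."""
    cols = [[1.0] * len(y)] + X
    k = len(cols)
    A = [[sum(cols[i][n] * cols[j][n] for n in range(len(y))) for j in range(k)]
         for i in range(k)]
    b = [sum(cols[i][n] * y[n] for n in range(len(y))) for i in range(k)]
    for i in range(k):                      # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    yhat = [sum(beta[i] * cols[i][n] for i in range(k)) for n in range(len(y))]
    ybar = sum(y) / len(y)
    ss_res = sum((a - p) ** 2 for a, p in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Toy community response with one environmental and one spatial predictor.
y   = [1.0, 2.0, 2.5, 3.5, 4.0, 5.5, 6.0, 7.5]
env = [0.5, 1.0, 1.2, 2.0, 2.2, 3.0, 3.1, 4.0]
spa = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]

r2_e, r2_s, r2_es = r2(y, [env]), r2(y, [spa]), r2(y, [env, spa])
pure_env = r2_es - r2_s          # [a]: environment independent of space
shared   = r2_e + r2_s - r2_es   # [b]: spatially structured environment
pure_spa = r2_es - r2_e          # [c]: space independent of environment
print(round(pure_env, 3), round(shared, 3), round(pure_spa, 3))
```

The large shared fraction on collinear data like this illustrates one of the statistical artefacts the abstract warns about: [b] is not a testable "process" component.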

  20. Real-Time Measurements of Gas/Particle Partitioning of Semivolatile Organic Compounds into Different Probe Particles in a Teflon Chamber

    NASA Astrophysics Data System (ADS)

    Liu, X.; Day, D. A.; Ziemann, P. J.; Krechmer, J. E.; Jimenez, J. L.

    2017-12-01

    The partitioning of semivolatile organic compounds (SVOCs) into and out of particles plays an essential role in secondary organic aerosol (SOA) formation and evolution. Most atmospheric models treat the gas/particle partitioning as an equilibrium between bulk gas and particle phases, despite potential kinetic limitations and differences in thermodynamics as a function of SOA and pre-existing OA composition. This study directly measures the partitioning of oxidized compounds in a Teflon chamber in the presence of single component seeds of different phases and polarities, including oleic acid, squalane, dioctyl sebacate, pentaethylene glycol, dry/wet ammonium sulfate, and dry/wet sucrose. The oxidized compounds are generated by a fast OH oxidation of a series of alkanols under high nitric oxide conditions. The observed SOA mass enhancements are highest with oleic acid, and lowest with wet ammonium sulfate and sucrose. A chemical ionization mass spectrometer (CIMS) was used to measure the decay of gas-phase organic nitrates, which reflects uptake by particles and chamber walls. We observed clear changes in equilibrium timescales with varying seed concentrations and in equilibrium gas-phase concentrations across different seeds. In general, the gas evolution can be reproduced by a kinetic box model that considers partitioning and evaporation with particles and chamber walls, except for the wet sucrose system. The accommodation coefficient and saturation mass concentration of each species in the presence of each seed are derived using the model. The changes in particle size distributions and composition monitored by a scanning mobility particle sizer (SMPS) and a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) are investigated to probe the SOA formation mechanism. 
Based on these results, the applicability of partitioning theory to these systems and the relevant quantitative parameters, including the dependencies on seed particle composition, will be discussed.
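
A kinetic box model of the kind described can be sketched as reversible first-order exchange of the gas with particles and with the Teflon wall (the rate constants below are assumed for illustration, not the study's fitted values):

```python
# Forward-Euler integration of gas <-> particle and gas <-> wall exchange.
def simulate(c_gas, k_p_on=0.02, k_p_off=0.005, k_w_on=0.002, k_w_off=0.0005,
             dt=1.0, steps=20000):
    """Return (gas, particle, wall) concentrations after `steps` * `dt`
    seconds of reversible first-order partitioning."""
    c_part = c_wall = 0.0
    for _ in range(steps):
        to_part = k_p_on * c_gas - k_p_off * c_part
        to_wall = k_w_on * c_gas - k_w_off * c_wall
        c_gas  += dt * (-to_part - to_wall)
        c_part += dt * to_part
        c_wall += dt * to_wall
    return c_gas, c_part, c_wall

g, p, w = simulate(100.0)
print(round(g, 1), round(p, 1), round(w, 1))
```

At long times the gas-phase concentration settles at a value set by the ratios of on/off rates, which is how observed equilibrium gas concentrations and timescales constrain the accommodation coefficient and saturation concentration.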

  1. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    The electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stages classification mainly depend on the analysis of EEG signals in time or frequency domain to obtain a high classification accuracy. In this paper, the statistical features in time domain, the structural graph similarity and the K-means (SGSKM) are combined to identify six sleep stages using single channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments. The size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method.
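
The sub-segment partitioning and feature-extraction stage can be sketched as follows (the statistics and sub-segment count are illustrative assumptions, not the paper's exact feature set):

```python
import math

def segment_features(segment, n_sub=4):
    """Partition an EEG segment into equal sub-segments and compute simple
    time-domain statistics (mean, std, min, max) for each one."""
    size = len(segment) // n_sub          # sub-segment size, chosen empirically
    feats = []
    for i in range(n_sub):
        sub = segment[i * size:(i + 1) * size]
        mu = sum(sub) / len(sub)
        sd = math.sqrt(sum((x - mu) ** 2 for x in sub) / len(sub))
        feats.extend([mu, sd, min(sub), max(sub)])
    return feats

# A synthetic 400-sample "EEG" segment: slow oscillation plus alternation.
eeg = [math.sin(0.3 * t) + 0.1 * ((-1) ** t) for t in range(400)]
print(len(segment_features(eeg)))       # 4 sub-segments x 4 statistics
```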

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.

    Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web-based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.

  3. A new clustering algorithm applicable to multispectral and polarimetric SAR images

    NASA Technical Reports Server (NTRS)

    Wong, Yiu-Fai; Posner, Edward C.

    1993-01-01

    We describe an application of a scale-space clustering algorithm to the classification of a multispectral and polarimetric SAR image of an agricultural site. After the initial polarimetric and radiometric calibration and noise cancellation, we extracted a 12-dimensional feature vector for each pixel from the scattering matrix. The clustering algorithm was able to partition a set of unlabeled feature vectors from 13 selected sites, each site corresponding to a distinct crop, into 13 clusters without any supervision. The cluster parameters were then used to classify the whole image. The classification map is much less noisy and more accurate than those obtained by hierarchical rules. Starting with every point as a cluster, the algorithm works by melting the system to produce a tree of clusters in the scale space. It can cluster data in any multidimensional space and is insensitive to variability in cluster densities, sizes and ellipsoidal shapes. This algorithm, more powerful than existing ones, may be useful for remote sensing for land use.

  4. Identifying finite-time coherent sets from limited quantities of Lagrangian data.

    PubMed

    Williams, Matthew O; Rypina, Irina I; Rowley, Clarence W

    2015-08-01

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that "leak" from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, "data rich" test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or "mesh-free" methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.
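    A minimal sketch of the conceptually related Perron-Frobenius (Ulam-type) approach mentioned above, not the paper's function-pair optimization: synthetic 1-D trajectories, a transition matrix estimated from (start, end) box pairs, and spectral bisection of its symmetrization, whose sign structure plays the role of the membership-determining functions:

```python
import numpy as np

# synthetic Lagrangian data: 2000 trajectories on ten 1-D boxes, with two
# nearly invariant sets {0..4} and {5..9} and 2% "leakage" between them
rng = np.random.default_rng(0)
start = rng.integers(0, 10, 2000)
leak = rng.random(2000) < 0.02
side = (start >= 5) ^ leak                    # which half a particle ends in
end = np.where(side, rng.integers(5, 10, 2000), rng.integers(0, 5, 2000))

# Ulam-type approximation of the transfer operator from (start, end) pairs
C = np.zeros((10, 10))
np.add.at(C, (start, end), 1.0)
P = C / C.sum(axis=1, keepdims=True)

# the sign pattern of the second eigenvector of the symmetrized operator
# splits the boxes into a coherent pair (a spectral stand-in for the
# paper's optimization over pairs of functions)
w, V = np.linalg.eigh((P + P.T) / 2)
membership = np.sign(V[:, -2])
```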

  5. Determination of the n-octanol/water partition coefficients of weakly ionizable basic compounds by reversed-phase high-performance liquid chromatography with neutral model compounds.

    PubMed

    Liang, Chao; Han, Shu-ying; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin

    2014-11-01

    A strategy to utilize neutral model compounds for lipophilicity measurement of ionizable basic compounds by reversed-phase high-performance liquid chromatography is proposed in this paper. The applicability of the novel protocol was justified by theoretical derivation. Meanwhile, linear relationships between the logarithm of the apparent n-octanol/water partition coefficient (logKow″) and the logarithm of the retention factor corresponding to a 100% aqueous mobile phase (logkw) were established for a basic training set, a neutral training set and a mixed training set of the two. As proved in theory, the good linearity and external validation results indicated that the logKow″-logkw relationships obtained from a neutral model training set were always reliable regardless of mobile phase pH. Afterwards, the above relationships were adopted to determine the logKow of harmaline, a weakly dissociable alkaloid. As far as we know, this is the first report of experimental logKow data for harmaline (logKow = 2.28 ± 0.08). Introducing neutral compounds into a basic model training set, or using neutral model compounds alone, is recommended for measuring the lipophilicity of weakly ionizable basic compounds, especially those with high hydrophobicity, given the advantages of more suitable model compound choices and convenient mobile phase pH control. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
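    The calibration step reduces to fitting a straight line to (logkw, logKow) pairs for the training set and reading a test compound's value off the line; all numbers below are invented placeholders, not the paper's data:

```python
import numpy as np

# hypothetical neutral model training set: retention factors extrapolated to
# a 100% aqueous mobile phase (log kw) vs. literature log Kow values
logkw  = np.array([1.2, 1.8, 2.4, 3.0, 3.9])
logKow = 0.95 * logkw + 0.30          # placeholder linear relationship

# calibrate the logKow-logkw relationship on the neutral training set
slope, intercept = np.polyfit(logkw, logKow, 1)

# read off the apparent log Kow of a weakly ionizable base from its
# measured log kw (2.1 here is an invented measurement)
logKow_base = slope * 2.1 + intercept
```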

  6. Identifying finite-time coherent sets from limited quantities of Lagrangian data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Matthew O.; Rypina, Irina I.; Rowley, Clarence W.

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that “leak” from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, “data rich” test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or “mesh-free” methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.

  7. Improving the representation of mixed-phase cloud microphysics in the ICON-LEM

    NASA Astrophysics Data System (ADS)

    Tonttila, Juha; Hoose, Corinna; Milbrandt, Jason; Morrison, Hugh

    2017-04-01

    The representation of ice-phase cloud microphysics in ICON-LEM (the Large-Eddy Model configuration of the ICOsahedral Nonhydrostatic model) is improved by implementing the recently published Predicted Particle Properties (P3) scheme into the model. In typical two-moment microphysical schemes, such as that previously used in ICON-LEM, ice-phase particles must be partitioned into several prescribed categories. It is inherently difficult to distinguish between categories such as graupel and hail based on just the particle size, yet this partitioning may significantly affect the simulation of convective clouds. The P3 scheme avoids the problems associated with predefined ice-phase categories that are inherent in traditional microphysics schemes by introducing the concept of "free" ice-phase categories, whereby the prognostic variables enable the prediction of a wide range of smoothly varying physical properties and hence particle types. To our knowledge, this is the first application of the P3 scheme in a large-eddy model with horizontal grid spacings on the order of 100 m. We will present results from ICON-LEM simulations with the new P3 scheme comprising idealized stratiform and convective cloud cases. We will also present real-case limited-area simulations focusing on the HOPE (HD(CP)2 Observational Prototype Experiment) intensive observation campaign. The results are compared with a matching set of simulations employing the two-moment scheme, and the performance of the model is also evaluated against observations in the context of the HOPE simulations, comprising data from ground-based remote sensing instruments.

  8. Enhanced Trajectory Based Similarity Prediction with Uncertainty Quantification

    DTIC Science & Technology

    2014-10-02

    challenge by obtaining the highest score by using a data-driven prognostics method to predict the RUL of a turbofan engine (Saxena & Goebel, PHM08... process for multi-regime health assessment. To illustrate multi-regime partitioning, the “Turbofan Engine Degradation simulation” data set from... hence the name k-means. Figure 3 shows the results of the k-means clustering algorithm on the “Turbofan Engine Degradation simulation” data set. As
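    Multi-regime partitioning can be sketched as follows; the regime centres, sensor values, and two-regime setup are invented stand-ins (the real C-MAPSS turbofan data has six regimes), and nearest-centre assignment replaces a full k-means run:

```python
import numpy as np

# toy stand-ins for two operating regimes (3 operating-setting columns)
regime_centers = np.array([[0.0, 0.0, 100.0], [25.0, 0.62, 60.0]])
rng = np.random.default_rng(0)
which = rng.integers(0, 2, 200)                       # true regime per cycle
ops = regime_centers[which] + rng.normal(0, 0.01, (200, 3))
sensor = np.where(which == 0, 500.0, 640.0) + rng.normal(0, 1.0, 200)

# multi-regime partitioning: label each cycle by its nearest regime centre
labels = ((ops[:, None] - regime_centers) ** 2).sum(-1).argmin(1)

# per-regime z-normalisation makes degradation trends comparable across
# regimes, a common preprocessing step before trajectory-similarity RUL
# prediction
normed = np.empty_like(sensor)
for r in range(len(regime_centers)):
    m = labels == r
    normed[m] = (sensor[m] - sensor[m].mean()) / sensor[m].std()
```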

  9. Synthesis, Interdiction, and Protection of Layered Networks

    DTIC Science & Technology

    2009-09-01

    4.7 Al Qaeda Network from Sageman Database... 4.8 Interdiction Resources versus Closeness Centrality... where S may be a polyhedron, a set with discrete variables, a set with nonlinearities, or so on); and partitions it into two mutually exclusive subsets... p. vii]. However, this database is based on Dr. Sageman's 2004 publication and may be dated. Therefore, the analysis in this section is to

  10. Interactive high-resolution isosurface ray casting on multicore processors.

    PubMed

    Wang, Qin; JaJa, Joseph

    2008-01-01

    We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor a Quad-Core 1.86-GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024² screen for all the datasets tested, up to the maximum size of the main memory of our platform.
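    The contrast between dynamic task allocation and static screen partitioning can be sketched with a shared work queue: worker threads pull small tile tasks as soon as they are free, so load balances automatically. The tile "shading" below is a trivial placeholder, not an actual ray caster:

```python
import threading, queue

def render_tile(tile):
    # placeholder for casting rays through a small set of contiguous pixels;
    # each "pixel" is shaded with a cheap stand-in function
    x0, x1 = tile
    return {p: p * p for p in range(x0, x1)}

def worker(tasks, results, lock):
    # dynamic allocation: each thread pulls the next tile task when it is
    # free, instead of owning a fixed static partition of the screen
    while True:
        try:
            tile = tasks.get_nowait()
        except queue.Empty:
            return
        shaded = render_tile(tile)
        with lock:
            results.update(shaded)

tasks = queue.Queue()
for x0 in range(0, 64, 8):            # eight tiles of eight pixels each
    tasks.put((x0, x0 + 8))
results, lock = {}, threading.Lock()
threads = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    With uneven per-tile costs (common in isosurface rendering, where some tiles hit no candidate blocks), the queue keeps fast threads busy; a static split would leave them idle.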

  11. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with the Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup of up to two times, depending on the model's size.
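    For background, the scalar Viterbi recurrence that such engines vectorise can be sketched as follows (a toy two-state HMM with invented probabilities; the SIMD striping and cache-oblivious model partitioning are omitted):

```python
import numpy as np
from itertools import product

def viterbi(log_init, log_trans, log_emit, obs):
    """Standard (scalar) Viterbi decoder in log space; HMMER vectorises
    this recurrence with SIMD instructions."""
    score = log_init + log_emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        cand = score[:, None] + log_trans        # cand[i, j]: state i -> j
        back.append(cand.argmax(0))
        score = cand.max(0) + log_emit[:, o]
    path = [int(score.argmax())]                 # trace back the best path
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return list(reversed(path)), float(score.max())

# tiny 2-state, 2-symbol HMM (all probabilities are illustrative)
log_init  = np.log(np.array([0.6, 0.4]))
log_trans = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))
log_emit  = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
obs = [0, 0, 1, 1, 1]
path, best = viterbi(log_init, log_trans, log_emit, obs)

# brute-force check over all 2**5 state paths
def path_score(p):
    s = log_init[p[0]] + log_emit[p[0], obs[0]]
    for a, b, o in zip(p, p[1:], obs[1:]):
        s += log_trans[a, b] + log_emit[b, o]
    return s
brute = max(product(range(2), repeat=len(obs)), key=path_score)
```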

  12. Measuring Constraint-Set Utility for Partitional Clustering Algorithms

    NASA Technical Reports Server (NTRS)

    Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato

    2006-01-01

    Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.
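    The informativeness idea can be approximated as the fraction of constraints violated by an unconstrained clustering's output; unsatisfied constraints carry information the algorithm could not infer on its own. The labels and constraint pairs below are illustrative, not the paper's exact measure:

```python
def informativeness(labels, must_link, cannot_link):
    """Fraction of constraints violated by an unconstrained clustering's
    labels (an approximation of the informativeness measure of Davidson,
    Wagstaff & Basu)."""
    viol = sum(labels[a] != labels[b] for a, b in must_link)
    viol += sum(labels[a] == labels[b] for a, b in cannot_link)
    return viol / (len(must_link) + len(cannot_link))

# labels produced by some unconstrained clustering run (illustrative)
labels = [0, 0, 1, 1, 2, 2]
ml = [(0, 1), (2, 4)]   # must-link pairs
cl = [(0, 2), (4, 5)]   # cannot-link pairs
score = informativeness(labels, ml, cl)
```

    Here one must-link and one cannot-link constraint are violated, so the set scores 0.5; a constraint set scoring near zero would add little beyond what the algorithm already finds.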

  13. Coupled metal partitioning dynamics and toxicodynamics at biointerfaces: a theory beyond the biotic ligand model framework.

    PubMed

    Duval, Jérôme F L

    2016-04-14

    A mechanistic understanding of the processes governing metal toxicity to microorganisms (bacteria, algae) calls for an adequate formulation of metal partitioning at biointerfaces during cell exposure. This includes the account of metal transport dynamics from bulk solution to biomembrane and the kinetics of metal internalisation, both potentially controlling the intracellular and surface metal fractions that originate cell growth inhibition. A theoretical rationale is developed here for such coupled toxicodynamics and interfacial metal partitioning dynamics under non-complexing medium conditions with integration of the defining cell electrostatic properties. The formalism explicitly considers intertwined metal adsorption at the biointerface, intracellular metal excretion, cell growth and metal depletion from bulk solution. The theory is derived under relevant steady-state metal transport conditions on the basis of coupled Nernst-Planck equation and continuous logistic equation modified to include metal-induced cell growth inhibition and cell size changes. Computational examples are discussed to identify limitations of the classical Biotic Ligand Model (BLM) in evaluating metal toxicity over time. In particular, BLM is shown to severely underestimate metal toxicity depending on cell exposure time, metal internalisation kinetics, cell surface electrostatics and initial cell density. Analytical expressions are provided for the interfacial metal concentration profiles in the limit where cell-growth is completely inhibited. A rigorous relationship between time-dependent cell density and metal concentrations at the biosurface and in bulk solution is further provided, which unifies previous equations formulated by Best and Duval under constant cell density and cell size conditions. The theory is sufficiently flexible to adapt to toxicity scenarios with involved cell survival-death processes.
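    A heavily simplified sketch of a logistic growth law with a hypothetical first-order metal-inhibition term; the paper's coupling to Nernst-Planck metal transport and cell-surface electrostatics is not modelled here, and all rate constants are invented:

```python
def cell_density(c_metal, mu=0.8, K=1e9, k_tox=0.5, N0=1e6, T=40.0, dt=0.01):
    """Euler integration of dN/dt = mu*N*(1 - N/K) - k_tox*c_metal*N,
    i.e. logistic growth with a hypothetical linear inhibition term
    proportional to the metal concentration c_metal (arbitrary units)."""
    N = N0
    for _ in range(int(T / dt)):
        dN = mu * N * (1 - N / K) - k_tox * c_metal * N
        N += dt * dN
    return N

N_clean = cell_density(0.0)    # no metal: density approaches carrying capacity
N_dosed = cell_density(0.9)    # dosed: growth is strongly inhibited
```

    The dosed culture settles at a lower steady-state density; in the paper this inhibition is driven by the dynamically computed surface and intracellular metal fractions rather than a fixed bulk concentration.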

  14. Differential diagnosis of jaw pain using informatics technology.

    PubMed

    Nam, Y; Kim, H-G; Kho, H-S

    2018-05-21

    This study aimed to deduce evidence-based clinical clues that differentiate temporomandibular disorders (TMD)-mimicking conditions from genuine TMD by text mining using natural language processing (NLP) and recursive partitioning. We compared the medical records of 29 patients diagnosed with TMD-mimicking conditions and 290 patients diagnosed with genuine TMD. Chief complaints and medical histories were preprocessed via NLP to compare the frequency of word usage. In addition, recursive partitioning was used to deduce the optimal size of mouth opening, which could differentiate TMD-mimicking from genuine TMD groups. The prevalence of TMD-mimicking conditions was more evenly distributed across all age groups and showed a nearly equal gender ratio, which was significantly different from genuine TMD. TMD-mimicking conditions were caused by inflammation, infection, hereditary disease and neoplasm. Patients with TMD-mimicking conditions frequently used "mouth opening limitation" (P < .001), but less commonly used words such as "noise" (P < .001) and "temporomandibular joint" (P < .001) than patients with genuine TMD. A diagnostic classification tree on the basis of recursive partitioning suggested that 12.0 mm of comfortable mouth opening and 26.5 mm of maximum mouth opening were deduced as the most optimal mouth-opening cutoff sizes. When the combined analyses were performed based on both the text mining and clinical examination data, the predictive performance of the model was 96.6% with 69.0% sensitivity and 99.3% specificity in predicting TMD-mimicking conditions. In conclusion, this study showed that AI technology-based methods could be applied in the field of differential diagnosis of orofacial pain disorders. © 2018 John Wiley & Sons Ltd.
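    One recursive-partitioning step amounts to scanning candidate cut-offs on a predictor and keeping the one that minimises impurity; the mouth-opening values below are illustrative, not the study's data (which yielded 12.0 and 26.5 mm):

```python
def best_split(values, labels):
    """One step of recursive partitioning: scan candidate cut-offs on a
    single predictor and keep the one minimising weighted Gini impurity."""
    def gini(ys):
        if not ys:
            return 0.0
        p = sum(ys) / len(ys)
        return 2 * p * (1 - p)
    pairs = sorted(zip(values, labels))
    best_score, best_cut = float("inf"), None
    for i in range(1, len(pairs)):
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for v, y in pairs if v <= cut]
        right = [y for v, y in pairs if v > cut]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best_score:
            best_score, best_cut = score, cut
    return best_cut

# illustrative maximum mouth-opening values (mm); 1 = TMD-mimicking condition
mmo    = [8, 10, 11, 12, 25, 28, 30, 33, 35, 40]
mimics = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
cutoff = best_split(mmo, mimics)
```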

  15. Parallel computing of a digital hologram and particle searching for microdigital-holographic particle-tracking velocimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Satake, Shin-ichi; Kanamori, Hiroyuki; Kunugi, Tomoaki

    2007-02-01

    We have developed a parallel algorithm for microdigital-holographic particle-tracking velocimetry. The algorithm is used in (1) numerical reconstruction of a particle image computed from a digital hologram, and (2) searching for particles. The numerical reconstruction from the digital hologram makes use of the Fresnel diffraction equation and the FFT (fast Fourier transform), whereas the particle search algorithm looks for local maxima of gradation in a reconstruction field represented by a 3D matrix. To achieve high-performance computing for both calculations (reconstruction and particle search), two memory partitions are allocated to the 3D matrix. In this matrix, the reconstruction part consists of horizontally placed 2D memory partitions on the x-y plane for the FFT, whereas the particle search part consists of vertically placed 2D memory partitions set along the z axis. Consequently, scalability is obtained in proportion to the number of processor elements; the benchmarks were carried out for parallel computation on an SGI Altix machine.

  16. Partitioning the Quaternary

    NASA Astrophysics Data System (ADS)

    Gibbard, Philip L.; Lewin, John

    2016-11-01

    We review the historical purposes and procedures for stratigraphical division and naming within the Quaternary, and summarize the current requirements for formal partitioning through the International Commission on Stratigraphy (ICS). A raft of new data and evidence has impacted traditional approaches: quasi-continuous records from ocean sediments and ice cores, new numerical dating techniques, and alternative macro-models, such as those provided through Sequence Stratigraphy and Earth-System Science. The practical usefulness of division remains, but there is now greater appreciation of complex Quaternary detail and the modelling of time continua, the latter also extending into the future. There are problems both of commission (what is done, but could be done better) and of omission (what gets left out) in partitioning the Quaternary. These include the challenge set by the use of unconformities as stage boundaries, how to deal with multiphase records in ocean and terrestrial sediments, what happened at the 'Early-Mid- (Middle) Pleistocene Transition', dealing with trends that cross phase boundaries, and the current controversial focus on how to subdivide the Holocene and formally define an 'Anthropocene'.

  17. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    PubMed

    Huhn, Carolin; Pyell, Ute

    2008-07-11

    We investigate whether the relationships derived within a previously developed scheme for optimizing separations in micellar electrokinetic chromatography can be used to model the effective electrophoretic mobilities of analytes that differ strongly in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors, and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated by comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict the effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients, provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.

  18. Estimating Grass-Soil Bioconcentration of Munitions Compounds from Molecular Structure.

    PubMed

    Torralba Sanchez, Tifany L; Liang, Yuzhen; Di Toro, Dominic M

    2017-10-03

    A partitioning-based model is presented to estimate the bioconcentration of five munitions compounds and two munition-like compounds in grasses. The model uses polyparameter linear free energy relationships (pp-LFERs) to estimate the partition coefficients between soil organic carbon and interstitial water and between interstitial water and the plant cuticle, a lipid-like plant component. Inputs for the pp-LFERs are a set of numerical descriptors computed from molecular structure only that characterize the molecular properties that determine the interaction with soil organic carbon, interstitial water, and plant cuticle. The model is validated by predicting concentrations measured in the whole plant during independent uptake experiments with a root-mean-square error (log predicted plant concentration-log observed plant concentration) of 0.429. This highlights the dominant role of partitioning between the exposure medium and the plant cuticle in the bioconcentration of these compounds. The pp-LFERs can be used to assess the environmental risk of munitions compounds and munition-like compounds using only their molecular structure as input.
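    A pp-LFER evaluates as a linear combination of solute descriptors computed from molecular structure; the coefficients and descriptor values below are invented placeholders, not the paper's fitted soil-organic-carbon or cuticle partition models:

```python
# pp-LFER of the Abraham form: log K = c + eE + sS + aA + bB + vV
# system coefficients below are invented placeholders for illustration
coef = {"c": 0.1, "e": 0.5, "s": -0.3, "a": -0.2, "b": -3.1, "v": 2.0}

# solute descriptors (E, S, A, B, V) for a hypothetical munitions-like
# compound; in the paper these are computed from molecular structure only
E, S, A, B, V = 1.2, 1.6, 0.0, 0.6, 1.4

log_K = (coef["c"] + coef["e"] * E + coef["s"] * S
         + coef["a"] * A + coef["b"] * B + coef["v"] * V)
```

    Chaining two such partition coefficients (soil organic carbon to water, water to cuticle) gives the overall soil-to-plant bioconcentration factor described in the abstract.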

  19. Quantifying uncertainty due to fission-fusion dynamics as a component of social complexity.

    PubMed

    Ramos-Fernandez, Gabriel; King, Andrew J; Beehner, Jacinta C; Bergman, Thore J; Crofoot, Margaret C; Di Fiore, Anthony; Lehmann, Julia; Schaffner, Colleen M; Snyder-Mackler, Noah; Zuberbühler, Klaus; Aureli, Filippo; Boyer, Denis

    2018-05-30

    Groups of animals (including humans) may show flexible grouping patterns, in which temporary aggregations or subgroups come together and split, changing composition over short temporal scales (i.e. fission and fusion). A high degree of fission-fusion dynamics may constrain the regulation of social relationships, introducing uncertainty in interactions between group members. Here we use Shannon's entropy to quantify the predictability of subgroup composition for three species known to differ in the way their subgroups come together and split over time: spider monkeys (Ateles geoffroyi), chimpanzees (Pan troglodytes) and geladas (Theropithecus gelada). We formulate a random expectation of entropy that considers subgroup size variation and sample size, against which the observed entropy in subgroup composition can be compared. Using the theory of set partitioning, we also develop a method to estimate the number of subgroups that the group is likely to be divided into, based on the composition and size of single focal subgroups. Our results indicate that Shannon's entropy and the estimated number of subgroups present at a given time provide quantitative metrics of uncertainty in the social environment (within which social relationships must be regulated) for groups with different degrees of fission-fusion dynamics. These metrics also represent an indirect quantification of the cognitive challenges posed by socially dynamic environments. Overall, our novel methodological approach provides new insight for understanding the evolution of social complexity and the mechanisms to cope with the uncertainty that results from fission-fusion dynamics. © 2017 The Author(s).
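    The entropy of subgroup composition can be sketched directly from Shannon's formula; the subgroup lists are toy data, and the paper's sample-size-corrected random expectation is replaced by a simple stable-versus-fluid comparison:

```python
import math
from collections import Counter

def composition_entropy(subgroups):
    """Shannon entropy (bits) of the distribution of observed subgroup
    compositions; higher entropy means less predictable fission-fusion."""
    counts = Counter(frozenset(s) for s in subgroups)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# a group that repeatedly splits into the same two subgroups...
stable = [{"a", "b"}, {"c", "d"}] * 10
# ...versus one whose subgroup composition keeps changing
fluid = [{"a", "b"}, {"a", "c"}, {"a", "d"}, {"b", "c"},
         {"b", "d"}, {"c", "d"}] * 3 + [{"a"}, {"b"}]

H_stable = composition_entropy(stable)
H_fluid = composition_entropy(fluid)
```

    The stable group's entropy is exactly 1 bit (two equiprobable compositions), while the fluid group's is markedly higher, mirroring the species contrast the paper quantifies.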

  20. Effects of geomorphology, habitat, and spatial location on fish assemblages in a watershed in Ohio, USA.

    PubMed

    D'Ambrosio, Jessica L; Williams, Lance R; Witter, Jonathan D; Ward, Andy

    2009-01-01

    In this paper, we evaluate relationships between in-stream habitat, water chemistry, spatial distribution within a predominantly agricultural Midwestern watershed and geomorphic features and fish assemblage attributes and abundances. Our specific objectives were to: (1) identify and quantify key environmental variables at reach and system wide (watershed) scales; and (2) evaluate the relative influence of those environmental factors in structuring and explaining fish assemblage attributes at reach scales to help prioritize stream monitoring efforts and better incorporate all factors that influence aquatic biology in watershed management programs. The original combined data set consisted of 31 variables measured at 32 sites, which was reduced to 9 variables through correlation and linear regression analysis: stream order, percent wooded riparian zone, drainage area, in-stream cover quality, substrate quality, gradient, cross-sectional area, width of the flood prone area, and average substrate size. Canonical correspondence analysis (CCA) and variance partitioning were used to relate environmental variables to fish species abundance and assemblage attributes. Fish assemblages and abundances were explained best by stream size, gradient, substrate size and quality, and percent wooded riparian zone. Further data are needed to investigate why water chemistry variables had insignificant relationships with IBI scores. Results suggest that more quantifiable variables and consideration of spatial location of a stream reach within a watershed system should be standard data incorporated into stream monitoring programs to identify impairments that, while biologically limiting, are not fully captured or elucidated using current bioassessment methods.
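    Variance partitioning between two predictor sets can be sketched from three R² values (each set alone, both together); the "habitat" and "geomorphology" matrices below are synthetic stand-ins for the study's reduced variable sets:

```python
import numpy as np

def r2(X, y):
    """R-squared of an ordinary least-squares fit (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 300
habitat = rng.normal(size=(n, 2))     # e.g. substrate size, in-stream cover
geomorph = rng.normal(size=(n, 2))    # e.g. gradient, cross-sectional area
y = (habitat @ np.array([1.0, 0.5]) + geomorph @ np.array([0.8, 0.0])
     + rng.normal(0, 1.0, n))         # synthetic fish-assemblage metric

r_h = r2(habitat, y)
r_g = r2(geomorph, y)
r_hg = r2(np.hstack([habitat, geomorph]), y)
unique_h = r_hg - r_g          # variance explained only by habitat
unique_g = r_hg - r_h          # variance explained only by geomorphology
shared   = r_h + r_g - r_hg    # jointly explained (can be negative)
```

    The study's variance partitioning follows a CCA rather than an OLS fit, but the unique/shared decomposition has the same arithmetic.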

  1. QSAR modeling of human serum protein binding with several modeling techniques utilizing structure-information representation.

    PubMed

    Votano, Joseph R; Parham, Marc; Hall, L Mark; Hall, Lowell H; Kier, Lemont B; Oloff, Scott; Tropsha, Alexander

    2006-11-30

    Four modeling techniques, using topological descriptors to represent molecular structure, were employed to produce models of human serum protein binding (% bound) on a data set of 1008 experimental values, carefully screened from publicly available sources. To our knowledge, this is the largest data set on human serum protein binding reported for QSAR modeling. The data were partitioned into a training set of 808 compounds and an external validation test set of 200 compounds. Partitioning was accomplished by clustering the compounds in a structure descriptor space so that random sampling of 20% of the whole data set produced an external test set that is a good representative of the training set with respect to both structure and protein binding values. The four modeling techniques include multiple linear regression (MLR), artificial neural networks (ANN), k-nearest neighbors (kNN), and support vector machines (SVM). With the exception of the MLR model, the ANN, kNN, and SVM QSARs were ensemble models. Training set correlation coefficients and mean absolute errors ranged from r2=0.90 and MAE=7.6 for ANN to r2=0.61 and MAE=16.2 for MLR. Prediction results for the validation set yielded correlation coefficients and mean absolute errors ranging from r2=0.70 and MAE=14.1 for ANN to a low of r2=0.59 and MAE=18.3 for the SVM model. Structure descriptors that contribute significantly to the models are discussed and compared with those found in other published models. For the ANN model, structure descriptor trends with respect to their effects on predicted protein binding can assist the chemist in structure modification during the drug design process.
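    The descriptor-space partitioning can be approximated by holding out a fixed fraction of compounds from every region of that space; quantile bins on one descriptor stand in for the paper's clustering, and the descriptor matrix is synthetic:

```python
import numpy as np

def stratified_split(descriptors, frac=0.2, n_bins=5, seed=0):
    """Hold out ~frac of compounds from every region of descriptor space,
    so the external test set spans the same structural diversity as the
    training set (the paper clusters in descriptor space; quantile bins
    on the first descriptor are a simple stand-in)."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(descriptors[:, 0], np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, descriptors[:, 0]) - 1,
                   0, n_bins - 1)
    test_idx = []
    for b in range(n_bins):
        members = np.flatnonzero(bins == b)
        k = max(1, int(round(frac * len(members))))
        test_idx.extend(rng.choice(members, k, replace=False))
    is_test = np.zeros(len(descriptors), bool)
    is_test[test_idx] = True
    return is_test

X = np.random.default_rng(1).normal(size=(1008, 4))  # stand-in descriptors
is_test = stratified_split(X)                        # ~20% external test set
```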

  2. Root and stem partitioning of Pinus taeda

    Treesearch

    Timothy J. Albaugh; H. Lee Allen; Lance W. Kress

    2006-01-01

    We measured root and stem mass at three sites (Piedmont (P), Coastal Plain (C), and Sandhills (S)) in the southeastern United States. Stand density, soil texture and drainage, genetic makeup and environmental conditions varied with site while differences in tree size at each site were induced with fertilizer additions. Across sites, root mass was about one half of stem...

  3. A Systematic Software, Firmware, and Hardware Codesign Methodology for Digital Signal Processing

    DTIC Science & Technology

    2014-03-01

    possible mappings... Table 25. Possible optimal leaf-nodes... size, weight and power; UAV unmanned aerial vehicle; UHF ultra-high frequency; UML unified modeling language; Verilog verify logic; VHDL VHSIC... optimal leaf-nodes to some design patterns for embedded system design. Software and hardware partitioning is a very difficult challenge in the field of

  4. Conformal anomaly of some 2-d Z(n) models

    NASA Astrophysics Data System (ADS)

    William, Peter

    1991-01-01

    We describe a numerical calculation of the conformal anomaly in the case of some two-dimensional statistical models undergoing a second-order phase transition, utilizing a recently developed method to compute the partition function exactly. This computation is carried out on a massively parallel CM2 machine, using the finite-size scaling behaviour of the free energy.

  5. Determination of the Partition Coefficients of Organophosphorus Compounds Using High-Performance Liquid Chromatography.

    DTIC Science & Technology

    1987-12-01

    have claimed an advantage to determining values of k' in 100% aqueous mobile phases by extrapolation of linear plots of log k' vs. percent organic... µm particle size chemically bonded octadecylsilane (ODS) packing (Alltech Econosphere). As required, this column was saturated with 1-octanol by in

  6. Prediction of the partitioning behaviour of proteins in aqueous two-phase systems using only their amino acid composition.

    PubMed

    Salgado, J Cristian; Andrews, Barbara A; Ortuzar, Maria Fernanda; Asenjo, Juan A

    2008-01-18

    The prediction of the partition behaviour of proteins in aqueous two-phase systems (ATPS) using mathematical models based on their amino acid composition was investigated. The predictive models are based on the average surface hydrophobicity (ASH). The ASH was estimated by means of models that use the three-dimensional structure of proteins and by models that use only the amino acid composition of proteins. These models were evaluated for a set of 11 proteins with known experimental partition coefficient in four-phase systems: polyethylene glycol (PEG) 4000/phosphate, sulfate, citrate and dextran and considering three levels of NaCl concentration (0.0% w/w, 0.6% w/w and 8.8% w/w). The results indicate that such prediction is feasible even though the quality of the prediction depends strongly on the ATPS and its operational conditions such as the NaCl concentration. The ATPS 0 model which use the three-dimensional structure obtains similar results to those given by previous models based on variables measured in the laboratory. In addition it maintains the main characteristics of the hydrophobic resolution and intrinsic hydrophobicity reported before. Three mathematical models, ATPS I-III, based only on the amino acid composition were evaluated. The best results were obtained by the ATPS I model which assumes that all of the amino acids are completely exposed. The performance of the ATPS I model follows the behaviour reported previously, i.e. its correlation coefficients improve as the NaCl concentration increases in the system and, therefore, the effect of the protein hydrophobicity prevails over other effects such as charge or size. Its best predictive performance was obtained for the PEG/dextran system at high NaCl concentration. An increase in the predictive capacity of at least 54.4% with respect to the models which use the three-dimensional structure of the protein was obtained for that system. 
In addition, the ATPS I model exhibits high correlation coefficients in that system, higher than 0.88 on average. The ATPS I model exhibited correlation coefficients higher than 0.67 for the rest of the ATPSs at high NaCl concentration. Finally, we tested our best model, the ATPS I model, on the prediction of the partition coefficient of the protein invertase. We found that the predictive capacities of the ATPS I model are better in PEG/dextran systems, where the relative error of the prediction with respect to the experimental value is 15.6%.

  7. Partitioning of pyroclasts between ballistic transport and a convective plume: Kīlauea volcano, 19 March 2008

    NASA Astrophysics Data System (ADS)

    Houghton, B. F.; Swanson, D. A.; Biass, S.; Fagents, S. A.; Orr, T. R.

    2017-05-01

We describe the discrete ballistic and wind-advected products of a small, but exceptionally well-characterized, explosive eruption of wall-rock-derived pyroclasts from Kīlauea volcano on 19 March 2008 and, for the first time, integrate the size distribution of the two subpopulations to reconstruct the true size distribution of a population of pyroclasts as it exited from the vent. Based on thinning and fining relationships, the wind-advected fraction had a mass of 6.1 × 10⁵ kg and a thickness half distance of 110 m, placing it at the bottom end of the magnitude and intensity spectra of pyroclastic falls. The ballistic population was mapped, in the field and by using structure-from-motion techniques, to a diameter of > 10-20 cm over an area of 0.1 km², with an estimated mass of 1 × 10⁵ kg. Initial ejection velocities of 50-80 m/s were estimated from inversion of isopleths. The total grain size distribution was estimated by using a mass partitioning of 98% wind-advected material and 2% ballistics, resulting in median and sorting values of -1.7ϕ and 3.1ϕ. It is markedly broader than those calculated for the products of magmatic explosive eruptions, because the grain size of the 19 March 2008 clast population is unrelated to a volcanic fragmentation event and instead was "inherited" from a population of talus clasts that temporarily blocked the vent prior to the eruption. Despite a conspicuous near-field presence, the ballistic subpopulation has only a minor influence on the grain size distribution because of its rapid thinning and fining away from source.
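The mass-weighted merging of two grain-size subpopulations can be sketched as follows. This is an illustrative calculation only: the bin fractions below are invented (not the 19 March 2008 data), and sorting is computed as the graphical (φ84 − φ16)/2 rather than the paper's exact statistic.

```python
import math
from bisect import bisect_left

def combined_quantile(phi_bins, wt_advected, wt_ballistic,
                      frac_advected=0.98, q=0.5):
    """Quantile (in phi units) of a mass-weighted mixture of two
    grain-size subpopulations, each given as mass fractions per phi bin."""
    mix = [frac_advected * a + (1 - frac_advected) * b
           for a, b in zip(wt_advected, wt_ballistic)]
    total = sum(mix)
    cdf, run = [], 0.0
    for m in mix:
        run += m / total
        cdf.append(run)
    # linear interpolation of phi on the cumulative mass curve
    i = bisect_left(cdf, q)
    if i == 0:
        return phi_bins[0]
    return phi_bins[i - 1] + (phi_bins[i] - phi_bins[i - 1]) * \
        (q - cdf[i - 1]) / (cdf[i] - cdf[i - 1])

# Hypothetical, normalized bin fractions (NOT the 19 March 2008 data):
phi = list(range(-6, 7))                                      # -6 ... 6 phi
adv = [math.exp(-0.5 * ((p - 0.5) / 2.0) ** 2) for p in phi]  # fine tail
bal = [math.exp(-0.5 * ((p + 4.0) / 1.0) ** 2) for p in phi]  # coarse tail
adv = [a / sum(adv) for a in adv]
bal = [b / sum(bal) for b in bal]

median = combined_quantile(phi, adv, bal, q=0.50)
sorting = (combined_quantile(phi, adv, bal, q=0.84)
           - combined_quantile(phi, adv, bal, q=0.16)) / 2
```

With a 98/2 mass partitioning the coarse ballistic tail barely shifts the median but broadens the low-percentile end, mirroring the abstract's observation that ballistics have only minor influence on the total distribution.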

  8. Design of a Dual Waveguide Normal Incidence Tube (DWNIT) Utilizing Energy and Modal Methods

    NASA Technical Reports Server (NTRS)

    Betts, Juan F.; Jones, Michael G. (Technical Monitor)

    2002-01-01

This report investigates the partition design of the proposed Dual Waveguide Normal Incidence Tube (DWNIT). Some advantages provided by the DWNIT are (1) Assessment of coupling relationships between resonators in close proximity, (2) Evaluation of "smart liners", (3) Experimental validation for parallel element models, and (4) Investigation of effects of simulated angles of incidence of acoustic waves. Energy models of the two chambers were developed to determine the Sound Pressure Level (SPL) drop across the two chambers, through the use of an intensity transmission function for the chamber's partition. The models allowed the chamber's lengthwise end samples to vary. The initial partition design (2" high, 16" long, 0.25" thick) was predicted to provide at least 160 dB SPL drop across the partition with a compressive model, and at least 240 dB SPL drop with a bending model using a damping loss factor of 0.01. The end chamber sample transmission coefficients were set to 0.1. Since these results predicted more SPL drop than required, a plate thickness optimization algorithm was developed. The results of the algorithm routine indicated that a plate with the same height and length, but with a thickness of 0.1" and a structural damping loss factor of 0.05, would provide an adequate SPL isolation between the chambers.
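The relation between an intensity transmission coefficient and the SPL drop across a partition is ΔSPL = −10 log₁₀ τ. A minimal sketch (the coefficient values below are chosen only to illustrate the arithmetic, not taken from the report):

```python
import math

def spl_drop_db(tau):
    """SPL drop across a partition from its intensity transmission
    coefficient tau (0 < tau <= 1): dSPL = -10 * log10(tau)."""
    if not 0 < tau <= 1:
        raise ValueError("tau must be in (0, 1]")
    return -10.0 * math.log10(tau)

print(spl_drop_db(1e-16))  # a 160 dB drop corresponds to tau = 1e-16
print(spl_drop_db(0.1))    # an end-sample coefficient of 0.1 passes 10 dB
```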

  9. Effects of low urea concentrations on protein-water interactions.

    PubMed

    Ferreira, Luisa A; Povarova, Olga I; Stepanenko, Olga V; Sulatskaya, Anna I; Madeira, Pedro P; Kuznetsova, Irina M; Turoverov, Konstantin K; Uversky, Vladimir N; Zaslavsky, Boris Y

    2017-01-01

Solvent properties of aqueous media (dipolarity/polarizability, hydrogen bond donor acidity, and hydrogen bond acceptor basicity) were measured in the coexisting phases of Dextran-PEG aqueous two-phase systems (ATPSs) containing 0.5 and 2.0 M urea. The differences between the electrostatic and hydrophobic properties of the phases in the ATPSs were quantified by analysis of partitioning of the homologous series of sodium salts of dinitrophenylated amino acids with aliphatic alkyl side chains. Furthermore, partitioning of eleven different proteins in the ATPSs was studied. The analysis of protein partition behavior in a set of ATPSs with protective osmolytes (sorbitol, sucrose, trehalose, and TMAO) at the concentration of 0.5 M, in osmolyte-free ATPS, and in ATPSs with 0.5 or 2.0 M urea in terms of the solvent properties of the phases was performed. The results show unambiguously that even at the urea concentration of 0.5 M, this denaturant affects partitioning of all proteins (except concanavalin A) through direct urea-protein interactions and via its effect on the solvent properties of the media. The direct urea-protein interactions seem to prevail over the urea effects on the solvent properties of water at the concentration of 0.5 M urea and appear to be completely dominant at 2.0 M urea concentration.

  10. C-Depth Method to Determine Diffusion Coefficient and Partition Coefficient of PCB in Building Materials.

    PubMed

    Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping

    2015-10-20

Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, the primary material studied here, relative standard deviations of results among five data sets are 5-22% for K and 42-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to yield a unique estimate and does not require assumed correlations for D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty for D, and smaller uncertainty for K. However, considering the uncertainties associated with sampling and chemical analysis, and the impact of environmental factors, the results are acceptable for engineering applications. This was supported by good agreement between model predictions and measurements. Sensitivity analysis indicated that effective diffusion distance, contact time of materials with primary sources, and depth of measured concentrations are critical for determining D, and PCB concentration in primary sources is critical for K.
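As a rough sketch of the idea, one can model a concentration-depth profile with the standard semi-infinite erfc diffusion solution and recover D and K by fitting it to measured depth concentrations. The functional form and every numeric value below are assumptions for illustration, not the paper's exact C-depth procedure or data:

```python
import math

def depth_profile(x_m, t_s, D, K, c_source):
    """Concentration at depth x (m) after contact time t (s) for a
    semi-infinite material in equilibrium with a constant primary
    source: C(x, t) = K * C0 * erfc(x / (2*sqrt(D*t))).
    This erfc form is an assumed textbook model, not the paper's."""
    return K * c_source * math.erfc(x_m / (2.0 * math.sqrt(D * t_s)))

# Hypothetical "measured" profile; then recover D and K by grid search.
t = 40 * 365 * 24 * 3600                     # ~40 years of contact, in s
D_true, K_true, c0 = 1e-14, 500.0, 1.0       # illustrative values only
depths = [0.001 * i for i in range(1, 6)]    # sampled at 1..5 mm
meas = [depth_profile(x, t, D_true, K_true, c0) for x in depths]

best = min(
    (sum((depth_profile(x, t, D, K, c0) - m) ** 2
         for x, m in zip(depths, meas)), D, K)
    for D in (1e-15, 1e-14, 1e-13)
    for K in (100.0, 500.0, 1000.0)
)
_, D_fit, K_fit = best                       # minimises the sum of squares
```

A real fit would use measured congener concentrations and a finer optimizer; the grid search just shows that the profile shape pins down D while its amplitude pins down K.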

  11. Linear modeling of the soil-water partition coefficient normalized to organic carbon content by reversed-phase thin-layer chromatography.

    PubMed

    Andrić, Filip; Šegan, Sandra; Dramićanin, Aleksandra; Majstorović, Helena; Milojković-Opsenica, Dušanka

    2016-08-05

The soil-water partition coefficient normalized to the organic carbon content (KOC) is one of the crucial properties influencing the fate of organic compounds in the environment. Chromatographic methods are a well-established alternative to the direct sorption techniques used for KOC determination. The present work proposes reversed-phase thin-layer chromatography (RP-TLC) as a simpler method that is as accurate as the officially recommended HPLC technique. Several TLC systems were studied, including octadecyl-(RP18) and cyano-(CN) modified silica layers in combination with methanol-water and acetonitrile-water mixtures as mobile phases. In total, 50 compounds of different molecular shape and size and varying ability to establish specific interactions were selected (phenols, benzodiazepines, triazine herbicides, and polyaromatic hydrocarbons). A calibration set of 29 compounds with known logKOC values determined by sorption experiments was used to build simple univariate calibrations, Principal Component Regression (PCR), and Partial Least Squares (PLS) models between logKOC and TLC retention parameters. The models exhibit good statistical performance, indicating that CN-layers contribute better to logKOC modeling than RP18-silica. The most promising TLC methods, the officially recommended HPLC method, and four in silico estimation approaches were compared by the non-parametric Sum of Ranking Differences (SRD) approach. The best estimations of logKOC values were achieved by simple univariate calibration of TLC retention data involving CN-silica layers and a moderate content of methanol (40-50% v/v). They ranked well ahead of the officially recommended HPLC method, which placed in the middle. The worst estimates were obtained from in silico computations based on the octanol-water partition coefficient.
A Linear Solvation Energy Relationship study revealed that the increased polarity of CN-layers over RP18, in combination with methanol-water mixtures, is the key to better modeling of logKOC: it significantly diminishes the dipolar and proton-accepting influence of the mobile phase while enhancing the excess molar refractivity of the chromatographic systems. Copyright © 2016 Elsevier B.V. All rights reserved.
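A univariate calibration of the kind described reduces to ordinary least squares between a TLC retention parameter and logKOC. A sketch with invented numbers (the retention values and logKOC data below are hypothetical, not the paper's calibration set):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b (univariate calibration)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical calibration: TLC retention parameter vs sorption logKOC
rm     = [-0.40, -0.10, 0.25, 0.60, 1.05]   # invented retention data
logkoc = [ 1.20,  1.80, 2.55, 3.20, 4.10]   # invented sorption logKOC
a, b = fit_line(rm, logkoc)

def predict(r):
    """Estimate logKOC of a new solute from its retention value."""
    return a * r + b
```

In practice one would validate such a calibration on held-out compounds before using it, as the authors do when ranking methods with SRD.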

  12. A set partitioning reformulation for the multiple-choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Voß, Stefan; Lalla-Ruiz, Eduardo

    2016-05-01

The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community, as it translates readily to several real-world problems in areas such as resource allocation, reliability engineering, cognitive radio networks, and cloud computing. In this regard, an exact model that can provide high-quality feasible solutions on its own, or be partially embedded in algorithmic schemes, is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed, allowing new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best known results for some of the most common benchmark instances.
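For concreteness, the base MMKP can be stated in a few lines of code: choose exactly one item per group, maximize total profit, and respect every capacity dimension. The exhaustive solver below only illustrates the model on a toy instance; it is neither the paper's set-partitioning reformulation nor a practical algorithm:

```python
from itertools import product

def mmkp_brute_force(groups, capacities):
    """Exhaustive MMKP solver: pick exactly one (profit, weights) item
    per group, maximizing profit subject to multidimensional capacities.
    Exponential in the number of groups; for tiny instances only."""
    best_profit, best_pick = None, None
    for pick in product(*[range(len(g)) for g in groups]):
        weights = [0] * len(capacities)
        profit = 0
        for g, i in zip(groups, pick):
            p, w = g[i]
            profit += p
            weights = [a + b for a, b in zip(weights, w)]
        if all(w <= c for w, c in zip(weights, capacities)) and \
           (best_profit is None or profit > best_profit):
            best_profit, best_pick = profit, pick
    return best_profit, best_pick

# Two groups, two resource dimensions, capacity (4, 4):
groups = [[(6, (3, 2)), (4, (1, 1))],
          [(5, (2, 3)), (3, (1, 1))]]
print(mmkp_brute_force(groups, capacities=(4, 4)))  # -> (9, (0, 1))
```

In a set-partitioning view, each feasible choice pattern becomes a column and the model selects columns that cover each group exactly once, which is the kind of reformulation the article explores.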

  13. Cumulants, free cumulants and half-shuffles

    PubMed Central

    Ebrahimi-Fard, Kurusch; Patras, Frédéric

    2015-01-01

Free cumulants were introduced as the proper analogue of classical cumulants in the theory of free probability. There is a mix of similarities and differences when one considers the two families of cumulants. Whereas the combinatorics of classical cumulants is well expressed in terms of set partitions, that of free cumulants is described and often introduced in terms of non-crossing set partitions. The formal series approach to classical and free cumulants also largely differs. The purpose of this study is to put forward a different approach to these phenomena. Namely, we show that cumulants, whether classical or free, can be understood in terms of the algebra and combinatorics underlying commutative as well as non-commutative (half-)shuffles and (half-)unshuffles. As a corollary, cumulants and free cumulants can be characterized through linear fixed point equations. We study the exponential solutions of these linear fixed point equations, which display well the commutative, respectively non-commutative, character of classical and free cumulants. PMID:27547078
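The classical moment-cumulant relation mentioned here, m_n = Σ over set partitions π of {1..n} of Π over blocks B of κ_{|B|}, can be checked directly by enumerating set partitions (restricting the sum to non-crossing partitions would instead produce free cumulants):

```python
def set_partitions(items):
    """Yield all partitions of a list into blocks (classical set partitions)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        # put `first` in its own block ...
        yield [[first]] + part
        # ... or insert it into each existing block in turn
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def moment_from_cumulants(n, kappa):
    """m_n = sum over set partitions pi of {1..n} of
    prod over blocks B in pi of kappa[len(B)]."""
    total = 0
    for part in set_partitions(list(range(n))):
        prod = 1
        for block in part:
            prod *= kappa[len(block)]
        total += prod
    return total

# Standard normal: kappa_2 = 1, all other cumulants 0,
# so only pair partitions survive and m_{2n} = (2n - 1)!!:
kappa = {k: 0 for k in range(1, 7)}
kappa[2] = 1
print([moment_from_cumulants(n, kappa) for n in (2, 4, 6)])  # -> [1, 3, 15]
```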

  14. Phase partitioning, crystal growth, electrodeposition and cosmic ray experiments in microgravity

    NASA Technical Reports Server (NTRS)

    Wessling, Francis C.

    1987-01-01

Five experiments are contained in one Get Away Special Canister (5 cu ft). The first utilizes microgravity to separate biological cells and to study the mechanism of phase partitioning in 12 separate cuvettes. Two experiments are designed to grow organic crystals by physical vapor transport. One experiment consists of eight electroplating cells with various chemicals to produce surfaces electroplated in microgravity. Some of the surfaces have micron-sized particles of hard materials co-deposited during electrodeposition. The fifth experiment intercepts cosmic ray particles and records their paths on photographic emulsions. The first four experiments are controlled by an on-board C-MOS controller. The fifth experiment is totally passive. These are the first such experiments in space. Their purpose is to create new commercial products with microgravity processing.

  15. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
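The two quantities such an operator-fusion scheme trades off, inter-PE stream cost (edge cut) and load balance across hosts, can be computed for any candidate partition. A minimal sketch (not System S code; the graph and assignment below are invented):

```python
def partition_quality(edges, assignment, num_parts):
    """Edge cut and load imbalance of a graph partition.
    edges: iterable of (u, v, weight); assignment: node -> part index.
    Imbalance 1.0 means perfectly balanced (equal node counts)."""
    cut = sum(w for u, v, w in edges if assignment[u] != assignment[v])
    loads = [0] * num_parts
    for node, part in assignment.items():
        loads[part] += 1
    imbalance = max(loads) * num_parts / sum(loads)
    return cut, imbalance

# Four operators fused into two PEs; heavy streams kept inside a PE:
edges = [("a", "b", 5), ("b", "c", 1), ("c", "d", 4), ("a", "d", 1)]
assignment = {"a": 0, "b": 0, "c": 1, "d": 1}
print(partition_quality(edges, assignment, 2))  # -> (2, 1.0)
```

A minimum-ratio-cut subroutine, as used by COLA, effectively searches for assignments that keep the first number small without letting the second one grow.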

  16. Resource partitioning among top predators in a Miocene food web

    PubMed Central

    Domingo, M. Soledad; Domingo, Laura; Badgley, Catherine; Sanisidro, Oscar; Morales, Jorge

    2013-01-01

    The exceptional fossil sites of Cerro de los Batallones (Madrid Basin, Spain) contain abundant remains of Late Miocene mammals. From these fossil assemblages, we have inferred diet, resource partitioning and habitat of three sympatric carnivorous mammals based on stable isotopes. The carnivorans include three apex predators: two sabre-toothed cats (Felidae) and a bear dog (Amphicyonidae). Herbivore and carnivore carbon isotope (δ13C) values from tooth enamel imply the presence of a woodland ecosystem dominated by C3 plants. δ13C values and mixing-model analyses suggest that the two sabre-toothed cats, one the size of a leopard and the other the size of a tiger, consumed herbivores with similar δ13C values from a more wooded portion of the ecosystem. The two sabre-toothed cats probably hunted prey of different body sizes, and the smaller species could have used tree cover to avoid encounters with the larger felid. For the bear dog, δ13C values are higher and differ significantly from those of the sabre-toothed cats, suggesting a diet that includes prey from more open woodland. Coexistence of the sabre-toothed cats and the bear dog was likely facilitated by prey capture in different portions of the habitat. This study demonstrates the utility of stable isotope analysis for investigating the behaviour and ecology of members of past carnivoran guilds. PMID:23135673

  17. Recursive Partitioning Analysis for New Classification of Patients With Esophageal Cancer Treated by Chemoradiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, Motoo, E-mail: excell@hkg.odn.ne.jp; Department of Clinical Oncology, Aichi Cancer Center Hospital, Nagoya; Department of Radiation Oncology, Aichi Cancer Center Hospital, Nagoya

    2012-11-01

Background: The 7th edition of the American Joint Committee on Cancer staging system does not include lymph node size in the guidelines for staging patients with esophageal cancer. The objectives of this study were to determine the prognostic impact of the maximum metastatic lymph node diameter (ND) on survival and to develop and validate a new staging system for patients with esophageal squamous cell cancer who were treated with definitive chemoradiotherapy (CRT). Methods: Information on 402 patients with esophageal cancer undergoing CRT at two institutions was reviewed. Univariate and multivariate analyses of data from one institution were used to assess the impact of clinical factors on survival, and recursive partitioning analysis was performed to develop the new staging classification. To assess its clinical utility, the new classification was validated using data from the second institution. Results: By multivariate analysis, gender, T, N, and ND stages were independently and significantly associated with survival (p < 0.05). The resulting new staging classification was based on the T and ND. The four new stages led to good separation of survival curves in both the developmental and validation datasets (p < 0.05). Conclusions: Our results showed that lymph node size is a strong independent prognostic factor and that the new staging system, which incorporated lymph node size, provided good prognostic power, and discriminated effectively for patients with esophageal cancer undergoing CRT.
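The core step of recursive partitioning, choosing the covariate threshold that best separates outcomes, can be sketched as below. This uses a variance-based impurity on an invented toy cohort for simplicity; recursive partitioning for survival endpoints, as in this study, typically splits on log-rank statistics and must handle censoring:

```python
def best_split(values, outcomes):
    """Greedy binary split for one recursive-partitioning step: choose
    the threshold on a single covariate that minimizes the summed
    within-group variance (SSE) of the outcome."""
    def sse(ys):
        if not ys:
            return 0.0
        m = sum(ys) / len(ys)
        return sum((y - m) ** 2 for y in ys)

    best = None
    for cut in sorted(set(values))[:-1]:
        left = [o for v, o in zip(values, outcomes) if v <= cut]
        right = [o for v, o in zip(values, outcomes) if v > cut]
        score = sse(left) + sse(right)
        if best is None or score < best[0]:
            best = (score, cut)
    return best[1]

# Hypothetical cohort: max node diameter ND (cm) vs survival (months)
nd       = [0.5, 0.8, 1.0, 2.5, 3.0, 3.5]
survival = [60,  55,  58,  20,  15,  18]
print(best_split(nd, survival))  # -> 1.0 (splits long vs short survivors)
```

Applying the same step recursively inside each resulting subgroup, over all candidate covariates, yields the tree-structured staging classes the abstract describes.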

  18. Ductility normalized-strainrange partitioning life relations for creep-fatigue life predictions

    NASA Technical Reports Server (NTRS)

    Halford, G. R.; Saltsman, J. F.; Hirschberg, M. H.

    1977-01-01

Procedures based on Strainrange Partitioning (SRP) are presented for estimating the effects of environment and other influences on the high-temperature, low-cycle creep-fatigue resistance of alloys. It is proposed that the plastic and creep ductilities determined from conventional tensile and creep-rupture tests conducted in the environment of interest be used in a set of ductility-normalized equations for making a first-order approximation of the four SRP inelastic strainrange life relations. Different levels of sophistication in the application of the procedures are presented by means of illustrative examples with several high-temperature alloys. Predictions of cyclic lives generally agree with observed lives within factors of three.

  19. Disentangling the phylogenetic and ecological components of spider phenotypic variation.

    PubMed

    Gonçalves-Souza, Thiago; Diniz-Filho, José Alexandre Felizola; Romero, Gustavo Quevedo

    2014-01-01

An understanding of how the degree of phylogenetic relatedness influences the ecological similarity among species is crucial to inferring the mechanisms governing the assembly of communities. We evaluated the relative importance of spider phylogenetic relationships and ecological niche (plant morphological variables) to the variation in spider body size and shape by comparing spiders at different scales: (i) between bromeliads and dicot plants (i.e., habitat scale) and (ii) among bromeliads with distinct architectural features (i.e., microhabitat scale). We partitioned the interspecific variation in body size and shape into phylogenetic (that express trait values as expected by phylogenetic relationships among species) and ecological components (that express trait values independent of phylogenetic relationships). At the habitat scale, bromeliad spiders were larger and flatter than spiders associated with the surrounding dicots. At this scale, plant morphology sorted out closely related spiders. Our results showed that spider flatness is phylogenetically clustered at the habitat scale, whereas it is phylogenetically overdispersed at the microhabitat scale, although phylogenetic signal is present at both scales. Taken together, these results suggest that whereas at the habitat scale selective colonization affects spider body size and shape, at fine scales both selective colonization and adaptive evolution determine spider body shape. By partitioning the phylogenetic and ecological components of phenotypic variation, we were able to disentangle the evolutionary history of distinct spider traits and show that plant architecture plays a role in the evolution of spider body size and shape. We also discuss the relevance of considering multiple scales when studying phylogenetic community structure.

  20. Disentangling the Phylogenetic and Ecological Components of Spider Phenotypic Variation

    PubMed Central

    Gonçalves-Souza, Thiago; Diniz-Filho, José Alexandre Felizola; Romero, Gustavo Quevedo

    2014-01-01

An understanding of how the degree of phylogenetic relatedness influences the ecological similarity among species is crucial to inferring the mechanisms governing the assembly of communities. We evaluated the relative importance of spider phylogenetic relationships and ecological niche (plant morphological variables) to the variation in spider body size and shape by comparing spiders at different scales: (i) between bromeliads and dicot plants (i.e., habitat scale) and (ii) among bromeliads with distinct architectural features (i.e., microhabitat scale). We partitioned the interspecific variation in body size and shape into phylogenetic (that express trait values as expected by phylogenetic relationships among species) and ecological components (that express trait values independent of phylogenetic relationships). At the habitat scale, bromeliad spiders were larger and flatter than spiders associated with the surrounding dicots. At this scale, plant morphology sorted out closely related spiders. Our results showed that spider flatness is phylogenetically clustered at the habitat scale, whereas it is phylogenetically overdispersed at the microhabitat scale, although phylogenetic signal is present at both scales. Taken together, these results suggest that whereas at the habitat scale selective colonization affects spider body size and shape, at fine scales both selective colonization and adaptive evolution determine spider body shape. By partitioning the phylogenetic and ecological components of phenotypic variation, we were able to disentangle the evolutionary history of distinct spider traits and show that plant architecture plays a role in the evolution of spider body size and shape. We also discuss the relevance of considering multiple scales when studying phylogenetic community structure. PMID:24651264

  1. A log-normal distribution model for the molecular weight of aquatic fulvic acids

    USGS Publications Warehouse

    Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.

    2000-01-01

The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
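Given the log-normal assumption, the mean and standard deviation of log10(MW) follow from Mn and Mw alone. A sketch, assuming a number-weighted log-normal parameterization (the example Mn and Mw values are illustrative, not from the paper's data):

```python
import math

def lognormal_params(mn, mw):
    """Mean and standard deviation of log10(MW) for a number-weighted
    log-normal molecular weight distribution, from measured Mn and Mw.
    Uses Mw/Mn = exp((sigma*ln10)**2) and
    Mn = exp(mu*ln10 + (sigma*ln10)**2 / 2)."""
    ln10 = math.log(10.0)
    sigma = math.sqrt(math.log(mw / mn)) / ln10
    mu = math.log10(mn) - ln10 * sigma ** 2 / 2.0
    return mu, sigma

# A plausible aquatic fulvic acid: Mn ~ 700, Mw ~ 1260 (Mw/Mn = 1.8)
mu, sigma = lognormal_params(700.0, 1260.0)
print(round(mu, 2), round(sigma, 2))  # -> 2.72 0.33
```

The recovered values fall inside the abstract's quoted ranges (means 2.7-3, standard deviations 0.28-0.37), which is a quick consistency check on the parameterization.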

  2. The total position-spread tensor: Spin partition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Khatib, Muammar, E-mail: elkhatib@irsamc.ups-tlse.fr; Evangelisti, Stefano, E-mail: stefano@irsamc.ups-tlse.fr; Leininger, Thierry, E-mail: Thierry.Leininger@irsamc.ups-tlse.fr

    2015-03-07

The Total Position Spread (TPS) tensor, defined as the second moment cumulant of the position operator, is a key quantity to describe the mobility of electrons in a molecule or an extended system. In the present investigation, the partition of the TPS tensor according to spin variables is derived and discussed. It is shown that, while the spin-summed TPS gives information on charge mobility, the spin-partitioned TPS tensor becomes a powerful tool that provides information about spin fluctuations. The case of the hydrogen molecule is treated, both analytically, by using a 1s Slater-type orbital, and numerically, at the Full Configuration Interaction (FCI) level with a V6Z basis set. It is found that, for very large inter-nuclear distances, the partitioned tensor grows quadratically with the distance in some of the low-lying electronic states. This fact is related to the presence of entanglement in the wave function. Non-dimerized open chains described by a model Hubbard Hamiltonian and linear hydrogen chains H_n (n ≥ 2), composed of equally spaced atoms, are also studied at the FCI level. The hydrogen systems show the presence of marked maxima for the spin-summed TPS (corresponding to a high charge mobility) when the inter-nuclear distance is about 2 bohrs. This fact can be associated with the presence of a Mott transition occurring in this region. The spin-partitioned TPS tensor, on the other hand, shows quadratic growth at long distances, a fact that corresponds to the high spin mobility in a magnetic system.
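In the notation commonly used for this quantity (assumed here, since the abstract gives no formulas), the TPS tensor and its spin partition can be written as:

```latex
% Total position operator and TPS tensor (second-moment cumulant):
\hat{R} = \sum_{i=1}^{n} \hat{r}_i, \qquad
\Lambda = \langle\Psi|\hat{R}\otimes\hat{R}|\Psi\rangle
        - \langle\Psi|\hat{R}|\Psi\rangle \otimes \langle\Psi|\hat{R}|\Psi\rangle .
% Spin partition: split the position operator by spin,
% \hat{R} = \hat{R}_{\uparrow} + \hat{R}_{\downarrow}, giving
\Lambda = \Lambda_{\uparrow\uparrow} + \Lambda_{\uparrow\downarrow}
        + \Lambda_{\downarrow\uparrow} + \Lambda_{\downarrow\downarrow},
% where the spin-summed tensor probes charge mobility and the
% individual spin blocks probe spin fluctuations.
```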

  3. ParABS Systems of the Four Replicons of Burkholderia cenocepacia: New Chromosome Centromeres Confer Partition Specificity†

    PubMed Central

    Dubarry, Nelly; Pasta, Franck; Lane, David

    2006-01-01

    Most bacterial chromosomes carry an analogue of the parABS systems that govern plasmid partition, but their role in chromosome partition is ambiguous. parABS systems might be particularly important for orderly segregation of multipartite genomes, where their role may thus be easier to evaluate. We have characterized parABS systems in Burkholderia cenocepacia, whose genome comprises three chromosomes and one low-copy-number plasmid. A single parAB locus and a set of ParB-binding (parS) centromere sites are located near the origin of each replicon. ParA and ParB of the longest chromosome are phylogenetically similar to analogues in other multichromosome and monochromosome bacteria but are distinct from those of smaller chromosomes. The latter form subgroups that correspond to the taxa of their hosts, indicating evolution from plasmids. The parS sites on the smaller chromosomes and the plasmid are similar to the “universal” parS of the main chromosome but with a sequence specific to their replicon. In an Escherichia coli plasmid stabilization test, each parAB exhibits partition activity only with the parS of its own replicon. Hence, parABS function is based on the independent partition of individual chromosomes rather than on a single communal system or network of interacting systems. Stabilization by the smaller chromosome and plasmid systems was enhanced by mutation of parS sites and a promoter internal to their parAB operons, suggesting autoregulatory mechanisms. The small chromosome ParBs were found to silence transcription, a property relevant to autoregulation. PMID:16452432

  4. Some controversial multiple testing problems in regulatory applications.

    PubMed

    Hung, H M James; Wang, Sue-Jane

    2009-01-01

Multiple testing problems in regulatory applications are often more challenging than the problems of handling a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary endpoint and secondary endpoint needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gatekeeping strategies often impose too many logical restrictions to make sense, particularly in the problem of testing multiple doses and multiple endpoints, the problem of testing a composite endpoint and its component endpoints, and the problem of testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinically relevant orderings or sets can be a viable alternative for resolving these illogical outcomes, though it requires more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviate the possible conservatism as a result of this requirement is a group-sequential design strategy that starts with a conservative sample size planning and then utilizes an alpha spending function to possibly reach the conclusion early.
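The "too many logical restrictions" point can be made concrete with the simplest serial gatekeeping scheme: once one hypothesis in the pre-specified sequence fails, every hypothesis after it is lost, no matter how small its p-value. A sketch (the p-values are invented):

```python
def fixed_sequence_gatekeeper(p_values, alpha=0.025):
    """Fixed-sequence (serial gatekeeping) testing: hypotheses are
    tested in a pre-specified order, each at the full alpha level;
    testing stops at the first non-rejection. This controls the
    familywise error rate but imposes a rigid logical ordering."""
    rejected = []
    for p in p_values:
        if p <= alpha:
            rejected.append(True)
        else:
            rejected.append(False)
            break
    # everything after the first failure is automatically not rejected
    return rejected + [False] * (len(p_values) - len(rejected))

# Order: primary endpoint, key secondary, further secondary:
print(fixed_sequence_gatekeeper([0.001, 0.030, 0.004]))
# -> [True, False, False]: the last endpoint (p = 0.004) cannot be
#    claimed because the middle one failed, however convincing it looks.
```

Partitioning the hypotheses into clinically relevant sets, as the abstract advocates, is one way to avoid exactly this kind of artifact.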

  5. A Negative Partition Relation

    PubMed Central

    Hajnal, A.

    1971-01-01

If the continuum hypothesis is assumed, there is a graph G whose vertices form an ordered set of type ω₁²; G does not contain triangles or complete even graphs of form [ℵ₀, ℵ₀], and there is no independent subset of vertices of type ω₁². PMID:16591893

  6. Maximum plant height and the biophysical factors that limit it.

    PubMed

    Niklas, Karl J

    2007-03-01

    Basic engineering theory and empirically determined allometric relationships for the biomass partitioning patterns of extant tree-sized plants show that the mechanical requirements for vertical growth do not impose intrinsic limits on the maximum heights that can be reached by species with woody, self-supporting stems. This implies that maximum tree height is constrained by other factors, among which hydraulic constraints are plausible. A review of the available information on scaling relationships observed for large tree-sized plants, nevertheless, indicates that mechanical and hydraulic requirements impose dual restraints on plant height and thus, may play equally (but differentially) important roles during the growth of arborescent, large-sized species. It may be the case that adaptations to mechanical and hydraulic phenomena have optimized growth, survival and reproductive success rather than longevity and mature size.

  7. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. 
Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
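The error-containment partitioning described above can be illustrated with a minimal sketch. This is not the ICER-3D codec: plain zlib stands in for the wavelet/entropy-coding chain, and the cube and tile sizes are hypothetical. The point is structural: spatial partitions extend through all wavelength bands and are compressed independently, so corruption of one stream cannot affect any other partition.

```python
import zlib

def decodes(stream):
    """True if a compressed stream decompresses cleanly."""
    try:
        zlib.decompress(stream)
        return True
    except zlib.error:
        return False

def compress_partitions(cube, tile):
    """Split a bands x rows x cols cube into spatial tiles that extend
    through all bands, and compress each tile independently, so a lost
    or corrupted stream only affects its own partition."""
    bands, rows, cols = len(cube), len(cube[0]), len(cube[0][0])
    streams = {}
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            raw = bytes(
                cube[b][r][c]
                for b in range(bands)
                for r in range(r0, min(r0 + tile, rows))
                for c in range(c0, min(c0 + tile, cols))
            )
            streams[(r0, c0)] = zlib.compress(raw)
    return streams

# Toy 2-band, 4x4 "hyperspectral" cube; 2x2 spatial partitions.
cube = [[[(16 * b + 4 * r + c) % 256 for c in range(4)] for r in range(4)]
        for b in range(2)]
streams = compress_partitions(cube, 2)

# Corrupt one partition's stream: the other three still decompress.
broken = dict(streams)
broken[(0, 0)] = b"\x00garbage"
survivors = sum(decodes(s) for s in broken.values())
```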

  8. Interspecific resource partitioning in sympatric ursids

    USGS Publications Warehouse

    Belant, Jerrold L.; Kielland, Knut; Follmann, Erich H.; Adams, Layne G.

    2006-01-01

    The fundamental niche of a species is rarely if ever realized because the presence of other species restricts it to a narrower range of ecological conditions. The effects of this narrower range of conditions define how resources are partitioned. Resource partitioning has been inferred but not demonstrated previously for sympatric ursids. We estimated assimilated diet in relation to body condition (body fat and lean and total body mass) and reproduction for sympatric brown bears (Ursus arctos) and American black bears (U. americanus) in south‐central Alaska, 1998–2000. Based on isotopic analysis of blood and keratin in claws, salmon (Oncorhynchus spp.) predominated in brown bear diets (>53% annually) whereas black bears assimilated 0–25% salmon annually. Black bears did not exploit salmon during a year with below average spawning numbers, probably because brown bears deterred black bear access to salmon. Proportion of salmon in assimilated diet was consistent across years for brown bears and represented the major portion of their diet. Body size of brown bears in the study area approached mean body size of several coastal brown bear populations, demonstrating the importance of salmon availability to body condition. Black bears occurred at a comparable density (mass : mass), but body condition varied and was related directly to the amount of salmon assimilated in their diet. Both species gained most lean body mass during spring and all body fat during summer when salmon were present. Improved body condition (i.e., increased percentage body fat) from salmon consumption reduced catabolism of lean body mass during hibernation, resulting in better body condition the following spring. Further, black bear reproduction was directly related to body condition; reproductive rates were reduced when body condition was lower. High body fat content across years for brown bears was reflected in consistently high reproductive levels. 
We suggest that the fundamental niche of black bears was constrained by brown bears through partitioning of food resources, which varied among years. Reduced exploitation of salmon caused black bears to rely more extensively on less reliable or nutritious food sources (e.g., moose [Alces alces], berries) resulting in lowered body condition and subsequent reproduction.
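Assimilated-diet fractions like the salmon percentages above are commonly estimated from stable-isotope data with a linear mixing model. A minimal two-source sketch follows, using the standard textbook formula with hypothetical δ¹⁵N values, not the study's actual data or method:

```python
def mixing_fraction(delta_consumer, delta_source1, delta_source2,
                    trophic_shift=0.0):
    """Two-source stable-isotope mixing: fraction of source 1 in the
    assimilated diet, after correcting the consumer value for trophic
    discrimination (standard linear mixing model)."""
    corrected = delta_consumer - trophic_shift
    return (corrected - delta_source2) / (delta_source1 - delta_source2)

# Hypothetical d15N values (permil): salmon ~12, terrestrial foods ~2,
# with a 3-permil trophic shift for the bear.
f_salmon = mixing_fraction(delta_consumer=9.0, delta_source1=12.0,
                           delta_source2=2.0, trophic_shift=3.0)
```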

  9. Healthcare Text Classification System and its Performance Evaluation: A Source of Better Intelligence by Characterizing Healthcare Text.

    PubMed

    Srivastava, Saurabh Kumar; Singh, Sandeep Kumar; Suri, Jasjit S

    2018-04-13

    A machine learning (ML)-based text classification system has several classifiers. The performance evaluation (PE) of an ML system is typically driven by the training data size and the partition protocols used. Such systems often yield low accuracy because they lack the ability to model the input text data in terms of its noise characteristics. This study proposes the concept of a misrepresentation ratio (MRR) for input healthcare text data and models the PE criteria for validating the hypothesis. Further, such a system provides a platform to amalgamate several attributes of the ML system: data size, classifier type, partitioning protocol and percentage MRR. Our comprehensive data analysis covered five text data sets (TwitterA, WebKB4, Disease, Reuters (R8), and SMS); five classifiers (support vector machine with linear kernel (SVM-L), MLP-based neural network, AdaBoost, stochastic gradient descent and decision tree); and five training protocols (K2, K4, K5, K10 and JK). In decreasing order of MRR, our ML system demonstrates mean classification accuracies of 70.13 ± 0.15%, 87.34 ± 0.06%, 93.73 ± 0.03%, 94.45 ± 0.03% and 97.83 ± 0.01%, respectively, across all classifiers and protocols. The corresponding AUC is 0.98 for the SMS data using the Multi-Layer Perceptron (MLP)-based neural network. Among all the classifiers, the best accuracy of 91.84 ± 0.04% is achieved by the MLP-based neural network, 6% better than previously published results. We further observed that as MRR decreases, system robustness increases, as validated by the standard deviations. The overall system accuracy across all data types, classifiers and protocols is 89%, showing the entire ML system to be novel, robust and unique. The system was also tested for stability and reliability.
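The K2/K4/K5/K10 protocols presumably denote K-fold partitioning of the data. A minimal stdlib sketch of that idea follows, with a majority-class baseline standing in for the paper's classifiers (the labels and fold counts here are toy values, not the study's):

```python
from collections import Counter

def k_folds(n, k):
    """Partition sample indices 0..n-1 into k near-equal contiguous folds."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_val_accuracy(labels, k):
    """Mean held-out accuracy of a majority-class baseline under a
    K-fold partition protocol."""
    accs = []
    for test in k_folds(len(labels), k):
        test_set = set(test)
        train = [labels[i] for i in range(len(labels)) if i not in test_set]
        majority = Counter(train).most_common(1)[0][0]
        accs.append(sum(labels[i] == majority for i in test) / len(test))
    return sum(accs) / k

# Toy imbalanced label set evaluated under the "K4" protocol.
labels = ["spam"] * 6 + ["ham"] * 2
acc = cross_val_accuracy(labels, 4)
```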

  10. Preformed template fluctuations promote fibril formation: Insights from lattice and all-atom models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouza, Maksim, E-mail: mkouza@chem.uw.edu.pl; Kolinski, Andrzej; Co, Nguyen Truong

    2015-04-14

    Fibril formation resulting from protein misfolding and aggregation is a hallmark of several neurodegenerative diseases such as Alzheimer's and Parkinson's diseases. Despite the fact that the fibril formation process is very slow and thus poses a significant challenge for theoretical and experimental studies, a number of alternative pictures of the molecular mechanisms of amyloid fibril formation have recently been proposed. What seems to be common to the majority of the proposed models is that fibril elongation involves the formation of pre-nucleus seeds prior to the creation of a critical nucleus. Once the size of the pre-nucleus seed reaches the critical nucleus size, its thermal fluctuations are expected to be small and the resulting nucleus provides a template for sequential (one-by-one) accommodation of added monomers. The effect of template fluctuations on fibril formation rates has so far been explored neither experimentally nor theoretically. In this paper, we make a first attempt at this problem using two sets of simulations. To mimic small template fluctuations, in one set the monomers of the preformed template are kept fixed, while in the other set they are allowed to fluctuate. The kinetics of addition of a new peptide onto the template is explored using all-atom simulations with explicit water and the GROMOS96 43a1 force field, as well as simple lattice models. Our results demonstrate that preformed template fluctuations can modulate protein aggregation rates and pathways. The association of a nascent monomer with the template obeys the kinetic partitioning mechanism, in which an intermediate state occurs in a fraction of routes to the protofibril. Template immobility greatly increases the time needed to incorporate a new peptide into the preformed template compared to the fluctuating-template case. This observation was also confirmed by simulations using lattice models and may be invoked to understand the role of template fluctuations in slowing down fibril elongation in vivo.

  11. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of the individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation, these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  12. Reliability Estimation When a Test Is Split into Two Parts of Unknown Effective Length.

    ERIC Educational Resources Information Center

    Feldt, Leonard S.

    2002-01-01

    Considers the situation in which content or administrative considerations limit the way in which a test can be partitioned to estimate the internal consistency reliability of the total test score. Demonstrates that a single-valued estimate of the total score reliability is possible only if an assumption is made about the comparative size of the…

  13. Biological Individuality of Man

    DTIC Science & Technology

    1974-12-01

    levels of a specific chemical substance in blood, urine, digestive juices, or tissues of "normal" persons. These are cited as the extremes of... physiological state, blood sugar level and nutritional status of the individual have been identified as significant. Studies on oxygen deprivation... in great detail. These include: geometry and composition of the cochlear partition, density and spacing of epithelial hair cells, size and

  14. Food emulsions as delivery systems for flavor compounds: A review.

    PubMed

    Mao, Like; Roos, Yrjö H; Biliaderis, Costas G; Miao, Song

    2017-10-13

    Food flavor is an important attribute of quality food, and it largely determines consumer food preference. Many food products exist as emulsions or undergo emulsification during processing, so a good understanding of flavor release from emulsions is essential to design food with desirable flavor characteristics. Emulsions are biphasic systems in which flavor compounds partition into different phases, and their release can be modulated in different ways. Emulsion ingredients, such as oils, emulsifiers and thickening agents, can interact with flavor compounds, thereby modifying their thermodynamic behavior. Emulsion structures, including droplet size and size distribution, viscosity and interface thickness, can influence the partitioning of flavor compounds and their diffusion in the emulsions, resulting in different release kinetics. When emulsions are consumed in the mouth, both emulsion ingredients and structures undergo significant changes, resulting in different flavor perception. Special design of emulsion structures in the water phase, oil phase and interface gives emulsions great potential as delivery systems to control flavor release in a wide range of applications. This review provides an overview of the current understanding of flavor release from emulsions, and of how emulsions can serve as delivery systems for flavor compounds to better design novel food products with enhanced sensorial and nutritional attributes.
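The effect of the oil phase on flavor partitioning can be made concrete with the standard mass-balance relation for an emulsion at equilibrium (a textbook sketch with hypothetical coefficients, not values from the review): since C_emulsion = φ_oil·C_oil + φ_water·C_water and C_oil = K_ow·C_water, the air-emulsion partition coefficient is K_aw / (φ_oil·K_ow + φ_water).

```python
def air_emulsion_partition(k_aw, k_ow, phi_oil):
    """Air-emulsion partition coefficient from a phase mass balance:
    C_em = phi_o*C_o + phi_w*C_w with C_o = K_ow*C_w, hence
    K_a/em = K_aw / (phi_o*K_ow + (1 - phi_o))."""
    phi_w = 1.0 - phi_oil
    return k_aw / (phi_oil * k_ow + phi_w)

# Hypothetical lipophilic volatile (K_ow = 100): raising the oil
# fraction from 5% to 20% lowers its headspace availability.
k5 = air_emulsion_partition(k_aw=0.01, k_ow=100.0, phi_oil=0.05)
k20 = air_emulsion_partition(k_aw=0.01, k_ow=100.0, phi_oil=0.20)
```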

  15. EBSD Analysis of Relationship Between Microstructural Features and Toughness of a Medium-Carbon Quenching and Partitioning Bainitic Steel

    NASA Astrophysics Data System (ADS)

    Li, Qiangguo; Huang, Xuefei; Huang, Weigang

    2017-12-01

    A multiphase microstructure of bainite, martensite and retained austenite in a 0.3C bainitic steel was obtained by a novel bainite isothermal transformation plus quenching and partitioning (B-QP) process. The correlations between microstructural features and toughness were investigated by electron backscatter diffraction (EBSD), and the results showed that the multiphase microstructure containing approximately 50% bainite exhibits higher strength (1617 MPa), greater elongation (18.6%) and greater impact toughness (103 J) than the full martensite. The EBSD analysis indicated that the multiphase microstructure with a smaller average local misorientation (1.22°) has a lower inner stress concentration possibility and that the first formed bainitic ferrite plates in the multiphase microstructure can refine subsequently generated packets and blocks. The corresponding packet and block average size decrease from 11.9 and 2.3 to 8.4 and 1.6 μm, respectively. A boundary misorientation analysis indicated that the multiphase microstructure has a higher percentage of high-angle boundaries (67.1%) than the full martensite (57.9%) because of the larger numbers and smaller sizes of packets and blocks. The packet boundary obstructs crack propagation more effectively than the block boundary.
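The high-angle boundary percentages quoted above come from thresholding boundary misorientations, conventionally at 15°. A minimal sketch with hypothetical angle data (EBSD software computes the misorientations themselves; this only shows the thresholding step):

```python
def high_angle_fraction(misorientations_deg, threshold=15.0):
    """Fraction of boundary segments whose misorientation exceeds the
    high-angle cutoff (15 degrees is the conventional threshold)."""
    high = sum(1 for m in misorientations_deg if m > threshold)
    return high / len(misorientations_deg)

# Hypothetical boundary misorientation measurements (degrees).
angles = [2.1, 5.4, 58.9, 12.0, 44.3, 60.0, 7.7, 39.5, 16.2, 3.3]
frac = high_angle_fraction(angles)
```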

  16. Cation solvation with quantum chemical effects modeled by a size-consistent multi-partitioning quantum mechanics/molecular mechanics method.

    PubMed

    Watanabe, Hiroshi C; Kubillus, Maximilian; Kubař, Tomáš; Stach, Robert; Mizaikoff, Boris; Ishikita, Hiroshi

    2017-07-21

    In the condensed phase, quantum chemical properties such as many-body effects and intermolecular charge fluctuations are critical determinants of the solvation structure and dynamics. Thus, a quantum mechanical (QM) molecular description is required for both solute and solvent to incorporate these properties. However, it is challenging to conduct molecular dynamics (MD) simulations of condensed systems of sufficient scale when adopting QM potentials. To overcome this problem, we recently developed the size-consistent multi-partitioning (SCMP) quantum mechanics/molecular mechanics (QM/MM) method and realized stable and accurate MD simulations by applying the QM potential to a benchmark system. In the present study, as the first application of the SCMP method, we have investigated the structures and dynamics of Na+, K+, and Ca2+ solutions based on nanosecond-scale sampling, 100 times longer than conventional QM-based sampling. We have also evaluated two dynamic properties, the diffusion coefficient and difference spectra, with high statistical certainty; the calculation of these properties has not previously been possible within the conventional QM/MM framework. Based on our analysis, we have quantitatively evaluated the quantum chemical solvation effects, which show distinct differences between the cations.
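Of the dynamic properties mentioned, the diffusion coefficient is conventionally obtained from the Einstein relation, D = MSD(t)/(6t) in three dimensions. A minimal sketch on a hypothetical toy trajectory (not the SCMP workflow; a real analysis would fit the MSD over many time origins):

```python
def diffusion_coefficient(positions, dt):
    """Einstein relation: D = MSD(t) / (6 t) in 3D, estimated here from
    the displacement between the first and last frame of each particle.
    positions[frame][particle] is an (x, y, z) tuple."""
    t = (len(positions) - 1) * dt
    msd = 0.0
    for p0, p1 in zip(positions[0], positions[-1]):
        msd += sum((a - b) ** 2 for a, b in zip(p1, p0))
    msd /= len(positions[0])
    return msd / (6.0 * t)

# Hypothetical 2-particle trajectory, 3 frames, dt = 1 ps, positions in nm.
traj = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.1, 0.1, 0.0), (1.1, 0.1, 0.1)],
    [(0.3, 0.2, 0.1), (1.2, 0.3, 0.2)],
]
d = diffusion_coefficient(traj, dt=1.0)
```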

  17. Separation of ion types in tandem mass spectrometry data interpretation -- a graph-theoretic approach.

    PubMed

    Yan, Bo; Pan, Chongle; Olman, Victor N; Hettich, Robert L; Xu, Ying

    2004-01-01

    Mass spectrometry is one of the most popular analytical techniques for identification of individual proteins in a protein mixture, one of the basic problems in proteomics. It identifies a protein through its unique mass spectral pattern. While the problem is theoretically solvable, it remains computationally challenging. One of the key challenges comes from the difficulty in distinguishing the N- and C-terminus ions, mostly b- and y-ions respectively. In this paper, we present a graph algorithm for separating b- from y-ions in a set of mass spectra. We represent each spectral peak as a node and consider two types of edges: a type-1 edge connects two peaks possibly of the same ion type, and a type-2 edge connects two peaks possibly of different ion types, predicted based on local information. The ion-separation problem is then formulated and solved as a graph partition problem: partition the graph into three subgraphs, corresponding to b-ions, y-ions and others respectively, so as to maximize the total weight of type-1 edges while minimizing the total weight of type-2 edges within each subgraph. We have developed a dynamic programming algorithm that rigorously solves this graph partition problem and implemented it as a computer program, PRIME. We have tested PRIME on 18 data sets of highly accurate FT-ICR tandem mass spectra and found that it achieved ~90% accuracy in separating b- and y-ions.
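The partition objective can be illustrated on a toy graph. The sketch below brute-forces all 3-way assignments rather than using the paper's dynamic program, and the peaks and edge weights are hypothetical; it only demonstrates the scoring rule (reward within-group type-1 weight, penalize within-group type-2 weight):

```python
from itertools import product

def best_partition(n, type1, type2):
    """Brute-force the 3-way partition objective: maximize total type-1
    edge weight inside groups minus total type-2 edge weight inside
    groups. type1/type2 map a pair (u, v) to an edge weight."""
    best_score, best_assign = float("-inf"), None
    for assign in product(range(3), repeat=n):
        score = 0.0
        for (u, v), w in type1.items():
            if assign[u] == assign[v]:
                score += w
        for (u, v), w in type2.items():
            if assign[u] == assign[v]:
                score -= w
        if score > best_score:
            best_score, best_assign = score, assign
    return best_score, best_assign

# Toy spectrum of 4 peaks: (0,1) and (2,3) look like same-type pairs,
# (0,2) and (1,3) look like different-type pairs.
type1 = {(0, 1): 2.0, (2, 3): 1.5}
type2 = {(0, 2): 3.0, (1, 3): 1.0}
score, groups = best_partition(4, type1, type2)
```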

  18. Study on the rheological properties and volatile release of cold-set emulsion-filled protein gels.

    PubMed

    Mao, Like; Roos, Yrjö H; Miao, Song

    2014-11-26

    Emulsion-filled protein gels (EFP gels) were prepared through a cold-set gelation process and used to deliver volatile compounds. An increase in the whey protein isolate (WPI) content from 4 to 6% w/w did not show a significant effect on the gelation time, whereas an increase in the oil content from 5 to 20% w/w resulted in an earlier onset of gelation. Gels with a higher WPI content had a higher storage modulus and water-holding capacity (WHC), and they presented a higher force and strain at breaking, indicating that a more compact gel network was formed. An increase in the oil content contributed to gels with a higher storage modulus and force at breaking; however, this increase did not affect the WHC of the gels, and gels with a higher oil content became more brittle, resulting in a decreased strain at breaking. GC headspace analysis showed that volatiles were released at lower rates and had lower air-gel partition coefficients in EFP gels than in their ungelled counterparts. Gels with a higher WPI content had lower release rates and partition coefficients of the volatiles. A change in the oil content significantly modified the partition of volatiles at equilibrium but had only a minor effect on their release rate. The findings indicate that EFP gels could potentially be used to modulate volatile release by varying the rheological properties of the gel.

  19. On the use of the partitioning approach to derive Environmental Quality Standards (EQS) for persistent organic pollutants (POPs) in sediments: a review of existing data.

    PubMed

    Dueri, Sibylle; Castro-Jiménez, Javier; Comenges, José-Manuel Zaldívar

    2008-09-15

    A review of experimental data was performed to study the relationships between the concentrations in water, pore water and sediments for different families of organic contaminants. The objective was to determine whether it is possible to set EQS for sediments from the EQS defined for surface waters in the Daughter Directive of the European Parliament (COM (2006) 397). The analysis of experimental data showed that, even though in some specific cases there is a coupling between the water column and sediments, this coupling is the exception rather than the rule. It is therefore not advisable to use water column data to assess the chemical quality status of sediments; measurements are needed in both media. At the moment, EQS have been defined for the water column and will assess only compliance with good chemical status of surface waters. Since sediment toxicity depends on the dissolved pore water concentration, the EQS developed for water could be applied to pore water (interstitial water); hence, there would be no need to develop another set of EQS. The partitioning approach has been proposed as a way to calculate sediment EQS from water EQS, but the partition coefficient depends strongly on sediment characteristics, and its use introduces an important uncertainty into the definition of sediment EQS. The direct measurement of pore water concentration is therefore regarded as the better option.
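The partitioning approach referred to above relates sediment and pore-water concentrations through the organic-carbon-normalized partition coefficient, C_sed = f_oc · K_oc · C_pw. A minimal sketch with hypothetical numbers follows, subject to the same caveat the abstract raises: K_oc varies strongly with sediment characteristics, so the result carries substantial uncertainty.

```python
def pore_water_conc(c_sed, f_oc, k_oc):
    """Equilibrium-partitioning estimate of pore-water concentration:
    C_sed = f_oc * K_oc * C_pw  =>  C_pw = C_sed / (f_oc * K_oc).
    c_sed in ug/kg dry weight, k_oc in L/kg, result in ug/L."""
    return c_sed / (f_oc * k_oc)

# Hypothetical hydrophobic pollutant: K_oc = 1e5 L/kg in a sediment
# with 2% organic carbon; compare the estimate to a water-column EQS.
c_pw = pore_water_conc(c_sed=200.0, f_oc=0.02, k_oc=1e5)
exceeds_eqs = c_pw > 0.05  # hypothetical EQS of 0.05 ug/L
```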

  20. ADME evaluation in drug discovery. 1. Applications of genetic algorithms to the prediction of blood-brain partitioning of a large set of drugs.

    PubMed

    Hou, Tingjun; Xu, Xiaojie

    2002-12-01

    In this study, the relationships between the brain-blood concentration ratio of 96 structurally diverse compounds and a large number of structurally derived descriptors were investigated. The linear models were based on molecular descriptors that can be calculated for any compound simply from a knowledge of its molecular structure. The linear correlation coefficients of the models were optimized by genetic algorithms (GAs), and the descriptors used in the linear models were automatically selected from 27 structurally derived descriptors. The GA optimizations resulted in a group of linear models with three or four molecular descriptors and good statistical significance. The change in descriptor usage as the evolution proceeds demonstrates that the octane/water partition coefficient and the partial negative solvent-accessible surface area multiplied by the negative charge are crucial to blood-brain barrier permeability. Moreover, we found that predictions using multiple QSPR models from the GA optimization gave quite good results in spite of the structural diversity, better than predictions using the best single model. The predictions for two external sets of 37 diverse compounds using multiple QSPR models indicate that the best linear models with four descriptors are sufficiently effective for predictive use. Considering the ease of computing the descriptors, the linear models may be used as general utilities to screen the blood-brain barrier partitioning of drugs in a high-throughput fashion.
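The multiple-model prediction strategy can be sketched as a simple average over a set of GA-selected linear models. All descriptor names and coefficients below are hypothetical illustrations, not the paper's fitted values:

```python
def predict_logbb(descriptors, models):
    """Average the predictions of several linear QSPR models, each given
    as an (intercept, {descriptor: coefficient}) pair, e.g. as selected
    by a genetic algorithm. Coefficients here are hypothetical."""
    preds = []
    for intercept, coeffs in models:
        preds.append(intercept +
                     sum(c * descriptors[name] for name, c in coeffs.items()))
    return sum(preds) / len(preds)

# Two toy models over a lipophilicity descriptor (logP) and a polar
# surface-area descriptor (PSA); the ensemble averages their outputs.
models = [
    (0.1, {"logP": 0.25, "PSA": -0.010}),
    (0.0, {"logP": 0.30, "PSA": -0.012}),
]
logbb = predict_logbb({"logP": 2.0, "PSA": 50.0}, models)
```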
