NASA Astrophysics Data System (ADS)
Ise, Takeshi; Litton, Creighton M.; Giardina, Christian P.; Ito, Akihiko
2010-12-01
Partitioning of gross primary production (GPP) to aboveground versus belowground tissues, to growth versus respiration, and to short- versus long-lived tissues exerts a strong influence on ecosystem structure and function, with potentially large implications for the global carbon budget. A recent meta-analysis of forest ecosystems suggests that carbon partitioning to leaves, stems, and roots varies consistently with GPP and that the ratio of net primary production (NPP) to GPP is conservative across environmental gradients. To examine the influence of the carbon partitioning schemes employed by global ecosystem models, we used this meta-analysis-based model and a satellite-based (MODIS) terrestrial GPP data set to estimate global woody NPP and equilibrium biomass, and then compared these estimates to two process-based ecosystem models (Biome-BGC and VISIT) driven by the same GPP data set. We hypothesized that different carbon partitioning schemes would result in large differences in global estimates of woody NPP and equilibrium biomass. Woody NPP estimated by Biome-BGC and VISIT was 25% and 29% higher, respectively, than the meta-analysis-based model for boreal forests, with smaller differences in temperate and tropical forests. Global equilibrium woody biomass, calculated from model-specific NPP estimates and a single set of tissue turnover rates, was 48 and 226 Pg C higher for Biome-BGC and VISIT, respectively, compared to the meta-analysis-based model, reflecting differences in carbon partitioning to structural versus metabolically active tissues. In summary, we found that different carbon partitioning schemes resulted in large variations in estimates of global woody carbon flux and storage, indicating that stand-level controls on carbon partitioning are not yet accurately represented in ecosystem models.
Direct optimization, affine gap costs, and node stability.
Aagesen, Lone
2005-09-01
The outcome of a phylogenetic analysis based on DNA sequence data is highly dependent on the homology-assignment step and may vary with alignment parameter costs. Robustness to changes in parameter costs is therefore a desired quality of a data set, because the final conclusions will be less dependent on selecting a precise optimal cost set. Here, node stability is explored in relation to separate versus combined analysis in three different data sets, all including several data partitions. Robustness to changes in cost sets is measured as the number of successive changes that can be made in a given cost set before a specific clade is lost; the changes are, in all cases, base change cost, gap penalties, and adding/removing/changing affine gap costs. When combining data partitions, the number of clades that appear across the entire parameter space is not remarkably increased; in some cases this number even decreased. However, when combining data partitions, the trees from cost sets including affine gap costs were always more similar than the trees from cost sets without affine gap costs. This was not the case when the data partitions were analyzed independently. When data sets were combined, approximately 80% of the clades found under cost sets including affine gap costs resisted at least one change to the cost set.
Padró, Juan M; Pellegrino Vidal, Rocío B; Reta, Mario
2014-12-01
The partition coefficients, P_IL/w, of several compounds, some of them of biological and pharmacological interest, between water and room-temperature ionic liquids based on the imidazolium, pyridinium, and phosphonium cations, namely 1-octyl-3-methylimidazolium hexafluorophosphate, N-octylpyridinium tetrafluoroborate, trihexyl(tetradecyl)phosphonium chloride, trihexyl(tetradecyl)phosphonium bromide, trihexyl(tetradecyl)phosphonium bis(trifluoromethylsulfonyl)imide, and trihexyl(tetradecyl)phosphonium dicyanamide, were accurately measured. In this way, we extended our previously reported database of partition coefficients in room-temperature ionic liquids. We employed the solvation parameter model with different probe molecules (the training set) to elucidate the chemical interactions involved in the partition process and discussed the most relevant differences among the three types of ionic liquids. The multiparametric equations obtained with the aforementioned model were used to predict the partition coefficients for compounds (the test set) not present in the training set, most of them of biological and pharmacological interest. An excellent agreement between calculated and experimental log P_IL/w values was obtained. Thus, the obtained equations can be used to predict, a priori, the extraction efficiency for any compound using these ionic liquids as extraction solvents in liquid-liquid extractions.
Burant, Aniela; Thompson, Christopher; Lowry, Gregory V; Karamalidis, Athanasios K
2016-05-17
Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch-reactor system with dual spectroscopic detectors: a near-infrared spectrometer for measuring the organic analyte in the CO2 phase and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published poly-parameter linear free-energy relationship and to develop five new linear free-energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models each targeted a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use the vapor pressure and aqueous solubility of the organic compound at 25 °C, together with CO2 density, to predict partitioning coefficients over a range of temperature and pressure conditions. The compound-class models provide better estimates of partitioning behavior for compounds in their class than does the model built for the entire data set.
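The abstract describes relationships that predict partitioning from vapor pressure, aqueous solubility, and CO2 density. A minimal sketch of that functional form follows; the function name and coefficient values are hypothetical placeholders, not the fitted parameters from the paper.

```python
import numpy as np

def log_k_water_co2(log_vp, log_sw, rho_co2, coef=(0.5, 0.8, -0.9, 1.2)):
    """Hypothetical water/sc-CO2 relationship of the form described in the
    abstract: log K = a + b*log(VP at 25 C) + c*log(Sw at 25 C) + d*rho(CO2).
    The coefficients are illustrative placeholders only."""
    a, b, c, d = coef
    return (a + b * np.asarray(log_vp) + c * np.asarray(log_sw)
            + d * np.asarray(rho_co2))

# A volatile, sparingly soluble solute at a typical sc-CO2 density (~0.7 g/mL)
print(log_k_water_co2(log_vp=-1.0, log_sw=-2.5, rho_co2=0.7))
```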
NASA Technical Reports Server (NTRS)
Drake, Michael J.; Rubie, David C.; Mcfarlane, Elisabeth A.
1992-01-01
The partitioning of elements amongst lower mantle phases and silicate melts is of interest in unraveling the early thermal history of the Earth. Because of the technical difficulty of carrying out such measurements, only one direct set of measurements has been reported previously, and those results, as well as interpretations based on them, have generated controversy. Here we report what is, to our knowledge, only the second set of directly measured trace element partition coefficients for a natural system (KLB-1).
Monkey search algorithm for ECE components partitioning
NASA Astrophysics Data System (ADS)
Kuliev, Elmar; Kureichik, Vladimir; Kureichik, Vladimir, Jr.
2018-05-01
The paper considers one of the important design problems: the partitioning of electronic computer equipment (ECE) components (blocks). The problem belongs to the NP-hard class and has a combinatorial, logical nature. In the paper, the partitioning problem is formulated as the partition of a graph into parts. To solve the problem, the authors suggest a bioinspired approach based on a monkey search algorithm. Computational experiments with the developed software show the algorithm's efficiency, as well as its recommended settings for obtaining more effective solutions in comparison with a genetic algorithm.
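For context, the objective such a bioinspired partitioner optimizes can be sketched as a generic two-way partition fitness (edge cut plus a balance penalty). This is not the paper's exact formulation, and the `balance_weight` parameter is an assumption.

```python
import networkx as nx

def partition_cost(G, assignment, balance_weight=1.0):
    """Generic fitness a search heuristic would minimize: the number of
    edges cut by the partition plus a penalty for unbalanced part sizes."""
    cut = sum(1 for u, v in G.edges if assignment[u] != assignment[v])
    sizes = {}
    for node, part in assignment.items():
        sizes[part] = sizes.get(part, 0) + 1
    imbalance = max(sizes.values()) - min(sizes.values())
    return cut + balance_weight * imbalance

G = nx.random_regular_graph(3, 10, seed=1)   # a toy component graph
assignment = {n: n % 2 for n in G.nodes}     # naive initial partition
print(partition_cost(G, assignment))
```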
NASA Astrophysics Data System (ADS)
Chandramouli, Bharadwaj; Kamens, Richard M.
Decamethylcyclopentasiloxane (D5) and decamethyltetrasiloxane (MD2M) were injected into a smog chamber containing fine Arizona road dust particles (95% of surface area <2.6 μm) and an urban smog atmosphere in the daytime. A photochemical reaction / gas-particle partitioning scheme was implemented to simulate the formation and gas-particle partitioning of the hydroxyl oxidation products of D5 and MD2M. This scheme incorporated the reactions of D5 and MD2M into an existing urban smog chemical mechanism (Carbon Bond IV) and partitioned the products between the gas and particle phases by treating gas-particle partitioning as a kinetic process with specified uptake and off-gassing rates. The photochemical model PKSS was used to simulate this set of reactions. A Langmuirian partitioning model was used to convert the measured and estimated mass-based partitioning coefficients (K_P) to a molar or volume-based form. The model simulations indicated that >99% of all product silanols formed in the gas phase partition immediately to the particle phase, and the experimental data agreed with the model predictions. One product, D4TOH, was observed and confirmed for the D5 reaction, and this system was modeled successfully. Experimental data were inadequate for the MD2M reaction products, and it is likely that more than one product formed. The model sets up a framework into which more reaction and partitioning steps can easily be added.
Springer, M S; Amrine, H M; Burk, A; Stanhope, M J
1999-03-01
We concatenated sequences for four mitochondrial genes (12S rRNA, tRNA valine, 16S rRNA, cytochrome b) and four nuclear genes [aquaporin, alpha 2B adrenergic receptor (A2AB), interphotoreceptor retinoid-binding protein (IRBP), von Willebrand factor (vWF)] into a multigene data set representing 11 eutherian orders (Artiodactyla, Hyracoidea, Insectivora, Lagomorpha, Macroscelidea, Perissodactyla, Primates, Proboscidea, Rodentia, Sirenia, Tubulidentata). Within this data set, we recognized nine mitochondrial partitions (both stems and loops, for each of 12S rRNA, tRNA valine, and 16S rRNA; and first, second, and third codon positions of cytochrome b) and 12 nuclear partitions (first, second, and third codon positions, respectively, of each of the four nuclear genes). Four of the 21 partitions (third positions of cytochrome b, A2AB, IRBP, and vWF) showed significant heterogeneity in base composition across taxa. Phylogenetic analyses (parsimony, minimum evolution, maximum likelihood) based on sequences for all 21 partitions provide 99-100% bootstrap support for Afrotheria and Paenungulata. With the elimination of the four partitions exhibiting heterogeneity in base composition, there is also high bootstrap support (89-100%) for cow + horse. Statistical tests reject Altungulata, Anagalida, and Ungulata. Data set heterogeneity between mitochondrial and nuclear genes is most evident when all partitions are included in the phylogenetic analyses. Mitochondrial-gene trees associate cow with horse, whereas nuclear-gene trees associate cow with hedgehog and these two with horse. However, after eliminating third positions of A2AB, IRBP, and vWF, nuclear data agree with mitochondrial data in supporting cow + horse. Nuclear genes provide stronger support for both Afrotheria and Paenungulata. Removal of third positions of cytochrome b results in improved performance for the mitochondrial genes in recovering these clades.
Krajewski, C; Fain, M G; Buckley, L; King, D G
1999-11-01
Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogeneous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that this heterogeneity includes variation in base composition and transition bias as well as in substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogeneous data sets. Copyright 1999 Academic Press.
Task-specific image partitioning.
Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D
2013-02-01
Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used for performing high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework that produces a region-based image representation leading to higher task performance than that reached using task-oblivious partitioning frameworks or the few existing supervised partitioning frameworks. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function, which define task-specific similarity/dissimilarity among superpixels, are estimated with a structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to better generalization ability, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.
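As a rough illustration of the inference step, the following sketch performs greedy correlation clustering over a toy superpixel graph whose edge attribute `w` stands in for the learned discriminant (positive = merge, negative = separate). The paper solves this optimization differently; the greedy merging below is only a stand-in.

```python
import networkx as nx

def greedy_correlation_clustering(G):
    """Greedy correlation clustering: merge two clusters whenever the
    summed affinity 'w' between them is positive (raises the objective)."""
    clusters = {n: {n} for n in G.nodes}
    label = {n: n for n in G.nodes}
    improved = True
    while improved:
        improved = False
        for u, v in G.edges:
            cu, cv = label[u], label[v]
            if cu == cv:
                continue
            gain = sum(G[a][b].get('w', 0.0)
                       for a in clusters[cu] for b in clusters[cv]
                       if G.has_edge(a, b))
            if gain > 0:
                clusters[cu] |= clusters[cv]
                for n in clusters[cv]:
                    label[n] = cu
                del clusters[cv]
                improved = True
                break
    return list(clusters.values())

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 2.0), (1, 2, -1.5), (2, 3, 0.8)], weight='w')
print(greedy_correlation_clustering(G))   # -> [{0, 1}, {2, 3}]
```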
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are typically characterized by nonlinearity and system uncertainty, so a conventional single model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the partial mutual information (PMI) criterion. Multiple local Gaussian process regression (GPR) models are then developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each local model is estimated via Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated on an industrial fed-batch chlortetracycline fermentation process.
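A minimal sketch of this pipeline follows, assuming scikit-learn's GaussianProcessRegressor. The bootstrap/variable-subset construction and the likelihood-style weighting are simplified stand-ins for the paper's PMI selection and Bayesian inference, and `sigma` is an assumed noise scale.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(60, 3))                   # process variables
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)    # quality variable

# Local models on bootstrapped samples with different input subsets
subsets = [(0,), (0, 1), (0, 2)]
models = []
for cols in subsets:
    idx = rng.integers(0, len(X), len(X))              # bootstrap resample
    m = GaussianProcessRegressor(alpha=1e-6).fit(X[idx][:, cols], y[idx])
    models.append((cols, m))

def predict(x_new, sigma=0.2):
    """Combine local GPR predictions with simple likelihood-style weights,
    a hypothetical proxy for the paper's Bayesian posterior weighting."""
    preds, weights = [], []
    for cols, m in models:
        mu, std = m.predict(x_new[:, cols], return_std=True)
        preds.append(mu)
        weights.append(np.exp(-0.5 * (std / sigma) ** 2))
    w = np.array(weights)
    return (w * np.array(preds)).sum(axis=0) / w.sum(axis=0)

print(predict(np.array([[5.0, 1.0, 2.0]])))
```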
Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems
2015-05-01
of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9...presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and...sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC
Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio
2010-04-01
Three different splits of 55 antineoplastic agents into a subtraining set (n = 22), a calibration set (n = 21), and a test set (n = 12) have been examined. Using the correlation balance of SMILES-based optimal descriptors, quite satisfactory models for the octanol/water partition coefficient were obtained for all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both maximal correlation coefficients for the subtraining and calibration sets and a minimal difference between these correlation coefficients. Thus, the calibration set acts as a preliminary test set. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.
Adaptively loaded IM/DD optical OFDM based on set-partitioned QAM formats.
Zhao, Jian; Chen, Lian-Kuan
2017-04-17
We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), which adds encryption into the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, a nonlinear inverse operation, the Secure Hash Algorithm 256 (SHA-256), and a plaintext-based keystream are all used to enhance security. The test results indicate that the proposed methods have high security and good lossless compression performance.
Spatial coding-based approach for partitioning big spatial data in Hadoop
NASA Astrophysics Data System (ADS)
Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai
2017-09-01
Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, the skewed distribution of spatial data and the varying sizes of spatial vector objects make it challenging to ensure both optimal performance of spatial operations and data balance in the cluster. To tackle this problem, we propose a spatial coding-based approach for partitioning big spatial data in Hadoop. This approach first compresses the whole big spatial data set, based on a spatial coding matrix, into a sensing information set (SIS) that includes spatial code, size, count, and other information. The SIS is then employed to build a spatial partitioning matrix, which is finally used to split all spatial objects into different partitions in the cluster. With our approach, neighbouring spatial objects can be partitioned into the same block while minimizing data skew in the Hadoop distributed file system (HDFS). The presented approach is compared, in a case study, against random-sampling-based partitioning using three measures: spatial index quality, data skew in HDFS, and range query performance. The experimental results show that our spatial coding-based method can improve the query performance of big spatial data, as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it can also efficiently support any other distributed big spatial data systems.
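One concrete instance of spatial-code-driven partitioning is Z-order (Morton) coding, sketched below: codes are computed per object and split into contiguous, equal-count ranges, so neighbours tend to share a partition and partition sizes stay balanced. The paper's coding matrix and SIS construction may differ; this is only an illustration of the idea.

```python
import numpy as np

def morton_code(x, y, bits=16):
    """Interleave the bits of integer cell coordinates (x, y) into a
    Z-order code; nearby cells get numerically close codes."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

# Partition objects by splitting the sorted Z-order codes into contiguous
# ranges: spatial neighbours land together and counts stay balanced.
pts = np.random.default_rng(1).integers(0, 2**10, size=(1000, 2))
codes = np.array([morton_code(int(x), int(y)) for x, y in pts])
order = np.argsort(codes)
partitions = np.array_split(order, 8)      # 8 roughly equal-size partitions
print([len(p) for p in partitions])
```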
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than SPIHT. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm runs faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in a bit stream that is stored or transmitted. It was applied to the compression of multichannel ECG data, and a specific procedure based on the modified algorithm is presented for more efficient compression of such data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for compression of multichannel ECG data. Furthermore, in order to compress a signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.
On the star partition dimension of comb product of cycle and path
NASA Astrophysics Data System (ADS)
Alfarisi, Ridho; Darmaji
2017-08-01
Let G = (V, E) be a connected graph with vertex set V(G) and edge set E(G), and let S ⊆ V(G). Given an ordered partition Π = {S1, S2, S3, …, Sk} of the vertex set V of G, the representation of a vertex v ∈ V with respect to Π is the vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Si) represents the distance between the vertex v and the set Si, defined as d(v, Si) = min{d(v, x) | x ∈ Si}. A partition Π of V(G) is a resolving partition if different vertices of G have distinct representations, i.e., for every pair of vertices u, v ∈ V(G), r(u|Π) ≠ r(v|Π). The minimum k for which V(G) has a resolving k-partition is the partition dimension of G, denoted by pd(G). A resolving partition Π = {S1, S2, S3, …, Sk} is called a star resolving partition for G if each subgraph induced by Si, 1 ≤ i ≤ k, is a star. The minimum k for which there exists a star resolving partition of V(G) is the star partition dimension of G, denoted by spd(G). Finding the star partition dimension of G is classified as an NP-hard problem. In this paper, we determine the star partition dimension of the comb products of cycle and path, namely Cm ⊳ Pn and Pn ⊳ Cm, for n ≥ 2 and m ≥ 3.
Does History Repeat Itself? Wavelets and the Phylodynamics of Influenza A
Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.
2012-01-01
Unprecedented global surveillance of viruses will result in massive sequence data sets that require new statistical methods. These data sets press the limits of Bayesian phylogenetics as the high-dimensional parameters that comprise a phylogenetic tree increase the already sizable computational burden of these techniques. This burden often results in partitioning the data set, for example, by gene, and inferring the evolutionary dynamics of each partition independently, a compromise that results in stratified analyses that depend only on data within a given partition. However, parameter estimates inferred from these stratified models are likely strongly correlated, considering they rely on data from a single data set. To overcome this shortfall, we exploit the existing Monte Carlo realizations from stratified Bayesian analyses to efficiently estimate a nonparametric hierarchical wavelet-based model and learn about the time-varying parameters of effective population size that reflect levels of genetic diversity across all partitions simultaneously. Our methods are applied to complete genome influenza A sequences that span 13 years. We find that broad peaks and trends, as opposed to seasonal spikes, in the effective population size history distinguish individual segments from the complete genome. We also address hypotheses regarding intersegment dynamics within a formal statistical framework that accounts for correlation between segment-specific parameters. PMID:22160768
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC), and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy, and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC, and SA are 66.89%, 65.84%, and 65.48%, respectively, compared to an accuracy of 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide linguistic rules describing how the demographic parameters drive the risk of HIV.
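The granularisation step can be sketched as a search over cut points that maximizes a downstream classification accuracy. The hill-climbing loop below uses majority-class accuracy on synthetic data as a simple stand-in for the rough-set accuracy; GA and SA would perturb the same encoding with different move rules.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 300)                   # a demographic variable
y = (x + rng.normal(0, 15, 300)) > 50          # binary outcome (toy stand-in)

def accuracy(cuts):
    """Majority-class accuracy of the partition induced by the cut points,
    a simple proxy for the rough-set classification accuracy."""
    bins = np.digitize(x, np.sort(cuts))
    correct = 0
    for b in np.unique(bins):
        labels = y[bins == b]
        correct += max(labels.sum(), len(labels) - labels.sum())
    return correct / len(x)

# Hill climbing over the cut points, starting from equal-frequency bins
cuts = np.quantile(x, [0.25, 0.5, 0.75])
best = accuracy(cuts)
for _ in range(500):
    cand = cuts + rng.normal(0, 2.0, cuts.size)
    if accuracy(cand) > best:
        cuts, best = cand, accuracy(cand)
print(best, np.sort(cuts))
```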
Lost in the supermarket: Quantifying the cost of partitioning memory sets in hybrid search.
Boettcher, Sage E P; Drew, Trafton; Wolfe, Jeremy M
2018-01-01
The items on a memorized grocery list are not relevant in every aisle; for example, it is useless to search for the cabbage in the cereal aisle. It might be beneficial if one could mentally partition the list so only the relevant subset was active, so that vegetables would be activated in the produce section. In four experiments, we explored observers' abilities to partition memory searches. For example, if observers held 16 items in memory, but only eight of the items were relevant, would response times resemble a search through eight or 16 items? In Experiments 1a and 1b, observers were not faster for the partition set; however, they suffered relatively small deficits when "lures" (items from the irrelevant subset) were presented, indicating that they were aware of the partition. In Experiment 2 the partitions were based on semantic distinctions, and again, observers were unable to restrict search to the relevant items. In Experiments 3a and 3b, observers attempted to remove items from the list one trial at a time but did not speed up over the course of a block, indicating that they also could not limit their memory searches. Finally, Experiments 4a, 4b, 4c, and 4d showed that observers were able to limit their memory searches when a subset was relevant for a run of trials. Overall, observers appear to be unable or unwilling to partition memory sets from trial to trial, yet they are capable of restricting search to a memory subset that remains relevant for several trials. This pattern is consistent with a cost to switching between currently relevant memory items.
Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning
NASA Astrophysics Data System (ADS)
Schumacher, André; Haanpää, Harri
We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with
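The MWDS subroutine can be illustrated with the centralized greedy analogue of Chvátal's set-cover heuristic, in which each vertex "covers" its closed neighbourhood. The paper's contribution is a distributed, primal-dual version of this idea, so the sketch below conveys only the baseline intuition.

```python
import networkx as nx

def greedy_mwds(G, weight='w'):
    """Greedy minimum weight dominating set: repeatedly pick the vertex
    with the lowest weight per newly covered vertex (Chvatal-style price)."""
    uncovered = set(G.nodes)
    dominating = set()
    while uncovered:
        def price(v):
            gain = len((set(G[v]) | {v}) & uncovered)
            return G.nodes[v].get(weight, 1.0) / gain if gain else float('inf')
        v = min(G.nodes, key=price)
        dominating.add(v)
        uncovered -= set(G[v]) | {v}
    return dominating

G = nx.path_graph(7)
print(greedy_mwds(G))   # prints a small dominating set of the 7-path
```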
Raevsky, O A; Grigor'ev, V J; Raevskaja, O E; Schaper, K-J
2006-06-01
QSPR analyses of a data set containing experimental partition coefficients in the three systems octanol-water, water-gas, and octanol-gas for 98 chemicals have shown that any partition coefficient in the system 'gas phase/octanol/water' can be calculated by three different approaches: (1) from experimental partition coefficients obtained in the corresponding two other subsystems; however, in many cases these data may not be available. Therefore, approach (2) may be used, a traditional QSPR analysis based on, e.g., HYBOT descriptors (hydrogen bond acceptor and donor factors, SigmaCa and SigmaCd, together with polarisability alpha, a steric bulk effect descriptor) supplemented with substructural indicator variables. (3) A very promising approach is the combination of the similarity concept and QSPR based on HYBOT descriptors. In this approach, observed partition coefficients of the structurally nearest neighbours of a compound-of-interest are used, together with contributions arising from differences in alpha, SigmaCa, and SigmaCd values between the compound-of-interest and its nearest neighbour(s). In this investigation, highly significant relationships were obtained by approaches (1) and (3) for the octanol/gas-phase partition coefficient (log L_oct).
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, decision tree learners are powerful and easy to interpret. They employ a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable, continuing until some stopping criteria are met. The example here focuses on conditional inference trees, which incorporate tree-structured regression models into conditional inference procedures. Because a single grown tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are random sampling and the restricted set of input variables available for selection at each split. Finally, R functions to perform model-based recursive partitioning are introduced. This method incorporates recursive partitioning into conventional parametric model building.
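The paper demonstrates these steps in R; the sketch below mirrors the single-tree and random-forest steps using Python's scikit-learn as a stand-in, with an illustrative dataset and hyperparameters that are assumptions rather than the paper's settings.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Single tree: recursive binary partitioning, stopped by a depth criterion
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# Random forest: bootstrap samples plus restricted feature subsets per split,
# the two sources of diversity named in the abstract
forest = RandomForestClassifier(n_estimators=200, max_features='sqrt',
                                random_state=0).fit(X_tr, y_tr)
print(tree.score(X_te, y_te), forest.score(X_te, y_te))
```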
Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution
Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen
2014-01-01
Polydimethylsiloxane (PDMS) is commonly used as the coating polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and aqueous solution were compiled from literature sources. A correlation analysis of the partition coefficients was conducted to interpret the effect of physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated with the polarizability of the organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to predict the partition coefficients of the 61 organic compounds in the training set. The predictive ability of the empirical model was demonstrated on a test set of 26 chemicals not included in the training set. The empirical model, which applies straightforwardly calculated molecular descriptors to estimate the PDMS-water partition coefficient, will contribute to practical applications of the SPME technique. PMID:24534804
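The empirical model is a multiple linear regression over three descriptors. A minimal sketch of fitting and applying such a model follows; the descriptor values and log K numbers are fabricated placeholders for illustration only, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training rows: polarizability, molecular connectivity index,
# indicator variable (all values illustrative placeholders).
X_train = np.array([[10.2, 3.1, 0],
                    [12.5, 4.0, 0],
                    [ 8.9, 2.7, 1],
                    [14.1, 4.6, 1],
                    [11.3, 3.5, 0]])
log_k_train = np.array([2.1, 2.9, 1.5, 3.2, 2.4])   # log K(PDMS/water)

model = LinearRegression().fit(X_train, log_k_train)
# Predict for a compound held out of training (the 'test set' role)
print(model.predict(np.array([[9.8, 3.0, 1]])))
```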
Hardware Index to Set Partition Converter
2013-01-01
a Voxel-Based Filtering Algorithm for Mobile LIDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are partitioned in the xy-plane into a set of two-dimensional (2-D) blocks of a given block size, and the laser points within each block are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. A voxel-based upward-growing process is then performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. The voxel-based filtering algorithm is comprehensively discussed through analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
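The two-level spatial decomposition can be sketched with integer index arithmetic: each point gets a 2-D block index and a 3-D voxel index. The block and voxel sizes below are assumed values, and the octree layering and upward-growing logic are omitted.

```python
import numpy as np

def voxelize(points, block_size=50.0, voxel_size=0.5):
    """Assign each LiDAR point to a 2-D block (xy-plane) and a 3-D voxel,
    the two-level partition described in the abstract (octree omitted)."""
    block_idx = np.floor(points[:, :2] / block_size).astype(int)
    voxel_idx = np.floor(points / voxel_size).astype(int)
    return block_idx, voxel_idx

pts = np.random.default_rng(0).uniform(0, 200, size=(10000, 3))  # toy cloud
blocks, voxels = voxelize(pts)
print(len(np.unique(blocks, axis=0)), "blocks,",
      len(np.unique(voxels, axis=0)), "occupied voxels")
```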
A knowledge based system for scientific data visualization
NASA Technical Reports Server (NTRS)
Senay, Hikmet; Ignatius, Eve
1992-01-01
A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations that decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set using principles of visual perception, the system also allows users to interactively modify the design and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques applicable in the earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis, and medical imaging.
Finding and testing network communities by lumped Markov chains.
Piccardi, Carlo
2011-01-01
Identifying communities (or clusters), namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. Yet there is a lack of formal criteria for defining communities and for testing their significance. We propose a sharp definition that is based on a quality threshold. By means of a lumped Markov chain model of a random walker, a quality measure called the "persistence probability" is associated with a cluster, which is then defined as an "α-community" if this probability is not smaller than α. Consistently, a partition composed of α-communities is an "α-partition." These definitions turn out to be very effective for finding and testing communities. If a set of candidate partitions is available, setting the desired α-level allows one to immediately select the α-partition with the finest decomposition. Simultaneously, the persistence probabilities quantify the quality of each single community. Given its ability to individually assess each single cluster, this approach can also disclose single well-defined communities even in networks that overall do not possess a definite clusterized structure.
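The persistence probability of a cluster c is the probability that a random walker, distributed according to the stationary distribution restricted to c, remains in c after one step. A minimal numpy sketch on a toy two-community chain:

```python
import numpy as np

def persistence_probability(P, pi, cluster):
    """u_c = sum_{i in c} (pi_i / pi_c) * sum_{j in c} P_ij : the one-step
    probability of staying inside the cluster."""
    c = np.asarray(cluster)
    pi_c = pi[c].sum()
    return (pi[c] @ P[np.ix_(c, c)].sum(axis=1)) / pi_c

# Toy 4-node chain with two obvious communities {0,1} and {2,3}
P = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 0.9],
              [0.1, 0.0, 0.9, 0.0]])
# Stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(persistence_probability(P, pi, [0, 1]))  # near 1 => strong community
```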
Boundaries on Range-Range Constrained Admissible Regions for Optical Space Surveillance
NASA Astrophysics Data System (ADS)
Gaebler, J. A.; Axelrad, P.; Schumacher, P. W., Jr.
We propose a new type of admissible-region analysis for track initiation in multi-satellite problems when apparent angles measured at known stations are the only observable. The goal is to create an efficient and parallelizable algorithm for computing initial candidate orbits for a large number of new targets. It takes at least three angles-only observations to establish an orbit by traditional means; one is thus faced with a problem that requires N-choose-3 sets of calculations to test every possible combination of the N observations. An alternative approach is to reduce the number of combinations by making hypotheses of the range to a target along the observed line-of-sight. If realistic bounds on the range are imposed, consistent with a given partition of the space of orbital elements, a pair of range hypotheses can be evaluated via Lambert's method to find candidate orbits for that partition, which requires N-choose-2 times M-choose-2 combinations, where M is the average number of range hypotheses per observation. The contribution of this work is a set of constraints that establish bounds on the range-range hypothesis region for a given element-space partition, thereby minimizing M. Two effective constraints were identified, which together constrain the hypothesis region in range-range space to nearly that of the true admissible region based on an orbital partition. The first constraint is based on the geometry of the vacant orbital focus. The second constraint is based on time-of-flight and Lagrange's form of Kepler's equation. A complete and efficient parallelization of the problem is possible with this approach because the element partitions can be arbitrary and can be handled independently of each other.
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
On the partition dimension of comb product of path and complete graph
NASA Astrophysics Data System (ADS)
Darmaji, Alfarisi, Ridho
2017-08-01
Let G(V, E) be a connected graph with vertex set V(G) and edge set E(G), and let S ⊆ V(G). Given an ordered partition Π = {S1, S2, S3, …, Sk} of the vertex set V of G, the representation of a vertex v ∈ V with respect to Π is the vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Si) represents the distance between the vertex v and the set Si, defined as d(v, Si) = min{d(v, x) | x ∈ Si}. A partition Π of V(G) is a resolving partition if different vertices of G have distinct representations, i.e., for every pair of vertices u, v ∈ V(G), r(u|Π) ≠ r(v|Π). The minimum k for which V(G) has a resolving k-partition is the partition dimension of G, denoted by pd(G). Finding the partition dimension of G is classified as an NP-hard problem. In this paper, we determine the partition dimension of the comb product of path and complete graph. The results show that, for the comb product of the complete graph Km and the path Pn, pd(Km ⊳ Pn) = m for m ≥ 3 and n ≥ 2, and pd(Pn ⊳ Km) = m for m ≥ 3, n ≥ 2, and m ≥ n.
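The definitions above translate directly into a brute-force check: compute each vertex's distance vector to the partition classes and test distinctness. The sketch below (networkx) verifies resolving partitions and finds pd(G) for small graphs by exhaustive search; it is illustrative only, since the problem is NP-hard.

```python
import itertools
import networkx as nx

def is_resolving_partition(G, partition):
    """r(v|Pi) is the tuple of d(v, Si) = min over x in Si of d(v, x);
    Pi resolves G iff all vertices receive distinct tuples."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    rep = {v: tuple(min(dist[v][x] for x in S) for S in partition)
           for v in G.nodes}
    return len(set(rep.values())) == len(rep)

def partition_dimension(G):
    """Smallest k admitting a resolving k-partition (exponential search)."""
    nodes = list(G.nodes)
    for k in range(2, len(nodes) + 1):
        for labels in itertools.product(range(k), repeat=len(nodes)):
            if set(labels) != set(range(k)):
                continue                        # skip empty classes
            parts = [[n for n, l in zip(nodes, labels) if l == i]
                     for i in range(k)]
            if is_resolving_partition(G, parts):
                return k
    return len(nodes)

print(partition_dimension(nx.path_graph(4)))    # pd(P4) = 2
```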
Fayyoumi, Ebaa; Oommen, B John
2009-10-01
We consider the microaggregation problem (MAP), which involves partitioning a set of individual records in a microdata file into a number of mutually exclusive and exhaustive groups. This problem, which seeks the best partition of the microdata file, is known to be NP-hard and has been tackled using many heuristic solutions. In this paper, we present the first reported fixed-structure-stochastic-automata-based solution to this problem. The newly proposed method leads to a lower value of the information loss (IL), obtains a better tradeoff between the IL and the disclosure risk (DR) when compared with state-of-the-art methods, and leads to a superior value of the scoring index, a criterion combining the IL and the DR. The scheme has been implemented, tested, and evaluated for different real-life and simulated data sets. The results clearly demonstrate the applicability of learning automata to the MAP and its ability to yield a solution with the best tradeoff between IL and DR when compared with the state of the art.
Modeling of adipose/blood partition coefficient for environmental chemicals.
Papadaki, K C; Karakitsios, S P; Sarigiannis, D A
2017-12-01
A Quantitative Structure Activity Relationship (QSAR) model was developed to predict the adipose/blood partition coefficient of environmental chemical compounds. The first step of QSAR modeling was the collection of inputs. Input data included the experimental values of the adipose/blood partition coefficient and two sets of molecular descriptors for 67 organic chemical compounds: (a) descriptors from the Linear Free Energy Relationship (LFER) and (b) PaDEL descriptors. The data were split into training and prediction sets and analysed using two statistical methods: Genetic Algorithm based Multiple Linear Regression (GA-MLR) and Artificial Neural Networks (ANN). The models with LFER and PaDEL descriptors, coupled with ANN, produced satisfying performance results; the fitting performance (R²) of the models using LFER and PaDEL descriptors was 0.94 and 0.96, respectively. The Applicability Domain (AD) of the models was assessed, and the models were then applied to a large number of chemical compounds with unknown values of the adipose/blood partition coefficient. In conclusion, the proposed models were checked for fitting, validity, and applicability. It was demonstrated that they are stable, reliable, and capable of predicting the adipose/blood partition coefficient of "data-poor" chemical compounds that fall within the applicability domain. Copyright © 2017. Published by Elsevier Ltd.
Partitioning behavior of aromatic components in jet fuel into diverse membrane-coated fibers.
Baynes, Ronald E; Xia, Xin-Rui; Barlow, Beth M; Riviere, Jim E
2007-11-01
Jet fuel components are known to partition into skin and produce occupational irritant contact dermatitis (OICD) and potentially adverse systemic effects. The purpose of this study was to determine how jet fuel components partition (1) from solvent mixtures into diverse membrane-coated fibers (MCFs) and (2) from biological media into MCFs to predict tissue distribution. Three diverse MCFs, polydimethylsiloxane (PDMS, lipophilic), polyacrylate (PA, polarizable), and carbowax (CAR, polar), were selected to simulate the physicochemical properties of skin in vivo. Following an appropriate equilibrium time between the MCF and dosing solutions, the MCF was injected directly into a gas chromatograph/mass spectrometer (GC-MS) to quantify the amount that partitioned into the membrane. Three vehicles (water, 50% ethanol-water, and albumin-containing media solution) were studied for selected jet fuel components. The more hydrophobic the component, the greater was the partitioning into the membranes across all MCF types, especially from water. The presence of ethanol as a surrogate solvent resulted in significantly reduced partitioning into the MCFs with discernible differences across the three fibers based on their chemistries. The presence of a plasma substitute (media) also reduced partitioning into the MCF, with the CAR MCF system being better correlated to the predicted partitioning of aromatic components into skin. This study demonstrated that a single or multiple set of MCF fibers may be used as a surrogate for octanol/water systems and skin to assess partitioning behavior of nine aromatic components frequently formulated with jet fuels. These diverse inert fibers were able to assess solute partitioning from a blood substitute such as media into a membrane possessing physicochemical properties similar to human skin. This information may be incorporated into physiologically based pharmacokinetic (PBPK) models to provide a more accurate assessment of tissue dosimetry of related toxicants.
Partitioning of polar and non-polar neutral organic chemicals into human and cow milk.
Geisler, Anett; Endo, Satoshi; Goss, Kai-Uwe
2011-10-01
The aim of this work was to develop a predictive model for milk/water partition coefficients of neutral organic compounds. Batch experiments were performed for 119 diverse organic chemicals in human milk and raw and processed cow milk at 37°C. No differences (<0.3 log units) in the partition coefficients of these types of milk were observed. The polyparameter linear free energy relationship model fit the calibration data well (SD=0.22 log units). An experimental validation data set including hormones and hormone active compounds was predicted satisfactorily by the model. An alternative modelling approach based on log K(ow) revealed a poorer performance. The model presented here provides a significant improvement in predicting enrichment of potentially hazardous chemicals in milk. In combination with physiologically based pharmacokinetic modelling this improvement in the estimation of milk/water partitioning coefficients may allow a better risk assessment for a wide range of neutral organic chemicals. Copyright © 2011 Elsevier Ltd. All rights reserved.
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). A BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible, but we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
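A sketch of the mechanism: a BDA asks a question about one codon position and, depending on the answer, a second question about another position, yielding one bit; applying several BDAs in sequence refines the 64 codons into more classes. The parameter encoding and the specific nucleotide sets below are illustrative assumptions, not the paper's definitions.

```python
from itertools import product

CODONS = [''.join(c) for c in product('UCAG', repeat=3)]

def bda(codon, i1, q1, i2, q2, q3):
    """One binary dichotomic step (hedged reading of the BDA idea):
    inspect position i1; depending on whether that base is in q1,
    test position i2 against q2 or q3. Returns 0 or 1."""
    if codon[i1] in q1:
        return int(codon[i2] in q2)
    return int(codon[i2] in q3)

# Apply two hypothetical BDAs in sequence: each contributes one bit,
# so the 64 codons split into up to four classes.
bdas = [(0, 'CG', 1, 'CU', 'AG'),     # parameters are illustrative
        (1, 'AC', 2, 'CG', 'UA')]
classes = {}
for codon in CODONS:
    key = tuple(bda(codon, *p) for p in bdas)
    classes.setdefault(key, []).append(codon)
print({k: len(v) for k, v in classes.items()})
```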
K-Partite RNA Secondary Structures
NASA Astrophysics Data System (ADS)
Jiang, Minghui; Tejada, Pedro J.; Lasisi, Ramoni O.; Cheng, Shanhong; Fechser, D. Scott
RNA secondary structure prediction is a fundamental problem in structural bioinformatics. The prediction problem is difficult because RNA secondary structures may contain pseudoknots formed by crossing base pairs. We introduce k-partite secondary structures as a simple classification of RNA secondary structures with pseudoknots. An RNA secondary structure is k-partite if it is the union of k pseudoknot-free sub-structures. Most known RNA secondary structures are either bipartite or tripartite. We show that there exists a constant number k such that any secondary structure can be modified into a k-partite secondary structure with approximately the same free energy. This offers a partial explanation of the prevalence of k-partite secondary structures with small k. We give a complete characterization of the computational complexities of recognizing k-partite secondary structures for all k ≥ 2, and show that this recognition problem is essentially the same as the k-colorability problem on circle graphs. We present two simple heuristics, iterated peeling and first-fit packing, for finding k-partite RNA secondary structures. For maximizing the number of base pair stackings, our iterated peeling heuristic achieves a constant approximation ratio of at most k for 2 ≤ k ≤ 5, and at most 6/(1 - (1 - 6/k)^k) ≤ 6/(1 - e^{-6}) < 6.01491 for k ≥ 6. Experiments on sequences from PseudoBase show that our first-fit packing heuristic outperforms the leading method HotKnots in predicting RNA secondary structures with pseudoknots. Source code, data set, and experimental results are available at
A Novel Coarsening Method for Scalable and Efficient Mesh Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, A; Hysom, D; Gunney, B
2010-12-02
In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. It reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a way similar to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing the simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V, E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph into k disjoint groups such that each group contains roughly equal numbers of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can easily be seen, for example, in different iterative methods to solve a sparse system of linear equations. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry in the matrix, to allocate groups of vertices to processors in such a way that much of the matrix-vector multiplication can be performed locally on each processor, thereby minimizing communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high quality partitions. This is an extremely challenging task, as to scale to that level the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) to develop a new scalable graph partitioning method with good load balancing and communication reduction capability; and (2) to study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a way similar to the conventional brick laying technique, which reduces the number of neighboring blocks each block needs to communicate with. The contributions of this research are as follows: (1) we have developed a novel method that scales to a really large problem size while producing high quality mesh partitions; (2) we measured the performance and scalability of the proposed method on a machine of massive size using a set of actual large complex data sets, where we have scaled to a mesh with 110 million zones; to the best of our knowledge, this is the largest complex mesh to which a partitioning method has been successfully applied; and (3) we have shown that the proposed method can reduce the number of edge cuts by as much as 65%.
Allan Variance Calculation for Nonuniformly Spaced Input Data
2015-01-01
τ (tau). First, the set of gyro values is partitioned into bins of duration τ. For example, if the sampling duration τ is 2 sec and there are 4,000 … Variance Calculation: for each value of τ, the conventional AV calculation partitions the gyro data sets into bins with approximately τ/Δt … value of Δt. Therefore, a new way must be found to partition the gyro data sets into bins. The basic concept behind the modified AV calculation is …
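The snippet above is fragmentary, but the conventional binned Allan variance calculation it refers to is standard. Below is a hedged Python sketch of that conventional calculation, assuming uniformly spaced samples; `rates`, `dt`, and `tau` are illustrative names, not the report's code.

```python
# Conventional (non-overlapped) Allan variance for uniformly spaced gyro data:
# partition the samples into bins of duration tau, average each bin, and take
# half the mean squared difference of successive bin averages.
import numpy as np

def allan_variance(rates, dt, tau):
    m = int(round(tau / dt))          # samples per bin of duration tau
    n_bins = len(rates) // m
    if n_bins < 2:
        raise ValueError("need at least two bins of duration tau")
    bins = np.reshape(rates[:n_bins * m], (n_bins, m)).mean(axis=1)
    return 0.5 * np.mean(np.diff(bins) ** 2)

# Example: white-noise rates, dt = 0.5 s, tau = 2 s -> 4-sample bins.
rng = np.random.default_rng(0)
print(allan_variance(rng.normal(size=4000), 0.5, 2.0))
```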
Effects of partitioning and scheduling sparse matrix factorization on communication and load balance
NASA Technical Reports Server (NTRS)
Venugopal, Sesh; Naik, Vijay K.
1991-01-01
A block-based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed-memory systems. Using experimental results, this technique is analyzed for communication and load-imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap-mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show a tradeoff between communication and load balance: the block-based method results in lower communication cost, whereas the wrap-mapped scheme gives better load balance.
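For illustration, a small Python sketch (our rendering of the two schemes, not the paper's code) contrasting wrap-mapped and block-based column assignment for an n-column matrix on p processors:

```python
def wrap_mapped(n, p):
    """Column j goes to processor j mod p (fine-grained, better balance)."""
    return [j % p for j in range(n)]

def block_mapped(n, p, b):
    """Columns are grouped into blocks of b consecutive columns; block i
    goes to processor i mod p (coarser balance, lower communication)."""
    return [(j // b) % p for j in range(n)]

print(wrap_mapped(8, 2))      # [0, 1, 0, 1, 0, 1, 0, 1]
print(block_mapped(8, 2, 2))  # [0, 0, 1, 1, 0, 0, 1, 1]
```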
NASA Astrophysics Data System (ADS)
McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.
2017-03-01
We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibrated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.
Hall, Matthew; Woolhouse, Mark; Rambaut, Andrew
2015-01-01
The use of genetic data to reconstruct the transmission tree of infectious disease epidemics and outbreaks has been the subject of an increasing number of studies, but previous approaches have usually either made assumptions that are not fully compatible with phylogenetic inference, or, where they have based inference on a phylogeny, have employed a procedure that requires this tree to be fixed. At the same time, the coalescent-based models of the pathogen population that are employed in the methods usually used for time-resolved phylogeny reconstruction are a considerable simplification of the epidemic process, as they assume that pathogen lineages mix freely. Here, we contribute a new method that is simultaneously a phylogeny reconstruction method for isolates taken from an epidemic, and a procedure for transmission tree reconstruction. We observe that, if one or more samples are taken from each host in an epidemic or outbreak and these are used to build a phylogeny, a transmission tree is equivalent to a partition of the set of nodes of this phylogeny, such that each partition element is a set of nodes that is connected in the full tree and contains all the tips corresponding to samples taken from one and only one host. We then implement a Markov chain Monte Carlo (MCMC) procedure for simultaneous sampling from the spaces of both trees, utilising a newly-designed set of phylogenetic tree proposals that also respect node partitions. We calculate the posterior probability of these partitioned trees based on a model that acknowledges the population structure of an epidemic by employing an individual-based disease transmission model and a coalescent process taking place within each host. We demonstrate our method, first using simulated data, and then with sequences taken from the H7N7 avian influenza outbreak that occurred in the Netherlands in 2003. We show that it is superior to established coalescent methods for reconstructing the topology and node heights of the phylogeny and performs well for transmission tree reconstruction when the phylogeny is well-resolved by the genetic data, but caution that this will often not be the case in practice and that existing genetic and epidemiological data should be used to configure such analyses whenever possible. This method is available for use by the research community as part of BEAST, one of the most widely-used packages for reconstruction of dated phylogenies. PMID:26717515
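The paper's central observation, that a transmission tree corresponds to a node partition whose elements are connected and host-consistent, can be checked mechanically. Below is a hedged Python sketch under assumed data structures; `children`, `tree_root`, `host_of_tip`, and `element_of` are hypothetical names, not BEAST's API.

```python
# Check whether a labeling of phylogeny nodes is a valid transmission-tree
# partition: each element holds the tips of exactly one host, and each
# element is connected in the tree (i.e., has exactly one topmost node).

def is_valid_transmission_partition(children, tree_root, host_of_tip, element_of):
    # 1. Each partition element must contain the tips of one and only one host,
    # and every element must contain at least one sampled tip.
    hosts_by_element = {}
    for tip, host in host_of_tip.items():
        hosts_by_element.setdefault(element_of[tip], set()).add(host)
    if any(len(h) != 1 for h in hosts_by_element.values()):
        return False
    if set(element_of.values()) != set(hosts_by_element):
        return False
    # 2. Connectivity: in a tree, an element is connected exactly when it has
    # a single topmost node (the tree root, or a node whose parent belongs to
    # a different element).
    tops = {element_of[tree_root]: 1}
    for parent, kids in children.items():
        for kid in kids:
            if element_of[kid] != element_of[parent]:
                tops[element_of[kid]] = tops.get(element_of[kid], 0) + 1
    return all(n == 1 for n in tops.values())
```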
ESTimating plant phylogeny: lessons from partitioning
de la Torre, Jose EB; Egan, Mary G; Katari, Manpreet S; Brenner, Eric D; Stevenson, Dennis W; Coruzzi, Gloria M; DeSalle, Rob
2006-01-01
Background While Expressed Sequence Tags (ESTs) have proven a viable and efficient way to sample genomes, particularly those for which whole-genome sequencing is impractical, phylogenetic analysis using ESTs remains difficult. Sequencing errors and orthology determination are the major problems when using ESTs as a source of characters for systematics. Here we develop methods to incorporate EST sequence information in a simultaneous analysis framework to address controversial phylogenetic questions regarding the relationships among the major groups of seed plants. We use an automated, phylogenetically derived approach to orthology determination called OrthologID to generate a phylogeny based on 43 process partitions, many of which are derived from ESTs, and examine several measures of support to assess the utility of EST data for phylogenies. Results A maximum parsimony (MP) analysis resulted in a single tree with relatively high support at all nodes in the tree despite rampant conflict among trees generated from the separate analysis of individual partitions. In a comparison of broader-scale groupings based on cellular compartment (i.e., chloroplast, mitochondrial, or nuclear) or function, only the nuclear partition tree (based largely on EST data) was found to be topologically identical to the tree based on the simultaneous analysis of all data. Despite topological conflict among the broader-scale groupings examined, only the tree based on morphological data showed statistically significant differences. Conclusion Based on the amount of character support contributed by EST data, which make up a majority of the nuclear data set, and the lack of conflict of the nuclear data set with the simultaneous analysis tree, we conclude that the inclusion of EST data does provide a viable and efficient approach to address phylogenetic questions within a parsimony framework on a genomic scale, if problems of orthology determination and potential sequencing errors can be overcome. In addition, approaches that examine conflict and support in a simultaneous analysis framework allow for a more precise understanding of the evolutionary history of individual process partitions and may be a novel way to understand functional aspects of different kinds of cellular classes of gene products. PMID:16776834
Fragment-based prediction of skin sensitization using recursive partitioning
NASA Astrophysics Data System (ADS)
Lu, Jing; Zheng, Mingyue; Wang, Yong; Shen, Qiancheng; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian
2011-09-01
Skin sensitization is an important toxic endpoint in the risk assessment of chemicals. In this paper, structure-activity relationship analysis was performed on the skin sensitization potential of 357 compounds with local lymph node assay data. Structural fragments were extracted by GASTON (GrAph/Sequence/Tree extractiON) from the training set. Eight fragments with accuracy significantly higher than 0.73 (p < 0.1) were retained to make up an indicator fragment descriptor. This fragment descriptor and eight other physicochemical descriptors closely related to the endpoint were used to construct a recursive partitioning tree (RP tree) for classification. The balanced accuracies of the training set, test set I, and test set II in the leave-one-out model were 0.846, 0.800, and 0.809, respectively. The results highlight that the fragment-based RP tree is a preferable method for identifying skin sensitizers. Moreover, the selected fragments provide useful structural information for exploring sensitization mechanisms, and the RP tree creates a graphic tree to identify the most important properties associated with skin sensitization. These can provide guidance for the design of drugs with lower sensitization levels.
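As a rough illustration of a recursive-partitioning workflow like the one described (not the authors' implementation), here is a scikit-learn sketch; the random data stands in for the 357-compound descriptor table, and all names are hypothetical.

```python
# Classification tree (recursive partitioning) with leave-one-out validation
# and balanced accuracy, mirroring the evaluation style described above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(357, 9))        # 1 fragment flag + 8 descriptors (toy)
y = rng.integers(0, 2, size=357)     # sensitizer / non-sensitizer (toy)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
pred = cross_val_predict(tree, X, y, cv=LeaveOneOut())
print("balanced accuracy:", balanced_accuracy_score(y, pred))
```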
Certificate Revocation Using Fine Grained Certificate Space Partitioning
NASA Astrophysics Data System (ADS)
Goyal, Vipul
A new certificate revocation system is presented. The basic idea is to divide the certificate space into several partitions, the number of partitions being dependent on the PKI environment. Each partition contains the status of a set of certificates. A partition may either expire or be renewed at the end of a time slot. This is done efficiently using hash chains.
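A minimal sketch of how a hash chain supports per-slot renewal of a partition's status (our assumption of the details; the paper's exact construction may differ): the CA releases one preimage per time slot, and verifiers only need the chain's anchor.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, slots: int):
    """chain[i] = H^i(seed); chain[slots] is the public anchor."""
    chain = [seed]
    for _ in range(slots):
        chain.append(H(chain[-1]))
    return chain

def verify(token: bytes, anchor: bytes, slots_elapsed: int) -> bool:
    """A token for slot i hashes forward i times to the anchor."""
    for _ in range(slots_elapsed):
        token = H(token)
    return token == anchor

chain = make_chain(b"per-partition secret seed", slots=365)
anchor = chain[-1]                       # published with the partition
print(verify(chain[365 - 31], anchor, 31))  # True: partition renewed at slot 31
```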
The partition dimension of cycle books graph
NASA Astrophysics Data System (ADS)
Santoso, Jaya; Darmaji
2018-03-01
Let G be a nontrivial and connected graph with vertex set V(G), edge set E(G), and S ⊆ V(G). For v ∈ V(G), the distance between v and S is d(v, S) = min{d(v, x) | x ∈ S}. For an ordered partition ∏ = {S1, S2, S3, …, Sk} of V(G), the representation of v with respect to ∏ is defined by r(v|∏) = (d(v, S1), d(v, S2), …, d(v, Sk)). The partition ∏ is called a resolving partition of G if the representations of all vertices are distinct. The partition dimension pd(G) is the smallest integer k such that G has a resolving partition with k members. In this research, we determine the partition dimension of the cycle books graph B_{Cr,m}, the graph consisting of m copies of the cycle Cr sharing a common path P2. It is shown that the partition dimension of the cycle books graph pd(B_{C3,m}) is 3 for m = 2, 3, and m for m ≥ 4; pd(B_{C4,m}) is 3 + 2k for m = 3k + 2, 4 + 2(k − 1) for m = 3k + 1, and 3 + 2(k − 1) for m = 3k; and pd(B_{C5,m}) is m + 1.
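From these definitions, pd(G) can be computed by brute force for small graphs. A sketch (ours, using networkx; exponential in |V|, so for illustration only):

```python
# Brute-force partition dimension straight from the definition: search
# ordered partitions into k classes for increasing k until the vertex
# representations r(v|Pi) are all distinct.
from itertools import product
import networkx as nx

def partition_dimension(G):
    d = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G)
    for k in range(2, len(nodes) + 1):
        for labels in product(range(k), repeat=len(nodes)):
            if len(set(labels)) != k:
                continue  # require exactly k nonempty classes
            classes = [[v for v, l in zip(nodes, labels) if l == c]
                       for c in range(k)]
            rep = {v: tuple(min(d[v][x] for x in S) for S in classes)
                   for v in nodes}
            if len(set(rep.values())) == len(nodes):
                return k
    return len(nodes)

print(partition_dimension(nx.cycle_graph(5)))  # pd(C5) = 3
```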
Partitioning Ocean Wave Spectra Obtained from Radar Observations
NASA Astrophysics Data System (ADS)
Delaye, Lauriane; Vergely, Jean-Luc; Hauser, Daniele; Guitton, Gilles; Mouche, Alexis; Tison, Celine
2016-08-01
2D wave spectra of ocean waves can be partitioned into several wave components to better characterize the scene. We present here two methods of component detection: one based on a watershed algorithm and the other based on a Bayesian approach. We tested both methods on a set of simulated data for SWIM, the Ku-band real aperture radar embarked on the CFOSAT (China-France Oceanography Satellite) mission, whose launch is planned for mid-2018. We present the results and the limits of both approaches and show that the Bayesian method can also be applied to other kinds of wave spectra observations, such as those obtained with KuROS, an airborne radar wave spectrometer.
Foreign Language Analysis and Recognition (FLARe)
2016-10-08
… Character Error Rates (CERs) were obtained with each feature set: (1) 19.2%, (2) 17.3%, and (3) 15.3%. Based on these results, a GMM-HMM speech recognition system … These systems were evaluated on the HUB4 and HKUST test partitions. Table 7 shows the CER obtained on each test set. Whereas including the HKUST data …
Toropov, A A; Toropova, A P; Raska, I
2008-04-01
The simplified molecular input line entry system (SMILES) has been utilized in constructing quantitative structure-property relationships (QSPR) for the octanol/water partition coefficient of vitamins and organic compounds of different classes using optimal descriptors. Statistical characteristics of the best model (vitamins) are the following: n=17, R²=0.9841, s=0.634, F=931 (training set); n=7, R²=0.9928, s=0.773, F=690 (test set). Using this approach to model the octanol/water partition coefficient for a set of organic compounds gives a model that is statistically characterized by n=69, R²=0.9872, s=0.156, F=5184 (training set) and n=70, R²=0.9841, s=0.179, F=4195 (test set).
A Study of Energy Partitioning Using A Set of Related Explosive Formulations
NASA Astrophysics Data System (ADS)
Lieber, Mark; Foster, Joseph C., Jr.; Stewart, D. Scott
2011-06-01
Condensed phase high explosives convert potential energy stored in the electro-magnetic field structure of complex molecules to kinetic energy during the detonation process. This energy is manifest in the internal thermodynamic energy and the translational flow of the products. Historically, the explosive design problem has focused on intramolecular stoichiometry providing prompt reactions based on transport physics at the molecular scale. Modern material design has evolved to approaches that employ intermolecular ingredients to alter the spatial and temporal distribution of energy release. CHEETA has been used to produce data for a set of fictitious explosive formulations based on C-4 to study the partitioning of the available energy between internal and flow energy in the detonation. The equation of state information from CHEETA has been used in ALE3D to develop an understanding of the relationship between variations in the formulation parameters and the internal energy cycle in the products.
NASA Astrophysics Data System (ADS)
Chen, Naijin
2013-03-01
The Level Based Partitioning (LBP), Cluster Based Partitioning (CBP), and Enhanced Static List (ESL) temporal partitioning algorithms, based on adjacency matrix and adjacency list representations, are designed and implemented in this paper. The partitioning time and memory occupation of the three algorithms are also compared. Experimental results show that the LBP algorithm has the shortest partitioning time and better parallel character; as far as memory occupation and partitioning time are concerned, the algorithms based on the adjacency list have shorter partitioning times and smaller memory occupation.
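For context, a quick sketch of the two graph representations being compared (generic Python, not the paper's implementation): an adjacency matrix uses O(V²) memory regardless of density, while an adjacency list uses O(V + E), which is why list-based variants tend to win on both memory and time for sparse graphs.

```python
def to_matrix(n, edges):
    m = [[0] * n for _ in range(n)]
    for u, v in edges:
        m[u][v] = m[v][u] = 1
    return m                      # O(n^2) entries even for sparse graphs

def to_adj_list(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj                    # O(n + e) entries

edges = [(0, 1), (1, 2)]
print(to_matrix(4, edges))
print(to_adj_list(4, edges))
```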
Wang, Li Kun; Heng, Paul Wan Sia; Liew, Celine Valeria
2015-04-01
Bottom spray fluid-bed coating is a common technique for coating multiparticulates. Under the quality-by-design framework, particle recirculation within the partition column is one of the main variability sources affecting particle coating and coat uniformity. However, the occurrence and mechanism of particle recirculation within the partition column of the coater are not well understood. The purpose of this study was to visualize and define particle recirculation within the partition column. Based on different combinations of partition gap setting, air accelerator insert diameter, and particle size fraction, particle movements within the partition column were captured using a high-speed video camera. The particle recirculation probability and voidage information were mapped using a visiometric process analyzer. High-speed images showed that particles contributing to the recirculation phenomenon were behaving as clustered colonies. Fluid dynamics analysis indicated that particle recirculation within the partition column may be attributed to the combined effect of cluster formation and drag reduction. Both visiometric process analysis and particle coating experiments showed that smaller particles had greater propensity toward cluster formation than larger particles. The influence of cluster formation on coating performance and possible solutions to cluster formation were further discussed. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Rational design of polymer-based absorbents: application to the fermentation inhibitor furfural.
Nwaneshiudu, Ikechukwu C; Schwartz, Daniel T
2015-01-01
Reducing the amount of water-soluble fermentation inhibitors like furfural is critical for downstream bio-processing steps to biofuels. A theoretical approach for tailoring absorption polymers to reduce these pretreatment contaminants would be useful for optimal bioprocess design. Experiments were performed to measure aqueous furfural partitioning into polymer resins of 5 bisphenol A diglycidyl ether (epoxy) and polydimethylsiloxane (PDMS). Experimentally measured partitioning of furfural between water and PDMS, the more hydrophobic polymer, showed poor performance, with the logarithm of PDMS-to-water partition coefficient falling between -0.62 and -0.24 (95% confidence). In contrast, the fast setting epoxy was found to effectively partition furfural with the logarithm of the epoxy-to-water partition coefficient falling between 0.41 and 0.81 (95% confidence). Flory-Huggins theory is used to predict the partitioning of furfural into diverse polymer absorbents and is useful for predicting these results. We show that Flory-Huggins theory can be adapted to guide the selection of polymer adsorbents for the separation of low molecular weight organic species from aqueous solutions. This work lays the groundwork for the general design of polymers for the separation of a wide range of inhibitory compounds in biomass pretreatment streams.
Snyder, David A; Montelione, Gaetano T
2005-06-01
An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains"--locally well-defined regions that are not aligned in global superimpositions--complicate RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to determine partitioning of the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.
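For reference, the standard superimposed-RMSD computation that such precision measures build on is the Kabsch algorithm; the sketch below shows that generic computation only, not the authors' core-atom selection or kurtosis-based order parameter.

```python
# Optimal-superimposition RMSD of two point sets via the Kabsch algorithm.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of point sets P, Q (n x 3) after optimal rigid superimposition."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    if np.linalg.det(V @ Wt) < 0:      # avoid an improper rotation (reflection)
        V[:, -1] = -V[:, -1]
    R = V @ Wt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

P = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])
Q = P @ np.array([[0., 1, 0], [-1, 0, 0], [0, 0, 1]])  # P rotated 90 degrees
print(round(kabsch_rmsd(P, Q), 6))  # 0.0
```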
NASA Astrophysics Data System (ADS)
Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter
2011-12-01
At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in R^n, n ∈ N, combined with smooth convex partition of unity for several general types of partitions of these domains were proposed in [1]. All of these constructions were based on a new type of B-splines, proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we shall provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these two bases are well-known; the new part of the construction is the combined use of these bases for the derivation of a new basis which enjoys having all above-said interpolation and unity partition properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases in the construction; now we shall put the main emphasis on the case when these bases consist of tensor-product B-splines. This selection provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction becomes a far-going extension of tensor-product constructions. We shall provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we shall consider also replacement of ERBS with simpler generalized ERBS (GERBS) [2], namely, their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS). One advantage of using BFBS instead of ERBS is the simplified computation, since BFBS are piecewise polynomial, which ERBS are not. One disadvantage of using BFBS in the place of ERBS in this construction is that the necessary selection of the degree of BFBS imposes constraints on the maximal possible multiplicity of the Hermite interpolation.
Target Detection and Classification Using Seismic and PIR Sensors
2012-06-01
… time series analysis via wavelet-based partitioning," Signal Process. … In this regard, this paper presents a wavelet-based method for target detection and classification. The proposed method has been validated on data sets of … The work reported in this paper makes use of a wavelet-based feature extraction method, called Symbolic Dynamic Filtering (SDF) [12]–[14]. The …
Wang, Thanh; Han, Shanlong; Yuan, Bo; Zeng, Lixi; Li, Yingming; Wang, Yawei; Jiang, Guibin
2012-12-01
Short chain chlorinated paraffins (SCCPs) are semi-volatile chemicals that are considered persistent in the environment, potentially toxic, and subject to long-range transport. This study investigates the concentrations and gas-particle partitioning of SCCPs at an urban site in Beijing during summer and wintertime. The total atmospheric SCCP levels ranged from 1.9 to 33.0 ng/m³ during wintertime. Significantly higher levels were found during the summer (range 112-332 ng/m³). The average fraction of total SCCPs in the particle phase (ϕ) was 0.67 during wintertime but decreased significantly during the summer (ϕ = 0.06). The ten- and eleven-carbon chain homologues with five to eight chlorine atoms were the predominant SCCP formula groups in air. Significant linear correlations were found between the gas-particle partition coefficients and the predicted subcooled vapor pressures and octanol-air partition coefficients. The gas-particle partitioning of SCCPs was further investigated and compared with both the Junge-Pankow adsorption and Koa-based absorption models. Copyright © 2012 Elsevier Ltd. All rights reserved.
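The Junge-Pankow adsorption model mentioned above has a simple closed form; a hedged sketch follows (the θ value is an illustrative urban-aerosol figure from the general literature, not this study's data).

```python
# Junge-Pankow estimate of the particle-bound fraction phi from the
# subcooled-liquid vapor pressure p_L (Pa) and the aerosol surface area
# theta (cm^2 aerosol per cm^3 air); c ~ 17.2 Pa*cm is the usual constant.

def junge_pankow_phi(p_L, theta, c=17.2):
    return (c * theta) / (p_L + c * theta)

# Illustrative values: urban aerosol theta ~ 1.1e-5 cm^2/cm^3.
print(junge_pankow_phi(p_L=1e-3, theta=1.1e-5))  # ~0.16
```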
On the star partition dimension of comb product of cycle and complete graph
NASA Astrophysics Data System (ADS)
Alfarisi, Ridho; Darmaji; Dafik
2017-06-01
Let G = (V, E) be a connected graph with vertex set V(G), edge set E(G), and S ⊆ V(G). For an ordered partition Π = {S1, S2, S3, …, Sk} of V(G), the representation of a vertex v ∈ V(G) with respect to Π is the k-vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Si) represents the distance between the vertex v and the set Si, defined by d(v, Si) = min{d(v, x) | x ∈ Si}. The partition Π of V(G) is a resolving partition if the k-vectors r(v|Π), v ∈ V(G), are distinct. The minimum k for which G has a resolving partition is the partition dimension of G, denoted by pd(G). The resolving partition Π = {S1, S2, S3, …, Sk} is called a star resolving partition for G if each subgraph induced by Si, 1 ≤ i ≤ k, is a star. The minimum k for which there exists a star resolving partition of V(G) is the star partition dimension of G, denoted by spd(G). Finding the star partition dimension of G is classified as an NP-hard problem. Furthermore, the comb product of G and H, denoted by G ⊲ H, is a graph obtained by taking one copy of G and |V(G)| copies of H and grafting the i-th copy of H at the vertex o to the i-th vertex of G. By the definition of the comb product, we can say that V(G ⊲ H) = {(a, u) | a ∈ V(G), u ∈ V(H)} and (a, u)(b, v) ∈ E(G ⊲ H) whenever a = b and uv ∈ E(H), or ab ∈ E(G) and u = v = o. In this paper, we study the star partition dimension of the comb product of a cycle and a complete graph, namely Cn ⊲ Km and Km ⊲ Cn for n ≥ 3 and m ≥ 3.
Chan, An-Wen; Fung, Kinwah; Tran, Jennifer M; Kitchen, Jessica; Austin, Peter C; Weinstock, Martin A; Rochon, Paula A
2016-10-01
Keratinocyte carcinoma (nonmelanoma skin cancer) accounts for substantial burden in terms of high incidence and health care costs but is excluded by most cancer registries in North America. Administrative health insurance claims databases offer an opportunity to identify these cancers using diagnosis and procedural codes submitted for reimbursement purposes. To apply recursive partitioning to derive and validate a claims-based algorithm for identifying keratinocyte carcinoma with high sensitivity and specificity. Retrospective study using population-based administrative databases linked to 602 371 pathology episodes from a community laboratory for adults residing in Ontario, Canada, from January 1, 1992, to December 31, 2009. The final analysis was completed in January 2016. We used recursive partitioning (classification trees) to derive an algorithm based on health insurance claims. The performance of the derived algorithm was compared with 5 prespecified algorithms and validated using an independent academic hospital clinic data set of 2082 patients seen in May and June 2011. Sensitivity, specificity, positive predictive value, and negative predictive value using the histopathological diagnosis as the criterion standard. We aimed to achieve maximal specificity, while maintaining greater than 80% sensitivity. Among 602 371 pathology episodes, 131 562 (21.8%) had a diagnosis of keratinocyte carcinoma. Our final derived algorithm outperformed the 5 simple prespecified algorithms and performed well in both community and hospital data sets in terms of sensitivity (82.6% and 84.9%, respectively), specificity (93.0% and 99.0%, respectively), positive predictive value (76.7% and 69.2%, respectively), and negative predictive value (95.0% and 99.6%, respectively). Algorithm performance did not vary substantially during the 18-year period. This algorithm offers a reliable mechanism for ascertaining keratinocyte carcinoma for epidemiological research in the absence of cancer registry data. Our findings also demonstrate the value of recursive partitioning in deriving valid claims-based algorithms.
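The four validation metrics reported above all come from a 2×2 confusion table against the criterion standard; a small sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-table counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for illustration only:
print(diagnostic_metrics(tp=850, fp=259, fn=179, tn=2712))
```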
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
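As a baseline for the partition-refinement idea, here is a sketch of classic Moore-style minimization (the naive O(n²) approach the paper improves on; this is not the authors' backward-depth algorithm, and it assumes all states are reachable).

```python
# Moore-style DFA minimization by iterative partition refinement:
# start from {accepting, non-accepting} and split blocks until stable.

def moore_minimize(states, alphabet, delta, accepting):
    block_of = {s: (s in accepting) for s in states}
    while True:
        # Two states stay together only if they are in the same block and,
        # for every symbol, their successors lie in the same block.
        signature = {s: (block_of[s],
                         tuple(block_of[delta[s][a]] for a in alphabet))
                     for s in states}
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        refined = {s: ids[signature[s]] for s in states}
        if len(set(refined.values())) == len(set(block_of.values())):
            return refined   # fixed point: blocks are the minimal DFA's states
        block_of = refined

# Example: states 1 and 2 are equivalent and collapse into one block.
delta = {0: {"a": 1}, 1: {"a": 2}, 2: {"a": 2}}
print(moore_minimize([0, 1, 2], ["a"], delta, accepting={1, 2}))
```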
An in situ approach to study trace element partitioning in the laser heated diamond anvil cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petitgirard, S.; Mezouar, M.; Borchert, M.
2012-01-15
Data on the partitioning behavior of elements between different phases at in situ conditions are crucial for understanding element mobility, especially for geochemical studies. Here, we present results of in situ partitioning of trace elements (Zr, Pd, and Ru) between silicate and iron melts, up to 50 GPa and 4200 K, using a modified laser-heated diamond anvil cell (DAC). This new experimental set-up allows simultaneous collection of x-ray fluorescence (XRF) and x-ray diffraction (XRD) data as a function of time using the high-pressure beamline ID27 (ESRF, France). The technique enables the simultaneous detection of sample melting, based on the appearance of diffuse scattering in the XRD pattern characteristic of the structure factor of liquids, and measurement of the elemental partitioning of the sample using XRF before, during, and after laser heating in the DAC. We were able to detect element concentrations as low as the few-ppm level (2-5 ppm) in standard solutions. The in situ measurements are complemented by mapping the chemical partitioning of the trace elements in the quenched samples after laser heating to constrain the partitioning data. Our first results indicate a strong partitioning of Pd and Ru into the metallic phase, while Zr remains clearly incompatible with iron. This novel approach extends the pressure and temperature range of partitioning experiments derived from quenched samples in large-volume presses and could bring new insight into the early history of Earth.
Weights and topology: a study of the effects of graph construction on 3D image segmentation.
Grady, Leo; Jolly, Marie-Pierre
2008-01-01
Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
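A typical intensity-based weighting function used in this literature is the Gaussian w_ij = exp(-β (I_i - I_j)²); the sketch below shows that generic choice on a 4-connected image grid (an illustrative function, not necessarily the exact weightings the paper studies).

```python
import numpy as np

def grid_weights(img, beta=100.0):
    """Return {((r, c), (r2, c2)): weight} for 4-connected pixel neighbors."""
    weights = {}
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, c + dc
                if r2 < rows and c2 < cols:
                    diff = float(img[r, c]) - float(img[r2, c2])
                    weights[((r, c), (r2, c2))] = np.exp(-beta * diff ** 2)
    return weights

img = np.array([[0.0, 0.1], [0.9, 1.0]])
print(grid_weights(img)[((0, 0), (1, 0))])  # near-zero weight across the edge
```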
Pirkle, Catherine M; Wu, Yan Yan; Zunzunegui, Maria-Victoria; Gómez, José Fernando
2018-01-01
Objective Conceptual models underpinning much epidemiological research on ageing acknowledge that environmental, social and biological systems interact to influence health outcomes. Recursive partitioning is a data-driven approach that allows for concurrent exploration of distinct mixtures, or clusters, of individuals that have a particular outcome. Our aim is to use recursive partitioning to examine risk clusters for metabolic syndrome (MetS) and its components, in order to identify vulnerable populations. Study design Cross-sectional analysis of baseline data from a prospective longitudinal cohort called the International Mobility in Aging Study (IMIAS). Setting IMIAS includes sites from three middle-income countries—Tirana (Albania), Natal (Brazil) and Manizales (Colombia)—and two from Canada—Kingston (Ontario) and Saint-Hyacinthe (Quebec). Participants Community-dwelling male and female adults, aged 64–75 years (n=2002). Primary and secondary outcome measures We apply recursive partitioning to investigate social and behavioural risk factors for MetS and its components. Model-based recursive partitioning (MOB) was used to cluster participants into age-adjusted risk groups based on variabilities in: study site, sex, education, living arrangements, childhood adversities, adult occupation, current employment status, income, perceived income sufficiency, smoking status and weekly minutes of physical activity. Results 43% of participants had MetS. Using MOB, the primary partitioning variable was participant sex. Among women from middle-income sites, the predicted proportion with MetS ranged from 58% to 68%. Canadian women with limited physical activity had elevated predicted proportions of MetS (49%, 95% CI 39% to 58%). Among men, MetS ranged from 26% to 41% depending on childhood social adversity and education. Clustering for MetS components differed from the syndrome and across components. Study site was a primary partitioning variable for all components except HDL cholesterol. Sex was important for most components. Conclusion MOB is a promising technique for identifying disease risk clusters (eg, vulnerable populations) in modestly sized samples. PMID:29500203
A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.
Brusco, Michael J; Shireman, Emilie; Steinley, Douglas
2017-09-01
The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
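A hedged sketch of K-median clustering for dichotomous data (our illustration, not the study's code): under the L1 distance, each cluster center is the component-wise median, which for binary variables amounts to a majority vote.

```python
import numpy as np

def k_median_binary(X, k, iters=50, seed=0):
    """Simple Lloyd-style K-median for a binary data matrix X (n x p)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each object to the nearest center under Manhattan distance
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        # update each center to the component-wise median of its cluster
        new = np.array([np.median(X[labels == j], axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

X = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]])
labels, centers = k_median_binary(X, k=2)
print(labels, centers)
```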
Huhn, Carolin; Pyell, Ute
2008-07-11
It is investigated whether the relationships derived within a previously developed optimization scheme for separations in micellar electrokinetic chromatography can be used to model the effective electrophoretic mobilities of analytes that differ strongly in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated by comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict the effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients, provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.
Structural methodologies for auditing SNOMED.
Wang, Yue; Halper, Michael; Min, Hua; Perl, Yehoshua; Chen, Yan; Spackman, Kent A
2007-10-01
SNOMED is one of the leading health care terminologies being used worldwide. As such, quality assurance is an important part of its maintenance cycle. Methodologies for auditing SNOMED based on structural aspects of its organization are presented. In particular, automated techniques for partitioning SNOMED into smaller groups of concepts based primarily on relationships patterns are defined. Two abstraction networks, the area taxonomy and p-area taxonomy, are derived from the partitions. The high-level views afforded by these abstraction networks form the basis for systematic auditing. The networks tend to highlight errors that manifest themselves as irregularities at the abstract level. They also support group-based auditing, where sets of purportedly similar concepts are focused on for review. The auditing methodologies are demonstrated on one of SNOMED's top-level hierarchies. Errors discovered during the auditing process are reported.
Yoink: An interaction-based partitioning API.
Zheng, Min; Waller, Mark P
2018-05-15
Herein, we describe the implementation details of our interaction-based partitioning API (application programming interface) called Yoink for QM/MM modeling and fragment-based quantum chemistry studies. Interactions are detected by computing density descriptors such as reduced density gradient, density overlap regions indicator, and single exponential decay detector. Only molecules having an interaction with a user-definable QM core are added to the QM region of a hybrid QM/MM calculation. Moreover, a set of molecule pairs having density-based interactions within a molecular system can be computed in Yoink, and an interaction graph can then be constructed. Standard graph clustering methods can then be applied to construct fragments for further quantum chemical calculations. The Yoink API is licensed under Apache 2.0 and can be accessed via yoink.wallerlab.org. © 2018 Wiley Periodicals, Inc.
Cell-autonomous-like silencing of GFP-partitioned transgenic Nicotiana benthamiana.
Sohn, Seong-Han; Frost, Jennifer; Kim, Yoon-Hee; Choi, Seung-Kook; Lee, Yi; Seo, Mi-Suk; Lim, Sun-Hyung; Choi, Yeonhee; Kim, Kook-Hyung; Lomonossoff, George
2014-08-01
We previously reported the novel partitioning of regional GFP-silencing on leaves of 35S-GFP transgenic plants, coining the term "partitioned silencing". We set out to delineate the mechanism of partitioned silencing. Here, we report that the partitioned plants were hemizygous for the transgene, possessing two direct-repeat copies of 35S-GFP. The detection of both siRNA expression (21 and 24 nt) and DNA methylation enrichment specifically at silenced regions indicated that both post-transcriptional gene silencing (PTGS) and transcriptional gene silencing (TGS) were involved in the silencing mechanism. Using in vivo agroinfiltration of 35S-GFP/GUS and inoculation of TMV-GFP RNA, we demonstrate that PTGS, not TGS, plays a dominant role in the partitioned silencing, concluding that the underlying mechanism of partitioned silencing is analogous to RNA-directed DNA methylation (RdDM). The initial pattern of partitioned silencing was tightly maintained in a cell-autonomous manner, although partitioned-silenced regions possess a potential for systemic spread. Surprisingly, transcriptome profiling through next-generation sequencing demonstrated that expression levels of most genes involved in the silencing pathway were similar in both GFP-expressing and silenced regions although a diverse set of region-specific transcripts were detected. This suggests that partitioned silencing can be triggered and regulated by genes other than the genes involved in the silencing pathway. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.
1984-01-01
A field of measured anomalies of some physical variable, relative to their time averages, is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller-dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The space-time distribution of Nimbus-5 ESMR sea ice measurements is analyzed.
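A rough sketch of the joining idea (our simplification, not the authors' exact procedure): EOFs are computed separately on two spatial partitions, and the joined eigenstructure is approximated by eigen-analyzing the covariance of the leading local principal components.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(120, 50))          # anomalies: 120 times x 50 grid points
A -= A.mean(axis=0)                     # anomalies relative to time averages

halves = (A[:, :25], A[:, 25:])         # space-domain partition
pcs, eofs = [], []
for block in halves:
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    r = 5                               # keep r leading local EOFs per partition
    pcs.append(U[:, :r] * S[:r])        # local principal components
    eofs.append(Vt[:r])                 # local EOFs

P = np.hstack(pcs)                      # joined PC time series (120 x 10)
w, Q = np.linalg.eigh(P.T @ P / len(P)) # eigenstructure of the joined covariance
print("local EOF shapes:", [e.shape for e in eofs])
print("approx. leading eigenvalues:", w[::-1][:3])
```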
Set Partitions and the Multiplication Principle
ERIC Educational Resources Information Center
Lockwood, Elise; Caughman, John S., IV
2016-01-01
To further understand student thinking in the context of combinatorial enumeration, we examine student work on a problem involving set partitions. In this context, we note some key features of the multiplication principle that were often not attended to by students. We also share a productive way of thinking that emerged for several students who…
NASA Astrophysics Data System (ADS)
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno
2018-03-01
This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
NASA Technical Reports Server (NTRS)
Medard, E.; Martin, A. M.; Righter, K.; Malouta, A.; Lee, C.-T.
2017-01-01
Most siderophile element concentrations in planetary mantles can be explained by metal/ silicate equilibration at high temperature and pressure during core formation. Highly siderophile elements (HSE = Au, Re, and the Pt-group elements), however, usually have higher mantle abundances than predicted by partitioning models, suggesting that their concentrations have been set by late accretion of material that did not equilibrate with the core. The partitioning of HSE at the low oxygen fugacities relevant for core formation is however poorly constrained due to the lack of sufficient experimental constraints to describe the variations of partitioning with key variables like temperature, pressure, and oxygen fugacity. To better understand the relative roles of metal/silicate partitioning and late accretion, we performed a self-consistent set of experiments that parameterizes the influence of oxygen fugacity, temperature and melt composition on the partitioning of Pt, one of the HSE, between metal and silicate melts. The major outcome of this project is the fact that Pt dissolves in an anionic form in silicate melts, causing a dependence of partitioning on oxygen fugacity opposite to that reported in previous studies.
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
NASA Astrophysics Data System (ADS)
Hopcroft, Peter O.; Gallagher, Kerry; Pain, Christopher C.
2009-08-01
Collections of suitably chosen borehole profiles can be used to infer large-scale trends in ground-surface temperature (GST) histories for the past few hundred years. These reconstructions are based on a large database of carefully selected borehole temperature measurements from around the globe. Since non-climatic thermal influences are difficult to identify, representative temperature histories are derived by averaging individual reconstructions to minimize the influence of these perturbing factors. This may lead to three potentially important drawbacks: the net signal of non-climatic factors may not be zero, meaning that the average does not reflect the best estimate of past climate; the averaging over large areas restricts the useful amount of more local climate change information available; and the inversion methods used to reconstruct the past temperatures at each site must be mathematically identical and are therefore not necessarily best suited to all data sets. In this work, we avoid these issues by using a Bayesian partition model (BPM), which is computed using a trans-dimensional form of a Markov chain Monte Carlo algorithm. This then allows the number and spatial distribution of different GST histories to be inferred from a given set of borehole data by partitioning the geographical area into discrete partitions. Profiles that are heavily influenced by non-climatic factors will be partitioned separately. Conversely, profiles with climatic information, which is consistent with neighbouring profiles, will then be inferred to lie in the same partition. The geographical extent of these partitions then leads to information on the regional extent of the climatic signal. In this study, three case studies are described using synthetic and real data. The first demonstrates that the Bayesian partition model method is able to correctly partition a suite of synthetic profiles according to the inferred GST history. In the second, more realistic case, a series of temperature profiles are calculated using surface air temperatures of a global climate model simulation. In the final case, 23 real boreholes from the United Kingdom, previously used for climatic reconstructions, are examined and the results compared with a local instrumental temperature series and the previous estimate derived from the same borehole data. The results indicate that the majority (17) of the 23 boreholes are unsuitable for climatic reconstruction purposes, at least without including other thermal processes in the forward model.
Enhanced Trajectory Based Similarity Prediction with Uncertainty Quantification
2014-10-02
… challenge by obtaining the highest score by using a data-driven prognostics method to predict the RUL of a turbofan engine (Saxena & Goebel, PHM08) … process for multi-regime health assessment. To illustrate multi-regime partitioning, the "Turbofan Engine Degradation Simulation" data set from … hence the name k-means. Figure 3 shows the results of the k-means clustering algorithm on the "Turbofan Engine Degradation Simulation" data set. As …
Synthesis, Interdiction, and Protection of Layered Networks
2009-09-01
… where S may be a polyhedron, a set with discrete variables, a set with nonlinearities, or so on); and partitions it into two mutually exclusive subsets … [p. vii]. However, this database is based on Dr. Sageman's 2004 publication and may be dated. Therefore, the analysis in this section is to …
2010-01-01
Background Comparative genomics methods such as phylogenetic profiling can mine powerful inferences from inherently noisy biological data sets. We introduce Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL), a method that applies the Partial Phylogenetic Profiling (PPP) approach locally within a protein sequence to discover short sequence signatures associated with functional sites. The approach is based on the basic scoring mechanism employed by PPP, namely the use of binomial distribution statistics to optimize sequence similarity cutoffs during searches of partitioned training sets. Results Here we illustrate and validate the ability of the SIMBAL method to find functionally relevant short sequence signatures by application to two well-characterized protein families. In the first example, we partitioned a family of ABC permeases using a metabolic background property (urea utilization). Thus, the TRUE set for this family comprised members whose genome of origin encoded a urea utilization system. By moving a sliding window across the sequence of a permease, and searching each subsequence in turn against the full set of partitioned proteins, the method found which local sequence signatures best correlated with the urea utilization trait. Mapping of SIMBAL "hot spots" onto crystal structures of homologous permeases reveals that the significant sites are gating determinants on the cytosolic face rather than, say, docking sites for the substrate-binding protein on the extracellular face. In the second example, we partitioned a protein methyltransferase family using gene proximity as a criterion. In this case, the TRUE set comprised those methyltransferases encoded near the gene for the substrate RF-1. SIMBAL identifies sequence regions that map onto the substrate-binding interface while ignoring regions involved in the methyltransferase reaction mechanism in general. Neither method for training set construction requires any prior experimental characterization. Conclusions SIMBAL shows that, in functionally divergent protein families, selected short sequences often significantly outperform their full-length parent sequence for making functional predictions by sequence similarity, suggesting avenues for improved functional classifiers. When combined with structural data, SIMBAL affords the ability to localize and model functional sites. PMID:20102603
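A hedged sketch of the binomial scoring step described above (our reading of PPP-style scoring; the function names and data are hypothetical): given hits ranked by similarity and labeled TRUE/FALSE by the partitioned training set, each depth cutoff is scored by how surprising its TRUE count is under a binomial null.

```python
from scipy.stats import binom
import numpy as np

def best_cutoff(ranked_labels, p_true):
    """ranked_labels: 1/0 labels of hits sorted by similarity (best first);
    p_true: background fraction of TRUE genomes. Returns (score, depth)."""
    labels = np.asarray(ranked_labels)
    best = (0.0, 0)
    for n in range(1, len(labels) + 1):
        k = int(labels[:n].sum())
        # -log10 P(X >= k) for X ~ Binomial(n, p_true)
        score = -np.log10(binom.sf(k - 1, n, p_true))
        best = max(best, (score, n))
    return best

print(best_cutoff([1, 1, 1, 0, 1, 0, 0, 0], p_true=0.2))
```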
Drug Distribution. Part 1. Models to Predict Membrane Partitioning.
Nagar, Swati; Korzekwa, Ken
2017-03-01
Tissue partitioning is an important component of drug distribution and half-life. Protein binding and lipid partitioning together determine drug distribution. Two structure-based models to predict partitioning into microsomal membranes are presented. An orientation-based model was developed using a membrane template and atom-based relative free energy functions to select drug conformations and orientations for neutral and basic drugs. The resulting model predicts the correct membrane positions for nine compounds tested, and predicts the membrane partitioning for n = 67 drugs with an average fold-error of 2.4. Next, a more facile descriptor-based model was developed for acids, neutrals and bases. This model considers the partitioning of neutral and ionized species at equilibrium, and can predict membrane partitioning with an average fold-error of 2.0 (n = 92 drugs). Together these models suggest that drug orientation is important for membrane partitioning and that membrane partitioning can be well predicted from physicochemical properties.
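The "average fold-error" statistic quoted above is commonly defined as 10 raised to the mean absolute log10 prediction error; a sketch under that assumption (the paper may define it slightly differently):

```python
import numpy as np

def average_fold_error(predicted, observed):
    """Geometric-mean fold error: 10**(mean |log10(pred/obs)|)."""
    ratio = np.log10(np.asarray(predicted) / np.asarray(observed))
    return 10 ** np.mean(np.abs(ratio))

print(average_fold_error([2.0, 50.0, 9.0], [1.0, 100.0, 10.0]))  # ~1.6
```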
Dai, D; Barranco, F T; Illangasekare, T H
2001-12-15
Research on the use of partitioning and interfacial tracers has led to the development of techniques for estimating subsurface NAPL amount and NAPL-water interfacial area. Although these techniques have been utilized with some success at field sites, current application is limited largely to NAPL at residual saturation, such as for the case of post-remediation settings where mobile NAPL has been removed through product recovery. The goal of this study was to fundamentally evaluate partitioning and interfacial tracer behavior in controlled column-scale test cells for a range of entrapment configurations varying in NAPL saturation, with the results serving as a determinant of technique efficacy (and design protocol) for use with complexly distributed NAPLs, possibly at high saturation, in heterogeneous aquifers. Representative end members of the range of entrapment configurations observed under conditions of natural heterogeneity (an occurrence with residual NAPL saturation [discontinuous blobs] and an occurrence with high NAPL saturation [continuous free-phase LNAPL lens]) were evaluated. Study results indicated accurate prediction (using measured tracer retardation and equilibrium-based computational techniques) of NAPL amount and NAPL-water interfacial area for the case of residual NAPL saturation. For the high-saturation LNAPL lens, results indicated that NAPL-water interfacial area, but not NAPL amount (underpredicted by 35%), can be reasonably determined using conventional computation techniques. Underprediction of NAPL amount led to an erroneous prediction of NAPL distribution, as indicated by the NAPL morphology index. In light of these results, careful consideration should be given to technique design and critical assumptions before applying equilibrium-based partitioning tracer methodology to settings where NAPLs are complexly entrapped, such as in naturally heterogeneous subsurface formations.
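For context, the standard equilibrium partitioning-tracer estimate relates tracer retardation to NAPL saturation; a hedged sketch follows (an assumed textbook form in the style of Jin et al., not necessarily this study's exact equations).

```python
def napl_saturation(R, K):
    """NAPL saturation S_n from tracer retardation R and NAPL-water
    partition coefficient K, assuming R = 1 + K*S_n/(1 - S_n)."""
    return (R - 1.0) / (R - 1.0 + K)

print(napl_saturation(R=1.8, K=30.0))  # ~0.026 -> ~2.6% residual saturation
```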
A discrete scattering series representation for lattice embedded models of chain cyclization
NASA Astrophysics Data System (ADS)
Fraser, Simon J.; Winnik, Mitchell A.
1980-01-01
In this paper we develop a lattice-based model of chain cyclization in the presence of a set of occupied sites V in the lattice. We show that within the approximation of a Markovian chain propagator the effect of V on the partition function for the system can be written as a time-ordered exponential series in which V behaves like a scattering potential and chain length is the timelike parameter. The discrete and finite nature of this model allows us to obtain rigorous upper and lower bounds to the series limit. We adapt these formulas to the calculation of the partition functions and cyclization probabilities of terminally and globally cyclizing chains. Two classes of cyclization are considered: in the first model the target set H may be visited repeatedly (the Markovian model); in the second case vertices in H may be visited at most once (the non-Markovian or taboo model). This formulation depends on two fundamental combinatorial structures, namely the inclusion-exclusion principle and the set of subsets of a set. We have tried to interpret these abstract structures with physical analogies throughout the paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdy, R.
A hierarchical model consisting of quantitative structure-activity relationships based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model is comprised of quantitative structure-activity relationships (QSARs) based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included the octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizability, atomic radical superdelocalizability, the lowest unoccupied molecular orbital (LUMO) energy of the hypothesized intermediate nitrenium ion of primary aromatic amines, the difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals, and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thio-containing chemicals. A QSAR for these chemicals was developed. This post-test-set modified model correctly predicted 94% of the carcinogens in the test set. This model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the writing of this article. 12 refs., 3 tabs.
NASA Astrophysics Data System (ADS)
Corrigan, Catherine M.; Chabot, Nancy L.; McCoy, Timothy J.; McDonough, William F.; Watson, Heather C.; Saslow, Sarah A.; Ash, Richard D.
2009-05-01
To better understand the partitioning behavior of elements during the formation and evolution of iron meteorites, two sets of experiments were conducted at 1 atm in the Fe-Ni-P system. The first set examined the effect of P on solid metal/liquid metal partitioning behavior of 22 elements, while the other set explored the effect of the crystal structures of body-centered cubic (α)- and face-centered cubic (γ)-solid Fe alloys on partitioning behavior. Overall, the effect of P on the partition coefficients for the majority of the elements was minimal. As, Au, Ga, Ge, Ir, Os, Pt, Re, and Sb showed slightly increasing partition coefficients with increasing P-content of the metallic liquid. Co, Cu, Pd, and Sn showed constant partition coefficients. Rh, Ru, W, and Mo showed phosphorophile (P-loving) tendencies. Parameterization models were applied to solid metal/liquid metal results for 12 elements. As, Au, Pt, and Re failed to match previous parameterization models, requiring the determination of separate parameters for the Fe-Ni-S and Fe-Ni-P systems. Experiments with coexisting α and γ Fe alloy solids produced partitioning ratios close to unity, indicating that an α versus γ Fe alloy crystal structure has only a minor influence on the partitioning behaviors of the trace elements studied. A simple relationship between an element's natural crystal structure and its α/γ partitioning ratio was not observed. If an iron meteorite crystallizes from a single metallic liquid that contains both S and P, the effect of P on the distribution of elements between the crystallizing solids and the residual liquid will be minor in comparison to the effect of S. This indicates that to a first order, fractional crystallization models of the Fe-Ni-S-P system that do not take into account P are appropriate for interpreting the evolution of iron meteorites if the effects of S are appropriately included in the effort.
Unsupervised segmentation of MRI knees using image partition forests
NASA Astrophysics Data System (ADS)
Marčan, Marija; Voiculescu, Irina
2016-03-01
Nowadays many people are affected by arthritis, a condition of the joints with limited prevention measures but various treatment options, the most radical of which is surgical. In order for surgery to be successful, it can make use of careful analysis of patient-based models generated from medical images, usually by manual segmentation. In this work we show how to automate the segmentation of a crucial and complex joint -- the knee. To achieve this goal we rely on our novel way of representing a 3D voxel volume as a hierarchical structure of partitions which we have named the Image Partition Forest (IPF). The IPF contains several partition layers of increasing coarseness, with partitions nested across layers in the form of adjacency graphs. On the basis of a set of properties (size, mean intensity, coordinates) of each node in the IPF we classify nodes into different features. Values indicating whether or not any particular node belongs to the femur or tibia are assigned through node filtering and node-based region growing. So far we have evaluated our method on 15 MRI knee images. Our unsupervised segmentation compared against a hand-segmented gold standard has achieved an average Dice similarity coefficient of 0.95 for the femur and 0.93 for the tibia, and an average symmetric surface distance of 0.98 mm for the femur and 0.73 mm for the tibia. The paper also discusses ways to introduce stricter morphological and spatial conditioning in the bone labelling process.
NASA Astrophysics Data System (ADS)
Chandramouli, Bharadwaj; Jang, Myoseon; Kamens, Richard M.
The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the α-pinene-O3 reaction was augmented by carrying out smog chamber partitioning experiments on aerosols from meat cooking, and catalyzed and uncatalyzed gasoline engine exhaust. Model compositions for aerosols from meat cooking and gasoline combustion emissions were used to calculate activity coefficients for the SOCs in the organic aerosols, and the Pankow absorptive gas/particle partitioning model was used to calculate the partitioning coefficient Kp and quantitate the predictive improvements of using the activity coefficient. The slope of the log Kp vs. log pL0 correlation for partitioning on aerosols from meat cooking improved from -0.81 to -0.94 after incorporation of activity coefficients γom. A stepwise regression analysis of the partitioning model revealed that for the data set used in this study, partitioning predictions on α-pinene-O3 secondary aerosol and wood combustion aerosol showed statistically significant improvement after incorporation of γom, which can be attributed to their overall polarity. The partitioning model was sensitive to changes in aerosol composition when updated compositions for α-pinene-O3 aerosol and wood combustion aerosol were used. The effectiveness of the octanol-air partitioning coefficient (KOA) as a partitioning correlator over a variety of aerosol types was evaluated. The slope of the log Kp-log KOA correlation was not constant over the aerosol types and SOCs used in the study, and the use of KOA for partitioning correlations can potentially lead to significant deviations, especially for polar aerosols.
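For readers unfamiliar with the Pankow model the abstract applies, here is a minimal sketch of the absorptive partitioning coefficient with an activity-coefficient correction. The form and constants below follow the commonly cited version of the model (Kp in m3/ug, vapor pressure in torr); all input values are illustrative, not the study's.

import math

R = 8.206e-5  # gas constant, m3 atm mol^-1 K^-1

def kp_pankow(f_om, T, mw_om, gamma_om, p_l0_torr):
    """Gas/particle partitioning coefficient Kp (m3 ug^-1).

    f_om       : mass fraction of absorbing organic matter in the aerosol
    T          : temperature (K)
    mw_om      : mean molecular weight of the organic phase (g mol^-1)
    gamma_om   : activity coefficient of the SOC in the organic phase
    p_l0_torr  : sub-cooled liquid vapor pressure of the SOC (torr)
    """
    return (f_om * 760.0 * R * T) / (mw_om * gamma_om * p_l0_torr * 1.0e6)

# Illustrative values: nonideality (gamma_om != 1) shifts Kp, which is
# what steepens or flattens the log Kp vs. log pL0 slope in the abstract.
print(kp_pankow(f_om=0.8, T=298.0, mw_om=200.0, gamma_om=1.5, p_l0_torr=1e-6))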
Partitioning in parallel processing of production systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oflazer, K.
1987-01-01
This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft’s algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
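The two-phase scheme can be sketched generically. The definition of backward depth used below (shortest distance to an accepting state, computed over reversed transitions) and the signature-based hash refinement are assumptions made for illustration, not the authors' exact construction.

from collections import deque

def minimize(states, alphabet, delta, accepting):
    # Phase 1: coarse partition by backward depth (BFS on reversed edges).
    rev = {s: [] for s in states}
    for s in states:
        for a in alphabet:
            rev[delta[s][a]].append(s)
    depth = {s: None for s in states}
    queue = deque((s, 0) for s in accepting)
    while queue:
        s, d = queue.popleft()
        if depth[s] is None:
            depth[s] = d
            queue.extend((p, d + 1) for p in rev[s])
    block = {s: depth[s] if depth[s] is not None else -1 for s in states}

    # Phase 2: refine blocks by hashing transition signatures until stable.
    while True:
        sig = {s: (block[s],) + tuple(block[delta[s][a]] for a in alphabet)
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new_block = {s: ids[sig[s]] for s in states}
        # Signatures include the current block, so refinement only splits;
        # an unchanged block count means the partition is stable.
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block
        block = new_block

# Example: states 1 and 2 are equivalent and land in the same block.
states = [0, 1, 2, 3]
delta = {0: {'a': 1, 'b': 2}, 1: {'a': 3, 'b': 3},
         2: {'a': 3, 'b': 3}, 3: {'a': 3, 'b': 3}}
print(minimize(states, 'ab', delta, accepting={3}))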
Harnessing the Bethe free energy†
Bapst, Victor
2016-01-01
A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k‐SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [Krzakala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ω^n for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178
Sloma, Michael F.; Mathews, David H.
2016-01-01
RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924
Tannenbaum, David; Doctor, Jason N; Persell, Stephen D; Friedberg, Mark W; Meeker, Daniella; Friesema, Elisha M; Goldstein, Noah J; Linder, Jeffrey A; Fox, Craig R
2015-03-01
Healthcare professionals are rapidly adopting electronic health records (EHRs). Within EHRs, seemingly innocuous menu design configurations can influence provider decisions for better or worse. The purpose of this study was to examine whether the grouping of menu items systematically affects prescribing practices among primary care providers. We surveyed 166 primary care providers in a research network of practices in the greater Chicago area, of whom 84 responded (51% response rate). Respondents and non-respondents were similar on all observable dimensions except that respondents were more likely to work in an academic setting. The questionnaire consisted of seven clinical vignettes. Each vignette described typical signs and symptoms for acute respiratory infections, and providers chose treatments from a menu of options. For each vignette, providers were randomly assigned to one of two menu partitions. For antibiotic-inappropriate vignettes, the treatment menu either listed over-the-counter (OTC) medications individually while grouping prescriptions together, or displayed the reverse partition. For antibiotic-appropriate vignettes, the treatment menu either listed narrow-spectrum antibiotics individually while grouping broad-spectrum antibiotics, or displayed the reverse partition. The main outcome was provider treatment choice. For antibiotic-inappropriate vignettes, we categorized responses as prescription drugs or OTC-only options. For antibiotic-appropriate vignettes, we categorized responses as broad- or narrow-spectrum antibiotics. Across vignettes, there was an 11.5 percentage point reduction in choosing aggressive treatment options (e.g., broad-spectrum antibiotics) when aggressive options were grouped compared to when those same options were listed individually (95% CI: 2.9 to 20.1%; p = .008). Provider treatment choice appears to be influenced by the grouping of menu options, suggesting that the layout of EHR order sets is not an arbitrary exercise. The careful crafting of EHR order sets can serve as an important opportunity to improve patient care without constraining physicians' ability to prescribe what they believe is best for their patients.
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balanced load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy-five percent.
Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets.
Levnajić, Zoran; Mezić, Igor
2015-05-01
We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)], and is realized as a computational algorithms for visualization of periodic and quasi-periodic sets in the phase space. The complement of periodic phase space partition contains chaotic zone, and we show how to identify it. The range of method's applicability is illustrated using well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.
Normalized Cut Algorithm for Automated Assignment of Protein Domains
NASA Technical Reports Server (NTRS)
Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present a novel computational method for automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful for image-partitioning applications. This graph-theory-based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly connected components. Computer implementation of our method tested on the standard comparison set of proteins from the literature shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as reliance on few adjustable parameters, linear run-time with respect to the size of the protein, and reduced complexity compared to other graph-theory-based algorithms, would make it an attractive tool for structural biologists.
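A minimal spectral sketch of the normalized-cut bipartition step (in the Shi-Malik style this abstract builds on); the protein-specific graph construction and multi-way recursion are not reproduced here.

import numpy as np

def normalized_cut_bipartition(W):
    """Split nodes of a weighted undirected graph (affinity matrix W)
    into two groups using the second-smallest eigenvector of the
    normalized Laplacian (the relaxed normalized-cut solution)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)      # eigenvalues in ascending order
    fiedler = D_inv_sqrt @ vecs[:, 1]       # generalized eigenvector
    return fiedler >= 0                     # boolean group labels

# Two 3-node cliques joined by one weak edge split cleanly (up to sign):
W = np.array([[0, 1, 1, 0.1, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0],
              [0.1, 0, 0, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]],
             dtype=float)
print(normalized_cut_bipartition(W))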
Clustering Financial Time Series by Network Community Analysis
NASA Astrophysics Data System (ADS)
Piccardi, Carlo; Calatroni, Lisa; Bertoni, Fabio
In this paper, we describe a method for clustering financial time series which is based on community analysis, a recently developed approach for partitioning the nodes of a network (graph). A network with N nodes is associated to the set of N time series. The weight of the link (i, j), which quantifies the similarity between the two corresponding time series, is defined according to a metric based on symbolic time series analysis, which has recently proved effective in the context of financial time series. Then, searching for network communities allows one to identify groups of nodes (and then time series) with strong similarity. A quantitative assessment of the significance of the obtained partition is also provided. The method is applied to two distinct case-studies concerning the US and Italy Stock Exchange, respectively. In the US case, the stability of the partitions over time is also thoroughly investigated. The results favorably compare with those obtained with the standard tools typically used for clustering financial time series, such as the minimal spanning tree and the hierarchical tree.
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
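A hedged, generic sketch in the spirit of the maximin/exact-linkage idea: agglomerate clusters on posterior co-assignment probabilities, scoring a candidate merge by the minimum pairwise probability it would enclose and recording that score as the merge height. This follows the spirit, not the letter, of the Partition View implementation.

import numpy as np

def maximin_linkage(P):
    """P: n x n symmetric matrix of posterior co-assignment probabilities.
    Returns a list of merges (members_a, members_b, height)."""
    clusters = {i: [i] for i in range(len(P))}
    merges = []
    while len(clusters) > 1:
        best = None
        for a in clusters:
            for b in clusters:
                if a < b:
                    # maximin score: worst pairwise co-assignment across the merge
                    h = min(P[i, j] for i in clusters[a] for j in clusters[b])
                    if best is None or h > best[2]:
                        best = (a, b, h)
        a, b, h = best
        merges.append((clusters[a][:], clusters[b][:], h))
        clusters[a] = clusters[a] + clusters.pop(b)
    return merges

P = np.array([[1.0, 0.9, 0.2], [0.9, 1.0, 0.1], [0.2, 0.1, 1.0]])
print(maximin_linkage(P))  # merges {0,1} at height 0.9, then adds 2 at 0.1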
ERIC Educational Resources Information Center
McCain, Daniel F.; Allgood, Ottie E.; Cox, Jacob T.; Falconi, Audrey E.; Kim, Michael J.; Shih, Wei-Yu
2012-01-01
Only a few pedagogical experiments have been published dealing specifically with the hydrophobic interaction, though it plays a central role in biochemistry. A set of experiments is presented in which students partition a variety of colorful indicator dyes in biphasic water/organic solvent mixtures. Students monitor the partitioning visually and…
Vision-Based Autonomous Sensor-Tasking in Uncertain Adversarial Environments
2015-01-02
motion segmentation and change detection in crowd behavior. In particular we investigated Finite Time Lyapunov Exponents, the Perron Frobenius operator and...deformation tensor [11]. On the other hand, eigenfunctions of the Perron Frobenius operator can be used to detect Almost Invariant Sets (AIS) which are... Perron Frobenius operator. Finally, Figure 1.12d shows the ergodic partitions (EP) obtained from the eigenfunctions of the Koopman operator
Tsallis p, q-deformed Touchard polynomials and Stirling numbers
NASA Astrophysics Data System (ADS)
Herscovici, O.; Mansour, T.
2017-01-01
In this paper, we develop and investigate a new two-parameter deformation of the Touchard polynomials, based on the definition of the NEXT q-exponential function of Tsallis. We obtain new generalizations of the Stirling numbers of the second kind and of the binomial coefficients, and represent two new statistics for set partitions.
Study of energy partitioning using a set of related explosive formulations
NASA Astrophysics Data System (ADS)
Lieber, Mark; Foster, Joseph C.; Stewart, D. Scott
2012-03-01
Condensed phase high explosives convert potential energy stored in the electromagnetic field structure of complex molecules to high power output during the detonation process. Historically, the explosive design problem has focused on intramolecular energy storage. The molecules of interest are derived via molecular synthesis, providing near-stoichiometric balance on the physical scale of the molecule. This approach provides prompt reactions based on transport physics at the molecular scale. Modern material design has evolved to approaches that employ intermolecular ingredients to alter the spatial and temporal distribution of energy release. State-of-the-art continuum methods have been used to study this approach to materials design. Cheetah has been used to produce data for a set of fictitious explosive formulations based on C-4 to study the partitioning of the available energy between internal and kinetic energy in the detonation. The equation of state information from Cheetah has been used in ALE3D to develop an understanding of the relationship between variations in the formulation parameters and the internal energy cycle in the products.
The Development of the Speaker Independent ARM Continuous Speech Recognition System
1992-01-01
spoken airborne reconnaissance reports using a speech recognition system based on phoneme-level hidden Markov models (HMMs). Previous versions of the ARM...will involve automatic selection from multiple model sets, corresponding to different speaker types, and that the most rudimentary partition of a...The vocabulary size for the ARM task is 497 words. These words are related to the phoneme-level symbols corresponding to the models in the model set.
Xie, M; Barsanti, K C; Hannigan, M P; Dutton, S J; Vedal, S
2013-01-01
Gas-phase concentrations of semi-volatile organic compounds (SVOCs) were calculated from gas/particle (G/P) partitioning theory using their measured particle-phase concentrations. The particle-phase data were obtained from an existing filter measurement campaign (27 January 2003-2 October 2005) as a part of the Denver Aerosol Sources and Health (DASH) study, including 970 observations of 71 SVOCs (Xie et al., 2013). In each compound class of SVOCs, the lighter species (e.g. docosane in n-alkanes, fluoranthene in PAHs) had higher total concentrations (gas + particle phase) and lower particle-phase fractions. The total SVOC concentrations were analyzed using positive matrix factorization (PMF). Then the results were compared with source apportionment results where only particle-phase SVOC concentrations were used (particle only-based study; Xie et al., 2013). For the particle only-based PMF analysis, the factors primarily associated with primary or secondary sources (n-alkane, EC/sterane and inorganic ion factors) exhibit similar contribution time series (r = 0.92-0.98) with their corresponding factors (n-alkane, sterane and nitrate+sulfate factors) in the current work. Three other factors (light n-alkane/PAH, PAH and summer/odd n-alkane factors) are linked with pollution sources influenced by atmospheric processes (e.g. G/P partitioning, photochemical reaction), and were less correlated (r = 0.69-0.84) with their corresponding factors (light SVOC, PAH and bulk carbon factors) in the current work, suggesting that the source apportionment results derived from particle-only SVOC data could be affected by atmospheric processes. PMF analysis was also performed on three temperature-stratified subsets of the total SVOC data, representing ambient sampling during cold (daily average temperature < 10 °C), warm (≥ 10 °C and ≤ 20 °C) and hot (> 20 °C) periods. Unlike the particle only-based study, in this work the factor characterized by the low molecular weight (MW) compounds (light SVOC factor) exhibited strong correlations (r = 0.82-0.98) between the full data set and each sub-data set solution, indicating that the impacts of G/P partitioning on receptor-based source apportionment could be eliminated by using total SVOC concentrations.
Estimating the octanol/water partition coefficient for aliphatic organic compounds using semi-empirical electrotopological index.
Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca
2011-01-01
A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I(SET)). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and applicability of the I(SET) in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability, and good predictive ability for an external group of compounds, on the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P.
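A one-descriptor model of this kind is an ordinary least-squares fit of the form log P = a * I_SET + b. The sketch below shows that fit generically; the descriptor values and resulting coefficients are made up for illustration, not taken from the paper.

import numpy as np

i_set = np.array([2.1, 3.4, 4.0, 5.2, 6.3])   # hypothetical I_SET values
log_p = np.array([0.8, 1.5, 1.9, 2.6, 3.1])   # hypothetical measured log P

# Least-squares fit of log P against a single descriptor plus intercept.
A = np.vstack([i_set, np.ones_like(i_set)]).T
(a, b), *_ = np.linalg.lstsq(A, log_p, rcond=None)
pred = a * i_set + b
r = np.corrcoef(pred, log_p)[0, 1]
print(f"log P = {a:.3f} * I_SET + {b:.3f}, r = {r:.4f}")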
Ductility normalized-strainrange partitioning life relations for creep-fatigue life predictions
NASA Technical Reports Server (NTRS)
Halford, G. R.; Saltsman, J. F.; Hirschberg, M. H.
1977-01-01
Procedures based on Strainrange Partitioning (SRP) are presented for estimating the effects of environment and other influences on the high-temperature, low-cycle creep-fatigue resistance of alloys. It is proposed that the plastic and creep ductilities determined from conventional tensile and creep rupture tests conducted in the environment of interest be used in a set of ductility-normalized equations for making a first-order approximation of the four SRP inelastic strainrange life relations. Different levels of sophistication in the application of the procedures are presented by means of illustrative examples with several high temperature alloys. Predictions of cyclic lives generally agree with observed lives within factors of three.
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
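As a classical point of contrast (not from the paper), the decision version of the set partition problem can be solved by a subset-sum dynamic program; for the hard random instances the abstract considers (numbers with many bits), the set of reachable sums itself grows exponentially, which is where the classical approach breaks down.

def has_perfect_partition(nums):
    """Decide whether nums can be split into two subsets of equal sum."""
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                      # subset sums achievable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(has_perfect_partition([3, 1, 1, 2, 2, 1]))  # True: {3, 2} vs {1, 1, 2, 1}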
Optimal partitioning of random programs across two processors
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
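A small numerical check (not from the paper) of the gap introduced by the approximation it scrutinizes, which equates the expected maximum of independent random variables with the maximum of their expectations:

import numpy as np

rng = np.random.default_rng(0)
loads = rng.exponential(scale=1.0, size=(100000, 2))  # two processors' loads
print("E[max]  :", loads.max(axis=1).mean())  # ~1.5 for two iid exp(1) draws
print("max of E:", loads.mean(axis=0).max())  # ~1.0, so the approximation
                                              # understates expected makespan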
Padró, Juan M; Ponzinibbio, Agustín; Mesa, Leidy B Agudelo; Reta, Mario
2011-03-01
The partition coefficients, P(IL/w), for different probe molecules as well as for compounds of biological interest between the room-temperature ionic liquids (RTILs) 1-butyl-3-methylimidazolium hexafluorophosphate, [BMIM][PF(6)], 1-hexyl-3-methylimidazolium hexafluorophosphate, [HMIM][PF(6)], 1-octyl-3-methylimidazolium tetrafluoroborate, [OMIM][BF(4)], and water were accurately measured. [BMIM][PF(6)] and [OMIM][BF(4)] were synthesized by adapting a procedure from the literature to a simpler, single-vessel, and faster methodology with much lower consumption of organic solvent. We employed the solvation-parameter model to elucidate the general chemical interactions involved in RTIL/water partitioning. For this purpose, we selected different solute descriptor parameters that measure polarity, polarizability, hydrogen-bond-donor and hydrogen-bond-acceptor interactions, and cavity formation for a set of specifically selected probe molecules (the training set). The obtained multiparametric equations were used to predict the partition coefficients for compounds not present in the training set (the test set), most being of biological interest. Partial solubility of the ionic liquid in water (and of water in the ionic liquid) was taken into account to explain the obtained results. This fact has not been deeply considered to date. Solute descriptors were obtained from the literature, when available, or else calculated through commercial software. An excellent agreement between calculated and experimental log P(IL/w) values was obtained, which demonstrates that the resulting multiparametric equations are robust and allow predicting partitioning for any organic molecule in the biphasic systems studied.
Mode entanglement of Gaussian fermionic states
NASA Astrophysics Data System (ADS)
Spee, C.; Schwaiger, K.; Giedke, G.; Kraus, B.
2018-04-01
We investigate the entanglement of n -mode n -partite Gaussian fermionic states (GFS). First, we identify a reasonable definition of separability for GFS and derive a standard form for mixed states, to which any state can be mapped via Gaussian local unitaries (GLU). As the standard form is unique, two GFS are equivalent under GLU if and only if their standard forms coincide. Then, we investigate the important class of local operations assisted by classical communication (LOCC). These are central in entanglement theory as they allow one to partially order the entanglement contained in states. We show, however, that there are no nontrivial Gaussian LOCC (GLOCC) among pure n -partite (fully entangled) states. That is, any such GLOCC transformation can also be accomplished via GLU. To obtain further insight into the entanglement properties of such GFS, we investigate the richer class of Gaussian stochastic local operations assisted by classical communication (SLOCC). We characterize Gaussian SLOCC classes of pure n -mode n -partite states and derive them explicitly for few-mode states. Furthermore, we consider certain fermionic LOCC and show how to identify the maximally entangled set of pure n -mode n -partite GFS, i.e., the minimal set of states having the property that any other state can be obtained from one state inside this set via fermionic LOCC. We generalize these findings also to the pure m -mode n -partite (for m >n ) case.
Bezold, Franziska; Weinberger, Maria E; Minceva, Mirjana
2017-03-31
Tocopherols are a class of molecules with vitamin E activity. Among those, α-tocopherol is the most important vitamin E source in the human diet. The purification of tocopherols involving biphasic liquid systems can be challenging since these vitamins are poorly soluble in water. Deep eutectic solvents (DES) can be used to form water-free biphasic systems and have already proven applicable for centrifugal partition chromatography separations. In this work, a computational solvent system screening was performed using the predictive thermodynamic model COSMO-RS. Liquid-liquid equilibria of solvent systems composed of alkanes, alcohols, and DES, as well as partition coefficients of α-tocopherol, β-tocopherol, γ-tocopherol, and δ-tocopherol in these biphasic solvent systems, were calculated. From the results, the best suited biphasic solvent system, namely heptane/ethanol/choline chloride-1,4-butanediol, was chosen and a batch injection of a tocopherol mixture, mainly consisting of α- and γ-tocopherol, was performed using a centrifugal partition chromatography setup (SCPE 250-BIO). A separation factor of 1.74 was achieved for α- and γ-tocopherol. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimal Partitioning of a Data Set Based on the "p"-Median Model
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2008-01-01
Although the "K"-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The "p"-median model is an especially well-studied clustering problem that requires the selection of "p" objects to serve as…
Equivalence of partition properties and determinacy
Kechris, Alexander S.; Woodin, W. Hugh
1983-01-01
It is shown that, within L(ℝ), the smallest inner model of set theory containing the reals, the axiom of determinacy is equivalent to the existence of arbitrarily large cardinals below Θ with the strong partition property κ → (κ)^κ. PMID:16593299
Partitioning-based mechanisms under personalized differential privacy.
Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian
2017-05-01
Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms.
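A hedged sketch of the basic partitioning idea: group records by their personal epsilon, answer a count query on each partition with Laplace noise calibrated to that partition's strictest budget, and sum. The grouping boundaries and the budget-waste minimization and t-round details of the paper are not reproduced; this is a generic illustration only.

import numpy as np

rng = np.random.default_rng(1)

def partitioned_private_count(records, epsilons, boundaries):
    """records: 0/1 array; epsilons: per-record privacy budgets;
    boundaries: sorted epsilon cut points defining the partitions."""
    total = 0.0
    bins = np.digitize(epsilons, boundaries)
    for b in np.unique(bins):
        mask = bins == b
        eps = epsilons[mask].min()   # honor the strictest budget in the group
        # Laplace noise at scale 1/eps for a count query (sensitivity 1).
        total += records[mask].sum() + rng.laplace(scale=1.0 / eps)
    return total

records = rng.integers(0, 2, 1000).astype(float)
epsilons = rng.choice([0.1, 0.5, 1.0], size=1000)  # personalized budgets
print(partitioned_private_count(records, epsilons, boundaries=[0.3, 0.8]))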
Impact of Surface Roughness and Soil Texture on Mineral Dust Emission Fluxes Modeling
NASA Technical Reports Server (NTRS)
Menut, Laurent; Perez, Carlos; Haustein, Karsten; Bessagnet, Bertrand; Prigent, Catherine; Alfaro, Stephane
2013-01-01
Dust production models (DPM) used to estimate vertical fluxes of mineral dust aerosols over arid regions need accurate data on soil and surface properties. The Laboratoire Inter-Universitaire des Systemes Atmospheriques (LISA) data set was developed for Northern Africa, the Middle East, and East Asia. This regional data set was built through dedicated field campaigns and includes, among others, the aerodynamic roughness length, the smooth roughness length of the erodible fraction of the surface, and the dry (undisturbed) soil size distribution. Recently, satellite-derived roughness length and high-resolution soil texture data sets at the global scale have emerged and provide the opportunity for the use of advanced schemes in global models. This paper analyzes the behavior of the ERS satellite-derived global roughness length and the State Soil Geographic database-Food and Agriculture Organization of the United Nations (STATSGO-FAO) soil texture data set (based on wet techniques) using an advanced DPM in comparison to the LISA data set over Northern Africa and the Middle East. We explore the sensitivity of the drag partition scheme (a critical component of the DPM) and of the dust vertical fluxes (intensity and spatial patterns) to the roughness length and soil texture data sets. We also compare the use of the drag partition scheme to a widely used preferential source approach in global models. Idealized experiments with prescribed wind speeds show that the ERS and STATSGO-FAO data sets provide realistic spatial patterns of dust emission and friction velocity thresholds in the region. Finally, we evaluate a dust transport model for the period of March to July 2011 with observed aerosol optical depths from Aerosol Robotic Network sites. Results show that ERS and STATSGO-FAO provide realistic simulations in the region.
Convex Regression with Interpretable Sharp Partitions
Petersen, Ashley; Simon, Noah; Witten, Daniela
2016-01-01
We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120
Chemical amplification based on fluid partitioning
Anderson, Brian L [Lodi, CA; Colston, Jr., Billy W.; Elkin, Chris [San Ramon, CA
2006-05-09
A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.
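The patent abstract leaves "detecting and analyzing" unspecified; one widely used analysis for partition-based amplification (digital-PCR style, an assumption here rather than anything stated in the patent) recovers the mean number of template copies per partition from the fraction of negative partitions via Poisson statistics:

import math

def copies_per_partition(n_negative, n_total):
    """Poisson estimate: lambda = -ln(fraction of negative partitions)."""
    return -math.log(n_negative / n_total)

# Example: 300 of 1000 partitions show no amplification.
print(copies_per_partition(300, 1000))  # ~1.20 template copies per partition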
Estimating Grass-Soil Bioconcentration of Munitions Compounds from Molecular Structure.
Torralba Sanchez, Tifany L; Liang, Yuzhen; Di Toro, Dominic M
2017-10-03
A partitioning-based model is presented to estimate the bioconcentration of five munitions compounds and two munition-like compounds in grasses. The model uses polyparameter linear free energy relationships (pp-LFERs) to estimate the partition coefficients between soil organic carbon and interstitial water and between interstitial water and the plant cuticle, a lipid-like plant component. Inputs for the pp-LFERs are a set of numerical descriptors computed from molecular structure only that characterize the molecular properties that determine the interaction with soil organic carbon, interstitial water, and plant cuticle. The model is validated by predicting concentrations measured in the whole plant during independent uptake experiments with a root-mean-square error (log predicted plant concentration-log observed plant concentration) of 0.429. This highlights the dominant role of partitioning between the exposure medium and the plant cuticle in the bioconcentration of these compounds. The pp-LFERs can be used to assess the environmental risk of munitions compounds and munition-like compounds using only their molecular structure as input.
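The pp-LFERs the abstract describes follow the standard Abraham form log K = c + eE + sS + aA + bB + vV. The sketch below encodes that form generically; the system coefficients and solute descriptors are placeholders for illustration, not the paper's fitted values.

def pplfer_log_k(desc, coef):
    """Polyparameter LFER: log K = c + e*E + s*S + a*A + b*B + v*V.

    desc : solute descriptors (E, S, A, B, V), computed from structure
    coef : system constants (c, e, s, a, b, v) for a given phase pair,
           e.g. plant cuticle/water or soil organic carbon/water
    """
    E, S, A, B, V = desc
    c, e, s, a, b, v = coef
    return c + e * E + s * S + a * A + b * B + v * V

# Hypothetical munition-like descriptors with made-up cuticle/water constants:
print(pplfer_log_k(desc=(1.43, 1.84, 0.00, 0.52, 1.38),
                   coef=(0.20, 0.45, -0.60, -0.10, -3.10, 2.90)))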
Procedure of Partitioning Data Into Number of Data Sets or Data Group - A Review
NASA Astrophysics Data System (ADS)
Kim, Tai-Hoon
The goal of clustering is to decompose a dataset into similar groups based on an objective function. Several well-established algorithms exist for data clustering. The objective of these algorithms is to divide the data points of the feature space into a number of groups (or classes) so that a predefined set of criteria is satisfied. This article presents a comparative study of the effectiveness and efficiency of traditional data clustering algorithms. For evaluating the performance of the clustering algorithms, the Minkowski score is used here for different data sets.
Partitioning Rectangular and Structurally Nonsymmetric Sparse Matrices for Parallel Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
B. Hendrickson; T.G. Kolda
1998-09-01
A common operation in scientific computing is the multiplication of a sparse, rectangular or structurally nonsymmetric matrix and a vector. In many applications the matrix-transpose-vector product is also required. This paper addresses the efficient parallelization of these operations. We show that the problem can be expressed in terms of partitioning bipartite graphs. We then introduce several algorithms for this partitioning problem and compare their performance on a set of test matrices.
Partitioning of functional gene expression data using principal points.
Kim, Jaehee; Kim, Haseong
2017-10-12
DNA microarrays offer motivation and hope for the simultaneous study of variations in multiple genes. Gene expression is a temporal process that allows variations in expression levels with a characterized gene function over a period of time. Temporal gene expression curves can be treated as functional data since they are considered independent realizations of a stochastic process. This process requires appropriate models to identify patterns of gene functions. Partitioning the functional data can find homogeneous subgroups of entities for the massive numbers of genes within the inherent biological networks. It can therefore be a useful technique for the analysis of time-course gene expression data. We propose a new self-consistent partitioning method of functional coefficients for individual expression profiles based on an orthonormal basis system. A principal-points-based functional partitioning method is proposed for time-course gene expression data. The method explores the relationship between genes using Legendre coefficients as principal points to extract the features of gene functions. Our proposed method provides high connectivity after clustering for simulated data and finds significant subsets of genes with increased connectivity. Our approach has the comparative advantages that fewer coefficients are used from the functional data, together with the self-consistency of principal points for partitioning. As real-data applications, we find partitioned genes in budding yeast data and Escherichia coli data. The proposed method benefits from the use of principal points, dimension reduction, and the choice of an orthogonal basis system, and provides appropriately connected genes in the resulting subsets. We illustrate our method by applying it to sets of cell-cycle-regulated time-course yeast genes and E. coli genes. The proposed method is able to identify highly connected genes and to explore the complex dynamics of biological systems in functional genomics.
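A hedged sketch of the pipeline the abstract outlines: project each time-course profile onto a Legendre basis and partition genes in the coefficient space. Standard k-means (kmeans2) stands in here for the principal-points step, and the synthetic profiles are illustrative only.

import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(2)
t = np.linspace(-1, 1, 20)   # time points rescaled to the Legendre domain
profiles = np.vstack([np.sin(np.pi * t) + 0.1 * rng.standard_normal(20)
                      for _ in range(30)] +
                     [np.cos(np.pi * t) + 0.1 * rng.standard_normal(20)
                      for _ in range(30)])

# Dimension reduction: 20 time points -> 5 Legendre coefficients per gene.
coefs = np.array([np.polynomial.legendre.legfit(t, y, deg=4)
                  for y in profiles])
_, labels = kmeans2(coefs, 2, seed=3)
print(labels)  # sine-like and cosine-like genes separate into two groups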
Experimental constraints on the sulfur content in the Earth's core
NASA Astrophysics Data System (ADS)
Fei, Y.; Huang, H.; Leng, C.; Hu, X.; Wang, Q.
2015-12-01
Any core formation model would lead to the incorporation of sulfur (S) into the Earth's core, based on cosmochemical/geochemical constraints, sulfur's chemical affinity for iron (Fe), and the low eutectic melting temperature in the Fe-FeS system. Preferential partitioning of S into the melt also provides a petrologic constraint on the density difference between the liquid outer and solid inner cores. Therefore, the central issue is to constrain the amount of sulfur in the core. Geochemical constraints usually place 2-4 wt.% S in the core after accounting for its volatility, whereas more S is allowed in models based on mineral physics data. Here we re-examine the constraints on the S content in the core using both petrologic and mineral physics data. We have measured S partitioning between solid and liquid iron in the multi-anvil apparatus and the laser-heated diamond anvil cell, evaluating the effect of pressure on melting temperature and partition coefficient. In addition, we have conducted shockwave experiments on Fe-11.8wt%S using a two-stage light gas gun up to 211 GPa. The new shockwave experiments yield Hugoniot densities and longitudinal sound velocities. The measurements provide the longitudinal sound velocity before melting and the bulk sound velocity of the liquid. The measured sound velocities clearly show melting of the Fe-FeS mix with 11.8wt%S at a pressure between 111 and 129 GPa. The sound velocities at pressures above 129 GPa represent the bulk sound velocities of Fe-11.8wt%S liquid. The combined data set, including density, sound velocity, melting temperature, and S partitioning, places a tight constraint on the sulfur partition coefficient required to produce the density and velocity jumps and on the bulk sulfur content of the core.
Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P
2007-02-08
Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selection of models by using backward elimination rather than AIC or AICc, (ii) use a stringent cut-off, e.g., p = 0.0001, and (iii) conduct sensitivity analysis of results. With thoughtful application, fixed-effect codon models should provide a useful tool for large scale multi-gene analyses.
Reppas-Chrysovitsinos, Efstathios; Sobek, Anna; MacLeod, Matthew
2016-06-15
Polymeric materials flowing through the technosphere are repositories of organic chemicals throughout their life cycle. Equilibrium partition ratios of organic chemicals between these materials and air (KMA) or water (KMW) are required for models of fate and transport, high-throughput exposure assessment and passive sampling. KMA and KMW have been measured for a growing number of chemical/material combinations, but significant data gaps still exist. We assembled a database of 363 KMA and 910 KMW measurements for 446 individual compounds and nearly 40 individual polymers and biopolymers, collected from 29 studies. We used the EPI Suite and ABSOLV software packages to estimate physicochemical properties of the compounds and we employed an empirical correlation based on Trouton's rule to adjust the measured KMA and KMW values to a standard reference temperature of 298 K. Then, we used a thermodynamic triangle with Henry's law constant to calculate a complete set of 1273 KMA and KMW values. Using simple linear regression, we developed a suite of single parameter linear free energy relationship (spLFER) models to estimate KMA from the EPI Suite-estimated octanol-air partition ratio (KOA) and KMW from the EPI Suite-estimated octanol-water (KOW) partition ratio. Similarly, using multiple linear regression, we developed a set of polyparameter linear free energy relationship (ppLFER) models to estimate KMA and KMW from ABSOLV-estimated Abraham solvation parameters. We explored the two LFER approaches to investigate (1) their performance in estimating partition ratios, and (2) uncertainties associated with treating all different polymers as a single "bulk" polymeric material compartment. The models we have developed are suitable for screening assessments of the tendency for organic chemicals to be emitted from materials, and for use in multimedia models of the fate of organic chemicals in the indoor environment. In screening applications we recommend that KMA and KMW be modeled as 0.06 × KOA and 0.06 × KOW respectively, with an uncertainty range of a factor of 15.
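The closing screening rule is stated explicitly in the abstract and is simple enough to encode directly; only the function name below is invented.

def screen_kma(log_koa):
    """Screening estimate of the material-air partition ratio from log KOA,
    per the abstract's rule KMA ~ 0.06 * KOA, with the stated factor-of-15
    uncertainty band. Returns (lower bound, central estimate, upper bound)."""
    kma = 0.06 * 10 ** log_koa
    return kma / 15, kma, kma * 15

# Example: a semivolatile chemical with log KOA = 8
print(screen_kma(log_koa=8.0))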
Chemical amplification based on fluid partitioning in an immiscible liquid
Anderson, Brian L.; Colston, Bill W.; Elkin, Christopher J.
2010-09-28
A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
Apparatus for chemical amplification based on fluid partitioning in an immiscible liquid
Anderson, Brian L [Lodi, CA; Colston, Bill W [San Ramon, CA; Elkin, Christopher J [San Ramon, CA
2012-05-08
A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.
Method for chemical amplification based on fluid partitioning in an immiscible liquid
Anderson, Brian L.; Colston, Bill W.; Elkin, Christopher J.
2015-06-02
A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.
NASA Astrophysics Data System (ADS)
Reygondeau, Gabriel; Guieu, Cécile; Benedetti, Fabio; Irisson, Jean-Olivier; Ayata, Sakina-Dorothée; Gasparini, Stéphane; Koubbi, Philippe
2017-02-01
When dividing the ocean, the aim is generally to summarise a complex system into a representative number of units, each representing a specific environment, a biological community or a socio-economic specificity. Recently, several geographical partitions of the global ocean have been proposed using statistical approaches applied to remote sensing or observations gathered during oceanographic cruises. Such geographical frameworks defined at a macroscale are hardly applicable to characterising the biogeochemical features of semi-enclosed seas, which are driven by smaller-scale chemical and physical processes. Following Longhurst's biogeochemical partitioning of the pelagic realm, this study investigates the environmental divisions of the Mediterranean Sea using a large set of environmental parameters. These parameters were compiled in both the horizontal and vertical dimensions to provide a 3D spatial framework for environmental management (12 regions found for the epipelagic, 12 for the mesopelagic, 13 for the bathypelagic and 26 for the seafloor). We show that: (1) the contribution of the longitudinal environmental gradient to the biogeochemical partitions decreases with depth; (2) the partition of the surface layer cannot be extrapolated to other vertical layers, as each partition is driven by a different set of environmental variables. This new partitioning of the Mediterranean Sea has strong implications for conservation, as it highlights that management must account for differences in zoning with depth at a regional scale.
Raster Data Partitioning for Supporting Distributed GIS Processing
NASA Astrophysics Data System (ADS)
Nguyen Thai, B.; Olasz, A.
2015-08-01
The big data concept has already made an impact in the geospatial sector. Several studies have applied techniques that originated in computer science to GIS processing of huge amounts of geospatial data, and other studies treat geospatial data as if it had always been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, increasing not only the amount of raw data but also its spectral, spatial and temporal resolution. A significant portion of big data is geospatial data, and the size of such data is growing rapidly by at least 20% every year (Dasgupta, 2013). The growing volume of raw data, in different formats, representations and purposes, only becomes valuable through the wealth of information derived from it. However, computing capability and processing speed remain limiting, even when semi-automatic or automatic procedures are applied to complex geospatial data (Kristóf et al., 2014). Recently, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing. Cloud computing, moreover, requires appropriate processing algorithms that can be distributed to handle geospatial big data. The MapReduce programming model and distributed file systems have proven their capability to process non-GIS big data, but it is often inconvenient or inefficient to rewrite existing algorithms for the MapReduce model, and GIS data cannot be partitioned like text-based data by line or by byte. Hence, we seek an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting them, or with only minor modifications. This paper gives a technical overview of currently available distributed computing environments and addresses the partitioning, distribution and distributed processing of GIS (raster) data. A proof-of-concept implementation has been made for raster data partitioning, distribution and processing, and its first performance results have been compared against the commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods depend heavily on the application area; we may therefore consider data partitioning as a preprocessing step before applying processing services to the data. As a proof of concept we implemented a simple tile-based partitioning method that splits an image into smaller grids (N×M tiles) and compared the processing time to existing methods using an NDVI calculation. The concept is demonstrated using our own open-source processing framework.
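To make the tiling step concrete, here is a minimal sketch of an N×M tile-based partition with a per-tile NDVI calculation, in the spirit of the proof of concept described above. The array shapes and the 4×4 grid are illustrative assumptions, not details of the authors' framework.

```python
import numpy as np

def split_tiles(band, n, m):
    """Split a 2-D raster band into an n x m grid of tiles;
    edge tiles absorb any remainder rows/columns."""
    rows = np.array_split(band, n, axis=0)
    return [np.array_split(r, m, axis=1) for r in rows]

def ndvi(red, nir):
    """Per-pixel NDVI = (NIR - RED) / (NIR + RED), guarding against
    division by zero."""
    red, nir = red.astype(np.float64), nir.astype(np.float64)
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Hypothetical 1024x1024 scene cut into a 4x4 grid; each tile is an
# independent unit of work that could be shipped to a separate worker.
red = np.random.randint(0, 256, (1024, 1024))
nir = np.random.randint(0, 256, (1024, 1024))
red_tiles, nir_tiles = split_tiles(red, 4, 4), split_tiles(nir, 4, 4)
ndvi_tiles = [[ndvi(red_tiles[i][j], nir_tiles[i][j]) for j in range(4)]
              for i in range(4)]
mosaic = np.block(ndvi_tiles)   # reassemble the processed mosaic
```

Because NDVI is purely per-pixel, the tiles need no overlap; window-based operations would additionally require halo pixels around each tile.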
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
Adaptive zero-tree structure for curved wavelet image coding
NASA Astrophysics Data System (ADS)
Zhang, Liang; Wang, Demin; Vincent, André
2006-02-01
We investigate the issue of efficient data organization and representation of curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. Whereas in the embedded zero-tree wavelet (EZW) coder and in set partitioning in hierarchical trees (SPIHT) the parent-child relationship is defined such that a parent has four children restricted to a square of 2×2 pixels, the parent-child relationship in the adaptive zero-tree structure varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results on synthetic and natural images show the effectiveness of the proposed adaptive zero-tree structure for encoding curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.
An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes
NASA Technical Reports Server (NTRS)
Ding, Hong Q.; Ferraro, Robert D.
1996-01-01
A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.
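The recursive inertial bisection step lends itself to a compact illustration. The sketch below, under our own simplifying assumptions (serial NumPy, depth-limited recursion, median split), bisects element centroids along the principal axis of inertia; it is not the authors' parallel implementation.

```python
import numpy as np

def inertial_bisection(centroids, ids, depth):
    """Recursively bisect element centroids along the principal axis
    of inertia (eigenvector of the largest covariance eigenvalue),
    splitting at the median projection; returns index partitions."""
    if depth == 0 or len(ids) < 2:
        return [ids]
    pts = centroids - centroids.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(pts.T))
    proj = pts @ vecs[:, -1]          # coordinates along the principal axis
    order = np.argsort(proj)
    half = len(ids) // 2
    left, right = order[:half], order[half:]
    return (inertial_bisection(centroids[left], ids[left], depth - 1) +
            inertial_bisection(centroids[right], ids[right], depth - 1))

# Hypothetical mesh: 1000 element centroids in 3-D, split into 2^3 = 8 parts.
centroids = np.random.rand(1000, 3)
parts = inertial_bisection(centroids, np.arange(1000), depth=3)
print([len(p) for p in parts])
```

Splitting at the median keeps the two halves balanced, which is what makes the subsequent element and node migration loads predictable.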
ERIC Educational Resources Information Center
Mathematics Teaching, 1972
1972-01-01
Topics discussed in this column include patterns of inverse multipliers in modular arithmetic; diagrams for product sets, set intersection, and set union; function notation; patterns in the number of partitions of positive integers; and tessellations. (DT)
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km²) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high resolution.
NASA Technical Reports Server (NTRS)
Palopo, Kee; Lee, Hak-Tae; Chatterji, Gano
2011-01-01
The concept of re-partitioning the airspace into a new set of sectors to allocate capacity, rather than delaying flights to comply with the capacity constraints of a static set of sectors, is being explored. The reduction in delay (a benefit) achieved by this concept needs to be greater than the cost of the controllers and equipment needed for the additional sectors. Therefore, tradeoff studies are needed for benefits assessment of this concept.
Architecture Aware Partitioning Algorithms
2006-01-19
follows: Given a graph G = (V, E), where V is the set of vertices, n = |V| is the number of vertices, and E is the set of edges in the graph, partition the... communication link l(pi, pj) is associated with a graph edge weight e*(pi, pj) that represents the communication cost per unit of communication between... one that is local for each one. For our model we assume that communication in either direction across a given link is the same, therefore e*(pi, pj
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
Panagopoulos, Dimitri; Jahnke, Annika; Kierkegaard, Amelie; MacLeod, Matthew
2015-10-20
The sorption of cyclic volatile methyl siloxanes (cVMS) to organic matter has a strong influence on their fate in the aquatic environment. We report new measurements of the partition ratios between freshwater sediment organic carbon and water (KOC) and between Aldrich humic acid dissolved organic carbon and water (KDOC) for three cVMS, and for three polychlorinated biphenyls (PCBs) that were used as reference chemicals. Our measurements were made using a purge-and-trap method that employs benchmark chemicals to calibrate mass transfer at the air/water interface in a fugacity-based multimedia model. The measured log KOC of octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), and dodecamethylcyclohexasiloxane (D6) were 5.06, 6.12, and 7.07, and log KDOC were 5.05, 6.13, and 6.79. To our knowledge, our measurements for KOC of D6 and KDOC of D4 and D6 are the first reported. Polyparameter linear free energy relationships (PP-LFERs) derived from training sets of empirical data that did not include cVMS generally did not predict our measured partition ratios of cVMS accurately (root-mean-squared-error (RMSE) for logKOC 0.76 and for logKDOC 0.73). We constructed new PP-LFERs that accurately describe partition ratios for the cVMS as well as for other chemicals by including our new measurements in the existing training sets (logKOC RMSEcVMS: 0.09, logKDOC RMSEcVMS: 0.12). The PP-LFERs we have developed here should be further evaluated and perhaps recalibrated when experimental data for other siloxanes become available.
Making sense of metacommunities: dispelling the mythology of a metacommunity typology.
Brown, Bryan L; Sokol, Eric R; Skelton, James; Tornwall, Brett
2017-03-01
Metacommunity ecology has rapidly become a dominant framework through which ecologists understand the natural world. Unfortunately, persistent misunderstandings regarding metacommunity theory and the methods for evaluating hypotheses based on the theory are common in the ecological literature. Since its beginnings, four major paradigms-species sorting, mass effects, neutrality, and patch dynamics-have been associated with metacommunity ecology. The Big 4 have been misconstrued to represent the complete set of metacommunity dynamics. As a result, many investigators attempt to evaluate community assembly processes as strictly belonging to one of the Big 4 types, rather than embracing the full scope of metacommunity theory. The Big 4 were never intended to represent the entire spectrum of metacommunity dynamics and were rather examples of historical paradigms that fit within the new framework. We argue that perpetuation of the Big 4 typology hurts community ecology and we encourage researchers to embrace the full inference space of metacommunity theory. A related, but distinct issue is that the technique of variation partitioning is often used to evaluate the dynamics of metacommunities. This methodology has produced its own set of misunderstandings, some of which are directly a product of the Big 4 typology and others which are simply the product of poor study design or statistical artefacts. However, variation partitioning is a potentially powerful technique when used appropriately and we identify several strategies for successful utilization of variation partitioning.
Identifying finite-time coherent sets from limited quantities of Lagrangian data.
Williams, Matthew O; Rypina, Irina I; Rowley, Clarence W
2015-08-01
A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that "leak" from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, "data rich" test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or "mesh-free" methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.
Estimation of octanol/water partition coefficients using LSER parameters
Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.
1998-01-01
The logarithms of octanol/water partition coefficients, log Kow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for log Kow was 0.49. The regression equation was then used to estimate log Kow for a test set of 146 chemicals, which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
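A minimal sketch of this kind of workflow appears below: fit an ordinary least-squares model of log Kow on a descriptor matrix, then predict for held-out chemicals. The random descriptors and coefficients are stand-ins; the actual LSER parameters and the 981/146 chemical sets are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in LSER descriptor matrices: rows are chemicals, columns are
# solvation parameters (hypothetical values, not the study's data).
X_train = rng.random((981, 4))
y_train = 3.0 * X_train[:, 0] - 2.0 * X_train[:, 2] + rng.normal(0.0, 0.5, 981)

# Ordinary least squares with an intercept, as in regressing log Kow
# against the LSER parameters.
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def predict_log_kow(descriptors):
    """Estimate log Kow for chemicals from their LSER descriptors."""
    return coef[0] + descriptors @ coef[1:]

X_test = rng.random((146, 4))
log_kow_est = predict_log_kow(X_test)
residual_sd = (y_train - A @ coef).std()   # analogous to the reported 0.49
```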
NASA Astrophysics Data System (ADS)
Li, Yutong; Wang, Yuxin; Duffy, Alex H. B.
2014-11-01
Computer-based conceptual design for routine design has made great strides, yet non-routine design has not been given due attention and is still poorly automated. Considering that the function-behavior-structure (FBS) model is widely used for modeling the conceptual design process, a computer-based creativity-enhanced conceptual design model (CECD) for non-routine design of mechanical systems is presented. In the model, the leaf functions in the FBS model are decomposed into and represented with fine-grain basic operation actions (BOA), and the corresponding BOA set in the function domain is then constructed. Choosing building blocks from the database and expressing their multiple functions with BOAs, the BOA set in the structure domain is formed. Through rule-based dynamic partition of the BOA set in the function domain, many variants of regenerated functional schemes are generated. To enhance the capability to introduce new design variables into the conceptual design process and to uncover more innovative physical structure schemes, an indirect function-structure matching strategy based on reconstructing combined structure schemes is adopted. By adjusting the tightness of the partition rules and the granularity of the divided BOA subsets, and by making full use of the main and secondary functions of each basic structure when reconstructing the physical structures, new design variables and variants are introduced into the reconstruction process, and a great number of simpler physical structure schemes that organically accomplish the overall function are obtained. The creativity-enhanced conceptual design model presented has a strong capability for introducing new design variables in the function domain and for uncovering simpler physical structures that accomplish the overall function; it can therefore be used to solve non-routine conceptual design problems.
The Partition of Multi-Resolution LOD Based on Qtm
NASA Astrophysics Data System (ADS)
Hou, M.-L.; Xing, H.-Q.; Zhao, X.-S.; Chen, J.
2011-08-01
The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. To address the problem that the partition level of QTM is limited by computer hardware, a new multi-resolution LOD (Level of Details) method based on QTM is discussed in this paper. This method makes the resolution of the cells vary with the viewpoint position by partitioning the cells of QTM and selecting the particular area according to the viewpoint, and it deals with the cracks caused by different subdivisions. It thereby satisfies the requirement of locally unlimited partitioning.
Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael
2015-05-15
We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession, thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures, thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures, including the mean out degree and variance of out degrees, can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
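As an illustration of the transformation, the sketch below builds an ordinal partition network from a scalar series: each delay vector is reduced to its permutation pattern (a node), and directed edges count temporal successions between consecutive patterns. The logistic-map series and the parameter choices are our own assumptions, not the paper's Rössler setup.

```python
from collections import Counter
import numpy as np

def ordinal_network(x, dim=3, lag=1):
    """Map a time series to an ordinal partition network: nodes are
    permutation patterns of length-`dim` delay vectors (time lag `lag`);
    weighted directed edges count successions between patterns."""
    patterns = [tuple(np.argsort(x[i:i + dim * lag:lag]))
                for i in range(len(x) - (dim - 1) * lag)]
    edges = Counter(zip(patterns[:-1], patterns[1:]))
    return set(patterns), edges

# Chaotic logistic-map series as a stand-in for the Rössler data.
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

nodes, edges = ordinal_network(x, dim=3, lag=1)
out_deg = Counter(src for src, _ in edges)   # distinct out-edges per node
print(len(nodes), np.mean(list(out_deg.values())), np.var(list(out_deg.values())))
```

For a periodic signal the succession of patterns repeats, so the network collapses toward a ring; chaotic dynamics visit more patterns and spread the out-degrees, which is what the mean and variance of out degree track.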
Overlapped Partitioning for Ensemble Classifiers of P300-Based Brain-Computer Interfaces
Onishi, Akinari; Natsume, Kiyohisa
2014-01-01
A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance. PMID:24695550
Processing scalar implicature: a Constraint-Based approach
Degen, Judith; Tanenhaus, Michael K.
2014-01-01
Three experiments investigated the processing of the implicature associated with some using a “gumball paradigm”. On each trial participants saw an image of a gumball machine with an upper chamber with 13 gumballs and an empty lower chamber. Gumballs then dropped to the lower chamber and participants evaluated statements, such as “You got some of the gumballs”. Experiment 1 established that some is less natural for reference to small sets (1, 2 and 3 of the 13 gumballs) and unpartitioned sets (all 13 gumballs) compared to intermediate sets (6–8). Partitive some of was less natural than simple some when used with the unpartitioned set. In Experiment 2, including exact number descriptions lowered naturalness ratings for some with small sets but not for intermediate size sets and the unpartitioned set. In Experiment 3 the naturalness ratings from Experiment 2 predicted response times. The results are interpreted as evidence for a Constraint-Based account of scalar implicature processing and against both two-stage, Literal-First models and pragmatic Default models. PMID:25265993
PAQ: Partition Analysis of Quasispecies.
Baccam, P; Thompson, R J; Fedrigo, O; Carpenter, S; Cornette, J L
2001-01-01
The complexities of genetic data may not be accurately described by any single analytical tool. Phylogenetic analysis is often used to study the genetic relationship among different sequences. Evolutionary models and assumptions are invoked to reconstruct trees that describe the phylogenetic relationship among sequences. Genetic databases are rapidly accumulating large amounts of sequences. Newly acquired sequences, which have not yet been characterized, may require preliminary genetic exploration in order to build models describing the evolutionary relationship among sequences. There are clustering techniques that rely less on models of evolution, and thus may provide nice exploratory tools for identifying genetic similarities. Some of the more commonly used clustering methods perform better when data can be grouped into mutually exclusive groups. Genetic data from viral quasispecies, which consist of closely related variants that differ by small changes, however, may best be partitioned by overlapping groups. We have developed an intuitive exploratory program, Partition Analysis of Quasispecies (PAQ), which utilizes a non-hierarchical technique to partition sequences that are genetically similar. PAQ was used to analyze a data set of human immunodeficiency virus type 1 (HIV-1) envelope sequences isolated from different regions of the brain and another data set consisting of the equine infectious anemia virus (EIAV) regulatory gene rev. Analysis of the HIV-1 data set by PAQ was consistent with phylogenetic analysis of the same data, and the EIAV rev variants were partitioned into two overlapping groups. PAQ provides an additional tool which can be used to glean information from genetic data and can be used in conjunction with other tools to study genetic similarities and genetic evolution of viral quasispecies.
NASA Astrophysics Data System (ADS)
Bernard, Julien; Eychenne, Julia; Le Pennec, Jean-Luc; Narváez, Diego
2016-08-01
How and how much of the mass of juvenile magma is split between vent-derived tephra, PDC deposits and lavas (i.e., mass partition) is related to eruption dynamics and style. Estimating such mass partitioning budgets may prove important for hazard evaluation purposes. We calculated the volume of each product emplaced during the August 2006 paroxysmal eruption of Tungurahua volcano (Ecuador) and converted it into masses using high-resolution grainsize, componentry and density data. This data set is one of the first complete descriptions of mass partitioning associated with a VEI 3 andesitic event. The scoria fall deposit, near-vent agglutinate and lava flow include 28, 16 and 12 wt. % of the erupted juvenile mass, respectively. Much (44 wt. %) of the juvenile material fed Pyroclastic Density Currents (i.e., dense flows, dilute surges and co-PDC plumes), highlighting that tephra fall deposits do not adequately depict the size and fragmentation processes of moderate PDC-forming events. The main parameters controlling the mass partitioning are the type of magmatic fragmentation, conditions of magma ascent, and crater area topography. Comparisons of our data set with other PDC-forming eruptions of different style and magma composition suggest that moderate andesitic eruptions are proportionally more prone to produce PDCs than any other eruption type. This finding may be explained by the relatively low magmatic fragmentation efficiency of moderate andesitic eruptions. These mass partitioning data reveal important trends that may be critical for hazard assessment, notably at frequently active andesitic edifices.
[On the partition of acupuncture academic schools].
Yang, Pengyan; Luo, Xi; Xia, Youbing
2016-05-01
Nowadays, extensive attention has been paid to the research of acupuncture academic schools; however, a widely accepted method of partitioning acupuncture academic schools is still lacking. In this paper, the historical methods of partitioning acupuncture academic schools are reviewed, and three typical methods, the "partition of five schools", the "partition of eighteen schools" and the "two-stage based partition", are summarized. After a deep analysis of the disadvantages and advantages of these three methods, a new method of partitioning acupuncture academic schools, called "three-stage based partition", is proposed. In this method, the overall acupuncture academic schools are first divided into an ancient stage, a modern stage and a contemporary stage, and each school is then divided into its sub-school categories. It is believed that this method of partition can remedy the weaknesses of current methods and also explore a new model of inheritance and development, from a different perspective, through the differentiation and interaction of acupuncture academic schools across the three stages.
NASA Astrophysics Data System (ADS)
Chen, B.; Chehdi, K.; De Oliveria, E.; Cariou, C.; Charbonnier, B.
2015-10-01
In this paper a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground truth data constitute serious handicaps, especially over the large areas covered by airborne or satellite images. The developed classification approach allows i) successive partitioning of the data into several levels or partitions in which the main classes are identified first, ii) automatic estimation of the number of classes at each level without any end-user help, iii) a non-systematic subdivision of the classes of a partition Pj to form partition Pj+1, and iv) a stable partitioning result for the same data set from one run of the method to another. The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised. It estimates, at each level, the optimal number of classes and the final partition without any end-user intervention.
Abraham, Michael H; Gola, Joelle M R; Ibrahim, Adam; Acree, William E; Liu, Xiangli
2014-07-01
There is considerable interest in the blood-tissue distribution of agrochemicals, and a number of researchers have developed experimental methods for in vitro distribution. These methods involve the determination of saline-blood and saline-tissue partitions; not only are they indirect, but they do not yield the required in vivo distribution. The authors set out equations for gas-tissue and blood-tissue distribution, for partition from water into skin and for permeation from water through human skin. Together with Abraham descriptors for the agrochemicals, these equations can be used to predict values for all of these processes. The present predictions compare favourably with experimental in vivo blood-tissue distribution where available. The predictions require no more than simple arithmetic. The present method represents a much easier and much more economic way of estimating blood-tissue partitions than the method that uses saline-blood and saline-tissue partitions. It has the added advantages of yielding the required in vivo partitions and being easily extended to the prediction of partition of agrochemicals from water into skin and permeation from water through skin. © 2013 Society of Chemical Industry.
3d expansions of 5d instanton partition functions
NASA Astrophysics Data System (ADS)
Nieri, Fabrizio; Pan, Yiwen; Zabzine, Maxim
2018-04-01
We propose a set of novel expansions of Nekrasov's instanton partition functions. Focusing on 5d supersymmetric pure Yang-Mills theory with unitary gauge group on $\mathbb{C}^2_{q,t^{-1}} \times S^1$, we show that the instanton partition function admits expansions in terms of partition functions of unitary gauge theories living on the 3d subspaces $\mathbb{C}_q \times S^1$, $\mathbb{C}_{t^{-1}} \times S^1$ and their intersection along $S^1$. These new expansions are natural from the BPS/CFT viewpoint, as they can be matched with $W_{q,t}$ correlators involving an arbitrary number of screening charges of two kinds. Our constructions generalize and interpolate existing results in the literature.
Data approximation using a blending type spline construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalmo, Rune; Bratlie, Jostein
2014-11-18
Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
Combined node and link partitions method for finding overlapping communities in complex networks
Jin, Di; Gabrys, Bogdan; Dang, Jianwu
2015-01-01
Community detection in complex networks is a fundamental data analysis task in various domains, and how to effectively find overlapping communities in real applications is still a challenge. In this work, we propose a new unified model and method for finding the best overlapping communities on the basis of the associated node and link partitions derived from the same framework. Specifically, we first describe a unified model that accommodates node and link communities (partitions) together, and then present a nonnegative matrix factorization method to learn the parameters of the model. Thereafter, we infer the overlapping communities based on the derived node and link communities, i.e., determine each overlapped community between the corresponding node and link community with a greedy optimization of a local community function conductance. Finally, we introduce a model selection method based on consensus clustering to determine the number of communities. We have evaluated our method on both synthetic and real-world networks with ground-truths, and compared it with seven state-of-the-art methods. The experimental results demonstrate the superior performance of our method over the competing ones in detecting overlapping communities for all analysed data sets. Improved performance is particularly pronounced in cases of more complicated networked community structures. PMID:25715829
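To give a flavor of the matrix factorization step, the sketch below runs a symmetric nonnegative matrix factorization on an adjacency matrix, with row i of the factor giving node i's soft community memberships. It is a simplified stand-in for the paper's unified node-and-link model; the damped update rule and the toy two-clique graph are our assumptions.

```python
import numpy as np

def symmetric_nmf(A, k, iters=500, eps=1e-9, seed=1):
    """Factor a symmetric adjacency matrix A ~ W @ W.T with W >= 0,
    using damped multiplicative updates; rows of W are soft community
    memberships (a simplified stand-in for the unified model)."""
    W = np.random.default_rng(seed).random((A.shape[0], k))
    for _ in range(iters):
        W *= 0.5 + 0.5 * (A @ W) / (W @ (W.T @ W) + eps)
    return W

# Toy graph: two 5-node cliques joined by a single bridge edge (4-5).
A = np.zeros((10, 10))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
np.fill_diagonal(A, 0.0)
A[4, 5] = A[5, 4] = 1.0
W = symmetric_nmf(A, k=2)
print(W.argmax(axis=1))   # hard labels; overlap would show as mixed rows
```

Nodes with comparable weight in several columns of W are the overlapping ones; the paper goes further by deriving a link partition from the same framework and reconciling the two partitions.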
Hahus, Ian; Migliaccio, Kati; Douglas-Mankin, Kyle; Klarenberg, Geraldine; Muñoz-Carpena, Rafael
2018-04-27
Hierarchical and partitional cluster analyses were used to compartmentalize Water Conservation Area 1, a managed wetland within the Arthur R. Marshall Loxahatchee National Wildlife Refuge in southeast Florida, USA, based on physical, biological, and climatic geospatial attributes. Single, complete, average, and Ward's linkages were tested during the hierarchical cluster analyses, with average linkage providing the best results. In general, the partitional method, partitioning around medoids, found clusters that were more evenly sized and more spatially aggregated than those resulting from the hierarchical analyses. However, hierarchical analysis appeared to be better suited to identify outlier regions that were significantly different from other areas. The clusters identified by geospatial attributes were similar to clusters developed for the interior marsh in a separate study using water quality attributes, suggesting that similar factors have influenced variations in both the set of physical, biological, and climatic attributes selected in this study and water quality parameters. However, geospatial data allowed further subdivision of several interior marsh clusters identified from the water quality data, potentially indicating zones with important differences in function. Identification of these zones can be useful to managers and modelers by informing the distribution of monitoring equipment and personnel as well as delineating regions that may respond similarly to future changes in management or climate.
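For readers unfamiliar with partitioning around medoids, a naive sketch follows: assign each cell to its nearest medoid, then move each medoid to the member that minimizes total within-cluster distance. The attribute matrix is hypothetical; the study's actual geospatial attributes and distance choices are not reproduced.

```python
import numpy as np

def pam(X, k, iters=100, seed=0):
    """Naive partitioning-around-medoids (k-medoids) on rows of X."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for m in range(k):
            members = np.flatnonzero(labels == m)
            if members.size:    # move medoid to the most central member
                within = D[np.ix_(members, members)].sum(axis=0)
                new_medoids[m] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Hypothetical wetland cells described by 6 scaled attributes each.
X = np.random.default_rng(2).random((300, 6))
medoids, zone = pam(X, k=4)
```

Because medoids must be actual data points, the method is less sensitive to outliers than centroid-based clustering and tends to produce the evenly sized clusters reported above.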
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
A Novel Method for Discovering Fuzzy Sequential Patterns Using the Simple Fuzzy Partition Method.
ERIC Educational Resources Information Center
Chen, Ruey-Shun; Hu, Yi-Chung
2003-01-01
Discusses sequential patterns, data mining, knowledge acquisition, and fuzzy sequential patterns described by natural language. Proposes a fuzzy data mining technique to discover fuzzy sequential patterns by using the simple partition method which allows the linguistic interpretation of each fuzzy set to be easily obtained. (Author/LRW)
Integrated data lookup and replication scheme in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Chen, Kai; Nahrstedt, Klara
2001-11-01
Accessing remote data is a challenging task in mobile ad hoc networks. Two problems have to be solved: (1) how to learn about available data in the network; and (2) how to access desired data even when the original copy of the data is unreachable. In this paper, we develop an integrated data lookup and replication scheme to solve these problems. In our scheme, a group of mobile nodes collectively host a set of data to improve data accessibility for all members of the group. They exchange data availability information by broadcasting advertising (ad) messages to the group using an adaptive sending rate policy. The ad messages are used by other nodes to derive a local data lookup table, and to reduce data redundancy within a connected group. Our data replication scheme predicts group partitioning based on each node's current location and movement patterns, and replicates data to other partitions before partitioning occurs. Our simulations show that data availability information can quickly propagate throughout the network, and that the successful data access ratio of each node is significantly improved.
Black holes in higher spin supergravity
NASA Astrophysics Data System (ADS)
Datta, Shouvik; David, Justin R.
2013-07-01
We study black hole solutions in Chern-Simons higher spin supergravity based on the superalgebra sl(3|2). These black hole solutions have a U(1) gauge field and a spin 2 hair in addition to the spin 3 hair. These additional fields correspond to the R-symmetry charges of the supergroup sl(3|2). Using the relation between the bulk field equations and the Ward identities of a CFT with $\mathcal{N}=2$ super-$\mathcal{W}_3$ symmetry, we identify the bulk charges and chemical potentials with those of the boundary CFT. From these identifications we see that a suitable set of variables to study this black hole is the set of charges present in three decoupled bosonic sub-algebras of the $\mathcal{N}=2$ super-$\mathcal{W}_3$ algebra. The entropy and the partition function of these R-charged black holes are then evaluated in terms of the charges of the bulk theory as well as in terms of its chemical potentials. We then compute the partition function in the dual CFT and find exact agreement with the bulk partition function.
Feller, Chrystel; Favre, Patrick; Janka, Ales; Zeeman, Samuel C; Gabriel, Jean-Pierre; Reinhardt, Didier
2015-01-01
Plants are highly plastic in their potential to adapt to changing environmental conditions. For example, they can selectively promote the relative growth of the root and the shoot in response to limiting supply of mineral nutrients and light, respectively, a phenomenon that is referred to as balanced growth or functional equilibrium. To gain insight into the regulatory network that controls this phenomenon, we took a systems biology approach that combines experimental work with mathematical modeling. We developed a mathematical model representing the activities of the root (nutrient and water uptake) and the shoot (photosynthesis), and their interactions through the exchange of the substrates sugar and phosphate (Pi). The model has been calibrated and validated with two independent experimental data sets obtained with Petunia hybrida. It involves a realistic environment with a day-and-night cycle, which necessitated the introduction of a transitory carbohydrate storage pool and an endogenous clock for coordination of metabolism with the environment. Our main goal was to grasp the dynamic adaptation of shoot:root ratio as a result of changes in light and Pi supply. The results of our study are in agreement with balanced growth hypothesis, suggesting that plants maintain a functional equilibrium between shoot and root activity based on differential growth of these two compartments. Furthermore, our results indicate that resource partitioning can be understood as the emergent property of many local physiological processes in the shoot and the root without explicit partitioning functions. Based on its encouraging predictive power, the model will be further developed as a tool to analyze resource partitioning in shoot and root crops.
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary, with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
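The spline-interpolation variant can be illustrated compactly. The sketch below uses a Wendland C2 compactly supported kernel, solves a dense system for weights on the source cloud, and evaluates at destination points; the dense serial solve and the support radius are our simplifications of the sparse, parallel scheme described above.

```python
import numpy as np

def wendland_c2(r, radius):
    """Wendland C2 compactly supported radial basis function."""
    q = np.clip(r / radius, 0.0, 1.0)
    return (1.0 - q) ** 4 * (4.0 * q + 1.0)

def rbf_transfer(src_pts, src_vals, dst_pts, radius):
    """Transfer a field from source to destination points by RBF
    spline interpolation (dense solve for clarity; a parallel version
    would assemble these operators as sparse matrices)."""
    d_ss = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)
    weights = np.linalg.solve(wendland_c2(d_ss, radius), src_vals)
    d_ds = np.linalg.norm(dst_pts[:, None] - src_pts[None, :], axis=-1)
    return wendland_c2(d_ds, radius) @ weights

rng = np.random.default_rng(3)
src = rng.random((200, 3))            # source cloud (e.g. fluid surface mesh)
dst = rng.random((50, 3))             # destination cloud (e.g. solid mesh)
field = np.sin(np.pi * src[:, 0])     # smooth field to transfer
transferred = rbf_transfer(src, field, dst, radius=0.5)
```

Compact support is what enables sparse, scalable assembly: each point interacts only with neighbours inside the radius, which is exactly what the fast parallel radius search supplies.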
Dynamic connectivity regression: Determining state-related changes in brain connectivity
Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.
2014-01-01
Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408
NASA Astrophysics Data System (ADS)
Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil
2001-07-01
We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array detector (FPA) and a Michelson step-scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA used consists of 64×64 pixels, each 61 μm², and has a spectral range of 2-10.5 μm. Each imaging data set was collected at 16 cm⁻¹ spectral resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, which demonstrate the successful partitioning of distinct tissue type domains. Additionally, analysis of differences in the centroid spectra corresponding to different tissue types provides insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.
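As a sketch of the clustering step, here is a minimal Fuzzy C-Means implementation operating on pixel spectra; the membership images come from reshaping each column of U back onto the 64×64 detector grid. Array sizes follow the text, but the random spectra, fuzziness exponent and iteration count are illustrative.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal Fuzzy C-Means: returns membership matrix U (n x c) and
    centroid spectra (c x p). Fuzziness exponent m=2 is the common choice."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        d = np.fmax(d, 1e-12)                 # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centroids

# Hypothetical stand-in: 4096 pixel spectra (one per FPA pixel), 512 bands,
# partitioned into 3 tissue-type clusters.
spectra = np.random.rand(4096, 512)
U, centroids = fuzzy_cmeans(spectra, c=3)
tissue_maps = U.reshape(64, 64, 3)            # one membership image per cluster
```

Classifying a new data set with the original centroid spectra, as described above, amounts to computing the membership update once against fixed centroids.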
NASA Astrophysics Data System (ADS)
Topping, D. O.; Lowe, D.; McFiggans, G.; Zaveri, R. A.
2016-12-01
Gas to particle partitioning of atmospheric compounds occurs through disequilibrium mass transfer rather than through instantaneous equilibrium. However, it is common to treat only the inorganic compounds as partitioning dynamically whilst organic compounds, represented by the Volatility Basis Set (VBS), are partitioned instantaneously. In this study we implement a more realistic dynamic partitioning of organic compounds in a regional framework and assess the impact on aerosol mass and microphysics. It is also common to assume condensed phase water is only associated with inorganic components; we thus also assess the sensitivity to assuming all organics are hygroscopic according to their prescribed molecular weight. For this study we use WRF-Chem v3.4.1, focusing on anthropogenic dominated North-Western Europe. Gas-phase chemistry is represented using CBM-Z whilst aerosol dynamics are simulated using the 8-section MOSAIC scheme, including a 9-bin volatility basis set (VBS) treatment of organic aerosol. Results indicate that predicted mass loadings can vary significantly. Without gas phase ageing of higher volatility compounds, dynamic partitioning always results in lower mass loadings downwind of emission sources. The inclusion of condensed phase water in both partitioning models increases the predicted PM mass, resulting from a larger contribution from higher volatility organics, if present. If gas phase ageing of VBS compounds is allowed to occur in a dynamic model, this can often lead to higher predicted mass loadings, contrary to the expected behaviour from a simple non-reactive gas phase box model. As descriptions of aerosol phase processes improve within regional models, the baseline descriptions of partitioning should retain the ability to treat dynamic partitioning of organic compounds. Using our simulations, we discuss whether derived sensitivities to aerosol processes in existing models may be inherently biased. This work was supported by the Natural Environment Research Council within the RONOCO (NE/F004656/1) and CCN-Vol (NE/L007827/1) projects.
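The contrast between the two treatments can be shown with a single-compound toy model: instantaneous equilibrium applies the VBS partitioning fraction at once, while the dynamic treatment relaxes the aerosol phase toward that equilibrium at a finite mass-transfer rate. All parameter values here are illustrative, not WRF-Chem or MOSAIC settings.

```python
import numpy as np

def compare_partitioning(c_gas0, c_star, c_oa, k=0.05, dt=1.0, steps=200):
    """Toy single-compound comparison. c_star: effective saturation
    concentration of the VBS bin (ug/m3); c_oa: absorbing organic
    aerosol mass (ug/m3); k: first-order mass-transfer rate (1/s)."""
    f_eq = 1.0 / (1.0 + c_star / c_oa)    # VBS equilibrium particle fraction
    c_aer_equilibrium = f_eq * c_gas0     # instantaneous treatment
    c_gas, c_aer = c_gas0, 0.0            # dynamic treatment
    history = np.empty(steps)
    for t in range(steps):
        target = f_eq * (c_gas + c_aer)   # equilibrium for the current total
        dc = k * (target - c_aer) * dt    # finite-rate condensation step
        c_aer += dc
        c_gas -= dc
        history[t] = c_aer
    return c_aer_equilibrium, history

eq, dyn = compare_partitioning(c_gas0=2.0, c_star=1.0, c_oa=5.0)
print(eq, dyn[-1])   # the dynamic value approaches, but lags, the equilibrium one
```

Downwind of a source the air mass moves on before the lag closes, which is one intuition for why the dynamic scheme predicts lower mass loadings there.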
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.
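Stripped of the patent language, the scheme is a lookup table of per-mode submodels. A toy sketch under assumed names, with linear regressors standing in for whatever model family an implementation would use:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class ModePartitionedMonitor:
    """Hypothetical illustration of mode-partitioned estimation (not the patented system)."""
    def fit(self, X, y, modes):
        # train one submodel on the training-data subset for each operating mode
        self.submodels = {m: LinearRegression().fit(X[modes == m], y[modes == m])
                          for m in np.unique(modes)}
        return self

    def estimate(self, x_obs, mode):
        # select the submodel for the determined mode, then estimate signal values
        return self.submodels[mode].predict(x_obs[None, :])[0]

    def residual(self, x_obs, y_obs, mode):
        # surveillance logic would alarm on persistently large residuals
        return y_obs - self.estimate(x_obs, mode)
```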
Gilbert, Dorothea; Witt, Gesine; Smedes, Foppe; Mayer, Philipp
2016-06-07
Polymers are increasingly applied for the enrichment of hydrophobic organic chemicals (HOCs) from various types of samples and media in many analytical partitioning-based measuring techniques. We propose using polymers as a reference partitioning phase and introduce polymer-polymer partitioning as the basis for a deeper insight into partitioning differences of HOCs between polymers, calibrating analytical methods, and consistency checking of existing and calculation of new partition coefficients. Polymer-polymer partition coefficients were determined for polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and organochlorine pesticides (OCPs) by equilibrating 13 silicones, including polydimethylsiloxane (PDMS) and low-density polyethylene (LDPE) in methanol-water solutions. Methanol as cosolvent ensured that all polymers reached equilibrium while its effect on the polymers' properties did not significantly affect silicone-silicone partition coefficients. However, we noticed minor cosolvent effects on determined polymer-polymer partition coefficients. Polymer-polymer partition coefficients near unity confirmed identical absorption capacities of several PDMS materials, whereas larger deviations from unity were indicated within the group of silicones and between silicones and LDPE. Uncertainty in polymer volume due to imprecise coating thickness or the presence of fillers was identified as the source of error for partition coefficients. New polymer-based (LDPE-lipid, PDMS-air) and multimedia partition coefficients (lipid-water, air-water) were calculated by applying the new concept of a polymer as reference partitioning phase and by using polymer-polymer partition coefficients as conversion factors. The present study encourages the use of polymer-polymer partition coefficients, recognizing that polymers can serve as a linking third phase for a quantitative understanding of equilibrium partitioning of HOCs between any two phases.
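The bookkeeping behind using polymer-polymer partition coefficients as conversion factors is that partition coefficients chain multiplicatively between phases, K(A/C) = K(A/B) × K(B/C), so log K values simply add. A sketch with purely illustrative numbers (not values measured in the study):

```python
# Hypothetical example: derive a lipid-water partition coefficient by routing
# through PDMS as the reference polymer phase.
log_k_pdms_water = 5.2   # assumed log K for some PCB, PDMS/water
log_k_lipid_pdms = 0.6   # assumed lipid-PDMS conversion factor
log_k_lipid_water = log_k_lipid_pdms + log_k_pdms_water
print(f"log K(lipid/water) = {log_k_lipid_water:.1f}")
```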
Bell, Charreau S; Obstein, Keith L; Valdastri, Pietro
2013-11-01
Colorectal cancer is one of the leading causes of cancer-related deaths in the world, although it can be effectively treated if detected early. Teleoperated flexible endoscopes are an emerging technology to ease patient apprehension about the procedure, and subsequently increase compliance. Essential to teleoperation is robust feedback reflecting the change in pose (i.e., position and orientation) of the tip of the endoscope. The goal of this study is to first describe a novel image-based tracking system for teleoperated flexible endoscopes, and subsequently determine its viability in a clinical setting. The proposed approach leverages artificial neural networks (ANNs) to learn the mapping that links the optical flow between two sequential images to the change in the pose of the camera. Secondly, the study investigates for the first time how narrow band illumination (NBI) - today available in commercial gastrointestinal endoscopes - can be applied to enhance feature extraction, and quantifies the effect of NBI and white light illumination (WLI), as well as their color information, on the strength of features extracted from the endoscopic camera stream. In order to provide the best features for the neural networks to learn the change in pose based on the image stream, we investigated two different imaging modalities - WLI and NBI - and we applied two different spatial partitions - lumen-centered and grid-based - to create descriptors used as input to the ANNs. An experiment was performed to compare the error of these four variations, measured as root mean square error (RMSE) against ground truth given by a robotic arm, to that of a commercial state-of-the-art magnetic tracker. The viability of this technique for a clinical setting was then tested using the four ANN variations, a magnetic tracker, and a commercial colonoscope. The trial was performed by an expert endoscopist (>2000 lifetime procedures) on a colonoscopy training model with porcine blood, and the RMSE of the ANN output was calculated with respect to the magnetic tracker readings. Using the image stream obtained from the commercial endoscope, the strength of the features extracted was evaluated. In the first experiment, the best ANNs resulted from grid-based partitioning under WLI (2.42mm RMSE) for position, and from lumen-centered partitioning under NBI (1.69° RMSE) for rotation. By comparison, the performance of the tracker was 2.49mm RMSE in position and 0.89° RMSE in rotation. The trial with the commercial endoscope indicated that lumen-centered partitioning was the best overall, while NBI outperformed WLI in terms of illumination modality. The performance of lumen-centered partitioning with NBI was 1.03±0.8mm RMSE in positional degrees of freedom (DOF), and 1.26±0.98° RMSE in rotational DOF, while with WLI, the performance was 1.56±1.15mm RMSE in positional DOF and 2.45±1.90° RMSE in rotational DOF. Finally, the features extracted under NBI were found to be twice as strong as those extracted under WLI, but no significant difference in feature strength was observed between a grayscale version of the image and the red, green, and blue color channels. This work demonstrates that both WLI and NBI, combined with feature partitioning based on the anatomy of the colon, provide valid mechanisms for endoscopic camera pose estimation via the image stream. Illumination provided by WLI and NBI produces ANNs with similar performance, comparable to that of a state-of-the-art magnetic tracker. However, NBI produces features that are stronger than WLI, which enables more robust feature tracking and better ANN accuracy. Thus, NBI with lumen-centered partitioning proved the best approach among the different variations tested for vision-based pose estimation. The proposed approach takes advantage of components already available in commercial gastrointestinal endoscopes to provide accurate feedback about the motion of the tip of the endoscope. This solution may serve as an enabling technology for closed-loop control of teleoperated flexible endoscopes. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.
1988-01-01
A complete listing is given of the expert system rules for the Entry phase of the Onboard Navigation (ONAV) Ground Based Expert Trainer System for aircraft/space shuttle navigation. These source listings appear in the same format as utilized and required by the C Language Integrated Production System (CLIPS) expert system shell which is the basis for the ONAV entry system. A schematic overview is given of how the rules are organized. These groups result from a partitioning of the rules according to the overall function which a given set of rules performs. This partitioning was established and maintained according to that established in the knowledge specification document. In addition, four other groups of rules are specified. The four groups (control flow, operator inputs, output management, and data tables) perform functions that affect all the other functional rule groups. As the name implies, control flow ensures that the rule groups are executed in the order required for proper operation; operator input rules control the introduction into the CLIPS fact base of various kinds of data required by the expert system; output management rules control the updating of the ONAV expert system user display screen during execution of the system; and data tables are static information utilized by many different rule sets gathered in one convenient place.
Generalization of multifractal theory within quantum calculus
NASA Astrophysics Data System (ADS)
Olemskoi, A.; Shuda, I.; Borisyuk, V.
2010-03-01
On the basis of the deformed series in quantum calculus, we generalize the partition function and the mass exponent of a multifractal, as well as the average of a random variable distributed over a self-similar set. For the partition function, such expansion is shown to be determined by binomial-type combinations of the Tsallis entropies related to manifold deformations, while the mass exponent expansion generalizes the known relation τq=Dq(q-1). We find the equation for the set of averages related to ordinary, escort, and generalized probabilities in terms of the deformed expansion as well. Multifractals related to the Cantor binomial set, exchange currency series, and porous-surface condensates are considered as examples.
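For reference, the undeformed relations being generalised are the standard multifractal ones; the paper's deformed-series expansions reduce to these in the undeformed limit:

```latex
% p_i is the measure of box i at scale \ell, \tau_q the mass exponent,
% and D_q the generalised (Renyi) dimensions.
\begin{equation}
  Z_q(\ell) \;=\; \sum_i p_i^{\,q} \;\sim\; \ell^{\,\tau_q},
  \qquad
  \tau_q \;=\; D_q\,(q-1).
\end{equation}
```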
NASA Astrophysics Data System (ADS)
Nielsen, R. L.; Ghiorso, M. S.; Trischman, T.
2015-12-01
The database traceDs is designed to provide a transparent and accessible resource of experimental partitioning data. It now includes ~90% of all the experimental trace element partitioning data (~4000 experiments) produced over the past 45 years, and is accessible through a web-based interface (via the portal lepr.ofm-research.org). We set a minimum standard for inclusion, with the threshold criteria being the inclusion of: experimental conditions (temperature, pressure, device, container, time, etc.); major element composition of the phases; and trace element analyses of the phases. Data sources that did not report these minimum components were not included. The rationale for not including such data is that the degree of equilibration is unknown, and more important, no rigorous approach to modeling the behavior of trace elements is possible without knowledge of the composition of the phases and the temperature and pressure of formation/equilibration. The data are stored using a schema derived from that of the Library of Experimental Phase Relations (LEPR), modified to account for additional metadata, and restructured to permit multiple analytical entries for various element/technique/standard combinations. In the process of populating the database, we have learned a number of things about the existing published experimental partitioning data. Most important are: ~20% of the papers do not satisfy one or more of the threshold criteria; the standard format for presenting data is the average, a convention developed when publication space was constrained, in spite of the fact that all the information can now be published as electronic supplements; and the uncertainties published with the compositional data are often not adequately explained (e.g., 1 or 2 sigma, standard deviation of the average, etc.). We propose a new set of publication standards for experimental data that include the minimum criteria described above, the publication of all analyses with errors based on peak count rates and background, plus information on the structural state of the mineral (e.g., orthopyroxene vs. pigeonite).
Partitioning error components for accuracy-assessment of near-neighbor methods of imputation
Albert R. Stage; Nicholas L. Crookston
2007-01-01
Imputation is applied for two quite different purposes: to supply missing data to complete a data set for subsequent modeling analyses or to estimate subpopulation totals. Error properties of the imputed values have different effects in these two contexts. We partition errors of imputation derived from similar observation units as arising from three sources:...
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, Albert C.R., E-mail: albert@fisica.ufjf.br; Takakura, Flavio I., E-mail: takakura@fisica.ufjf.br; Abreu, Everton M.C., E-mail: evertonabreu@ufrrj.br
In this work we have obtained a higher-derivative Lagrangian for a charged fluid coupled with the electromagnetic field, and the Dirac constraint analysis was discussed. A set of first-class constraints fixed by a noncovariant gauge condition was obtained. The path integral formalism was used to obtain the partition function for the corresponding higher-derivative Hamiltonian, and the Faddeev–Popov ansatz was used to construct an effective Lagrangian. Through the partition function, a Stefan–Boltzmann-type law was obtained. - Highlights: • Higher-derivative Lagrangian for a charged fluid. • Electromagnetic coupling and Dirac's constraint analysis. • Partition function through path integral formalism. • Stefan–Boltzmann-type law through the partition function.
Abe, Toshikazu; Tokuda, Yasuharu; Cook, E Francis
2011-01-01
Optimal acceptable time intervals from collapse to bystander cardiopulmonary resuscitation (CPR) for neurologically favorable outcome among adults with witnessed out-of-hospital cardiopulmonary arrest (CPA) have been unclear. Our aim was to assess the optimal acceptable thresholds of the time intervals of CPR for neurologically favorable outcome and survival using a recursive partitioning model. From January 1, 2005 through December 31, 2009, we conducted a prospective population-based observational study across Japan involving consecutive out-of-hospital CPA patients (N = 69,648) who received witnessed bystander CPR. Of the 69,648 patients, 34,605 were assigned to the derivation data set and 35,043 to the validation data set. The outcomes assessed were survival and neurologically favorable outcome at one month, the latter defined as category one (good cerebral performance) or two (moderate cerebral disability) of the cerebral performance categories. Based on the recursive partitioning model applied to the derivation dataset (n = 34,605) to predict the neurologically favorable outcome at one month, 5 min was the acceptable time interval from collapse to CPR initiation; 11 min from collapse to ambulance arrival; 18 min from collapse to return of spontaneous circulation (ROSC); and 19 min from collapse to hospital arrival. Among the validation dataset (n = 35,043), 209/2,292 (9.1%) of all patients with the acceptable time intervals and 1,388/2,706 (52.1%) of the subgroup with the acceptable time intervals and pre-hospital ROSC showed neurologically favorable outcome. Initiation of CPR should occur within 5 min for a neurologically favorable outcome among adults with witnessed out-of-hospital CPA. Patients with the acceptable time intervals of bystander CPR and pre-hospital ROSC within 18 min could have a 50% chance of neurologically favorable outcome.
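The threshold-finding step of recursive partitioning can be illustrated with a depth-one decision tree on synthetic data (hypothetical outcome probabilities; not the registry data above):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
t_cpr = rng.uniform(0, 20, 5000)            # collapse-to-CPR interval, minutes
p_good = 0.25 * (t_cpr < 5) + 0.05          # assumed: outcome odds drop past 5 min
good = rng.random(5000) < p_good            # neurologically favorable at one month?

stump = DecisionTreeClassifier(max_depth=1).fit(t_cpr[:, None], good)
print(f"learned split: {stump.tree_.threshold[0]:.1f} min")  # recovers ~5 min
```

Deeper trees over several time variables at once would yield nested cut points analogous to the 5/11/18/19 min thresholds reported above.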
Dubarry, Nelly; Pasta, Franck; Lane, David
2006-01-01
Most bacterial chromosomes carry an analogue of the parABS systems that govern plasmid partition, but their role in chromosome partition is ambiguous. parABS systems might be particularly important for orderly segregation of multipartite genomes, where their role may thus be easier to evaluate. We have characterized parABS systems in Burkholderia cenocepacia, whose genome comprises three chromosomes and one low-copy-number plasmid. A single parAB locus and a set of ParB-binding (parS) centromere sites are located near the origin of each replicon. ParA and ParB of the longest chromosome are phylogenetically similar to analogues in other multichromosome and monochromosome bacteria but are distinct from those of smaller chromosomes. The latter form subgroups that correspond to the taxa of their hosts, indicating evolution from plasmids. The parS sites on the smaller chromosomes and the plasmid are similar to the “universal” parS of the main chromosome but with a sequence specific to their replicon. In an Escherichia coli plasmid stabilization test, each parAB exhibits partition activity only with the parS of its own replicon. Hence, parABS function is based on the independent partition of individual chromosomes rather than on a single communal system or network of interacting systems. Stabilization by the smaller chromosome and plasmid systems was enhanced by mutation of parS sites and a promoter internal to their parAB operons, suggesting autoregulatory mechanisms. The small chromosome ParBs were found to silence transcription, a property relevant to autoregulation. PMID:16452432
Yan, Bo; Pan, Chongle; Olman, Victor N; Hettich, Robert L; Xu, Ying
2004-01-01
Mass spectrometry is one of the most popular analytical techniques for identification of individual proteins in a protein mixture, one of the basic problems in proteomics. It identifies a protein through identifying its unique mass spectral pattern. While the problem is theoretically solvable, it remains a challenging problem computationally. One of the key challenges comes from the difficulty in distinguishing the N- and C-terminus ions, mostly b- and y-ions respectively. In this paper, we present a graph algorithm for solving the problem of separating b- from y-ions in a set of mass spectra. We represent each spectral peak as a node and consider two types of edges: a type-1 edge connects two peaks possibly of the same ion type and a type-2 edge connects two peaks possibly of different ion types, predicted based on local information. The ion-separation problem is then formulated and solved as a graph partition problem: partition the graph into three subgraphs, corresponding to b-ions, y-ions, and others respectively, so as to maximize the total weight of type-1 edges while minimizing the total weight of type-2 edges within each subgraph. We have developed a dynamic programming algorithm for rigorously solving this graph partition problem and implemented it as a computer program PRIME. We have tested PRIME on 18 data sets of highly accurate FT-ICR tandem mass spectra and found that it achieved ~90% accuracy for separation of b- and y-ions.
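The objective PRIME optimises is easy to state directly. An exhaustive-search sketch for tiny peak sets (the paper's dynamic program solves the same objective rigorously and efficiently; this 3^n enumeration is illustration only):

```python
from itertools import product
import numpy as np

def best_partition(type1, type2, n_peaks):
    """type1[i][j]: weight that peaks i,j share an ion type; type2[i][j]: that they differ."""
    best_score, best_labels = -np.inf, None
    for labels in product((0, 1, 2), repeat=n_peaks):   # 0 = b, 1 = y, 2 = other
        score = 0.0
        for i in range(n_peaks):
            for j in range(i + 1, n_peaks):
                if labels[i] == labels[j]:              # edge falls inside one subgraph
                    score += type1[i][j] - type2[i][j]  # reward type-1, penalise type-2
        if score > best_score:
            best_score, best_labels = score, labels
    return best_labels, best_score
```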
Hou, Tingjun; Xu, Xiaojie
2002-12-01
In this study, the relationships between the brain-blood concentration ratio of 96 structurally diverse compounds with a large number of structurally derived descriptors were investigated. The linear models were based on molecular descriptors that can be calculated for any compound simply from a knowledge of its molecular structure. The linear correlation coefficients of the models were optimized by genetic algorithms (GAs), and the descriptors used in the linear models were automatically selected from 27 structurally derived descriptors. The GA optimizations resulted in a group of linear models with three or four molecular descriptors with good statistical significance. The change of descriptor use as the evolution proceeds demonstrates that the octane/water partition coefficient and the partial negative solvent-accessible surface area multiplied by the negative charge are crucial to brain-blood barrier permeability. Moreover, we found that the predictions using multiple QSPR models from GA optimization gave quite good results in spite of the diversity of structures, which was better than the predictions using the best single model. The predictions for the two external sets with 37 diverse compounds using multiple QSPR models indicate that the best linear models with four descriptors are sufficiently effective for predictive use. Considering the ease of computation of the descriptors, the linear models may be used as general utilities to screen the blood-brain barrier partitioning of drugs in a high-throughput fashion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, D. K.; Taylor, A. S.; Edwards, T.B.
2005-06-26
The objective of this investigation was to appeal to the available ComPro™ database of glass compositions and measured PCTs that have been generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS)--a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system to differentiate between acceptable and unacceptable glasses, two types of errors (Type I and Type II errors) were monitored. A Type I error reflects that a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the "Index System") was based on the use of an evolving system of compositional constraints which were used to explore the possibility of defining an AGCR. This approach was primarily based on "glass science" insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest to improve melt rate or increase waste loadings for DWPF as compared to the current durability model, Type II errors were also committed. In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors. A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error results in a more serious misclassification that could result in allowing the transfer of a Slurry Mix Evaporator (SME) batch to the melter, which is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria. More specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data driven and empirically derived--glass science was not a factor. In this approach, the collection of composition-durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al₂O₃ for this dataset) that most effectively partitions the PCT responses (NL [B]'s)--perhaps not 100% effective based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable.
This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only 3 oxides (Al₂O₃, CaO, and MgO) and critical values of > 3.75 wt% Al₂O₃, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned in a way in which no Type II errors were committed. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which did cause some initial concerns regarding excessive conservatism regardless of its ability to identify an AGCR. However, a preliminary review of the 1428 "acceptable" glasses defining the AGCR shows they include glass systems of interest to support the accelerated mission.
Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.
Yin, Guosheng; Ma, Yanyuan
2013-01-01
The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
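A minimal sketch of the procedure for a normal model (an assumed example, not the authors' code): the bin counts come from the original data, while the expected counts use the MLE fitted to one bootstrap resample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.2, size=500)                   # observed data
boot = rng.choice(x, size=x.size, replace=True)      # one bootstrap resample
mu, sd = boot.mean(), boot.std(ddof=0)               # bootstrap-sample MLE

edges = np.quantile(x, np.linspace(0, 1, 11))        # 10 data-driven bins
obs, _ = np.histogram(x, bins=edges)
expected = x.size * np.diff(stats.norm.cdf(edges, mu, sd))
chi2 = np.sum((obs - expected) ** 2 / expected)
pval = stats.chi2.sf(chi2, df=len(obs) - 1)          # chi-squared reference recovered
print(f"chi2 = {chi2:.2f}, p = {pval:.3f}")
```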
Multi-scale modularity and motif distributional effect in metabolic networks.
Gao, Shang; Chen, Alan; Rahmani, Ali; Zeng, Jia; Tan, Mehmet; Alhajj, Reda; Rokne, Jon; Demetrick, Douglas; Wei, Xiaohui
2016-01-01
Metabolism is a set of fundamental processes that play important roles in a plethora of biological and medical contexts. It is understood that the topological information of reconstructed metabolic networks, such as modular organization, has crucial implications on biological functions. Recent interpretations of modularity in network settings provide a view of multiple network partitions induced by different resolution parameters. Here we ask the question: How do multiple network partitions affect the organization of metabolic networks? Since network motifs are often interpreted as the super families of evolved units, we further investigate their impact under multiple network partitions and investigate how the distribution of network motifs influences the organization of metabolic networks. We studied Homo sapiens, Saccharomyces cerevisiae and Escherichia coli metabolic networks; we analyzed the relationship between different community structures and motif distribution patterns. Further, we quantified the degree to which motifs participate in the modular organization of metabolic networks.
Research of image retrieval technology based on color feature
NASA Astrophysics Data System (ADS)
Fu, Yanjun; Jiang, Guangyu; Chen, Fengying
2009-10-01
Recently, with the development of communication and computer technology and the improvement of storage technology and the capability of digital image equipment, more image resources are available to us than ever, and a means of locating the proper image quickly and accurately is therefore needed. The early method was to set up key words for searching in the database, but this method becomes very difficult when searching among many more pictures. In order to overcome the limitations of the traditional searching method, content-based image retrieval technology arose; it is now a hot research subject. Color image retrieval is an important part of it, and color is the most important feature for color image retrieval. Three key questions on how to make use of the color characteristic are discussed in the paper: the expression of color, the extraction of color features, and the measurement of similarity based on color. On this basis, the extraction of the color histogram feature is discussed in particular. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. Its basic idea is to divide the image space according to a certain strategy and then calculate the color histogram of each block as the color feature of that block. Users choose the blocks that contain important spatial information and set the corresponding weight values. The system calculates the distance between the corresponding blocks that the users chose. The other blocks are merged into partial overall histograms, for which the distance is also calculated. All the distances are then accumulated as the real distance between two pictures. The partition-overall histogram combines the advantages of the two methods above: choosing blocks makes the feature contain more spatial information, which improves performance, and the distances between partition-overall histograms are invariant to rotation and translation. The HSV color space is used to represent the color characteristics of the image, as it suits the visual characteristics of humans. Taking advantage of human color perception, the method quantizes the color space with unequal intervals and obtains a feature vector. Finally, it matches the similarity of images using the histogram-intersection algorithm and the partition-overall histogram. Users can choose a demonstration image to express the visual requirements of the query, and can also adjust the weight values through relevance feedback to obtain the best search results. An image retrieval system based on these approaches is presented. The results of the experiments show that image retrieval based on the partition-overall histogram can preserve spatial distribution information while extracting color features efficiently, and it is superior to normal color histograms in retrieval precision: the query precision rate is more than 95%. In addition, the efficient block representation lowers the complexity of the images to be searched, and thus search efficiency is increased. The image retrieval algorithm based on the partition-overall histogram proposed in the paper is efficient and effective.
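A stripped-down sketch of the block-histogram and histogram-intersection machinery (hue channel only, equal block weights, hypothetical grid and bin counts; the paper's scheme adds user-chosen blocks, weights, and merged partial-overall histograms):

```python
import numpy as np

def block_histograms(hsv, grid=(3, 3), bins=16):
    """hsv: (H, W, 3) float array with channels scaled to [0, 1]."""
    h, w = hsv.shape[:2]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = hsv[i*h//grid[0]:(i+1)*h//grid[0],
                        j*w//grid[1]:(j+1)*w//grid[1], 0]    # hue channel
            hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
            feats.append(hist / max(hist.sum(), 1))          # normalise per block
    return np.array(feats)

def intersection_similarity(f1, f2):
    # histogram intersection per block, averaged over blocks; result in [0, 1]
    return np.minimum(f1, f2).sum() / f1.shape[0]
```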
Chaos synchronization basing on symbolic dynamics with nongenerating partition.
Wang, Xingyuan; Wang, Mogei; Liu, Zhenzhen
2009-06-01
Using symbolic dynamics and information theory, we study the information transmission needed for synchronizing unidirectionally coupled oscillators. It is found that when sustaining chaos synchronization with a nongenerating partition, the synchronization error will be larger than a critical value, although the required coupled-channel capacity can be smaller than in the case of using a generating partition. We then show that no matter whether a generating or nongenerating partition is in use, a high-quality detector can guarantee the lead of the response oscillator, while lag responding can compensate for the low precision of the detector. A practicable synchronization scheme based on a nongenerating partition is also proposed in this paper.
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2003-01-01
A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
Experimentally feasible security check for n-qubit quantum secret sharing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauer, Stefan; Huber, Marcus; Hiesmayr, Beatrix C.
In this article we present a general security strategy for quantum secret sharing (QSS) protocols based on the scheme presented by Hillery, Buzek, and Berthiaume (HBB) [Phys. Rev. A 59, 1829 (1999)]. We focus on a generalization of the HBB protocol to n communication parties, thus including n-partite Greenberger-Horne-Zeilinger states. We show that the multipartite version of the HBB scheme is insecure in certain settings and impractical when going to large n. To provide security for such QSS schemes in general, we use the framework presented by some of the authors [M. Huber, F. Mintert, A. Gabriel, B. C. Hiesmayr, Phys. Rev. Lett. 104, 210501 (2010)] to detect certain genuine n-partite entanglement between the communication parties. In particular, we present a simple inequality which tests the security.
Charge transport and trapping in organic field effect transistors exposed to polar analytes
NASA Astrophysics Data System (ADS)
Duarte, Davianne; Sharma, Deepak; Cobb, Brian; Dodabalapur, Ananth
2011-03-01
Pentacene based organic thin-film transistors were used to study the effects of polar analytes on charge transport and trapping behavior during vapor sensing. Three sets of devices with differing morphology and mobility (0.001-0.5 cm2/V s) were employed. All devices show enhanced trapping upon exposure to analyte molecules. The organic field effect transistors with different mobilities also provide evidence for morphology dependent partition coefficients. This study helps provide a physical basis for many reports on organic transistor based sensor response.
Feller, Chrystel; Favre, Patrick; Janka, Ales; Zeeman, Samuel C.; Gabriel, Jean-Pierre; Reinhardt, Didier
2015-01-01
Plants are highly plastic in their potential to adapt to changing environmental conditions. For example, they can selectively promote the relative growth of the root and the shoot in response to limiting supply of mineral nutrients and light, respectively, a phenomenon that is referred to as balanced growth or functional equilibrium. To gain insight into the regulatory network that controls this phenomenon, we took a systems biology approach that combines experimental work with mathematical modeling. We developed a mathematical model representing the activities of the root (nutrient and water uptake) and the shoot (photosynthesis), and their interactions through the exchange of the substrates sugar and phosphate (Pi). The model has been calibrated and validated with two independent experimental data sets obtained with Petunia hybrida. It involves a realistic environment with a day-and-night cycle, which necessitated the introduction of a transitory carbohydrate storage pool and an endogenous clock for coordination of metabolism with the environment. Our main goal was to grasp the dynamic adaptation of shoot:root ratio as a result of changes in light and Pi supply. The results of our study are in agreement with balanced growth hypothesis, suggesting that plants maintain a functional equilibrium between shoot and root activity based on differential growth of these two compartments. Furthermore, our results indicate that resource partitioning can be understood as the emergent property of many local physiological processes in the shoot and the root without explicit partitioning functions. Based on its encouraging predictive power, the model will be further developed as a tool to analyze resource partitioning in shoot and root crops. PMID:26154262
Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.
Vera, J Fernando; Macías, Rodrigo
2017-06-01
One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters, in the general situation in which the available information for clustering is a one-mode N × N dissimilarity matrix describing the objects. In this framework, p and the coordinates of points are usually unknown, and the application of criteria originally formulated for two-mode data sets is dependent on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set, in general, for unequal-sized clusters and for low-dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.
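The reformulation rests on a standard identity: with Euclidean distances, a cluster's point scatter equals its summed pairwise squared dissimilarities divided by twice the cluster size, so K-means-style criteria need no coordinates at all. A sketch (assuming D holds Euclidean-type distances):

```python
import numpy as np

def within_block_dispersion(D, labels):
    """Sum over clusters of sum_i ||x_i - mean||^2, computed from D alone via
    the identity scatter(C) = sum_{i,j in C} d_ij^2 / (2 |C|)."""
    total = 0.0
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)
        total += (D[np.ix_(idx, idx)] ** 2).sum() / (2 * idx.size)
    return total
```

Variance-based criteria then follow from within- and between-block dispersions evaluated over a range of candidate K values.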
Over the last decade, several studies reported that the partitioning of PAHs to sediments, in some cases, did not follow predictions based on equilibrium partitioning theory. One explanation for these differences is the presence of a second sedimentary phase with partitioning cha...
Fu, Zhiqiang; Chen, Jingwen; Li, Xuehua; Wang, Ya'nan; Yu, Haiying
2016-04-01
The octanol-air partition coefficient (KOA) is needed for assessing multimedia transport and bioaccumulability of organic chemicals in the environment. As experimental determination of KOA for various chemicals is costly and laborious, development of KOA estimation methods is necessary. We investigated three methods for KOA prediction: conventional quantitative structure-activity relationship (QSAR) models based on molecular structural descriptors, group contribution models based on atom-centered fragments, and a novel model that predicts KOA via the solvation free energy from the air to the octanol phase (ΔGO(0)), with a collection of 939 experimental KOA values for 379 compounds at different temperatures (263.15-323.15 K) as validation or training sets. The developed models were evaluated with the OECD guidelines on QSAR model validation and applicability domain (AD) description. Results showed that although the ΔGO(0) model is theoretically sound and has a broad AD, its prediction accuracy is the poorest. The QSAR models perform better than the group contribution models, and have similar predictability and accuracy to the conventional method that estimates KOA from the octanol-water partition coefficient and Henry's law constant. One QSAR model, which can predict KOA at different temperatures, was recommended for use in assessing the long-range transport potential of chemicals. Copyright © 2016 Elsevier Ltd. All rights reserved.
Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano
2010-01-01
The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of log BB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (log P), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental log BB data had been determined in vivo. In particular, since molecules with log BB > 0.3 cross the blood-brain barrier (BBB) readily while molecules with log BB < −1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the log BB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. PMID:20427217
NASA Astrophysics Data System (ADS)
Foda, O.; Welsh, T. A.
2016-04-01
We study the Andrews-Gordon-Bressoud (AGB) generalisations of the Rogers-Ramanujan q-series identities in the context of cylindric partitions. We recall the definition of r-cylindric partitions, and provide a simple proof of Borodin's product expression for their generating functions, which can be regarded as a limiting case of an unpublished proof by Krattenthaler. We also recall the relationships between the r-cylindric partition generating functions, the principal characters of affine sl_r algebras, the M^r_{r,r+d} minimal model characters of W_r algebras, and the r-string abaci generating functions, providing simple proofs for each. We then set r = 2, and use two-cylindric partitions to re-derive the AGB identities as follows. Firstly, we use Borodin's product expression for the generating functions of the two-cylindric partitions with infinitely long parts to obtain the product sides of the AGB identities, times a factor (q;q)_∞^{-1}, which is the generating function of ordinary partitions. Next, we obtain a bijection from the two-cylindric partitions, via two-string abaci, into decorated versions of Bressoud's restricted lattice paths. Extending Bressoud's method of transforming between restricted paths that obey different restrictions, we obtain sum expressions with manifestly non-negative coefficients for the generating functions of the two-cylindric partitions, which contain a factor (q;q)_∞^{-1}. Equating the product and sum expressions of the same two-cylindric partitions, and cancelling a factor of (q;q)_∞^{-1} on each side, we obtain the AGB identities.
NASA Astrophysics Data System (ADS)
Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.
2017-07-01
Calculation of matrix-vector multiplication in real-world problems often involves a large matrix of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication with a matrix of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
Common y-intercept and single compound regressions of gas-particle partitioning data vs 1/T
NASA Astrophysics Data System (ADS)
Pankow, James F.
Confidence intervals are placed around the log Kp vs 1/T correlation equations obtained using simple linear regressions (SLR) with the gas-particle partitioning data set of Yamasaki et al. [(1982) Environ. Sci. Technol. 16, 189-194]. The compounds and groups of compounds studied include the polycyclic aromatic hydrocarbons phenanthrene + anthracene, me-phenanthrene + me-anthracene, fluoranthene, pyrene, benzo[a]fluorene + benzo[b]fluorene, chrysene + benz[a]anthracene + triphenylene, benzo[b]fluoranthene + benzo[k]fluoranthene, and benzo[a]pyrene + benzo[e]pyrene (note: me = methyl). For any given compound, at equilibrium, the partition coefficient Kp equals (F/TSP)/A, where F is the particulate-matter-associated concentration (ng m⁻³), A is the gas-phase concentration (ng m⁻³), and TSP is the concentration of particulate matter (μg m⁻³). At temperatures more than 10°C from the mean sampling temperature of 17°C, the confidence intervals are quite wide. Since theory predicts that similar compounds sorbing on the same particulate matter should possess very similar y-intercepts, the data set was also fitted using a special common y-intercept regression (CYIR). For most of the compounds, the CYIR equations fell inside the SLR 95% confidence intervals. The CYIR y-intercept value is -18.48, and is reasonably close to the type of value that can be predicted for PAH compounds. The set of CYIR regression equations is probably more reliable than the set of SLR equations. For example, the CYIR-derived desorption enthalpies are much more highly correlated with vaporization enthalpies than are the SLR-derived desorption enthalpies. It is recommended that the CYIR approach be considered whenever analysing temperature-dependent gas-particle partitioning data.
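The CYIR fit can be posed as a single ordinary least-squares problem with one shared intercept column and one 1/T slope column per compound; SLR is the special case of fitting each compound separately. A sketch under assumed data shapes (illustrative only, not the Yamasaki data):

```python
import numpy as np

def cyir(inv_T_list, log_kp_list):
    """inv_T_list[i], log_kp_list[i]: 1/T and log10 Kp arrays for compound i."""
    n_cmp = len(inv_T_list)
    blocks, y = [], np.concatenate(log_kp_list)
    for i, invT in enumerate(inv_T_list):
        X = np.zeros((invT.size, n_cmp + 1))
        X[:, 0] = 1.0                 # single y-intercept shared by all compounds
        X[:, 1 + i] = invT            # slope column active for compound i only
        blocks.append(X)
    beta, *_ = np.linalg.lstsq(np.vstack(blocks), y, rcond=None)
    return beta[0], beta[1:]          # common intercept, per-compound slopes
```

Kp itself comes from the measured concentrations as Kp = (F/TSP)/A before taking logs.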
What are the structural features that drive partitioning of proteins in aqueous two-phase systems?
Wu, Zhonghua; Hu, Gang; Wang, Kui; Zaslavsky, Boris Yu; Kurgan, Lukasz; Uversky, Vladimir N
2017-01-01
Protein partitioning in aqueous two-phase systems (ATPSs) represents a convenient, inexpensive, and easily scaled-up protein separation technique. Since the partition behavior of a protein depends dramatically on ATPS composition, it would be highly beneficial to have reliable means for (even qualitative) prediction of the partitioning of a target protein under different conditions. Our aim was to understand which structural features of proteins contribute to the partitioning of a query protein in a given ATPS. We undertook a systematic empirical analysis of the relations between 57 numerical structural descriptors, derived from the corresponding amino acid sequences and crystal structures of 10 well-characterized proteins, and the partition behavior of these proteins in 29 different ATPSs. This analysis revealed that just a few structural characteristics of proteins can accurately determine the behavior of these proteins in a given ATPS. However, partition behavior of proteins in different ATPSs relies on different structural features. In other words, we could not find a unique set of protein structural features derived from crystal structures that could describe the partition behavior of all proteins in all ATPSs analyzed in this study. We likely need to gain better insight into the relationships between protein-solvent interactions and the peculiarities of protein structure, in particular given the limitations of the crystal structures used here, to be able to construct a model that accurately predicts protein partition behavior across all ATPSs. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Reygondeau, Gabriel; Olivier Irisson, Jean; Guieu, Cecile; Gasparini, Stephane; Ayata, Sakina; Koubbi, Philippe
2013-04-01
In recent decades, it has been found useful to ecoregionalise the pelagic environment, assuming that within each partition environmental conditions are distinguishable and unique. Each proposed partition of the ocean aims to delineate the main oceanographic and ecological patterns, providing a geographical framework of marine ecosystems for ecological studies and management purposes. The aim of the present work is to integrate and process existing data on the pelagic environment of the Mediterranean Sea in order to define biogeochemical regions. Open-access databases including remote sensing observations, oceanographic campaign data, and physical modeling simulations are used. These various datasets allow the multidisciplinary view required to understand the interactions between climate and Mediterranean marine ecosystems. The first step of our study consisted of a statistical selection of a set of crucial environmental factors to propose the most parsimonious biogeographical approach that allows detecting the main oceanographic structure of the Mediterranean Sea. Second, based on the identified set of environmental parameters, both non-hierarchical and hierarchical clustering algorithms were tested. Outputs from each methodology were then inter-compared to propose a robust map of the biotopes (unique ranges of environmental parameters) of the area. Each biotope was then modeled using a non-parametric environmental niche method to infer a dynamic biogeochemical partition. Last, the seasonal, inter-annual, and long-term spatial changes of each biogeochemical region were investigated. The future of this work will be to perform a second partition to subdivide the biogeochemical regions according to biotic features of the Mediterranean Sea (ecoregions). This second level of division will then be used as a geographical framework to identify ecosystems that have been altered by human activities (i.e., pollution, fishery, invasive species) for the European project PERSEUS (Protecting EuRopean SEas and borders through the intelligent Use of Surveillance) and the French program MERMEX (Marine Ecosystems Response in the Mediterranean Experiment).
Prosperi, Mattia C F; De Luca, Andrea; Di Giambenedetto, Simona; Bracciale, Laura; Fabbiani, Massimiliano; Cauda, Roberto; Salemi, Marco
2010-10-25
Phylogenetic methods produce hierarchies of molecular species, inferring knowledge about taxonomy and evolution. However, there is not yet a consensus methodology that provides a crisp partition of taxa, desirable when considering the problem of intra/inter-patient quasispecies classification or infection transmission event identification. We introduce threshold bootstrap clustering (TBC), a new methodology for partitioning molecular sequences that does not require a phylogenetic tree estimation. TBC is an incremental partition algorithm, inspired by the stochastic Chinese restaurant process, that takes advantage of resampling techniques and models of sequence evolution. TBC takes as input a multiple alignment of molecular sequences, and its output is a crisp partition of the taxa into an automatically determined number of clusters. By varying initial conditions, the algorithm can produce different partitions. We describe a procedure that selects a prime partition among a set of candidate ones and calculates a measure of cluster reliability. TBC was successfully tested for the identification of human immunodeficiency virus type 1 and hepatitis C virus subtypes, and compared with previously established methodologies. It was also evaluated on the problem of HIV-1 intra-patient quasispecies clustering, and for transmission cluster identification, using a set of sequences from patients with known transmission event histories. TBC has been shown to be effective for the subtyping of HIV and HCV, and for identifying intra-patient quasispecies. To some extent, the algorithm was also able to infer clusters corresponding to events of infection transmission. The computational complexity of TBC is quadratic in the number of taxa, lower than that of other established methods; in addition, TBC has been enhanced with a measure of cluster reliability. TBC can be useful to characterise molecular quasispecies in a broad context.
Partitioning medical image databases for content-based queries on a Grid.
Montagnat, J; Breton, V; Magnin, I E
2005-01-01
In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.
2015-05-15
Peak-to-average power ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into its nonlinear operating region, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Among the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we performed an investigation into the performance of variable length adjacent partitioning (VL-AP) and fixed length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed length adjacent partitioning had better performance than variable length adjacent partitioning. As expected, simulation results showed a slightly better performance for the pseudorandom partitioning technique compared to the fixed and variable adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes are still seen as favorable candidates for PAPR reduction.
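A minimal PTS sketch with adjacent partitioning (illustrative sizes: 64 QPSK subcarriers, 4 contiguous sub-blocks, phase factors from {1, -1, j, -j}; by linearity, the per-block phase rotation can be applied before a single IFFT):

```python
import numpy as np
from itertools import product

N, V = 64, 4
rng = np.random.default_rng(0)
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N)  # QPSK subcarriers

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

blocks = np.array_split(np.arange(N), V)             # adjacent (contiguous) partitioning
best = np.inf
for phases in product([1, -1, 1j, -1j], repeat=V):   # exhaustive phase search
    Xp = X.copy()
    for b, idx in zip(phases, blocks):
        Xp[idx] *= b                                 # rotate each sub-block
    best = min(best, papr_db(np.fft.ifft(Xp)))

print(f"original PAPR {papr_db(np.fft.ifft(X)):.2f} dB -> PTS PAPR {best:.2f} dB")
```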
Adsorption of Phthalates on Impervious Indoor Surfaces.
Wu, Yaoxing; Eichler, Clara M A; Leng, Weinan; Cox, Steven S; Marr, Linsey C; Little, John C
2017-03-07
Sorption of semivolatile organic compounds (SVOCs) onto interior surfaces, often referred to as the "sink effect", and their subsequent re-emission significantly affect the fate and transport of indoor SVOCs and the resulting human exposure. Unfortunately, experimental challenges and the large number of SVOC/surface combinations have impeded progress in understanding sorption of SVOCs on indoor surfaces. An experimental approach based on a diffusion model was thus developed to determine the surface/air partition coefficient K of di-2-ethylhexyl phthalate (DEHP) on typical impervious surfaces including aluminum, steel, glass, and acrylic. The results indicate that surface roughness plays an important role in the adsorption process. Although larger data sets are needed, the ability to predict K could be greatly improved by establishing the nature of the relationship between surface roughness and K for clean indoor surfaces. Furthermore, different surfaces exhibit nearly identical K values after being exposed to kitchen grime with values that are close to those reported for the octanol/air partition coefficient. This strongly supports the idea that interactions between gas-phase DEHP and soiled surfaces have been reduced to interactions with an organic film. Collectively, the results provide an improved understanding of equilibrium partitioning of SVOCs on impervious surfaces.
NASA Astrophysics Data System (ADS)
Ise, T.; Litton, C. M.; Giardina, C. P.; Ito, A.
2009-12-01
Plant partitioning of carbon (C) to above- vs. belowground, to growth vs. respiration, and to short- vs. long-lived tissues exerts a large influence on ecosystem structure and function, with implications for the global C budget. Importantly, outcomes of process-based terrestrial vegetation models are likely to vary substantially with different C partitioning algorithms. However, controls on C partitioning patterns remain poorly quantified, and studies have yielded variable, and at times contradictory, results. A recent meta-analysis of forest studies suggests that the ratio of net primary production (NPP) to gross primary production (GPP) is fairly conservative across large scales. To illustrate the effect of this meta-analysis-based partitioning scheme (MPS), we applied MPS to a satellite-based (MODIS) terrestrial GPP data set to estimate NPP and compared the result with two global process-based vegetation models (Biome-BGC and VISIT), examining the influence of C partitioning on the C budgets of woody plants. Due to the temperature dependence of maintenance respiration, NPP/GPP predicted by the process-based models increased with latitude, while the ratio remained constant with MPS. Overall, global NPP estimated with MPS was 17 and 27% lower than the process-based models for temperate and boreal biomes, respectively, with smaller differences in the tropics. Global equilibrium biomass of woody plants was then calculated from the NPP estimates and tissue turnover rates from VISIT. Since turnover rates differed greatly across tissue types (i.e., metabolically active vs. structural), global equilibrium biomass estimates were sensitive to the partitioning scheme employed. The MPS estimate of global woody biomass was 7-21% lower than that of the process-based models. In summary, we found that model output for NPP and equilibrium biomass was quite sensitive to the choice of C partitioning scheme. [Figure: Carbon use efficiency (CUE; NPP/GPP) by forest biome and the globe; values are means for 2001-2006.]
Gaskins, J T; Daniels, M J
2016-01-02
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
Hydraulic geometry of the Platte River in south-central Nebraska
Eschner, T.R.
1982-01-01
At-a-station hydraulic geometry of the Platte River in south-central Nebraska is complex. The range of exponents of simple power-function relations is large, both between different reaches of the river and among different sections within a given reach. The at-a-station exponents plot in several fields of the b-f-m diagram, suggesting that morphologic and hydraulic changes with increasing discharge vary considerably. Systematic changes in the plotting positions of the exponents with time indicate that, in general, the width exponent has decreased, although trends are not readily apparent in the other exponents. Plots of the hydraulic-geometry relations indicate that simple power functions are not the proper model in all instances. For these sections, breaks in the slopes of the hydraulic-geometry relations serve to partition the data sets. Power functions fit separately to the partitioned data described the width-, depth-, and velocity-discharge relations more accurately than did a single power function. Plotting positions of the exponents from hydraulic-geometry relations of partitioned data sets on b-f-m diagrams indicate that much of the apparent variation in the plotting positions of single power functions arises because a single power function compromises between the two subsets of partitioned data. For several sections, the shape of the channel primarily accounts for the better fit of two power functions to the partitioned data than of a single power function over the entire range of data. These non-log-linear relations may have significance for channel maintenance. (USGS)
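The piecewise fitting idea is easy to reproduce in a sketch (synthetic width-discharge data with an invented breakpoint at 50 units of discharge; not the Platte River measurements): fit log-linear power functions to the whole record and to the two partitioned subsets, then compare residuals.

```python
import numpy as np

def power_fit(q, w):
    """Fit w = a * q**b by least squares in log-log space; return
    (a, b, sum of squared log residuals)."""
    b, loga = np.polyfit(np.log(q), np.log(w), 1)
    resid = np.log(w) - (loga + b * np.log(q))
    return np.exp(loga), b, float((resid ** 2).sum())

rng = np.random.default_rng(5)
q = np.geomspace(1.0, 1000.0, 80)                  # discharge (toy units)
qb = 50.0                                          # assumed breakpoint
w = np.where(q < qb, 10 * q ** 0.6, 10 * qb ** 0.6 * (q / qb) ** 0.15)
w = w * rng.lognormal(0.0, 0.05, q.size)           # measurement scatter

single = power_fit(q, w)
low = power_fit(q[q < qb], w[q < qb])
high = power_fit(q[q >= qb], w[q >= qb])
print(f"single fit SSE {single[2]:.2f} vs partitioned SSE {low[2] + high[2]:.2f}")
print(f"exponents: single b={single[1]:.2f}, low b={low[1]:.2f}, high b={high[1]:.2f}")
```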
NASA Astrophysics Data System (ADS)
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To execute variance-based global sensitivity analysis efficiently, this paper first proves the law of total variance over successive non-overlapping intervals and then builds an efficient space-partition sampling-based approach upon it. By partitioning the sample points of the output into different subsets according to each input, the proposed approach can evaluate all the main effects concurrently from a single group of sample points. In addition, there is no need to optimize the partition scheme. The maximum length of the subintervals decreases as the number of sample points of the model inputs increases, which ensures that the convergence condition of the space-partition approach is met. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application demonstrate the accuracy, efficiency, and robustness of the proposed approach.
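A rough illustration of the space-partition estimator (not the authors' code; the bin count and the Ishigami test function are arbitrary choices): partition each input's range into subintervals, and by the law of total variance the variance of the per-bin means of the output estimates that input's main effect. All three indices come from the same single sample set; only the grouping of the points changes per input.

```python
import numpy as np

def main_effects(X, y, n_bins=20):
    """Space-partition estimate of first-order sensitivity indices:
    partition each input's range into bins and apply the law of total
    variance, S_i ~ Var(E[y | bin]) / Var(y), reusing one sample set."""
    S = []
    for i in range(X.shape[1]):
        edges = np.linspace(X[:, i].min(), X[:, i].max(), n_bins + 1)
        idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, n_bins - 1)
        bins = [y[idx == b] for b in range(n_bins) if np.any(idx == b)]
        weights = np.array([len(b) / len(y) for b in bins])
        means = np.array([b.mean() for b in bins])
        S.append(float(np.sum(weights * (means - y.mean()) ** 2) / y.var()))
    return S

rng = np.random.default_rng(1)
X = rng.uniform(-np.pi, np.pi, size=(20_000, 3))
y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
     + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))      # Ishigami function
print([round(s, 3) for s in main_effects(X, y)])  # approx [0.31, 0.44, 0.0]
```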
A distributed model predictive control scheme for leader-follower multi-agent systems
NASA Astrophysics Data System (ADS)
Franzè, Giuseppe; Lucia, Walter; Tedesco, Francesco
2018-02-01
In this paper, we present a novel receding horizon control scheme for solving the formation problem of leader-follower configurations. The algorithm is based on set-theoretic ideas and is tuned for agents described by linear time-invariant (LTI) systems subject to input and state constraints. The novelty of the proposed framework lies in its capability to jointly use sequences of one-step controllable sets and polyhedral piecewise state-space partitions in order to apply the 'better' control action online in a distributed receding horizon fashion. Moreover, we prove that the design of both robust positively invariant sets and one-step-ahead controllable regions can be achieved in a distributed sense. Simulations and numerical comparisons with centralised and local-based strategies are finally performed on a group of mobile robots to demonstrate the effectiveness of the proposed control strategy.
Crystal-chemistry and partitioning of REE in whitlockite
NASA Technical Reports Server (NTRS)
Colson, R. O.; Jolliff, B. L.
1993-01-01
Partitioning of Rare Earth Elements (REE) in whitlockite is complicated by the fact that two or more charge-balancing substitutions are involved and that concentrations of REE in natural whitlockites are sufficiently high that simple partition coefficients are not expected to be constant even if mixing in the system is completely ideal. The present study combines preexisting REE partitioning data in whitlockites with new experiments in the same compositional system and at the same temperature (approximately 1030 C) to place additional constraints on the complex variations of REE partition coefficients and to test theoretical models for how REE partitioning should vary with REE concentration and other compositional variables. With this data set, and by combining crystallographic and thermochemical constraints with a SAS simultaneous-equation best-fitting routine, it is possible to infer answers to the following questions: what is the speciation on the individual sites Ca(B), Mg, and Ca(IIA) (where the ideal structural formula is Ca(B)18Mg2Ca(IIA)2P14O56); how are REEs charge-balanced in the crystal; and is mixing of REE in whitlockite ideal or non-ideal? This understanding is necessary to extrapolate derived partition coefficients to other compositional systems and provides a broadened understanding of the crystal chemistry of whitlockite.
The Optimization of In-Memory Space Partitioning Trees for Cache Utilization
NASA Astrophysics Data System (ADS)
Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo
In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Recently, many researchers have investigated cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data-partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information under frequent data updates. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance, and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.
A dynamic re-partitioning strategy based on the distribution of key in Spark
NASA Astrophysics Data System (ADS)
Zhang, Tianyu; Lian, Xin
2018-05-01
Spark is a memory-based distributed data processing framework that can process massive data sets and has become a focus of Big Data research. However, the performance of Spark Shuffle depends on the distribution of the data: the naive hash partition function of Spark cannot guarantee load balancing when the data are skewed, and job completion time is governed by the node with the most data to process. To handle this problem, dynamic sampling is used. During task execution, a histogram counts the key frequency distribution on each node, from which the global key frequency distribution is generated. After analyzing the distribution of keys, load balance across data partitions is achieved. Results show that the proposed Dynamic Re-Partitioning function performs better than the default hash partition, Fine Partition, and the Balanced-Schedule strategy; it can reduce task execution time and improve the efficiency of the whole cluster.
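Spark partitioners run on the JVM, but the balancing idea is easy to sketch in Python (the toy key sample and partition count are assumptions): build a key-to-partition map from sampled frequencies, always sending the next-heaviest key to the currently lightest partition; keys never seen in the sample would fall back to the default hash partitioner.

```python
import heapq
from collections import Counter

def balanced_partitioner(sample_keys, n_parts):
    """Build a key -> partition map from sampled key frequencies:
    assign keys in decreasing frequency to the currently lightest
    partition, so hot keys no longer pile up on one node. Keys absent
    from the sample would fall back to the default hash partitioner."""
    heap = [(0, p) for p in range(n_parts)]  # (estimated load, partition)
    heapq.heapify(heap)
    mapping = {}
    for key, count in Counter(sample_keys).most_common():
        load, p = heapq.heappop(heap)
        mapping[key] = p
        heapq.heappush(heap, (load + count, p))
    return mapping

sample = ["a"] * 70 + ["b"] * 20 + list("cdefghij")  # skewed toy sample
m = balanced_partitioner(sample, n_parts=3)
print("hot key 'a' ->", m["a"], "| 'b' ->", m["b"], "| rest ->",
      sorted({m[k] for k in "cdefghij"}))
```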
Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model
Ellefsen, Karl J.; Smith, David
2016-01-01
Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
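A rough stand-in for the procedure (scikit-learn's maximum-likelihood GaussianMixture replaces the paper's Bayesian mixture fitted with Hamiltonian Monte Carlo, and the synthetic data are invented): each cluster is recursively split in two, producing the hierarchy of clusters and subclusters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split(data, depth=0, max_depth=2, min_size=50, path="root"):
    """Recursive two-way partitioning: fit a two-component mixture,
    split the samples, and repeat on each child until a cluster is too
    small or the hierarchy is deep enough."""
    if depth >= max_depth or len(data) < min_size:
        print(f"cluster {path}: n = {len(data)}")
        return
    labels = GaussianMixture(n_components=2, random_state=0).fit_predict(data)
    for k in (0, 1):
        split(data[labels == k], depth + 1, max_depth, min_size, f"{path}.{k}")

rng = np.random.default_rng(2)
# Toy stand-in for multivariate geochemical concentrations (4 elements).
samples = np.vstack([rng.normal(m, 0.5, size=(200, 4)) for m in (0, 3, 6, 9)])
split(samples)
```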
Borri, Marco; Schmidt, Maria A; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M; Partridge, Mike; Bhide, Shreerang A; Nutting, Christopher M; Harrington, Kevin J; Newbold, Katie L; Leach, Martin O
2015-01-01
To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted imaging MRI data from a cohort of squamous cell carcinoma of the head and neck patients. Cumulative distributions of voxels, containing pre and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes.
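A condensed sketch of the workflow with scikit-learn (synthetic two-feature voxels stand in for the DCE/DWI parameter maps, and silhouette scoring stands in for the paper's cluster-validation step): partition the pooled voxels for k = 2, 3, 4 and keep the k with the best validation score.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Toy stand-in for per-voxel functional features (e.g. a DCE parameter
# and an ADC value); four underlying tissue types are simulated.
voxels = np.vstack([rng.normal(c, 0.4, size=(300, 2))
                    for c in [(0, 0), (2, 0), (0, 2), (2, 2)]])
X = StandardScaler().fit_transform(voxels)

scores = {}
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)   # stand-in cluster validation
best_k = max(scores, key=scores.get)
print({k: round(v, 3) for k, v in scores.items()}, "-> optimal k:", best_k)
```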
NASA Astrophysics Data System (ADS)
Oyarbide, E.; Bernal, C.; Molina, P.; Jiménez, L. A.; Gálvez, R.; Martínez, A.
2016-01-01
Ultracapacitors are low voltage devices and therefore, for practical applications, they need to be used in modules of series-connected cells. Because of the inherent manufacturing tolerance of the capacitance parameter of each cell, and as the maximum voltage value cannot be exceeded, the module requires inter-cell voltage equalization. If the intended application suffers repeated fast charging/discharging cycles, active equalization circuits must be rated to full power, and thus the module becomes expensive. Previous work shows that a series connection of several sets of paralleled ultracapacitors minimizes the dispersion of equivalent capacitance values, and also the voltage differences between capacitors. Thus the overall life expectancy is improved. This paper proposes a method to distribute ultracapacitors with a number partitioning-based strategy to reduce the dispersion between equivalent submodule capacitances. Thereafter, the total amount of stored energy and/or the life expectancy of the device can be considerably improved.
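The distribution step is a multiway number-partitioning problem. A minimal greedy sketch (the cell capacitances are invented, and the paper's exact algorithm may differ): place cells, largest first, into the paralleled group with the smallest running total, so the equivalent submodule capacitances end up nearly equal.

```python
import heapq

def greedy_partition(capacitances, n_groups):
    """Longest-first greedy heuristic for multiway number partitioning:
    put each cell into the group with the smallest running capacitance
    sum, so the paralleled submodules end up with nearly equal totals."""
    heap = [(0.0, g, []) for g in range(n_groups)]
    heapq.heapify(heap)
    for c in sorted(capacitances, reverse=True):
        total, g, members = heapq.heappop(heap)
        members.append(c)
        heapq.heappush(heap, (total + c, g, members))
    return sorted(heap, key=lambda t: t[1])

cells = [98.0, 101.5, 95.2, 103.1, 99.8, 100.4, 96.7, 104.2]  # F (toy)
for total, g, members in greedy_partition(cells, n_groups=4):
    print(f"submodule {g}: {members} -> {total:.1f} F")
```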
Partitioned coupling of advection-diffusion-reaction systems and Brinkman flows
NASA Astrophysics Data System (ADS)
Lenarda, Pietro; Paggi, Marco; Ruiz Baier, Ricardo
2017-09-01
We present a partitioned algorithm aimed at extending the capabilities of existing solvers for the simulation of coupled advection-diffusion-reaction systems and incompressible, viscous flow. The space discretisation of the governing equations is based on mixed finite element methods defined on unstructured meshes, whereas the time integration hinges on an operator splitting strategy that exploits the differences in scales between the reaction, advection, and diffusion processes, considering the global system as a number of sequentially linked sets of partial differential, and algebraic equations. The flow solver presents the advantage that all unknowns in the system (here vorticity, velocity, and pressure) can be fully decoupled and thus turn the overall scheme very attractive from the computational perspective. The robustness of the proposed method is illustrated with a series of numerical tests in 2D and 3D, relevant in the modelling of bacterial bioconvection and Boussinesq systems.
A New Model for Optimal Mechanical and Thermal Performance of Cement-Based Partition Wall.
Huang, Shiping; Hu, Mengyu; Huang, Yonghui; Cui, Nannan; Wang, Weifeng
2018-04-17
The prefabricated cement-based partition wall has been widely used in assembled buildings because of its high manufacturing efficiency, high-quality surface, and simple and convenient construction process. In this paper, a general porous partition wall that is made from cement-based materials was proposed to meet the optimal mechanical and thermal performance during transportation, construction and its service life. The porosity of the proposed partition wall is formed by elliptic-cylinder-type cavities. The finite element method was used to investigate the mechanical and thermal behaviour, which shows that the proposed model has distinct advantages over the current partition wall that is used in the building industry. It is found that, by controlling the eccentricity of the elliptic-cylinder cavities, the proposed wall stiffness can be adjusted to respond to the imposed loads and to improve the thermal performance, which can be used for the optimum design. Finally, design guidance is provided to obtain the optimal mechanical and thermal performance. The proposed model could be used as a promising candidate for partition wall in the building industry.
Evolving bipartite authentication graph partitions
Pope, Aaron Scott; Tauritz, Daniel Remy; Kent, Alexander D.
2017-01-16
As large scale enterprise computer networks become more ubiquitous, finding the appropriate balance between user convenience and user access control is an increasingly challenging proposition. Suboptimal partitioning of users’ access and available services contributes to the vulnerability of enterprise networks. Previous edge-cut partitioning methods unduly restrict users’ access to network resources. This paper introduces a novel method of network partitioning superior to the current state of the art, which minimizes user impact by providing alternate avenues for access that reduce vulnerability. Networks are modeled as bipartite authentication access graphs, and a multi-objective evolutionary algorithm is used to simultaneously minimize the size of large connected components while minimizing overall restrictions on network users. Finally, results are presented on a real-world data set that demonstrate the effectiveness of the introduced method compared to previous naive methods.
Platinum Partitioning at Low Oxygen Fugacity: Implications for Core Formation Processes
NASA Technical Reports Server (NTRS)
Medard, E.; Martin, A. M.; Righter, K.; Lanziroti, A.; Newville, M.
2016-01-01
Highly siderophile elements (HSE = Au, Re, and the Pt-group elements) are tracers of silicate/metal interactions during planetary processes. Since most core-formation models involve some state of equilibrium between liquid silicate and liquid metal, understanding the partitioning of HSE between silicate and metallic melts is a key issue for models of core/mantle equilibria and for core-formation scenarios. However, partitioning models for HSE are still inaccurate due to the lack of sufficient experimental constraints to describe the variation of partitioning with key variables such as temperature, pressure, and oxygen fugacity. In this abstract, we describe a self-consistent set of experiments aimed at determining the valence of platinum, one of the HSE, in silicate melts. This is key information required to parameterize the evolution of platinum partitioning with oxygen fugacity.
Short-Term Global Horizontal Irradiance Forecasting Based on Sky Imaging and Pattern Recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Feng, Cong; Cui, Mingjian
Accurate short-term forecasting is crucial for solar integration in the power grid. In this paper, a classification forecasting framework based on pattern recognition is developed for 1-hour-ahead global horizontal irradiance (GHI) forecasting. Three sets of models in the forecasting framework are trained with data partitioned by the preprocessing analysis. The first two sets of models forecast GHI for the first four daylight hours of each day. The GHI values in the remaining hours are then forecasted by an optimal machine learning model selected according to a weather pattern classification model in the third model set. The weather pattern is determined by a support vector machine (SVM) classifier. The developed framework is validated with GHI and sky imaging data from the National Renewable Energy Laboratory (NREL). Results show that the developed short-term forecasting framework outperforms the persistence benchmark by 16% in terms of the normalized mean absolute error and 25% in terms of the normalized root mean square error.
Target discrimination method for SAR images based on semisupervised co-training
NASA Astrophysics Data System (ADS)
Wang, Yan; Du, Lan; Dai, Hui
2018-01-01
Synthetic aperture radar (SAR) target discrimination is usually performed in a supervised manner. However, supervised methods for SAR target discrimination may need lots of labeled training samples, whose acquirement is costly, time consuming, and sometimes impossible. This paper proposes an SAR target discrimination method based on semisupervised co-training, which utilizes a limited number of labeled samples and an abundant number of unlabeled samples. First, Lincoln features, widely used in SAR target discrimination, are extracted from the training samples and partitioned into two sets according to their physical meanings. Second, two support vector machine classifiers are iteratively co-trained with the extracted two feature sets based on the co-training algorithm. Finally, the trained classifiers are exploited to classify the test data. The experimental results on real SAR images data not only validate the effectiveness of the proposed method compared with the traditional supervised methods, but also demonstrate the superiority of co-training over self-training, which only uses one feature set.
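A generic co-training loop illustrates the idea (two synthetic feature views and scikit-learn's SVC stand in for the two Lincoln-feature sets and the paper's classifiers; all sizes are invented): each view's classifier labels its most confident unlabeled samples, and those labels become training data for both views.

```python
import numpy as np
from sklearn.svm import SVC

def co_train(X1, X2, y, n_rounds=10, n_add=10):
    """Co-training sketch: one SVM per feature view; each round, every
    view labels its n_add most confident unlabeled samples (y == -1
    marks 'unlabeled'), and the new labels are shared between views."""
    y, labeled = y.copy(), y != -1
    for _ in range(n_rounds):
        for Xv in (X1, X2):
            clf = SVC(probability=True).fit(Xv[labeled], y[labeled])
            pool = np.where(~labeled)[0]
            if pool.size == 0:
                return y
            proba = clf.predict_proba(Xv[pool])
            rows = np.argsort(proba.max(axis=1))[-n_add:]  # most confident
            y[pool[rows]] = clf.classes_[proba[rows].argmax(axis=1)]
            labeled[pool[rows]] = True
    return y

rng = np.random.default_rng(4)
X1 = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(3, 1, (100, 3))])
X2 = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
y = np.full(200, -1)
y[:5], y[100:105] = 0, 1                 # ten labeled samples in total
labels = co_train(X1, X2, y)
print("label accuracy:", (labels == np.repeat([0, 1], 100)).mean())
```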
Chiou, C.T.
1985-01-01
Triolein-water partition coefficients (Ktw) have been determined for 38 slightly water-soluble organic compounds, and their magnitudes have been compared with the corresponding octanol-water partition coefficients (Kow). In the absence of major solvent-solute interaction effects in the organic solvent phase, the conventional treatment (based on Raoult's law) predicts sharply lower partition coefficients for most of the solutes in triolein because of its considerably higher molecular weight, whereas the Flory-Huggins treatment predicts higher partition coefficients with triolein. The data are in much better agreement with the Flory-Huggins model. As expected from the similarity in the partition coefficients, the water solubility (which was previously found to be the major determinant of the Kow) is also the major determinant for the Ktw. When the published BCF values (bioconcentration factors) of organic compounds in fish are based on the lipid content rather than on total mass, they are approximately equal to the Ktw, which suggests at least near equilibrium for solute partitioning between water and fish lipid. The close correlation between Ktw and Kow suggests that Kow is also a good predictor for lipid-water partition coefficients and bioconcentration factors.
Evaluation of Hierarchical Clustering Algorithms for Document Datasets
2002-06-03
The evaluation compares three standard agglomerative schemes (single-link, complete-link, and group average (UPGMA)) and a new set of merging criteria derived from the six partitional criterion functions. The single-link and complete-link approaches each suffer from well-known weaknesses; the UPGMA scheme [16] (also known as group average) overcomes these problems by measuring the similarity of two clusters as the average of the pairwise similarities between their documents.
NASA Astrophysics Data System (ADS)
Gan, Chee Kwan; Challacombe, Matt
2003-05-01
Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center of mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 out of 64 processors, while for a 110 molecule water cluster at standard density it achieved a speedup of 113 out of 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26 atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.
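The load-balancing step can be sketched independently of the chemistry (the timings and processor count below are invented): given per-box times measured during the previous SCF iteration, cut the spatially ordered box list into contiguous chunks of roughly equal measured time, one chunk per processor.

```python
def equal_time_chunks(costs, n_procs):
    """Cut an ordered list of per-box timings (measured during the
    previous SCF iteration) into n_procs contiguous chunks whose total
    times are roughly equal; returns the chunk boundary indices."""
    target = sum(costs) / n_procs
    bounds, acc = [0], 0.0
    for i, c in enumerate(costs):
        acc += c
        if acc >= target * len(bounds) and len(bounds) < n_procs:
            bounds.append(i + 1)
    bounds.append(len(costs))
    return bounds

measured = [5, 1, 1, 9, 2, 2, 6, 1, 1, 8, 3, 3]   # ms per box (toy)
b = equal_time_chunks(measured, n_procs=3)
chunks = [measured[b[i]:b[i + 1]] for i in range(3)]
print(chunks, "->", [sum(c) for c in chunks])     # totals: [16, 12, 14]
```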
Accelerating calculations of RNA secondary structure partition functions using GPUs
2013-01-01
Background: RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. These functions depend on its ability to fold to a unique three-dimensional structure determined by the sequence. The conformation of RNA is in part determined by its secondary structure, or the particular set of contacts between pairs of complementary bases. Prediction of the secondary structure of RNA from its sequence is therefore of great interest, but can be computationally expensive. In this work we accelerate computations of base-pair probabilities using parallel graphics processing units (GPUs). Results: Calculation of the probabilities of base pairs in RNA secondary structures using nearest-neighbor standard free energy change parameters has been implemented using CUDA to run on hardware with multiprocessor GPUs. A modified set of recursions was introduced, which reduces memory usage by about 25%. GPUs are fastest in single precision and, for some hardware, restricted to single precision. This may introduce significant roundoff error. However, deviations in base-pair probabilities calculated using single precision were found to be negligible compared to those resulting from shifting the nearest-neighbor parameters by a random amount of magnitude similar to their experimental uncertainties. For large sequences running on our particular hardware, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code. Conclusions: Using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. The source code is integrated into the RNAstructure software package and available for download at http://rna.urmc.rochester.edu. PMID:24180434
Corzo, Gerald; Solomatine, Dimitri
2007-05-01
Natural phenomena are multistationary and are composed of a number of interacting processes, so one single model handling all processes often suffers from inaccuracies. A solution is to partition data in relation to such processes using the available domain knowledge or expert judgment, to train separate models for each of the processes, and to merge them in a modular model (committee). In this paper a problem of water flow forecast in watershed hydrology is considered where the flow process can be presented as consisting of two subprocesses -- base flow and excess flow, so that these two processes can be separated. Several approaches to data separation techniques are studied. Two case studies with different forecast horizons are considered. Parameters of the algorithms responsible for data partitioning are optimized using genetic algorithms and global pattern search. It was found that modularization of ANN models using domain knowledge makes models more accurate, if compared with a global model trained on the whole data set, especially when forecast horizon (and hence the complexity of the modelled processes) is increased.
Lin, Junfang; Cao, Wenxi; Wang, Guifeng; Hu, Shuibo
2013-06-20
Using a data set of 1333 samples, we assess the spectral absorption relationships of different wave bands for phytoplankton (ph) and particles. We find that a nonlinear model (second-order quadratic equations) delivers good performance in describing their spectral characteristics. Based on these spectral relationships, we develop a method for partitioning the total absorption coefficient into the contributions attributable to phytoplankton [a(ph)(λ)], colored dissolved organic material [CDOM; a(CDOM)(λ)], and nonalgal particles [NAP; a(NAP)(λ)]. This method is validated using a data set that contains 550 simultaneous measurements of phytoplankton, CDOM, and NAP from the NASA bio-Optical Marine Algorithm Dataset. We find that our method is highly efficient and robust, with significant accuracy: the relative root-mean-square errors (RMSEs) are 25.96%, 38.30%, and 19.96% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively. The performance is still satisfactory when the method is applied to water samples from the northern South China Sea as a regional case. The computed and measured absorption coefficients (167 samples) agree well with the RMSEs, i.e., 18.50%, 32.82%, and 10.21% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively. Finally, the partitioning method is applied directly to an independent data set (1160 samples) derived from the Bermuda Bio-Optics Project that contains relatively low absorption values, and we also obtain good inversion accuracy [RMSEs of 32.37%, 32.57%, and 11.52% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively]. Our results indicate that this partitioning method delivers satisfactory performance for the retrieval of a(ph), a(CDOM), and a(NAP). Therefore, this may be a useful tool for extracting absorption coefficients from in situ measurements or remotely sensed ocean-color data.
Hang, X; Greenberg, N L; Shiota, T; Firstenberg, M S; Thomas, J D
2000-01-01
Real-time three-dimensional echocardiography has been introduced to provide improved quantification and description of cardiac function. Data compression is desired to allow efficient storage and improve data transmission. Previous work has suggested improved results utilizing wavelet transforms in the compression of medical data including 2D echocardiogram. Set partitioning in hierarchical trees (SPIHT) was extended to compress volumetric echocardiographic data by modifying the algorithm based on the three-dimensional wavelet packet transform. A compression ratio of at least 40:1 resulted in preserved image quality.
Dominant partition method. [based on a wave function formalism
NASA Technical Reports Server (NTRS)
Dixon, R. M.; Redish, E. F.
1979-01-01
By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM) is developed for obtaining few body reductions of the many body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many body problem to a fewer body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few body reaction mechanism prevails.
Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems
2015-05-01
Lockdown registers can be used to provide way-based cache partitioning; these alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex-A9 processor. Prior work, such as Altmeyer et al. [1] on uniprocessor scheduling, did not consider mixed-criticality (MC) systems. In the evaluation, task sets were randomly generated and the fraction that were schedulable on the target hardware platform, the quad-core ARM Cortex-A9, was determined.
Sharpe, Jennifer B.; Soong, David T.
2015-01-01
This study used the National Land Cover Dataset (NLCD) and developed an automated process for determining the area of the three land cover types, thereby allowing faster updating of future models, and for evaluating land cover changes by use of historical NLCD datasets. The study also carried out a raingage partitioning analysis so that the segmentation of land cover and rainfall in each modeled unit is directly applicable to the HSPF modeling. Historical and existing impervious, grass, and forest land acreages partitioned by percentages covered by two sets of raingages for the Lake Michigan diversion SCAs, gaged basins, and ungaged basins are presented.
NASA Astrophysics Data System (ADS)
Kassem, M.; Soize, C.; Gagliardini, L.
2011-02-01
In a recent work [ Journal of Sound and Vibration 323 (2009) 849-863] the authors presented an energy-density field approach for the vibroacoustic analysis of complex structures in the low and medium frequency ranges. In this approach, a local vibroacoustic energy model as well as a simplification of this model were constructed. In this paper, firstly an extension of the previous theory is performed in order to include the case of general input forces and secondly, a structural partitioning methodology is presented along with a set of tools used for the construction of a partitioning. Finally, an application is presented for an automotive vehicle.
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems
2005-05-01
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems. Dissertation by Gary W. Kinney Jr., B.G.S., M.S., presented to The University of Texas at Austin, May 2005.
Karunasekara, Thushara; Poole, Colin F
2011-07-15
Partition coefficients for varied compounds were determined for the organic solvent-dimethyl sulfoxide biphasic partition system where the organic solvent is n-heptane or isopentyl ether. These partition coefficient databases are analyzed using the solvation parameter model, facilitating a quantitative comparison of the dimethyl sulfoxide-based partition systems with other totally organic partition systems. Dimethyl sulfoxide is a moderately cohesive solvent, reasonably dipolar/polarizable and strongly hydrogen-bond basic. Although dimethyl sulfoxide is generally considered to be non-hydrogen-bond acidic, analysis of the partition coefficient database strongly supports its reclassification as a weak hydrogen-bond acid, in agreement with recent literature. The system constants for the n-heptane-dimethyl sulfoxide biphasic system explain the mechanism for the selective isolation of polycyclic aromatic compounds from mixtures containing low-polarity hydrocarbons: the polar interactions (dipolarity/polarizability and hydrogen bonding) are able to overcome the opposing cohesive forces in dimethyl sulfoxide, whereas such interactions are absent for hydrocarbons of low polarity. In addition, dimethyl sulfoxide-organic solvent systems afford a complementary approach to other totally organic biphasic partition systems for descriptor measurements of compounds virtually insoluble in water.
Partitioning and packing mathematical simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Arpasi, D. J.; Milner, E. J.
1986-01-01
The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
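The packing step is essentially bin packing. A first-fit-decreasing sketch (the equation timings and the frame-time budget are invented, and the paper's packing algorithm may differ): assign each equation, longest first, to the first processor with room, opening a new processor when none fits.

```python
def pack_equations(times, budget):
    """First-fit-decreasing bin packing: assign each equation (longest
    first) to the first processor whose remaining time budget fits it,
    opening a new processor when none does."""
    procs = []                     # each entry: [used_time, [eq times]]
    for t in sorted(times, reverse=True):
        for p in procs:
            if p[0] + t <= budget:
                p[0] += t
                p[1].append(t)
                break
        else:
            procs.append([t, [t]])
    return procs

eq_times = [3.2, 1.1, 0.7, 2.5, 0.4, 1.8, 0.9, 2.2]   # ms per equation (toy)
for used, eqs in pack_equations(eq_times, budget=4.0):
    print(f"processor: {eqs} ({used:.1f} ms of 4.0)")
```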
Modular structure of functional networks in olfactory memory.
Meunier, David; Fonlupt, Pierre; Saive, Anne-Lise; Plailly, Jane; Ravel, Nadine; Royet, Jean-Pierre
2014-07-15
Graph theory enables the study of systems by describing those systems as a set of nodes and edges. Graph theory has been widely applied to characterize the overall structure of data sets in the social, technological, and biological sciences, including neuroscience. Modular structure decomposition enables the definition of sub-networks whose components are gathered in the same module and work together closely, while working weakly with components from other modules. This processing is of interest for studying memory, a cognitive process that is widely distributed. We propose a new method to identify modular structure in task-related functional magnetic resonance imaging (fMRI) networks. The modular structure was obtained directly from correlation coefficients and thus retained information about both signs and weights. The method was applied to functional data acquired during a yes-no odor recognition memory task performed by young and elderly adults. Four response categories were explored: correct (Hit) and incorrect (False alarm, FA) recognition and correct and incorrect rejection. We extracted time series data for 36 areas as a function of response categories and age groups and calculated condition-based weighted correlation matrices. Overall, condition-based modular partitions were more homogeneous in young than elderly subjects. Using partition similarity-based statistics and a posteriori statistical analyses, we demonstrated that several areas, including the hippocampus, caudate nucleus, and anterior cingulate gyrus, belonged to the same module more frequently during Hit than during all other conditions. Modularity values were negatively correlated with memory scores in the Hit condition and positively correlated with bias scores (liberal/conservative attitude) in the Hit and FA conditions. We further demonstrated that the proportion of positive and negative links between areas of different modules (i.e., the proportion of correlated and anti-correlated areas) accounted for most of the observed differences in signed modularity. Taken together, our results provided some evidence that the neural networks involved in odor recognition memory are organized into modules and that these modular partitions are linked to behavioral performance and individual strategies.
Solute partitioning in multi-component γ/γ' Co–Ni-base superalloys with near-zero lattice misfit
Meher, S.; Carroll, L. J.; Pollock, T. M.; ...
2015-11-21
The addition of nickel to cobalt-base alloys enables alloys with a near-zero γ – γ' lattice misfit. The solute partitioning between ordered γ' precipitates and the disordered γ matrix has been investigated using atom probe tomography. The unique shift in solute partitioning in these alloys, as compared to that in simpler Co-base alloys, derives from changes in the site substitution of solutes as the relative amounts of Co and Ni change, highlighting new opportunities for the development of advanced tailored alloys.
An Investigation of Document Partitions.
ERIC Educational Resources Information Center
Shaw, W. M., Jr.
1986-01-01
Empirical significance of document partitions is investigated as a function of index term-weight and similarity thresholds. Results show that the same empirically preferred partitions can be detected by two independent strategies: an analysis of cluster-based retrieval and an analysis of regularities in the underlying structure of the document…
Zhang, H H; Gao, S; Chen, W; Shi, L; D'Souza, W D; Meyer, R R
2013-03-21
An important element of radiation treatment planning for cancer therapy is the selection of beam angles (out of all possible coplanar and non-coplanar angles in relation to the patient) in order to maximize the delivery of radiation to the tumor site and minimize radiation damage to nearby organs-at-risk. This category of combinatorial optimization problem is particularly difficult because direct evaluation of the quality of treatment corresponding to any proposed selection of beams requires the solution of a large-scale dose optimization problem involving many thousands of variables that represent doses delivered to volume elements (voxels) in the patient. However, if the quality of angle sets can be accurately estimated without expensive computation, a large number of angle sets can be considered, increasing the likelihood of identifying a very high quality set. Using a computationally efficient surrogate beam set evaluation procedure based on single-beam data extracted from plans employing equally-spaced beams (eplans), we have developed a global search metaheuristic process based on the nested partitions framework for this combinatorial optimization problem. The surrogate scoring mechanism allows us to assess thousands of beam set samples within a clinically acceptable time frame. Tests on difficult clinical cases demonstrate that the beam sets obtained via our method are of superior quality.
Predicate Oriented Pattern Analysis for Biomedical Knowledge Discovery
Shen, Feichen; Liu, Hongfang; Sohn, Sunghwan; Larson, David W.; Lee, Yugyung
2017-01-01
In the current biomedical data movement, numerous efforts have been made to convert and normalize large amounts of traditional structured and unstructured data (e.g., EHRs, reports) to semi-structured data (e.g., RDF, OWL). With increasing amounts of semi-structured data coming into the biomedical community, data integration and knowledge discovery from heterogeneous domains have become important research problems. At the application level, detection of related concepts among medical ontologies is an important goal of life science research: it is crucial to determine how different concepts are related, within a single ontology or across multiple ontologies, by analysing the predicates in different knowledge bases. However, in an age of information explosion, it is extremely difficult for biomedical researchers to find existing or potential predicates for linking cross-domain concepts without support from schema pattern analysis. There is therefore a need for a mechanism that performs predicate-oriented pattern analysis to partition heterogeneous ontologies into smaller, closer topics and generates queries to discover cross-domain knowledge from each topic. In this paper, we present such a model: it analyses predicate patterns based on their close relationships and generates a similarity matrix. Based on this similarity matrix, we apply an unsupervised learning algorithm to partition large data sets into smaller, closer topics and generate meaningful queries to discover knowledge over a set of interlinked data sources. We have implemented a prototype system named BmQGen and evaluated the proposed model with a colorectal surgical cohort from the Mayo Clinic. PMID:28983419
Multi-objective community detection based on memetic algorithm.
Wu, Peng; Pan, Li
2015-01-01
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is employed to search for locally optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on the influence of the local search procedure demonstrate that it can speed up convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which are beneficial for analyzing networks at multiple resolution levels.
Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou
2008-10-01
A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).
Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.
Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao
2011-12-01
In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction-type inheritance to eliminate superfluous computations for bi-directional (BI) prediction in the finer partitions (16×8/8×16/8×8) by referring to the best temporal prediction type of the 16×16 partition. In addition, we carefully examine the relationship between motion bit-rate costs and distortions for the BI and uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove unnecessary BI calculations. Moreover, for block partitions smaller than 8×8, either forward prediction (FW) or backward prediction (BW) is skipped based on the information of their 8×8 partitions. Hence, the proposed schemes efficiently reduce the extensive computational burden of calculating the BI prediction. Compared with the JSVM 9.11 software, our method reduces encoding time by 48% to 67% for a large variety of test videos over a wide range of coding bit-rates, with only a minor coding performance loss.
Adelmann, S; Baldhoff, T; Koepcke, B; Schembecker, G
2013-01-25
The selection of solvent systems in centrifugal partition chromatography (CPC) is the most critical step in setting up a separation, and much research has therefore been devoted to this topic in recent decades. However, the selection of suitable operating parameters (mobile phase flow rate, rotational speed and mode of operation) with respect to hydrodynamics and the pressure drop limit in CPC is still driven mainly by the chromatographer's experience. In this work we used hydrodynamic analysis to predict the most suitable operating parameters. After selecting different solvent systems with respect to partition coefficients for the target compound, the hydrodynamics were visualized. Based on flow pattern and retention, the operating parameters were selected for purification runs of nybomycin derivatives carried out with a 200 ml FCPC(®) rotor. The results show that optimized operating parameters can be selected from analysis of the hydrodynamics alone. Because the hydrodynamics can be predicted from the physical properties of the solvent system, the optimized operating parameters can be estimated as well. Additionally, we found that dispersion and especially retention are improved if the less viscous phase is mobile. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
Borri, Marco; Schmidt, Maria A.; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M.; Partridge, Mike; Bhide, Shreerang A.; Nutting, Christopher M.; Harrington, Kevin J.; Newbold, Katie L.; Leach, Martin O.
2015-01-01
Purpose To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. Material and Methods The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted MRI data from a cohort of patients with squamous cell carcinoma of the head and neck. Cumulative distributions of voxels, containing pre- and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. Results The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. Conclusion The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes. PMID:26398888
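As a hedged illustration of the partitioning and cluster-validation steps (not the authors' workflow), k-means on stacked voxel feature vectors with a silhouette criterion used to choose k:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Placeholder voxel features, e.g. DCE-derived parameters and ADC per voxel.
X = StandardScaler().fit_transform(np.random.rand(5000, 4))

scores = {}
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels, sample_size=2000, random_state=0)
best_k = max(scores, key=scores.get)   # validity index picks the number of clusters
```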
NASA Astrophysics Data System (ADS)
Padrón, Ryan S.; Gudmundsson, Lukas; Greve, Peter; Seneviratne, Sonia I.
2017-11-01
The long-term surface water balance over land is described by the partitioning of precipitation (P) into runoff and evapotranspiration (ET), and is commonly characterized by the ratio ET/P. The ratio between potential evapotranspiration (PET) and P is explicitly considered to be the primary control of ET/P within the Budyko framework, whereas all other controls are often integrated into a single parameter, ω. Although the joint effect of these additional controlling factors of ET/P can be significant, a detailed understanding of them is yet to be achieved. This study therefore introduces a new global data set for the long-term mean partitioning of P into ET and runoff in 2,733 catchments, which is based on in situ observations and assembled from a systematic examination of peer-reviewed studies. A total of 26 controls of ET/P that are proposed in the literature are assessed using the new data set. Results reveal that: (i) factors controlling ET/P vary between regions with different climate types; (ii) controls other than PET/P explain at least 35% of the ET/P variance in all regions, and up to ˜90% in arid climates; (iii) among these, climate factors and catchment slope dominate over other landscape characteristics; and (iv) despite the high attention that vegetation-related indices receive as controls of ET/P, they are found to play a minor and often nonsignificant role. Overall, this study provides a comprehensive picture on factors controlling the partitioning of P, with valuable insights for model development, watershed management, and the assessment of water resources around the globe.
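The Budyko-type partitioning discussed here is often written in a parametric form such as Fu's equation, where the single parameter ω absorbs all controls other than the aridity index PET/P; a minimal sketch (illustrative ω value):

```python
import numpy as np

def et_over_p(pet_over_p, omega=2.6):
    # Fu's curve: ET/P = 1 + PET/P - (1 + (PET/P)**omega)**(1/omega)
    phi = np.asarray(pet_over_p, dtype=float)
    return 1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega)

print(et_over_p([0.5, 1.0, 2.0, 5.0]))   # ET/P approaches 1 as aridity grows
```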
Two dissimilar approaches to dynamical systems on hyper MV-algebras and their information entropy
NASA Astrophysics Data System (ADS)
Mehrpooya, Adel; Ebrahimi, Mohammad; Davvaz, Bijan
2017-09-01
Measuring the flow of information that is related to the evolution of a system which is modeled by applying a mathematical structure is of capital significance for science and usually for mathematics itself. Regarding this fact, a major issue concerning hyperstructures is their dynamics and the complexity of the varied possible dynamics that exist over them. Notably, the dynamics and uncertainty of hyper MV-algebras, which are hyperstructures and extensions of a central tool in infinite-valued Lukasiewicz propositional calculus that models many-valued logics, are of primary concern. Tackling this problem, in this paper we focus on the subject of dynamical systems on hyper MV-algebras and their entropy. In this respect, we adopt two different approaches. One is the set-based approach, in which hyper MV-algebra dynamical systems are developed by employing set functions and set partitions. In the other, based on points and point partitions, we establish the concept of hyper injective dynamical systems on hyper MV-algebras. Next, we study the notion of entropy for both kinds of systems. Furthermore, we consider essential ergodic characteristics of those systems and their entropy. In particular, we introduce the concept of isomorphic hyper injective and hyper MV-algebra dynamical systems, and we demonstrate that isomorphic systems have the same entropy. We present a couple of theorems in order to help calculate entropy. In particular, we prove contemporary versions of the addition and Kolmogorov-Sinai theorems. Furthermore, we provide a comparison between the indispensable properties of hyper injective and semi-independent dynamical systems. Specifically, we present and prove theorems that draw comparisons between the entropies of such systems. Lastly, we discuss some possible relationships between the theories of hyper MV-algebra and MV-algebra dynamical systems.
Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya
2014-01-01
Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727
Generic pure quantum states as steady states of quasi-local dissipative dynamics
NASA Astrophysics Data System (ADS)
Karuvade, Salini; Johnson, Peter D.; Ticozzi, Francesco; Viola, Lorenza
2018-04-01
We investigate whether a generic pure state on a multipartite quantum system can be the unique asymptotic steady state of locality-constrained purely dissipative Markovian dynamics. In the tripartite setting, we show that the problem is equivalent to characterizing the solution space of a set of linear equations and establish that the set of pure states obeying the above property has either measure zero or measure one, solely depending on the subsystems’ dimension. A complete analytical characterization is given when the central subsystem is a qubit. In the N-partite case, we provide conditions on the subsystems’ size and the nature of the locality constraint, under which random pure states cannot be quasi-locally stabilized generically. Also, allowing for the possibility to approximately stabilize entangled pure states that cannot be exact steady states in settings where stabilizability is generic, our results offer insights into the extent to which random pure states may arise as unique ground states of frustration-free parent Hamiltonians. We further argue that, to a high probability, pure quantum states sampled from a t-design enjoy the same stabilizability properties of Haar-random ones as long as suitable dimension constraints are obeyed and t is sufficiently large. Lastly, we demonstrate a connection between the tasks of quasi-local state stabilization and unique state reconstruction from local tomographic information, and provide a constructive procedure for determining a generic N-partite pure state based only on knowledge of the support of any two of the reduced density matrices of about half the parties, improving over existing results.
Partitioning Strategy Using Static Analysis Techniques
NASA Astrophysics Data System (ADS)
Seo, Yongjin; Soo Kim, Hyeon
2016-08-01
Flight software is the software used in satellites' on-board computers. It has requirements such as real-time operation and reliability, and the IMA (Integrated Modular Avionics) architecture is used to satisfy them. The IMA architecture introduces the concept of partitions, which affects the configuration of flight software: software that used to be loaded on a single system must now be divided into many partitions when loaded. To address this issue, existing studies have used experience-based partitioning methods, but such methods cannot be reused. In this respect, this paper proposes a partitioning method that is reusable and consistent.
A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method
NASA Astrophysics Data System (ADS)
Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.
2007-09-01
Trigonometrically fitted symplectic Partitioned Runge-Kutta (EFSPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the functions sin(wx) and cos(wx), w ∈ R. We modify a fifth-order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.
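For orientation, a minimal sketch of what "partitioned" means here, using the classical second-order Störmer-Verlet scheme (a symplectic PRK, though not the paper's fifth-order trigonometrically fitted method) on a harmonic oscillator:

```python
def stormer_verlet(q, p, h=0.01, w=1.0, steps=10000):
    # Partitioned update: p and q are advanced by different, staggered stages.
    for _ in range(steps):
        p -= 0.5 * h * w * w * q   # half kick,  dp/dt = -dH/dq
        q += h * p                 # drift,      dq/dt = +dH/dp
        p -= 0.5 * h * w * w * q   # half kick
    return q, p

q, p = stormer_verlet(1.0, 0.0)
print(0.5 * (p * p + q * q))       # energy stays near 0.5 (no secular drift)
```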
NASA Astrophysics Data System (ADS)
Molina, J. F.; Moreno, J. A.; Castro, A.; Rodríguez, C.; Fershtater, G. B.
2015-09-01
Dependencies of plagioclase/amphibole Al-Si partitioning, D(Al/Si)^plg/amp, and amphibole/liquid Mg partitioning, D(Mg)^amp/liq, on temperature, pressure and phase compositions are investigated employing robust regression methods based on MM-estimators. A database with 92 amphibole-plagioclase pairs (temperature range: 650-1050 °C; amphibole compositional limits: > 0.02 apfu (23 O) Ti and > 0.05 apfu Al) and 148 amphibole-glass pairs (temperature range: 800-1100 °C; amphibole compositional limit: CaM4/(CaM4 + NaM4) > 0.75), compiled from experiments in the literature, was used for the calculations (amphibole normalization scheme: 13-CNK method).
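statsmodels ships robust M-estimation (RLM), shown here as a hedged, simpler stand-in for the MM-estimators used in the study, on a synthetic ln D versus 1/T regression with injected outliers:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = np.linspace(900, 1350, 60) + 273.15                   # temperatures in K
lnD = 2.0 - 3500.0 / T + rng.normal(0.0, 0.05, T.size)    # synthetic ln D data
lnD[::15] += 0.8                                          # a few gross outliers

X = sm.add_constant(1.0 / T)                              # model: ln D = a + b/T
fit = sm.RLM(lnD, X, M=sm.robust.norms.TukeyBiweight()).fit()
print(fit.params)                                         # close to (2.0, -3500)
```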
Partitioning a macroscopic system into independent subsystems
NASA Astrophysics Data System (ADS)
Delle Site, Luigi; Ciccotti, Giovanni; Hartmann, Carsten
2017-08-01
We discuss the problem of partitioning a macroscopic system into a collection of independent subsystems. The partitioning of a system into replica-like subsystems is nowadays a subject of major interest in several fields of theoretical and applied physics. The thermodynamic approach currently favoured by practitioners is based on a phenomenological definition of an interface energy associated with the partition, due to a lack of easily computable expressions for a microscopic (i.e. particle-based) interface energy. In this article, we outline a general approach to derive sharp and computable bounds for the interface free energy in terms of microscopic statistical quantities. We discuss potential applications in nanothermodynamics and outline possible future directions.
Winding Schemes for Wide Constant Power Range of Double Stator Transverse Flux Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husain, Tausif; Hassan, Iftekhar; Sozer, Yilmaz
2015-05-01
Different ring winding schemes for double sided transverse flux machines are investigated in this paper for wide speed operation. The windings under investigation are based on two inverters used in parallel. At higher power applications this arrangement improves the drive efficiency. The new winding structure, through manipulation of the end connection, splits individual sets into two and connects the partitioned turns from individual stator sets in series. This configuration offers the flexibility of torque profiling and a greater flux weakening region. At low speeds and low torque only one winding set is capable of providing the required torque, thus providing greater fault tolerance. At higher speeds one set is dedicated to torque production and the other to flux control. The proposed method improves the machine efficiency and allows better flux weakening, which is desirable for traction applications.
Adaptive hybrid simulations for multiscale stochastic reaction networks.
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Scoring and staging systems using Cox linear regression modeling and recursive partitioning.
Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H
2006-01-01
Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
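A hedged sketch of one building block the study integrates, a Cox model fit (via the lifelines package) whose risk scores could then be thresholded into stages; the recursive partitioning, amalgamation and cross-validation logic is the paper's own, and the data below are invented:

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 15, 7, 11],   # censored survival times
    "event": [1, 0, 1, 1, 0, 1, 1, 0],      # 1 = event observed, 0 = censored
    "age":   [50, 61, 45, 70, 55, 40, 66, 52],
    "stage": [1, 2, 1, 3, 2, 1, 3, 2],
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)   # scores that cutoffs would turn into stages
```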
Inference and Analysis of Population Structure Using Genetic Data and Network Theory.
Greenbaum, Gili; Templeton, Alan R; Bar-David, Shirli
2016-04-01
Clustering individuals to subpopulations based on genetic data has become commonplace in many genetic studies. Inference about population structure is most often done by applying model-based approaches, aided by visualization using distance-based approaches such as multidimensional scaling. While existing distance-based approaches suffer from a lack of statistical rigor, model-based approaches entail assumptions of prior conditions such as that the subpopulations are at Hardy-Weinberg equilibria. Here we present a distance-based approach for inference about population structure using genetic data by defining population structure using network theory terminology and methods. A network is constructed from a pairwise genetic-similarity matrix of all sampled individuals. The community partition, a partition of a network to dense subgraphs, is equated with population structure, a partition of the population to genetically related groups. Community-detection algorithms are used to partition the network into communities, interpreted as a partition of the population to subpopulations. The statistical significance of the structure can be estimated by using permutation tests to evaluate the significance of the partition's modularity, a network theory measure indicating the quality of community partitions. To further characterize population structure, a new measure of the strength of association (SA) for an individual to its assigned community is presented. The strength of association distribution (SAD) of the communities is analyzed to provide additional population structure characteristics, such as the relative amount of gene flow experienced by the different subpopulations and identification of hybrid individuals. Human genetic data and simulations are used to demonstrate the applicability of the analyses. The approach presented here provides a novel, computationally efficient model-free method for inference about population structure that does not entail assumption of prior conditions. The method is implemented in the software NetStruct (available at https://giligreenbaum.wordpress.com/software/). Copyright © 2016 by the Genetics Society of America.
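A rough sketch of the described pipeline, not the NetStruct implementation: build a network from a pairwise similarity matrix (a random placeholder here), detect communities, and assess the partition's modularity against a permutation null:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)

def similarity_graph(S, thr=0.7):
    A = np.where(S > thr, S, 0.0)        # keep only strong pairwise similarities
    np.fill_diagonal(A, 0.0)
    return nx.from_numpy_array(A)

def shuffled(S):                          # null model: shuffle pairwise entries
    iu = np.triu_indices_from(S, k=1)
    P = np.zeros_like(S)
    P[iu] = rng.permutation(S[iu])
    return P + P.T

S = rng.random((40, 40)); S = (S + S.T) / 2   # placeholder similarity matrix
G = similarity_graph(S)
comms = list(greedy_modularity_communities(G))
q_obs = modularity(G, comms)

null = [modularity(Gp, greedy_modularity_communities(Gp))
        for Gp in (similarity_graph(shuffled(S)) for _ in range(50))]
pval = (1 + sum(q >= q_obs for q in null)) / (1 + len(null))
```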
Bandyopadhyay, Debashree; Mehler, Ernest L
2008-08-01
A general method has been developed to characterize the hydrophobicity or hydrophilicity of the microenvironment (MENV), in which a given amino acid side chain is immersed, by calculating a quantitative property descriptor (QPD) based on the relative (to water) hydrophobicity of the MENV. Values of the QPD were calculated for a test set of 733 proteins to analyze the modulating effects on amino acid residue properties by the MENV in which they are imbedded. The QPD values and solvent accessibility were used to derive a partitioning of residues based on the MENV hydrophobicities. From this partitioning, a new hydrophobicity scale was developed, entirely in the context of protein structure, where amino acid residues are immersed in one or more MENV "pockets". Thus, the partitioning is based on the residues "sampling" a large number of "solvents" (MENVs) that represent a very large range of hydrophobicity values. It was found that the hydrophobicity of around 80% of amino acid side chains and their MENV are complementary to each other, but for about 20%, the MENV and their imbedded residue can be considered as mismatched. Many of these mismatches could be rationalized in terms of the structural stability of the protein and/or the involvement of the imbedded residue in function. The analysis also indicated a remarkable conservation of local environments around highly conserved active site residues that have similar functions across protein families, but where members have relatively low sequence homology. Thus, quantitative evaluation of this QPD is suggested, here, as a tool for structure-function prediction, analysis, and parameter development for the calculation of properties in proteins. (c) 2008 Wiley-Liss, Inc.
Strainrange partitioning behavior of the nickel-base superalloys, Rene' 80 and IN 100
NASA Technical Reports Server (NTRS)
Halford, G. R.; Nachtigall, A. J.
1978-01-01
A study was made to assess the ability of the method of Strainrange Partitioning (SRP) to both correlate and predict the high-temperature, low-cycle fatigue lives of nickel-base superalloys for gas turbine applications. The partitioned strainrange versus life relationships for uncoated Rene' 80 and cast IN 100 were also determined from the ductility-normalized Strainrange Partitioning equations. These were used to predict the cyclic lives of the baseline tests. The life predictability of the method was verified for cast IN 100 by applying the baseline results to the cyclic life prediction of a series of complex strain cycling tests with multiple hold periods at constant strain. It was concluded that the method of SRP can correlate and predict the cyclic lives of laboratory specimens of the nickel-base superalloys evaluated in this program.
Gate-tunable current partition in graphene-based topological zero lines
NASA Astrophysics Data System (ADS)
Wang, Ke; Ren, Yafei; Deng, Xinzhou; Yang, Shengyuan A.; Jung, Jeil; Qiao, Zhenhua
2017-06-01
We demonstrate new mechanisms for gate-tunable current partition at topological zero-line intersections in a graphene-based current splitter. Based on numerical calculations of the nonequilibrium Green's functions and the Landauer-Büttiker formula, we show that the presence of a perpendicular magnetic field on the order of a few teslas allows for carrier-sign-dependent current routing. In the zero-field limit, the control of current routing and partition can be achieved within a range of 10-90% of the total incoming current by tuning the carrier density at tilted intersections or by modifying the relative magnitude of the bulk band gaps via gate voltage. We discuss the implications of our findings for the design of topological zero-line networks, where finite orbital magnetic moments are expected when the current partition is asymmetric.
Podhorniak, L V; Negron, J F; Griffith, F D
2001-01-01
A gas chromatographic method with a pulsed flame photometric detector (P-FPD) is presented for the analysis of 28 parent organophosphate (OP) pesticides and their OP metabolites. A total of 57 organophosphates were analyzed in 10 representative fruit and vegetable crop groups. The method is based on a judicious selection of known procedures from FDA sources such as the Pesticide Analytical Manual and Laboratory Information Bulletins, combined in a manner to recover the OPs and their metabolite(s) at the part-per-billion (ppb) level. The method uses an acetone extraction with either miniaturized Hydromatrix column partitioning or, alternatively, a miniaturized methylene dichloride liquid-liquid partitioning, followed by solid-phase extraction (SPE) cleanup with graphitized carbon black (GCB) and PSA cartridges. Determination of residues is by programmed-temperature capillary column gas chromatography fitted with a P-FPD set in the phosphorus mode. The method is designed so that a set of samples can be prepared in 1 working day for overnight instrumental analysis. The recovery data indicate that a daily column-cutting procedure, used in combination with the SPE extract cleanup, effectively reduces matrix enhancement at the ppb level for many organophosphates. The OPs most susceptible to elevated recoveries, of about 150% or greater based on peak area calculations, were trichlorfon, phosmet, and the metabolites of dimethoate, fenamiphos, fenthion, and phorate.
NASA Astrophysics Data System (ADS)
Cartier, Camille; Hammouda, Tahar; Doucelance, Régis; Boyet, Maud; Devidal, Jean-Luc; Moine, Bertrand
2014-04-01
In order to investigate the influence of very reducing conditions, we report enstatite-melt trace element partition coefficients (D) obtained on enstatite chondrite material at 5 GPa and under oxygen fugacities (fO2) ranging between 0.8 and 8.2 log units below the iron-wüstite (IW) buffer. Experiments were conducted in a multianvil apparatus between 1580 and 1850 °C, using doped (Sc, V, REE, HFSE, U, Th) starting materials. We used a two-site lattice strain model and a Monte-Carlo-type approach to model the experimentally determined partition coefficients. The model can fit our partitioning data, i.e. the repartition of trace elements in enstatite, which provides evidence for the attainment of equilibrium in our experiments. The precision on the lattice strain model parameters obtained from the modelling does not enable determination of the influence of intensive parameters on crystal chemical partitioning within our range of conditions (fO2, P, T, composition). We document the effect of variable oxygen fugacity on the partitioning of multivalent elements. Cr and V, which are trivalent in the pyroxene at around IW - 1, are reduced to the 2+ state under increasingly reducing conditions, thus affecting their partition coefficients. In our range of redox conditions Ti is always present as a mixture of 4+ and 3+ states; however, the Ti3+/Ti4+ ratio increases strongly under increasingly reducing conditions. Moreover, under highly reducing conditions, Nb and Ta, which usually are pentavalent in magmatic systems, appear to be reduced to lower-valence species, possibly Nb2+ and Ta3+. We propose a new proxy for fO2 based on D(Cr)/D(V). Our new data extend the redox range covered by previous studies and allow this proxy to be used over the whole range of redox conditions of solar system objects. We selected trace-element literature data for six chondrules on the criterion of their equilibrium. Applying the proxy to opx-matrix systems, we estimated that three type I chondrules equilibrated at IW - 7 ± 1, one type I chondrule at IW - 4 ± 1, and two type II chondrules at IW + 3 ± 1. This first accurate estimation of enstatite-melt fO2 for type I chondrules is very close to CAI values.
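The lattice strain modelling mentioned above rests on the Blundy-Wood relation; a self-contained sketch of fitting one site, with synthetic radii, D values, temperature and starting guesses (placeholders, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

R, NA = 8.314, 6.022e23
T = 1873.0                                        # illustrative temperature (K)

def lattice_strain(r_ang, D0, r0_ang, E_gpa):
    # Blundy & Wood (1994): D(r) = D0 exp(-4*pi*E*NA*(r0/2*dr^2 + dr^3/3)/(R*T))
    r, r0, E = r_ang * 1e-10, r0_ang * 1e-10, E_gpa * 1e9
    dr = r - r0
    return D0 * np.exp(-4 * np.pi * E * NA * (r0 / 2 * dr**2 + dr**3 / 3) / (R * T))

r = np.array([0.90, 0.99, 1.06, 1.13])            # ionic radii (angstroms), synthetic
D = np.array([0.30, 0.55, 0.40, 0.18])            # partition coefficients, synthetic
(D0, r0, E), _ = curve_fit(lattice_strain, r, D, p0=(0.6, 1.0, 300.0))
print(D0, r0, E)                                   # strain-free D0, optimal radius, stiffness
```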
Barillot, Romain; Louarn, Gaëtan; Escobar-Gutiérrez, Abraham J; Huynh, Pierre; Combes, Didier
2011-10-01
Most studies dealing with light partitioning in intercropping systems have used statistical models based on the turbid medium approach, thus assuming homogeneous canopies. However, these models could not be directly validated, although spatial heterogeneities can arise in such canopies. The aim of the present study was to assess the ability of the turbid medium approach to accurately estimate light partitioning within grass-legume mixed canopies. Three contrasting mixtures of wheat-pea, tall fescue-alfalfa and tall fescue-clover were sown according to various patterns and densities. Three-dimensional plant mock-ups were derived from magnetic digitizations carried out at different stages of development. The benchmarks for light interception efficiency (LIE) estimates were provided by the combination of a light projective model and the plant mock-ups, which also provided the inputs of a turbid medium model (SIRASCA), i.e. leaf area index and inclination. SIRASCA was set to gradually account for the vertical heterogeneity of the foliage, i.e. the canopy was described as one, two or ten horizontal layers of leaves. Mixtures exhibited varied and heterogeneous profiles of foliar distribution, leaf inclination and component species height. Nevertheless, most of the LIE was satisfactorily predicted by SIRASCA. Biased estimates were, however, observed for (1) grass species and (2) tall fescue-alfalfa mixtures grown at high density. Most of the discrepancies were due to vertical heterogeneities and were corrected by increasing the vertical description of the canopies, although, in practice, this would require time-consuming measurements. The turbid medium analogy can be successfully used in a wide range of canopies. However, a more detailed description of the canopy is required for mixtures exhibiting vertical stratification and inter-/intra-species foliage overlapping. Architectural models remain a relevant tool for studying light partitioning in intercropping systems that exhibit strong vertical heterogeneities. Moreover, these models offer the possibility of integrating the effects of microclimate variations on plant growth.
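A minimal turbid-medium sketch, assuming a single homogeneously mixed layer and illustrative LAI and extinction coefficients (a de Wit-type sharing rule, not SIRASCA itself): total interception follows Beer's law and is split among species in proportion to k_i x LAI_i.

```python
import math

def light_partition(lai, k, i0=1.0):
    # Fraction of incoming light intercepted by the mixed canopy, shared
    # among species in proportion to k_i * LAI_i.
    total = sum(ki * li for ki, li in zip(k, lai))
    intercepted = i0 * (1.0 - math.exp(-total))
    return [intercepted * ki * li / total for ki, li in zip(k, lai)]

print(light_partition(lai=[2.0, 1.5], k=[0.6, 0.8]))  # per-species LIE shares
```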
Van Hulst, Andraea; Roy-Gagnon, Marie-Hélène; Gauvin, Lise; Kestens, Yan; Henderson, Mélanie; Barnett, Tracie A
2015-02-15
Few studies consider how risk factors within multiple levels of influence operate synergistically to determine childhood obesity. We used recursive partitioning analysis to identify unique combinations of individual, familial, and neighborhood factors that best predict obesity in children, and tested whether these predict 2-year changes in body mass index (BMI). Data were collected in 2005-2008 and in 2008-2011 for 512 Quebec youth (8-10 years at baseline) with a history of parental obesity (QUALITY study). CDC age- and sex-specific BMI percentiles were computed and children were considered obese if their BMI was ≥95th percentile. Individual (physical activity and sugar-sweetened beverage intake), familial (household socioeconomic status and measures of parental obesity including both BMI and waist circumference), and neighborhood (disadvantage, prestige, and presence of parks, convenience stores, and fast food restaurants) factors were examined. Recursive partitioning, a method that generates a classification tree predicting obesity based on combined exposure to a series of variables, was used. Associations between resulting varying risk group membership and BMI percentile at baseline and 2-year follow up were examined using linear regression. Recursive partitioning yielded 7 subgroups with a prevalence of obesity equal to 8%, 11%, 26%, 28%, 41%, 60%, and 63%, respectively. The 2 highest risk subgroups comprised i) children not meeting physical activity guidelines, with at least one BMI-defined obese parent and 2 abdominally obese parents, living in disadvantaged neighborhoods without parks and, ii) children with these characteristics, except with access to ≥1 park and with access to ≥1 convenience store. Group membership was strongly associated with BMI at baseline, but did not systematically predict change in BMI. Findings support the notion that obesity is predicted by multiple factors in different settings and provide some indications of potentially obesogenic environments. Alternate group definitions as well as longer duration of follow up should be investigated to predict change in obesity.
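For readers unfamiliar with recursive partitioning, a toy run with scikit-learn's CART (a stand-in for the method used in the study; the features and the "obesity" rule below are entirely synthetic):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
names = ["activity", "parent_bmi", "parks", "stores"]   # hypothetical predictors
X = rng.random((512, 4))
y = (X[:, 1] > 0.6) & (X[:, 0] < 0.4)        # synthetic high-risk subgroup rule
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30).fit(X, y)
print(export_text(tree, feature_names=names))  # subgroups as root-to-leaf paths
```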
Constant-Time Pattern Matching For Real-Time Production Systems
NASA Astrophysics Data System (ADS)
Parson, Dale E.; Blank, Glenn D.
1989-03-01
Many intelligent systems must respond to sensory data or critical environmental conditions in fixed, predictable time. Rule-based systems, including those based on the efficient Rete matching algorithm, cannot guarantee this result. Improvement in execution-time efficiency is not all that is needed here; it is important to ensure constant, O(1) time limits for portions of the matching process. Our approach is inspired by two observations about human performance. First, cognitive psychologists distinguish between automatic and controlled processing. Analogously, we partition the matching process across two networks. The first is the automatic partition; it is characterized by predictable O(1) time and space complexity, lack of persistent memory, and is reactive in nature. The second is the controlled partition; it includes the search-based goal-driven and data-driven processing typical of most production system programming. The former is responsible for recognition and response to critical environmental conditions. The latter is responsible for the more flexible problem-solving behaviors consistent with the notion of intelligence. Support for learning and refining the automatic partition can be placed in the controlled partition. Our second observation is that people are able to attend to more critical stimuli or requirements selectively. Our match algorithm uses priorities to focus matching. It compares the priority of information during matching, rather than deferring this comparison until conflict resolution. Messages from the automatic partition are able to interrupt the controlled partition, enhancing system responsiveness. Our algorithm has numerous applications for systems that must exhibit time-constrained behavior.
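A toy illustration of the two-partition split (names and events invented for the example, not the paper's system): the automatic partition is a constant-time table lookup, while anything unrecognized falls through to slower, search-based matching.

```python
AUTOMATIC = {                                    # compiled, fixed-size reaction table
    ("temperature", "critical"): "shut down heater",
    ("pressure", "high"): "open relief valve",
}

def on_event(sensor, condition):
    action = AUTOMATIC.get((sensor, condition))  # one hash lookup: O(1) recognition
    if action is not None:
        return action                            # reactive, time-bounded response
    return deliberate(sensor, condition)         # controlled partition (slower search)

def deliberate(sensor, condition):
    return f"queue {sensor}={condition} for rule-based matching"

print(on_event("temperature", "critical"))
print(on_event("humidity", "low"))
```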
NASA Astrophysics Data System (ADS)
Ogée, J.; Peylin, P.; Ciais, P.; Bariac, T.; Brunet, Y.; Berbigier, P.; Roche, C.; Richard, P.; Bardoux, G.; Bonnefond, J.-M.
2003-06-01
The current emphasis on global climate studies has led the scientific community to set up a number of sites for measuring the long-term biosphere-atmosphere net CO2 exchange (net ecosystem exchange, NEE). Partitioning this flux into its elementary components, net assimilation (FA), and respiration (FR), remains necessary in order to get a better understanding of biosphere functioning and design better surface exchange models. Noting that FR and FA have different isotopic signatures, we evaluate the potential of isotopic 13CO2 measurements in the air (combined with CO2 flux and concentration measurements) to partition NEE into FR and FA on a routine basis. The study is conducted at a temperate coniferous forest where intensive isotopic measurements in air, soil, and biomass were performed in summer 1997. The multilayer soil-vegetation-atmosphere transfer model MuSICA is adapted to compute 13CO2 flux and concentration profiles. Using MuSICA as a "perfect" simulator and taking advantage of the very dense spatiotemporal resolution of the isotopic data set (341 flasks over a 24-hour period) enable us to test each hypothesis and estimate the performance of the method. The partitioning works better in midafternoon when isotopic disequilibrium is strong. With only 15 flasks, i.e., two 13CO2 nighttime profiles (to estimate the isotopic signature of FR) and five daytime measurements (to perform the partitioning) we get mean daily estimates of FR and FA that agree with the model within 15-20%. However, knowledge of the mesophyll conductance seems crucial and may be a limitation to the method.
Influence of Silicate Melt Composition on Metal/Silicate Partitioning of W, Ge, Ga and Ni
NASA Technical Reports Server (NTRS)
Singletary, S. J.; Domanik, K.; Drake, M. J.
2005-01-01
The depletion of the siderophile elements in the Earth's upper mantle relative to the chondritic meteorites is a geochemical imprint of core segregation. Therefore, metal/silicate partition coefficients (Dm/s) for siderophile elements are essential to investigations of core formation when used in conjunction with the pattern of elemental abundances in the Earth's mantle. The partitioning of siderophile elements is controlled by temperature, pressure, oxygen fugacity, and the compositions of the metal and silicate phases. Several recent studies have shown the importance of silicate melt composition for the partitioning of siderophile elements between silicate and metallic liquids. It has been demonstrated that many elements display increased solubility in less polymerized (mafic) melts. However, the importance of silicate melt composition was believed to be minor compared to the influence of oxygen fugacity until studies showed that melt composition is an important factor at high pressures and temperatures. Melt composition was also found to be important for the partitioning of high-valency siderophile elements. Atmospheric-pressure experiments varying only silicate melt composition were conducted to assess its importance for the partitioning of W, Co and Ga, and it was found that the valence of the dissolving species plays an important role in determining the effect of composition on solubility. In this study, we extend the data set to higher pressures and investigate the role of silicate melt composition in the partitioning of the siderophile elements W, Ge, Ga and Ni between metallic and silicate liquids.
Acceleration of Binding Site Comparisons by Graph Partitioning.
Krotzky, Timo; Klebe, Gerhard
2015-08-01
The comparison of protein binding sites is a prominent task in computational chemistry and has been studied in many different ways. For the automatic detection and comparison of putative binding cavities, the Cavbase system has been developed, which uses a coarse-grained set of pseudocenters to represent the physicochemical properties of a binding site and employs a graph-based procedure to calculate similarities between two binding sites. However, the comparison of two graphs is computationally quite demanding, which makes large-scale studies such as the rapid screening of entire databases hardly feasible. In a recent work, we proposed the method Local Cliques (LC) for the efficient comparison of Cavbase binding sites. It employs a clique heuristic to detect the maximum common subgraph of two binding sites and an extended graph model to additionally compare the shape of individual surface patches. In this study, we present an alternative that further accelerates the LC method by partitioning the binding-site graphs into disjoint components prior to their comparison. The pseudocenter sets are split according to their assigned physicochemical type, which leads to seven graphs that are much smaller than the original one. Applying this approach to the same test scenarios as the former comprehensive method results in a significant speed-up without sacrificing accuracy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
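The partitioning idea itself is simple to sketch (the node attribute ptype below is a hypothetical stand-in for the Cavbase data model): split the pseudocenter graph into per-type subgraphs, then compare like with like.

```python
import networkx as nx

def split_by_type(G, attr="ptype"):
    groups = {}
    for node, data in G.nodes(data=True):
        groups.setdefault(data[attr], []).append(node)
    # One (much smaller) subgraph per physicochemical type.
    return {t: G.subgraph(nodes).copy() for t, nodes in groups.items()}

G = nx.Graph()
G.add_nodes_from([(1, {"ptype": "donor"}), (2, {"ptype": "acceptor"}),
                  (3, {"ptype": "donor"}), (4, {"ptype": "acceptor"})])
G.add_edges_from([(1, 3), (2, 4)])
parts = split_by_type(G)   # match donor-vs-donor, acceptor-vs-acceptor, ...
```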
Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing
NASA Astrophysics Data System (ADS)
Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.
2018-04-01
We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance sampling technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free energy, and the discrete-valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
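A stripped-down sketch of partial biasing on a double-well toy (all constants illustrative; the actual algorithm uses a decreasing step size and the convergence analysis of the paper): the log-weights of the partition sets are learnt on the fly, and only a fraction a of them biases the target.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sets, a, gamma = 10, 0.5, 0.01         # partition size, biasing fraction, step
logw = np.zeros(n_sets)                  # running free-energy (log-weight) estimate

def bin_of(x):                           # collective variable: which set x is in
    return min(int(abs(x) * n_sets / 5.0), n_sets - 1)

def logp(y):                             # double-well target, partially biased
    return -(y * y - 2.0) ** 2 - a * logw[bin_of(y)]

x = 0.0
for _ in range(20000):
    prop = x + rng.normal(scale=0.5)
    if np.log(rng.random()) < logp(prop) - logp(x):   # Metropolis accept/reject
        x = prop
    logw[bin_of(x)] += gamma             # penalize the currently visited set
```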
Fu, P; Panneerselvam, A; Clifford, B; Dowlati, A; Ma, P C; Zeng, G; Halmos, B; Leidner, R S
2015-12-01
It is well known that non-small cell lung cancer (NSCLC) is a heterogeneous group of diseases. Previous studies have demonstrated genetic variation among different ethnic groups in the epidermal growth factor receptor (EGFR) in NSCLC. Research by our group and others has recently shown a lower frequency of EGFR mutations in African Americans with NSCLC, as compared to their White counterparts. In this study, we use our original study data on EGFR pathway genetics in African American NSCLC as an example to illustrate that univariate analyses based on aggregation versus partition of data lead to contradictory results, in order to emphasize the importance of controlling statistical confounding. We further investigate analytic approaches in logistic regression for data with separation, as is the case in our example data set, and apply appropriate methods to identify predictors of EGFR mutation. Our simulation shows that with separated or nearly separated data, penalized maximum likelihood (PML) produces estimates with the smallest bias and approximately maintains the nominal level, with statistical power equal to or better than that of maximum likelihood and exact conditional likelihood methods. Application of the PML method to our example data set shows that race and EGFR-FISH are independently significant predictors of EGFR mutation. © The Author(s) 2011.
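To see why separation forces penalization, a hedged two-line comparison (ridge-penalized logistic regression as a simple stand-in for the PML approach of the paper, which is classically a Firth-type penalty): under perfect separation the unpenalized slope estimate diverges, while the penalized fit stays finite.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]])
y = np.array([0, 0, 0, 1, 1, 1])                       # perfectly separated data
near_mle = LogisticRegression(C=1e6, max_iter=10000).fit(X, y)   # slope blows up
penalized = LogisticRegression(C=1.0).fit(X, y)                  # finite, stable
print(near_mle.coef_[0, 0], penalized.coef_[0, 0])
```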
Knacker, T; Schallnaß, H J; Klaschka, U; Ahlers, J
1995-11-01
The criteria for classification and labelling of substances as "dangerous for the environment" agreed upon within the European Union (EU) were applied to two sets of existing chemicals. One set (sample A) consisted of 41 randomly selected compounds listed in the European Inventory of Existing Chemical Substances (EINECS). The other set (sample B) comprised 115 substances listed in Annex I of Directive 67/548/EEC which were classified by the EU Working Group on Classification and Labelling of Existing Chemicals. The aquatic toxicity (fish mortality, Daphnia immobilisation, algal growth inhibition), ready biodegradability and n-octanol/water partition coefficient were measured for sample A by a single laboratory. For sample B, the available ecotoxicological data originated from many different sources and was therefore rather heterogeneous. In both samples, algal toxicity was the most sensitive effect parameter for most substances. Furthermore, it was found that classification based on a single aquatic test result differs in many cases from classification based on a complete data set, although a correlation exists between the biological end-points of the aquatic toxicity test systems.
The impact of aerosol composition on the particle to gas partitioning of reactive mercury.
Rutter, Andrew P; Schauer, James J
2007-06-01
A laboratory system was developed to study the gas-particle partitioning of reactive mercury (RM) as a function of aerosol composition in synthetic atmospheric particulate matter. The collection of RM was achieved by filter- and sorbent-based methods. Analyses of the RM collected on the filters and sorbents were performed using thermal extraction combined with cold vapor atomic fluorescence spectroscopy (CVAFS), allowing direct measurement of the RM load on the substrates. Laboratory measurements of the gas-particle partitioning coefficients of RM to atmospheric aerosol particles revealed a strong dependence on aerosol composition, with partitioning coefficients that varied by orders of magnitude depending on the composition of the particles. Particles of sodium nitrate and the chlorides of potassium and sodium had high partitioning coefficients, shifting the RM partitioning toward the particle phase, while ammonium sulfate, levoglucosan, and adipic acid caused the RM to partition toward the gas phase and, therefore, had partitioning coefficients that were lower by orders of magnitude.
Barrett, Craig F; Specht, Chelsea D; Leebens-Mack, Jim; Stevenson, Dennis Wm; Zomlefer, Wendy B; Davis, Jerrold I
2014-01-01
Zingiberales comprise a clade of eight tropical monocot families including approx. 2500 species and are hypothesized to have undergone an ancient, rapid radiation during the Cretaceous. Zingiberales display substantial variation in floral morphology, and several members are ecologically and economically important. Deep phylogenetic relationships among primary lineages of Zingiberales have proved difficult to resolve in previous studies, representing a key region of uncertainty in the monocot tree of life. Next-generation sequencing was used to construct complete plastid gene sets for nine taxa of Zingiberales, which were added to five previously sequenced sets in an attempt to resolve deep relationships among families in the order. Variation in taxon sampling, process partition inclusion and partition model parameters were examined to assess their effects on topology and support. Codon-based likelihood analysis identified a strongly supported clade of ((Cannaceae, Marantaceae), (Costaceae, Zingiberaceae)), sister to (Musaceae, (Lowiaceae, Strelitziaceae)), collectively sister to Heliconiaceae. However, the deepest divergences in this phylogenetic analysis comprised short branches with weak support. Additionally, manipulation of matrices resulted in differing deep topologies in an unpredictable fashion. Alternative topology testing allowed statistical rejection of some of the topologies. Saturation fails to explain observed topological uncertainty and low support at the base of Zingiberales. Evidence for conflict among the plastid data was based on a support metric that accounts for conflicting resampled topologies. Many relationships were resolved with robust support, but the paucity of character information supporting the deepest nodes and the existence of conflict suggest that plastid coding regions are insufficient to resolve and support the earliest divergences among families of Zingiberales. Whole plastomes will continue to be highly useful in plant phylogenetics, but the current study adds to a growing body of literature suggesting that they may not provide enough character information for resolving ancient, rapid radiations.
Solving Multi-variate Polynomial Equations in a Finite Field
2013-06-01
Algebraic Background: some algebraic definitions and basics are discussed as they pertain to this research. For a more detailed treatment, consult a graph theory text such as [10]. A graph G is a k-partite graph if V(G) can be partitioned into k subsets V1, V2, ..., Vk such that uv is an edge of G only if u and v belong to different partite sets.
Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation
NASA Astrophysics Data System (ADS)
Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.
2018-05-01
Typical quantifiers of Tsallis' statistical mechanics, such as the partition function Z and the mean energy 〈U〉, exhibit poles. The poles appear at distinctive values of Tsallis' characteristic real parameter q, on a numerable set of rational numbers of the q-line, and are dealt with here using dimensional regularization resources. The physical effects of these poles on the specific heats are studied for the two-body classical gravitation potential.
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Mitra, Sunanda
2002-05-01
Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results show that even at low bit rates the reconstructed images have perceptual quality higher than that of the state-of-the-art scalar quantization method, set partitioning in hierarchical trees (SPIHT).
An iterative network partition algorithm for accurate identification of dense network modules
Sun, Siqi; Dong, Xinran; Fu, Yao; Tian, Weidong
2012-01-01
A key step in network analysis is to partition a complex network into dense modules. Currently, modularity is one of the most popular benefit functions used to partition network modules. However, recent studies suggested that it has an inherent limitation in detecting dense network modules. In this study, we observed that despite the limitation, modularity has the advantage of preserving the primary network structure of the undetected modules. Thus, we have developed a simple iterative Network Partition (iNP) algorithm to partition a network. The iNP algorithm provides a general framework in which any modularity-based algorithm can be implemented in the network partition step. Here, we tested iNP with three modularity-based algorithms: multi-step greedy (MSG), spectral clustering and Qcut. Compared with the original three methods, iNP achieved a significant improvement in the quality of network partition in a benchmark study with simulated networks, identified more modules with significantly better enrichment of functionally related genes in both yeast protein complex network and breast cancer gene co-expression network, and discovered more cancer-specific modules in the cancer gene co-expression network. As such, iNP should have a broad application as a general method to assist in the analysis of biological networks. PMID:22121225
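A minimal sketch of the recursive idea, assuming networkx's greedy modularity optimizer as the pluggable partition step (the study tested MSG, spectral clustering and Qcut instead); the function name, minimum module size and stopping rule are illustrative, not the published implementation:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def iterative_partition(G, min_size=5):
    """Recursively re-partition detected modules until none can be split."""
    final, queue = [], [G]
    while queue:
        sub = queue.pop()
        if sub.number_of_nodes() <= min_size or sub.number_of_edges() == 0:
            final.append(set(sub.nodes()))
            continue
        comms = list(greedy_modularity_communities(sub))
        if len(comms) <= 1:                  # module is already dense
            final.append(set(sub.nodes()))
        else:                                # split and re-examine each part
            queue.extend(sub.subgraph(c).copy() for c in comms)
    return final

modules = iterative_partition(nx.karate_club_graph())
print([len(m) for m in modules])
```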
Partial storage optimization and load control strategy of cloud data centers.
Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela
2015-01-01
We present a novel approach to solve cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner.
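As a rough illustration of the concurrent dual-direction idea, the sketch below has two workers fetch a file's partitions from opposite ends of the partition list, each from a different cloud node; the fetch stub, node names and partition numbering are hypothetical placeholders, not the paper's protocol.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(node, part_id):
    """Placeholder for a real cloud read of one stored partition."""
    return f"<bytes of partition {part_id} from {node}>"

def dual_direction_download(num_parts, node_a, node_b):
    result = [None] * num_parts
    mid = num_parts // 2
    def front():                     # node A serves partitions 0..mid-1
        for i in range(mid):
            result[i] = fetch(node_a, i)
    def back():                      # node B serves the rest, back to front
        for i in range(num_parts - 1, mid - 1, -1):
            result[i] = fetch(node_b, i)
    with ThreadPoolExecutor(max_workers=2) as pool:
        for f in [pool.submit(front), pool.submit(back)]:
            f.result()               # wait and surface worker exceptions
    return result

print(dual_direction_download(5, "cloud-node-A", "cloud-node-B"))
```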
Yu, S; Gao, S; Gan, Y; Zhang, Y; Ruan, X; Wang, Y; Yang, L; Shi, J
2016-04-01
Quantitative structure-property relationship modelling can be a valuable alternative method to replace or reduce experimental testing. In particular, some endpoints such as octanol-water (KOW) and organic carbon-water (KOC) partition coefficients of polychlorinated biphenyls (PCBs) are easier to predict, and various models have already been developed. In this paper, two different methods, multiple linear regression based on descriptors generated using Dragon software and hologram quantitative structure-activity relationships, were employed to predict suspended particulate matter (SPM) derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of 209 PCBs. The predictive ability of the derived models was validated using a test set. The performance of all these models was compared with EPI Suite™ software. The results indicated that the proposed models were robust and satisfactory, and could provide feasible and promising tools for the rapid assessment of the SPM derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of PCBs.
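A hedged sketch of the MLR half of this workflow: fit log KOW from molecular descriptors on a training split, then validate on a held-out test set. The descriptor matrix below is synthetic; real use would substitute the Dragon-generated descriptors and measured values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(209, 4))                # stand-in: 4 descriptors per PCB
y = 5.0 + X @ np.array([0.8, -0.3, 0.5, 0.1]) + rng.normal(0, 0.1, 209)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)   # the MLR model
print("external R^2 on the test set:", model.score(X_te, y_te))
```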
Plagianakos, V P; Magoulas, G D; Vrahatis, M N
2006-03-01
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
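The data-partitioning step can be sketched as follows, with Python's multiprocessing standing in for the parallel virtual machine: each worker evaluates the gradient on its shard of the training set and the master sums the contributions. The linear least-squares gradient is a simple stand-in for the neural-network error function used in the paper.

```python
import numpy as np
from multiprocessing import Pool

def partial_grad(args):
    """Gradient contribution of one training-set shard (least squares)."""
    w, X, y = args
    return X.T @ (X @ w - y)

def distributed_grad(w, X, y, n_workers=4):
    shards = np.array_split(np.arange(len(y)), n_workers)
    with Pool(n_workers) as pool:
        parts = pool.map(partial_grad, [(w, X[s], y[s]) for s in shards])
    return sum(parts) / len(y)       # shard sums combine to the full gradient

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, w, y = rng.normal(size=(1000, 8)), np.zeros(8), rng.normal(size=1000)
    print(distributed_grad(w, X, y))
```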
Two-lattice models of trace element behavior: A response
NASA Astrophysics Data System (ADS)
Ellison, Adam J. G.; Hess, Paul C.
1990-08-01
Two-lattice melt components of Bottinga and Weill (1972), Nielsen and Drake (1979), and Nielsen (1985) are applied to major and trace element partitioning between coexisting immiscible liquids studied by Ryerson and Hess (1978) and Watson (1976). The results show that (1) the set of components most successful in one system is not necessarily portable to another system; (2) solution non-ideality within a sublattice severely limits applicability of two-lattice models; (3) rigorous application of two-lattice melt components may yield effective partition coefficients for major element components with no physical interpretation; and (4) the distinction between network-forming and network-modifying components in the sense of the two-lattice models is not clear cut. The algebraic description of two-lattice models is such that they will most successfully limit the compositional dependence of major and trace element solution behavior when the effective partition coefficient of the component of interest is essentially the same as the bulk partition coefficient of all other components within its sublattice.
Toward prediction of alkane/water partition coefficients.
Toulmin, Anita; Wood, J Matthew; Kenny, Peter W
2008-07-10
Partition coefficients were measured for 47 compounds in the hexadecane/water (P_hxd) and 1-octanol/water (P_oct) systems. Some types of hydrogen bond acceptor presented by these compounds to the partitioning systems are not well represented in the literature of alkane/water partitioning. The difference, ΔlogP, between logP_oct and logP_hxd is a measure of the hydrogen bonding potential of a molecule and is identified as a target for predictive modeling. Minimized molecular electrostatic potential (V_min) was shown to be an effective predictor of the contribution of hydrogen bond acceptors to ΔlogP. Carbonyl oxygen atoms were found to be stronger hydrogen bond acceptors for their electrostatic potential than heteroaromatic nitrogen or oxygen bound to hypervalent sulfur or nitrogen. Values of V_min calculated for hydrogen-bonded complexes were used to explore polarization effects. Predicted logP_hxd and ΔlogP were shown to be more effective than logP_oct for modeling brain penetration for a data set of 18 compounds.
Sharing the cell's bounty - organelle inheritance in yeast.
Knoblach, Barbara; Rachubinski, Richard A
2015-02-15
Eukaryotic cells replicate and partition their organelles between the mother cell and the daughter cell at cytokinesis. Polarized cells, notably the budding yeast Saccharomyces cerevisiae, are well suited for the study of organelle inheritance, as they facilitate an experimental dissection of organelle transport and retention processes. Much progress has been made in defining the molecular players involved in organelle partitioning in yeast. Each organelle uses a distinct set of factors - motor, anchor and adaptor proteins - that ensures its inheritance by future generations of cells. We propose that all organelles, regardless of origin or copy number, are partitioned by the same fundamental mechanism involving division and segregation. Thus, the mother cell keeps, and the daughter cell receives, their fair and equitable share of organelles. This mechanism of partitioning moreover facilitates the segregation of organelle fragments that are not functionally equivalent. In this Commentary, we describe how this principle of organelle population control affects peroxisomes and other organelles, and outline its implications for yeast life span and rejuvenation. © 2015. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Mann, Ute; Frost, Daniel J.; Rubie, David C.; Becker, Harry; Audétat, Andreas
2012-05-01
The apparent overabundance of the highly siderophile elements (HSEs: Pt-group elements, Re and Au) in the mantles of Earth, Moon and Mars has not been satisfactorily explained. Although late accretion of a chondritic component seems to provide the most plausible explanation, metal-silicate equilibration in a magma ocean cannot be ruled out due to a lack of HSE partitioning data suitable for extrapolations to the relevant high pressure and high temperature conditions. We provide a new data set of partition coefficients simultaneously determined for Ru, Rh, Pd, Re, Ir and Pt over a range of 3.5-18 GPa and 2423-2773 K. In multianvil experiments, molten peridotite was equilibrated in MgO single crystal capsules with liquid Fe-alloy that contained bulk HSE concentrations of 53.2-98.9 wt% (XFe = 0.03-0.67) such that oxygen fugacities of IW - 1.5 to IW + 1.6 (i.e. logarithmic units relative to the iron-wüstite buffer) were established at run conditions. To analyse trace concentrations of the HSEs in the silicate melt with LA-ICP-MS, two silicate glass standards (1-119 ppm Ru, Rh, Pd, Re, Ir, Pt) were produced and evaluated for this study. Using an asymmetric regular solution model we have corrected experimental partition coefficients to account for the differences between HSE metal activities in the multicomponent Fe-alloys and infinite dilution. Based on the experimental data, the P and T dependence of the partition coefficients (D) was parameterized. The partition coefficients of all HSEs studied decrease with increasing pressure and to a greater extent with increasing temperature. Except for Pt, the decrease with pressure is stronger below ˜6 GPa and much weaker in the range 6-18 GPa. This change might result from pressure induced coordination changes in the silicate liquid. Extrapolating the D values over a large range of potential P-T conditions in a terrestrial magma ocean (peridotite liquidus at P ⩽ 60-80 GPa) we conclude that the P-T-induced decrease of D would not have been sufficient to explain HSE mantle abundances by metal-silicate equilibration at a common set of P-T-oxygen fugacity conditions. Therefore, the mantle concentrations of most HSEs cannot have been established during core formation. The comparatively less siderophile Pd might have been partly retained in the magma ocean if effective equilibration pressures reached 35-50 GPa. To a much smaller extent this could also apply to Pt and Rh providing that equilibration pressures reached ⩾60 GPa in the late stage of accretion. With most of the HSE partition coefficients at 60 GPa still differing by 0.5-3 orders of magnitude, metal-silicate equilibration alone cannot have produced the observed near-chondritic HSE abundances of the mantles of the Earth as well as of the Moon or Mars. Our results show that an additional process, such as the accretion of a late veneer composed of some type of chondritic material, was required. The results, therefore, support recent hybrid models, which propose that the observed HSE signatures are a combined result of both metal-silicate partitioning as well as an overprint by late accretion.
The "p"-Median Model as a Tool for Clustering Psychological Data
ERIC Educational Resources Information Center
Kohn, Hans-Friedrich; Steinley, Douglas; Brusco, Michael J.
2010-01-01
The "p"-median clustering model represents a combinatorial approach to partition data sets into disjoint, nonhierarchical groups. Object classes are constructed around "exemplars", that is, manifest objects in the data set, with the remaining instances assigned to their closest cluster centers. Effective, state-of-the-art implementations of…
A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steensland, Johan; Ray, Jaideep
2003-07-01
This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a meta-partitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU. But even with adaption, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaption causes the workload to change dynamically, calling for dynamic (re-)partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful to lower overall execution times for many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.
Purification of biomaterials by phase partitioning
NASA Technical Reports Server (NTRS)
Harris, J. M.
1984-01-01
A technique which is particularly suited to microgravity environments and which is potentially more powerful than electrophoresis is phase partitioning. Phase partitioning is purification by partitioning between the two immiscible aqueous layers formed by solution of the polymers poly(ethylene glycol) and dextran in water. This technique has proved very useful for separations in one-g, but it is limited for cells because the cells are more dense than the phase solutions and thus tend to sediment to the bottom of the container before reaching equilibrium with the preferred phase. There are three phases of work in this area: synthesis of new polymers for affinity phase partitioning; development of automated apparatus for ground-based separations; and design of apparatus for performing simple phase partitioning space experiments, including examination of mechanisms for separating phases in the absence of gravity.
Rayne, Sierra; Forest, Kaya
2014-09-19
The air-water partition coefficient (Kaw) of perfluoro-2-methyl-3-pentanone (PFMP) was estimated using the G4MP2/G4 levels of theory and the SMD solvation model. A suite of 31 fluorinated compounds was employed to calibrate the theoretical method. Excellent agreement between experimental and directly calculated Kaw values was obtained for the calibration compounds. The PCM solvation model was found to yield unsatisfactory Kaw estimates for fluorinated compounds at both levels of theory. The HENRYWIN Kaw estimation program also exhibited poor Kaw prediction performance on the training set. Based on the resulting regression equation for the calibration compounds, the G4MP2-SMD method constrained the estimated Kaw of PFMP to the range 5-8 × 10^(-6) M atm^(-1). The magnitude of this Kaw range indicates almost all PFMP released into the atmosphere or near the land-atmosphere interface will reside in the gas phase, with only minor quantities dissolved in the aqueous phase as the parent compound and/or its hydrate/hydrate conjugate base. Following discharge into aqueous systems not at equilibrium with the atmosphere, significant quantities of PFMP will be present as the dissolved parent compound and/or its hydrate/hydrate conjugate base.
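The calibration step amounts to regressing experimental on directly calculated log Kaw values for the training compounds, then applying the fitted line to the new compound; a minimal sketch with placeholder numbers (not the paper's data) follows.

```python
import numpy as np

calc = np.array([-4.1, -3.2, -5.0, -2.8, -4.6])   # calculated log Kaw (toy)
expt = np.array([-4.3, -3.1, -5.2, -2.9, -4.8])   # experimental log Kaw (toy)
slope, intercept = np.polyfit(calc, expt, 1)      # calibration line

calc_new = -5.3                                    # hypothetical raw estimate
print("calibrated log Kaw:", slope * calc_new + intercept)
```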
NASA Astrophysics Data System (ADS)
Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.
2017-09-01
Recent advances in remote sensing have produced a great number of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for effective processing, analysis and classification due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are oriented toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correct and incorrect regions on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, well capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way to fully automatic and effective VHR image classification.
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
Detecting communities in large networks
NASA Astrophysics Data System (ADS)
Capocci, A.; Servedio, V. D. P.; Caldarelli, G.; Colaiori, F.
2005-07-01
We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientation. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale data set from a psychological experiment of word association. In this case, it proves to be successful both in clustering words and in uncovering mental association patterns.
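A stripped-down version of the spectral step might look like the following: embed nodes with the leading eigenvectors of a normalized affinity matrix and group the embedded points with k-means. This simplification ignores the paper's treatment of link orientation and is offered only to fix ideas.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_communities(W, k):
    """W: symmetric non-negative weight matrix (no isolated nodes)."""
    d = W.sum(axis=1)
    L = W / np.sqrt(np.outer(d, d))      # symmetrically normalized affinity
    vals, vecs = np.linalg.eigh(L)
    emb = vecs[:, -k:]                   # top-k eigenvectors as coordinates
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)

W = np.kron(np.eye(2), np.ones((5, 5))) + 0.01   # two dense blocks, weak ties
print(spectral_communities(W, 2))
```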
Exploring Sampling in the Detection of Multicategory EEG Signals
Siuly, Siuly; Kabir, Enamul; Wang, Hua; Zhang, Yanchun
2015-01-01
The paper presents a structure based on sampling and machine learning techniques for the detection of multicategory EEG signals, where random sampling (RS) and optimal allocation sampling (OS) are explored. In the proposed framework, before using the RS and OS schemes, the entire EEG signal of each class is partitioned into several groups based on a particular time period. The RS and OS schemes are used in order to have representative observations from each group of each category of EEG data. Then all of the samples selected by RS from the groups of each category are combined into one set, named the RS set. In a similar way, an OS set is obtained for the OS scheme. Then eleven statistical features are extracted from the RS and OS sets, separately. Finally this study employs three well-known classifiers: k-nearest neighbor (k-NN), multinomial logistic regression with a ridge estimator (MLR), and support vector machine (SVM) to evaluate the performance of the RS and OS feature sets. The experimental outcomes demonstrate that the RS scheme represents the EEG signals well and that k-NN with RS is the optimum choice for detection of multicategory EEG signals. PMID:25977705
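A toy version of the RS pipeline, with synthetic signals, fewer statistics than the paper's eleven features, and only the k-NN classifier (all parameters illustrative):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def rs_features(signal, n_groups=4, n_per_group=64):
    """Partition a signal into time groups, randomly sample each group,
    pool the samples, and extract simple statistics."""
    groups = np.array_split(signal, n_groups)
    sample = np.concatenate([rng.choice(g, n_per_group, replace=False)
                             for g in groups])
    return [sample.mean(), sample.std(), np.median(sample),
            sample.min(), sample.max()]

# two toy "EEG" classes that differ in amplitude
X = [rs_features(rng.normal(0, s, 1024)) for s in (1.0,) * 20 + (2.0,) * 20]
y = [0] * 20 + [1] * 20
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("training accuracy:", clf.score(X, y))
```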
Censored quantile regression with recursive partitioning-based weights
Wey, Andrew; Wang, Lan; Rudser, Kyle
2014-01-01
Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800
Robust MST-Based Clustering Algorithm.
Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing
2018-06-01
Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. The grouping principle yields superior clustering results when mining arbitrarily-shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters would be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms compared clustering algorithms.
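For contrast with the robust variant proposed in the letter, the classical MST clustering baseline it improves upon fits in a few lines: build the MST, delete the k−1 heaviest edges, and take connected components as clusters.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_clusters(X, k):
    mst = minimum_spanning_tree(squareform(pdist(X))).toarray()
    edges, weights = np.argwhere(mst > 0), mst[mst > 0]
    for i in np.argsort(weights)[-(k - 1):]:   # cut the k-1 heaviest edges
        mst[tuple(edges[i])] = 0
    _, labels = connected_components(mst, directed=False)
    return labels

X = np.vstack([np.random.default_rng(2).normal(m, 0.2, (15, 2)) for m in (0, 3)])
print(mst_clusters(X, 2))
```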
Exact deconstruction of the 6D (2,0) theory
NASA Astrophysics Data System (ADS)
Hayling, J.; Papageorgakis, C.; Pomoni, E.; Rodríguez-Gómez, D.
2017-06-01
The dimensional-deconstruction prescription of Arkani-Hamed, Cohen, Kaplan, Karch and Motl provides a mechanism for recovering the A-type (2,0) theories on T 2, starting from a four-dimensional N=2 circular-quiver theory. We put this conjecture to the test using two exact-counting arguments: in the decompactification limit, we compare the Higgs-branch Hilbert series of the 4D N=2 quiver to the "half-BPS" limit of the (2,0) superconformal index. We also compare the full partition function for the 4D quiver on S 4 to the (2,0) partition function on S 4 × T 2. In both cases we find exact agreement. The partition function calculation sets up a dictionary between exact results in 4D and 6D.
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
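A generic illustration of the partitioned (staggered) idea, not the ACSIS code itself: a second-order structural system and a first-order controller state are advanced by separate updates within each time step, exchanging coupling terms. All matrices, gains and step sizes are illustrative.

```python
import numpy as np

M, C, K = np.eye(2), 0.1 * np.eye(2), np.diag([4.0, 9.0])   # structure
A, B = -np.eye(2), np.eye(2)                                # controller
dt, x, v, z = 0.01, np.array([1.0, 0.5]), np.zeros(2), np.zeros(2)

for step in range(1000):
    u = z                                       # coupling: control force
    a = np.linalg.solve(M, u - C @ v - K @ x)   # structural partition
    v = v + dt * a                              # semi-implicit Euler update
    x = x + dt * v
    z = z + dt * (A @ z + B @ x)                # controller partition
print(x)
```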
FPFH-based graph matching for 3D point cloud registration
NASA Astrophysics Data System (ADS)
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration and can help obtain a reliable initial alignment. In this paper, we put forward an advanced point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are used to determine the initial possible correspondences first. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain the final set of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n^3)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method can obtain better results in terms of both accuracy and time cost compared with other point cloud registration methods.
Metatranscriptome analyses indicate resource partitioning between diatoms in the field.
Alexander, Harriet; Jenkins, Bethany D; Rynearson, Tatiana A; Dyhrman, Sonya T
2015-04-28
Diverse communities of marine phytoplankton carry out half of global primary production. The vast diversity of the phytoplankton has long perplexed ecologists because these organisms coexist in an isotropic environment while competing for the same basic resources (e.g., inorganic nutrients). Differential niche partitioning of resources is one hypothesis to explain this "paradox of the plankton," but it is difficult to quantify and track variation in phytoplankton metabolism in situ. Here, we use quantitative metatranscriptome analyses to examine pathways of nitrogen (N) and phosphorus (P) metabolism in diatoms that cooccur regularly in an estuary on the east coast of the United States (Narragansett Bay). Expression of known N and P metabolic pathways varied between diatoms, indicating apparent differences in resource utilization capacity that may prevent direct competition. Nutrient amendment incubations skewed N/P ratios, elucidating nutrient-responsive patterns of expression and facilitating a quantitative comparison between diatoms. The resource-responsive (RR) gene sets deviated in composition from the metabolic profile of the organism, being enriched in genes associated with N and P metabolism. Expression of the RR gene set varied over time and differed significantly between diatoms, resulting in opposite transcriptional responses to the same environment. Apparent differences in metabolic capacity and the expression of that capacity in the environment suggest that diatom-specific resource partitioning was occurring in Narragansett Bay. This high-resolution approach highlights the molecular underpinnings of diatom resource utilization and how cooccurring diatoms adjust their cellular physiology to partition their niche space.
Improving Unstructured Mesh Partitions for Multiple Criteria Using Mesh Adjacencies
Smith, Cameron W.; Rasquin, Michel; Ibanez, Dan; ...
2018-02-13
The scalability of unstructured mesh based applications depends on partitioning methods that quickly balance the computational work while reducing communication costs. Zhou et al. [SIAM J. Sci. Comput., 32 (2010), pp. 3201-3227; J. Supercomput., 59 (2012), pp. 1218-1228] demonstrated the combination of (hyper)graph methods with vertex and element partition improvement for PHASTA CFD scaling to hundreds of thousands of processes. Our work generalizes partition improvement to support balancing combinations of all the mesh entity dimensions (vertices, edges, faces, regions) in partitions with imbalances exceeding 70%. Improvement results are then presented for multiple entity dimensions on up to one million processes on meshes with over 12 billion tetrahedral elements.
Centrifuge models simulating magma emplacement during oblique rifting
NASA Astrophysics Data System (ADS)
Corti, Giacomo; Bonini, Marco; Innocenti, Fabrizio; Manetti, Piero; Mulugeta, Genene
2001-07-01
A series of centrifuge analogue experiments have been performed to model the mechanics of continental oblique extension (in the range of 0° to 60°) in the presence of underplated magma at the base of the continental crust. The experiments reproduced the main characteristics of oblique rifting, such as (1) en-echelon arrangement of structures, (2) mean fault trends oblique to the extension vector, (3) strain partitioning between different sets of faults and (4) fault dips higher than in purely normal faults (e.g. Tron, V., Brun, J.-P., 1991. Experiments on oblique rifting in brittle-ductile systems. Tectonophysics 188, 71-84). The model results show that the pattern of deformation is strongly controlled by the angle of obliquity ( α), which determines the ratio between the shearing and stretching components of movement. For α⩽35°, the deformation is partitioned between oblique-slip and normal faults, whereas for α⩾45° a strain partitioning arises between oblique-slip and strike-slip faults. The experimental results show that for α⩽35°, there is a strong coupling between deformation and the underplated magma: the presence of magma determines a strain localisation and a reduced strain partitioning; deformation, in turn, focuses magma emplacement. Magmatic chambers form in the core of lower crust domes with an oblique trend to the initial magma reservoir and, in some cases, an en-echelon arrangement. Typically, intrusions show an elongated shape with a high length/width ratio. In nature, this pattern is expected to result in magmatic and volcanic belts oblique to the rift axis and arranged en-echelon, in agreement with some selected natural examples of continental rifts (i.e. Main Ethiopian Rift) and oceanic ridges (i.e. Mohns and Reykjanes Ridges).
Active control of sound transmission through a double panel partition
NASA Astrophysics Data System (ADS)
Sas, P.; Bao, C.; Augusztinovicz, F.; Desmet, W.
1995-03-01
The feasibility of improving the insertion loss of lightweight double panel partitions by using small loudspeakers as active noise control sources inside the air gap between both panels of the partition is investigated analytically, numerically and experimentally in this paper. A theoretical analysis of the mechanisms of the fluid-structure interaction of double panel structures is presented in order to gain insight into the physical phenomena underlying the behaviour of a coupled vibro-acoustic system controlled by active methods. The analysis, based on modal coupling theory, enables one to derive some qualitative predictions concerning the potentials and limitations of the proposed approach. The theoretical analysis is valid only for geometrically simple structures. For more complex geometries, numerical simulations are required. Therefore the potential use of active noise control inside double panel structures has been analyzed by using coupled finite element and boundary element methods. To verify the conclusions drawn from the theoretical analysis and the numerical calculation and, above all, to demonstrate the potential of the proposed approach, experiments have been conducted with a laboratory set-up. The performance of the proposed approach was evaluated in terms of relative insertion loss measurements. It is shown that a considerable improvement of the insertion loss has been achieved around the lightly damped resonances of the system for the frequency range investigated (60-220 Hz).
Cluster Stability Estimation Based on a Minimal Spanning Trees Approach
NASA Astrophysics Data System (ADS)
Volkovich, Zeev (Vladimir); Barzily, Zeev; Weber, Gerhard-Wilhelm; Toledano-Kitai, Dvora
2009-08-01
Among the areas of data and text mining which are employed today in science, economy and technology, clustering theory serves as a preprocessing step in data analysis. However, many open questions still await theoretical and practical treatment; e.g., the problem of determining the true number of clusters has not been satisfactorily solved. In the current paper, this problem is addressed by the cluster stability approach. For several possible numbers of clusters we estimate the stability of partitions obtained from clustering of samples. Partitions are considered consistent if their clusters are stable. Cluster validity is measured as the total number of edges, in the clusters' minimal spanning trees, connecting points from different samples. Actually, we use the Friedman and Rafsky two-sample test statistic. The homogeneity hypothesis, of well-mingled samples within the clusters, leads to an asymptotic normal distribution of the considered statistic. Resting upon this fact, the standard score of the mentioned edge quantity is set, and the partition quality is represented by the worst cluster, corresponding to the minimal standard score value. It is natural to expect that the true number of clusters can be characterized by the empirical distribution having the shortest left tail. The proposed methodology sequentially creates the described value distribution and estimates its left-asymmetry. Numerical experiments, presented in the paper, demonstrate the ability of the approach to detect the true number of clusters.
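The edge-count statistic can be sketched as follows, simplified to a single minimum spanning tree over the pooled samples (the paper applies it within each cluster's MST): count MST edges joining points from different samples and standardize against a permutation null.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def cross_edges(points, sample_id):
    """Number of MST edges connecting points from different samples."""
    mst = minimum_spanning_tree(squareform(pdist(points))).toarray()
    i, j = np.nonzero(mst)
    return int(np.sum(sample_id[i] != sample_id[j]))

rng = np.random.default_rng(0)
pts = rng.normal(size=(60, 2))
ids = np.repeat([0, 1], 30)
obs = cross_edges(pts, ids)
null = [cross_edges(pts, rng.permutation(ids)) for _ in range(200)]
print("standard score:", (obs - np.mean(null)) / np.std(null))
```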
Brain Network Regional Synchrony Analysis in Deafness
Xu, Lei; Liang, Mao-Jin
2018-01-01
Deafness, the most common auditory disease, has greatly affected people for a long time. The major treatment for deafness is cochlear implantation (CI). However, there is still a lack of an objective and precise indicator for evaluating the effectiveness of cochlear implantation. The goal of this EEG-based study is to effectively distinguish CI children from prelingually deafened children without cochlear implantation. The proposed method is based on functional connectivity analysis, which focuses on brain network regional synchrony. Specifically, we first compute the functional connectivity between each channel pair. Then, we quantify the brain network synchrony among regions of interest (ROIs), where both intraregional synchrony and interregional synchrony are computed. Finally, the synchrony values are concatenated to form the feature vector for the SVM classifier. What is more, we develop a new ROI partition method for the 128-channel EEG recording system; both the existing ROI partition method and the proposed ROI partition method are used in the experiments. Compared with existing EEG signal classification methods, our proposed method achieved significant improvements, as large as 87.20% and 86.30%, when the existing ROI partition method and the proposed ROI partition method were used, respectively. This further demonstrates that the new ROI partition method is comparable to the existing ROI partition method. PMID:29854776
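The regional-synchrony features reduce to averaging a channel-pair connectivity matrix within and between ROIs and concatenating the results; a minimal numpy sketch, with a toy correlation matrix standing in for real EEG connectivity, follows.

```python
import numpy as np

def roi_synchrony_features(conn, rois):
    """conn: (C, C) connectivity matrix; rois: lists of channel indices."""
    feats = []
    for a in range(len(rois)):
        for b in range(a, len(rois)):
            block = conn[np.ix_(rois[a], rois[b])]
            if a == b:                            # intraregional synchrony
                iu = np.triu_indices_from(block, k=1)
                feats.append(block[iu].mean())
            else:                                 # interregional synchrony
                feats.append(block.mean())
    return np.array(feats)

conn = np.abs(np.corrcoef(np.random.default_rng(0).normal(size=(16, 500))))
print(roi_synchrony_features(conn, [list(range(8)), list(range(8, 16))]))
```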
A roadmap of clustering algorithms: finding a match for a biomedical application.
Andreopoulos, Bill; An, Aijun; Wang, Xiaogang; Schroeder, Michael
2009-05-01
Clustering is ubiquitously applied in bioinformatics with hierarchical clustering and k-means partitioning being the most popular methods. Numerous improvements of these two clustering methods have been introduced, as well as completely different approaches such as grid-based, density-based and model-based clustering. For improved bioinformatics analysis of data, it is important to match clusterings to the requirements of a biomedical application. In this article, we present a set of desirable clustering features that are used as evaluation criteria for clustering algorithms. We review 40 different clustering algorithms of all approaches and datatypes. We compare algorithms on the basis of desirable clustering features, and outline algorithms' benefits and drawbacks as a basis for matching them to biomedical applications.
Karl W. Kleiner; Kenneth F. Raffa; Richard E. Dickson
1999-01-01
Theories on allelochemical concentrations in plants are often based upon the relative carbon costs and benefits of multiple metabolic fractions. Tests of these theories often rely on measuring metabolite concentrations, but frequently overlook priorities in carbon partitioning. We conducted a pulse-labeling experiment to follow the partitioning of 14...
Baseline Architecture of ITER Control System
NASA Astrophysics Data System (ADS)
Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.
2011-08-01
The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partitioning considers both the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks are defined. Special attention is given to timing and real-time communication for distributed control. Finally we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.
Daniel, J B; Friggens, N C; van Laar, H; Ingvartsen, K L; Sauvant, D
2018-06-01
The control of nutrient partitioning is complex and affected by many factors, among them physiological state and production potential. Therefore, the current model aims to provide a dynamic framework for dairy cows that predicts a consistent set of reference performance patterns (milk component yields, body composition change, dry-matter intake) sensitive to physiological status across a range of milk production potentials (within and between breeds). Flows and partition of net energy toward maintenance, growth, gestation, body reserves and milk components are described in the model. The structure of the model is characterized by two sub-models: a regulating sub-model of homeorhetic control, which sets dynamic partitioning rules over the course of lactation, and an operating sub-model that translates this into animal performance. The regulating sub-model describes lactation as the result of three driving forces: (1) use of previously acquired resources through mobilization, (2) acquisition of new resources with a priority of partition towards milk and (3) subsequent use of resources towards body reserves gain. The dynamics of these three driving forces were adjusted separately for fat (milk and body), protein (milk and body) and lactose (milk). Milk yield is predicted from lactose and protein yields with an empirical equation developed from literature data. The model predicts desired dry-matter intake as an outcome of net energy requirements for a given dietary net energy content. The parameters controlling milk component yields and body composition changes were calibrated using two data sets in which the diet was the same for all animals. Weekly data from Holstein dairy cows were used to calibrate the model within-breed across milk production potentials. A second data set was used to evaluate the model and to calibrate it for breed differences (Holstein, Danish Red and Jersey) in the mobilization/reconstitution of body composition and in the yield of individual milk components. These calibrations showed that the model framework was able to adequately simulate milk yield, milk component yields, body composition changes and dry-matter intake throughout lactation for primiparous and multiparous cows differing in their production level.
Comparison of direct and flow integration based charge density population analyses.
Francisco, E; Martín Pendas, A; Blanco, M A; Costales, A
2007-12-06
Different exhaustive and fuzzy partitions of the molecular electron density ρ into atomic densities ρ_A are used to compute the atomic charges Q_A of a representative set of molecules. The Q_A's derived from a direct integration of ρ_A are compared to those obtained from integrating the deformation density ρ_def = ρ − ρ^0 within each atomic domain. Our analysis shows that the latter methods tend to give Q_A's similar to those of the (arbitrary) reference atomic densities ρ_A^0 used in the definition of the promolecular density, ρ^0 = Σ_A ρ_A^0. Moreover, we show that the basis set independence of these charges is a sign not of their intrinsic quality, as commonly stated, but of the practical insensitivity to the basis set of the atomic domains that are employed in this type of method.
NASA Astrophysics Data System (ADS)
Lapington, M. T.; Crudden, D. J.; Reed, R. C.; Moody, M. P.; Bagot, P. A. J.
2018-06-01
A family of novel polycrystalline Ni-based superalloys with varying Ti:Nb ratios has been created using computational alloy design techniques, and subsequently characterized using atom probe tomography and electron microscopy. Phase chemistry, elemental partitioning, and γ' character have been analyzed and compared with thermodynamic predictions created using Thermo-Calc. Phase compositions and γ' volume fraction were found to compare favorably with the thermodynamically predicted values, while predicted partitioning behavior for Ti, Nb, Cr, and Co tended to overestimate γ' preference over the γ matrix, often with opposing trends vs Nb concentration.
Various forms of indexing HDMR for modelling multivariate classification problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksu, Çağrı; Tunga, M. Alper
2014-12-10
The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems having real world data. Mostly, we do not know all possible class values in the domain of the given problem; that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the given problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real life classification problems. To test these different forms, several well-known multivariate classification problems given in the UCI Machine Learning Repository were used, and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.
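A toy stand-in for the HDMR idea, restricted to the zeroth- and first-order terms f(x) ≈ f_0 + Σ_i f_i(x_i), with component functions estimated as conditional means over discrete feature values; this illustrates the expansion, not the Indexing HDMR machinery itself.

```python
import numpy as np

def hdmr_fit(X, y):
    """Fit f0 and first-order component tables on discrete features."""
    f0 = y.mean()
    comps = [{v: y[X[:, d] == v].mean() - f0 for v in np.unique(X[:, d])}
             for d in range(X.shape[1])]
    return f0, comps

def hdmr_eval(x, f0, comps):
    return f0 + sum(c.get(v, 0.0) for c, v in zip(comps, x))

X = np.array([[0, 1], [0, 2], [1, 1], [1, 2]])
y = np.array([1.0, 2.0, 2.0, 3.0])
f0, comps = hdmr_fit(X, y)
print(hdmr_eval([0, 2], f0, comps))   # reproduces the additive structure
```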
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in 40-50% saving in overall rendering time.
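The trade-off can be captured in a back-of-envelope cost model: with G groups, loading the next volume overlaps rendering, so the steady-state time per volume per group is roughly max(io_time, render_time with P/G processors). The numbers below are illustrative, not measurements from the Paragon runs.

```python
def pipeline_time(n_volumes, total_procs, groups, io_time, render_work):
    """Idealized overall time: each group overlaps I/O with rendering."""
    render_time = render_work * groups / total_procs  # fewer procs per group
    return (n_volumes / groups) * max(io_time, render_time)

for g in (1, 2, 4, 8):
    t = pipeline_time(n_volumes=64, total_procs=256, groups=g,
                      io_time=2.0, render_work=480.0)
    print(g, "groups:", t)
```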
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
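The region-wise LS prediction step can be illustrated with a minimal sketch: for each region, fit coefficients predicting a pixel from its causal neighbours (left, up, up-left) and keep the residuals for entropy coding. The three-neighbour template is an assumption for illustration; the paper designs predictors per regular subblock or irregular subregion.

```python
import numpy as np

def ls_predictor(region):
    """region: 2-D intensity array; returns LS coefficients and residuals."""
    X, y = [], []
    for i in range(1, region.shape[0]):
        for j in range(1, region.shape[1]):
            X.append([region[i, j - 1], region[i - 1, j], region[i - 1, j - 1]])
            y.append(region[i, j])
    X, y = np.asarray(X, float), np.asarray(y, float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef        # residuals would be entropy-coded

region = np.add.outer(np.arange(8), np.arange(8)).astype(float)  # toy ramp
coef, res = ls_predictor(region)
print(coef, np.abs(res).max())
```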
Air Traffic Sector Configuration Change Frequency
NASA Technical Reports Server (NTRS)
Chatterji, Gano Broto; Drew, Michael
2009-01-01
Several techniques for partitioning airspace have been developed in the literature. The questions of whether a region of airspace created by such methods can be used with other days of traffic, and of the number of times a different partition is needed during the day, are examined in this paper. Both these aspects are examined for the Fort Worth Center airspace sectors. A Mixed Integer Linear Programming method is used with actual air traffic data of ten high-volume low-weather-delay days for creating sectors. Nine solutions were obtained for each two-hour period of the day by partitioning the center airspace into two through 18 sectors in steps of two sectors. Actual track-data were played back with the generated partitions for creating histograms of the traffic-counts. The best partition for each two-hour period was then identified based on the nine traffic-count distributions. Numbers of sectors in such partitions were analyzed to determine the number of times a different configuration is needed during the day. One to three partitions were selected for the 24-hour period, and traffic data from ten days were played back to test if the traffic-counts stayed below the threshold values associated with these partitions. Results show that these partitions are robust and can be used for longer durations than they were designed for.
Yang, Senpei; Li, Lingyi; Chen, Tao; Han, Lujia; Lian, Guoping
2018-05-14
Sebum is an important shunt pathway for transdermal permeation and targeted delivery, but there have been limited studies on its permeation properties. Here we report a measurement and modelling study of solute partition to artificial sebum. Equilibrium experiments were carried out for the sebum-water partition coefficients of 23 neutral, cationic and anionic compounds at different pH. Sebum-water partition coefficients not only depend on the hydrophobicity of the chemical but also on pH. As pH increases from 4.2 to 7.4, the partition of cationic chemicals to sebum increased rapidly. This appears to be due to increased electrostatic attraction between the cationic chemical and the fatty acids in sebum. Whereas for anionic chemicals, their sebum partition coefficients are negligibly small, which might result from their electrostatic repulsion to fatty acids. Increase in pH also resulted in a slight decrease of sebum partition of neutral chemicals. Based on the observed pH impact on the sebum-water partition of neutral, cationic and anionic compounds, a new quantitative structure-property relationship (QSPR) model has been proposed. This mathematical model considers the hydrophobic interaction and electrostatic interaction as the main mechanisms for the partition of neutral, cationic and anionic chemicals to sebum.
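One way to express the reported pH dependence is to weight neutral and cationic contributions by the Henderson-Hasselbalch ionized fraction; the sketch below does exactly that with illustrative parameters, and should not be read as the paper's fitted QSPR model. It also ignores the pH-dependent ionization of the sebum fatty acids that the study identifies as the driver of the electrostatic attraction.

```python
import numpy as np

def ionized_fraction_base(pH, pKa):
    """Fraction of a basic solute in cationic form (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

def log_d_sebum(pH, pKa, logK_neutral, logK_cation):
    f = ionized_fraction_base(pH, pKa)
    K = (1 - f) * 10 ** logK_neutral + f * 10 ** logK_cation
    return np.log10(K)

for pH in (4.2, 5.5, 7.4):   # illustrative pKa and species coefficients
    print(pH, round(log_d_sebum(pH, pKa=9.0, logK_neutral=2.0,
                                logK_cation=1.0), 3))
```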
Inference and Analysis of Population Structure Using Genetic Data and Network Theory
Greenbaum, Gili; Templeton, Alan R.; Bar-David, Shirli
2016-01-01
Clustering individuals to subpopulations based on genetic data has become commonplace in many genetic studies. Inference about population structure is most often done by applying model-based approaches, aided by visualization using distance-based approaches such as multidimensional scaling. While existing distance-based approaches suffer from a lack of statistical rigor, model-based approaches entail assumptions of prior conditions such as that the subpopulations are at Hardy-Weinberg equilibria. Here we present a distance-based approach for inference about population structure using genetic data by defining population structure using network theory terminology and methods. A network is constructed from a pairwise genetic-similarity matrix of all sampled individuals. The community partition, a partition of a network to dense subgraphs, is equated with population structure, a partition of the population to genetically related groups. Community-detection algorithms are used to partition the network into communities, interpreted as a partition of the population to subpopulations. The statistical significance of the structure can be estimated by using permutation tests to evaluate the significance of the partition’s modularity, a network theory measure indicating the quality of community partitions. To further characterize population structure, a new measure of the strength of association (SA) for an individual to its assigned community is presented. The strength of association distribution (SAD) of the communities is analyzed to provide additional population structure characteristics, such as the relative amount of gene flow experienced by the different subpopulations and identification of hybrid individuals. Human genetic data and simulations are used to demonstrate the applicability of the analyses. The approach presented here provides a novel, computationally efficient model-free method for inference about population structure that does not entail assumption of prior conditions. The method is implemented in the software NetStruct (available at https://giligreenbaum.wordpress.com/software/). PMID:26888080
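A compressed sketch of the workflow, using networkx in place of NetStruct and toy similarities: threshold a pairwise genetic-similarity matrix into a graph, detect communities, and assess the partition by comparing its modularity against randomized graphs (a simplified stand-in for the paper's permutation test).

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)
S = rng.uniform(0, 1, (30, 30)); S = (S + S.T) / 2      # toy similarity matrix

G = nx.Graph((i, j) for i in range(30) for j in range(i + 1, 30)
             if S[i, j] > 0.7)                          # similarity threshold
comms = list(greedy_modularity_communities(G))
q_obs = modularity(G, comms)

null = []
for seed in range(100):                                 # randomized null graphs
    R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=seed)
    null.append(modularity(R, list(greedy_modularity_communities(R))))
print("observed Q:", round(q_obs, 3), " p ~", np.mean(np.array(null) >= q_obs))
```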
NASA Astrophysics Data System (ADS)
Wagstaff, Kiri L.
2012-03-01
On obtaining a new data set, the researcher is immediately faced with the challenge of obtaining a high-level understanding from the observations. What does a typical item look like? What are the dominant trends? How many distinct groups are included in the data set, and how is each one characterized? Which observable values are common, and which rarely occur? Which items stand out as anomalies or outliers from the rest of the data? This challenge is exacerbated by the steady growth in data set size [11] as new instruments push into new frontiers of parameter space, via improvements in temporal, spatial, and spectral resolution, or by the desire to "fuse" observations from different modalities and instruments into a larger-picture understanding of the same underlying phenomenon. Data clustering algorithms provide a variety of solutions for this task. They can generate summaries, locate outliers, compress data, identify dense or sparse regions of feature space, and build data models. It is useful to note up front that "clusters" in this context refer to groups of items within some descriptive feature space, not (necessarily) to "galaxy clusters" which are dense regions in physical space. The goal of this chapter is to survey a variety of data clustering methods, with an eye toward their applicability to astronomical data analysis. In addition to improving the individual researcher's understanding of a given data set, clustering has led directly to scientific advances, such as the discovery of new subclasses of stars [14] and gamma-ray bursts (GRBs) [38]. All clustering algorithms seek to identify groups within a data set that reflect some observed, quantifiable structure. Clustering is traditionally an unsupervised approach to data analysis, in the sense that it operates without any direct guidance about which items should be assigned to which clusters. There has been a recent trend in the clustering literature toward supporting semisupervised or constrained clustering, in which some partial information about item assignments or other components of the resulting output are already known and must be accommodated by the solution. Some algorithms seek a partition of the data set into distinct clusters, while others build a hierarchy of nested clusters that can capture taxonomic relationships. Some produce a single optimal solution, while others construct a probabilistic model of cluster membership. More formally, clustering algorithms operate on a data set X composed of items represented by one or more features (dimensions). These could include physical location, such as right ascension and declination, as well as other properties such as brightness, color, temporal change, size, texture, and so on. Let D be the number of dimensions used to represent each item, x_i ∈ R^D. The clustering goal is to produce an organization P of the items in X that optimizes an objective function f : P → R, which quantifies the quality of solution P. Often f is defined so as to maximize similarity within a cluster and minimize similarity between clusters. To that end, many algorithms make use of a measure d : X × X → R of the distance between two items. A partitioning algorithm produces a set of clusters P = {c_1, ..., c_k} such that the clusters are nonoverlapping subsets of the data set (c_i ∩ c_j = ∅ for i ≠ j, and ∪_i c_i = X). Hierarchical algorithms produce a series of partitions P = {p_1, ..., p_n'}.
For a complete hierarchy, the number of partitions n equals the number of items in the data set; the top partition is a single cluster containing all items, and the bottom partition contains n clusters, each containing a single item. For model-based clustering, each cluster c_j is represented by a model m_j, such as the cluster center or a Gaussian distribution. The wide array of available clustering algorithms may seem bewildering, and covering all of them is beyond the scope of this chapter. Choosing among them for a particular application involves considerations of the kind of data being analyzed, algorithm runtime efficiency, and how much prior knowledge is available about the problem domain, which can dictate the nature of clusters sought. Fundamentally, the clustering method and its representation of clusters carry with them a definition of what a cluster is, and it is important that this be aligned with the analysis goals for the problem at hand. In this chapter, I emphasize this point by identifying for each algorithm the cluster representation as a model, m_j, even for algorithms that are not typically thought of as creating a "model." This chapter surveys a basic collection of clustering methods useful to any practitioner who is interested in applying clustering to a new data set. The algorithms include k-means (Section 25.2), EM (Section 25.3), agglomerative (Section 25.4), and spectral (Section 25.5) clustering, with side mentions of variants such as kernel k-means and divisive clustering. The chapter also discusses each algorithm's strengths and limitations and provides pointers to additional in-depth reading for each subject. Section 25.6 discusses methods for incorporating domain knowledge into the clustering process. The chapter concludes with a brief survey of interesting applications of clustering methods to astronomy data (Section 25.7). The chapter begins with k-means because it is both generally accessible and so widely used that understanding it can be considered a necessary prerequisite for further work in the field. EM can be viewed as a more sophisticated version of k-means that uses a generative model for each cluster and probabilistic item assignments. Agglomerative clustering is the most basic form of hierarchical clustering and provides a basis for further exploration of algorithms in that vein. Spectral clustering permits a departure from feature-vector-based clustering and can operate on data sets instead represented as affinity, or similarity, matrices—cases in which only pairwise information is known. The list of algorithms covered in this chapter is representative of those most commonly in use, but it is by no means comprehensive. There is an extensive collection of existing books on clustering that provide additional background and depth. Three early books that remain useful today are Anderberg's Cluster Analysis for Applications [3], Hartigan's Clustering Algorithms [25], and Gordon's Classification [22]. The latter covers basics on similarity measures, partitioning and hierarchical algorithms, fuzzy clustering, overlapping clustering, conceptual clustering, validation methods, and visualization or data reduction techniques such as principal components analysis (PCA), multidimensional scaling, and self-organizing maps. More recently, Jain et al. provided a useful and informative survey [27] of a variety of different clustering algorithms, including those mentioned here as well as fuzzy, graph-theoretic, and evolutionary clustering.
Everitt’s Cluster Analysis [19] provides a modern overview of algorithms, similarity measures, and evaluation methods.
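As a concrete starting point, here is a self-contained sketch of Lloyd's k-means, the first algorithm the chapter covers; each cluster's model m_j is simply its centroid, and the data are random stand-ins for feature vectors x_i ∈ R^D.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # initial models m_j
    for _ in range(iters):
        # Assignment step: each item joins its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each model m_j becomes the mean of its members.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

X = np.random.default_rng(1).normal(size=(200, 3))  # toy data, D = 3
labels, centers = kmeans(X, k=4)
```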
47 CFR 101.1415 - Partitioning and disaggregation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... American Datum (NAD83). (d) Unjust enrichment. 12 GHz licensees that received a bidding credit and... be subject to the provisions concerning unjust enrichment as set forth in § 1.2111 of this chapter...
NASA Astrophysics Data System (ADS)
Oulehle, Filip; Jones, Timothy; Burden, Annette; Evans, Chris
2013-04-01
Dissolved organic carbon (DOC) is an important component of the global carbon (C) cycle and has profound impacts on water chemistry and metabolism in lakes and rivers. Reported increases of DOC concentration in surface waters across Europe and North America have been attributed to several drivers, from changing climate and land use to eutrophication and declining acid deposition. The last of these suggests that acidic deposition suppressed the solubility of DOC, and that this historic suppression is now being reversed by reduced emissions of acidifying pollutants. We studied a set of four parallel acidification and alkalization experiments in organic-rich soils which, after three years of manipulation, have shown clear soil-solution DOC responses to acidity change. We tested whether these DOC concentration changes were related to changes in the acid/base properties of DOC. Based on laboratory determination of DOC site density (S.D., the amount of carboxylic groups per milligram DOC) and charge density (C.D., the organic acid anion concentration per milligram DOC), we found that the change in DOC soil-solution partitioning was tightly related to the change in the degree of dissociation (α = C.D./S.D.) of organic acids (R² = 0.74, p < 0.01). Carbon turnover in soil organic matter (SOM), determined by soil respiration and β-D-glucosidase enzyme activity measurements, also appears to have some impact on DOC leaching, via constraints on the actual supply of available DOC from SOM; when the turnover rate of C in SOM is low, the effect of α on DOC leaching is reduced. Thus, differences in the magnitude of DOC changes seen across different environments might be explained by interactions between physicochemical restrictions on DOC soil-solution partitioning and SOM carbon turnover effects on DOC supply.
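A toy illustration of the reported relationship, with invented numbers standing in for the laboratory measurements: the degree of dissociation α = C.D./S.D. is computed per treatment and regressed against the change in DOC partitioning.

```python
import numpy as np

site_density   = np.array([9.8, 10.1, 9.5, 10.4])   # carboxylic groups per mg DOC
charge_density = np.array([3.1,  4.6, 2.2,  5.9])   # organic anion charge per mg DOC
delta_doc      = np.array([-0.8, 0.9, -1.6, 1.9])   # change in DOC partitioning

alpha = charge_density / site_density                # degree of dissociation
slope, intercept = np.polyfit(alpha, delta_doc, 1)   # simple linear fit
r2 = np.corrcoef(alpha, delta_doc)[0, 1] ** 2        # variance explained
```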
NASA Astrophysics Data System (ADS)
Song, Lisheng; Kustas, William P.; Liu, Shaomin; Colaizzi, Paul D.; Nieto, Hector; Xu, Ziwei; Ma, Yanfei; Li, Mingsong; Xu, Tongren; Agam, Nurit; Tolk, Judy A.; Evett, Steven R.
2016-09-01
In this study, ground-measured soil and vegetation component temperatures and composite temperatures from a high-spatial-resolution thermal camera and a network of thermal-IR sensors, collected in an irrigated maize field and an irrigated cotton field, are used to assess and refine the component temperature partitioning approach of the Two-Source Energy Balance (TSEB) model. A refinement to TSEB was developed using a non-iterative approach based on the Priestley-Taylor formulation for surface temperature partitioning and on estimating soil evaporation from soil moisture observations under advective conditions (TSEB-A). This modified TSEB formulation improved the agreement between observed and modeled soil and vegetation temperatures. In addition, the TSEB-A model output of evapotranspiration (ET) and its components, evaporation (E) and transpiration (T), showed good agreement with ground observations from the stable isotopic method and eddy covariance (EC) technique in the HiWATER experiment, and with microlysimeters and a large monolithic weighing lysimeter in the BEAREX08 experiment. Differences between modeled and measured ET were less than 10% and 20% on a daytime basis for the HiWATER and BEAREX08 data sets, respectively. The TSEB-A model was found to accurately reproduce the temporal dynamics of E, T, and ET over a full growing season under the advective conditions existing for these irrigated crops located in arid/semi-arid climates. With satellite data, this TSEB-A modeling framework could potentially be used as a tool for improving water use efficiency and conservation practices in water-limited regions. However, TSEB-A requires soil moisture information, which is not currently available routinely from satellite at the field scale.
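The Priestley-Taylor step that TSEB-type models use to initialize canopy transpiration can be sketched as follows; the formulation and constants are the commonly used ones, but the flux values and the treatment of soil evaporation here are illustrative, not the TSEB-A code.

```python
import numpy as np

def priestley_taylor_canopy(Rn_canopy, T_air_C, alpha_pt=1.26, f_green=1.0,
                            gamma=0.066):  # psychrometric constant, kPa/degC
    """Canopy latent heat flux (W m-2) from the Priestley-Taylor formulation."""
    # Slope of the saturation vapour pressure curve (kPa/degC), Tetens form.
    es = 0.6108 * np.exp(17.27 * T_air_C / (T_air_C + 237.3))
    delta = 4098.0 * es / (T_air_C + 237.3) ** 2
    return alpha_pt * f_green * delta / (delta + gamma) * Rn_canopy

Rn_c, Rn_s, G = 420.0, 150.0, 60.0                       # toy flux values, W m-2
LE_canopy = priestley_taylor_canopy(Rn_c, T_air_C=28.0)  # transpiration (T) estimate
# In TSEB the soil evaporation (E) is then constrained by the soil energy
# budget, e.g. a residual of the form LE_s = Rn_s - G - H_s.
```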
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize because of the computation of the drainage area for each node, which requires a huge amount of communication when run in parallel. To overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate for handling large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
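The serial core of the drainage-area computation that drives the communication cost can be written in a few lines: each node passes its accumulated area to a single downstream receiver, so a sweep from high to low elevation suffices. The arrays are a toy stand-in for a real grid.

```python
import numpy as np

elevation = np.array([5.0, 4.0, 3.5, 2.0, 1.0])
receiver  = np.array([1,   3,   3,   4,   4])   # downstream neighbour; node 4 is the outlet
cell_area = np.ones(5)

drainage = cell_area.copy()
for node in np.argsort(-elevation):             # visit highest nodes first
    if receiver[node] != node:                  # the outlet drains to itself
        drainage[receiver[node]] += drainage[node]
# drainage[4] now holds the total upstream area of the outlet.
```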
Clustering, Seriation, and Subset Extraction of Confusion Data
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2006-01-01
The study of confusion data is a well established practice in psychology. Although many types of analytical approaches for confusion data are available, among the most common methods are the extraction of 1 or more subsets of stimuli, the partitioning of the complete stimulus set into distinct groups, and the ordering of the stimulus set. Although…
Framework for making better predictions by directly estimating variables' predictivity.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2016-12-13
We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic's predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired.
NASA Astrophysics Data System (ADS)
Abatzoglou, John T.; Ficklin, Darren L.
2017-09-01
The geographic variability in the partitioning of precipitation into surface runoff (Q) and evapotranspiration (ET) is fundamental to understanding regional water availability. The Budyko equation suggests this partitioning is strictly a function of aridity, yet observed deviations from this relationship for individual watersheds impede using the framework to model surface water balance in ungauged catchments and under future climate and land use scenarios. A set of climatic, physiographic, and vegetation metrics were used to model the spatial variability in the partitioning of precipitation for 211 watersheds across the contiguous United States (CONUS) within Budyko's framework through the free parameter ω. A generalized additive model found that four widely available variables, precipitation seasonality, the ratio of soil water holding capacity to precipitation, topographic slope, and the fraction of precipitation falling as snow, explained 81.2% of the variability in ω. The ω model applied to the Budyko equation explained 97% of the spatial variability in long-term Q for an independent set of watersheds. The ω model was also applied to estimate the long-term water balance across the CONUS for both contemporary and mid-21st century conditions. The modeled partitioning of observed precipitation to Q and ET compared favorably across the CONUS with estimates from more sophisticated land-surface modeling efforts. For mid-21st century conditions, the model simulated an increase in the fraction of precipitation used by ET across the CONUS with declines in Q for much of the eastern CONUS and mountainous watersheds across the western United States.
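Assuming the ω parameter enters through Fu's form of the Budyko curve (a common choice, though the paper should be consulted for the exact variant), the long-term partitioning follows directly from P, PET, and ω; the values below are placeholders, not the regression-model output.

```python
def fu_budyko_partition(P, PET, omega):
    """Fu's equation: ET/P = 1 + PET/P - (1 + (PET/P)**omega)**(1/omega)."""
    aridity = PET / P
    et_ratio = 1.0 + aridity - (1.0 + aridity ** omega) ** (1.0 / omega)
    ET = et_ratio * P
    return ET, P - ET          # (evapotranspiration, runoff Q)

# Toy watershed: 900 mm/yr precipitation, 1100 mm/yr potential ET.
ET, Q = fu_budyko_partition(P=900.0, PET=1100.0, omega=2.6)
```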
A Dimensionally Reduced Clustering Methodology for Heterogeneous Occupational Medicine Data Mining.
Saâdaoui, Foued; Bertrand, Pierre R; Boudet, Gil; Rouffiac, Karine; Dutheil, Frédéric; Chamoux, Alain
2015-10-01
Clustering is a set of statistical learning techniques aimed at finding structure in heterogeneous data by grouping homogeneous observations into clusters. Clustering has been applied successfully in several fields, such as medicine, biology, finance, and economics. In this paper, we introduce the notion of clustering in multifactorial data analysis problems. A case study is conducted for an occupational medicine problem with the purpose of analyzing patterns in a population of 813 individuals. To reduce the data set dimensionality, we base our approach on Principal Component Analysis (PCA), the statistical tool most commonly used in factorial analysis. However, problems in nature, especially in medicine, are often based on heterogeneous qualitative-quantitative measurements, whereas PCA only processes quantitative ones. Besides, qualitative data are originally unobservable quantitative responses that are usually binary-coded. Hence, we propose a new set of strategies allowing quantitative and qualitative data to be handled simultaneously. The principle of this approach is to project the qualitative variables onto the subspaces spanned by the quantitative ones. Subsequently, an optimal model is allocated to the resulting PCA-regressed subspaces.
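One plausible reading of the projection strategy, sketched with random placeholder data rather than the occupational-medicine records: binary-coded qualitative variables are replaced by their least-squares projections onto the span of the quantitative variables, and PCA is then run on the augmented block.

```python
import numpy as np

rng = np.random.default_rng(0)
quant = rng.normal(size=(813, 6))                  # quantitative measurements
qual = (rng.random((813, 3)) > 0.5).astype(float)  # binary-coded qualitative

# Project each qualitative column onto span(quant): fitted values of OLS.
beta, *_ = np.linalg.lstsq(quant, qual, rcond=None)
qual_proj = quant @ beta

Z = np.hstack([quant, qual_proj])
Z -= Z.mean(axis=0)                                # center before PCA
U, s, Vt = np.linalg.svd(Z, full_matrices=False)   # PCA via SVD
scores = U * s                                     # principal component scores
explained = s ** 2 / np.sum(s ** 2)                # variance ratio per component
```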
Managing Network Partitions in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif
Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. Consequently, the problem of network partitions and mergers is highly related to fault tolerance and self-management in large-scale systems, and resilience to network partitions is a crucial requirement for building any structured peer-to-peer system. Nevertheless, the problem has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect solves the problem of network partitions, as it is similar to massive node failures. Yet, the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and merger. We present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution under dynamic conditions, showing how it is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, the algorithm quickly terminates and does not clutter the network with many messages. The algorithm is flexible, as the tradeoff between message complexity and time complexity can be adjusted by a parameter.
NASA Astrophysics Data System (ADS)
Die, Qingqi; Nie, Zhiqiang; Liu, Feng; Tian, Yajun; Fang, Yanyan; Gao, Hefeng; Tian, Shulei; He, Jie; Huang, Qifei
2015-10-01
Gas and particle phase air samples were collected in summer and winter around industrial sites in Shanghai, China, to allow the concentrations, profiles, and gas-particle partitioning of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) and dioxin-like polychlorinated biphenyls (dl-PCBs) to be determined. The total 2,3,7,8-substituted PCDD/F and dl-PCB toxic equivalent (TEQ) concentrations were 14.2-182 fg TEQ/m3 (mean 56.8 fg TEQ/m3) in summer and 21.9-479 fg TEQ/m3 (mean 145 fg TEQ/m3) in winter. The PCDD/Fs tended to be predominantly in the particulate phase, while the dl-PCBs were predominantly found in the gas phase, and the proportions of all of the PCDD/F and dl-PCB congeners in the particle phase increased as the temperature decreased. The logarithms of the gas-particle partition coefficients correlated well with the subcooled liquid vapor pressures of the PCDD/Fs and dl-PCBs for most of the samples. Gas-particle partitioning of the PCDD/Fs deviated from equilibrium in both summer and winter close to local sources, and the Junge-Pankow model and predictions made using a model based on the octanol-air partition coefficient fitted the measured particulate PCDD/F fractions well, indicating that absorption and adsorption mechanisms both contributed to the partitioning process. However, gas-particle equilibrium of the dl-PCBs was reached more easily in winter than in summer. The Junge-Pankow model predictions fitted the dl-PCB data better than did the predictions made using the model based on the octanol-air partition coefficient, indicating that the adsorption mechanism made the dominant contribution to the partitioning process.
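The Junge-Pankow model referenced above has a compact closed form; the sketch below uses the commonly quoted constant c and an urban-like aerosol surface area, both of which should be treated as assumptions.

```python
def junge_pankow_fraction(p_L_pa, theta_cm2_per_cm3=1.1e-5, c_pa_cm=17.2):
    """Particulate fraction phi = c*theta / (p_L + c*theta).

    p_L_pa: subcooled liquid vapour pressure (Pa);
    theta: aerosol surface area per volume of air (cm2/cm3).
    """
    ct = c_pa_cm * theta_cm2_per_cm3
    return ct / (p_L_pa + ct)

# A low-volatility congener sits mostly on particles:
phi = junge_pankow_fraction(p_L_pa=1.0e-4)   # ~0.65 with these defaults
```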
NASA Astrophysics Data System (ADS)
Lowe, Douglas; Topping, David; McFiggans, Gordon
2017-04-01
Gas to particle partitioning of atmospheric compounds occurs through disequilibrium mass transfer rather than through instantaneous equilibrium. However, it is common to treat only the inorganic compounds as partitioning dynamically, whilst organic compounds, represented by the Volatility Basis Set (VBS), are partitioned instantaneously. In this study we implement a more realistic dynamic partitioning of organic compounds in a regional framework and assess the impact on aerosol mass and microphysics. It is also common to assume condensed phase water is only associated with inorganic components. We thus also assess sensitivity to assuming all organics are hygroscopic according to their prescribed molecular weight. For this study we use WRF-Chem v3.4.1, focusing on anthropogenic-dominated North-Western Europe. Gas-phase chemistry is represented using CBM-Z, whilst aerosol dynamics are simulated using the 8-section MOSAIC scheme, including a 9-bin VBS treatment of organic aerosol. Results indicate that predicted mass loadings can vary significantly. Without gas phase ageing of higher volatility compounds, dynamic partitioning always results in lower mass loadings downwind of emission sources. The inclusion of condensed phase water in both partitioning models increases the predicted PM mass, resulting from a larger contribution from higher volatility organics, if present. If gas phase ageing of VBS compounds is allowed to occur in a dynamic model, this can often lead to higher predicted mass loadings, contrary to expected behaviour from a simple non-reactive gas phase box model. As descriptions of aerosol phase processes improve within regional models, the baseline descriptions of partitioning should retain the ability to treat dynamic partitioning of organic compounds. Using our simulations, we discuss whether derived sensitivities to aerosol processes in existing models may be inherently biased. This work was supported by the Natural Environment Research Council within the RONOCO (NE/F004656/1) and CCN-Vol (NE/L007827/1) projects.
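The instantaneous-equilibrium VBS baseline that the dynamic scheme is compared against can be sketched as a fixed-point iteration; the 9-bin C* ladder and loadings below are illustrative, not the WRF-Chem configuration.

```python
import numpy as np

c_star = 10.0 ** np.arange(-2, 7)          # saturation concentrations, ug/m3 (9 bins)
c_total = np.full(9, 0.5)                  # total organic mass per bin, ug/m3

c_oa = 1.0                                 # initial guess for organic aerosol mass
for _ in range(200):
    xi = 1.0 / (1.0 + c_star / c_oa)       # condensed-phase fraction per bin
    c_oa_new = np.sum(c_total * xi)        # aerosol mass implied by those fractions
    if abs(c_oa_new - c_oa) < 1e-9:        # converged fixed point
        break
    c_oa = c_oa_new
```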
NASA Astrophysics Data System (ADS)
Li, Y.-F.; Ma, W.-L.; Yang, M.
2015-02-01
Gas/particle (G/P) partitioning of semi-volatile organic compounds (SVOCs) is an important process that primarily governs their atmospheric fate, long-range atmospheric transport, and their routes of entering the human body. All previous studies on this issue are hypothetically based on equilibrium conditions, the results of which do not predict results from monitoring studies well in most cases. In this study, a steady-state model instead of an equilibrium-state model for the investigation of the G/P partitioning behavior of polybrominated diphenyl ethers (PBDEs) was established, and an equation for calculating the partition coefficients under steady state (KPS) of PBDEs (log KPS = log KPE + log α) was developed, in which an equilibrium term (log KPE = log KOA + log fOM − 11.91, where fOM is the organic matter content of the particles) and a non-equilibrium term (log α, caused by dry and wet deposition of particles), both being functions of log KOA (the octanol-air partition coefficient), are included. It was found that equilibrium is a special case of steady state in which the non-equilibrium term equals zero. A criterion to classify the equilibrium and non-equilibrium status of PBDEs was also established using two threshold values of log KOA, log KOA1 and log KOA2, which divide the range of log KOA into three domains: equilibrium, non-equilibrium, and maximum partition. Accordingly, two threshold values of temperature t, tTH1 (when log KOA = log KOA1) and tTH2 (when log KOA = log KOA2), were identified, which divide the temperature range into the same three domains for each PBDE congener. We predicted the existence of the maximum partition domain (where log KPS reaches a maximum constant of −1.53) that every PBDE congener can reach when log KOA ≥ log KOA2, or t ≤ tTH2. The novel equation developed in this study was applied to predict the G/P partition coefficients of PBDEs for our Chinese persistent organic pollutants (POPs) Soil and Air Monitoring Program, Phase 2 (China-SAMP-II), and for other monitoring programs worldwide, including in Asia, Europe, North America, and the Arctic, and the results matched well with all the monitoring data, except those obtained at e-waste sites due to the unpredictable PBDE emissions at these sites. This study provided evidence that the newly developed steady-state-based equation is superior to the equilibrium-state-based equation that has been used in describing the G/P partitioning behavior over decades. We suggest that investigation of G/P partitioning behavior for PBDEs should be based on steady state, not equilibrium state; equilibrium is just a special case of steady state in which non-equilibrium factors can be ignored. We also believe that our new equation provides a useful tool for environmental scientists in both monitoring and modeling research on G/P partitioning of PBDEs and can be extended to predict G/P partitioning behavior for other SVOCs as well.
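A sketch of the steady-state coefficient defined above; the non-equilibrium term log α is taken as an input because its full deposition-dependent expression is given only in the paper, and fOM is an assumed value.

```python
import numpy as np

def log_kps(log_koa, f_om=0.2, log_alpha=0.0, log_kps_max=-1.53):
    """log KPS = log KPE + log alpha, capped at the maximum-partition value.

    log KPE = log KOA + log10(fOM) - 11.91 is the equilibrium term;
    log_alpha stands in for the dry/wet-deposition non-equilibrium term.
    """
    log_kpe = log_koa + np.log10(f_om) - 11.91
    return np.minimum(log_kpe + log_alpha, log_kps_max)

log_koa = np.array([9.5, 11.0, 12.5, 14.0])   # a range of PBDE congeners
print(log_kps(log_koa))                       # high-KOA congeners hit the cap
```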
ERIC Educational Resources Information Center
Saxton, Matthew; Cakir, Kadir
2006-01-01
Factors affecting performance on base-10 tasks were investigated in a series of four studies with a total of 453 children aged 5-7 years. Training in counting-on was found to enhance child performance on base-10 tasks (Studies 2, 3, and 4), while prior knowledge of counting-on (Study 1), trading (Studies 1 and 3), and partitioning (Studies 1 and…
Baum, K. G.; Menezes, G.; Helguera, M.
2011-01-01
Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256³ voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.
Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Bloem, Michael J.
2014-01-01
In air traffic management systems, airspace is partitioned into regions in part to distribute the tasks associated with managing air traffic among different systems and people. These regions, as well as the systems and people allocated to each, are changed dynamically so that air traffic can be safely and efficiently managed. It is expected that new air traffic control systems will enable greater flexibility in how airspace is partitioned and how resources are allocated to airspace regions. In this talk, I will begin by providing an overview of some previous work and open questions in Dynamic Airspace Configuration research, which is concerned with how to partition airspace and assign resources to regions of airspace. For example, I will introduce airspace partitioning algorithms based on clustering, integer programming optimization, and computational geometry. I will conclude by discussing the development of a tablet-based tool that is intended to help air traffic controller supervisors configure airspace and controllers in current operations.
Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
2016-06-23
This paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
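The partitioning idea can be illustrated on a scalar model problem (this is a first-order IMEX Euler toy, not the paper's high-order additive Runge-Kutta scheme): the stiff "acoustic-like" term is treated implicitly and the slow "advective-like" term explicitly, so the step size is set by the slow scale.

```python
# Model problem: u' = lam_fast*u + g(u), with lam_fast stiff and g slow.
lam_fast = -1.0e4                      # fast (acoustic-like) linear term
g = lambda u: -0.5 * u                 # slow (advective-like) term

u, dt = 1.0, 0.01                      # dt resolves only the slow scale
for _ in range(100):
    # Implicit in the fast term, explicit in g:
    #   u_new = u + dt*(lam_fast*u_new + g(u))
    u = (u + dt * g(u)) / (1.0 - dt * lam_fast)
# Stable despite dt*|lam_fast| = 100, which would blow up an explicit scheme.
```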
NASA Astrophysics Data System (ADS)
Pankow, James F.
Gas-particle partitioning is examined using a partitioning constant Kp = (F/TSP)/A, where F (ng m⁻³) and A (ng m⁻³) are the particulate-associated and gas-phase concentrations, respectively, and TSP is the total suspended particulate matter level (μg m⁻³). Compound-dependent values of Kp depend on temperature (T) according to Kp = mp/T + bp. Limitations in data quality can cause errors in estimates of mp and bp obtained by simple linear regression (SLR). However, within a group of similar compounds, the bp values will be similar. By pooling data, an improved set of mp and a single bp can be obtained by common y-intercept regression (CYIR). SLR estimates for mp and bp for polycyclic aromatic hydrocarbons (PAHs) sorbing to urban Osaka particulate matter are available (Yamasaki et al., 1982, Envir. Sci. Technol. 16, 189-194), as are CYIR estimates for the same particulate matter (Pankow, 1991, Atmospheric Environment 25A, 2229-2239). In this work, a comparison was conducted of the ability of these two sets of mp and bp to predict A/F ratios for PAHs based on measured T and TSP values for data obtained in other urban locations, specifically: (1) in and near the Baltimore Harbor Tunnel by Benner (1988, Ph.D. thesis, University of Maryland) and Benner et al. (1989, Envir. Sci. Technol. 23, 1269-1278); and (2) in Chicago by Cotham (1990, Ph.D. thesis, University of South Carolina). In general, the CYIR estimates for mp and bp obtained for Osaka particulate matter were found to be at least as reliable, and for some compounds more reliable, than their SLR counterparts in predicting gas-particle ratios for PAHs. This result provides further evidence of the utility of the CYIR approach in quantitating the dependence of log Kp values on 1/T.
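CYIR amounts to one pooled least-squares fit with per-compound slopes and a shared intercept. The sketch below shows that structure; the (1/T, log Kp) values are fabricated for illustration.

```python
import numpy as np

data = {  # compound -> (1/T values in 1/K, log Kp values)
    "fluoranthene": (np.array([0.0033, 0.0034, 0.0036]),
                     np.array([-4.1, -3.7, -3.0])),
    "pyrene":       (np.array([0.0033, 0.0035, 0.0036]),
                     np.array([-3.9, -3.2, -2.8])),
}

names = list(data)
x = np.concatenate([data[c][0] for c in names])
y = np.concatenate([data[c][1] for c in names])

A = np.zeros((len(x), len(names) + 1))
A[:, -1] = 1.0                                   # shared intercept column (b_p)
row = 0
for j, c in enumerate(names):
    n = len(data[c][0])
    A[row:row + n, j] = data[c][0]               # per-compound slope column (m_p)
    row += n

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
slopes, intercept = coef[:-1], coef[-1]          # one m_p per compound, common b_p
```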
Knowles, L Lacey; Huang, Huateng; Sukumaran, Jeet; Smith, Stephen A
2018-03-01
Discordant gene trees are commonly encountered when sequences from thousands of loci are used to estimate phylogenetic relationships. Several processes contribute to this discord. Yet, we have no methods that jointly model different sources of conflict when estimating phylogenies. An alternative to analyzing entire genomes or all the sequenced loci is to identify a subset of loci for phylogenetic analysis. If we can identify data partitions that are most likely to reflect descent from a common ancestor (i.e., discordant loci that indeed reflect incomplete lineage sorting [ILS], as opposed to some other process, such as lateral gene transfer [LGT]), we can analyze this subset using powerful coalescent-based species-tree approaches. Test data sets were simulated in which discord among loci could arise from ILS and LGT. Data sets were analyzed using the newly developed program CLASSIPHY (Huang et al.) to assess whether our ability to distinguish the cause of discord among loci varied when ILS and LGT occurred in the recent versus deep past, and whether the accuracy of these inferences was affected by the mutational process. We show that the accuracy of probabilistic classification of individual loci by the cause of discord differed when ILS and LGT events occurred more recently compared with the distant past, and that the signal-to-noise ratio arising from the mutational process contributes to difficulties in inferring LGT data partitions. We discuss our findings in terms of the promise and limitations of identifying subsets of loci for species-tree inference that will not violate the underlying coalescent model (i.e., data partitions in which ILS, and not LGT, contributes to discord). We also discuss the empirical implications of our work given the many recalcitrant nodes in the tree of life (e.g., origins of angiosperms, amniotes, or Neoaves) and recent arguments for concatenating loci.
Integration of domain and resource-based reasoning for real-time control in dynamic environments
NASA Technical Reports Server (NTRS)
Morgan, Keith; Whitebread, Kenneth R.; Kendus, Michael; Cromarty, Andrew S.
1993-01-01
A real-time software controller that successfully integrates domain-based and resource-based control reasoning to perform task execution in a dynamically changing environment is described. The design of the controller is based on the concept of partitioning the process to be controlled into a set of tasks, each of which achieves some process goal. It is assumed that, in general, there are multiple ways (tasks) to achieve a goal. The controller dynamically determines current goals and their current criticality, choosing and scheduling tasks to achieve those goals in the time available. It incorporates rule-based goal reasoning, a TMS-based criticality propagation mechanism, and a real-time scheduler. The controller has been used to build a knowledge-based situation assessment system that formed a major component of a real-time, distributed, cooperative problem solving system built under DARPA contract. It is also being employed in other applications now in progress.
A partitioned correlation function interaction approach for describing electron correlation in atoms
NASA Astrophysics Data System (ADS)
Verdebout, S.; Rynkun, P.; Jönsson, P.; Gaigalas, G.; Froese Fischer, C.; Godefroid, M.
2013-04-01
The traditional multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) methods are based on a single orthonormal orbital basis. For atoms with many closed core shells, or complicated shell structures, a large orbital basis is needed to saturate the different electron correlation effects such as valence, core-valence and correlation within the core shells. The large orbital basis leads to massive configuration state function (CSF) expansions that are difficult to handle, even on large computer systems. We show that it is possible to relax the orthonormality restriction on the orbital basis and break down the originally very large calculations into a series of smaller calculations that can be run in parallel. Each calculation determines a partitioned correlation function (PCF) that accounts for a specific correlation effect. The PCFs are built on optimally localized orbital sets and are added to a zero-order multireference (MR) function to form a total wave function. The expansion coefficients of the PCFs are determined from a low dimensional generalized eigenvalue problem. The interaction and overlap matrices are computed using a biorthonormal transformation technique (Verdebout et al 2010 J. Phys. B: At. Mol. Phys. 43 074017). The new method, called partitioned correlation function interaction (PCFI), converges rapidly with respect to the orbital basis and gives total energies that are lower than the ones from ordinary MCHF and CI calculations. The PCFI method is also very flexible when it comes to targeting different electron correlation effects. Focusing our attention on neutral lithium, we show that by dedicating a PCF to the single excitations from the core, spin- and orbital-polarization effects can be captured very efficiently, leading to highly improved convergence patterns for hyperfine parameters compared with MCHF calculations based on a single orthogonal radial orbital basis. By collecting separately optimized PCFs to correct the MR function, the variational degrees of freedom in the relative mixing coefficients of the CSFs building the PCFs are inhibited. The constraints on the mixing coefficients lead to small off-sets in computed properties such as hyperfine structure, isotope shift and transition rates, with respect to the correct values. By (partially) deconstraining the mixing coefficients one converges to the correct limits and keeps the tremendous advantage of improved convergence rates that comes from the use of several orbital sets. Reducing ultimately each PCF to a single CSF with its own orbital basis leads to a non-orthogonal CI approach. Various perspectives of the new method are given.
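The low-dimensional step described above reduces to a generalized symmetric eigenvalue problem. A sketch, assuming the interaction matrix H and overlap matrix S between the MR function and the PCFs have already been computed via the biorthonormal transformation (here they are random symmetric stand-ins):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5                                     # e.g. the MR function plus 4 PCFs
H = rng.normal(size=(n, n)); H = (H + H.T) / 2.0            # interaction matrix
B = rng.normal(size=(n, n)); S = B @ B.T + n * np.eye(n)    # SPD overlap matrix

# Generalized eigenproblem H c = E S c; eigh handles the non-orthonormal basis.
E, C = eigh(H, S)
ground_energy, coeffs = E[0], C[:, 0]     # lowest root and PCF mixing coefficients
```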
NASA Astrophysics Data System (ADS)
Odabasi, Mustafa; Cetin, Eylem; Sofuoglu, Aysun
Octanol-air partition coefficients (KOA) for 14 polycyclic aromatic hydrocarbons (PAHs) were determined as a function of temperature using the gas chromatographic retention time method. log KOA values at 25 °C ranged over six orders of magnitude, between 6.34 (acenaphthylene) and 12.59 (dibenz[a,h]anthracene). The determined KOA values were within a factor of 0.7 (dibenz[a,h]anthracene) to 15.1 (benz[a]anthracene) of values calculated as the ratio of the octanol-water partition coefficient to the dimensionless Henry's law constant. Supercooled liquid vapor pressures (PL) of 13 PAHs were also determined using the gas chromatographic retention time technique. Activity coefficients in octanol calculated using KOA and PL ranged between 3.2 and 6.2, indicating near-ideal solution behavior. Atmospheric concentrations measured in this study in Izmir, Turkey, were used to investigate the partitioning of PAHs between particle and gas phases. Experimental gas-particle partition coefficients (Kp) were compared to the predictions of the KOA absorption and KSA (soot-air partition coefficient) models. The octanol-based absorptive partitioning model predicted lower partition coefficients, especially for relatively volatile PAHs. Ratios of measured to modeled partition coefficients ranged between 1.1 and 15.5 (4.5±6.0, average±SD) for the KOA model. KSA model predictions were relatively better, with measured-to-modeled ratios between 0.6 and 5.6 (2.3±2.7, average±SD).
A supermatrix analysis of genomic, morphological, and paleontological data from crown Cetacea.
Geisler, Jonathan H; McGowen, Michael R; Yang, Guang; Gatesy, John
2011-04-25
Cetacea (dolphins, porpoises, and whales) is a clade of aquatic species that includes the most massive, deepest diving, and largest brained mammals. Understanding the temporal pattern of diversification in the group as well as the evolution of cetacean anatomy and behavior requires a robust and well-resolved phylogenetic hypothesis. Although a large body of molecular data has accumulated over the past 20 years, DNA sequences of cetaceans have not been directly integrated with the rich, cetacean fossil record to reconcile discrepancies among molecular and morphological characters. We combined new nuclear DNA sequences, including segments of six genes (~2800 basepairs) from the functionally extinct Yangtze River dolphin, with an expanded morphological matrix and published genomic data. Diverse analyses of these data resolved the relationships of 74 taxa that represent all extant families and 11 extinct families of Cetacea. The resulting supermatrix (61,155 characters) and its sub-partitions were analyzed using parsimony methods. Bayesian and maximum likelihood (ML) searches were conducted on the molecular partition, and a molecular scaffold obtained from these searches was used to constrain a parsimony search of the morphological partition. Based on analysis of the supermatrix and model-based analyses of the molecular partition, we found overwhelming support for 15 extant clades. When extinct taxa are included, we recovered trees that are significantly correlated with the fossil record. These trees were used to reconstruct the timing of cetacean diversification and the evolution of characters shared by "river dolphins," a non-monophyletic set of species according to all of our phylogenetic analyses. The parsimony analysis of the supermatrix and the analysis of morphology constrained to fit the ML/Bayesian molecular tree yielded broadly congruent phylogenetic hypotheses. In trees from both analyses, all Oligocene taxa included in our study fell outside crown Mysticeti and crown Odontoceti, suggesting that these two clades radiated in the late Oligocene or later, contra some recent molecular clock studies. Our trees also imply that many character states shared by river dolphins evolved in their oceanic ancestors, contradicting the hypothesis that these characters are convergent adaptations to fluvial habitats.
The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations
Mitchell, William F.
1998-01-01
Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given. PMID:28009355
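A simplified sketch of the idea behind a refinement-tree partition (not Mitchell's algorithm itself): leaves of the refinement tree are visited in tree order, which keeps nearby elements together, and the sequence is cut into parts of roughly equal weight. The tree and weights are artificial.

```python
def leaves_in_order(tree, node=0):
    """Yield leaf ids by depth-first traversal; `tree` maps node -> children."""
    kids = tree.get(node, [])
    if not kids:
        yield node
    for k in kids:
        yield from leaves_in_order(tree, k)

def partition(leaf_weights, order, nparts):
    """Cut the ordered leaf sequence into nparts chunks of near-equal weight."""
    total = sum(leaf_weights[l] for l in order)
    target = total / nparts
    parts, acc = [[]], 0.0
    for leaf in order:
        parts[-1].append(leaf)
        acc += leaf_weights[leaf]
        if acc >= target * len(parts) and len(parts) < nparts:
            parts.append([])
    return parts

tree = {0: [1, 2], 1: [3, 4], 2: [5, 6], 4: [7, 8]}     # toy refinement tree
weights = {3: 1.0, 7: 1.0, 8: 1.0, 5: 1.0, 6: 1.0}      # work per leaf element
order = list(leaves_in_order(tree))
print(partition(weights, order, nparts=2))              # [[3, 7, 8], [5, 6]]
```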
A New Approach to Parallel Dynamic Partitioning for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.
1999-01-01
Classical mesh partitioning algorithms were designed for rather static situations, and their straightforward application in a dynamical framework may lead to unsatisfactory results, e.g., excessive data migration among processors. Furthermore, special attention should be paid to their amenability to parallelization. In this paper, a novel parallel method for the dynamic partitioning of adaptive unstructured meshes is described. It is based on a linear representation of the mesh using self-avoiding walks.
NASA Technical Reports Server (NTRS)
Saltsman, J. F.; Halford, G. R.
1979-01-01
The method of strainrange partitioning is used to predict the cyclic lives of the Metal Properties Council's long time creep-fatigue interspersion tests of several steel alloys. Comparisons are made with predictions based upon the time- and cycle-fraction approach. The method of strainrange partitioning is shown to give consistently more accurate predictions of cyclic life than is given by the time- and cycle-fraction approach.
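For orientation, a hedged sketch of how a strainrange-partitioning life prediction is typically assembled from partitioned strainrange components via an interaction damage rule; the life-relation coefficients below are invented placeholders, not Metal Properties Council values.

```python
def life(delta_eps, A, alpha):
    """Inelastic strainrange-life relation N_ij = A * (delta_eps)**alpha."""
    return A * delta_eps ** alpha

# Partitioned strainrange fractions for one cycle type: PP, PC, CP, CC.
fractions = {"pp": 0.6, "pc": 0.1, "cp": 0.2, "cc": 0.1}   # F_ij, sum to 1
coeffs = {"pp": (0.25, -1.7), "pc": (0.12, -1.6),
          "cp": (0.08, -1.6), "cc": (0.15, -1.5)}          # hypothetical (A, alpha)
delta_eps_in = 0.01                                        # total inelastic strainrange

# Interaction damage rule: 1/N_pred = sum_ij F_ij / N_ij(delta_eps_in).
inv_N = sum(F / life(delta_eps_in, *coeffs[k]) for k, F in fractions.items())
N_pred = 1.0 / inv_N
```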
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.
The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on borders when dealing with the numerical solution of radiative transfer problems. The radiative transfer equation being integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based on either a quadrature or an angular discretization, making the use of such a method straightforward for the state equation. Also, the diffuse contribution of reflection on borders is usually well taken into account. However, the calculation of accurate partition ratio coefficients is much more tricky for the specular condition applied on arbitrary geometrical borders. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated by comparisons with analytical solutions before being applied to complex geometries.
Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca
2011-01-01
A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (ISET). The predictive ability of four log P calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach that considers the whole molecule using topological indices, the MlogP method. The efficiency and applicability of the ISET in calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability, and good predictive ability for an external group of compounds, of the same order as the widely used models based on the fragmental method, ClogP, and the atomic-contribution method, AlogP, which are among the most used methods for predicting log P. PMID:22072945
Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu
2016-06-01
Octanol/water (K(OW)) and octanol/air (K(OA)) partition coefficients are two important physicochemical properties of organic substances. In current practice, K(OW) and K(OA) values of some polychlorinated biphenyls (PCBs) are measured using the generator column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative for replacing or reducing experimental steps in the determination of K(OW) and K(OA). In this paper, two different methods, i.e., multiple linear regression based on Dragon descriptors and hologram quantitative structure-activity relationship, were used to predict generator-column-derived log K(OW) and log K(OA) values of PCBs. The predictive ability of the developed models was validated using a test set, and the performances of all generated models were compared with those of three previously reported models. All results indicated that the proposed models were robust and satisfactory and can thus be used as alternative models for the rapid assessment of the K(OW) and K(OA) of PCBs.
NASA Astrophysics Data System (ADS)
Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; L Hansen, John H.
2013-12-01
The ability to detect and organize "hot spots" representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study, which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space that are considered both exciting and rare. The proposed measure is used to rank-order the partitioned segments, compressing the overall video sequence into a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow-motion replay, scene cut density, and motion activity as features. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit
NASA Technical Reports Server (NTRS)
Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete;
1998-01-01
Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.
Schmickl, Thomas; Karsai, Istvan
2014-01-01
We develop a model to produce plausible patterns of task partitioning in the ponerine ant Ectatomma ruidum based on the availability of living prey and prey corpses. The model is based on the organizational capabilities of a "common stomach" through which the colony uses the availability of a natural (food) substance as a major communication channel to regulate the income and expenditure of that same substance. This communication channel also has a central role in regulating task partitioning of collective hunting behavior in a supply-and-demand-driven manner. Our model shows that task partitioning of collective hunting behavior in E. ruidum can be explained by regulation through a common stomach system. The saturation of the common stomach provides accessible information to individual ants, which can adjust their hunting behavior accordingly by engaging in or abandoning stinging or transporting tasks. The common stomach is able to establish and stabilize an effective mix of workforce to exploit the prey population and to transport food into the nest. This system is also able to react to external perturbations, such as changes in prey density or the accumulation of food in the nest, in a decentralized, homeostatic way. Under stable conditions the system develops toward an equilibrium in colony size and prey density. Our model shows that organization of work through a common stomach system can allow Ectatomma ruidum to collectively forage for food in a robust, reactive, and reliable way. The model is compared to previously published models that followed a different modeling approach. Based on our model analysis we also suggest a series of experiments for which our model gives plausible predictions. These predictions are used to formulate a set of testable hypotheses that should be investigated empirically in future experimentation. PMID:25493558
A hybrid segmentation method for partitioning the liver based on 4D DCE-MR images
NASA Astrophysics Data System (ADS)
Zhang, Tian; Wu, Zhiyi; Runge, Jurgen H.; Lavini, Cristina; Stoker, Jaap; van Gulik, Thomas; Cieslak, Kasia P.; van Vliet, Lucas J.; Vos, Frans M.
2018-03-01
The Couinaud classification of hepatic anatomy partitions the liver into eight functionally independent segments. Detection and segmentation of the hepatic vein (HV), portal vein (PV), and inferior vena cava (IVC) play an important role in the subsequent delineation of the liver segments. To facilitate pharmacokinetic modeling of the liver based on the same data, a 4D DCE-MR scan protocol was selected. This yields images with high temporal resolution but low spatial resolution. Since the liver's vasculature consists of many tiny branches, segmentation of these images is challenging. The proposed framework starts with registration of the 4D DCE-MRI series, followed by region growing from manually annotated seeds in the main branches of key blood vessels in the liver. It calculates the Pearson correlation between the time-intensity curves (TICs) of a seed and all voxels. A maximum correlation map for each vessel is obtained by combining the correlation maps for all branches of the same vessel through a per-voxel maximum selection. The maximum correlation map is incorporated in a level set scheme to individually delineate the main vessels. Subsequently, the eight liver segments are segmented based on three vertical intersecting planes fit through the three skeleton branches of the HV and the IVC's center of mass, as well as a horizontal plane fit through the skeleton of the PV. Our vessel delineation is more accurate than the results of two state-of-the-art techniques on five subjects in terms of the average symmetric surface distance (ASSD) and modified Hausdorff distance (MHD). Furthermore, the proposed liver partitioning achieves large overlap with manual reference segmentations (expressed as the Dice coefficient) in all but a small minority of segments (mean values between 87% and 94% for segments 2-8). The lower mean overlap for segment 1 (72%) is due to the limited spatial resolution of our DCE-MR scan protocol.
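The correlation step of the framework can be sketched directly: z-score every voxel's time-intensity curve, take the inner product with each seed's curve, and combine seeds by a per-voxel maximum. Array shapes and seed locations are placeholders.

```python
import numpy as np

def correlation_map(volume4d, seeds):
    """volume4d: (T, Z, Y, X) DCE series; seeds: list of (z, y, x) voxels."""
    T = volume4d.shape[0]
    flat = volume4d.reshape(T, -1)
    flat = (flat - flat.mean(0)) / (flat.std(0) + 1e-8)     # z-score each TIC
    best = np.full(flat.shape[1], -1.0)
    for z, y, x in seeds:
        seed_tic = flat[:, np.ravel_multi_index((z, y, x), volume4d.shape[1:])]
        best = np.maximum(best, seed_tic @ flat / T)        # Pearson r per voxel
    return best.reshape(volume4d.shape[1:])                 # max-correlation map

vol = np.random.default_rng(0).normal(size=(20, 8, 16, 16))
corr = correlation_map(vol, seeds=[(4, 8, 8), (2, 3, 5)])
```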
SAMPLING AND ANALYSIS OF SEMIVOLATILE AEROSOLS
Denuder-based samplers can effectively separate semivolatile gases from particles and 'freeze' the partitioning in time. Conversely, samples collected on filters partition mass according to the conditions of the influent airstream, which may change over time. As a result thes...
TriageTools: tools for partitioning and prioritizing analysis of high-throughput sequencing data.
Fimereli, Danai; Detours, Vincent; Konopka, Tomasz
2013-04-01
High-throughput sequencing is becoming a popular research tool but carries with it considerable costs in terms of computation time, data storage and bandwidth. Meanwhile, some research applications focusing on individual genes or pathways do not necessitate processing of a full sequencing dataset. Thus, it is desirable to partition a large dataset into smaller, manageable, but relevant pieces. We present a toolkit for partitioning raw sequencing data that includes a method for extracting reads that are likely to map onto pre-defined regions of interest. We show the method can be used to extract information about genes of interest from DNA or RNA sequencing samples in a fraction of the time and disk space required to process and store a full dataset. We report speedup factors between 2.6 and 96, depending on settings and samples used. The software is available at http://www.sourceforge.net/projects/triagetools/.
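A minimal way to triage raw reads against regions of interest is k-mer matching, sketched below; this is an illustrative stand-in rather than TriageTools' actual algorithm, and k and min_hits are arbitrary choices:

```python
# Keep a read if enough of its k-mers occur in the regions of interest.
def build_kmer_set(region_seqs, k=21):
    kmers = set()
    for seq in region_seqs:
        for i in range(len(seq) - k + 1):
            kmers.add(seq[i:i + k])
    return kmers

def triage(read, kmers, k=21, min_hits=3):
    hits = sum(read[i:i + k] in kmers for i in range(len(read) - k + 1))
    return hits >= min_hits

roi = build_kmer_set(["ACGT" * 50])    # toy region of interest
print(triage("ACGT" * 10, roi))        # True: the read matches the region
```

Reads failing the test are set aside, so only the relevant fraction of the dataset is aligned and stored.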
Anonymous quantum nonlocality.
Liang, Yeong-Cherng; Curchod, Florian John; Bowles, Joseph; Gisin, Nicolas
2014-09-26
We investigate the phenomenon of anonymous quantum nonlocality, which refers to the existence of multipartite quantum correlations that are nonlocal in the sense of violating a Bell inequality, but whose nonlocality, owing to biseparability with respect to all bipartitions, is seemingly nowhere to be found. Such correlations can be produced by nonlocal collaboration involving definite subset(s) of parties, but to an outsider the identity of these nonlocally correlated parties is completely anonymous. For all n≥3, we present an example of an n-partite quantum correlation exhibiting anonymous nonlocality derived from the n-partite Greenberger-Horne-Zeilinger state. An explicit biseparable decomposition of these correlations is provided for any partitioning of the n parties into two groups. Two applications of these anonymous Greenberger-Horne-Zeilinger correlations in the device-independent setting are discussed: multipartite secret sharing between any two groups of parties and bipartite quantum key distribution that is robust against nearly arbitrary leakage of information.
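For reference, the n-partite Greenberger-Horne-Zeilinger state from which these correlations are derived has the standard form

\[
|\mathrm{GHZ}_n\rangle \;=\; \frac{1}{\sqrt{2}}\left(|0\rangle^{\otimes n} + |1\rangle^{\otimes n}\right), \qquad n \ge 3.
\]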
Genetic reinforcement learning through symbiotic evolution for fuzzy controller design.
Juang, C F; Lin, J Y; Lin, C T
2000-01-01
An efficient genetic reinforcement learning algorithm for designing fuzzy controllers is proposed in this paper. The genetic algorithm (GA) adopted in this paper is based upon symbiotic evolution which, when applied to fuzzy controller design, complements the local mapping property of a fuzzy rule. Using this Symbiotic-Evolution-based Fuzzy Controller (SEFC) design method, the number of control trials, as well as consumed CPU time, are considerably reduced when compared to traditional GA-based fuzzy controller design methods and other types of genetic reinforcement learning schemes. Moreover, unlike traditional fuzzy controllers, which partition the input space into a grid, SEFC partitions the input space in a flexible way, thus creating fewer fuzzy rules. In SEFC, different types of fuzzy rules whose consequent parts are singletons, fuzzy sets, or linear equations (TSK-type fuzzy rules) are allowed. Further, the free parameters (e.g., centers and widths of membership functions) and fuzzy rules are all tuned automatically. For TSK-type fuzzy rules in particular, when the proposed learning algorithm is used, only the significant input variables are selected to participate in the consequent of a rule. The proposed SEFC design method has been applied to different simulated control problems, including the cart-pole balancing system, a magnetic levitation system, and a water bath temperature control system. On these control problems, and in comparisons with some traditional GA-based fuzzy systems, the proposed SEFC has been verified to be efficient and superior.
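The symbiotic credit-assignment idea, each individual being a single fuzzy rule whose fitness is the average fitness of the controllers it took part in, can be sketched as follows. The plant, membership functions, and GA settings are toy assumptions, not the SEFC design:

```python
# Toy symbiotic evolution of a 3-rule fuzzy controller on x' = -x + u.
import random

def membership(x, c, w):                   # triangular membership function
    return max(0.0, 1.0 - abs(x - c) / w)

def control(rules, err):                   # rule = (center, width, singleton)
    num = den = 0.0
    for c, w, out in rules:
        mu = membership(err, c, w)
        num, den = num + mu * out, den + mu
    return num / den if den else 0.0

def trial(rules, setpoint=1.0):            # fitness of one assembled controller
    x, cost = 0.0, 0.0
    for _ in range(50):
        x += 0.1 * (-x + control(rules, setpoint - x))
        cost += (setpoint - x) ** 2
    return -cost

random.seed(0)
pop = [(random.uniform(-1, 1), 0.8, random.uniform(-2, 2)) for _ in range(30)]
for _ in range(100):
    fit, cnt = [0.0] * len(pop), [1e-9] * len(pop)
    for _ in range(60):                    # controllers sample 3 rules each
        idx = random.sample(range(len(pop)), 3)
        f = trial([pop[i] for i in idx])
        for i in idx:                      # symbiotic credit assignment
            fit[i] += f
            cnt[i] += 1
    score = [f / c for f, c in zip(fit, cnt)]
    order = sorted(range(len(pop)), key=lambda i: score[i], reverse=True)
    elite = [pop[i] for i in order[:10]]
    pop = elite + [(c + random.gauss(0, 0.1), w, o + random.gauss(0, 0.2))
                   for c, w, o in random.choices(elite, k=20)]
print(trial(elite[:3]))                    # cost of the best co-adapted rules
```

No single rule solves the task alone; fitness emerges from rule combinations, which is the symbiosis the method exploits.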
NASA Astrophysics Data System (ADS)
Clesi, V.; Bouhifd, M. A.; Bolfan-Casanova, N.; Manthilake, G.; Fabbrizio, A.; Andrault, D.
2016-11-01
This study investigates the metal-silicate partitioning of Ni, Co, V, Cr, Mn and Fe during core-mantle differentiation of terrestrial planets under hydrous conditions. For this, we equilibrated a molten hydrous CI chondrite model composition with various Fe-rich alloys in the system Fe-C-Ni-Co-Si-S in a multi-anvil press over a range of P, T, fO2 and water content (5-20 GPa, 2073-2500 K, from 1 to 5 log units below the iron-wüstite (IW) buffer and for XH2O varying from 500 ppm to 1.5 wt%). By comparing the present experiments with the available data sets on dry systems, we observe that the effect of water on the partition coefficients of moderately siderophile elements is only moderate. For example, for iron we observed a decrease in the partition coefficient of Fe (D_Fe^met/sil) from 9.5 to 4.3 with increasing water content of the silicate melt, from 0 to 1.44 wt%, respectively. The evolution of the metal-silicate partition coefficients of Ni, Co, V, Cr, Mn and Fe is modelled based on sets of empirical parameters. These empirical models are then used to refine the process of core segregation during accretion of Mars and the Earth. It appears that the likely presence of 3.5 wt% water on Mars during the core-mantle segregation could account for ∼74% of the FeO content of the Martian mantle. In contrast, water does not play such an important role for the Earth; only 4-6% of the FeO content of its mantle could be due to the water-induced Fe-oxidation, for a likely initial water concentration of 1.8 wt%. Thus, in order to reproduce the present-day FeO content of 8 wt% in the mantle, the Earth could initially have been accreted from a large fraction (between 85% and 90%) of reducing bodies (similar to EH chondrites), with 10-15% of the Earth's mass likely made of more oxidized components that introduced the major part of water and FeO to the Earth. This high proportion of enstatite chondrites in the original constitution of the Earth is consistent with the 17O, 48Ca, 50Ti, 62Ni and 90Mo isotopic study by Dauphas et al. (2014). If we assume that the CI-chondrite was oxidized during accretion, its intrinsically high water content suggests a maximum initial water concentration in the range of 1.2-1.8 wt% for the Earth, and 2.5-3.5 wt% for Mars.
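Throughout, the metal-silicate partition coefficient follows the usual weight-fraction definition

\[
D_{M}^{\mathrm{met/sil}} \;=\; \frac{C_{M}^{\mathrm{metal}}}{C_{M}^{\mathrm{silicate}}},
\]

where C_M denotes the concentration of element M measured in the quenched metal and silicate phases.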
NASA Astrophysics Data System (ADS)
Krieger, Ulrich; Marcolli, Claudia; Siegrist, Franziska
2015-04-01
The production of secondary organic aerosol (SOA) by gas-to-particle partitioning is generally represented by an equilibrium partitioning model. A key physical parameter which governs gas-particle partitioning is the pure component vapor pressure, which is difficult to measure for low- and semivolatile compounds. For typical atmospheric compounds such as citric acid or tartaric acid, vapor pressures reported in the literature differ by up to six orders of magnitude [Huisman et al., 2013]. Here, we report vapor pressures of a homologous series of polyethylene glycols (triethylene glycol to octaethylene glycol) determined by measuring the evaporation rate of single, levitated aerosol particles in an electrodynamic balance. We propose to use those as a reference data set for validating different vapor pressure measurement techniques. With each addition of an (O-CH2-CH2) group the vapor pressure is lowered by about one order of magnitude, which makes it easy to detect the lower limit of vapor pressures accessible with a particular technique, down to a pressure of 10^-8 Pa at room temperature. Reference: Huisman, A. J., Krieger, U. K., Zuend, A., Marcolli, C., and Peter, T., Atmos. Chem. Phys., 13, 6647-6662, 2013.
NASA Astrophysics Data System (ADS)
Zhou, Chi-Chun; Dai, Wu-Sheng
2018-02-01
In statistical mechanics, for a system with a fixed number of particles, e.g. a finite-size system, strictly speaking, thermodynamic quantities need to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of symmetric functions, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases, given by the classical and quantum cluster expansion methods, in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than from the grand canonical potential.
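The combinatorial identity behind such Bell-polynomial expressions is standard and worth stating for orientation: if the cluster expansion gives the generating function an exponential form, then

\[
\exp\!\left(\sum_{n\ge 1} a_n \frac{x^n}{n!}\right) \;=\; \sum_{N\ge 0} B_N(a_1,\dots,a_N)\,\frac{x^N}{N!},
\]

so the N-particle canonical partition function can be read off from the complete Bell polynomial B_N evaluated at the cluster coefficients.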
Patient-specific coronary artery blood flow simulation using myocardial volume partitioning
NASA Astrophysics Data System (ADS)
Kim, Kyung Hwan; Kang, Dongwoo; Kang, Nahyup; Kim, Ji-Yeon; Lee, Hyong-Euk; Kim, James D. K.
2013-03-01
Using computational simulation, we can analyze cardiovascular disease in non-invasive and quantitative manners. More specifically, computational modeling and simulation technology has enabled us to analyze functional aspects such as blood flow, as well as anatomical aspects such as stenosis, from medical images without invasive measurements. Note that the simplest way to perform blood flow simulation is to apply patient-specific coronary anatomy with otherwise average-valued properties; such conditions, however, cannot fully reflect the physiological properties of individual patients. To resolve this limitation, we present a new patient-specific coronary blood flow simulation method based on myocardial volume partitioning that considers the structural correspondence between arteries and myocardium. We exploit the fact that blood supply is closely related to the mass of each myocardial segment corresponding to an artery. We therefore set up the simulation conditions so as to incorporate as many patient-specific features as possible from the medical images: first, we segmented the coronary arteries and myocardium separately from cardiac CT; then the myocardium was partitioned into multiple regions based on the coronary vasculature. The myocardial mass and required blood mass for each artery were estimated by converting the myocardial volume fraction. Finally, the required blood mass was used as the boundary condition for each artery outlet, with given average aortic blood flow rate and pressure. To show the effectiveness of the proposed method, fractional flow reserve (FFR) obtained by simulation from CT images was compared with invasive FFR measurements of real patient data; an accuracy of 77% was obtained.
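The mass-proportional outlet condition can be sketched in a few lines; the conversion factors and artery names below are illustrative assumptions, not the paper's exact pipeline:

```python
# Allocate coronary inflow to artery outlets in proportion to the myocardial
# mass of the territory each artery perfuses.
def outlet_flows(territory_volumes_ml, total_flow_ml_s, density_g_ml=1.05):
    masses = {a: v * density_g_ml for a, v in territory_volumes_ml.items()}
    total = sum(masses.values())
    return {a: total_flow_ml_s * m / total for a, m in masses.items()}

territories = {"LAD": 58.0, "LCx": 41.0, "RCA": 47.0}   # toy volumes (ml)
print(outlet_flows(territories, total_flow_ml_s=4.0))
```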
NASA Astrophysics Data System (ADS)
molina, antonio; llorens, pilar; biel, carme
2014-05-01
Studies on rainfall interception in fast-growing tree plantations are less numerous than those in natural forests. Trees in these plantations are regularly distributed, and the canopy cover is clumped but changes quickly, resulting in high variability in the volume and composition of the water that reaches the soil. In addition, irrigation supply is normally required in semiarid areas to achieve optimal wood production; consequently, knowing rainfall interception and its yearly evolution is crucial to manage the irrigation scheme properly. This work studies the seasonality of rainfall partitioning in a cherry tree (Prunus avium) plantation oriented to timber production under Mediterranean conditions. The monitoring design started in March 2012 and consists of a set of 58 throughfall tipping buckets randomly distributed (based on a 1x1 m2 grid) in a plot of 128 m2 with 8 trees. Stemflow is measured in all the trees with 2 tipping buckets and 6 accumulative collectors. Canopy cover is measured regularly throughout the study period, in leaf and leafless periods, by means of sky-oriented photographs taken 50 cm above the center of each tipping bucket. Other tree biometrics, such as diameter and leaf area index, are also measured. Meteorological conditions are measured at 2 m above the forest cover. This work presents the first analyses describing the rainfall partitioning and its dependency on canopy cover, distance to tree and meteorological conditions. The modified Gash model for rainfall interception in dispersed vegetation is also preliminarily evaluated.
A mathematical programming approach for sequential clustering of dynamic networks
NASA Astrophysics Data System (ADS)
Silva, Jonathan C.; Bennett, Laura; Papageorgiou, Lazaros G.; Tsoka, Sophia
2016-02-01
A common analysis performed on dynamic networks is community structure detection, a challenging problem that aims to track the temporal evolution of network modules. An emerging area in this field is evolutionary clustering, where the community structure of a network snapshot is identified by taking into account both its current state as well as previous time points. Based on this concept, we have developed a mixed integer non-linear programming (MINLP) model, SeqMod, that sequentially clusters each snapshot of a dynamic network. The modularity metric is used to determine the quality of community structure of the current snapshot, and the historical cost is accounted for by optimising the number of node pairs co-clustered at the previous time point that remain so in the current snapshot partition. Our method is tested on social networks of interactions among high school students, college students and members of the Brazilian Congress. We show that, for an adequate parameter setting, our algorithm detects the classes that these students belong to more accurately than partitioning each time step individually or partitioning the aggregated snapshots. Our method also detects drastic discontinuities in interaction patterns across network snapshots. Finally, we present comparative results with similar community detection methods for time-dependent networks from the literature. Overall, we illustrate the applicability of mathematical programming as a flexible, adaptable and systematic approach for these community detection problems. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
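A greedy stand-in makes the snapshot-versus-history trade-off concrete (SeqMod itself solves a MINLP exactly; alpha and the preserved-pairs score below are our simplifications):

```python
# Evolutionary-clustering score: current modularity + weighted history term.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def co_clustered_pairs(partition):
    pairs = set()
    for community in partition:
        nodes = sorted(community)
        pairs.update((a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:])
    return pairs

def sequential_score(G_t, partition_t, partition_prev, alpha=0.5):
    q = modularity(G_t, partition_t)
    if partition_prev is None:
        return q
    prev, cur = co_clustered_pairs(partition_prev), co_clustered_pairs(partition_t)
    return q + alpha * len(prev & cur) / max(1, len(prev))

G1 = nx.karate_club_graph()
p1 = list(greedy_modularity_communities(G1))
print(sequential_score(G1, p1, None))
```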
NASA Astrophysics Data System (ADS)
Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.
2018-01-01
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch-partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interface of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.
Analysis of red blood cell partitioning at bifurcations in simulated microvascular networks
NASA Astrophysics Data System (ADS)
Balogh, Peter; Bagchi, Prosenjit
2018-05-01
Partitioning of red blood cells (RBCs) at vascular bifurcations has been studied over many decades using in vivo, in vitro, and theoretical models. These studies have shown that RBCs usually do not distribute to the daughter vessels with the same proportion as the blood flow. In such disproportionate partitioning, the cell distribution fraction is either higher or lower than the flow fraction; these cases are referred to as classical partitioning and reverse partitioning, respectively. The current work presents a study of RBC partitioning based on, for the first time, a direct numerical simulation (DNS) of a flowing cell suspension through modeled vascular networks that are comprised of multiple bifurcations and have topological similarity to microvasculature in vivo. The flow of deformable RBCs at physiological hematocrits is considered through the networks, and the 3D dynamics of each individual cell are accurately resolved. The focus is on a detailed analysis of the partitioning, based on the DNS data, as it develops naturally in successive bifurcations, and of the underlying mechanisms. We find that while the time-averaged partitioning at a bifurcation manifests in one of two ways, namely, the classical or reverse partitioning, the time-dependent behavior can cycle between these two types. We identify and analyze four different cellular-scale mechanisms underlying the time-dependent partitioning. These mechanisms arise, in general, either due to an asymmetry in the RBC distribution in the feeding vessels caused by the events at an upstream bifurcation or due to a temporary increase in cell concentration near capillary bifurcations. Using the DNS results, we show that a positive skewness in the hematocrit profile in the feeding vessel is associated with the classical partitioning, while a negative skewness is associated with the reverse one. We then present a detailed analysis of the two components of disproportionate partitioning as identified in prior studies, namely, plasma skimming and cell screening. The plasma skimming component is shown to under-predict the disproportionality, leaving the cell screening component to make up for the difference. The crossing of the separation surface by the cells is observed to be a dominant mechanism underlying the cell screening, which is shown to mitigate extreme heterogeneity in RBC distribution across the networks.
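The reported skewness association suggests a trivially simple classifier, sketched here with an assumed threshold at zero skewness:

```python
# Positive skew of the feeding-vessel hematocrit profile ~ classical
# partitioning; negative skew ~ reverse partitioning (toy illustration).
from scipy.stats import skew

def classify_partitioning(hematocrit_profile):
    s = skew(hematocrit_profile)
    return ("classical" if s > 0 else "reverse"), s

profile = [0.1, 0.2, 0.2, 0.3, 0.6, 0.9]   # toy cross-stream hematocrit values
print(classify_partitioning(profile))
```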
NASA Astrophysics Data System (ADS)
Li, Y.-F.; Ma, W.-L.; Yang, M.
2014-09-01
Gas/particle (G / P) partitioning for most semivolatile organic compounds (SVOCs) is an important process that primarily governs their atmospheric fate, long-range atmospheric transport potential, and their routes into the human body. All previous studies on this issue have been derived from hypothetical equilibrium conditions, the results of which in most cases do not predict results from monitoring studies well. In this study, a steady-state model instead of an equilibrium-state model for the investigation of the G / P partitioning behavior of polybrominated diphenyl ethers (PBDEs) was established, and an equation for calculating the partition coefficients under steady state (K_PS) for PBDE congeners (log K_PS = log K_PE + log α) was developed, in which an equilibrium term (log K_PE = log K_OA + log f_OM - 11.91, where f_OM is the organic matter content of the particles) and a nonequilibrium term (log α, mainly caused by dry and wet deposition of particles), both being functions of log K_OA (the octanol-air partition coefficient), are included; equilibrium is a special case of steady state in which the nonequilibrium term equals zero. A criterion to classify the equilibrium and nonequilibrium status of PBDEs was also established using two threshold values of log K_OA, log K_OA1 and log K_OA2, which divide the range of log K_OA into 3 domains: equilibrium, nonequilibrium, and maximum partition domains; accordingly, two threshold values of temperature t, t_TH1 when log K_OA = log K_OA1 and t_TH2 when log K_OA = log K_OA2, were identified, which divide the range of temperature into the same 3 domains for each BDE congener. We predicted the existence of the maximum partition domain (where the values of log K_PS reach a maximum constant of -1.53) that every PBDE congener can reach when log K_OA ≥ log K_OA2, or t ≤ t_TH2. The novel equation developed in this study was applied to predict the G / P partition coefficients of PBDEs for the published monitoring data worldwide, including Asia, Europe, North America, and the Arctic, and the results matched well with all the monitoring data, except those obtained at e-waste sites due to the unpredictable PBDE emissions at these sites. This study provided evidence that the newly developed steady-state-based equation is superior to the equilibrium-state-based equation that has been used for decades in describing the G / P partitioning behavior. We suggest that the investigation of G / P partitioning behavior for PBDEs should be based on steady state, not equilibrium state, equilibrium being just a special case of steady state in which nonequilibrium factors can be ignored. We also believe that our new equation provides a useful tool for environmental scientists in both monitoring and modeling research on G / P partitioning for PBDEs and can be extended to predict G / P partitioning behavior for other SVOCs as well.
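The steady-state equation transcribes directly into code; the nonequilibrium term log α is left as a user-supplied function since its parameterization in log K_OA is not reproduced here:

```python
# log K_PS = log K_PE + log(alpha), with log K_PE = log K_OA + log f_OM - 11.91.
import math

def log_kps(log_koa, f_om, log_alpha=lambda log_koa: 0.0):
    log_kpe = log_koa + math.log10(f_om) - 11.91
    return log_kpe + log_alpha(log_koa)

print(log_kps(log_koa=11.0, f_om=0.2))   # log_alpha = 0: equilibrium case
```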
Efficient bulk-loading of gridfiles
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Nicol, David M.
1994-01-01
This paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and on data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient, and is able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. Utilizing a sort of the entire data set, it creates a gridfile without incurring any overflows.
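A quantile-based split heuristic conveys the flavor of bulk-loading; the names and the equal-count rule below are ours, not the paper's algorithm:

```python
# Choose grid-file split points so expected bucket occupancy stays bounded.
import numpy as np

def rectilinear_partition(points, bucket_capacity):
    """points: (n, d) array; returns split points per dimension."""
    n, d = points.shape
    cells_per_dim = int(np.ceil((n / bucket_capacity) ** (1.0 / d)))
    quantiles = np.linspace(0, 1, cells_per_dim + 1)[1:-1]
    return [np.quantile(points[:, j], quantiles) for j in range(d)]

rng = np.random.default_rng(0)
pts = rng.random((10_000, 2))
print(rectilinear_partition(pts, bucket_capacity=100))
```

Quantile splits equalize marginal counts only, so some cells can still overflow; the paper's heuristic iterates toward a no-overflow partitioning.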
NASA Astrophysics Data System (ADS)
Liu, Zhengmin; Liu, Peide
2017-04-01
The Bonferroni mean (BM) was originally introduced by Bonferroni and generalised by many other researchers due to its capacity to capture the interrelationship between input arguments. Nevertheless, in many situations, interrelationships do not always exist between all of the attributes. Attributes can be partitioned into several different categories, such that members within a partition are interrelated while no interrelationship exists between attributes of different partitions. In this paper, as a complement to the existing generalisations of BM, we investigate the partitioned Bonferroni mean (PBM) under intuitionistic uncertain linguistic environments and develop two linguistic aggregation operators: the intuitionistic uncertain linguistic partitioned Bonferroni mean (IULPBM) and its weighted form (WIULPBM). Then, motivated by the idea of the geometric mean and the PBM, we further present the partitioned geometric Bonferroni mean (PGBM) and develop two linguistic geometric aggregation operators: the intuitionistic uncertain linguistic partitioned geometric Bonferroni mean (IULPGBM) and its weighted form (WIULPGBM). Some properties and special cases of these proposed operators are also investigated and discussed in detail. Based on these operators, an approach for multiple attribute decision-making problems with intuitionistic uncertain linguistic information is developed. Finally, a practical example is presented to illustrate the developed approach, and comparison analyses are conducted with other representative methods to verify its effectiveness and feasibility.
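For orientation, one common crisp form of the PBM over attributes partitioned into classes P_1, ..., P_d is

\[
\mathrm{PBM}^{p,q}(a_1,\dots,a_n) \;=\; \frac{1}{d}\sum_{h=1}^{d}\left(\frac{1}{|P_h|}\sum_{i\in P_h} a_i^{p}\left(\frac{1}{|P_h|-1}\sum_{j\in P_h,\, j\neq i} a_j^{q}\right)\right)^{\frac{1}{p+q}},
\]

which applies the Bonferroni interrelation only within each class; the linguistic operators above extend this template to intuitionistic uncertain linguistic arguments.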
A similarity based agglomerative clustering algorithm in networks
NASA Astrophysics Data System (ADS)
Liu, Zhiyuan; Wang, Xiujuan; Ma, Yinghong
2018-04-01
The detection of clusters is beneficial for understanding the organization and function of networks. Clusters, or communities, are usually groups of nodes densely interconnected among themselves but sparsely linked to other clusters. To identify communities, an efficient and effective agglomerative algorithm based on node similarity is proposed. The proposed method initially calculates similarities between each pair of nodes, and forms pre-partitions according to the principle that each node is in the same community as its most similar neighbor. After that, each pre-partition is checked against a community criterion. Pre-partitions that do not satisfy the criterion are merged with the partition exerting the biggest attraction on them, until no further changes occur. To measure the attraction ability of a partition, we propose an attraction index based on the importance of the linked nodes in the network. Our proposed method can therefore better exploit the nodes' properties and the network's structure. To test the performance of our algorithm, both synthetic and empirical networks of different scales are tested. Simulation results show that the proposed algorithm obtains superior clustering results compared with six other widely used community detection algorithms.
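The pre-partitioning step can be sketched directly; Jaccard similarity of neighborhoods stands in for the paper's similarity index here:

```python
# Each node joins the community of its most similar neighbor (pre-partition).
import networkx as nx

def pre_partition(G):
    def jaccard(u, v):
        nu, nv = set(G[u]) | {u}, set(G[v]) | {v}
        return len(nu & nv) / len(nu | nv)

    comm = {u: u for u in G}              # union-find forest
    def find(u):
        while comm[u] != u:
            comm[u] = comm[comm[u]]       # path halving
            u = comm[u]
        return u

    for u in G:
        if len(G[u]):                     # attach to most similar neighbor
            v = max(G[u], key=lambda w: jaccard(u, w))
            comm[find(u)] = find(v)
    groups = {}
    for u in G:
        groups.setdefault(find(u), []).append(u)
    return list(groups.values())

print(pre_partition(nx.karate_club_graph())[:3])
```

The merging stage, checking each pre-partition against the community criterion and the attraction index, would then operate on these groups.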
Improved parallel data partitioning by nested dissection with applications to information retrieval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar
The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
Trace element partitioning between ionic crystal and liquid
NASA Technical Reports Server (NTRS)
Tsang, T.; Philpotts, J. A.; Yin, L.
1978-01-01
The partitioning of trace elements between ionic crystals and the melt has been correlated with lattice energy of the host. The solid-liquid partition coefficient has been expressed in terms of the difference in relative ionic radius of the trace element and the homogeneous and heterogeneous strain of the host lattice. Predictions based on this model appear to be in general agreement with data for alkali nitrates and for rare-earth elements in natural garnet phenocrysts.
NASA Astrophysics Data System (ADS)
Beretta, Elena; Micheletti, Stefano; Perotto, Simona; Santacesaria, Matteo
2018-01-01
In this paper, we develop a shape optimization-based algorithm for the electrical impedance tomography (EIT) problem of determining a piecewise constant conductivity on a polygonal partition from boundary measurements. The key tool is to use a distributed shape derivative of a suitable cost functional with respect to movements of the partition. Numerical simulations showing the robustness and accuracy of the method are presented for simulated test cases in two dimensions.
Yang, Qian; Lew, Hwee Yeong; Peh, Raymond Hock Huat; Metz, Michael Patrick; Loh, Tze Ping
2016-10-01
Reference intervals are the most commonly used decision support tool when interpreting quantitative laboratory results. They may require partitioning to better describe subpopulations that display significantly different reference values. Partitioning by age is particularly important for the paediatric population since there are marked physiological changes associated with growth and maturation. However, most partitioning methods are either technically complex or require prior knowledge of the underlying physiology/biological variation of the population. There is growing interest in the use of continuous centile curves, which provide seamless laboratory reference values as a child grows, as an alternative to rigidly described fixed reference intervals. However, the mathematical functions that describe these curves can be complex and may not be easily implemented in laboratory information systems. Hence, the use of fixed reference intervals is expected to continue for the foreseeable future. We developed a method that objectively proposes optimised age partitions and reference intervals for quantitative laboratory data (http://research.sph.nus.edu.sg/pp/ppResult.aspx), based on the sum of gradients that best describes the underlying distribution of the continuous centile curves. It is hoped that this method may improve the selection of age intervals for partitioning, which is receiving increasing attention in paediatric laboratory medicine. Copyright © 2016 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.
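Gradient-guided boundary placement can be sketched as below; the objective in the published method is richer, so this is only an assumed simplification:

```python
# Place age-partition boundaries where the fitted centile changes fastest.
import numpy as np

def propose_partitions(ages, medians, n_partitions=4):
    grad = np.abs(np.gradient(medians, ages))
    order = np.argsort(grad)[::-1]             # steepest points first
    return sorted(ages[i] for i in order[:n_partitions - 1])

ages = np.linspace(0, 18, 100)
medians = 40 + 30 * np.exp(-ages / 2) + 5 * np.sin(ages / 3)   # toy centile
print(propose_partitions(ages, medians))
```

In practice one would also enforce a minimum spacing between candidate boundaries.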
Henneberger, Luise; Goss, Kai-Uwe; Endo, Satoshi
2016-07-05
The in vivo partitioning behavior of ionogenic organic chemicals (IOCs) is of paramount importance for their toxicokinetics and bioaccumulation. Among other proteins, structural proteins including muscle proteins could be an important sorption phase for IOCs, because of their high abundance in the bodies of humans and other animals and their polar nature. Binding data for IOCs to structural proteins are, however, severely limited. Therefore, in this study muscle protein-water partition coefficients (K_MP/w) of 51 systematically selected organic anions and cations were determined experimentally. A comparison of the measured K_MP/w with bovine serum albumin (BSA)-water partition coefficients showed that anionic chemicals sorb more strongly to BSA than to muscle protein (by up to 3.5 orders of magnitude), while cations sorb similarly to both proteins. Sorption isotherms of selected IOCs to muscle protein are linear (i.e., K_MP/w is concentration independent), and K_MP/w is only marginally influenced by pH value and salt concentration. Using the obtained data set of K_MP/w, a polyparameter linear free energy relationship (PP-LFER) model was established. The derived equation fits the data well (R² = 0.89, RMSE = 0.29). Finally, it was demonstrated that the in vitro measured K_MP/w values of this study have the potential to be used to evaluate tissue-plasma partitioning of IOCs in vivo.
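PP-LFERs of this kind typically take the Abraham form (generic template; the fitted coefficients for K_MP/w are in the paper and not reproduced here):

\[
\log K_{\mathrm{MP/w}} \;=\; c + eE + sS + aA + bB + vV,
\]

where E, S, A, B and V are solute descriptors and c, e, s, a, b and v are the fitted phase coefficients.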
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute a large data set of a cardiac model across a distributed memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to achieve optimal load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or simply taking the magnitude of the resulting negative coordinates, we created 14 data sets of the same anatomy with different orientation and position in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100, to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512 processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
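ORB itself is short enough to sketch; the load model (tissue voxels cost 10x) and names are assumptions for illustration:

```python
# Orthogonal recursive bisection: split along the longest axis so both
# halves carry (nearly) equal computational load; recurse.
import numpy as np

def orb(coords, loads, n_parts):
    """coords: (n, 3) voxel coordinates; loads: (n,) per-voxel cost."""
    if n_parts == 1:
        return [np.arange(len(coords))]
    axis = np.argmax(coords.max(axis=0) - coords.min(axis=0))
    order = np.argsort(coords[:, axis])
    in_left = loads[order].cumsum() <= loads.sum() / 2
    out = []
    for idx in (order[in_left], order[~in_left]):
        for part in orb(coords[idx], loads[idx], n_parts // 2):
            out.append(idx[part])
    return out

rng = np.random.default_rng(0)
xyz = rng.integers(0, 100, (5000, 3))
load = np.where(rng.random(5000) < 0.3, 10.0, 1.0)  # tissue vs. non-tissue
print([round(load[p].sum()) for p in orb(xyz, load, 8)])
```

Because the splits are axis-aligned, rotating the anatomy changes which voxels land together, which is exactly the orientation dependence studied here.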
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivasseau, Vincent, E-mail: vincent.rivasseau@th.u-psud.fr, E-mail: adrian.tanasa@ens-lyon.org; Tanasa, Adrian, E-mail: vincent.rivasseau@th.u-psud.fr, E-mail: adrian.tanasa@ens-lyon.org
The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.
NASA Astrophysics Data System (ADS)
Nielsen, Roger L.; Ustunisik, Gokce; Weinsteiger, Allison B.; Tepley, Frank J.; Johnston, A. Dana; Kent, Adam J. R.
2017-09-01
Quantitative models of petrologic processes require accurate partition coefficients. Our ability to obtain accurate partition coefficients is constrained by their dependence on pressure, temperature and composition, and on the experimental and analytical techniques we apply. The source and magnitude of error in experimental studies of trace element partitioning may go unrecognized if one examines only the processed published data. The most important sources of error are relict crystals and analyses of more than one phase in the analytical volume. Because we have typically published averaged data, identification of compromised data is difficult if not impossible. We addressed this problem by examining unprocessed data from plagioclase/melt partitioning experiments, by comparing models based on those data with existing partitioning models, and by evaluating the degree to which the partitioning models are dependent on the calibration data. We found that partitioning models depend on the calibration data in ways that result in erroneous model values, and that the error will be systematic and dependent on the value of the partition coefficient. In effect, use of different calibration datasets will result in partitioning models whose results are systematically biased, and one can arrive at different and conflicting conclusions depending on how a model is calibrated, defeating the purpose of applying the models. Ultimately this is an experimental data problem, which can be solved if we publish individual analyses (not averages) or use a projection method wherein an independent compositional constraint is used to identify and estimate the uncontaminated composition of each phase.
Salgado, J Cristian; Andrews, Barbara A; Ortuzar, Maria Fernanda; Asenjo, Juan A
2008-01-18
The prediction of the partition behaviour of proteins in aqueous two-phase systems (ATPS) using mathematical models based on their amino acid composition was investigated. The predictive models are based on the average surface hydrophobicity (ASH). The ASH was estimated by means of models that use the three-dimensional structure of proteins and by models that use only the amino acid composition of proteins. These models were evaluated for a set of 11 proteins with known experimental partition coefficients in four two-phase systems, polyethylene glycol (PEG) 4000/phosphate, PEG/sulfate, PEG/citrate and PEG/dextran, considering three levels of NaCl concentration (0.0% w/w, 0.6% w/w and 8.8% w/w). The results indicate that such prediction is feasible, even though the quality of the prediction depends strongly on the ATPS and its operational conditions such as the NaCl concentration. The ATPS 0 model, which uses the three-dimensional structure, obtains results similar to those given by previous models based on variables measured in the laboratory. In addition, it maintains the main characteristics of the hydrophobic resolution and intrinsic hydrophobicity reported before. Three mathematical models, ATPS I-III, based only on the amino acid composition were evaluated. The best results were obtained by the ATPS I model, which assumes that all of the amino acids are completely exposed. The performance of the ATPS I model follows the behaviour reported previously, i.e. its correlation coefficients improve as the NaCl concentration increases in the system and, therefore, the effect of protein hydrophobicity prevails over other effects such as charge or size. Its best predictive performance was obtained for the PEG/dextran system at high NaCl concentration. An increase in predictive capacity of at least 54.4% with respect to the models that use the three-dimensional structure of the protein was obtained for that system. In addition, the ATPS I model exhibits high correlation coefficients in that system, higher than 0.88 on average. The ATPS I model exhibited correlation coefficients higher than 0.67 for the rest of the ATPS at high NaCl concentration. Finally, we tested our best model, the ATPS I model, on the prediction of the partition coefficient of the protein invertase. We found that the predictive capacity of the ATPS I model is better in PEG/dextran systems, where the relative error of the prediction with respect to the experimental value is 15.6%.
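The 'fully exposed' assumption of ATPS I reduces the ASH to a composition-weighted mean, as in this sketch (the hydrophobicity scale is an illustrative choice, not necessarily the one used in the paper):

```python
# Composition-only ASH estimate, all residues assumed fully exposed
# (Kyte-Doolittle hydropathy values).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def ash_fully_exposed(sequence):
    return sum(KD[aa] for aa in sequence) / len(sequence)

print(ash_fully_exposed("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```

The partition coefficient is then regressed on this ASH value, with coefficients fitted per ATPS and NaCl level.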
Thompson-Bean, E; Das, R; McDaid, A
2016-10-31
We present a novel methodology for the design and manufacture of complex biologically inspired soft robotic fluidic actuators. The methodology is applied to the design and manufacture of a prosthetic for the hand. Real human hands are scanned to produce a 3D model of a finger, and pneumatic networks are implemented within it to produce a biomimetic bending motion. The finger is then partitioned into material sections, and a genetic algorithm based optimization, using finite element analysis, is employed to discover the optimal material for each section. This is based on two biomimetic performance criteria. Two sets of optimizations using two material sets are performed. Promising optimized material arrangements are fabricated using two techniques to validate the optimization routine, and the fabricated and simulated results are compared. We find that the optimization is successful in producing biomimetic soft robotic fingers and that fabrication of the fingers is possible. Limitations and paths for development are discussed. This methodology can be applied for other fluidic soft robotic devices.
NASA Astrophysics Data System (ADS)
Wei, Xiao-Ran; Zhang, Yu-He; Geng, Guo-Hua
2016-09-01
In this paper, we examined how to print hollow objects without infill via fused deposition modeling, one of the most widely used 3D-printing technologies, by partitioning the objects into shell parts. More specifically, we linked the partition to the exact cover problem. Given an input watertight mesh shape S, we developed region growing schemes to derive a set of candidate parts whose inside surfaces were printable without support on the mesh. We then employed Monte Carlo tree search over the candidate parts to obtain the optimal set cover. All possible candidate subsets of exact cover from the optimal set cover were then obtained, and a bounded tree search was used to find the optimal exact cover. We oriented each shell part to the optimal position to guarantee that the inside surface was printed without support, while the outside surface was printed with minimum support. Our solution can be applied to a variety of models, closed-hollowed or semi-closed, with or without holes, as evidenced by experiments and performance evaluation of our proposed algorithm.
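Stripped of the Monte Carlo tree search, the exact-cover core can be shown with plain backtracking over toy 'mesh regions' (an illustrative stand-in, not the paper's search):

```python
# Find disjoint candidate parts that exactly cover the surface regions.
def exact_cover(universe, subsets):
    if not universe:
        return []
    # branch on the element covered by the fewest candidate parts
    e = min(universe, key=lambda x: sum(x in s for s in subsets))
    for s in [s for s in subsets if e in s]:
        rest = [t for t in subsets if not (t & s)]
        sol = exact_cover(universe - s, rest)
        if sol is not None:
            return [s] + sol
    return None

faces = set(range(6))                     # toy surface regions
parts = [{0, 1}, {2, 3}, {4, 5}, {0, 2, 4}, {1, 3, 5}]
print(exact_cover(faces, parts))          # [{0, 1}, {2, 3}, {4, 5}]
```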
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
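The optimal segmentation itself is a textbook dynamic program once segment differences are available; the sketch below uses set union as the measure function and symmetric-difference size as the per-point difference (both assumptions for illustration):

```python
# Optimal k-segmentation of an item-set time series by dynamic programming.
def segment(itemsets, k, measure=lambda seg: set.union(*seg)):
    n = len(itemsets)
    def diff(i, j):                        # segment difference of [i, j)
        rep = measure(itemsets[i:j])
        return sum(len(rep ^ s) for s in itemsets[i:j])
    INF = float("inf")
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for j in range(1, n + 1):
        for m in range(1, k + 1):
            for i in range(m - 1, j):
                c = dp[i][m - 1] + diff(i, j)
                if c < dp[j][m]:
                    dp[j][m], cut[j][m] = c, i
    bounds, j = [], n                      # recover segment boundaries
    for m in range(k, 0, -1):
        bounds.append(j)
        j = cut[j][m]
    return dp[n][k], sorted(bounds)

ts = [{"a"}, {"a", "b"}, {"a"}, {"x"}, {"x", "y"}, {"y"}]
print(segment(ts, 2))                      # splits between the two regimes
```

The paper's contribution is computing the diff(i, j) values efficiently for each measure function; naive recomputation, as here, dominates the running time.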
Effect of video server topology on contingency capacity requirements
NASA Astrophysics Data System (ADS)
Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.
1996-03-01
Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
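The telephone-system blocking model referred to is Erlang-B, which makes the partitioning penalty easy to quantify (the stream counts below are arbitrary examples):

```python
# Erlang-B blocking probability via the standard stable recursion.
def erlang_b(servers, offered_load):
    """servers: concurrent stream slots; offered_load in Erlangs."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# One 200-stream server vs. two independent 100-stream partitions,
# each partition seeing half the offered load:
print(erlang_b(200, 180))   # monolithic
print(erlang_b(100, 90))    # per partition: noticeably higher blocking
```

The higher blocking of the split system at equal total capacity is the partitioning cost quantified in the paper.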
Minimum nonuniform graph partitioning with unrelated weights
NASA Astrophysics Data System (ADS)
Makarychev, K. S.; Makarychev, Yu S.
2017-12-01
We give a bi-criteria approximation algorithm for the Minimum Nonuniform Graph Partitioning problem, recently introduced by Krauthgamer, Naor, Schwartz and Talwar. In this problem, we are given a graph G = (V, E) and k numbers ρ_1, ..., ρ_k. The goal is to partition V into k disjoint sets (bins) P_1, ..., P_k satisfying |P_i| ≤ ρ_i |V| for all i, so as to minimize the number of edges cut by the partition. Our bi-criteria algorithm gives an O(√(log |V| log k)) approximation for the objective function in general graphs and an O(1) approximation in graphs excluding a fixed minor. The approximate solution satisfies the relaxed capacity constraints |P_i| ≤ (5 + ε) ρ_i |V|. This algorithm is an improvement upon the O(log |V|)-approximation algorithm by Krauthgamer, Naor, Schwartz and Talwar. We extend our results to the case of 'unrelated weights' and to the case of 'unrelated d-dimensional weights'. A preliminary version of this work was presented at the 41st International Colloquium on Automata, Languages and Programming (ICALP 2014). Bibliography: 7 titles.
Sources and atmospheric transformations of semivolatile organic aerosols
NASA Astrophysics Data System (ADS)
Grieshop, Andrew P.
Fine atmospheric particulate matter (PM2.5) is associated with increased mortality, a fact which led the EPA to promulgate a National Ambient Air Quality Standard (NAAQS) for PM2.5 in 1997. Organic material contributes a substantial portion of the PM2.5 mass; organic aerosols (OA) are either directly emitted (primary OA or POA) or formed as secondary OA (SOA) via the atmospheric oxidation of volatile precursor compounds. The relative contributions of POA and SOA to atmospheric OA are uncertain, as are the contributions from various source classes (e.g. motor vehicles, biomass burning). This dissertation first assesses the importance of organic PM within the context of current US air pollution regulations. Most control efforts to date have focused on the inorganic component of PM. Although growing evidence strongly implicates OA, especially that from motor vehicles, in the health effects of PM, uncertain and complex source-receptor relationships for OA discourage its direct control for NAAQS compliance. Analysis of both ambient data and chemical transport modeling results indicates that OA does not play a dominant role in NAAQS violations in most areas of the country under current and likely future regulations. Therefore, new regulatory approaches will likely be required to directly address potential health impacts associated with OA. To help develop the scientific understanding needed to better regulate OA, this dissertation examined the evolution of organic aerosol emitted by combustion systems. The current conceptual model of POA is that it is non-volatile and non-reactive. Both of these assumptions were experimentally investigated in this dissertation. Novel dilution measurements were carried out to investigate the gas-particle partitioning of OA at atmospherically relevant conditions. The results demonstrate that POA from combustion sources is semivolatile. Therefore its gas-particle partitioning depends on temperature and atmospheric concentrations; heating and dilution both cause it to evaporate. Gas-particle partitioning was parameterized using absorptive partitioning theory and the volatility basis-set framework. The dynamics of particle evaporation proved to be much slower than expected, and measurements of aerosol composition indicate that particle composition varies with partitioning. These findings have major implications for the measurement and modeling of POA from combustion sources. Source tests need to be conducted at atmospheric concentrations and temperatures. Upon entering the atmosphere, organic aerosol emissions are aged via photochemical reactions. Experiments with dilute wood smoke demonstrate the dramatic evolution these emissions undergo within hours of emission. Aging produced substantial new OA (doubling or tripling OA levels within hours) and changed particle composition and volatility. These changes are consistent with model predictions based on the partitioning and aging (via gas-phase photochemistry) of semivolatile species represented within the basis-set framework. Aging of wood-smoke OA created a much more oxygenated aerosol and formed material spectrally similar to the oxygenated OA found widely in the atmosphere. The oxygenated aerosol is also similar to that formed in comparable experiments with diesel engine emissions. Therefore, aging of emissions from diverse sources may produce chemically similar OA, complicating the establishment of robust source-receptor relationships.
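The absorptive-partitioning computation in the volatility basis-set framework is compact enough to show; bin values below are toy numbers:

```python
# VBS partitioning: particle-phase fraction of bin i is 1/(1 + C*_i/C_OA),
# with C_OA determined self-consistently by fixed-point iteration.
def vbs_partition(c_star, bin_mass, c_oa=10.0, iters=50):
    """c_star: saturation concentrations (ug/m3); bin_mass: total
    gas+particle mass per bin (ug/m3); returns particle mass per bin."""
    for _ in range(iters):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        c_oa = sum(f * m for f, m in zip(xi, bin_mass))
    return [f * m for f, m in zip(xi, bin_mass)]

bins = [0.1, 1, 10, 100, 1000]       # C* bins (ug/m3)
mass = [2.0, 4.0, 6.0, 8.0, 10.0]    # toy emissions per bin (ug/m3)
print(vbs_partition(bins, mass))
```

Evaluating this at source-test concentrations versus ambient concentrations shows directly why diluted emissions evaporate.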
Computer code for controller partitioning with IFPC application: A user's manual
NASA Technical Reports Server (NTRS)
Schmidt, Phillip H.; Yarkhan, Asim
1994-01-01
A user's manual for the computer code for partitioning a centralized controller into decentralized subcontrollers with applicability to Integrated Flight/Propulsion Control (IFPC) is presented. Partitioning of a centralized controller into two subcontrollers is described, and the algorithm on which the code is based is discussed. The algorithm uses parameter optimization of a cost function, which is described. The major data structures and functions are described, and specific usage instructions are given. The user is led through an example of an IFPC application.
Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.
NASA Astrophysics Data System (ADS)
Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca
2015-12-01
The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which has proved successful and still meets its goals. However, Grid technology has not spread much beyond these communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share based access to the resources in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already used to enable multicore computing, allows us to avoid a static splitting of the computing resources in the Tier-1 farm while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition according to suitable policies for the request and release of computing resources. Nodes requested into the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as worker nodes in the batch farm to cloud compute nodes made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation, and its integration with our current batch system, LSF.
Park, Jun-Bean; Hwang, In-Chang; Lee, Whal; Han, Jung-Kyu; Kim, Chi-Hoon; Lee, Seung-Pyo; Yang, Han-Mo; Park, Eun-Ah; Kim, Hyung-Kwan; Chiam, Paul T L; Kim, Yong-Jin; Koo, Bon-Kwon; Sohn, Dae-Won; Ahn, Hyuk; Kang, Joon-Won; Park, Seung-Jung; Kim, Hyo-Soo
2018-05-15
Limited data exist regarding the impact of aortic valve calcification (AVC) eccentricity on the risk of paravalvular regurgitation (PVR) and the response to balloon post-dilation (BPD) after transcatheter aortic valve replacement (TAVR). We investigated the prognostic value of AVC eccentricity in predicting the risk of PVR and the response to BPD in patients undergoing TAVR. We analyzed 85 patients with severe aortic stenosis who underwent self-expandable TAVR (43 women; 77.2 ± 7.1 years). AVC was quantified as the total amount of calcification (total AVC load) and as the eccentricity of calcium (EoC), using calcium volume scoring with contrast computed tomography angiography (CTA). The EoC was defined as the maximum absolute difference in calcium volume scores between 2 adjacent sectors (bi-partition method) or between sectors based on leaflets (leaflet-based method). Total AVC load and bi-partition EoC, but not leaflet-based EoC, were significant predictors of the occurrence of ≥moderate PVR, and bi-partition EoC had a better predictive value than total AVC load (area under the curve [AUC] = 0.863 versus 0.760, p for difference = 0.006). In multivariate analysis, bi-partition EoC was an independent predictor of the risk of ≥moderate PVR regardless of the perimeter oversizing index. Greater bi-partition EoC was the only significant parameter predicting poor response to BPD (AUC = 0.775, p = 0.004). Pre-procedural assessment of AVC eccentricity using CTA as "bi-partition EoC" provides useful predictive information on the risk of significant PVR and the response to BPD in patients undergoing TAVR with self-expandable valves. Copyright © 2017 Elsevier B.V. All rights reserved.
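The bi-partition EoC can be computed directly from per-sector calcium scores; the sector handling below is an assumption for illustration:

```python
# Maximum absolute difference in calcium volume between two adjacent halves,
# over all rotations of the dividing diameter.
def bipartition_eoc(sector_scores):
    """sector_scores: calcium volume per equal angular sector (even count)."""
    n = len(sector_scores)
    assert n % 2 == 0, "use an even number of sectors"
    doubled = sector_scores * 2
    total = sum(sector_scores)
    best = 0.0
    for start in range(n):
        half = sum(doubled[start:start + n // 2])
        best = max(best, abs(half - (total - half)))
    return best

print(bipartition_eoc([120, 80, 30, 10, 20, 90]))   # toy sector scores (mm3)
```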
Phylogenetic search through partial tree mixing
2012-01-01
Background: Recent advances in sequencing technology have created large data sets upon which phylogenetic inference can be performed. Current research is limited by the prohibitive time necessary to perform tree search on a reasonable number of individuals. This research develops new phylogenetic algorithms that can operate on tens of thousands of species in a reasonable amount of time through several innovative search techniques. Results: When compared to popular phylogenetic search algorithms, better trees are found much more quickly for large data sets. These algorithms are incorporated in the PSODA application, available at http://dna.cs.byu.edu/psoda. Conclusions: The use of Partial Tree Mixing in a partition-based tree space allows the algorithm to quickly converge on near optimal tree regions. These regions can then be searched in a methodical way to determine the overall optimal phylogenetic solution. PMID:23320449
NASA Astrophysics Data System (ADS)
Teh, R. Y.; Reid, M. D.
2014-12-01
Following previous work, we distinguish between genuine N-partite entanglement and full N-partite inseparability. Accordingly, we derive criteria to detect genuine multipartite entanglement using continuous-variable (position and momentum) measurements. Our criteria are similar to, but distinct from, those based on the van Loock-Furusawa inequalities, which detect full N-partite inseparability. We explain how the criteria can be used to detect the genuine N-partite entanglement of continuous variable states generated from squeezed and vacuum state inputs, including the continuous-variable Greenberger-Horne-Zeilinger state, with explicit predictions for up to N = 9. This makes our work accessible to experiment. For N = 3, we also present criteria for tripartite Einstein-Podolsky-Rosen (EPR) steering. These criteria provide a means to demonstrate a genuine three-party EPR paradox, in which any single party is steerable by the remaining two parties.
Time and Space Partition Platform for Safe and Secure Flight Software
NASA Astrophysics Data System (ADS)
Esquinas, Angel; Zamorano, Juan; de la Puente, Juan A.; Masmano, Miguel; Crespo, Alfons
2012-08-01
There are a number of research and development activities exploring Time and Space Partitioning (TSP) to implement safe and secure flight software. This approach allows different real-time applications with different levels of criticality to execute on the same computer board. In order to do that, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition, enabling Ada applications to share the computer board with other applications.
Feenstra, Peter; Brunsteiner, Michael; Khinast, Johannes
2014-10-01
The interaction between drug products and polymeric packaging materials is an important topic in the pharmaceutical industry and is often associated with high costs because of the elaborate interaction studies required. A theoretical prediction of such interactions would therefore be beneficial. Often, material parameters such as the octanol-water partition coefficient are used to predict the partitioning of migrant molecules between a solvent and a polymeric packaging material. Here, we present an investigation of the partitioning of various migrant molecules between polymers and solvents using molecular dynamics simulations to calculate interaction energies. Our results show that a model of the migrant-polymer interaction at atomistic detail can yield significantly better results when predicting polymer-solvent partitioning than a model based on the octanol-water partition coefficient. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
NASA Astrophysics Data System (ADS)
Zheng, Mingfei; Li, Hongjian; Chen, Zhiquan; He, Zhihui; Xu, Hui; Zhao, Mingzhuo
2017-11-01
We propose a compact plasmonic nanofilter in a partitioned semicircle or semiring stub waveguide, and investigate the transmission characteristics of the two novel systems using the finite-difference time-domain method. An ultra-broad stopband is generated by partitioning a single stub into a double stub with a rectangular metal partition, caused by the destructive interference superposition of the waves reflected and transmitted from each stub. A tunable stopband is realized in the multiple plasmonic nanofilter by adjusting the width of the partition and the outer and inner radii of the stub; the starting wavelength, ending wavelength, center wavelength, bandwidth and total tunable bandwidth are discussed, and specific filtering wavebands and optimal structural parameters are obtained. The proposed structures realize an asymmetrical stub, achieve an ultra-broad stopband, and have potential applications in band-stop nanofilters and high-density plasmonic integrated optical circuits.
Ferrari, Thomas; Lombardo, Anna; Benfenati, Emilio
2018-05-14
Several methods exist to develop QSAR models automatically. Some are based on indices of the presence of atoms, others on the most similar compounds, others on molecular descriptors. Here we introduce QSARpy v1.0, a new QSAR modeling tool based on a different approach: dissimilarity. This tool fragments the molecules of the training set to extract fragments that can be associated with a difference in the property/activity value, called modulators. If the target molecule shares part of its structure with a molecule of the training set and the differences can be explained by one or more modulators, the property/activity value of the training molecule is adjusted using the values associated with the modulator(s). The tool is tested here on the n-octanol/water partition coefficient (Kow, usually expressed in logarithmic units as log Kow), a key parameter in risk assessment since it is a measure of hydrophobicity. Its widespread use makes estimation methods very useful for reducing testing costs. Using QSARpy v1.0, we obtained a new model to predict log Kow with accurate performance (RMSE 0.43 and R² 0.94 for the external test set), comparing favorably with other programs. QSARpy is freely available on request. Copyright © 2018 Elsevier B.V. All rights reserved.
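To make the modulator idea concrete, here is a minimal sketch (the fragment names, modulator values and the single training entry are invented for illustration; QSARpy's actual fragmentation and matching are far more elaborate):

    from collections import Counter

    # Hypothetical modulators: a fragment difference mapped to a log Kow shift.
    modulators = {"CH2": 0.54, "OH": -1.12}   # illustrative values, not from the paper

    # One hypothetical training molecule as a fragment multiset with known log Kow.
    training = {("C6H5", "CH2", "CH2"): 3.15}

    def predict(target: Counter):
        for frags, logkow in training.items():
            base = Counter(frags)
            diff = target - base               # fragments only in the target
            missing = base - target            # fragments only in the training molecule
            if all(f in modulators for f in (*diff, *missing)):
                return (logkow
                        + sum(modulators[f] * n for f, n in diff.items())
                        - sum(modulators[f] * n for f, n in missing.items()))
        return None                            # no training molecule explains the target

    print(predict(Counter(["C6H5", "CH2", "CH2", "CH2"])))  # adds one CH2 modulator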
NASA Astrophysics Data System (ADS)
Wang, X. Y.; Dou, J. M.; Shen, H.; Li, J.; Yang, G. S.; Fan, R. Q.; Shen, Q.
2018-03-01
As power grids are continuously reinforced, their network structure becomes more and more complicated. An open, regional data model is used to compute protection settings on a per-region basis. At the same time, a high-precision, quasi-real-time boundary fusion technique is needed to seamlessly integrate the regions into an integrated fault computing platform that can perform high-accuracy, multi-mode transient stability analysis covering the whole network, handle the consequences of non-single and cascading faults, and build "the first line of defense" of the power grid. The boundary fusion algorithm presented in this paper automatically and accurately couples the boundaries of interconnected grid partitions. It takes the actual operation mode as its qualification, completes boundary coupling of weakly coupled partitions in open-loop mode, improves fusion efficiency, truly reflects the grid's transient stability level, and effectively addresses the problems of excessive data volume, the difficulty of partition fusion, and fusion failures caused by mutually exclusive conditions. The paper first introduces the basic principle of the fusion process, then describes boundary fusion customization through a scene description, and finally gives an example illustrating how the algorithm implements boundary fusion after grid partitioning, verifying its accuracy and efficiency.
High Performance Computing Based Parallel Hierarchical Modal Association Clustering (HPAR HMAC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patlolla, Dilip R; Surendran Nair, Sujithkumar; Graves, Daniel A.
For many applications, clustering is a crucial step in order to gain insight into the makeup of a dataset. The best approach to a given problem often depends on a variety of factors, such as the size of the dataset, time restrictions, and soft clustering requirements. The HMAC algorithm seeks to combine the strengths of 2 particular clustering approaches: model-based and linkage-based clustering. One particular weakness of HMAC is its computational complexity. HMAC is not practical for mega-scale data clustering. For high-definition imagery, a user would have to wait months or years for a result; for a 16-megapixel image, the estimated runtime skyrockets to over a decade! To improve the execution time of HMAC, it is reasonable to consider a multi-core implementation that utilizes available system resources. An existing implementation (Ray and Cheng 2014) divides the dataset into N partitions, one for each thread, prior to executing the HMAC algorithm. This implementation benefits from 2 types of optimization: parallelization and divide-and-conquer. By running each partition in parallel, the program is able to accelerate computation by utilizing more system resources. Although the parallel implementation provides considerable improvement over the serial HMAC, it still suffers from poor computational complexity, O(N²). Once the maximum number of cores on a system is exhausted, the program exhibits slower behavior. We now consider a modification to HMAC that involves a recursive partitioning scheme. Our modification aims to exploit the divide-and-conquer benefits seen by the parallel HMAC implementation. At each level in the recursion tree, partitions are divided into 2 sub-partitions until a threshold size is reached. When a partition can no longer be divided without falling below the threshold size, the base HMAC algorithm is applied. This results in a significant speedup over the parallel HMAC.
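A minimal sketch of such a recursive partitioning scheme (the threshold, the median split rule and the base_cluster stand-in are assumptions for illustration, not the HMAC internals):

    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    THRESHOLD = 1000  # hypothetical maximum partition size

    def base_cluster(points):
        # Stand-in for the serial HMAC step; returns one cluster label per point.
        return np.zeros(len(points), dtype=int)

    def recursive_partition(points):
        """Split along the widest dimension at its median until each
        partition falls below THRESHOLD; leaves get the base algorithm."""
        if len(points) <= THRESHOLD:
            return [points]
        dim = np.argmax(points.max(axis=0) - points.min(axis=0))
        median = np.median(points[:, dim])
        left = points[points[:, dim] <= median]
        right = points[points[:, dim] > median]
        if len(left) == 0 or len(right) == 0:   # degenerate split; stop recursing
            return [points]
        return recursive_partition(left) + recursive_partition(right)

    if __name__ == "__main__":
        data = np.random.rand(10000, 3)
        parts = recursive_partition(data)
        with ProcessPoolExecutor() as pool:      # one task per leaf partition
            labels = list(pool.map(base_cluster, parts))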
RESOURCE-BASED NICHES PROVIDE A BASIS FOR PLANT SPECIES DIVERSITY AND DOMINANCE IN ARCTIC TUNDRA
Ecologists have long been intrigued by the ways co-occurring species divide limiting resources, and have proposed that such resource partitioning, or niche differentiation, promotes species diversity by reducing competition. Although resource partitioning is an important determi...
Yin, Kedong; Yang, Benshuo; Li, Xuemei
2018-01-24
In this paper, we investigate multiple attribute group decision making (MAGDM) problems in which decision makers represent their evaluation of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there is some interrelationship among the attributes, we analyze the partition Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyse the impact of different parameters on the results of decision-making.
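For reference, one commonly cited crisp form of the partitioned Bonferroni mean, which operators such as TF2DLPBM generalize with fuzzy-linguistic operational laws (this form is drawn from the general PBM literature, not reproduced from the paper):

    \mathrm{PBM}^{p,q}(a_1,\dots,a_n)
      = \frac{1}{d}\sum_{h=1}^{d}
        \Bigg( \frac{1}{|P_h|} \sum_{i \in P_h} a_i^{p}
          \Big( \frac{1}{|P_h| - 1} \sum_{j \in P_h,\, j \neq i} a_j^{q} \Big)
        \Bigg)^{\frac{1}{p+q}}

where the attributes are partitioned into classes P_1, ..., P_d and p, q >= 0 tune how strongly interrelationships within each class are rewarded.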
Ocean surface partitioning strategies using ocean colour remote sensing: A review
NASA Astrophysics Data System (ADS)
Krug, Lilian Anne; Platt, Trevor; Sathyendranath, Shubha; Barbosa, Ana B.
2017-06-01
The ocean surface is organized into regions with distinct properties reflecting the complexity of interactions between environmental forcing and biological responses. The delineation of these functional units, each with unique, homogeneous properties and underlying ecosystem structure and dynamics, can be defined as ocean surface partitioning. The main purposes and applications of ocean partitioning include the evaluation of particular marine environments; generation of more accurate satellite ocean colour products; assimilation of data into biogeochemical and climate models; and establishment of ecosystem-based management practices. This paper reviews the diverse approaches implemented for partitioning the ocean surface into functional units using ocean colour remote sensing (OCRS) data, including their purposes, criteria, methods and scales. OCRS offers synoptic, high spatial-temporal resolution, multi-decadal coverage of bio-optical properties, relevant to the applications and value of ocean surface partitioning. In combination with other biotic and/or abiotic data, OCRS-derived data (e.g., chlorophyll-a, optical properties) provide a broad and varied source of information that can be analysed using delineation methods ranging from subjective, expert-based approaches to unsupervised learning (e.g., cluster, fuzzy and empirical orthogonal function analyses). Partition schemes are applied at global to mesoscale spatial coverage, with static (time-invariant) or dynamic (time-varying) representations. A case study, the highly heterogeneous area off the SW Iberian Peninsula (NE Atlantic), illustrates how the selection of spatial coverage and temporal representation affects the discrimination of distinct environmental drivers of phytoplankton variability. Advances in operational oceanography and in the subject area of satellite ocean colour, including the development of new sensors, algorithms and products, are among the potential benefits from extended use, scope and applications of ocean surface partitioning using OCRS.
Carter, J.L.; Fend, S.V.
2005-01-01
Lotic habitats in urban settings are often more modified than in other anthropogenically influenced areas. The extent, degree, and permanency of these modifications compromise the use of traditional reference-based study designs to evaluate the level of lotic impairment and establish restoration goals. Directly relating biological responses to the combined effects of urbanization is further complicated by the nonlinear response often observed in common metrics (e.g., Ephemeroptera, Plecoptera, and Trichoptera [EPT] species richness) to measures of human influence (e.g., percentage urban land cover). A characteristic polygonal biological response often arises from the presence of a generalized limiting factor (i.e., urban land use) plus the influence of multiple additional stressors that are nonuniformly distributed throughout the urban environment. Benthic macroinvertebrates, on-site physical habitat and chemistry, and geographical information systems-derived land cover data for 85 sites were collected within the 1,600-km2 Santa Clara Valley (SCV), California urban area. A biological indicator value was derived from EPT richness and percentage EPT. Partitioned regression was used to define reference conditions and estimate the degree of site impairment. We propose that an upper-boundary condition (factor-ceiling) modeled by partitioned regression using ordinary least squares represents an attainable upper limit for biological condition in the SCV area. Indicator values greater than the factor-ceiling, which is monotonically related to existing land use, are considered representative of reference conditions under the current habitat conditions imposed by existing land cover and land use.
NASA Astrophysics Data System (ADS)
Parrish, Robert M.; Sherrill, C. David
2014-07-01
We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.
[Analytic methods for seed models with genotype x environment interactions].
Zhu, J
1996-01-01
Genetic models with genotype effects (G) and genotype x environment interaction effects (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can likewise be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2 and backcrosses. A set of parents, their reciprocal F1, and F2 seeds is applicable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components. Unbiased estimation of covariance components between two traits can also be obtained by the MINQUE(0/1) method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating sampling variances of the estimated variance and covariance components and of the predicted genetic effects, which can then be used in t-tests of parameters. Unbiasedness and efficiency of estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.
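In compact notation, the nested decomposition described above is:

    G = G_0 + C + G_m, \qquad G_0 = A + D, \qquad G_m = A_m + D_m
    GE = G_0E + CE + G_mE, \qquad G_0E = AE + DE, \qquad G_mE = A_mE + D_mE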
Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially like the general Boltzmann weighting method, but also reduces the effect of fitting errors leading to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit the high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties in a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than partial atomic charge parameters in these systems although the electrostatic interactions are still important in energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any types of force field parameters. Copyright © 2013 Wiley Periodicals, Inc.
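As a sketch of the general idea (plain Boltzmann weighting only; the paper's partition function-based scheme adds an error-damping refinement whose exact form is not reproduced here, and kT and the energy values below are hypothetical):

    import numpy as np

    def boltzmann_weights(energies, kT=0.6):
        """Exponential weighting of target energies (kcal/mol), normalized to
        sum to one, so low-energy configurations dominate the fit."""
        e = np.asarray(energies) - np.min(energies)   # shift for numerical stability
        w = np.exp(-e / kT)
        return w / w.sum()

    def weighted_rmse(target, model, weights):
        """Objective for parameter fitting: weighted RMSE of model energies
        against the ab initio target energies."""
        r = np.asarray(model) - np.asarray(target)
        return np.sqrt(np.sum(weights * r ** 2))

    target = np.array([0.0, 0.5, 1.2, 3.0, 8.0])   # hypothetical PES points
    model = np.array([0.1, 0.4, 1.5, 2.7, 9.0])    # energies from trial vdW parameters
    w = boltzmann_weights(target)
    print(weighted_rmse(target, model, w))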
Applying graph partitioning methods in measurement-based dynamic load balancing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatele, Abhinav; Fourestier, Sebastien; Menon, Harshitha
Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a migratable-objects based programming model, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance compared to the METIS partitioners for several cases, both in terms of application execution time and fewer objects migrated.
Topological strings on singular elliptic Calabi-Yau 3-folds and minimal 6d SCFTs
NASA Astrophysics Data System (ADS)
Del Zotto, Michele; Gu, Jie; Huang, Min-xin; Kashani-Poor, Amir-Kian; Klemm, Albrecht; Lockhart, Guglielmo
2018-03-01
We apply the modular approach to computing the topological string partition function on non-compact elliptically fibered Calabi-Yau 3-folds with higher Kodaira singularities in the fiber. The approach consists in making an ansatz for the partition function at given base degree, exact in all fiber classes to arbitrary order and to all genus, in terms of a rational function of weak Jacobi forms. Our results yield, at given base degree, the elliptic genus of the corresponding non-critical 6d string, and thus the associated BPS invariants of the 6d theory. The required elliptic indices are determined from the chiral anomaly 4-form of the 2d worldsheet theories, or the 8-form of the corresponding 6d theories, and completely fix the holomorphic anomaly equation constraining the partition function. We introduce subrings of the known rings of Weyl invariant Jacobi forms which are adapted to the additional symmetries of the partition function, making its computation feasible to low base wrapping number. In contradistinction to the case of simpler singularities, generic vanishing conditions on BPS numbers are no longer sufficient to fix the modular ansatz at arbitrary base wrapping degree. We show that to low degree, imposing exact vanishing conditions does suffice, and conjecture this to be the case generally.
A physics-motivated Centroidal Voronoi Particle domain decomposition method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm, which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect ratios. Moreover, Voronoi Particle dynamics employing a physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
Spatial partitioning algorithms for data visualization
NASA Astrophysics Data System (ADS)
Devulapalli, Raghuveer; Quist, Mikael; Carlsson, John Gunnar
2013-12-01
Spatial partitions of an information space are frequently used for data visualization. Weighted Voronoi diagrams are among the most popular ways of dividing a space into partitions. However, the problem of computing such a partition efficiently can be challenging. For example, a natural objective is to select the weights so as to force each Voronoi region to take on a pre-defined area, which might represent the relevance or market share of an informational object. In this paper, we present an easy and fast algorithm to compute these weights of the Voronoi diagrams. Unlike previous approaches whose convergence properties are not well-understood, we give a formulation to the problem based on convex optimization with excellent performance guarantees in theory and practice. We also show how our technique can be used to control the shape of these partitions. More specifically we show how to convert undesirable skinny and long regions into fat regions while maintaining the areas of the partitions. As an application, we use these to visualize the amount of website traffic for the top 101 websites.
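A minimal sketch of the weight-selection idea (a crude Monte Carlo fixed-point iteration, not the paper's convex-optimization formulation; site positions, step size and sample counts are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    sites = rng.random((5, 2))               # generator locations in the unit square
    targets = np.full(5, 0.2)                # desired area fraction per region
    weights = np.zeros(5)
    samples = rng.random((100000, 2))        # Monte Carlo probes for cell areas

    # Squared distances from every probe to every site (fixed across iterations).
    d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)

    for _ in range(100):
        # Additively weighted (power) Voronoi assignment: minimize |x - p|^2 - w.
        cell = np.argmin(d2 - weights[None, :], axis=1)
        areas = np.bincount(cell, minlength=5) / len(samples)
        weights += 0.5 * (targets - areas)   # grow undersized cells, shrink oversized
        weights -= weights.mean()            # weights matter only up to a constant

    print(areas.round(3))                    # approaches the 0.2 targets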
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition as the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty lies in a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
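In the same spirit, a much-simplified caricature of iterative nearest-neighbor symbolization (a Lloyd-style alternation on delay vectors; the paper's algorithm differs in its objective, assignment rule and convergence guarantees):

    import numpy as np

    def delay_embed(x, m=2):
        """Stack m consecutive samples so each row is a short history."""
        return np.column_stack([x[i:len(x) - m + 1 + i] for i in range(m)])

    def symbolize(x, n_symbols=2, m=2, iters=50, seed=0):
        """Alternate nearest-reconstruction-value symbol assignment with
        reconstruction-value updates."""
        rng = np.random.default_rng(seed)
        X = delay_embed(x, m)
        recon = X[rng.choice(len(X), n_symbols, replace=False)]
        for _ in range(iters):
            d = ((X[:, None, :] - recon[None, :, :]) ** 2).sum(axis=2)
            s = np.argmin(d, axis=1)                   # nearest-neighbor assignment
            for k in range(n_symbols):
                if np.any(s == k):
                    recon[k] = X[s == k].mean(axis=0)  # update reconstruction values
        return s

    # Logistic map, a standard test case for generating-partition methods.
    x = np.empty(5000); x[0] = 0.4
    for t in range(4999):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    symbols = symbolize(x)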
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandbyge, Mads, E-mail: mads.brandbyge@nanotech.dtu.dk
2014-05-07
In a recent paper Reuter and Harrison [J. Chem. Phys. 139, 114104 (2013)] question the widely used mean-field electron transport theories, which employ nonorthogonal localized basis sets. They claim these can violate an "implicit decoupling assumption," leading to wrong results for the current, different from what would be obtained by using an orthogonal basis and dividing surfaces defined in real space. We argue that this assumption is not required to be fulfilled to get exact results. We show how the current/transmission calculated by the standard Green's function method is independent of whether or not the chosen basis set is nonorthogonal, and that the current for a given basis set is consistent with divisions in real space. The ambiguity known from charge population analysis for nonorthogonal bases does not carry over to calculations of charge flux.
12. VIEW OF SPACE BETWEEN EAST FALSE PARTITION WALL IN ...
12. VIEW OF SPACE BETWEEN EAST FALSE PARTITION WALL IN CLEAN ROOM (102) AND EAST WALL OF VEHICLE SUPPORT BUILDING SHOWING PREFILTER NEAR SOUTH WALL - Vandenberg Air Force Base, Space Launch Complex 3, Vehicle Support Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Mapping Pesticide Partition Coefficients By Electromagnetic Induction
USDA-ARS?s Scientific Manuscript database
A potential method for reducing pesticide leaching is to base application rates on the leaching potential of a specific chemical and soil combination. However, leaching is determined in part by the partitioning of the chemical between the soil and soil solution, which varies across a field. Standard...
A closer look at cross-validation for assessing the accuracy of gene regulatory networks and models.
Tabe-Bordbar, Shayan; Emad, Amin; Zhao, Sihai Dave; Sinha, Saurabh
2018-04-26
Cross-validation (CV) is a technique to assess the generalizability of a model to unseen data. The technique relies on assumptions that may not be satisfied when studying genomics datasets. For example, random CV (RCV) assumes that a randomly selected set of samples, the test set, well represents unseen data. This assumption does not hold where samples are obtained from different experimental conditions and the goal is to learn regulatory relationships among the genes that generalize beyond the observed conditions. In this study, we investigated how the CV procedure affects the assessment of supervised learning methods used to learn gene regulatory networks (or in other applications). We compared the performance of a regression-based method for gene expression prediction estimated using RCV with that estimated using a clustering-based CV (CCV) procedure. Our analysis illustrates that RCV can produce over-optimistic estimates of the model's generalizability compared to CCV. Next, we defined the 'distinctness' of the test set from the training set and showed that this measure is predictive of the performance of the regression method. Finally, we introduced a simulated annealing method to construct partitions with gradually increasing distinctness and showed that the performance of different gene expression prediction methods can be better evaluated using this method.
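A minimal sketch of clustering-based versus random CV (KMeans clusters serve as CV groups, with ridge regression on synthetic data; the paper's features, model and simulated-annealing partition construction are not reproduced):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 20))          # stand-in for expression features
    y = X[:, 0] * 2.0 + rng.normal(size=300)

    # Clustering-based CV: cluster the samples, then hold out whole clusters
    # so each test set is 'distinct' from its training set.
    groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    ccv = cross_val_score(Ridge(), X, y, cv=GroupKFold(n_splits=5), groups=groups)

    # Random CV for comparison; typically yields more optimistic estimates.
    rcv = cross_val_score(Ridge(), X, y, cv=5)
    print(ccv.mean(), rcv.mean())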
Tanabe, Akifumi S
2011-09-01
Proportional and separate models, able to apply a different combination of substitution rate matrix (SRM) and among-site rate variation model (ASRVM) to each locus, are frequently used in phylogenetic studies of multilocus data. A proportional model assumes that branch lengths are proportional among partitions, and a separate model assumes that each partition has an independent set of branch lengths. However, the selection from among nonpartitioned (i.e., a common combination of models applied to all-loci concatenated sequences), proportional and separate models is usually based on the researcher's preference rather than on any information criterion. This study describes two programs, 'Kakusan4' (for DNA sequences) and 'Aminosan' (for amino-acid sequences), which allow the selection of evolutionary models based on several types of information criteria. The programs can handle both multilocus and single-locus data, in addition to providing an easy-to-use wizard interface and a noninteractive command line interface. In the case of multilocus data, SRMs and ASRVMs are compared at each locus and at all-loci concatenated sequences, after which nonpartitioned, proportional and separate models are compared based on information criteria. The programs also provide model configuration files for mrbayes, paup*, phyml, raxml and Treefinder to support further phylogenetic analysis using a selected model. When likelihoods were optimized by Treefinder, the best-fit models were found to differ depending on the data set. Furthermore, differences in the information criteria among nonpartitioned, proportional and separate models were much larger than those among the nonpartitioned models. These findings suggest that selecting from among nonpartitioned, proportional and separate models results in a better phylogenetic tree. Kakusan4 and Aminosan are available at http://www.fifthdimension.jp/. They are licensed under the GNU GPL ver. 2, and are able to run on Windows, MacOS X and Linux. © 2011 Blackwell Publishing Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Havu, V.; Fritz Haber Institute of the Max Planck Society, Berlin; Blum, V.
2009-12-01
We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
Unstructured P2P Network Load Balance Strategy Based on Multilevel Partitioning of Hypergraph
NASA Astrophysics Data System (ADS)
Feng, Lv; Chunlin, Gao; Kaiyang, Ma
2017-05-01
With the rapid development of computer performance and distributed technology, P2P-based resource sharing plays an important role in the Internet. As the number of P2P network users keeps increasing, the highly dynamic character of the system makes it difficult for a node to obtain the load of other nodes. Therefore, a dynamic load-balance strategy based on hypergraphs is proposed in this article. The scheme builds on the idea of multilevel partitioning from hypergraph theory: it adopts an optimized multilevel partitioning algorithm to partition the P2P network into several small areas and assigns each area a supernode that manages the nodes in the area and transfers load among them. Where global scheduling is difficult to achieve, load balancing within a number of small areas can be ensured first; through node load balance in each small area, the whole network achieves relative load balance. Experiments indicate that the load distribution of network nodes under this scheme is distinctly more compact. It effectively alleviates imbalance problems in P2P networks and improves the scalability and bandwidth utilization of the system.
MODFLOW-CDSS, a version of MODFLOW-2005 with modifications for Colorado Decision Support Systems
Banta, Edward R.
2011-01-01
MODFLOW-CDSS is a three-dimensional, finite-difference groundwater-flow model based on MODFLOW-2005, with two modifications. The first modification is the introduction of a Partition Stress Boundaries capability, which enables the user to partition a selected subset of MODFLOW's stress-boundary packages, with each partition defined by a separate input file. Volumetric water-budget components of each partition are tracked and listed separately in the volumetric water-budget tables. The second modification enables the user to specify that execution of a simulation should continue despite failure of the solver to satisfy convergence criteria. This modification is particularly intended to be used in conjunction with automated model-analysis software; its use is not recommended for other purposes.
Spectral methods in machine learning and new strategies for very large datasets
Belabbas, Mohamed-Ali; Wolfe, Patrick J.
2009-01-01
Spectral methods are of fundamental importance in statistics and machine learning, because they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. For the growing number of applications dealing with very large or high-dimensional datasets, however, the optimal approximation afforded by an exact spectral decomposition is too costly, because its complexity scales as the cube of either the number of training examples or their dimensionality. Motivated by such applications, we present here 2 new algorithms for the approximation of positive-semidefinite kernels, together with error bounds that improve on results in the literature. We approach this problem by seeking to determine, in an efficient manner, the most informative subset of our data relative to the kernel approximation task at hand. This leads to two new strategies based on the Nyström method that are directly applicable to massive datasets. The first of these—based on sampling—leads to a randomized algorithm whereupon the kernel induces a probability distribution on its set of partitions, whereas the latter approach—based on sorting—provides for the selection of a partition in a deterministic way. We detail their numerical implementation and provide simulation results for a variety of representative problems in statistical data analysis, each of which demonstrates the improved performance of our approach relative to existing methods. PMID:19129490
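A minimal sketch of the Nyström approximation underlying both strategies (uniform column sampling with an RBF kernel; the paper's kernel-induced sampling distribution and deterministic sorting-based selection are not shown):

    import numpy as np

    def rbf(A, B, gamma=0.5):
        """Gaussian (RBF) kernel matrix between row sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-gamma * d2)

    def nystrom(X, m, seed=0):
        """Low-rank kernel approximation K ~ C W^+ C^T from m sampled columns."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=m, replace=False)  # uniform sampling variant
        C = rbf(X, X[idx])        # n x m block of the kernel matrix
        W = C[idx]                # m x m block on the sampled points
        return C, np.linalg.pinv(W)

    X = np.random.default_rng(1).normal(size=(500, 4))
    C, W_pinv = nystrom(X, m=50)
    K_approx = C @ W_pinv @ C.T
    K = rbf(X, X)
    print("relative Frobenius error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))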
Data Randomization and Cluster-Based Partitioning for Botnet Intrusion Detection.
Al-Jarrah, Omar Y; Alhussein, Omar; Yoo, Paul D; Muhaidat, Sami; Taha, Kamal; Kim, Kwangjo
2016-08-01
Botnets, which consist of remotely controlled compromised machines called bots, provide a distributed platform for several threats against cyber world entities and enterprises. An intrusion detection system (IDS) provides an efficient countermeasure against botnets. It continually monitors and analyzes network traffic for potential vulnerabilities and the possible existence of active attacks. A payload-inspection-based IDS (PI-IDS) identifies active intrusion attempts by inspecting transmission control protocol and user datagram protocol packet payloads and comparing them with previously seen attack signatures. However, the PI-IDS's ability to detect intrusions might be incapacitated by packet encryption. A traffic-based IDS (T-IDS) alleviates this shortcoming of the PI-IDS: it does not inspect the packet payload but instead analyzes the packet header to identify intrusions. As network traffic grows rapidly, not only is the detection rate critical, but the efficiency and scalability of the IDS also become more significant. In this paper, we propose a state-of-the-art T-IDS built on a novel randomized data partitioned learning model (RDPLM), relying on a compact network feature set and feature selection techniques, simplified subspacing and a multiple randomized meta-learning technique. The proposed model achieved 99.984% accuracy and 21.38 s training time on a well-known benchmark botnet dataset. Experiment results demonstrate that the proposed methodology outperforms other well-known machine-learning models used in the same detection task, namely, sequential minimal optimization, deep neural network, C4.5, reduced error pruning tree, and randomTree.
Evapotranspiration partitioning in a semi-arid African savanna using stable isotopes of water vapor
NASA Astrophysics Data System (ADS)
Soderberg, K.; Good, S. P.; O'Connor, M.; King, E. G.; Caylor, K. K.
2012-04-01
Evapotranspiration (ET) represents a major flux of water out of semi-arid ecosystems. Thus, understanding ET dynamics is central to the study of African savanna health and productivity. At our study site in central Kenya (Mpala Research Centre), we have been using stable isotopes of water vapor to partition ET into its constituent parts of plant transpiration (T) and soil evaporation (E). This effort includes continuous measurement (1 Hz) of δ2H and δ18O in water vapor using a portable water vapor isotope analyzer mounted on a 22.5 m eddy covariance flux tower. The flux tower has been collecting data since early 2010. The isotopic end-member δET is calculated using a Keeling plot approach, whereas δT and δE are measured directly via a leaf chamber and tubing buried in the soil, respectively. Here we report on two recent sets of measurements for partitioning ET in the Kenya Long-term Exclosure Experiment (KLEE) and a nearby grassland. We combine leaf-level measurements of photosynthesis and water use with canopy-scale isotope measurements for ET partitioning. In the KLEE experiment we compare ET partitioning in a 4 ha plot that has only seen cattle grazing for the past 15 years with an adjacent plot that has undergone grazing by both cattle and wild herbivores (antelope, elephants, giraffe). These results are compared with a detailed study of ET in an artificially watered grassland.
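The Keeling-plot end-member and the resulting two-member partition can be sketched as follows (all concentrations and isotope values below are invented for illustration, not site data):

    import numpy as np

    # Keeling plot: regress the vapor isotope ratio against 1/concentration;
    # the intercept estimates the isotopic signature of the ET flux (delta_ET).
    inv_c = np.array([0.050, 0.055, 0.060, 0.065, 0.070])   # hypothetical 1/(mmol/mol)
    delta = np.array([-17.0, -16.7, -16.4, -16.1, -15.8])   # hypothetical permil
    slope, intercept = np.polyfit(inv_c, delta, 1)
    delta_ET = intercept                                     # about -20 permil here

    # Two-member mixing: fraction of ET due to transpiration, given the
    # chamber-based delta_T and soil-probe delta_E end-members (illustrative).
    delta_T, delta_E = -12.0, -40.0
    f_T = (delta_ET - delta_E) / (delta_T - delta_E)
    print(f"T/ET = {f_T:.2f}")                               # about 0.71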
Rule groupings in expert systems using nearest neighbour decision rules, and convex hulls
NASA Technical Reports Server (NTRS)
Anastasiadis, Stergios
1991-01-01
Expert system shells are lacking in many areas of software engineering. Large rule-based systems are not semantically comprehensible, difficult to debug, and impossible to modify or validate. Partitioning a set of rules found in CLIPS (C Language Integrated Production System) into groups of rules which reflect the underlying semantic subdomains of the problem will adequately address the concerns stated above. Techniques are introduced to structure a CLIPS rule base into groups of rules that inherently have common semantic information. The concepts involved are imported from the fields of A.I., pattern recognition, and statistical inference. Techniques focus on the areas of feature selection, classification, and criteria for how 'good' the classification technique is, based on Bayesian decision theory. A variety of distance metrics are discussed for measuring the 'closeness' of CLIPS rules, and various nearest neighbor classification algorithms are described based on these metrics.
Iron Partitioning in Ferropericlase and Consequences for the Magma Ocean.
NASA Astrophysics Data System (ADS)
Braithwaite, J. W. H.; Stixrude, L. P.; Holmstrom, E.; Pinilla, C.
2016-12-01
The relative buoyancy of crystals and liquid is likely to exert a strong influence on the thermal and chemical evolution of the magma ocean. Theory indicates that liquids approach, but do not exceed, the density of iso-chemical crystals in the deep mantle. The partitioning of heavy elements, such as Fe, is therefore likely to control whether crystals sink or float. While some experimental results exist, our knowledge of silicate liquid-crystal element partitioning in the deep mantle is still limited. We have developed a method for computing the Mg-Fe partitioning of Fe in such systems. We have focused initially on ferropericlase, as a relatively simple system where the buoyancy effects of Fe partitioning are likely to be large. The method is based on molecular dynamics driven by density functional theory (spin polarized, PBEsol+U). We compute the free energy of Mg-for-Fe substitution in simulations of liquid and B1 crystalline phases via adiabatic switching. We investigate the dependence of partitioning on pressure, temperature, and iron concentration. We find that the liquid is denser than the coexisting crystalline phase at all conditions studied. We also find that the high-spin to low-spin transitions in the crystal and the liquid have an important influence on partitioning behavior.
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data. © 2017 Yufei Gao et al.
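A minimal sketch of the two-stage idea behind PTSH (hash keys into many small virtual partitions, then pack virtual partitions onto reducers greedily; the real PTSH operates inside Hadoop's shuffle and handles recombination differently, and the key mix below is invented):

    import zlib
    from collections import Counter

    def two_stage_partition(keys, n_virtual, n_reducers):
        # Stage 1: hash every key into one of many small virtual partitions.
        sizes = Counter(zlib.crc32(k.encode()) % n_virtual for k in keys)
        # Stage 2 (tuning): assign virtual partitions to reducers largest-first,
        # always onto the currently least-loaded reducer, to even out skew.
        load = [0] * n_reducers
        assign = {}
        for vp, size in sizes.most_common():
            r = load.index(min(load))
            assign[vp] = r
            load[r] += size
        return assign, load

    keys = ["hot"] * 5000 + ["warm"] * 300 + [str(i) for i in range(2000)]  # skewed
    assign, load = two_stage_partition(keys, n_virtual=64, n_reducers=4)
    print(load)   # per-reducer record counts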
Can texture analysis of tooth microwear detect within-guild niche partitioning in extinct species?
NASA Astrophysics Data System (ADS)
Purnell, Mark; Nedza, Christopher; Rychlik, Leszek
2017-04-01
Recent work shows that tooth microwear analysis can be applied further back in time and deeper into the phylogenetic history of vertebrate clades than previously thought (e.g. niche partitioning in early Jurassic insectivorous mammals; Gill et al., 2014, Nature). Furthermore, quantitative approaches to analysis based on parameterization of surface roughness are increasing the robustness and repeatability of this widely used dietary proxy. Discriminating between taxa within dietary guilds has the potential to significantly increase our ability to determine resource use and partitioning in fossil vertebrates, but how sensitive is the technique? To address this question we analysed tooth microwear texture in sympatric populations of shrew species (Neomys fodiens, Neomys anomalus, Sorex araneus, Sorex minutus) from Białowieża Forest, Poland. These populations are known to exhibit varying degrees of niche partitioning (Churchfield & Rychlik, 2006, J. Zool.), with greatest overlap between the Neomys species. Sorex araneus also exhibits some niche overlap with N. anomalus, while S. minutus is the most specialised. Multivariate analysis based only on tooth microwear textures recovers the same pattern of niche partitioning. Our results also suggest that tooth textures track seasonal differences in diet. Projecting data from fossils into the multivariate dietary space defined using microwear from extant taxa demonstrates that the technique is capable of subtle dietary discrimination in extinct insectivores.
NASA Astrophysics Data System (ADS)
Cahyaningrum, Rosalia D.; Bustamam, Alhadi; Siswantining, Titin
2017-03-01
Microarray technology has become one of the essential tools in the life sciences for observing gene expression levels, including the expression of genes in people with carcinoma. Carcinoma is a cancer that forms in epithelial tissue. Such data can be analyzed, for example, to identify hereditary gene expression patterns and to build classifiers that can improve the diagnosis of carcinoma. Microarray data usually come in high dimension, so most methods require long computing times for grouping. This study therefore uses spectral clustering, which can work with arbitrary objects and reduces the dimension of the data. Spectral clustering is based on the spectral decomposition of a matrix representing the data as a graph. After the dimension of the data is reduced, the data are partitioned. One well-known partitioning method is Partitioning Around Medoids (PAM), which minimizes its objective function by iteratively exchanging non-medoid points with medoids until convergence. The objective of this research is to implement spectral clustering and the PAM partitioning algorithm to obtain groups of 7457 carcinoma genes based on similarity values. The result of this study is two groups of carcinoma genes.
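A compact sketch of this pipeline (a dense NumPy implementation with a Voronoi-iteration k-medoids standing in for full PAM swaps; the data here are random stand-ins, not the 7457-gene set):

    import numpy as np

    def spectral_embedding(X, k, gamma=1.0):
        """k leading eigenvectors of the normalized graph Laplacian of an
        RBF affinity graph; rows of the result are the reduced coordinates."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        A = np.exp(-gamma * d2)
        np.fill_diagonal(A, 0.0)
        dinv = 1.0 / np.sqrt(A.sum(axis=1))
        L = np.eye(len(X)) - dinv[:, None] * A * dinv[None, :]
        _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
        return vecs[:, :k]

    def k_medoids(X, k, iters=30, seed=0):
        """Simplified PAM: assign points to the nearest medoid, then move each
        medoid to the member minimizing the within-cluster distance sum."""
        rng = np.random.default_rng(seed)
        D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
        medoids = rng.choice(len(X), k, replace=False)
        for _ in range(iters):
            labels = np.argmin(D[:, medoids], axis=1)
            for j in range(k):
                members = np.where(labels == j)[0]
                if len(members):
                    medoids[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=0))]
        return labels

    genes = np.random.default_rng(2).normal(size=(200, 30))  # stand-in expression matrix
    clusters = k_medoids(spectral_embedding(genes, k=2), k=2)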
A Multi-Objective Partition Method for Marine Sensor Networks Based on Degree of Event Correlation.
Huang, Dongmei; Xu, Chenyixuan; Zhao, Danfeng; Song, Wei; He, Qi
2017-09-21
Existing marine sensor networks acquire data from sea areas that are geographically divided, and store the data independently in their affiliated sea area data centers. In the case of marine events spanning multiple sea areas, the current network structure must retrieve data from multiple data centers, which severely affects real-time decision making. In this study, in order to provide a fast data retrieval service for a marine sensor network, we use all the marine sensors as vertices, establish edges based on marine events, and abstract the marine sensor network as a graph. Then, we construct a multi-objective balanced partition method to partition the abstract graph into multiple regions and store them in a cloud computing platform. This method effectively increases the correlation of the sensors and decreases the retrieval cost. On this basis, an incremental optimization strategy is designed to dynamically optimize existing partitions when new sensors are added to the network. Experimental results show that the proposed method achieves the optimal layout for distributed storage in the process of disaster data retrieval in the China Sea area, and effectively optimizes the result of partitions when new buoys are deployed, eventually providing efficient data access for marine events.
Text vectorization based on character recognition and character stroke modeling
NASA Astrophysics Data System (ADS)
Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao
2014-03-01
In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. It is based on the observation that, for a particular character, font glyphs may have different shapes but often share the same stroke structures. Like many other methods, the proposed algorithm contains two procedures: dominant point determination and data fitting. The first partitions the outlines into segments, and the second fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) is determined offline, specifying for each character the number of major and minor dominant points and, for each dominant point, the detection and fitting parameters (projection directions, boundary conditions and smoothness). For minor points, multiple sets of parameters can be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected by a maximization process as specified by the parameter set. For minor points, an additional step can be performed to test competing hypotheses and detect degenerate cases.
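A minimal sketch of the data-fitting step, under an assumed chord-length parameterization: fit one cubic Bezier curve to an outline segment between two dominant points, interpolating the endpoints and solving the two inner control points by least squares. This is a generic formulation, not the paper's exact one.

```python
import numpy as np

def fit_cubic_bezier(points):
    pts = np.asarray(points, dtype=float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]                         # chord-length parameters in [0, 1]
    b0 = (1 - t) ** 3
    b1 = 3 * t * (1 - t) ** 2
    b2 = 3 * t ** 2 * (1 - t)
    b3 = t ** 3
    p0, p3 = pts[0], pts[-1]
    # Solve A @ [p1; p2] = rhs for the two free control points.
    A = np.c_[b1, b2]
    rhs = pts - np.outer(b0, p0) - np.outer(b3, p3)
    (p1, p2), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.vstack([p0, p1, p2, p3])

segment = [(0, 0), (1, 2), (2, 3), (4, 3.5), (6, 3)]   # toy outline segment
print(fit_cubic_bezier(segment))
```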
Zhou, Shu; Li, Guo-Bo; Huang, Lu-Yi; Xie, Huan-Zhang; Zhao, Ying-Lan; Chen, Yu-Zong; Li, Lin-Li; Yang, Sheng-Yong
2014-08-01
Drug-induced ototoxicity is a toxic side effect that must be considered in drug discovery. Nevertheless, current experimental methods used to evaluate drug-induced ototoxicity are often time-consuming and expensive, making them unsuitable for large-scale evaluation in the early stage of drug discovery. In this investigation, we therefore established an effective computational prediction model of drug-induced ototoxicity using an optimal support vector machine (SVM) method, GA-CG-SVM. Three GA-CG-SVM models were developed based on three training sets containing agents bearing different risk levels of drug-induced ototoxicity. For comparison, models based on naïve Bayesian (NB) and recursive partitioning (RP) methods were also built on the same training sets. Among all the prediction models, the GA-CG-SVM model II showed the best performance, offering prediction accuracies of 85.33% and 83.05% on two independent test sets, respectively. Overall, the good performance of the GA-CG-SVM model II indicates that it could be used for the prediction of drug-induced ototoxicity in the early stage of drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
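A hedged stand-in for the GA-CG-SVM workflow: the paper tunes SVM hyperparameters with a genetic algorithm, whereas this sketch substitutes plain cross-validated grid search, so it shows the shape of the task rather than the authors' method. The descriptor matrix `X` and ototoxicity labels `y` are synthetic.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))               # molecular descriptors (toy)
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # ototoxic vs. non-ototoxic (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
grid = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
    cv=5,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_te, y_te))
```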
Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.
Garnavi, Rahil; Aldeen, Mohammad; Bailey, James
2012-11-01
This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is performed with four classifiers, namely Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained by complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
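A minimal sketch of ranked feature selection. The paper uses the Gain-Ratio criterion; mutual information from scikit-learn is used here as a rough stand-in, so the scoring function is an assumption, not the authors' exact measure, and the feature matrix is synthetic.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(289, 40))        # texture+border+geometry features (toy)
y = rng.integers(0, 2, size=289)      # malignant=1 / benign=0 (toy)

scores = mutual_info_classif(X, y, random_state=0)
top23 = np.argsort(scores)[::-1][:23]  # keep the 23 best-ranked features
X_selected = X[:, top23]
print("selected feature indices:", sorted(top23.tolist()))
```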
Electoral Susceptibility and Entropically Driven Interactions
NASA Astrophysics Data System (ADS)
Caravan, Bassir; Levine, Gregory
2013-03-01
In the United States electoral system the election is usually decided by the electoral votes cast by a small number of ``swing states'' where the two candidates historically have roughly equal probabilities of winning. The effective value of a swing state is determined not only by the number of its electoral votes but by the frequency of its appearance in the set of winning partitions of the electoral college. Since the electoral vote values of swing states are not identical, the presence or absence of a state in a winning partition is generally correlated with the frequency of appearance of other states and, hence, their effective values. We quantify the effective value of states by an electoral susceptibility, χj, the variation of the winning probability with the ``cost'' of changing the probability of winning state j. Associating entropy with the logarithm of the number of appearances of a state within the set of winning partitions, the entropy per state (in effect, the chemical potential) is not additive and the states may be said to ``interact.'' We study χj for a simple model with a Zipf's law type distribution of electoral votes. We show that the susceptibility for small states is largest in ``one-sided'' electoral contests and smallest in close contests. This research was supported by Department of Energy DE-FG02-08ER64623, Research Corporation CC6535 (GL) and HHMI Scholar Program (BC)
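A toy illustration (my construction, not the paper's model) of counting the "winning partitions" of an electoral college by subset-sum dynamic programming: the number of winning state subsets that contain a given state is a proxy for that state's effective value.

```python
from itertools import count

votes = [9, 6, 5, 4, 3, 2, 1]          # Zipf-like electoral vote values (toy)
total = sum(votes)
need = total // 2 + 1                  # simple majority wins

def winning_subsets_containing(j):
    """Count subsets of states that win and include state j."""
    others = [v for i, v in enumerate(votes) if i != j]
    # dp[s] = number of subsets of `others` with vote sum exactly s
    dp = [0] * (total + 1)
    dp[0] = 1
    for v in others:
        for s in range(total - v, -1, -1):
            dp[s + v] += dp[s]
    # adding state j pushes any sum >= need - votes[j] over the threshold
    return sum(dp[max(0, need - votes[j]):])

counts = [winning_subsets_containing(j) for j in range(len(votes))]
print(counts)   # a larger count suggests a larger effective value of the state
```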
Liang, Chao; Han, Shu-ying; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin
2014-11-01
A strategy to utilize neutral model compounds for lipophilicity measurement of ionizable basic compounds by reversed-phase high-performance liquid chromatography is proposed in this paper. The applicability of the novel protocol was justified by theoretical derivation. Meanwhile, the linear relationships between the logarithm of apparent n-octanol/water partition coefficients (logKow″) and the logarithm of retention factors corresponding to the 100% aqueous fraction of the mobile phase (logkw) were established for a basic training set, a neutral training set and a mixed training set of these two. As proved in theory, the good linearity and external validation results indicated that the logKow″-logkw relationships obtained from a neutral model training set were always reliable regardless of mobile phase pH. Afterwards, the above relationships were adopted to determine the logKow of harmaline, a weakly dissociable alkaloid. As far as we know, this is the first report of experimental logKow data for harmaline (logKow = 2.28 ± 0.08). Introducing neutral compounds into a basic model training set or using neutral model compounds alone is recommended to measure the lipophilicity of weakly ionizable basic compounds, especially those with high hydrophobicity, for the advantages of more suitable model compound choices and convenient mobile phase pH control. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
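A minimal sketch, with made-up numbers, of the kind of logKow-logkw calibration line described above: fit the line on a training set of model compounds, then read an unknown compound's logKow off the fitted relationship.

```python
import numpy as np

logkw = np.array([0.8, 1.4, 2.1, 2.9, 3.6])   # training set retention (toy)
logKow = np.array([1.1, 1.9, 2.6, 3.5, 4.2])  # known partition coeffs (toy)

slope, intercept = np.polyfit(logkw, logKow, 1)
print(f"logKow = {slope:.3f} * logkw + {intercept:.3f}")

logkw_unknown = 1.75                           # e.g. measured for an alkaloid
print("predicted logKow:", slope * logkw_unknown + intercept)
```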
Identifying finite-time coherent sets from limited quantities of Lagrangian data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Matthew O.; Rypina, Irina I.; Rowley, Clarence W.
A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that “leak” from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, “data rich” test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or “mesh-free” methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: the first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.
19. Interior view showing flight simulator partition and rear overhead ...
19. Interior view showing flight simulator partition and rear overhead door, dock no. 493. View to south. - Offutt Air Force Base, Looking Glass Airborne Command Post, Nose Docks, On either side of Hangar Access Apron at Northwest end of Project Looking Glass Historic District, Bellevue, Sarpy County, NE
Computational strategy for quantifying human pesticide exposure based upon a saliva measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timchalk, Charles; Weber, Thomas J.; Smith, Jordan N.
The National Research Council of the National Academies report, Toxicity Testing in the 21st Century: A Vision and Strategy, highlighted the importance of quantitative exposure data for evaluating human toxicity risk and noted that biomonitoring is a critical tool for quantitatively evaluating exposure in both environmental and occupational settings. Direct measurement of chemical exposures using personal monitoring provides the most accurate estimation of a subject's true exposure, and non-invasive methods have also been advocated for quantifying the pharmacokinetics and bioavailability of drugs and xenobiotics. In this regard, there is a need to identify chemicals that are readily cleared in saliva at concentrations that can be quantified to support the implementation of this approach. The current manuscript describes the use of computational modeling approaches, closely coupled to in vivo and in vitro experiments, to predict salivary uptake and clearance of xenobiotics. The primary mechanism by which xenobiotics leave the blood and enter saliva is thought to involve paracellular transport, passive transcellular diffusion, or transcellular active transport, with the majority of drugs and xenobiotics cleared from plasma into saliva by passive diffusion. The transcellular or paracellular diffusion of unbound chemicals from plasma to saliva has been computationally modeled using a combination of compartmental and physiologically based approaches. Of key importance for determining the plasma:saliva partitioning was the utilization of a modified Schmitt algorithm that calculates partitioning based upon tissue composition, pH, chemical pKa and plasma protein binding. Sensitivity analysis of key model parameters identified that both protein binding and pKa (for weak acids and bases) had the most significant impact on the determination of partitioning, and that there were clear species-dependent differences based upon physiological variance between rats and humans. Ongoing efforts are focused on extending this modeling strategy to an in vitro salivary acinar cell based system that will be utilized to experimentally determine and computationally predict salivary gland uptake and clearance for a broad range of xenobiotics. Hence, it is envisioned that a combination of salivary biomonitoring and computational modeling will enable the non-invasive measurement of both environmental and occupational exposure in human populations using saliva.
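A hedged sketch of classical pH-partition (Henderson-Hasselbalch) reasoning for the saliva:plasma ratio of a weak base. The abstract's actual model is a modified Schmitt algorithm that also accounts for tissue composition, so this is only the textbook approximation it builds on; all parameter values are invented.

```python
def saliva_plasma_ratio_weak_base(pKa, pH_plasma=7.4, pH_saliva=6.8,
                                  fu_plasma=0.2, fu_saliva=1.0):
    """Only the un-ionized, unbound fraction diffuses passively."""
    ionized_s = 1 + 10 ** (pKa - pH_saliva)   # base: ionized below its pKa
    ionized_p = 1 + 10 ** (pKa - pH_plasma)
    return (ionized_s / ionized_p) * (fu_plasma / fu_saliva)

# Example: a weak base with pKa 8.0 that is 80% plasma protein bound.
print(round(saliva_plasma_ratio_weak_base(8.0), 3))
```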
Song, Youyi; Zhang, Ling; Chen, Siping; Ni, Dong; Lei, Baiying; Wang, Tianfu
2015-10-01
In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale-invariant features and then to segment regions centered at each pixel. The coarse segmentation is refined by an automated graph-partitioning method based on the pretrained features. The texture, shape, and contextual information of the target objects are learned to localize the appearance of distinctive boundaries, which is also exploited to generate markers for splitting touching nuclei. For further refinement of the segmentation, a coarse-to-fine nucleus segmentation framework is developed. The computational complexity of the segmentation is reduced by using superpixels instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical cell segmentation method delivers promising results and outperforms existing methods.
Measuring Constraint-Set Utility for Partitional Clustering Algorithms
NASA Technical Reports Server (NTRS)
Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato
2006-01-01
Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.
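A minimal sketch of the paper's idea that constraint sets can be scored before use: "informativeness" is approximated here as the fraction of must-link / cannot-link constraints violated by an unconstrained baseline clustering. The paper's exact definitions differ in detail; data and constraints are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

must_link = [(0, 1), (2, 60)]       # pairs that should share a cluster (toy)
cannot_link = [(0, 70), (3, 4)]     # pairs that should be separated (toy)

# Constraints the baseline already satisfies carry little new information.
violated = sum(labels[i] != labels[j] for i, j in must_link)
violated += sum(labels[i] == labels[j] for i, j in cannot_link)
informativeness = violated / (len(must_link) + len(cannot_link))
print("informativeness estimate:", informativeness)
```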
Regulation of the Demographic Structure in Isomorphic Biphasic Life Cycles at the Spatial Fine Scale
Vieira, Vasco Manuel Nobre de Carvalho da Silva; Mateus, Marcos Duarte
2014-01-01
Isomorphic biphasic algal life cycles often occur in the environment at ploidy abundance ratios (Haploid:Diploid) different from 1. This ratio varies spatially within populations in relation to intertidal height and hydrodynamic stress, possibly reflecting the niche partitioning, driven by diverging adaptation to the environment, argued to be necessary for their prevalence (evolutionary stability). Demographic models based on matrix algebra were developed to investigate which vital rates can efficiently generate H:D variability at a fine spatial resolution; time variation and type of life strategy were also taken into account. Ploidy dissimilarities in fecundity rates set an H:D spatial structure misfitting the ploidy fitness ratio. The same happened with ploidy dissimilarities in ramet growth whenever reproductive output dominated the population demography. Only through ploidy dissimilarities in looping rates (stasis, breakage and clonal growth) did the life cycle respond to a spatially heterogeneous environment, efficiently creating a niche partition. Marginal locations were more sensitive than central locations. Related results have been obtained experimentally and numerically for widely different life cycles from the plant and animal kingdoms. Spore dispersal smoothed the effects of ploidy dissimilarities in fertility and enhanced the effects of ploidy dissimilarities in looping rates. Ploidy dissimilarities in spore dispersal could also create the necessary niche partition, over both the space and time dimensions, even in spatially homogeneous environments and without the need for conditional differentiation of the ramets. Fine-scale spatial variability may be the key to the prevalence of isomorphic biphasic life cycles, which has been neglected so far. PMID:24658603
Neutron-neutron angular correlations in spontaneous fission of 252Cf and 240Pu
NASA Astrophysics Data System (ADS)
Verbeke, J. M.; Nakae, L. F.; Vogt, R.
2018-04-01
Background: Angular anisotropy has been observed between prompt neutrons emitted during the fission process. Such an anisotropy arises because the emitted neutrons are boosted along the direction of the parent fragment. Purpose: To measure the neutron-neutron angular correlations from the spontaneous fission of 252Cf and 240Pu oxide samples using a liquid scintillator array capable of pulse-shape discrimination, and to compare these correlations to simulations combining the Monte Carlo radiation transport code MCNPX with the fission event generator FREYA. Method: Two different analysis methods were used to study the neutron-neutron correlations with varying energy thresholds. The first is based on setting a light output threshold, while the second imposes a time-of-flight cutoff. The second method has the advantage of being truly detector independent. Results: The neutron-neutron correlation modeled by FREYA depends strongly on the sharing of the excitation energy between the two fragments. The measured asymmetry enabled us to adjust the FREYA parameter x in 240Pu, which controls the energy partition between the fragments and is so far inaccessible in other measurements. The 240Pu data in this analysis were the first available to quantify the energy partition for this isotope. The agreement between data and simulation is overall very good for 252Cf(sf) and 240Pu(sf). Conclusions: The asymmetry in the measured neutron-neutron angular distributions can be predicted by FREYA. The shape of the correlation function depends on how the excitation energy is partitioned between the two fission fragments. Experimental data suggest that the lighter fragment is disproportionately excited.
Clarity™ digital PCR system: a novel platform for absolute quantification of nucleic acids.
Low, Huiyu; Chan, Shun-Jie; Soo, Guo-Hao; Ling, Belinda; Tan, Eng-Lee
2017-03-01
In recent years, digital polymerase chain reaction (dPCR) has gained recognition in biomedical research as it provides a platform for precise and accurate quantification of nucleic acids without the need for a standard curve. However, this technology has not yet been widely adopted as compared to real-time quantitative PCR due to its more cumbersome workflow arising from the need to sub-divide a PCR sample into a large number of smaller partitions prior to thermal cycling to achieve zero or at least one copy of the target RNA/DNA per partition. A recently launched platform, the Clarity™ system from JN Medsys, simplifies dPCR workflow through the use of a novel chip-in-a-tube technology for sample partitioning. In this study, the performance of Clarity™ was evaluated through quantification of the single-copy human RNase P gene. The system demonstrated high precision and accuracy and also excellent linearity across a range of over 4 orders of magnitude for the absolute quantification of the target gene. Moreover, consistent DNA copy measurements were also attained using a panel of different probe- and dye-based master mixes, demonstrating the system's compatibility with commercial master mixes. The Clarity™ was then compared to the QX100™ droplet dPCR system from Bio-Rad using a set of DNA reference materials, and the copy number concentrations derived from both systems were found to be closely associated. Collectively, the results showed that Clarity™ is a reliable, robust and flexible platform for next-generation genetic analysis.
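A short, standard Poisson back-calculation used in digital PCR readouts (a generic illustration, not specific to the Clarity™ system): from the fraction of negative partitions, estimate the mean copies per partition and the sample concentration. The per-partition volume is an assumed figure.

```python
import math

total_partitions = 10_000
negative_partitions = 3_679          # partitions with no amplification signal
partition_volume_nl = 0.85           # assumed per-partition volume (nL)

p_negative = negative_partitions / total_partitions
lam = -math.log(p_negative)          # mean copies per partition (Poisson)
copies_per_ul = lam / (partition_volume_nl * 1e-3)   # nL -> uL
print(f"lambda = {lam:.3f} copies/partition, {copies_per_ul:.0f} copies/uL")
```

The Poisson correction is what lets partitions holding more than one copy still yield an accurate absolute count, without any standard curve.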
Braga, Laura; Diniz, Ivone Rezende
2015-06-01
Moths exhibit different levels of fidelity to habitat, and some taxa are considered bioindicators for conservation because they respond to habitat quality, environmental change, and vegetation types. In this study, we verified the effect of two phytophysiognomies of the Cerrado, savanna and forest, on the diversity distribution of moths of the families Erebidae (Arctiinae), Saturniidae, and Sphingidae by using a hierarchical additive partitioning analysis. This analysis was based on two metrics: species richness and the Shannon diversity index. The following questions were addressed: 1) Does the beta diversity of moths between phytophysiognomies add more species to the regional diversity than the beta diversity between sampling units and between sites? 2) Does the distribution of moth diversity differ among taxa? Alpha and beta diversities were compared with null models. The additive partitioning of species richness for the set of three Lepidoptera families identified beta diversity between phytophysiognomies as the component that contributed most to regional diversity, whereas the Shannon index identified alpha diversity as the major contributor. According to both species richness and the Shannon index, beta diversity between phytophysiognomies was significantly higher than expected by chance. Therefore, phytophysiognomies are the most important component in determining the richness and composition of the community. Additive partitioning also indicated that individual families of moths respond differently to the effect of habitat heterogeneity. The integrity of the Cerrado mosaic of phytophysiognomies plays a crucial role in maintaining moth biodiversity in the region. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
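A minimal sketch of additive diversity partitioning (gamma = mean alpha + beta) for one hierarchical level, with invented abundance data; the study's actual design (sites, sampling units, null-model tests) is more elaborate.

```python
import numpy as np

# rows = sampling units grouped by phytophysiognomy, cols = moth species
savanna = np.array([[5, 2, 0, 1], [4, 0, 1, 0]])
forest = np.array([[0, 1, 6, 3], [1, 0, 5, 2]])
units = np.vstack([savanna, forest])

richness = lambda counts: int((counts > 0).sum())
alpha = np.mean([richness(u) for u in units])   # mean within sampling units
gamma = richness(units.sum(axis=0))             # regional richness
beta = gamma - alpha                            # additive beta component
print(f"alpha={alpha:.2f}, beta={beta:.2f}, gamma={gamma}")
```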
DOE Office of Scientific and Technical Information (OSTI.GOV)
Satake, Shin-ichi; Kanamori, Hiroyuki; Kunugi, Tomoaki
2007-02-01
We have developed a parallel algorithm for micro-digital-holographic particle-tracking velocimetry. The algorithm is used for (1) numerical reconstruction of a particle image from a digital hologram, and (2) searching for particles. The numerical reconstruction from the digital hologram makes use of the Fresnel diffraction equation and the FFT (fast Fourier transform), whereas the particle search algorithm looks for local maxima of gradation in a reconstruction field represented by a 3D matrix. To achieve high-performance computing for both calculations (reconstruction and particle search), two memory partitions are allocated to the 3D matrix. In this matrix, the reconstruction part consists of horizontally placed 2D memory partitions on the x-y plane for the FFT, whereas the particle search part consists of vertically placed 2D memory partitions set along the z axis. Consequently, scalability proportional to the number of processor elements is obtained; benchmarks of the parallel computation were carried out on an SGI Altix machine.
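A compact sketch of single-FFT Fresnel reconstruction of one hologram plane (standard transfer-function-style propagation). Grid size, wavelength, pixel pitch, and distance are assumed toy values, and this is not the authors' parallel implementation.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pitch, z):
    n = hologram.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function H(fx, fy) for propagation distance z
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

holo = np.random.rand(256, 256)        # stand-in for a recorded hologram
field = fresnel_reconstruct(holo, wavelength=532e-9, pitch=3.45e-6, z=0.05)
intensity = np.abs(field) ** 2         # reconstruction slice at depth z
print(intensity.shape)
```

Stacking such slices over a range of z values yields the 3D matrix in which the particle search then looks for local intensity maxima.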
Shell use and partitioning of two sympatric species of hermit crabs on a tropical mudflat
NASA Astrophysics Data System (ADS)
Teoh, Hong Wooi; Chong, Ving Ching
2014-02-01
Shell use and partitioning of two sympatric hermit crab species (Diogenes moosai and Diogenes lopochir), as determined by shell shape, size and availability, were examined from August 2009 to March 2011 in a tropical mudflat (Malaysia). Shells of 14 gastropod species were used but > 85% comprised shells of Cerithidea cingulata, Nassarius cf. olivaceus, Nassarius jacksonianus, and Thais malayensis. Shell partitioning between hermit crab species, sexes, and developmental stages was evident from occupied shells of different species, shapes, and sizes. Extreme bias in shell use pattern by male and female of both species of hermit crabs suggests that shell shape, which depends on shell species, is the major determinant of shell use. The hermit crab must however fit well into the shell so that compatibility between crab size and shell size becomes crucial. Although shell availability possibly influenced shell use and hermit crab distribution, this is not critical in a tropical setting of high gastropod diversity and abundance.
Partitioning an object-oriented terminology schema.
Gu, H; Perl, Y; Halper, M; Geller, J; Kuo, F; Cimino, J J
2001-07-01
Controlled medical terminologies are increasingly becoming strategic components of various healthcare enterprises. However, the typical medical terminology can be difficult to exploit due to its extensive size and high density. The schema of a medical terminology offered by an object-oriented representation is a valuable tool in providing an abstract view of the terminology, enhancing comprehensibility and making it more usable. However, schemas themselves can be large and unwieldy. We present a methodology for partitioning a medical terminology schema into manageably sized fragments that promote increased comprehension. Our methodology has a refinement process for the subclass hierarchy of the terminology schema. The methodology is carried out by a medical domain expert in conjunction with a computer. The expert is guided by a set of three modeling rules, which guarantee that the resulting partitioned schema consists of a forest of trees. This makes it easier to understand and consequently use the medical terminology. The application of our methodology to the schema of the Medical Entities Dictionary (MED) is presented.
NASA Astrophysics Data System (ADS)
Gibbard, Philip L.; Lewin, John
2016-11-01
We review the historical purposes and procedures for stratigraphical division and naming within the Quaternary, and summarize the current requirements for formal partitioning through the International Commission on Stratigraphy (ICS). A raft of new data and evidence has impacted traditional approaches: quasi-continuous records from ocean sediments and ice cores, new numerical dating techniques, and alternative macro-models, such as those provided through Sequence Stratigraphy and Earth-System Science. The practical usefulness of division remains, but there is now greater appreciation of complex Quaternary detail and the modelling of time continua, the latter also extending into the future. There are problems both of commission (what is done, but could be done better) and of omission (what gets left out) in partitioning the Quaternary. These include the challenge set by the use of unconformities as stage boundaries, how to deal with multiphase records in ocean and terrestrial sediments, what happened at the 'Early-Mid- (Middle) Pleistocene Transition', dealing with trends that cross phase boundaries, and the current controversial focus on how to subdivide the Holocene and formally define an 'Anthropocene'.
An extended affinity propagation clustering method based on different data density types.
Zhao, XiuLi; Xu, WeiXiang
2015-01-01
The affinity propagation (AP) algorithm, a novel clustering method, does not require users to specify initial cluster centers in advance; it regards all data points equally as potential exemplars (cluster centers) and groups the clusters entirely by the degree of similarity among the data points. In many cases, however, there exist areas of different density within the same data set, meaning that the data are not homogeneously distributed. In such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. There are two steps in our method: first, the data set is partitioned into several data density types according to the nearest-neighbor distances of each data point; then, the AP clustering method is used to group the data points into clusters within each density type. Two experiments are carried out to evaluate the performance of our algorithm: one uses an artificial data set and the other a real seismic data set. The experimental results show that our algorithm obtains groups more accurately than OPTICS and the plain AP clustering algorithm.
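A hedged sketch of the two-step idea: split points into coarse "density types" by nearest-neighbor distance, then run scikit-learn's AffinityPropagation separately inside each type. The median threshold and the data are invented; the paper's exact partitioning rule may differ.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
dense = rng.normal(0, 0.3, (80, 2))
sparse = rng.normal(5, 1.5, (40, 2))
X = np.vstack([dense, sparse])

# 1-NN distance as a local density proxy (column 0 is the point itself)
nn_dist, _ = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X)
density_type = (nn_dist[:, 1] > np.median(nn_dist[:, 1])).astype(int)

labels = np.empty(len(X), dtype=int)
offset = 0
for t in (0, 1):                      # run AP within each density type
    idx = np.where(density_type == t)[0]
    ap = AffinityPropagation(random_state=0).fit(X[idx])
    labels[idx] = ap.labels_ + offset
    offset += ap.labels_.max() + 1
print("clusters found:", len(set(labels)))
```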
Demazure Modules, Fusion Products and Q-Systems
NASA Astrophysics Data System (ADS)
Chari, Vyjayanthi; Venkatesh, R.
2015-01-01
In this paper, we introduce a family of indecomposable finite-dimensional graded modules for the current algebra associated to a simple Lie algebra. These modules are indexed by an -tuple of partitions, where α varies over a set of positive roots of and we assume that they satisfy a natural compatibility condition. In the case when the are all rectangular, for instance, we prove that these modules are Demazure modules in various levels. As a consequence, we see that the defining relations of Demazure modules can be greatly simplified. We use this simplified presentation to relate our results to the fusion products, defined in (Feigin and Loktev in Am Math Soc Transl Ser (2) 194:61-79, 1999), of representations of the current algebra. We prove that the Q-system of (Hatayama et al. in Contemporary Mathematics, vol. 248, pp. 243-291, American Mathematical Society, Providence, 1998) extends to a canonical short exact sequence of fusion products of representations associated to certain special partitions. Finally, in the last section we deal with the case of and prove that the modules we define are just fusion products of irreducible representations of the associated current algebra and give monomial bases for these modules.
Optimal service distribution in WSN service system subject to data security constraints.
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong
2014-08-04
Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service-oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service-oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map a service request from the user into a set of atom-services (AS) and send them to independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of the WSA because the SNs can be of different and independent specifications. Through optimal partition of the service into ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm for optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. An experimental analysis is presented to demonstrate the feasibility of the suggested algorithm.
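A toy brute-force version (not the paper's UGF+GA algorithm) of the underlying assignment problem: distribute atom-services over sensor nodes to maximize service reliability while every hosting node satisfies a data-security constraint. All numbers are invented, and the series reliability model is a simplification.

```python
from itertools import product

node_reliability = [0.95, 0.90, 0.85]
node_security = [0.7, 0.9, 0.6]        # higher = more secure (toy scale)
n_atom_services = 4
min_security = 0.65                    # constraint on every hosting node

best = (0.0, None)
for assign in product(range(3), repeat=n_atom_services):
    if any(node_security[n] < min_security for n in assign):
        continue                       # violates the security constraint
    used = set(assign)                 # service fails if any used node fails
    rel = 1.0
    for n in used:
        rel *= node_reliability[n]
    if rel > best[0]:
        best = (rel, assign)
print("best reliability %.4f with assignment %s" % best)
```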
Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao
2018-01-09
River monitoring networks play an important role in water environment management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis, and Euclidean distance, with the Hun River taken as a validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH4+-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, so the optimized network correctly represented the original one. The degree of redundancy among monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the proposed optimization method was found to be feasible, efficient, and economical.
Votano, Joseph R; Parham, Marc; Hall, L Mark; Hall, Lowell H; Kier, Lemont B; Oloff, Scott; Tropsha, Alexander
2006-11-30
Four modeling techniques, using topological descriptors to represent molecular structure, were employed to produce models of human serum protein binding (% bound) on a data set of 1008 experimental values carefully screened from publicly available sources. To our knowledge, this is the largest data set on human serum protein binding reported for QSAR modeling. The data were partitioned into a training set of 808 compounds and an external validation test set of 200 compounds. Partitioning was accomplished by clustering the compounds in a structure descriptor space so that random sampling of 20% of the whole data set produced an external test set that is a good representative of the training set with respect to both structure and protein binding values. The four modeling techniques include multiple linear regression (MLR), artificial neural networks (ANN), k-nearest neighbors (kNN), and support vector machines (SVM). With the exception of the MLR model, the ANN, kNN, and SVM QSARs were ensemble models. Training set correlation coefficients and mean absolute errors ranged from r2=0.90 and MAE=7.6 for ANN to r2=0.61 and MAE=16.2 for MLR. Prediction results for the validation set yielded correlation coefficients and mean absolute errors ranging from r2=0.70 and MAE=14.1 for ANN to a low of r2=0.59 and MAE=18.3 for the SVM model. Structure descriptors that contribute significantly to the models are discussed and compared with those found in other published models. For the ANN model, structure descriptor trends with respect to their effects on predicted protein binding can assist the chemist in structure modification during the drug design process.
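A hedged sketch of the "cluster, then sample" split described above: cluster compounds in descriptor space and draw roughly 20% from each cluster so the test set spans the same structural space as the training set. The descriptors and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = rng.normal(size=(1008, 30))          # topological descriptors (toy)

clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
test_idx = []
for c in range(10):                      # sample ~20% from each cluster
    members = np.where(clusters == c)[0]
    take = max(1, int(round(0.2 * len(members))))
    test_idx.extend(rng.choice(members, size=take, replace=False))
train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
print(len(train_idx), "train /", len(test_idx), "test")
```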
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
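A minimal sketch of kmer-based distance estimation, the first element of the pipeline described above, using a generic fractional-common-kmer measure; MUSCLE's exact distance formula differs in detail.

```python
from collections import Counter

def kmer_distance(seq_a, seq_b, k=3):
    ka, kb = (Counter(s[i:i + k] for i in range(len(s) - k + 1))
              for s in (seq_a, seq_b))
    common = sum((ka & kb).values())          # shared kmer occurrences
    return 1.0 - common / min(sum(ka.values()), sum(kb.values()))

print(kmer_distance("MKVLITGAGG", "MKVLTTGSGG"))
```

Because no alignment is needed, such distances can be computed for thousands of sequences quickly, which is what makes the progressive stage tractable.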
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, M.F.; Hiemstra, T.; Riemsdijk, W. van
The need for qualitative and quantitative description of the chemical speciation of Al in particular, and of other metal ions in general, is stressed by the increased mobilization of metal ions in water and soils due to acid rain deposition. In this paper we present new data on Al binding to two humic acids. These new data sets and some previously published data will be analyzed with the NICA-Donnan model using one set of parameters to describe Al binding to the different humic substances. Once the experimental data are described with the NICA-Donnan approach, we will show the effect of Ca on Al binding and surface speciation, as well as the effect of Al on the charge of the humic particles. The parameters derived from the laboratory experiments will be used to describe the variation of the field-based Al partition coefficient.
NASA Astrophysics Data System (ADS)
Kochanov, R. V.; Gordon, I. E.; Rothman, L. S.; Wcisło, P.; Hill, C.; Wilzewski, J. S.
2016-07-01
The HITRAN Application Programming Interface (HAPI) is presented. HAPI is a free Python library that extends the capabilities of the HITRANonline interface (www.hitran.org) and can be used to filter and process the structured spectroscopic data. HAPI incorporates a set of tools for spectra simulation accounting for temperature, pressure, optical path length, and instrument properties. HAPI is intended to facilitate spectroscopic data analysis and spectra simulation based on line-by-line data, such as from the HITRAN database [JQSRT (2013) 130, 4-50], allowing the use of non-Voigt line profile parameters, custom temperature and pressure dependences, and partition sums. The HAPI functions allow the user to control the spectra simulation and data filtering process via a set of function parameters. HAPI can be obtained at its homepage www.hitran.org/hapi.
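A short usage sketch of HAPI based on its documented interface (function names follow the HAPI manual; the local folder name and the spectral range are arbitrary choices, and the fetch step requires network access to HITRANonline).

```python
from hapi import db_begin, fetch, absorptionCoefficient_Lorentz

db_begin("hapi_data")                 # local storage for downloaded tables
# Download H2O (molecule 1, isotopologue 1) lines for 3400-4100 cm-1
fetch("H2O", 1, 1, 3400, 4100)

# Simulate an absorption coefficient with Lorentz profiles at 296 K, 1 atm
nu, coef = absorptionCoefficient_Lorentz(
    SourceTables="H2O",
    Environment={"T": 296.0, "p": 1.0},
)
print(nu[:5], coef[:5])
```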
Phytochemistry of cimicifugic acids and associated bases in Cimicifuga racemosa root extracts.
Gödecke, Tanja; Nikolic, Dejan; Lankin, David C; Chen, Shao-Nong; Powell, Sharla L; Dietz, Birgit; Bolton, Judy L; van Breemen, Richard B; Farnsworth, Norman R; Pauli, Guido F
2009-01-01
Earlier studies reported serotonergic activity for cimicifugic acids (CA) isolated from Cimicifuga racemosa. The discovery of strongly basic alkaloids, cimipronidines, from the active extract partition and evaluation of previously employed work-up procedures has led to the hypothesis of strong acid/base association in the extract. Re-isolation of the CAs was desired to permit further detailed studies. Based on the acid/base association hypothesis, a new separation scheme of the active partition was required, which separates acids from associated bases. A new 5-HT(7) bioassay guided work-up procedure was developed that concentrates activity into one partition. The latter was subjected to a new two-step centrifugal partitioning chromatography (CPC) method, which applies pH zone refinement gradient (pHZR CPC) to dissociate the acid/base complexes. The resulting CA fraction was subjected to a second CPC step. Fractions and compounds were monitored by (1)H NMR using a structure-based spin-pattern analysis facilitating dereplication of the known acids. Bioassay results were obtained for the pHZR CPC fractions and for purified CAs. A new CA was characterised. While none of the pure CAs was active, the serotonergic activity was concentrated in a single pHZR CPC fraction, which was subsequently shown to contain low levels of the potent 5-HT(7) ligand, N(omega)-methylserotonin. This study shows that CAs are not responsible for serotonergic activity in black cohosh. New phytochemical methodology (pHZR CPC) and a sensitive dereplication method (LC-MS) led to the identification of N(omega)-methylserotonin as serotonergic active principle. Copyright (c) 2009 John Wiley & Sons, Ltd.
Multi-viewpoint clustering analysis
NASA Technical Reports Server (NTRS)
Mehrotra, Mala; Wild, Chris
1993-01-01
In this paper, we address the feasibility of partitioning rule-based systems into a number of meaningful units to enhance the comprehensibility, maintainability and reliability of expert systems software. Preliminary results have shown that no single structuring principle or abstraction hierarchy is sufficient to understand complex knowledge bases. We therefore propose the Multi View Point - Clustering Analysis (MVP-CA) methodology to provide multiple views of the same expert system. We present the results of using this approach to partition a deployed knowledge-based system that navigates the Space Shuttle's entry. We also discuss the impact of this approach on verification and validation of knowledge-based systems.
Interaction of Airspace Partitions and Traffic Flow Management Delay with Weather
NASA Technical Reports Server (NTRS)
Lee, Hak-Tae; Chatterji, Gano B.; Palopo, Kee
2011-01-01
The interaction of partitioning the airspace and delaying flights in the presence of convective weather is explored to study how re-partitioning the airspace can help reduce congestion and delay. Three approaches with varying complexities are employed to compute the ground delays. In the first approach, an airspace partition of 335 high-altitude sectors that is based on clear-weather-day traffic is used. Routes are then created to avoid regions of convective weather. With traffic flow management, this approach establishes the baseline with a per-flight delay of 8.4 minutes. In the second approach, traffic flow management is used to select routes and assign departure delays such that only the airport capacity constraints are met. This results in 6.7 minutes of average departure delay. The airspace is then partitioned with a specific capacity. It is shown that airspace-capacity-induced delay can be reduced to zero at a cost of 20 percent more sectors for the examined scenario.
NASA Astrophysics Data System (ADS)
Zhao, Hongshan; Li, Wei; Wang, Li; Zhou, Shu; Jin, Xuejun
2016-08-01
Two types of multiphase steels containing blocky or fine martensite have been used to study the phase interaction and the TRIP effect. These steels were obtained by step-quenching and partitioning (S-QP820) or intercritical-quenching and partitioning (I-QP800 & I-QP820). The retained austenite (RA) in the S-QP820 specimen containing blocky martensite transformed too early to prevent local failure at high strain caused by local strain concentration. In contrast, the plentiful RA in the I-QP800 specimen containing finely dispersed martensite transformed uniformly at high strain, which led to optimized strength and elongation. By applying a coordinate-conversion method to the microhardness test, the load partitioning between ferrite and partitioned martensite was shown to follow the linear mixture law. The mechanical behavior of the multiphase S-QP820 steel can be modeled based on the Mecking-Kocks theory, Bouquerel's spherical assumption, and a Gladman-type mixture law. Finally, the transformation-induced martensite hardening effect has been studied on a bake-hardened specimen.
Size-dependent forced PEG partitioning into channels: VDAC, OmpC, and α-hemolysin
Aksoyoglu, M. Alphan; Podgornik, Rudolf; Bezrukov, Sergey M.; Gurnev, Philip A.; Muthukumar, Murugappan; Parsegian, V. Adrian
2016-01-01
Nonideal polymer mixtures of PEGs of different molecular weights partition differently into nanosize protein channels. Here, we assess the validity of the recently proposed theoretical approach of forced partitioning for three structurally different β-barrel channels: voltage-dependent anion channel from outer mitochondrial membrane VDAC, bacterial porin OmpC (outer membrane protein C), and bacterial channel-forming toxin α-hemolysin. Our interpretation is based on the idea that relatively less-penetrating polymers push the more easily penetrating ones into nanosize channels in excess of their bath concentration. Comparison of the theory with experiments is excellent for VDAC. Polymer partitioning data for the other two channels are consistent with theory if additional assumptions regarding the energy penalty of pore penetration are included. The obtained results demonstrate that the general concept of “polymers pushing polymers” is helpful in understanding and quantification of concrete examples of size-dependent forced partitioning of polymers into protein nanopores. PMID:27466408
Design of a Dual Waveguide Normal Incidence Tube (DWNIT) Utilizing Energy and Modal Methods
NASA Technical Reports Server (NTRS)
Betts, Juan F.; Jones, Michael G. (Technical Monitor)
2002-01-01
This report investigates the partition design of the proposed Dual Waveguide Normal Incidence Tube (DWNIT). Advantages provided by the DWNIT include (1) assessment of coupling relationships between resonators in close proximity, (2) evaluation of "smart liners", (3) experimental validation of parallel-element models, and (4) investigation of the effects of simulated angles of incidence of acoustic waves. Energy models of the two chambers were developed to determine the Sound Pressure Level (SPL) drop across the two chambers, through the use of an intensity transmission function for the chambers' partition. The models allowed the chambers' lengthwise end samples to vary. The initial partition design (2" high, 16" long, 0.25" thick) was predicted to provide at least 160 dB of SPL drop across the partition with a compressive model, and at least 240 dB with a bending model using a damping loss factor of 0.01. The end-chamber sample transmission coefficients were set to 0.1. Since these results predicted more SPL drop than required, a plate thickness optimization algorithm was developed. The results of the algorithm indicated that a plate with the same height and length, but with a thickness of 0.1" and a structural damping loss factor of 0.05, would provide adequate SPL isolation between the chambers.
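A small worked check (my own arithmetic aid, not taken from the report) of the relation between an intensity transmission coefficient tau and the SPL drop it implies, dSPL = -10*log10(tau).

```python
import math

def spl_drop_db(tau):
    """SPL drop in dB for an intensity transmission coefficient tau."""
    return -10.0 * math.log10(tau)

# e.g. the end-sample transmission coefficient of 0.1 mentioned above
print(spl_drop_db(0.1))                       # 10 dB per such boundary
# a partition providing 160 dB corresponds to tau ~ 1e-16
print(f"tau for 160 dB: {10 ** (-160 / 10):.1e}")
```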
Effects of low urea concentrations on protein-water interactions.
Ferreira, Luisa A; Povarova, Olga I; Stepanenko, Olga V; Sulatskaya, Anna I; Madeira, Pedro P; Kuznetsova, Irina M; Turoverov, Konstantin K; Uversky, Vladimir N; Zaslavsky, Boris Y
2017-01-01
Solvent properties of aqueous media (dipolarity/polarizability, hydrogen bond donor acidity, and hydrogen bond acceptor basicity) were measured in the coexisting phases of Dextran-PEG aqueous two-phase systems (ATPSs) containing 0.5 and 2.0 M urea. The differences between the electrostatic and hydrophobic properties of the phases in the ATPSs were quantified by analysis of partitioning of the homologous series of sodium salts of dinitrophenylated amino acids with aliphatic alkyl side chains. Furthermore, partitioning of eleven different proteins in the ATPSs was studied. The analysis of protein partition behavior in a set of ATPSs with protective osmolytes (sorbitol, sucrose, trehalose, and TMAO) at the concentration of 0.5 M, in osmolyte-free ATPS, and in ATPSs with 0.5 or 2.0 M urea in terms of the solvent properties of the phases was performed. The results show unambiguously that even at the urea concentration of 0.5 M, this denaturant affects partitioning of all proteins (except concanavalin A) through direct urea-protein interactions and via its effect on the solvent properties of the media. The direct urea-protein interactions seem to prevail over the urea effects on the solvent properties of water at the concentration of 0.5 M urea and appear to be completely dominant at the 2.0 M urea concentration.
Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping
2015-10-20
Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, the primary material studied here, relative standard deviations of results among five data sets are 5%-22% for K and 42%-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to yield unique estimates and does not require assumed correlations of D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty for D and smaller uncertainty for K. However, considering the uncertainties associated with sampling and chemical analysis and the impact of environmental factors, the results are acceptable for engineering applications. This was supported by good agreement between model predictions and measurements. Sensitivity analysis indicated that effective diffusion distance, contact time of materials with primary sources, and depth of measured concentrations are critical for determining D, while PCB concentration in primary sources is critical for K.
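A hedged illustration of estimating D and K from a concentration-depth profile, assuming the classic semi-infinite-medium solution C(x, t) = K * C0 * erfc(x / (2 * sqrt(D * t))). The abstract notes that plain nonlinear regression struggles to give unique estimates, which motivates its C-depth method; this sketch only shows the governing model, with all numbers invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

C0 = 10.0                     # PCB concentration in the primary source (toy)
t = 30 * 365 * 24 * 3600.0    # ~30 years of contact, in seconds

def profile(x, D, K):
    return K * C0 * erfc(x / (2.0 * np.sqrt(D * t)))

depth = np.array([0.002, 0.005, 0.010, 0.020, 0.030])   # m into the concrete
true_D, true_K = 5e-13, 0.4
noise = 1 + 0.05 * np.random.default_rng(0).normal(size=depth.size)
conc = profile(depth, true_D, true_K) * noise            # synthetic data

(D_fit, K_fit), _ = curve_fit(profile, depth, conc, p0=(1e-12, 0.5))
print(f"D ~ {D_fit:.2e} m^2/s, K ~ {K_fit:.2f}")
```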
NASA Astrophysics Data System (ADS)
Klosterhalfen, Anne; Moene, Arnold; Schmidt, Marius; Ney, Patrizia; Graf, Alexander
2017-04-01
Source partitioning of eddy covariance (EC) measurements of CO2 into respiration and photosynthesis is routinely used for a better understanding of the exchange of greenhouse gases, especially between terrestrial ecosystems and the atmosphere. The most frequently used methods are usually based either on relations of fluxes to environmental drivers or on chamber measurements. However, they often depend strongly on assumptions or invasive measurements and do not usually offer partitioning estimates for latent heat fluxes into evaporation and transpiration. SCANLON and SAHU (2008) and SCANLON and KUSTAS (2010) proposed a promising method to estimate the contributions of transpiration and evaporation using measured high-frequency time series of CO2 and H2O fluxes - no extra instrumentation necessary. This method (SK10 in the following) is based on the spatial separation and relative strength of sources and sinks of CO2 and water vapor between the sub-canopy and canopy. Assuming that air from those sources and sinks is not yet perfectly mixed before reaching the EC sensors, partitioning is estimated based on the separate application of flux-variance similarity theory to the stomatal and non-stomatal components of the regarded fluxes, as well as on additional assumptions on stomatal water use efficiency (WUE). The CO2 partitioning method after THOMAS et al. (2008) (TH08 in the following) also follows the argument that the dissimilarities of sources and sinks in and below a canopy affect the relation between H2O and CO2 fluctuations. Instead of involving assumptions on WUE, TH08 directly screens the scattergram for signals of joint respiration and evaporation events and applies a conditional sampling methodology. In spite of their different main targets (H2O vs. CO2), both methods can yield partitioning estimates for both fluxes. We therefore compare various sub-methods of SK10 and TH08, including our own modifications (e.g., cluster analysis), to each other, to established source partitioning methods, and to chamber measurements at various agroecosystems. Further, profile measurements and a canopy-resolving Large Eddy Simulation model are used to test the assumptions involved in SK10. Scanlon, T.M., Kustas, W.P., 2010. Partitioning carbon dioxide and water vapor fluxes using correlation analysis. Agricultural and Forest Meteorology 150 (1), 89-99. Scanlon, T.M., Sahu, P., 2008. On the correlation structure of water vapor and carbon dioxide in the atmospheric surface layer: A basis for flux partitioning. Water Resources Research 44 (10), W10418, 15 pp. Thomas, C., Martin, J.G., Goeckede, M., Siqueira, M.B., Foken, T., Law, B.E., Loescher H.W., Katul, G., 2008. Estimating daytime subcanopy respiration from conditional sampling methods applied to multi-scalar high frequency turbulence time series. Agricultural and Forest Meteorology 148 (8-9), 1210-1229.
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition with projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required; the extraction of the corresponding spatial regions is based solely on algebra. The efficiency of the proposed approach is demonstrated numerically on an example relevant to engineering fracture. PMID:23750055
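To make the projection-based ingredient concrete (a minimal sketch under our own assumptions; the paper's domain-partitioned algorithm and fracture problem are not reproduced), the following Python fragment performs a Galerkin reduced-basis solve on a stand-in linear system:

```python
import numpy as np

# Minimal sketch of projection-based model order reduction (Galerkin
# projection onto a snapshot basis). Matrix, sizes, and snapshots are
# illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n, n_snap, r = 200, 20, 5

# Stand-in "full" operator: 1D Laplacian-like stiffness matrix.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Snapshots: full-order solutions for a few sampled right-hand sides.
snapshots = np.linalg.solve(A, rng.standard_normal((n, n_snap)))

# Reduced basis from the dominant left singular vectors (POD modes).
V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]

# Galerkin-reduced solve: an r x r system instead of an n x n one.
b = rng.standard_normal(n)
a = np.linalg.solve(V.T @ A @ V, V.T @ b)
u_rb = V @ a  # reduced-basis approximation of the full solution A^{-1} b
```

In a domain-partitioned variant, such a basis would be built and applied per subdomain, leaving the full-order model active only where damage localizes.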
The potential of cloud point system as a novel two-phase partitioning system for biotransformation.
Wang, Zhilong
2007-05-01
Although extractive biotransformation in two-phase partitioning systems has been studied extensively, in systems such as the water-organic solvent two-phase system, the aqueous two-phase system, the reverse micelle system, and the room-temperature ionic liquid system, this has not yet resulted in widespread industrial application. Starting from a discussion of the main obstacles, the exploitation of the cloud point system, which has already been applied in the separation field as cloud point extraction, as a novel two-phase partitioning system for biotransformation is reviewed through the analysis of some topical examples. At the end of the review, process control and downstream processing in the application of this novel two-phase partitioning system for biotransformation are also briefly discussed.
New polymers for low-gravity purification of cells by phase partitioning
NASA Technical Reports Server (NTRS)
Harris, J. M.
1983-01-01
A potentially powerful technique for separating different biological cell types is based on the partitioning of these cells between the immiscible aqueous phases formed by solution of certain polymers in water. This process is gravity-limited because cells sediment rather than associate with the phase most favored on the basis of cell-phase interactions. In the present contract we have been involved in the synthesis of new polymers both to aid in understanding the partitioning process and to improve the quality of separations. The prime driving force behind the design of these polymers is to produce materials which will aid in space experiments to separate important cell types and to study the partitioning process in the absence of gravity (i.e., in an equilibrium state).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janes, N.; Ma, L.; Hsu, J.W.
1992-01-01
The Meyer-Overton hypothesis--that anesthesia arises from the nonspecific action of solutes on membrane lipids--is reformulated using colligative thermodynamics. Configurational entropy, the randomness imparted by the solute through the partitioning process, is implicated as the energetic driving force that perturbs cooperative membrane equilibria. A proton NMR partitioning approach based on the anesthetic benzyl alcohol is developed to assess the reformulation. Ring resonances from the partitioned drug are shielded by 0.2 ppm and resolved from the free, aqueous drug. Free alcohol is quantitated in dilute lipid dispersions using an acetate internal standard. Cooperative equilibria in model dipalmitoyl lecithin membranes are examined with changes in temperature and alcohol concentration. The Lβ′ …
Partition-based discrete-time quantum walks
NASA Astrophysics Data System (ADS)
Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo
2018-04-01
We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Selecting one specific model among those families is then a matter of taste, not generality.
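As a concrete member of the family (a minimal sketch; the graph, coin, and sizes are our illustrative choices, not the paper's), the standard Hadamard coined walk on a cycle is driven by two operators, each local with respect to a different partition of the computational basis: the coin acts within position cells, the shift within the cells it permutes:

```python
import numpy as np

N = 8                                           # cycle graph with N vertices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard coin

# Basis state |x, c> lives at index 2*x + c (position x, coin c).
C = np.kron(np.eye(N), H)                       # coin: local on position cells
S = np.zeros((2 * N, 2 * N))                    # shift: local on edge cells
for x in range(N):
    S[2 * ((x + 1) % N), 2 * x] = 1.0           # coin 0 steps right
    S[2 * ((x - 1) % N) + 1, 2 * x + 1] = 1.0   # coin 1 steps left

U = S @ C                                       # one walk step
assert np.allclose(U.T @ U, np.eye(2 * N))      # unitary (real orthogonal here)
```

The two-step coined model discussed above corresponds to evolving with U squared rather than U.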
A set partitioning reformulation for the multiple-choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Voß, Stefan; Lalla-Ruiz, Eduardo
2016-05-01
The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community, as it can be easily translated to several real-world problems arising in areas such as resource allocation, reliability engineering, cognitive radio networks, and cloud computing. In this regard, an exact model that can provide high-quality feasible solutions on its own, or be partially included in algorithmic schemes, is desirable. The MMKP basically consists of finding a subset of objects, one from each class, that maximizes the total profit while observing capacity restrictions in several dimensions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation sheds new light on the problem itself and shows that the new model is able to improve on the best known results for some of the most common benchmark instances.
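For concreteness, a standard 0-1 formulation of the MMKP (notation ours, not necessarily the article's): exactly one object is chosen from each class G_i, subject to d knapsack dimensions:

\[
\begin{aligned}
\max\;& \sum_{i=1}^{m}\sum_{j \in G_i} p_{ij}\,x_{ij}\\
\text{s.t.}\;& \sum_{j \in G_i} x_{ij} = 1, && i = 1,\dots,m,\\
& \sum_{i=1}^{m}\sum_{j \in G_i} w_{ij}^{k}\,x_{ij} \le c^{k}, && k = 1,\dots,d,\\
& x_{ij} \in \{0,1\},
\end{aligned}
\]

where x_{ij} = 1 means object j of class G_i is selected, p_{ij} is its profit, and w_{ij}^k its weight in dimension k. The multiple-choice constraints already partition the selection by class, which is the structure the set partitioning reformulation exploits.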
Cumulants, free cumulants and half-shuffles
Ebrahimi-Fard, Kurusch; Patras, Frédéric
2015-01-01
Free cumulants were introduced as the proper analogue of classical cumulants in the theory of free probability. There is a mix of similarities and differences when one considers the two families of cumulants. Whereas the combinatorics of classical cumulants is well expressed in terms of set partitions, that of free cumulants is described, and often introduced, in terms of non-crossing set partitions. The formal series approaches to classical and free cumulants also differ substantially. The purpose of this study is to put forward a different approach to these phenomena. Namely, we show that cumulants, whether classical or free, can be understood in terms of the algebra and combinatorics underlying commutative as well as non-commutative (half-)shuffles and (half-)unshuffles. As a corollary, cumulants and free cumulants can be characterized through linear fixed point equations. We study the exponential solutions of these linear fixed point equations, which display well the commutative, respectively non-commutative, character of classical and free cumulants. PMID:27547078
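For orientation, the two moment-cumulant relations referred to above read, in standard notation (the paper's half-shuffle formalism is not reproduced here):

\[
m_n = \sum_{\pi \in P(n)} \prod_{B \in \pi} c_{|B|},
\qquad
m_n = \sum_{\pi \in NC(n)} \prod_{B \in \pi} \kappa_{|B|},
\]

where P(n) and NC(n) are the sets of all and of non-crossing partitions of {1, …, n}, m_n are the moments, and c_k and κ_k denote the classical and free cumulants, respectively.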
Silva, D F C; Azevedo, A M; Fernandes, P; Chu, V; Conde, J P; Aires-Barros, M R
2017-03-03
Aqueous two phase systems (ATPS) offer great potential for the selective separation of a wide range of biomolecules by exploiting differences in molecular solubility between the two immiscible phases. However, ATPS use has been limited by the difficulty of predicting the behavior of a given biomolecule in the partition environment, together with the empirical and time-consuming techniques used to determine partition and extraction parameters. In this work, a fast and novel technique based on a microfluidic platform and fluorescence microscopy was developed to determine the partition coefficients of biomolecules in different ATPS. The method uses a microfluidic device with a single microchannel and three inlets. Two of the inlets were loaded with solutions containing the ATPS-forming components, while the third inlet was fed with the FITC-tagged biomolecule of interest prepared in milli-Q water. Using fluorescence microscopy, it was possible to follow the location of the FITC-tagged biomolecule and, by simply varying the pumping rates of the solutions, to quickly test a wide variety of ATPS compositions. The ATPS is allowed 4 min for stabilization, and fluorescence micrographs are used to determine the partition coefficient. The partition coefficients obtained were shown to be consistent with results from macroscale ATPS partitioning. This process allows for faster screening of partition coefficients using only a few microliters of material for each ATPS composition and is amenable to automation. The partitioning behavior of several biomolecules with molecular weights (MW) ranging from 5.8 to 150 kDa and isoelectric points (pI) ranging from 4.7 to 6.4 was investigated, as was the effect of the molecular weight of the polymer ATPS component. Copyright © 2016 Elsevier B.V. All rights reserved.
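A minimal sketch of the image-based readout, under the assumption (stated in the abstract) that fluorescence intensity tracks the FITC-tagged biomolecule's concentration in each phase; the function name, arguments, and calibration below are illustrative, not the authors' code:

```python
import numpy as np

def partition_coefficient(img, top_mask, bottom_mask, background=0.0):
    """Estimate K = C_top / C_bottom from one fluorescence micrograph.

    The masks select pixels belonging to the two phase streams in the
    microchannel; a constant background is subtracted from both.
    """
    i_top = img[top_mask].mean() - background
    i_bottom = img[bottom_mask].mean() - background
    return i_top / i_bottom

# Toy usage with a synthetic 4x6 "micrograph": top phase brighter.
img = np.array([[8.0] * 6] * 2 + [[2.0] * 6] * 2)
top = np.zeros_like(img, dtype=bool)
top[:2] = True
print(partition_coefficient(img, top, ~top, background=1.0))  # -> 7.0
```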
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Dean T.; Coughlin, D. R.; Clarke, Kester D.; ...
2018-03-08
Here, the influence of Cr and Ni additions and quench and partition (Q&P) processing parameters on the microstructural development, including carbide formation and austenite retention during Q&P, was studied in two steels with a base composition of 0.2C-1.5Mn-1.3Si wt.% and additions of 1.5 wt.% Cr (1.5Cr) or Ni (1.5Ni). Additions of 1.5 wt.% Cr significantly slowed the kinetics of austenite decomposition relative to the 1.5Ni alloy at all partitioning temperatures, promoting greater austenite retention, lower retained austenite carbon (C) contents, and reduced sensitivity of the retained austenite amounts to processing variables. In the 1.5Cr alloy after partitioning at 400 °C for 300 s, η-carbides were identified by transmission electron microscopy (TEM), and atom probe tomography (APT) revealed no significant enrichment of substitutional elements in the carbides. In the 1.5Ni alloy after partitioning at 450 °C for 300 s, both plate-like and globular carbides were observed by TEM. APT analysis of the globular carbides clearly revealed significant Si rejection and Mn enrichment. Mössbauer effect spectroscopy was used to quantify the amount of carbides after Q&P. In general, carbide amounts below ~0.3% of Fe were measured in both alloys after partitioning for short times (10 s), irrespective of quench or partitioning temperature, which corresponds to a relatively small portion of the bulk C. With increasing partitioning time, carbide amounts remained approximately constant or increased, depending on the alloy, quench temperature, and/or partitioning temperature.
NASA Astrophysics Data System (ADS)
Good, Stephen P.; Soderberg, Keir; Guan, Kaiyu; King, Elizabeth G.; Scanlon, Todd M.; Caylor, Kelly K.
2014-02-01
The partitioning of surface vapor flux (FET) into evaporation (FE) and transpiration (FT) is theoretically possible because of distinct differences in end-member stable isotope composition. In this study, we combine high-frequency laser spectroscopy with eddy covariance techniques to critically evaluate isotope flux partitioning of FET over a grass field during a 15 day experiment. Following the application of a 30 mm water pulse, green grass coverage at the study site increased from 0 to 10% of ground surface area after 6 days and then began to senesce. Using isotope flux partitioning, transpiration increased as a fraction of total vapor flux from 0% to 40% during the green-up phase, after which this ratio decreased while exhibiting hysteresis with respect to green grass coverage. Daily daytime leaf-level gas exchange measurements compare well with daily isotope flux partitioning averages (RMSE = 0.0018 g m-2 s-1). Overall, the average ratio of FT to FET was 29%, and uncertainties in Keeling plot intercepts and in transpiration composition resulted in an average uncertainty of ~5% in our isotopic partitioning of FET. Flux-variance similarity partitioning was partially consistent with the isotope-based approach, with divergence occurring after rainfall and when the grass was stressed. Over the average diurnal cycle, local meteorological conditions, particularly net radiation and relative humidity, are shown to control partitioning. At longer time scales, green leaf area and available soil water control FT/FET. Finally, we demonstrate the feasibility of combining isotope flux partitioning and flux-variance similarity theory to estimate water use efficiency at the landscape scale.
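The underlying relation is the standard two end-member isotope mixing equation (notation ours): with δ_E and δ_T the isotopic compositions of the evaporation and transpiration end members and δ_ET that of the total flux (obtained here from Keeling plot intercepts),

\[
\frac{F_{T}}{F_{ET}} = \frac{\delta_{ET} - \delta_{E}}{\delta_{T} - \delta_{E}},
\]

so uncertainty in the Keeling intercept or in either end-member composition propagates directly into the partitioning estimate, consistent with the ~5% uncertainty quoted above.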
Total strain version of strainrange partitioning for thermomechanical fatigue at low strains
NASA Technical Reports Server (NTRS)
Halford, G. R.; Saltsman, J. F.
1987-01-01
A new method is proposed for characterizing and predicting the thermal fatigue behavior of materials. The method is based on three innovations in characterizing high temperature material behavior: (1) the bithermal concept of fatigue testing; (2) advanced, nonlinear, cyclic constitutive models; and (3) the total strain version of traditional strainrange partitioning.
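As a schematic illustration only (the coefficients and exponents below are generic placeholders, not the report's values), total strain methods of this kind combine elastic and inelastic strain range versus life relations into a single expression:

\[
\Delta\varepsilon_{\mathrm{tot}}
= \Delta\varepsilon_{\mathrm{el}} + \Delta\varepsilon_{\mathrm{in}}
= B\,N_f^{\,b} + C\,N_f^{\,c},
\]

with, in the total strain version of strainrange partitioning, the inelastic term's coefficient built from the partitioned creep-fatigue strainrange components, so that life N_f can be predicted even at low strains where the inelastic strain range is too small to measure directly.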
USDA-ARS?s Scientific Manuscript database
The thermal-based Two Source Energy Balance (TSEB) model partitions the water and energy fluxes from vegetation and soil components, thus providing the ability to estimate soil evaporation (E) and canopy transpiration (T) separately. However, it is crucial for ET partitioning to retrieve reliable ...
Collaborative efforts between EPA's Office of Water and Office of Research and Development have resulted in the development of sediment guidelines based on equilibrium partitioning theory (EqP). The guidance available includes a technical support document, describing the derivat...
THE DEVELOPMENT OF THE TEACHING SPACE DIVIDER.
ERIC Educational Resources Information Center
BELLOMY, CLEON C.; CAUDILL, WILLIAM W.
TYPES OF VERTICAL WORK SURFACES AND THE DEVELOPMENT OF A MODEL TEACHING SPACE DIVIDER ARE DISCUSSED IN THIS REPORT. THIS DESIGN IS BASED ON THE EXPRESSED NEED FOR MORE TACKBOARD AND SHELVING SPACE, AND FOR MOVABLE PARTITIONS. THE MODEL PANELS, WHICH SERVE DIRECTLY AS PARTITIONS RATHER THAN BEING OVERLAID ON A PLASTERED SURFACE, INCLUDE THE…