Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the EWB accuracy of 59.86%. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
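As a rough sketch of how one of these optimisers can tune partition boundaries, the hill climber below perturbs one cut point at a time and keeps any change that improves a scoring function. The `toy` objective is a stand-in for the rough-set classification accuracy computed from the discretised HIV data, which is not reproduced here; GA and SA would replace only the search loop.

```python
import random

def hill_climb(cuts, objective, step=0.05, iters=1000, seed=0):
    """Greedy local search over partition boundaries (cut points).

    `objective` maps a sorted list of cut points to a score, e.g. the
    classification accuracy of rough-set rules induced from data
    discretised with those cuts (stubbed below with a toy function).
    """
    rng = random.Random(seed)
    best = sorted(cuts)
    best_score = objective(best)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)   # nudge one boundary
        cand.sort()                           # boundaries must stay ordered
        score = objective(cand)
        if score > best_score:                # keep only improvements
            best, best_score = cand, score
    return best, best_score

# Toy objective standing in for rough-set classification accuracy:
toy = lambda cuts: -sum((c - t) ** 2 for c, t in zip(cuts, [0.25, 0.5, 0.75]))
print(hill_climb([0.2, 0.4, 0.9], toy))
```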
Impact of Surface Roughness and Soil Texture on Mineral Dust Emission Fluxes Modeling
NASA Technical Reports Server (NTRS)
Menut, Laurent; Perez, Carlos; Haustein, Karsten; Bessagnet, Bertrand; Prigent, Catherine; Alfaro, Stephane
2013-01-01
Dust production models (DPM) used to estimate vertical fluxes of mineral dust aerosols over arid regions need accurate data on soil and surface properties. The Laboratoire Inter-Universitaire des Systemes Atmospheriques (LISA) data set was developed for Northern Africa, the Middle East, and East Asia. This regional data set was built through dedicated field campaigns and includes, among others, the aerodynamic roughness length, the smooth roughness length of the erodible fraction of the surface, and the dry (undisturbed) soil size distribution. Recently, satellite-derived roughness length and high-resolution soil texture data sets at the global scale have emerged and provide the opportunity for the use of advanced schemes in global models. This paper analyzes the behavior of the ERS satellite-derived global roughness length and the State Soil Geographic database-Food and Agriculture Organization of the United Nations (STATSGO-FAO) soil texture data set (based on wet techniques) using an advanced DPM in comparison to the LISA data set over Northern Africa and the Middle East. We explore the sensitivity of the drag partition scheme (a critical component of the DPM) and of the dust vertical fluxes (intensity and spatial patterns) to the roughness length and soil texture data sets. We also compare the use of the drag partition scheme to a widely used preferential source approach in global models. Idealized experiments with prescribed wind speeds show that the ERS and STATSGO-FAO data sets provide realistic spatial patterns of dust emission and friction velocity thresholds in the region. Finally, we evaluate a dust transport model for the period of March to July 2011 with observed aerosol optical depths from Aerosol Robotic Network sites. Results show that ERS and STATSGO-FAO provide realistic simulations in the region.
Shear Stress Partitioning in Large Patches of Roughness in the Atmospheric Inertial Sublayer
NASA Technical Reports Server (NTRS)
Gillies, John A.; Nickling, William G.; King, James
2007-01-01
Drag partition measurements were made in the atmospheric inertial sublayer for six roughness configurations made up of solid elements in staggered arrays of different roughness densities. The roughness was in the form of a patch within a large open area and in the shape of an equilateral triangle with 60 m long sides. Measurements were obtained of the total shear stress (tau) acting on the surfaces, the surface shear stress on the ground between the elements (tau(sub S)) and the drag force on the elements for each roughness array. The measurements indicated that tau(sub S) decreased quickly near the leading edge of the roughness compared with tau, and a tau(sub S) minimum occurs at a normalized distance (x/h, where h is element height) of approx. -42 (downwind of the roughness leading edge is negative), then recovers to a relatively stable value. The location of the minimum appears to scale with element height and not roughness density. The force on the elements decreases exponentially with normalized downwind distance, and this rate of change scales with the roughness density, the rate of change increasing as roughness density increases. Average tau(sub S):tau values for the six roughness surfaces scale predictably as a function of roughness density and in accordance with a shear stress partitioning model. The shear stress partitioning model performed very well in predicting the amount of surface shear stress, given knowledge of the stated input parameters for these patches of roughness. As the shear stress partitioning relationship within the roughness appears to come into equilibrium faster for smaller roughness element sizes, it would also appear that the shear stress partitioning model can be applied with confidence to smaller patches of smaller roughness elements than those used in this experiment.
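For orientation, the kind of shear stress partitioning model the abstract invokes is often written, in a simplified Raupach (1992)-type form, as tau(sub S)/tau = 1/(1 + beta*lambda); the sketch below is illustrative and the default beta is an assumption, not a value from this study.

```python
def surface_stress_fraction(lam, beta=100.0):
    """Simplified Raupach-type drag partition: fraction of the total shear
    stress carried by the intervening surface, tau_S / tau = 1 / (1 + beta*lam),
    where lam is the roughness density (frontal area index) and beta the
    ratio of element to surface drag coefficients (illustrative value)."""
    return 1.0 / (1.0 + beta * lam)

for lam in (0.01, 0.05, 0.1):
    print(f"lambda = {lam}: tau_S/tau = {surface_stress_fraction(lam):.3f}")
```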
Adsorption of Phthalates on Impervious Indoor Surfaces.
Wu, Yaoxing; Eichler, Clara M A; Leng, Weinan; Cox, Steven S; Marr, Linsey C; Little, John C
2017-03-07
Sorption of semivolatile organic compounds (SVOCs) onto interior surfaces, often referred to as the "sink effect", and their subsequent re-emission significantly affect the fate and transport of indoor SVOCs and the resulting human exposure. Unfortunately, experimental challenges and the large number of SVOC/surface combinations have impeded progress in understanding sorption of SVOCs on indoor surfaces. An experimental approach based on a diffusion model was thus developed to determine the surface/air partition coefficient K of di-2-ethylhexyl phthalate (DEHP) on typical impervious surfaces including aluminum, steel, glass, and acrylic. The results indicate that surface roughness plays an important role in the adsorption process. Although larger data sets are needed, the ability to predict K could be greatly improved by establishing the nature of the relationship between surface roughness and K for clean indoor surfaces. Furthermore, different surfaces exhibit nearly identical K values after being exposed to kitchen grime with values that are close to those reported for the octanol/air partition coefficient. This strongly supports the idea that interactions between gas-phase DEHP and soiled surfaces have been reduced to interactions with an organic film. Collectively, the results provide an improved understanding of equilibrium partitioning of SVOCs on impervious surfaces.
NASA Astrophysics Data System (ADS)
Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.
2017-09-01
Recent advances in remote sensing have produced a great volume of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for effective processing, analysis and classification owing to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most are aimed at pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further to partition them into correctly and incorrectly classified regions on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree drawing on both the CNN and the MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested on an urban area in Bournemouth, United Kingdom. The MLP-CNN, capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way towards fully automatic and effective VHR image classification.
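A minimal sketch of the decision flow just described, with a simple confidence threshold standing in for the paper's rough-set characterization of CNN uncertainty; the probabilities and labels are synthetic, and scikit-learn's DecisionTreeClassifier is assumed as the reclassifier.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-pixel class probabilities from a CNN and an MLP
# for 1000 pixels and 4 land-cover classes, plus reference labels.
cnn_prob = rng.dirichlet(np.ones(4), size=1000)
mlp_prob = rng.dirichlet(np.ones(4), size=1000)
labels = rng.integers(0, 4, size=1000)

# Partition the map: pixels whose top CNN probability is high form the
# trusted ("correct") region; the rest form the uncertain region.
confident = cnn_prob.max(axis=1) > 0.7
pred = cnn_prob.argmax(axis=1)

# Reclassify only the uncertain region with a decision tree over stacked
# CNN + MLP probabilities, trained where reference labels are available.
features = np.hstack([cnn_prob, mlp_prob])
tree = DecisionTreeClassifier(max_depth=5).fit(features[confident],
                                               labels[confident])
pred[~confident] = tree.predict(features[~confident])
```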
NASA Technical Reports Server (NTRS)
King, James; Nickling, William G.; Gillies, John A.
2005-01-01
The presence of nonerodible elements is well understood to be a reducing factor for soil erosion by wind, but the limits of its protection of the surface and erosion threshold prediction are complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind driven particle fluxes are reduced because of the poor representation of the effectiveness of vegetation to reduce wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995) in their dust emission model parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partition of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to the incorporation of the roughness geometry and the alterations to the flow they can cause.
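For reference, the Marticorena and Bergametti (1995) parameterization mentioned above is commonly quoted as the drag partition (efficient fraction) formula below, with roughness lengths in cm; the sample values are illustrative only.

```python
import math

def mb95_efficiency(z0, z0s):
    """Marticorena & Bergametti (1995)-style drag partition: fraction of the
    friction velocity acting on the erodible surface,
        f_eff = 1 - ln(z0/z0s) / ln(0.35 * (10/z0s)**0.8),
    with the aerodynamic roughness length z0 and the smooth roughness
    length z0s both in cm (illustrative values below)."""
    return 1.0 - math.log(z0 / z0s) / math.log(0.35 * (10.0 / z0s) ** 0.8)

print(mb95_efficiency(z0=0.01, z0s=0.001))
```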
A Voxel-Based Filtering Algorithm for Mobile LIDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are partitioned in the xy-plane into a set of two-dimensional (2-D) blocks of a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, voxel-based upward growing is performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed through analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
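A minimal sketch of the two-level spatial partition described in the first step, assuming nothing about the curvature refinement: each point gets a coarse 2-D block index in the xy-plane and a fine 3-D voxel index; block and voxel sizes are illustrative.

```python
import numpy as np

def voxelize(points, block_size=50.0, voxel_size=0.5):
    """Two-level partition of an N x 3 point array: a coarse 2-D block
    index in the xy-plane and a fine 3-D voxel index (sizes in metres,
    illustrative). A real implementation would hang an octree off each block."""
    block_idx = np.floor(points[:, :2] / block_size).astype(int)
    voxel_idx = np.floor(points / voxel_size).astype(int)
    return block_idx, voxel_idx

pts = np.random.rand(1000, 3) * 100.0       # synthetic point cloud
blocks, voxels = voxelize(pts)
print(len(np.unique(blocks, axis=0)), "blocks,",
      len(np.unique(voxels, axis=0)), "occupied voxels")
```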
Variation in bed level shear stress on surfaces sheltered by nonerodible roughness elements
NASA Astrophysics Data System (ADS)
Sutton, Stephen L. F.; McKenna-Neuman, Cheryl
2008-09-01
Direct bed level observations of surface shear stress, pressure gradient variability, turbulence intensity, and fluid flow patterns were carried out in the vicinity of cylindrical roughness elements mounted in a boundary layer wind tunnel. Paired corkscrew vortices shed from each of the elements result in elevated shear stress and increased potential for the initiation of particle transport within the far wake. While the size and shape of these trailing vortices change with the element spacing, they persist even for large roughness densities. Wake interference coincides with the impingement of the upwind horseshoe vortices upon one another at a point when their diameter approaches half the distance between the roughness elements. While the erosive capability of the horseshoe vortex has been suggested for a variety of settings, the present study shows that the fluid stress immediately beneath this coherent structure is actually small in comparison to that caused by compression of the incident flow as it is deflected around the element and attached vortex. Observations such as these are required for further refinement of models of stress partitioning on rough surfaces.
Roughness configuration matters for aeolian sediment flux
USDA-ARS?s Scientific Manuscript database
The parameterisation of surface roughness effects on aeolian sediment transport is a key source of uncertainty in wind erosion models. Roughness effects are typically represented by bulk drag-partitioning schemes that scale the threshold friction velocity (u*t) for soil entrainment by the ratio of s...
NASA Astrophysics Data System (ADS)
King, James; Nickling, W. G.; Gillies, J. A.
2006-12-01
A field study was conducted to ascertain the amount of protection that mesquite-dominated communities provide to the surface from wind erosion. The dynamics of the locally accelerated evolution of a mesquite/coppice dune landscape and the undetermined spatial dependence of potential erosion by wind from a shear stress partition model were investigated. Sediment transport and dust emission processes are governed by the amount of protection that can be provided by roughness elements. Although shear stress partition models exist that can describe this, their accuracy has only been tested against a limited dataset because instrumentation has previously been unable to provide the necessary measurements. This study combines the use of meteorological towers and surface shear stress measurements with Irwin sensors to measure the partition of shear stress in situ. The surface shear stress within preferentially aligned vegetation (within coppice dune development) exhibited highly skewed distributions, while a more homogenous surface stress was recorded at a site with less developed coppice dunes. Above the vegetation, the logarithmic velocity profile deduced roughness length (based on 10-min averages) exhibited a distinct correlation with compass direction for the site with vegetation preferentially aligned, while the site with more homogenously distributed vegetation showed very little variation in the roughness length. This distribution in roughness length within an area, defines a distribution of a resolved shear stress partitioning model based on these measurements, ultimately providing potential closure to a previously uncorrelated model parameter.
San, Phyo Phyo; Ling, Sai Ho; Nuryani; Nguyen, Hung
2014-08-01
This paper focuses on a hybridization technique using rough set concepts and neural computing for decision and classification purposes. Based on rough set properties, the lower region and boundary region are defined to partition the input signal into a consistent (predictable) part and an inconsistent (random) part. In this way, the neural network is designed to deal only with the boundary region, which mainly consists of the inconsistent part of the applied input signal that causes inaccurate modeling of the data set. Owing to the differing characteristics of neural network (NN) applications, the same conventional NN structure might not give the optimal solution. Based on the knowledge of the application in this paper, a block-based neural network (BBNN) is selected as a suitable classifier due to its ability to evolve internal structures and its adaptability in dynamic environments. This architecture systematically incorporates the characteristics of the application into the structure of the hybrid rough-block-based neural network (R-BBNN). A global training algorithm, hybrid particle swarm optimization with wavelet mutation, is introduced for parameter optimization of the proposed R-BBNN. The performance of the proposed R-BBNN algorithm was evaluated in an application to medical diagnosis using real hypoglycemia episodes in patients with Type 1 diabetes mellitus. The performance of the proposed hybrid system has been compared with some existing neural networks. The comparison results indicate that the proposed method has improved classification performance and results in early convergence of the network.
Electoral Susceptibility and Entropically Driven Interactions
NASA Astrophysics Data System (ADS)
Caravan, Bassir; Levine, Gregory
2013-03-01
In the United States electoral system the election is usually decided by the electoral votes cast by a small number of "swing states" where the two candidates historically have roughly equal probabilities of winning. The effective value of a swing state is determined not only by the number of its electoral votes but by the frequency of its appearance in the set of winning partitions of the electoral college. Since the electoral vote values of swing states are not identical, the presence or absence of a state in a winning partition is generally correlated with the frequency of appearance of other states and, hence, their effective values. We quantify the effective value of states by an electoral susceptibility, χj, the variation of the winning probability with the "cost" of changing the probability of winning state j. Associating entropy with the logarithm of the number of appearances of a state within the set of winning partitions, the entropy per state (in effect, the chemical potential) is not additive and the states may be said to "interact." We study χj for a simple model with a Zipf's law type distribution of electoral votes. We show that the susceptibility for small states is largest in "one-sided" electoral contests and smallest in close contests. This research was supported by Department of Energy DE-FG02-08ER64623, Research Corporation CC6535 (GL) and HHMI Scholar Program (BC).
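A toy enumeration illustrating the central bookkeeping, the frequency with which each state appears in the set of winning partitions, for a hypothetical six-state college with a Zipf-like vote distribution (this sketch is not the paper's χj calculation, which also involves win probabilities and costs).

```python
from itertools import combinations

# Zipf-like electoral vote distribution over six hypothetical states.
votes = {s: max(1, round(20 / (i + 1))) for i, s in enumerate("ABCDEF")}
need = sum(votes.values()) // 2 + 1          # votes needed to win

# Enumerate all winning partitions (subsets reaching the threshold) and
# count each state's appearances, a crude "effective value".
wins = [set(c) for r in range(1, len(votes) + 1)
        for c in combinations(votes, r)
        if sum(votes[s] for s in c) >= need]
freq = {s: sum(s in w for w in wins) / len(wins) for s in votes}
print(votes)
print({s: round(f, 3) for s, f in freq.items()})
```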
A Novel Coarsening Method for Scalable and Efficient Mesh Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, A; Hysom, D; Gunney, B
2010-12-02
In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. It reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a similar way to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing a simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric.

Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in different iterative methods to solve a sparse system of linear equations. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry in the matrix, to allocate groups of vertices to processors in such a way that many of the matrix-vector multiplications can be performed locally on each processor, minimizing communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high-quality partitions. This is an extremely challenging task, as to scale to that level the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) to develop a new scalable graph partitioning method with good load balancing and communication reduction capability; (2) to study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a way similar to the conventional brick laying technique, which reduces the number of neighboring blocks each block needs to communicate with. The contributions of this research are as follows: (1) we have developed a novel method that scales to a really large problem size while producing high-quality mesh partitions; (2) we measured the performance and scalability of the proposed method on a machine of massive size using a set of actual large complex data sets, where we have scaled to a mesh with 110 million zones. To the best of our knowledge, this is the largest complex mesh to which a partitioning method has been successfully applied; and (3) we have shown that the proposed method can reduce the number of edge cuts by as much as 65%.
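A minimal sketch of the two quantities every k-way partitioner trades off, edge cut and load balance, on a small mesh-like grid graph with a naive two-way split; networkx is assumed for the graph structure.

```python
import networkx as nx

def edge_cut(G, part):
    """Number of edges whose endpoints land in different groups."""
    return sum(1 for u, v in G.edges if part[u] != part[v])

def imbalance(part, k):
    """Largest group size relative to the ideal |V|/k (1.0 is perfectly balanced)."""
    sizes = [list(part.values()).count(g) for g in range(k)]
    return max(sizes) * k / len(part)

G = nx.grid_2d_graph(8, 8)                   # a small structured "mesh"
part = {v: 0 if v[0] < 4 else 1 for v in G}  # naive geometric bisection
print("edge cut:", edge_cut(G, part), "imbalance:", imbalance(part, 2))
```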
Field theoretic approach to roughness corrections
NASA Astrophysics Data System (ADS)
Wu, Hua Yao; Schaden, Martin
2012-02-01
We develop a systematic field theoretic description of roughness corrections to the Casimir free energy of a massless scalar field in the presence of parallel plates with mean separation a. Roughness is modeled by specifying a generating functional for correlation functions of the height profile, the two-point correlation function being characterized by its variance, σ², and correlation length, ℓ. We obtain the partition function of a massless scalar quantum field interacting with the height profile of the surface via a δ-function potential. The partition function is given by a holographic reduction of this model to three coupled scalar fields on a two-dimensional plane. The original three-dimensional space with a flat parallel plate at a distance a from the rough plate is encoded in the nonlocal propagators of the surface fields on its boundary. Feynman rules for this equivalent 2+1-dimensional model are derived and its counterterms constructed. The two-loop contribution to the free energy of this model gives the leading roughness correction. The effective separation, a_eff, to a rough plate is measured to a plane that is displaced a distance ρ ∝ σ²/ℓ from the mean of its profile. This definition of the separation eliminates corrections to the free energy of order 1/a⁴ and results in unitary scattering matrices. We obtain an effective low-energy model in the limit ℓ ≪ a. It determines the scattering matrix and equivalent planar scattering surface of a very rough plate in terms of the single length scale ρ. The Casimir force on a rough plate is found to always weaken with decreasing correlation length ℓ. The two-loop approximation to the free energy interpolates between the free energy of the effective low-energy model and that of the proximity force approximation, with the force on a very rough plate with σ ≳ 0.5ℓ being weaker than on a planar Dirichlet surface at any separation.
A layer model of ethanol partitioning into lipid membranes.
Nizza, David T; Gawrisch, Klaus
2009-06-01
The effect of membrane composition on ethanol partitioning into lipid bilayers was assessed by headspace gas chromatography. A series of model membranes with different compositions have been investigated. Membranes were exposed to a physiological ethanol concentration of 20 mmol/l. The concentration of membranes was 20 wt% which roughly corresponds to values found in tissue. Partitioning depended on the chemical nature of polar groups at the lipid/water interface. Compared to phosphatidylcholine, lipids with headgroups containing phosphatidylglycerol, phosphatidylserine, and sphingomyelin showed enhanced partitioning while headgroups containing phosphatidylethanolamine resulted in a lower partition coefficient. The molar partition coefficient was independent of a membrane's hydrophobic volume. This observation is in agreement with our previously published NMR results which showed that ethanol resides almost exclusively within the membrane/water interface. At an ethanol concentration of 20 mmol/l in water, ethanol concentrations at the lipid/water interface are in the range from 30-15 mmol/l, corresponding to one ethanol molecule per 100-200 lipids.
NASA Astrophysics Data System (ADS)
Wilcox, Andrew C.; Nelson, Jonathan M.; Wohl, Ellen E.
2006-05-01
In step-pool stream channels, flow resistance is created primarily by bed sediments, spill over step-pool bed forms, and large woody debris (LWD). In order to measure resistance partitioning between grains, steps, and LWD in step-pool channels we completed laboratory flume runs in which total resistance was measured with and without grains and steps, with various LWD configurations, and at multiple slopes and discharges. Tests of additive approaches to resistance partitioning found that partitioning estimates are highly sensitive to the order in which components are calculated and that such approaches inflate the values of difficult-to-measure components that are calculated by subtraction from measured components. This effect is especially significant where interactions between roughness features create synergistic increases in resistance such that total resistance measured for combinations of resistance components greatly exceeds the sum of those components measured separately. LWD contributes large proportions of total resistance by creating form drag on individual pieces and by increasing the spill resistance effect of steps. The combined effect of LWD and spill over steps was found to dominate total resistance, whereas grain roughness on step treads was a small component of total resistance. The relative contributions of grain, spill, and woody debris resistance were strongly influenced by discharge and to a lesser extent by LWD density. Grain resistance values based on published formulas and debris resistance values calculated using a cylinder drag approach typically underestimated analogous flume-derived values, further illustrating sources of error in partitioning methods and the importance of accounting for interaction effects between resistance components.
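Written schematically in terms of Darcy-Weisbach friction factors, the additive partitioning hypothesis tested above reads as follows (a generic statement, not the authors' exact formulation); the study's central finding is that interactions among grains, steps, and LWD make the measured total exceed this simple sum.

```latex
f_{\mathrm{total}} \stackrel{?}{=} f_{\mathrm{grain}} + f_{\mathrm{spill}} + f_{\mathrm{LWD}},
\qquad
f = \frac{8\,g\,R\,S}{V^{2}},
```

where g is gravitational acceleration, R the hydraulic radius, S the energy slope, and V the mean velocity.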
Rough set classification based on quantum logic
NASA Astrophysics Data System (ADS)
Hassan, Yasser F.
2017-11-01
By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation, which is presented here. Theoretical analyses demonstrate that the new quantum rough set model yields a new type of decision rule with less redundancy, which can be used to give accurate classification using the principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than a logical or set-theoretic one. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.
On the star partition dimension of comb product of cycle and path
NASA Astrophysics Data System (ADS)
Alfarisi, Ridho; Darmaji
2017-08-01
Let G = (V, E) be a connected graph with vertex set V(G), edge set E(G) and S ⊆ V(G). Given an ordered partition Π = {S1, S2, S3, …, Sk} of the vertex set V of G, the representation of a vertex v ∈ V with respect to Π is the vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Sk) represents the distance between the vertex v and the set Sk, defined by d(v, Sk) = min{d(v, x)|x ∈ Sk}. A partition Π of V(G) is a resolving partition if different vertices of G have distinct representations, i.e., for every pair of vertices u, v ∈ V(G), r(u|Π) ≠ r(v|Π). The minimum k for which Π is a resolving partition is the partition dimension of G, denoted by pd(G). The resolving partition Π = {S1, S2, S3, …, Sk} is called a star resolving partition for G if it is a resolving partition and each subgraph induced by Si, 1 ≤ i ≤ k, is a star. The minimum k for which there exists a star resolving partition of V(G) is the star partition dimension of G, denoted by spd(G). Finding the star partition dimension of G is classified as an NP-hard problem. In this paper, we determine the star partition dimension of the comb products of a cycle and a path, namely Cm⊳Pn and Pn⊳Cm, for n ≥ 2 and m ≥ 3.
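The definitions above translate directly into a brute-force checker; this sketch assumes networkx for shortest-path distances and verifies a resolving partition on a small cycle (illustrative only, not the paper's proof technique).

```python
import networkx as nx

def representation(G, v, partition):
    """r(v|Pi) = (d(v, S1), ..., d(v, Sk)), with d(v, S) = min over x in S."""
    dist = nx.single_source_shortest_path_length(G, v)
    return tuple(min(dist[x] for x in S) for S in partition)

def is_resolving(G, partition):
    """A partition resolves G if all vertex representations are distinct."""
    reps = [representation(G, v, partition) for v in G.nodes]
    return len(set(reps)) == G.number_of_nodes()

G = nx.cycle_graph(6)
print(is_resolving(G, [{0}, {1, 2}, {3, 4, 5}]))   # True for this partition
```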
On Fuzzy Sets and Rough Sets from the Perspective of Indiscernibility
NASA Astrophysics Data System (ADS)
Chakraborty, Mihir K.
The category theoretic approach of Obtułowicz to Pawlak's rough sets has been reintroduced in a somewhat modified form. A generalization is rendered to this approach that has been motivated by the notion of rough membership function. Thus, a link is established between rough sets and L-fuzzy sets for some special lattices. It is shown that a notion of indistinguishability lies at the root of vagueness. This observation in turn gives a common ground to the theories of rough sets and fuzzy sets.
Application of Rough Sets to Information Retrieval.
ERIC Educational Resources Information Center
Miyamoto, Sadaaki
1998-01-01
Develops a method of rough retrieval, an application of the rough set theory to information retrieval. The aim is to: (1) show that rough sets are naturally applied to information retrieval in which categorized information structure is used; and (2) show that a fuzzy retrieval scheme is induced from the rough retrieval. (AEF)
Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution
Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen
2014-01-01
Polydimethylsiloxane (PDMS) is commonly used as the coated polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and the aqueous solution were compiled from the literature sources. The correlation analysis for partition coefficients was conducted to interpret the effect of their physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated to the polarizability of organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to appropriately predict the partition coefficients of 61 organic compounds for the training set. The predictive ability of the empirical model was demonstrated by using it on a test set of 26 chemicals not included in the training set. The empirical model, applying the straightforward calculated molecular descriptors, for estimating the PDMS-water partition coefficient will contribute to the practical applications of the SPME technique. PMID:24534804
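A minimal sketch of the single-descriptor version of such a correlation; the numbers below are hypothetical stand-ins for the compiled (polarizability, log K) pairs, and the published model additionally used the molecular connectivity index and an indicator variable.

```python
import numpy as np

# Hypothetical (polarizability, log K) pairs standing in for the data set.
polarizability = np.array([10.2, 12.6, 14.1, 15.8, 17.3, 19.0])
log_k = np.array([1.9, 2.4, 2.8, 3.1, 3.6, 4.0])

slope, intercept = np.polyfit(polarizability, log_k, 1)  # one-descriptor fit
r = np.corrcoef(polarizability, log_k)[0, 1]             # cf. r = 0.977 above
print(f"log K = {slope:.3f} * alpha + {intercept:.3f}, r = {r:.3f}")
```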
Hardware Index to Set Partition Converter
2013-01-01
NASA Astrophysics Data System (ADS)
Elbanna, A. E.
2013-12-01
Numerous field and experimental observations suggest that fault surfaces are rough at multiple scales and tend to produce a wide range of branch sizes, from micro-branching to large-scale secondary faults. The development and evolution of fault roughness and branching is believed to play an important role in rupture dynamics and energy partitioning. Previous work by several groups has succeeded in determining conditions under which a main rupture may branch into a secondary fault. Recently, great progress has been made in investigating rupture propagation on rough faults with and without off-fault plasticity. Nonetheless, in most of these models the heterogeneity, whether the roughness profile or the secondary fault orientation, was built into the system from the beginning, and consequently the final outcome depends strongly on the initial conditions. Here we introduce an adaptive mesh technique for modeling mode-II crack propagation on slip-weakening frictional interfaces. We use a finite element framework with random mesh topology that adapts to crack dynamics through element splitting and sequential insertion of frictional interfaces dictated by the failure criterion. This allows the crack to explore non-planar paths and develop the roughness profile that is most compatible with the dynamical constraints. It also enables crack branching at different scales. We quantify energy dissipation due to the roughening process and small-scale branching, and compare the results of our model to a reference case of propagation on a planar fault. We show that the small-scale processes of roughening and branching influence many characteristics of the rupture propagation, including the energy partitioning, rupture speed and peak slip rates. We also estimate the fracture energy that would be required for a crack propagating on a planar fault to produce comparable results. We anticipate that this modeling approach provides an attractive methodology that complements current efforts in modeling off-fault plasticity and damage.
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
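The criterion itself is simple to state in code; below is a minimal sketch of the WCSS objective together with one pass of the classic Lloyd (k-means) heuristic on synthetic data.

```python
import numpy as np

def wcss(X, labels, centroids):
    """Within-cluster sum of squared deviations from cluster centroids."""
    return sum(np.sum((X[labels == k] - c) ** 2)
               for k, c in enumerate(centroids))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
centroids = X[rng.choice(len(X), 3, replace=False)]   # seed from the data

# One Lloyd iteration: assign points to the nearest centroid, then update.
labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
centroids = np.array([X[labels == k].mean(axis=0) for k in range(3)])
print(wcss(X, labels, centroids))
```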
The Study of Imperfection in Rough Set on the Field of Engineering and Education
NASA Astrophysics Data System (ADS)
Sheu, Tian-Wei; Liang, Jung-Chin; You, Mei-Li; Wen, Kun-Li
Rough set theory overlaps with many other theories, especially fuzzy set theory, evidence theory and Boolean reasoning methods, and the rough set methodology has found many real-life applications, such as medical data analysis, finance, banking, engineering, voice recognition and image processing. To date, however, there has been little research on the imperfections of rough sets. Hence, the main purpose of this paper is to study the imperfections of rough sets in the fields of engineering and education. First, we review the mathematical model of rough sets and give two examples to illustrate our approach: the weighting of influence factors in a muzzle noise suppressor, and the weighting of evaluation factors in English learning. Second, we apply Matlab to develop a complete human-machine-interface toolbox to support the complex calculations and to verify the large data sets. Finally, some suggestions are offered for future research.
On the partition dimension of comb product of path and complete graph
NASA Astrophysics Data System (ADS)
Darmaji, Alfarisi, Ridho
2017-08-01
For a vertex v of a connected graph G(V, E) with vertex set V(G), edge set E(G) and S ⊆ V(G): given an ordered partition Π = {S1, S2, S3, …, Sk} of the vertex set V of G, the representation of a vertex v ∈ V with respect to Π is the vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Sk) represents the distance between the vertex v and the set Sk, defined by d(v, Sk) = min{d(v, x)|x ∈ Sk}. A partition Π of V(G) is a resolving partition if different vertices of G have distinct representations, i.e., for every pair of vertices u, v ∈ V(G), r(u|Π) ≠ r(v|Π). The minimum k for which Π is a resolving partition is the partition dimension of G, denoted by pd(G). Finding the partition dimension of G is classified as an NP-hard problem. In this paper, we determine the partition dimension of the comb product of a path and a complete graph. The results show that, for the comb product of the complete graph Km and the path Pn, pd(Km⊳Pn) = m where m ≥ 3 and n ≥ 2, and pd(Pn⊳Km) = m where m ≥ 3, n ≥ 2 and m ≥ n.
Clouds Versus Carbon: Predicting Vegetation Roughness by Maximizing Productivity
NASA Technical Reports Server (NTRS)
Olsen, Lola M.
2004-01-01
Surface roughness is one of the dominant vegetation properties that affects land surface exchange of energy, water, carbon, and momentum with the overlying atmosphere. We hypothesize that the canopy structure of terrestrial vegetation adapts optimally to climate by maximizing productivity, leading to an optimum surface roughness. An optimum should exist because increasing values of surface roughness cause increased surface exchange, leading to increased supply of carbon dioxide for photosynthesis. At the same time, increased roughness enhances evapotranspiration and cloud cover, thereby reducing the supply of photosynthetically active radiation. We demonstrate the optimum through sensitivity simulations using a coupled dynamic vegetation-climate model for present day conditions, in which we vary the value of surface roughness for vegetated surfaces. We find that the maximum in productivity occurs at a roughness length of 2 meters, a value commonly used to describe the roughness of today's forested surfaces. The sensitivity simulations also illustrate the strong climatic impacts of vegetation roughness on the energy and water balances over land: with increasing vegetation roughness, solar radiation is reduced by up to 20 W/sq m in the global land mean, causing shifts in the energy partitioning and leading to general cooling of the surface by 1.5 K. We conclude that the roughness of vegetated surfaces can be understood as a reflection of optimum adaptation, and it is associated with substantial changes in the surface energy and water balances over land. The role of the cloud feedback in shaping the optimum underlines the importance of an integrated perspective that views vegetation and its adaptive nature as an integrated component of the Earth system.
Allan Variance Calculation for Nonuniformly Spaced Input Data
2015-01-01
τ (tau). First, the set of gyro values is partitioned into bins of duration τ. For example, if the sampling duration τ is 2 sec and there are 4,000...Variance Calculation For each value of τ, the conventional AV calculation partitions the gyro data sets into bins with approximately τ / Δt...value of Δt. Therefore, a new way must be found to partition the gyro data sets into bins. The basic concept behind the modified AV calculation is
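For uniformly sampled data, the bin-partitioning computation the excerpt describes is the standard non-overlapping Allan variance; a minimal sketch follows (the report's extension to nonuniformly spaced samples is not reproduced here).

```python
import numpy as np

def allan_variance(y, dt, tau):
    """Non-overlapping Allan variance of uniformly sampled rate data.

    Partitions the samples into bins of duration tau (tau >= dt assumed),
    averages each bin, and returns half the mean squared difference of
    successive bin means."""
    m = int(tau / dt)                     # samples per bin
    n_bins = len(y) // m
    bins = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(bins) ** 2)

gyro = np.random.default_rng(0).normal(size=4000)   # synthetic rate samples
print(allan_variance(gyro, dt=0.5, tau=2.0))        # 4 samples per bin
```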
Watershed Complexity Impacts on Rainfall-Runoff Modeling
NASA Astrophysics Data System (ADS)
Goodrich, D. C.; Grayson, R.; Willgoose, G.; Palacios-Velez, O.; Bloeschl, G.
2002-12-01
Application of distributed hydrologic watershed models fundamentally requires watershed partitioning or discretization. In addition to partitioning the watershed into modeling elements, these elements typically represent a further abstraction of the actual watershed surface and its relevant hydrologic properties. A critical issue that must be addressed by any user of these models prior to their application is definition of an acceptable level of watershed discretization or geometric model complexity. A quantitative methodology to define a level of geometric model complexity commensurate with a specified level of model performance is developed for watershed rainfall-runoff modeling. In the case where watershed contributing areas are represented by overland flow planes, equilibrium discharge storage was used to define the transition from overland to channel dominated flow response. The methodology is tested on four subcatchments which cover a range of watershed scales of over three orders of magnitude in the USDA-ARS Walnut Gulch Experimental Watershed in Southeastern Arizona. It was found that distortion of the hydraulic roughness can compensate for a lower level of discretization (fewer channels) to a point. Beyond this point, hydraulic roughness distortion cannot compensate for the topographic distortion of representing the watershed by fewer elements (e.g. a less complex channel network). Similarly, differences in the representation of topography by different model or digital elevation model (DEM) types (e.g. triangulated irregular networks, TINs; contour lines; and regular grid DEMs) also result in differences in runoff routing responses that can be largely compensated for by a distortion in hydraulic roughness.
Can texture analysis of tooth microwear detect within guild niche partitioning in extinct species?
NASA Astrophysics Data System (ADS)
Purnell, Mark; Nedza, Christopher; Rychlik, Leszek
2017-04-01
Recent work shows that tooth microwear analysis can be applied further back in time and deeper into the phylogenetic history of vertebrate clades than previously thought (e.g. niche partitioning in early Jurassic insectivorous mammals; Gill et al., 2014, Nature). Furthermore, quantitative approaches to analysis based on parameterization of surface roughness are increasing the robustness and repeatability of this widely used dietary proxy. Discriminating between taxa within dietary guilds has the potential to significantly increase our ability to determine resource use and partitioning in fossil vertebrates, but how sensitive is the technique? To address this question we analysed tooth microwear texture in sympatric populations of shrew species (Neomys fodiens, Neomys anomalus, Sorex araneus, Sorex minutus) from Białowieża Forest, Poland. These populations are known to exhibit varying degrees of niche partitioning (Churchfield & Rychlik, 2006, J. Zool.) with greatest overlap between the Neomys species. Sorex araneus also exhibits some niche overlap with N. anomalus, while S. minutus is the most specialised. Multivariate analysis based only on tooth microwear textures recovers the same pattern of niche partitioning. Our results also suggest that tooth textures track seasonal differences in diet. Projecting data from fossils into the multivariate dietary space defined using microwear from extant taxa demonstrates that the technique is capable of subtle dietary discrimination in extinct insectivores.
Certificate Revocation Using Fine Grained Certificate Space Partitioning
NASA Astrophysics Data System (ADS)
Goyal, Vipul
A new certificate revocation system is presented. The basic idea is to divide the certificate space into several partitions, the number of partitions being dependent on the PKI environment. Each partition contains the status of a set of certificates. A partition may either expire or be renewed at the end of a time slot. This is done efficiently using hash chains.
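A minimal sketch of the hash-chain mechanism mentioned above, under the usual construction: the partition's status data embeds the chain head, and at time slot i the issuer releases the i-th preimage, which anyone can hash forward to verify; the function names and slot semantics here are illustrative.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_chain(seed: bytes, n_slots: int):
    """Return [H^n(seed), H^(n-1)(seed), ..., seed]; element 0 is the anchor."""
    chain = [seed]
    for _ in range(n_slots):
        chain.append(h(chain[-1]))
    return chain[::-1]

def verify(token: bytes, slot: int, anchor: bytes) -> bool:
    """A token released at slot i must hash forward to the anchor in i steps."""
    for _ in range(slot):
        token = h(token)
    return token == anchor

chain = build_chain(b"per-partition secret", 365)   # one slot per day
anchor = chain[0]                                   # published with the partition
print(verify(chain[30], 30, anchor))                # renewal token for day 30
```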
Intelligent System Development Using a Rough Sets Methodology
NASA Technical Reports Server (NTRS)
Anderson, Gray T.; Shelton, Robert O.
1997-01-01
The purpose of this research was to examine the potential of the rough sets technique for developing intelligent models of complex systems from limited information. Rough sets are a simple but promising technology for extracting easily understood rules from data. The rough set methodology has been shown to perform well when used with a large set of exemplars, but its performance with sparse data sets is less certain. The difficulty is that rules will be developed based on just a few examples, each of which might have a large amount of noise associated with it. The question then becomes: what is the probability of a useful rule being developed from such limited information? One nice feature of rough sets is that in unusual situations the technique can give an answer of 'I don't know'. That is, if a case arises that differs from the cases the rough set rules were developed on, the methodology can recognize this and alert human operators. It can also be trained to do this when the desired action is unknown because conflicting examples apply to the same set of inputs. This summer's project was to look at combining rough set theory with statistical theory to develop confidence limits on rules developed by rough sets. Often it is important not to make a certain type of mistake (e.g., false positives or false negatives), so the rules must be biased toward preventing a catastrophic error rather than giving the most likely course of action. A method to determine the best course of action in the light of such constraints was examined. The resulting technique was tested with files containing electrical power line 'signatures' from the space shuttle and with decompression sickness data.
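A minimal sketch of the core construction behind this: objects indiscernible on the chosen attributes share a block, a concept is bracketed by lower and upper approximations, and the boundary region between them supplies exactly the 'I don't know' cases mentioned above. The decision table is hypothetical.

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Lower/upper approximations of `target` (a set of object ids) under
    the indiscernibility relation induced by `attrs`.

    objects: dict of object id -> dict of attribute values."""
    blocks = defaultdict(set)
    for oid, vals in objects.items():
        blocks[tuple(vals[a] for a in attrs)].add(oid)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:   # block lies entirely inside the concept
            lower |= block
        if block & target:    # block overlaps the concept
            upper |= block
    return lower, upper       # boundary region = upper - lower

objs = {1: {"volt": "hi", "temp": "ok"}, 2: {"volt": "hi", "temp": "ok"},
        3: {"volt": "lo", "temp": "ok"}, 4: {"volt": "lo", "temp": "bad"}}
print(approximations(objs, ["volt", "temp"], target={1, 3}))
# ({3}, {1, 2, 3}): objects 1 and 2 are indiscernible, so 1 is "don't know"
```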
The partition dimension of cycle books graph
NASA Astrophysics Data System (ADS)
Santoso, Jaya; Darmaji
2018-03-01
Let G be a nontrivial and connected graph with vertex set V(G), edge set E(G) and S ⊆ V(G); for v ∈ V(G), the distance between v and S is d(v, S) = min{d(v, x)|x ∈ S}. For an ordered partition Π = {S1, S2, S3, …, Sk} of V(G), the representation of v with respect to Π is defined by r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)). The partition Π is called a resolving partition of G if all representations of vertices are distinct. The partition dimension pd(G) is the smallest integer k such that G has a resolving partition with k members. In this research, we determine the partition dimension of the cycle books graph B(Cr, m), the graph consisting of m copies of the cycle Cr sharing a common path P2. It is shown that pd(B(C3, m)) is 3 for m = 2, 3, and m for m ≥ 4; pd(B(C4, m)) is 3 + 2k for m = 3k + 2, 4 + 2(k − 1) for m = 3k + 1, and 3 + 2(k − 1) for m = 3k; and pd(B(C5, m)) is m + 1.
Direct optimization, affine gap costs, and node stability.
Aagesen, Lone
2005-09-01
The outcome of a phylogenetic analysis based on DNA sequence data is highly dependent on the homology-assignment step and may vary with alignment parameter costs. Robustness to changes in parameter costs is therefore a desired quality of a data set because the final conclusions will be less dependent on selecting a precise optimal cost set. Here, node stability is explored in relationship to separate versus combined analysis in three different data sets, all including several data partitions. Robustness to changes in cost sets is measured as number of successive changes that can be made in a given cost set before a specific clade is lost. The changes are in all cases base change cost, gap penalties, and adding/removing/changing affine gap costs. When combining data partitions, the number of clades that appear in the entire parameter space is not remarkably increased, in some cases this number even decreased. However, when combining data partitions the trees from cost sets including affine gap costs were always more similar than the trees were from cost sets without affine gap costs. This was not the case when the data partitions were analyzed independently. When data sets were combined approximately 80% of the clades found under cost sets including affine gap costs resisted at least one change to the cost set.
Toropov, A A; Toropova, A P; Raska, I
2008-04-01
Simplified molecular input line entry system (SMILES) has been utilized in constructing quantitative structure-property relationships (QSPR) for octanol/water partition coefficient of vitamins and organic compounds of different classes by optimal descriptors. Statistical characteristics of the best model (vitamins) are the following: n = 17, R² = 0.9841, s = 0.634, F = 931 (training set); n = 7, R² = 0.9928, s = 0.773, F = 690 (test set). Using this approach for modeling octanol/water partition coefficient for a set of organic compounds gives a model that is statistically characterized by n = 69, R² = 0.9872, s = 0.156, F = 5184 (training set) and n = 70, R² = 0.9841, s = 0.179, F = 4195 (test set).
Flu Diagnosis System Using Jaccard Index and Rough Set Approaches
NASA Astrophysics Data System (ADS)
Efendi, Riswan; Azah Samsudin, Noor; Mat Deris, Mustafa; Guan Ting, Yip
2018-04-01
The Jaccard index and rough set approaches have been frequently implemented in decision support systems across various application domains, and both are well suited to categorical data analysis. This paper presents applications of set operations to flu diagnosis systems based on these two approaches. Both are built on the set operations of intersection and subset. A step-by-step procedure is demonstrated for each approach in diagnosing flu. The similarity and dissimilarity indexes between conditional symptoms and the decision are measured using the Jaccard approach. Additionally, rough sets are used to build decision support rules, which are established through redundant data analysis and the elimination of unclassified elements. A number of data sets are considered in walking through the step-by-step procedure for each approach. The results show that rough sets can be used to support the Jaccard approach in establishing decision support rules, and that the Jaccard index is the better approach for investigating the worst condition of patients, while the patients definitely and possibly with (or without) flu can be determined using the rough set approach. The rules may improve the performance of medical diagnosis systems, making preliminary flu diagnosis easier for inexperienced doctors and patients.
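A minimal sketch of the Jaccard step, with hypothetical symptom sets: the index is the size of the intersection over the size of the union of a patient's symptoms and a decision profile.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity J(A, B) = |A ∩ B| / |A ∪ B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

patient = {"fever", "cough", "headache"}            # hypothetical symptoms
flu_profile = {"fever", "cough", "sore throat"}     # hypothetical flu profile
print(jaccard(patient, flu_profile))                # 2 / 4 = 0.5
```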
The effect of catchment discretization on rainfall-runoff model predictions
NASA Astrophysics Data System (ADS)
Goodrich, D.; Grayson, R.; Willgoose, G.; Palacios-Valez, O.; Bloschl, G.
2003-04-01
Application of distributed hydrologic watershed models fundamentally requires watershed partitioning or discretization. In addition to partitioning the watershed into modelling elements, these elements typically represent a further abstraction of the actual watershed surface and its relevant hydrologic properties. A critical issue that must be addressed by any user of these models prior to their application is the definition of an acceptable level and type of watershed discretization, or geometric model complexity. A quantitative methodology to define a level of geometric model complexity commensurate with a specified level of model performance is developed for watershed rainfall-runoff modelling. The methodology is tested on four subcatchments that cover a range of watershed scales of over three orders of magnitude in the USDA-ARS Walnut Gulch Experimental Watershed in Southeastern Arizona. It was found that distortion of the hydraulic roughness can compensate for a lower level of discretization (fewer channels) up to a point. Beyond this point, hydraulic roughness distortion cannot compensate for the topographic distortion of representing the watershed by fewer elements (e.g. a less complex channel network). Similarly, differences in the representation of topography by different model or digital elevation model (DEM) types (e.g. Triangular Irregular Networks - TINs; contour lines; and regular grid DEMs) also result in differences in runoff routing responses that can be largely compensated for by a distortion in hydraulic roughness or path length. To put the effect of these discretization choices in context, it is shown that relatively small non-compliance with Peclet number restrictions on timestep size can overwhelm the relatively modest differences resulting from the type of representation of topography.
Information Warfare: Evaluation of Operator Information Processing Models
1997-10-01
that people can describe or report, including both episodic and semantic information. Declarative memory contains a network of knowledge represented... second dimension corresponds roughly to the distinction between episodic and semantic memory that is commonly made in cognitive psychology. Episodic... 3 is long-term memory for the discourse, a subset of episodic memory. Partition 4 is long-term semantic memory, or the knowledge base. According to
NASA Astrophysics Data System (ADS)
Basart, Sara; Jorba, Oriol; Pérez García-Pando, Carlos; Prigent, Catherine; Baldasano, Jose M.
2014-05-01
Aeolian aerodynamic roughness length in arid regions is a key parameter for predicting the vulnerability of the surface to wind erosion and, as a consequence, the related production of mineral aerosol (e.g. Laurent et al., 2008). Recently, satellite-derived roughness lengths at the global scale have emerged and provide the opportunity to use them in advanced emission schemes in global and regional models (e.g. Menut et al., 2013). A global map of the aeolian aerodynamic roughness length at high resolution (6 km) is derived for arid and semi-arid regions by merging PARASOL and ASCAT data. It shows very good consistency with the existing information on the properties of these surfaces. The data set is available to the community for use in atmospheric dust transport models. The present contribution analyses the behaviour of the NMMB/BSC-Dust model (Pérez et al., 2011) when the ASCAT/PARASOL satellite-derived global roughness length (Prigent et al., 2012) and the State Soil Geographic database-Food and Agriculture Organization of the United Nations (STATSGO-FAO) soil texture data set (based on wet techniques) are used. We explore the sensitivity of the drag partition scheme (a critical component of the dust emission scheme) and of the dust vertical fluxes (intensity and spatial patterns) to the roughness length. An annual evaluation of NMMB/BSC-Dust (for the year 2011) over Northern Africa and the Middle East using observed aerosol optical depths (AODs) from Aerosol Robotic Network sites and aerosol satellite products (MODIS and MISR) will be discussed.
Laurent, B., Marticorena, B., Bergametti, G., Leon, J. F., and Mahowald, N. M.: Modeling mineral dust emissions from the Sahara desert using new surface properties and soil database, J. Geophys. Res., 113, D14218, doi:10.1029/2007JD009484, 2008.
Menut, L., Pérez, C., Haustein, K., Bessagnet, B., Prigent, C., and Alfaro, S.: Impact of surface roughness and soil texture on mineral dust emission fluxes modeling, J. Geophys. Res. Atmos., 118, 6505-6520, doi:10.1002/jgrd.50313, 2013.
Pérez, C., Haustein, K., Janjic, Z., Jorba, O., Huneeus, N., Baldasano, J. M., and Thomson, M.: Atmospheric dust modeling from meso to global scales with the online NMMB/BSC-Dust model - Part 1: Model description, annual simulations and evaluation, Atmos. Chem. Phys., 11(24), 13001-13027, 2011.
Prigent, C., Jiménez, C., and Catherinot, J.: Comparison of satellite microwave backscattering (ASCAT) and visible/near-infrared reflectances (PARASOL) for the estimation of aeolian aerodynamic roughness length in arid and semi-arid regions, Atmos. Meas. Tech., 5, 2703-2712, doi:10.5194/amt-5-2703-2012, 2012.
On the star partition dimension of comb product of cycle and complete graph
NASA Astrophysics Data System (ADS)
Alfarisi, Ridho; Darmaji; Dafik
2017-06-01
Let G = (V, E) be a connected graph with vertex set V(G), edge set E(G), and S ⊆ V(G). For an ordered partition Π = {S1, S2, S3, …, Sk} of V(G), the representation of a vertex v ∈ V(G) with respect to Π is the k-vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Si) represents the distance between the vertex v and the set Si, defined by d(v, Si) = min{d(v, x) | x ∈ Si}. The partition Π of V(G) is a resolving partition if the k-vectors r(v|Π), v ∈ V(G), are distinct. The minimum k for which V(G) has a resolving k-partition is the partition dimension of G, denoted by pd(G). The resolving partition Π = {S1, S2, S3, …, Sk} is called a star resolving partition for G if it is a resolving partition and each subgraph induced by Si, 1 ≤ i ≤ k, is a star. The minimum k for which there exists a star resolving partition of V(G) is the star partition dimension of G, denoted by spd(G). Finding the star partition dimension of G is classified as an NP-hard problem. Furthermore, the comb product of G and H, denoted by G ⊲ H, is the graph obtained by taking one copy of G and |V(G)| copies of H and grafting the i-th copy of H at the vertex o to the i-th vertex of G. By the definition of the comb product, V(G ⊲ H) = {(a, u) | a ∈ V(G), u ∈ V(H)} and (a, u)(b, v) ∈ E(G ⊲ H) whenever a = b and uv ∈ E(H), or ab ∈ E(G) and u = v = o. In this paper, we study the star partition dimension of the comb product of a cycle and a complete graph, namely Cn ⊲ Km and Km ⊲ Cn for n ≥ 3 and m ≥ 3.
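For readers who want to experiment with these definitions, the following Python sketch computes the representation r(v|Π) via breadth-first-search distances and checks whether an ordered partition is resolving. It assumes a connected graph given as an adjacency dictionary; the example graph is C4, not one of the comb products studied in the paper.

```python
# Compute r(v|Pi) = (d(v, S1), ..., d(v, Sk)) and test the resolving property.
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src (graph assumed connected)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def representations(adj, partition):
    dists = {v: bfs_dist(adj, v) for v in adj}
    return {v: tuple(min(dists[v][x] for x in block) for block in partition)
            for v in adj}

def is_resolving(adj, partition):
    reps = representations(adj, partition)
    return len(set(reps.values())) == len(reps)   # all k-vectors distinct

# Cycle C4 with the ordered partition {{0}, {1, 2}, {3}}
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(representations(adj, [{0}, {1, 2}, {3}]))
print(is_resolving(adj, [{0}, {1, 2}, {3}]))      # True
```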
Application of preprocessing filtering on Decision Tree C4.5 and rough set theory
NASA Astrophysics Data System (ADS)
Chan, Joseph C. C.; Lin, Tsau Y.
2001-03-01
This paper compares two artificial intelligence methods, the Decision Tree C4.5 and Rough Set Theory, on stock market data. The Decision Tree C4.5 is reviewed alongside Rough Set Theory. An enhanced window application is developed to facilitate pre-processing filtering by introducing feature (attribute) transformations, which allow users to input formulas and create new attributes. The application also produces three varieties of data sets, using delaying, averaging, and summation. The results demonstrate the improvement gained from pre-processing with feature (attribute) transformations on Decision Tree C4.5. Moreover, the comparison between Decision Tree C4.5 and Rough Set Theory is based on clarity, automation, accuracy, dimensionality, raw data, and speed, and is supported by the rule sets generated by both algorithms on three different sets of data.
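A minimal pandas sketch of the three pre-processing variants mentioned (delaying, averaging, and summation), plus a user-entered formula creating a new attribute; the column names, window sizes, and price values are hypothetical, not those of the paper's application.

```python
import pandas as pd

df = pd.DataFrame({"close": [10.0, 10.5, 10.2, 10.8, 11.0, 10.9]})

df["close_lag1"] = df["close"].shift(1)            # delaying
df["close_ma3"]  = df["close"].rolling(3).mean()   # averaging
df["close_sum3"] = df["close"].rolling(3).sum()    # summation
df["return"]     = df["close"].pct_change()        # user-defined formula attribute

print(df)
```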
Intelligent Data Granulation on Load: Improving Infobright's Knowledge Grid
NASA Astrophysics Data System (ADS)
Ślęzak, Dominik; Kowalski, Marcin
One of the major aspects of Infobright's relational database technology is the automatic decomposition of each data table into Rough Rows, each consisting of 64K of the original rows. Rough Rows are automatically annotated by Knowledge Nodes that represent compact information about the rows' values. Query performance depends on the quality of the Knowledge Nodes, i.e., their efficiency in minimizing access to the compressed portions of data stored on disk, according to the specific query optimization procedures. We show how to implement a mechanism that organizes the incoming data into Rough Rows that maximize the quality of the corresponding Knowledge Nodes. Given clear business-driven requirements, the implemented mechanism needs to be fully integrated with the data load process, causing no decrease in data load speed. The performance gain resulting from better data organization is illustrated by tests over our benchmark data. The differences between the proposed mechanism and some well-known procedures of database clustering or partitioning are discussed. The paper is a continuation of our patent application [22].
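The pruning role of Knowledge Nodes can be sketched in a few lines. The following toy Python version groups incoming values into packs (4 rows here instead of 64K), annotates each pack with min/max metadata, and answers a range predicate while touching only packs whose metadata cannot rule them out; Infobright's actual structures are of course richer than this.

```python
ROWS_PER_PACK = 4  # Infobright uses 65536; small here for readability

def build_packs(values):
    """Group values into packs, each annotated with min/max metadata."""
    packs = []
    for i in range(0, len(values), ROWS_PER_PACK):
        chunk = values[i:i + ROWS_PER_PACK]
        packs.append({"rows": chunk, "min": min(chunk), "max": max(chunk)})
    return packs

def query_gt(packs, threshold):
    """Return values > threshold, scanning only packs that could match."""
    hits, packs_scanned = [], 0
    for p in packs:
        if p["max"] <= threshold:        # metadata proves no row can match
            continue
        packs_scanned += 1               # only now touch the (compressed) rows
        hits.extend(v for v in p["rows"] if v > threshold)
    return hits, packs_scanned

packs = build_packs([1, 2, 2, 3, 9, 9, 8, 9, 4, 5, 4, 5])
print(query_gt(packs, 7))   # ([9, 9, 8, 9], 1): one of three packs scanned
```

The load-time mechanism the paper describes amounts to routing similar incoming rows into the same pack, which tightens the min/max ranges and lets more packs be skipped.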
Metal/silicate partitioning of Pt and the origin of the "late veneer"
NASA Astrophysics Data System (ADS)
Ertel, W.; Walter, M. J.; Drake, M. J.; Sylvester, P. J.
2002-12-01
Highly siderophile elements (HSEs) are perfect tools for investigating core-forming processes in planetary bodies due to their Fe-loving (siderophile) geochemical behavior. Tremendous scientific effort was invested in this field during the past 10 years - mostly in 1 atm experiments. However, little is known about their high-pressure geochemistry and partitioning behavior between core- and mantle-forming phases. This knowledge is essential to distinguish between equilibrium (magma ocean) and non-equilibrium (heterogeneous accretion, late veneer) models for the accretion history of the early Earth. We therefore chose to investigate the partitioning behavior of Pt up to pressures of 140 kbar (14 GPa) and temperatures of 1950°C. The melt composition used - identical to the melt systems used in 1 atm experiments - is the eutectic composition of anorthite-diopside (AnDi), a pseudo-basalt. A series of runs was performed that were internally buffered by the piston cylinder apparatus, followed by duplicate experiments buffered in the AnDi-C-CO2 system. These experiments constitute reversals, since they approach equilibrium from initially higher and lower Pt solubilities (8 ppm in the non-buffered runs, and essentially Pt-free in the buffered runs). Experimental charges were encapsulated in Pt capsules, which served as the source of Pt. Experiments up to 20 kbar were performed in a Quickpress piston cylinder apparatus, while experiments at higher pressures were performed in a Walker-type (Tucson, AZ) and a Kawai-type (Misasa, Japan) multi-anvil apparatus. Time-series experiments were performed in piston-cylinder runs to determine the minimum run durations for the achievement of equilibrium and to guarantee high-quality partitioning data. Six hours was found to be sufficient to obtain equilibrium; in practice, all experiments exceeded 12 hours to assure equilibrium. In a second set of runs, the temperature dependence of the partitioning behavior of Pt was investigated between the melting point of the 1 atm AnDi system and the melting point of the Pt capsule material. Over 150 piston cylinder and 12 multi-anvil experiments have been performed. Pt solubility is only slightly dependent on temperature, decreasing between 1800 and 1400°C by less than an order of magnitude. In consequence, the partitioning behavior of Pt is mostly determined by its oxygen fugacity dependence, which has only been determined in 1 atm experiments. At 10 kbar, metal/silicate partition coefficients (D's) decrease by about 3 orders of magnitude. The reason for this is not understood, but it might be attributed to a first-order phase transition as found for, e.g., SiO2 or H2O. Above 10 kbar, any increase in pressure does not lead to any further significant decrease in partition coefficients; solubilities stay roughly constant up to 140 kbar. Abundances of moderately siderophile elements were possibly established by metal/silicate equilibrium in a magma ocean. These results for Pt suggest that the abundances of HSEs were most probably established by the accretion of a chondritic veneer following core formation, as metal/silicate partition coefficients are too high to be consistent with metal/silicate equilibrium in a magma ocean.
Burant, Aniela; Thompson, Christopher; Lowry, Gregory V; Karamalidis, Athanasios K
2016-05-17
Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch-reactor system with dual spectroscopic detectors: a near-infrared spectrometer for measuring the organic analyte in the CO2 phase and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published poly-parameter linear free-energy relationship and to develop five new linear free-energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models each targeted a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use the vapor pressure and aqueous solubility of the organic compound at 25 °C and the CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound-class models provide better estimates of partitioning behavior for compounds in their class than does the model built for the entire data set.
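The style of relationship described, a linear model in log vapor pressure, log aqueous solubility, and CO2 density, can be sketched as an ordinary least-squares fit. All numbers below are synthetic placeholders, not the published regression coefficients or measurements.

```python
import numpy as np

# columns: log10 vapor pressure (Pa), log10 aqueous solubility (mol/L),
# CO2 density (g/mL) -- the predictors named in the abstract; values synthetic
X = np.array([[3.9, -1.3, 0.60],
              [4.1, -0.9, 0.70],
              [2.7, -2.0, 0.65],
              [3.2, -1.5, 0.75],
              [4.4, -0.7, 0.55]])
y = np.array([1.2, 1.0, 1.6, 1.4, 0.8])        # log K (water -> sc-CO2), synthetic

A = np.column_stack([np.ones(len(X)), X])      # intercept + predictors
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", coef.round(3))

x_new = np.array([1.0, 3.5, -1.2, 0.68])       # new compound (leading 1 = intercept)
print("predicted log K:", round(float(x_new @ coef), 3))
```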
Cell-autonomous-like silencing of GFP-partitioned transgenic Nicotiana benthamiana.
Sohn, Seong-Han; Frost, Jennifer; Kim, Yoon-Hee; Choi, Seung-Kook; Lee, Yi; Seo, Mi-Suk; Lim, Sun-Hyung; Choi, Yeonhee; Kim, Kook-Hyung; Lomonossoff, George
2014-08-01
We previously reported the novel partitioning of regional GFP-silencing on leaves of 35S-GFP transgenic plants, coining the term "partitioned silencing". We set out to delineate the mechanism of partitioned silencing. Here, we report that the partitioned plants were hemizygous for the transgene, possessing two direct-repeat copies of 35S-GFP. The detection of both siRNA expression (21 and 24 nt) and DNA methylation enrichment specifically at silenced regions indicated that both post-transcriptional gene silencing (PTGS) and transcriptional gene silencing (TGS) were involved in the silencing mechanism. Using in vivo agroinfiltration of 35S-GFP/GUS and inoculation of TMV-GFP RNA, we demonstrate that PTGS, not TGS, plays a dominant role in the partitioned silencing, concluding that the underlying mechanism of partitioned silencing is analogous to RNA-directed DNA methylation (RdDM). The initial pattern of partitioned silencing was tightly maintained in a cell-autonomous manner, although partitioned-silenced regions possess a potential for systemic spread. Surprisingly, transcriptome profiling through next-generation sequencing demonstrated that expression levels of most genes involved in the silencing pathway were similar in both GFP-expressing and silenced regions, although a diverse set of region-specific transcripts was detected. This suggests that partitioned silencing can be triggered and regulated by genes other than those involved in the silencing pathway.
Granularity refined by knowledge: contingency tables and rough sets as tools of discovery
NASA Astrophysics Data System (ADS)
Zytkow, Jan M.
2000-04-01
Contingency tables represent data in a granular way and are a well-established tool for inductive generalization of knowledge from data. We show that the basic concepts of rough sets, such as concept approximation, indiscernibility, and reduct, can be expressed in the language of contingency tables. We further demonstrate the relevance to rough set theory of the additional probabilistic information available in contingency tables, in particular of statistical tests of significance and predictive strength applied to contingency tables. Tests of both types can help the evaluation mechanisms used in inductive generalization based on rough sets. Granularity of attributes can be improved in feedback with knowledge discovered in data. We demonstrate how 49er's facilities for (1) contingency table refinement, (2) column and row grouping based on correspondence analysis, and (3) the search for equivalence relations between attributes improve both the granularization of attributes and the quality of knowledge. Finally, we demonstrate the limitations of knowledge viewed as concept approximation, which is the focus of rough sets. Transcending that focus and reorienting towards predictive knowledge, and towards the related distinction between possible and impossible (or statistically improbable) situations, will be very useful in expanding the rough set approach to more expressive forms of knowledge.
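A minimal sketch of the statistical machinery referred to above: building a contingency table for two discretised attributes, applying a chi-square significance test, and computing Cramér's V as a predictive-strength measure. The table is synthetic, and the choice of Cramér's V as the strength measure is an assumption for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: granules of attribute A; columns: granules of attribute B (synthetic)
table = np.array([[30,  5,  5],
                  [ 4, 28,  8],
                  [ 6,  7, 27]])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # predictive strength

print(f"chi2 = {chi2:.1f}, p = {p:.3g}, dof = {dof}")
print(f"Cramer's V = {cramers_v:.2f}")
# A row or column grouping that merges indiscernible granules coarsens the
# table while keeping V high; this is the granularity refinement the paper explores.
```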
NASA Technical Reports Server (NTRS)
Drake, Michael J.; Rubie, David C.; Mcfarlane, Elisabeth A.
1992-01-01
The partitioning of elements amongst lower mantle phases and silicate melts is of interest in unraveling the early thermal history of the Earth. Because of the technical difficulty of carrying out such measurements, only one direct set of measurements had been reported previously, and those results, as well as interpretations based on them, have generated controversy. Here we report what is, to our knowledge, only the second set of directly measured trace element partition coefficients for a natural system (KLB-1).
NASA Astrophysics Data System (ADS)
Gehrmann, Andreas; Nagai, Yoshimitsu; Yoshida, Osamu; Ishizu, Syohei
Since management decision-making has become complex and the preferences of decision-makers frequently become inconsistent, multi-attribute decision-making problems have been studied. To represent inconsistent preference relations, the concept of an evaluation structure was introduced; evaluation structures allow us to generate simple rules that represent inconsistent preference relations. Rough set theory for preference relations has also been studied, and the concept of approximation introduced. One of the main aims of this paper is to introduce the concept of a rough evaluation structure for representing inconsistent preference relations. We apply rough set theory to the evaluation structure and develop a method for generating simple rules for inconsistent preference relations. In this paper, we introduce the concepts of a totally ordered information system, similarity classes of a preference relation, and upper and lower approximations of preference relations. We also show the properties of the rough evaluation structure and provide a simple example. As an application of the rough evaluation structure, we analyze a questionnaire survey of customer preferences about audio players.
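The upper and lower approximations at the heart of the proposal can be illustrated on ordinary sets; applying the same operations to pairs (a, b) in a preference relation gives the approximations of preference relations described above. A minimal sketch, with hypothetical similarity classes:

```python
def approximations(partition, X):
    """Lower/upper approximation of X under an indiscernibility partition."""
    lower = {x for block in partition if block <= X for x in block}   # certain
    upper = {x for block in partition if block & X for x in block}    # possible
    return lower, upper

blocks = [{1, 2}, {3}, {4, 5}]   # similarity classes of items (hypothetical)
X = {1, 2, 3, 4}                 # e.g. items "preferred to item 6"
lower, upper = approximations(blocks, X)
print("lower approximation:", lower)   # {1, 2, 3}: certainly in X
print("upper approximation:", upper)   # {1, 2, 3, 4, 5}: possibly in X
```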
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.
1984-01-01
A field of measured anomalies of some physical variable relative to their time averages is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller-dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The space-time distribution of Nimbus-5 ESMR sea ice measurements is analyzed.
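A small numerical sketch of the partition-and-join procedure, under simplifying assumptions (synthetic data, a time-domain split into two halves, five local EOFs retained per partition): local eigenvectors are computed from the smaller covariance matrices, joined into one basis, and the variance captured by that basis is compared with the leading eigenvalues of the full covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
field = rng.standard_normal((200, 30))          # 200 times x 30 locations
field -= field.mean(axis=0)                     # anomalies about the time mean

halves = np.array_split(field, 2, axis=0)       # partition in the time domain
joined = []
for part in halves:
    cov = part.T @ part / len(part)             # smaller covariance matrix
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    joined.append(vecs[:, -5:] * np.sqrt(vals[-5:]))   # keep 5 local EOFs
basis, _ = np.linalg.qr(np.hstack(joined))      # join local EOFs (orthonormalise)

cov_full = field.T @ field / len(field)
vals_full = np.linalg.eigvalsh(cov_full)[::-1]
captured = np.trace(basis.T @ cov_full @ basis) / np.trace(cov_full)
print(f"variance captured by joined basis:  {captured:.2%}")
print(f"variance in leading 10 true EOFs:   {vals_full[:10].sum()/vals_full.sum():.2%}")
```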
Set Partitions and the Multiplication Principle
ERIC Educational Resources Information Center
Lockwood, Elise; Caughman, John S., IV
2016-01-01
To further understand student thinking in the context of combinatorial enumeration, we examine student work on a problem involving set partitions. In this context, we note some key features of the multiplication principle that were often not attended to by students. We also share a productive way of thinking that emerged for several students who…
NASA Technical Reports Server (NTRS)
Medard, E.; Martin, A. M.; Righter, K.; Malouta, A.; Lee, C.-T.
2017-01-01
Most siderophile element concentrations in planetary mantles can be explained by metal/silicate equilibration at high temperature and pressure during core formation. Highly siderophile elements (HSE = Au, Re, and the Pt-group elements), however, usually have higher mantle abundances than predicted by partitioning models, suggesting that their concentrations were set by late accretion of material that did not equilibrate with the core. The partitioning of HSEs at the low oxygen fugacities relevant to core formation is, however, poorly constrained, owing to the lack of sufficient experimental data to describe the variation of partitioning with key variables like temperature, pressure, and oxygen fugacity. To better understand the relative roles of metal/silicate partitioning and late accretion, we performed a self-consistent set of experiments that parameterizes the influence of oxygen fugacity, temperature, and melt composition on the partitioning of Pt, one of the HSEs, between metal and silicate melts. The major outcome of this project is the finding that Pt dissolves in an anionic form in silicate melts, causing a dependence of partitioning on oxygen fugacity opposite to that reported in previous studies.
NASA Astrophysics Data System (ADS)
Ise, Takeshi; Litton, Creighton M.; Giardina, Christian P.; Ito, Akihiko
2010-12-01
Partitioning of gross primary production (GPP) to aboveground versus belowground, to growth versus respiration, and to short versus long-lived tissues exerts a strong influence on ecosystem structure and function, with potentially large implications for the global carbon budget. A recent meta-analysis of forest ecosystems suggests that carbon partitioning to leaves, stems, and roots varies consistently with GPP and that the ratio of net primary production (NPP) to GPP is conservative across environmental gradients. To examine influences of carbon partitioning schemes employed by global ecosystem models, we used this meta-analysis-based model and a satellite-based (MODIS) terrestrial GPP data set to estimate global woody NPP and equilibrium biomass, and then compared it to two process-based ecosystem models (Biome-BGC and VISIT) using the same GPP data set. We hypothesized that different carbon partitioning schemes would result in large differences in global estimates of woody NPP and equilibrium biomass. Woody NPP estimated by Biome-BGC and VISIT was 25% and 29% higher than the meta-analysis-based model for boreal forests, with smaller differences in temperate and tropics. Global equilibrium woody biomass, calculated from model-specific NPP estimates and a single set of tissue turnover rates, was 48 and 226 Pg C higher for Biome-BGC and VISIT compared to the meta-analysis-based model, reflecting differences in carbon partitioning to structural versus metabolically active tissues. In summary, we found that different carbon partitioning schemes resulted in large variations in estimates of global woody carbon flux and storage, indicating that stand-level controls on carbon partitioning are not yet accurately represented in ecosystem models.
Correlation of soil and sediment organic matter polarity to aqueous sorption of nonionic compounds
Kile, D.E.; Wershaw, R. L.; Chiou, C.T.
1999-01-01
Polarities of the soil/sediment organic matter (SOM) in 19 soil and 9 freshwater sediment samples were determined from solid-state 13C-CP/MAS NMR spectra and compared with published partition coefficients (Koc) of carbon tetrachloride (CT) from aqueous solution. Nondestructive analysis of whole samples by solid-state NMR permits a direct assessment of the polarity of SOM that is not possible by elemental analysis. The percent of organic carbon associated with polar functional groups was estimated from the combined fraction of carbohydrate and carboxyl-amide-ester carbons. A plot of the measured partition coefficients (Koc) of CT vs. percent polar organic carbon (POC) shows distinctly different populations of soils and sediments as well as a roughly inverse trend among the soil/sediment populations. Plots of Koc values for CT against other structural group carbon fractions did not yield distinct populations. The results indicate that the polarity of SOM is a significant factor in accounting for differences in Koc between the organic matter in soils and sediments. The alternate direct correlation of the sum of aliphatic and aromatic structural carbons with Koc illustrates the influence of nonpolar hydrocarbon on solute partition interaction. Additional elemental analysis data for selected samples further substantiate the effect of organic matter polarity on the partition efficiency of nonpolar solutes. The separation between soil and sediment samples based on percent POC reflects definite differences in the properties of soil and sediment organic matter that are attributable to diagenesis.
Padró, Juan M; Pellegrino Vidal, Rocío B; Reta, Mario
2014-12-01
The partition coefficients, PIL/w, of several compounds, some of them of biological and pharmacological interest, between water and room-temperature ionic liquids based on the imidazolium, pyridinium, and phosphonium cations, namely 1-octyl-3-methylimidazolium hexafluorophosphate, N-octylpyridinium tetrafluoroborate, trihexyl(tetradecyl)phosphonium chloride, trihexyl(tetradecyl)phosphonium bromide, trihexyl(tetradecyl)phosphonium bis(trifluoromethylsulfonyl)imide, and trihexyl(tetradecyl)phosphonium dicyanamide, were accurately measured. In this way, we extended our previously reported database of partition coefficients in room-temperature ionic liquids. We employed the solvation parameter model with different probe molecules (the training set) to elucidate the chemical interactions involved in the partition process, and we discussed the most relevant differences among the three types of ionic liquids. The multiparametric equations obtained with this model were used to predict the partition coefficients for compounds (the test set) not present in the training set, most being of biological and pharmacological interest. An excellent agreement between calculated and experimental log PIL/w values was obtained. Thus, the obtained equations can be used to predict, a priori, the extraction efficiency for any compound using these ionic liquids as extraction solvents in liquid-liquid extractions.
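The solvation parameter model referred to here is commonly written as log P = c + eE + sS + aA + bB + vV in Abraham solute descriptors. The sketch below fits such an equation by least squares and applies it to a test compound; all descriptor and log P values are synthetic placeholders, not the measured training set.

```python
import numpy as np

# columns: E, S, A, B, V (Abraham solute descriptors) for seven probe molecules
descriptors = np.array([[0.61, 0.52, 0.00, 0.14, 0.72],
                        [0.80, 0.60, 0.26, 0.41, 0.92],
                        [0.29, 0.52, 0.37, 0.48, 0.59],
                        [1.34, 1.07, 0.00, 0.33, 1.09],
                        [0.74, 1.11, 0.00, 0.33, 0.89],
                        [0.94, 0.73, 0.60, 0.59, 0.99],
                        [0.52, 0.86, 0.00, 0.56, 0.77]])
logP = np.array([2.1, 1.4, -0.2, 2.8, 1.9, 0.6, 1.1])   # synthetic training values

A = np.column_stack([np.ones(len(logP)), descriptors])   # c + eE + sS + aA + bB + vV
coef, *_ = np.linalg.lstsq(A, logP, rcond=None)
print("c, e, s, a, b, v =", coef.round(2))

test = np.array([1.0, 0.70, 0.65, 0.10, 0.35, 0.85])     # 1 + descriptors, test compound
print("predicted log P_IL/w:", round(float(test @ coef), 2))
```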
NASA Astrophysics Data System (ADS)
Corrigan, Catherine M.; Chabot, Nancy L.; McCoy, Timothy J.; McDonough, William F.; Watson, Heather C.; Saslow, Sarah A.; Ash, Richard D.
2009-05-01
To better understand the partitioning behavior of elements during the formation and evolution of iron meteorites, two sets of experiments were conducted at 1 atm in the Fe-Ni-P system. The first set examined the effect of P on the solid metal/liquid metal partitioning behavior of 22 elements, while the other set explored the effect of the crystal structures of body-centered cubic (α) and face-centered cubic (γ) solid Fe alloys on partitioning behavior. Overall, the effect of P on the partition coefficients for the majority of the elements was minimal. As, Au, Ga, Ge, Ir, Os, Pt, Re, and Sb showed slightly increasing partition coefficients with increasing P-content of the metallic liquid. Co, Cu, Pd, and Sn showed constant partition coefficients. Rh, Ru, W, and Mo showed phosphorophile (P-loving) tendencies. Parameterization models were applied to the solid metal/liquid metal results for 12 elements. As, Au, Pt, and Re failed to match previous parameterization models, requiring the determination of separate parameters for the Fe-Ni-S and Fe-Ni-P systems. Experiments with coexisting α and γ Fe alloy solids produced partitioning ratios close to unity, indicating that an α versus γ Fe alloy crystal structure has only a minor influence on the partitioning behaviors of the trace elements studied. A simple relationship between an element's natural crystal structure and its α/γ partitioning ratio was not observed. If an iron meteorite crystallizes from a single metallic liquid that contains both S and P, the effect of P on the distribution of elements between the crystallizing solids and the residual liquid will be minor in comparison to the effect of S. This indicates that, to first order, fractional crystallization models of the Fe-Ni-S-P system that do not take P into account are appropriate for interpreting the evolution of iron meteorites, provided the effects of S are appropriately included.
NASA Astrophysics Data System (ADS)
Chandramouli, Bharadwaj; Kamens, Richard M.
Decamethylcyclopentasiloxane (D5) and decamethyltetrasiloxane (MD2M) were injected into a smog chamber containing fine Arizona road dust particles (95% of surface area <2.6 μm) and an urban smog atmosphere in the daytime. A photochemical reaction and gas-particle partitioning scheme was implemented to simulate the formation and gas-particle partitioning of the hydroxyl oxidation products of D5 and MD2M. This scheme incorporated the reactions of D5 and MD2M into an existing urban smog chemical mechanism (Carbon Bond IV) and partitioned the products between the gas and particle phases by treating gas-particle partitioning as a kinetic process and specifying uptake and off-gassing rates. A photochemical model, PKSS, was used to simulate this set of reactions. A Langmuirian partitioning model was used to convert the measured and estimated mass-based partitioning coefficients (Kp) to a molar or volume-based form. The model simulations indicated that >99% of all product silanols formed in the gas phase partition immediately to the particle phase, and the experimental data agreed with the model predictions. One product, D4TOH, was observed and confirmed for the D5 reaction, and this system was modeled successfully. Experimental data were inadequate for the MD2M reaction products, and it is likely that more than one product formed. The model sets up a framework into which more reaction and partitioning steps can easily be added.
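Treating gas-particle partitioning as a kinetic process, as the scheme above does, amounts to integrating uptake and off-gassing fluxes. A minimal sketch with hypothetical rate constants (chosen so that k_on/k_off = 1000, echoing the >99% particle-phase result); the chamber model's actual rates and coupling to the photochemistry are not reproduced here.

```python
# Kinetic gas-particle partitioning: a product moves between gas (Cg) and
# particle (Cp) phases; at steady state Cp/Cg -> k_on/k_off.
k_on, k_off = 5e-3, 5e-6      # s^-1, hypothetical uptake and off-gassing rates
Cg, Cp = 1.0, 0.0             # arbitrary concentration units
dt, t_end = 1.0, 3600.0       # 1 s explicit Euler steps over one hour

t = 0.0
while t < t_end:
    flux = k_on * Cg - k_off * Cp   # net uptake to the particle phase
    Cg -= flux * dt
    Cp += flux * dt
    t += dt

print(f"fraction in particle phase after 1 h: {Cp / (Cg + Cp):.3f}")
# With k_on/k_off = 1000, >99% ends up particle-bound, mirroring the paper's
# finding for the product silanols.
```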
Sasaki, Kotaro; Rispin, Karen
2017-01-01
In under-resourced settings where motorized wheelchairs are rarely available, manual wheelchair users with limited upper-body strength and functionality need to rely on assisting pushers for their mobility. Because traveling surfaces in under-resourced settings are often unpaved and rough, wheelchair pushers can experience high physiological loading. In order to evaluate pushers' physiological loading and to improve wheelchair designs, we built indoor modular units that simulate rough surface conditions and tested the hypothesis that pushing different wheelchairs would result in different physiological performance and pusher perception of difficulty on the simulated rough surface. Eighteen healthy subjects pushed two different types of pediatric wheelchairs (Moti-Go, manufactured by Motivation, and KidChair, by Hope Haven) fitted with a 50-kg dummy on the rough and smooth surfaces at self-selected speeds. Oxygen uptake, traveling distance over 6 minutes, and the rating of difficulty were obtained. The results supported our hypothesis, showing that pushing Moti-Go on the rough surface was physiologically less demanding than pushing KidChair, but on the smooth surface the two wheelchairs did not differ significantly. These results indicate that wheelchair designs intended to improve pushers' performance in under-resourced settings should be evaluated on rough surfaces.
NASA Astrophysics Data System (ADS)
Chandramouli, Bharadwaj; Jang, Myoseon; Kamens, Richard M.
The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the α-pinene-O3 reaction were augmented by carrying out smog chamber partitioning experiments on aerosols from meat cooking and from catalyzed and uncatalyzed gasoline engine exhaust. Model compositions for aerosols from meat cooking and gasoline combustion emissions were used to calculate activity coefficients for the SOCs in the organic aerosols, and the Pankow absorptive gas/particle partitioning model was used to calculate the partitioning coefficient Kp and quantitate the predictive improvements of using the activity coefficient. The slope of the log Kp vs. log pL0 correlation for partitioning on aerosols from meat cooking improved from -0.81 to -0.94 after incorporation of the activity coefficients γi,om. A stepwise regression analysis of the partitioning model revealed that, for the data set used in this study, partitioning predictions on α-pinene-O3 secondary aerosol and wood combustion aerosol showed statistically significant improvement after incorporation of γi,om, which can be attributed to their overall polarity. The partitioning model was sensitive to changes in aerosol composition when updated compositions for α-pinene-O3 aerosol and wood combustion aerosol were used. The effectiveness of the octanol-air partitioning coefficient (KOA) as a partitioning correlator over a variety of aerosol types was evaluated. The slope of the log Kp vs. log KOA correlation was not constant over the aerosol types and SOCs used in the study, and the use of KOA for partitioning correlations can potentially lead to significant deviations, especially for polar aerosols.
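The Pankow absorptive partitioning model used here has a standard closed form; a hedged implementation follows, with Kp in m3/µg, the subcooled liquid vapor pressure pL0 in torr, and γ the activity coefficient of the solute in the organic matter phase. The example compound and parameter values are illustrative only.

```python
# Pankow absorptive gas/particle partitioning:
#   Kp = 760 * R * T * f_om / (MW_om * gamma * pL0 * 1e6)
R = 8.206e-5  # m3 atm mol^-1 K^-1

def pankow_kp(T, f_om, mw_om, gamma, p_L0_torr):
    """Absorptive gas-particle partitioning coefficient Kp (m3/ug)."""
    return 760.0 * R * T * f_om / (mw_om * gamma * p_L0_torr * 1e6)

# hypothetical semivolatile: pL0 = 1e-4 torr at 298 K, ideal vs. non-ideal
for gamma in (1.0, 5.0):
    kp = pankow_kp(T=298.0, f_om=0.8, mw_om=200.0, gamma=gamma, p_L0_torr=1e-4)
    print(f"gamma = {gamma}: Kp = {kp:.3e} m3/ug")
# A fivefold activity coefficient lowers Kp fivefold, which is why predictions
# on polar aerosols improve once gamma is estimated rather than assumed constant.
```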
A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses
Zhang, Chao; Li, Deyu; Yan, Yan
2015-01-01
In medical science, disease diagnosis is one of the difficult tasks for medical experts, who are confronted with challenges in dealing with a great deal of uncertain medical information. Different medical experts might also express their own views of the medical knowledge base, which may differ slightly from those of other experts. Thus, to solve the problems of uncertain data analysis and group decision making in disease diagnosis, we propose a new rough set model, called the dual hesitant fuzzy multigranulation rough set over two universes, obtained by combining the dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach which is applied to a decision making problem in disease diagnosis, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772
An Efficient Soft Set-Based Approach for Conflict Analysis
Sutoyo, Edi; Mungad, Mungad; Hamid, Suraya; Herawan, Tutut
2016-01-01
Conflict analysis has been used as an important tool in economic, business, governmental, and political disputes, games, management negotiations, and military operations. Many mathematical formal models have been proposed to handle conflict situations, and one of the most popular is rough set theory. With its ability to handle the vagueness in conflict data sets, rough set theory has been used successfully. However, computational time is still an issue when determining the certainty, coverage, and strength of conflict situations. In this paper, we present an alternative approach to handling conflict situations, based on ideas from soft set theory. The novelty of the proposed approach is that, unlike rough set theory, which uses decision rules, it is based on the concept of co-occurrence of parameters in soft set theory. We illustrate the proposed approach by means of a tutorial example of voting analysis in conflict situations. Furthermore, we evaluate the proposed approach on a real-world data set of political conflict in the Indonesian Parliament. We show that the proposed approach achieves computational times up to 3.9% lower than rough set theory. PMID:26928627
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
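A hedged sketch of the maximin idea behind the exact linkage algorithm: repeatedly merge the two clusters whose union has the largest minimum pairwise posterior co-assignment probability, recording that probability as the node height. The published algorithm may differ in detail, and the probability matrix below is synthetic.

```python
import itertools
import numpy as np

P = np.array([[1.0, 0.9, 0.8, 0.1],
              [0.9, 1.0, 0.7, 0.2],
              [0.8, 0.7, 1.0, 0.1],
              [0.1, 0.2, 0.1, 1.0]])   # posterior co-assignment probabilities

clusters = [frozenset([i]) for i in range(len(P))]
while len(clusters) > 1:
    def height(a, b):
        """Minimum pairwise co-assignment probability within the merged set."""
        union = a | b
        return min(P[i, j] for i, j in itertools.combinations(union, 2))
    a, b = max(itertools.combinations(clusters, 2), key=lambda ab: height(*ab))
    h = height(a, b)
    clusters.remove(a); clusters.remove(b); clusters.append(a | b)
    print(f"merge {set(a)} and {set(b)} at height {h:.2f}")
```

Printed heights decrease down the tree (0.9, then 0.7, then 0.1 here), giving the dendrogram-like visual summary of assignment uncertainty described above.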
ERIC Educational Resources Information Center
McCain, Daniel F.; Allgood, Ottie E.; Cox, Jacob T.; Falconi, Audrey E.; Kim, Michael J.; Shih, Wei-Yu
2012-01-01
Only a few pedagogical experiments have been published dealing specifically with the hydrophobic interaction though it plays a central role in biochemistry. A set of experiments is presented in which students partition a variety of colorful indicator dyes in biphasic water/organic solvent mixtures. Students monitor the partitioning visually and…
Lost in the supermarket: Quantifying the cost of partitioning memory sets in hybrid search.
Boettcher, Sage E P; Drew, Trafton; Wolfe, Jeremy M
2018-01-01
The items on a memorized grocery list are not relevant in every aisle; for example, it is useless to search for the cabbage in the cereal aisle. It might be beneficial if one could mentally partition the list so only the relevant subset was active, so that vegetables would be activated in the produce section. In four experiments, we explored observers' abilities to partition memory searches. For example, if observers held 16 items in memory, but only eight of the items were relevant, would response times resemble a search through eight or 16 items? In Experiments 1a and 1b, observers were not faster for the partition set; however, they suffered relatively small deficits when "lures" (items from the irrelevant subset) were presented, indicating that they were aware of the partition. In Experiment 2 the partitions were based on semantic distinctions, and again, observers were unable to restrict search to the relevant items. In Experiments 3a and 3b, observers attempted to remove items from the list one trial at a time but did not speed up over the course of a block, indicating that they also could not limit their memory searches. Finally, Experiments 4a, 4b, 4c, and 4d showed that observers were able to limit their memory searches when a subset was relevant for a run of trials. Overall, observers appear to be unable or unwilling to partition memory sets from trial to trial, yet they are capable of restricting search to a memory subset that remains relevant for several trials. This pattern is consistent with a cost to switching between currently relevant memory items.
Hailstone classifier based on Rough Set Theory
NASA Astrophysics Data System (ADS)
Wan, Huisong; Jiang, Shuming; Wei, Zhiqiang; Li, Jian; Li, Fengjiao
2017-09-01
Rough Set Theory was used for the construction of a hailstone classifier. First, a database of radar image features was constructed: the base data reflected by the Doppler radar were transformed into a viewable bitmap format. Then, through image processing, the color, texture, shape, and other dimensional features were extracted and saved as the feature database, providing data support for the follow-up work. Second, a hailstone classifier was built using Rough Set Theory to achieve automatic classification of the hailstone samples.
Uncertainty Modeling for Database Design using Intuitionistic and Rough Set Theory
2009-01-01
Definition. An intuitionistic rough relation R is a subset of the set cross product P(D1) × P(D2) × ··· × P(Dm) × Dμ × Dν. For a specific relation, R ... that aj ∈ dij for all j. The interpretation space is the cross product D1 × D2 × ··· × Dm × Dμ × Dν but is limited for a given relation R to the set ... systems, Journal of Information Science 11 (1985), 77-87. [7] T. Beaubouef and F. Petry, Rough Querying of Crisp Data in Relational Databases, Third
NASA Astrophysics Data System (ADS)
Grimmond, C. S. B.; Salmond, J. A.; Oke, T. R.; Offerle, B.; Lemonsu, A.
2004-12-01
Eddy covariance (EC) observations above the densely built-up center of Marseille during the Expérience sur site pour contraindre les modèles de pollution atmosphérique et de transport d'émissions (ESCOMPTE) summertime measurement campaign extend current understanding of surface atmosphere exchanges in cities. The instrument array presented opportunities to address issues of the representativeness of local-scale fluxes in urban settings. Separate EC systems operated at two levels, and a telescoping tower allowed the pair to be exposed at two different sets of heights. The flux and turbulence observations taken at the four heights, stratified by wind conditions (mistral wind and sea breeze), are used to address the partitioning of the surface energy balance in an area with large roughness elements. The turbulent sensible heat flux dominates in the daytime, although the storage heat flux is a significant term that peaks before solar noon. The turbulent latent heat flux is small but not negligible. Carbon dioxide fluxes show that this central city district is almost always a source, but the vegetation reduces the magnitude of the fluxes in the afternoon. The atmosphere in such a heavily developed area is rarely stable. The turbulence characteristics support the empirical functions proposed by M. Roth.
Half a century of research on Garner interference and the separability-integrality distinction.
Algom, Daniel; Fitousi, Daniel
2016-12-01
Research in the allied domains of selective attention and perceptual independence has made great advances over the past 5 decades ensuing from the foundational ideas and research conceived by Wendell R. Garner. In particular, Garner's speeded classification paradigm has received considerable attention in psychology. The paradigm is widely used to inform research and theory in various domains of cognitive science. It was Garner who provided the consensual definition of the separable-integral partition of stimulus dimensions, delineating a set of converging operations sustaining the distinction. This distinction is a pillar of today's cognitive science. We review the key ideas, definitions, and findings along 2 paths of the evolution of Garnerian research: selective attention, with a focus on Garner interference and its relation to the Stroop effect, and divided attention, with a focus on perceptual independence gauged by multivariate models of perception. The review tracks developments in a roughly chronological order. Our review is also integrative, as we follow the evolution of a set of nascent ideas into the vast multifaceted enterprise that they comprise today. Finally, the review is also critical, as we highlight problems, inconsistencies, and deviations from original intent in the various studies.
Does History Repeat Itself? Wavelets and the Phylodynamics of Influenza A
Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.
2012-01-01
Unprecedented global surveillance of viruses will result in massive sequence data sets that require new statistical methods. These data sets press the limits of Bayesian phylogenetics as the high-dimensional parameters that comprise a phylogenetic tree increase the already sizable computational burden of these techniques. This burden often results in partitioning the data set, for example, by gene, and inferring the evolutionary dynamics of each partition independently, a compromise that results in stratified analyses that depend only on data within a given partition. However, parameter estimates inferred from these stratified models are likely strongly correlated, considering they rely on data from a single data set. To overcome this shortfall, we exploit the existing Monte Carlo realizations from stratified Bayesian analyses to efficiently estimate a nonparametric hierarchical wavelet-based model and learn about the time-varying parameters of effective population size that reflect levels of genetic diversity across all partitions simultaneously. Our methods are applied to complete genome influenza A sequences that span 13 years. We find that broad peaks and trends, as opposed to seasonal spikes, in the effective population size history distinguish individual segments from the complete genome. We also address hypotheses regarding intersegment dynamics within a formal statistical framework that accounts for correlation between segment-specific parameters. PMID:22160768
The Research of Tax Text Categorization based on Rough Set
NASA Astrophysics Data System (ADS)
Liu, Bin; Xu, Guang; Xu, Qian; Zhang, Nan
To solve the problem of effective categorization of text data in the taxation system, this paper first analyzes the text data and the key issue of size calculation, and then designs a text categorization scheme based on a rough set model.
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
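For readers unfamiliar with the cost landscape involved: in the set partition problem, each assignment of signs (a "spin configuration") has an energy equal to the residue of the partition, and the adiabatic algorithm seeks the ground state of a Hamiltonian with these energies. A brute-force classical sketch on a toy instance:

```python
from itertools import product

def best_partition(numbers):
    """Exhaustively find the sign assignment minimising the residue
    |sum(S) - sum(complement)| -- the 'ground-state energy' of the instance."""
    best_residue, best_signs = None, None
    for signs in product((+1, -1), repeat=len(numbers)):   # 2^n spin settings
        residue = abs(sum(s * x for s, x in zip(signs, numbers)))
        if best_residue is None or residue < best_residue:
            best_residue, best_signs = residue, signs
    return best_residue, best_signs

nums = [8, 7, 6, 5, 4]
residue, signs = best_partition(nums)
print("residue:", residue)                            # 0 => perfect partition
print("subset:", [x for s, x in zip(signs, nums) if s > 0])
```

The exponential size of this search space is exactly what makes the scaling of the minimal spectral gap the decisive quantity for the quantum algorithm.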
Optimal partitioning of random programs across two processors
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
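The approximation at the center of the analysis, replacing the expected maximum of independent module execution times with the maximum of their expectations, is easy to probe numerically. A small simulation sketch with an illustrative exponential execution-time distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
times = rng.exponential(scale=1.0, size=(100_000, 8))   # 8 modules, i.i.d. draws

exp_of_max = times.max(axis=1).mean()   # E[max of the 8 execution times]
max_of_exp = times.mean(axis=0).max()   # max of the 8 expected times
print(f"E[max] = {exp_of_max:.2f}, max E = {max_of_exp:.2f}")
# E[max] >= max E always (a Jensen-type inequality); the gap between the two
# is what the approximation-free two-processor proof has to work around.
```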
Padró, Juan M; Ponzinibbio, Agustín; Mesa, Leidy B Agudelo; Reta, Mario
2011-03-01
The partition coefficients, P(IL/w), for different probe molecules as well as for compounds of biological interest between the room-temperature ionic liquids (RTILs) 1-butyl-3-methylimidazolium hexafluorophosphate, [BMIM][PF(6)], 1-hexyl-3-methylimidazolium hexafluorophosphate, [HMIM][PF(6)], 1-octyl-3-methylimidazolium tetrafluoroborate, [OMIM][BF(4)], and water were accurately measured. [BMIM][PF(6)] and [OMIM][BF(4)] were synthesized by adapting a procedure from the literature to a simpler, single-vessel, and faster methodology with much lower consumption of organic solvent. We employed the solvation-parameter model to elucidate the general chemical interactions involved in RTIL/water partitioning. For this purpose, we selected different solute descriptor parameters that measure polarity, polarizability, hydrogen-bond-donor and hydrogen-bond-acceptor interactions, and cavity formation for a set of specifically selected probe molecules (the training set). The obtained multiparametric equations were used to predict the partition coefficients for compounds not present in the training set (the test set), most being of biological interest. Partial solubility of the ionic liquid in water (and of water in the ionic liquid) was taken into account to explain the obtained results; this fact has not been deeply considered to date. Solute descriptors were obtained from the literature, when available, or else calculated with commercial software. An excellent agreement between calculated and experimental log P(IL/w) values was obtained, demonstrating that the resulting multiparametric equations are robust and allow prediction of partitioning for any organic molecule in the biphasic systems studied.
Mode entanglement of Gaussian fermionic states
NASA Astrophysics Data System (ADS)
Spee, C.; Schwaiger, K.; Giedke, G.; Kraus, B.
2018-04-01
We investigate the entanglement of n -mode n -partite Gaussian fermionic states (GFS). First, we identify a reasonable definition of separability for GFS and derive a standard form for mixed states, to which any state can be mapped via Gaussian local unitaries (GLU). As the standard form is unique, two GFS are equivalent under GLU if and only if their standard forms coincide. Then, we investigate the important class of local operations assisted by classical communication (LOCC). These are central in entanglement theory as they allow one to partially order the entanglement contained in states. We show, however, that there are no nontrivial Gaussian LOCC (GLOCC) among pure n -partite (fully entangled) states. That is, any such GLOCC transformation can also be accomplished via GLU. To obtain further insight into the entanglement properties of such GFS, we investigate the richer class of Gaussian stochastic local operations assisted by classical communication (SLOCC). We characterize Gaussian SLOCC classes of pure n -mode n -partite states and derive them explicitly for few-mode states. Furthermore, we consider certain fermionic LOCC and show how to identify the maximally entangled set of pure n -mode n -partite GFS, i.e., the minimal set of states having the property that any other state can be obtained from one state inside this set via fermionic LOCC. We generalize these findings also to the pure m -mode n -partite (for m >n ) case.
Correlation of bond strength with surface roughness using a new roughness measurement technique.
Winkler, M M; Moore, B K
1994-07-01
The correlation between shear bond strength and surface roughness was investigated using new surface measurement methods. Bonding agents and associated resin composites were applied to set amalgam after mechanically roughening its surface. Surface treatments were none (as set against glass), 80-grit, and 600-grit abrasive paper. Surface roughness (R(a)), measured parallel and perpendicular (+) to the direction of the polishing scratches, and true profile length were recorded. A knife-edge was applied (rate = 2.54 mm/min) at the bonding agent/amalgam interface of each sample until failure. Coefficients of determination for mean bond strength vs. either roughness (R(a)) or profile length were significantly higher for measurements in the parallel direction than for measurements in the (+) direction. The shear bond strength to set amalgam of a PENTA-containing adhesive system (L.D. Caulk Division) was not significantly different from that of a PENTA-free adhesive (3M Dental Products Division), even though PENTA has been reported to increase bond strength to nonprecious metals. The shear bond strength of resin composite to amalgam is correlated with surface roughness when roughness is measured parallel to the polishing scratches. This correlation is significantly lower when surface roughness is measured in the typical manner, perpendicular to the polishing scratches.
Task-specific image partitioning.
Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D
2013-02-01
Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used for performing high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework that produces a region-based image representation leading to higher task performance than task-oblivious partitioning frameworks and the few existing supervised partitioning frameworks. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function that define task-specific similarity/dissimilarity among superpixels are estimated with a structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to better generalization, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitions computed by state-of-the-art general-purpose and supervised partitioning algorithms. We believe the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.
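As a toy illustration of the core step, the greedy agglomerative merge below is a common heuristic stand-in for the exact correlation-clustering optimization used in the paper; the superpixel graph and its signed affinities (positive = merge, negative = separate) are invented.

    import itertools

    # Hypothetical superpixel affinities: positive weight favors merging, negative splitting.
    affinity = {(0, 1): 2.0, (1, 2): 1.5, (2, 3): -1.0, (0, 2): 0.5, (1, 3): -2.0, (0, 3): -0.5}
    nodes = {0, 1, 2, 3}

    def gain(a, b, clusters):
        """Total affinity between two clusters; merging is worthwhile only if positive."""
        return sum(affinity.get((min(i, j), max(i, j)), 0.0)
                   for i in clusters[a] for j in clusters[b])

    clusters = {n: {n} for n in nodes}
    while True:
        pairs = [(gain(a, b, clusters), a, b) for a, b in itertools.combinations(clusters, 2)]
        best, a, b = max(pairs)
        if best <= 0:
            break  # no merge improves the correlation-clustering objective
        clusters[a] |= clusters.pop(b)
    print(list(clusters.values()))  # e.g., [{0, 1, 2}, {3}]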
USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS
A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...
Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio
2010-04-01
Three different splits of 55 antineoplastic agents into a subtraining set (n = 22), a calibration set (n = 21), and a test set (n = 12) have been examined. Using the correlation balance of SMILES-based optimal descriptors, quite satisfactory models for the octanol/water partition coefficient were obtained on all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both maximal values of the correlation coefficient for the subtraining and calibration sets and a minimal difference between these correlation coefficients. Thus, the calibration set serves as a preliminary test set. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.
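A sketch of a correlation-balance target function consistent with the description above; the exact functional form is not given in the abstract, so the combination below (reward both correlations, penalize their gap) is an assumption.

    import numpy as np

    def correlation_balance(y_sub, yhat_sub, y_cal, yhat_cal):
        """Target to maximize: both correlations high, their difference small (assumed form)."""
        r_sub = np.corrcoef(y_sub, yhat_sub)[0, 1]
        r_cal = np.corrcoef(y_cal, yhat_cal)[0, 1]
        return r_sub + r_cal - abs(r_sub - r_cal)

    # Hypothetical one-variable model: logP ~ a * descriptor + b, scored on both sets.
    rng = np.random.default_rng(0)
    d_sub, d_cal = rng.normal(size=22), rng.normal(size=21)
    y_sub, y_cal = 1.8 * d_sub + 0.3, 1.8 * d_cal + 0.3
    print(correlation_balance(y_sub, 1.7 * d_sub, y_cal, 1.7 * d_cal))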
Equivalence of partition properties and determinacy
Kechris, Alexander S.; Woodin, W. Hugh
1983-01-01
It is shown that, within L(ℝ), the smallest inner model of set theory containing the reals, the axiom of determinacy is equivalent to the existence of arbitrarily large cardinals below Θ with the strong partition property κ → (κ)^κ. PMID:16593299
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the algorithm is modified to provide even better performance: the enhanced set partitioning in hierarchical trees (ESPIHT) algorithm runs faster than SPIHT and reduces the number of bits in the stored or transmitted bit stream. The method was applied to compression of multichannel ECG data, together with a specific procedure, based on the modified algorithm, for more efficient compression of such data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for compression of multichannel ECG data. Furthermore, to compress a single signal stored over a long period, the proposed multichannel compression method can be utilized efficiently.
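At the heart of SPIHT (and variants such as ESPIHT) is a bit-plane significance test on sets of wavelet coefficients: a set is significant at bit-plane n if some coefficient magnitude reaches 2^n. A generic illustration of this test, not the ESPIHT variant itself:

    import numpy as np

    def significant(coeffs, n):
        """SPIHT-style significance test: does any |coefficient| reach 2**n?"""
        return np.max(np.abs(coeffs)) >= 2 ** n

    coeffs = np.array([3.0, -18.0, 7.5, 60.0])  # placeholder wavelet coefficients
    n_max = int(np.floor(np.log2(np.max(np.abs(coeffs)))))  # starting bit-plane
    for n in range(n_max, -1, -1):
        sig = np.abs(coeffs) >= 2 ** n
        # Coding proceeds plane by plane, refining the significant coefficients.
        print(f"bit-plane {n}: significant coefficients at indices {np.flatnonzero(sig)}")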
A rough set approach for determining weights of decision makers in group decision making.
Yang, Qiang; Du, Ping-An; Wang, Yong; Liang, Bin
2017-01-01
This study aims to present a novel approach for determining the weights of decision makers (DMs) based on rough group decision in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrixes on the basis of rough set theory. After that, we derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower and upper limit matrixes of the rough group decision. Then, we obtain the weight of each group member and the priority order of alternatives by the relative closeness method, which depends on the distances from each individual group member's decision to the PIS and NISs. Through comparisons with existing methods and an on-line business manager selection example, the proposed method is shown to provide more insight into the subjectivity and vagueness of DMs' evaluations and selections.
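A numeric sketch of the weighting step described above: each DM's matrix is scored by its distances to the PIS and NISs, and the relative closeness values are normalized into weights. The matrices and the Euclidean distance choice are illustrative assumptions; the paper's exact formulas may differ.

    import numpy as np

    # Hypothetical decision matrices (alternatives x attributes), one per decision maker.
    dms = [np.array([[0.7, 0.5], [0.4, 0.9]]),
           np.array([[0.6, 0.6], [0.5, 0.8]]),
           np.array([[0.9, 0.3], [0.2, 1.0]])]

    pis = np.mean(dms, axis=0)  # positive ideal: average of the group decisions
    nis_lo, nis_hi = np.min(dms, axis=0), np.max(dms, axis=0)  # limit matrices as NISs

    def closeness(m):
        d_plus = np.linalg.norm(m - pis)  # distance to PIS
        d_minus = min(np.linalg.norm(m - nis_lo), np.linalg.norm(m - nis_hi))
        return d_minus / (d_plus + d_minus)

    c = np.array([closeness(m) for m in dms])
    weights = c / c.sum()  # DMs closer to the group consensus get larger weights
    print("DM weights:", weights)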
NASA Technical Reports Server (NTRS)
Worm, Jeffrey A.; Culas, Donald E.
1991-01-01
Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. This paper examines the concepts of statistical analysis, the Dempster-Shafer theory, rough set theory, and fuzzy set theory to address this problem. The fundamentals of these theories are combined to approach an optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how much we believe each rule is constructed. Building on this, the degree to which a fuzzy diagnosis is definable in terms of its fuzzy attributes is studied.
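A minimal illustration of the rough-set ingredient of this combination: the lower approximation of a concept yields certain rules and the upper approximation yields possible rules. The tiny decision table is invented.

    # Objects described by one attribute; decision class "faulty" to approximate.
    table = {1: "high", 2: "high", 3: "low", 4: "low", 5: "mid"}
    faulty = {1, 2, 3}  # decision class

    # Indiscernibility classes: objects sharing the same attribute value.
    blocks = {}
    for obj, val in table.items():
        blocks.setdefault(val, set()).add(obj)

    lower = set().union(*(b for b in blocks.values() if b <= faulty))  # certain region
    upper = set().union(*(b for b in blocks.values() if b & faulty))   # possible region
    print("certain rules cover:", lower)   # {1, 2} -> "high => faulty" is certain
    print("possible rules cover:", upper)  # {1, 2, 3, 4} -> "low => possibly faulty"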
The influence of surface roughness of deserts on the July circulation - A numerical study
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Smith, W. E.
1985-01-01
The effect of the low surface roughness characteristics of deserts on atmospheric circulation in July is examined using numerical simulations with the GCM of the Goddard Laboratory for Atmospheric Science (GLAS). Identical sets of simulations were carried out with the model starting from the initial state of the atmosphere on June 15, for the years 1979 and 1980. The first simulation included a surface roughness factor of 45 cm, and the second set had a surface roughness factor of 0.02 cm for desert regions, and 45 cm for all other land. A comparative analysis of the numerical data was carried out in order to study the variations for the desert regions. It is shown that rainfall in the Sahara desert was reduced significantly in the data set with the nonuniform surface roughness factor in comparison with the other data set. The inter-tropical convergence zone (ITCZ) moved southward to about 15 degrees, which was close to its observed location at about 10 degrees N. In other deserts, the North American Great Plains, Rajputana in India, and the Central Asian desert, no similar changes were observed. Detailed contour maps of the weather conditions in the different desert regions are provided.
NASA Astrophysics Data System (ADS)
López-Legentil, S.; Pawlik, J. R.
2009-03-01
In recent years, reports of sponge bleaching, disease, and subsequent mortality have increased alarmingly. Population recovery may depend strongly on the colonization capabilities of the affected species. The giant barrel sponge Xestospongia muta is a dominant reef constituent in the Caribbean. However, little is known about its population structure and gene flow. The 5'-end fragment of the mitochondrial gene cytochrome oxidase subunit I (COI) is often used to address these kinds of questions, but it presents very low intraspecific nucleotide variability in sponges. In this study, the usefulness of the I3-M11 partition of COI to determine the genetic structure of X. muta was tested for seven populations from Florida, the Bahamas and Belize. A total of 116 sequences of 544 bp were obtained for the I3-M11 partition, corresponding to four haplotypes. In order to make a comparison with the 5'-end partition, 10 sequences per haplotype were analyzed for this fragment. The 40 resulting sequences were of 569 bp and corresponded to two haplotypes. The nucleotide diversity of the I3-M11 partition (π = 0.00386) was higher than that of the 5'-end partition (π = 0.00058), indicating better resolution at the intraspecific level. Sponges with the most divergent external morphologies (smooth vs. digitate surface) had different haplotypes, while those with the most common external morphology (rough surface) presented a mixture of haplotypes. Pairwise tests for genetic differentiation among geographic locations based on F_ST values showed significant genetic divergence between most populations, but this genetic differentiation was not due to isolation by distance. While limited larval dispersal may have led to differentiation among some of the populations, the patterns of genetic structure appear to be most strongly related to patterns of ocean currents. Therefore, hydrological features may play a major role in sponge colonization and need to be considered in future plans for management and conservation of these important components of coral reef ecosystems.
Priority of road maintenance management based on halda reading range on NAASRA method
NASA Astrophysics Data System (ADS)
Surbakti, M.; Doan, A.
2018-02-01
The road pavement constantly experiences stress and strain due to the traffic load passing over it, which can damage the pavement. Early detection and repair of such damage can prevent more severe deterioration that may develop into pavement failure. A road condition survey is one of the earliest means of detecting initial pavement damage. Driving comfort, which is affected by the level of road surface roughness, is the most important factor for the driver in assessing road conditions. One method developed to determine road roughness is the NAASRA method, in which roughness is accumulated as the average unevenness of the road, with the halda (distance counter) generally set to 100 m readings. With this 100 m setting, however, the final roughness value in some places is too large or too small, which distorts road maintenance priorities. This motivates comparing halda settings of 50 m and 200 m against the general setting above. This study uses the International Roughness Index (IRI) to express the level of road stability with respect to driving discomfort; IRI scores were obtained from a direct field survey using a NAASRA roughometer. The final result shows a significant difference between halda readings set at 100 m and those set at 50 m and 200 m. This may lead to differences in handling priorities, which may impact the sustainability of road network maintenance management (Sustainable Road Management).
Convex Regression with Interpretable Sharp Partitions
Petersen, Ashley; Simon, Noah; Witten, Daniela
2016-01-01
We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120
Monkey search algorithm for ECE components partitioning
NASA Astrophysics Data System (ADS)
Kuliev, Elmar; Kureichik, Vladimir; Kureichik, Vladimir, Jr.
2018-05-01
The paper considers one of the important design problems: the partitioning of electronic computer equipment (ECE) components (blocks). The problem belongs to the NP-hard class and has a combinatorial, logical nature. It is formulated as the partition of a graph into parts. To solve it, the authors suggest a bioinspired approach based on a monkey search algorithm. Computational experiments with the developed software show the algorithm's efficiency and its recommended settings, yielding more effective solutions than a genetic algorithm.
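For concreteness, a toy balance-preserving local search for the graph-partitioning formulation: a plain swap heuristic standing in for the monkey search metaheuristic, whose specific operators the abstract does not detail. The netlist graph is invented.

    import itertools

    # Toy netlist: ECE blocks as graph nodes, connections as weighted edges.
    edges = {(0, 1): 3, (1, 2): 1, (2, 3): 4, (0, 3): 1, (1, 3): 2}
    part = {0: 0, 1: 0, 2: 1, 3: 1}  # initial balanced bipartition of the blocks

    def cut_size(p):
        """Total weight of edges crossing the partition."""
        return sum(w for (u, v), w in edges.items() if p[u] != p[v])

    improved = True
    while improved:  # swap-based local search; swapping keeps the parts balanced
        improved = False
        for u, v in itertools.combinations(part, 2):
            if part[u] != part[v]:
                cand = {**part, u: part[v], v: part[u]}  # tentatively swap the two blocks
                if cut_size(cand) < cut_size(part):
                    part, improved = cand, True
    print(part, "cut =", cut_size(part))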
Partitioning Rectangular and Structurally Nonsymmetric Sparse Matrices for Parallel Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
B. Hendrickson; T.G. Kolda
1998-09-01
A common operation in scientific computing is the multiplication of a sparse, rectangular or structurally nonsymmetric matrix and a vector. In many applications the matrix-transpose-vector product is also required. This paper addresses the efficient parallelization of these operations. We show that the problem can be expressed in terms of partitioning bipartite graphs. We then introduce several algorithms for this partitioning problem and compare their performance on a set of test matrices.
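A small sketch of the bipartite-graph view: rows and columns of a sparse rectangular matrix become the two vertex sets and each nonzero becomes an edge, so partitioning the graph assigns rows and columns to processors while cut edges correspond to communication. The matrix is a placeholder.

    import numpy as np
    from scipy.sparse import coo_matrix

    # Hypothetical 3x4 sparse rectangular matrix.
    A = coo_matrix(np.array([[1, 0, 0, 2],
                             [0, 3, 0, 0],
                             [0, 4, 5, 0]]))

    # Bipartite graph: row vertices r0..r2, column vertices c0..c3, one edge per nonzero.
    edges = [(f"r{i}", f"c{j}") for i, j in zip(A.row, A.col)]
    print(edges)
    # Feeding these vertices and edges to a graph/hypergraph partitioner splits rows and
    # columns among processors so that few edges, i.e., little communication, cross parts.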
Springer, M S; Amrine, H M; Burk, A; Stanhope, M J
1999-03-01
We concatenated sequences for four mitochondrial genes (12S rRNA, tRNA valine, 16S rRNA, cytochrome b) and four nuclear genes [aquaporin, alpha 2B adrenergic receptor (A2AB), interphotoreceptor retinoid-binding protein (IRBP), von Willebrand factor (vWF)] into a multigene data set representing 11 eutherian orders (Artiodactyla, Hyracoidea, Insectivora, Lagomorpha, Macroscelidea, Perissodactyla, Primates, Proboscidea, Rodentia, Sirenia, Tubulidentata). Within this data set, we recognized nine mitochondrial partitions (both stems and loops, for each of 12S rRNA, tRNA valine, and 16S rRNA; and first, second, and third codon positions of cytochrome b) and 12 nuclear partitions (first, second, and third codon positions, respectively, of each of the four nuclear genes). Four of the 21 partitions (third positions of cytochrome b, A2AB, IRBP, and vWF) showed significant heterogeneity in base composition across taxa. Phylogenetic analyses (parsimony, minimum evolution, maximum likelihood) based on sequences for all 21 partitions provide 99-100% bootstrap support for Afrotheria and Paenungulata. With the elimination of the four partitions exhibiting heterogeneity in base composition, there is also high bootstrap support (89-100%) for cow + horse. Statistical tests reject Altungulata, Anagalida, and Ungulata. Data set heterogeneity between mitochondrial and nuclear genes is most evident when all partitions are included in the phylogenetic analyses. Mitochondrial-gene trees associate cow with horse, whereas nuclear-gene trees associate cow with hedgehog and these two with horse. However, after eliminating third positions of A2AB, IRBP, and vWF, nuclear data agree with mitochondrial data in supporting cow + horse. Nuclear genes provide stronger support for both Afrotheria and Paenungulata. Removal of third positions of cytochrome b results in improved performance for the mitochondrial genes in recovering these clades.
SGS Dynamics and Modeling near a Rough Wall.
NASA Astrophysics Data System (ADS)
Juneja, Anurag; Brasseur, James G.
1998-11-01
Large-eddy simulation (LES) of the atmospheric boundary layer (ABL) using classical subgrid-scale (SGS) models is known to poorly predict mean shear at the first few grid cells near the rough surface, creating errors which can propagate vertically and contaminate the entire ABL. Our goal was to determine the first-order errors in predicted SGS terms that arise as a consequence of the necessary under-resolution of integral scales and anisotropy at the first few grid levels in LES of rough-wall turbulence. Analyzing the terms predicted by eddy-viscosity and similarity closures against DNS anisotropic datasets of buoyancy- and shear-driven turbulence, we uncover three important issues which should be addressed in the design of SGS closures for rough walls, and we provide a priori tests for the SGS model. Firstly, we identify a strong spurious coupling between the anisotropic structure of the resolved velocity field and predicted SGS dynamics which can create a feedback loop that incorrectly enhances certain components of the predicted resolved velocity. Secondly, we find that eddy-viscosity and similarity SGS models do not contain enough degrees of freedom to capture, at a sufficient level of accuracy, both RS-SGS energy flux and SGS-RS dynamics. Thirdly, to correctly capture pressure transport near a wall, closures must be made more flexible to accommodate proper partitioning between SGS stress divergence and SGS pressure gradient.
Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P
2007-02-08
Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites be assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition-to-transversion ratio, or codon frequencies. For single-gene analysis, partitions might be defined according to protein tertiary structure, and for multiple-gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, the Akaike information criterion (AIC), or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. They are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selecting models by backward elimination rather than AIC or AICc, (ii) using a stringent cut-off, e.g., p = 0.0001, and (iii) conducting a sensitivity analysis of the results. With thoughtful application, fixed-effect codon models should provide a useful tool for large-scale multi-gene analyses.
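The two information criteria named above, in their standard forms; the log-likelihoods and parameter counts below are placeholders:

    def aic(loglik, k):
        """Akaike information criterion: lower is better."""
        return 2 * k - 2 * loglik

    def aicc(loglik, k, n):
        """Small-sample corrected AIC (n = number of observations/sites)."""
        return aic(loglik, k) + (2 * k * (k + 1)) / (n - k - 1)

    # Hypothetical fits: a 2-partition vs a 3-partition fixed-effect codon model.
    models = {"2 partitions": (-10234.5, 12), "3 partitions": (-10221.8, 18)}
    n_sites = 450
    for name, (ll, k) in models.items():
        print(f"{name}: AIC={aic(ll, k):.1f}  AICc={aicc(ll, k, n_sites):.1f}")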
Corrugated megathrust revealed offshore from Costa Rica
NASA Astrophysics Data System (ADS)
Edwards, Joel H.; Kluesner, Jared W.; Silver, Eli A.; Brodsky, Emily E.; Brothers, Daniel S.; Bangs, Nathan L.; Kirkpatrick, James D.; Wood, Ruby; Okamoto, Kristina
2018-03-01
Exhumed faults are rough, often exhibiting topographic corrugations oriented in the direction of slip; such features are fundamental to mechanical processes that drive earthquakes and fault evolution. However, our understanding of corrugation genesis remains limited due to a lack of in situ observations at depth, especially at subducting plate boundaries. Here we present three-dimensional seismic reflection data of the Costa Rica subduction zone that image a shallow megathrust fault characterized by corrugated, and chaotic and weakly corrugated topographies. The corrugated surfaces extend from near the trench to several kilometres down-dip, exhibit high reflection amplitudes (consistent with high fluid content/pressure) and trend 11-18° oblique to subduction, suggesting 15 to 25 mm yr-1 of trench-parallel slip partitioning across the plate boundary. The corrugations form along portions of the megathrust with greater cumulative slip and may act as fluid conduits. In contrast, weakly corrugated areas occur adjacent to active plate bending faults where the megathrust has migrated up-section, forming a nascent fault surface. The variations in megathrust roughness imaged here suggest that abandonment and then reestablishment of the megathrust up-section transiently increases fault roughness. Analogous corrugations may exist along significant portions of subduction megathrusts globally.
Gene selection for tumor classification using neighborhood rough sets and entropy measures.
Chen, Yumin; Zhang, Zunjun; Zheng, Jianzhong; Ma, Ying; Xue, Yu
2017-03-01
With the development of bioinformatics, tumor classification from gene expression data has become an important technology for cancer diagnosis. Since gene expression data often contain thousands of genes and a small number of samples, gene selection becomes a key step in tumor classification. Attribute reduction with rough sets has been successfully applied to gene selection, as it is data-driven and requires no additional information. However, the traditional rough set method deals with discrete data only. Gene expression data containing real-valued or noisy entries are usually subjected to a discretization preprocessing step, which may degrade classification accuracy. In this paper, we propose a novel gene selection method based on the neighborhood rough set model, which can deal with real-valued data while maintaining the original gene classification information. Moreover, this paper introduces an entropy measure within the framework of neighborhood rough sets to tackle the uncertainty and noise of gene expression data; this measure supports the discovery of compact gene subsets. Finally, a gene selection algorithm is designed based on neighborhood granules and the entropy measure. Experiments on two gene expression data sets show that the proposed gene selection method is effective for improving the accuracy of tumor classification. Copyright © 2017 Elsevier Inc. All rights reserved.
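A compact sketch of the neighborhood rough set idea the paper builds on: a sample's δ-neighborhood is the set of samples within distance δ, the positive region collects samples whose entire neighborhood shares their class, and the dependency |positive region| / |samples| scores a candidate gene subset. The data and δ are invented.

    import numpy as np

    def dependency(X, y, delta=0.3):
        """Fraction of samples whose delta-neighborhood is pure in class label."""
        X = np.asarray(X, dtype=float)
        pos = 0
        for i in range(len(X)):
            nbrs = np.linalg.norm(X - X[i], axis=1) <= delta  # delta-neighborhood of i
            pos += np.all(y[nbrs] == y[i])  # consistent neighborhood => positive region
        return pos / len(X)

    # Hypothetical expression values for two candidate gene subsets (rows = samples).
    y = np.array([0, 0, 0, 1, 1, 1])
    gene_a = np.array([[0.1], [0.15], [0.2], [0.8], [0.85], [0.9]])  # separates classes
    gene_b = np.array([[0.4], [0.6], [0.5], [0.45], [0.55], [0.5]])  # does not
    print(dependency(gene_a, y), dependency(gene_b, y))  # higher => better gene subset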
Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems
2015-05-01
of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9...presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and...sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC
NASA Astrophysics Data System (ADS)
Reygondeau, Gabriel; Guieu, Cécile; Benedetti, Fabio; Irisson, Jean-Olivier; Ayata, Sakina-Dorothée; Gasparini, Stéphane; Koubbi, Philippe
2017-02-01
When dividing the ocean, the aim is generally to summarise a complex system into a representative number of units, each representing a specific environment, a biological community or a socio-economic specificity. Recently, several geographical partitions of the global ocean have been proposed using statistical approaches applied to remote sensing or observations gathered during oceanographic cruises. Such geographical frameworks, defined at a macroscale, are hardly applicable for characterising the biogeochemical features of semi-enclosed seas driven by smaller-scale chemical and physical processes. Following Longhurst's biogeochemical partitioning of the pelagic realm, this study investigates the environmental divisions of the Mediterranean Sea using a large set of environmental parameters. These parameters were specified in both the horizontal and vertical dimensions to provide a 3D spatial framework for environmental management (12 regions found for the epipelagic, 12 for the mesopelagic, 13 for the bathypelagic and 26 for the seafloor). We show that: (1) the contribution of the longitudinal environmental gradient to the biogeochemical partitions decreases with depth; (2) the partition of the surface layer cannot be extrapolated to other vertical layers, as each partition is driven by a different set of environmental variables. This new partitioning of the Mediterranean Sea has strong implications for conservation, as it highlights that management must account for the differences in zoning with depth at a regional scale.
ERIC Educational Resources Information Center
Mathematics Teaching, 1972
1972-01-01
Topics discussed in this column include patterns of inverse multipliers in modular arithmetic; diagrams for product sets, set intersection, and set union; function notation; patterns in the number of partitions of positive integers; and tessellations. (DT)
Standard Sizes for Rough-Dimension Exports to Europe and Japan
Philip A. Araman
1987-01-01
In this article, European and Japanese standard-sized rough dimension products are described, and their apparent sizes are listed. One set of proposed standard sizes of rough dimension that could be manufactured in the United States for these markets is presented. Also, the benefits of the production and sale of standard sizes of export rough dimension are highlighted...
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
Spectral Analysis and Experimental Modeling of Ice Accretion Roughness
NASA Technical Reports Server (NTRS)
Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.
1996-01-01
A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique for quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally obtained accretion images to a prescribed test function. Analysis using this technique in both the streamwise and spanwise directions, for data from the NASA Lewis Icing Research Tunnel (IRT), is presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.
Concentration of isoprene in artificial and thylakoid membranes.
Harvey, Christopher M; Li, Ziru; Tjellström, Henrik; Blanchard, Gary J; Sharkey, Thomas D
2015-10-01
Isoprene emission protects plants from a variety of abiotic stresses. It has been hypothesized to do so by partitioning into cellular membranes, particularly the thylakoid membrane. At sufficiently high concentrations, this partitioning may alter the physical properties of membranes. As much as several per cent of the carbon taken up in photosynthesis is re-emitted as isoprene, yet the concentration of isoprene in the thylakoid membrane of rapidly emitting plants has seldom been considered. In this study, the intramembrane concentration of isoprene in phosphatidylcholine liposomes equilibrated to a physiologically relevant gas-phase concentration of 20 μL L(-1) isoprene was lower than predicted by ab initio calculations based on the octanol-water partitioning coefficient of isoprene, while the concentration in thylakoid membranes was higher. However, the concentration in both systems was roughly two orders of magnitude lower than previously assumed. High concentrations of isoprene (2000 μL L(-1) gas phase) failed to alter the viscosity of phosphatidylcholine liposomes as measured with perylene, a molecular probe of membrane structure. These results strongly suggest that the physiological concentration of isoprene within the leaves of highly emitting plants is too low to affect the dynamics of thylakoid membrane acyl lipids. It is speculated that isoprene may instead bind to and modulate the dynamics of thylakoid-embedded proteins.
NASA Technical Reports Server (NTRS)
Palopo, Kee; Lee, Hak-Tae; Chatterji, Gano
2011-01-01
The concept of re-partitioning the airspace into a new set of sectors for allocating capacity, rather than delaying flights to comply with the capacity constraints of a static set of sectors, is being explored. The delay reduction achieved by this concept, a benefit, needs to be greater than the cost of the controllers and equipment needed for the additional sectors. Therefore, tradeoff studies are needed for benefits assessment of this concept.
Architecture Aware Partitioning Algorithms
2006-01-19
follows: Given a graph G = (V, E), where V is the set of vertices, n = |V| is the number of vertices, and E is the set of edges in the graph, partition the... communication link l(p_i, p_j) is associated with a graph edge weight e*(p_i, p_j) that represents the communication cost per unit of communication between... one that is local for each one. For our model we assume that communication in either direction across a given link is the same, therefore e*(p_i, p_j
Heat Transfer Measurements on Surfaces with Natural Ice Castings and Modeled Roughness
NASA Technical Reports Server (NTRS)
Breuer, Kenneth S.; Torres, Benjamin E.; Orr, D. J.; Hansman, R. John
1997-01-01
An experimental method is described to measure and compare the convective heat transfer coefficients of natural and simulated ice accretion roughness and to provide a rational means for determining accretion-related enhanced heat transfer coefficients. The natural ice accretion roughness was a sample casting made from accretions at the NASA Lewis Icing Research Tunnel (IRT). This casting was modeled using a Spectral Estimation Technique (SET) to produce three roughness element patterns that simulate the actual accretion. All four samples were tested in a flat-plate boundary layer at angle of attack in a "dry" wind tunnel test. The convective heat transfer coefficient was measured using infrared thermography. It is shown that, despite some problems in the current data set, the method shows considerable promise for determining roughness-induced heat transfer coefficients, and that, in addition to the roughness height and spacing in the flow direction, the concentration and spacing of elements in the spanwise direction are important parameters.
NASA Astrophysics Data System (ADS)
Hemes, K. S.; Eichelmann, E.; Chamberlain, S.; Knox, S. H.; Oikawa, P.; Sturtevant, C.; Verfaillie, J. G.; Baldocchi, D. D.
2017-12-01
Globally, delta ecosystems are critical for human livelihoods, but are at increasingly greater risk of degradation. The Sacramento-San Joaquin River Delta ('Delta') has been subsiding dramatically, losing close to 100 Tg of carbon since the mid 19th century, due in large part to agriculture-induced oxidation of the peat soils through drainage and cultivation. Efforts to re-wet the peat soils through wetland restoration are attractive as climate mitigation activities. While flooded wetland systems have the potential to sequester significant amounts of carbon as photosynthesis outpaces aerobic respiration, the highly reduced conditions can result in significant methane emissions. This study will utilize three years (2014-2016) of continuous, gap-filled CO2 and CH4 flux data from a mesonetwork of seven eddy covariance towers in the Delta to compute GHG budgets for the restored wetlands and agricultural baseline sites measured. Along with the biogeochemical impacts of wetland restoration, biophysical impacts such as changes in reflectance, energy partitioning, and surface roughness can have significant local to regional impacts on air temperature and heat fluxes. We hypothesize that despite flooded wetlands reducing albedo, wetland land cover will cool the near-surface air temperature, because increased net radiation is preferentially partitioned into latent heat flux and rougher canopy conditions allow more turbulent mixing with the atmosphere. This study will investigate the seasonal and diurnal patterns of turbulent energy fluxes and the surface properties that drive them. With nascent policy mechanisms set to compensate landowners and farmers for low-emission land use practices beyond reforestation, it is essential that these mechanisms take into consideration how the biophysical impacts of land use change could drive local- to regional-scale climatic perturbations, enhancing or attenuating the biogeochemical impacts.
Estimation of octanol/water partition coefficients using LSER parameters
Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.
1998-01-01
The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, though only moderate accuracy should be expected.
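A sketch of the regression described above: fit logKow on five LSER-style descriptors for a training set and predict a held-out test set. Descriptor values are random placeholders and the coefficients are illustrative magnitudes only, not the paper's fitted equation.

    import numpy as np
    from numpy.linalg import lstsq

    rng = np.random.default_rng(42)
    # Placeholder LSER descriptors (E, S, A, B, V) for a hypothetical training set.
    X_train = rng.normal(size=(981, 5))
    true_coef = np.array([0.56, -1.05, 0.03, -3.46, 3.81])  # illustrative magnitudes
    logkow = X_train @ true_coef + 0.09 + rng.normal(0, 0.49, 981)  # noise sd ~ 0.49

    A = np.column_stack([np.ones(len(X_train)), X_train])  # intercept + descriptors
    coef, *_ = lstsq(A, logkow, rcond=None)

    X_test = rng.normal(size=(146, 5))
    pred = np.column_stack([np.ones(len(X_test)), X_test]) @ coef
    print("fitted intercept and coefficients:", np.round(coef, 2))
    print("first test predictions:", np.round(pred[:3], 2))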
Performance review of the ROMI-RIP rough mill simulator
Edward Thomas; Urs Buehlmann
2003-01-01
The USDA Forest Service's ROMI-RIP version 2.0 (RR2) rough mill rip-first simulation program was validated in a recent study. The validation study found that when RR2 was set to search for optimum yield without considering actual rough mill strip solutions, it produced yields that were as much as 7 percent higher (71.1% versus 64.0%) than the actual rough mill....
NASA Astrophysics Data System (ADS)
Haghighi, Erfan; Or, Dani
2015-11-01
Bluff-body obstacles interacting with turbulent airflows are common in many natural and engineering applications (from desert pavement and shrubs over natural surfaces to cylindrical elements in compact heat exchangers). Even with obstacles of simple geometry, their interactions within turbulent airflows result in a complex and unsteady flow field that affects surface drag partitioning and transport of scalars from adjacent evaporating surfaces. Observations of spatio-temporal thermal patterns on evaporating porous surfaces adjacent to bluff-body obstacles depict well-defined and persistent zonation of evaporation rates that were used to construct a simple mechanistic model for surface-turbulence interactions. Results from evaporative drying of sand surfaces with isolated cylindrical elements (bluff bodies) subjected to constant turbulent airflows were in good agreement with model predictions for localized exchange rates. Experimental and theoretical results show persistent enhancement of evaporative fluxes from bluff-rough surfaces relative to smooth flat surfaces under similar conditions. The enhancement is attributed to formation of vortices that induce a thinner boundary layer over part of the interacting surface footprint. For a practical range of air velocities (0.5-4.0 m/s), low-aspect ratio cylindrical bluff elements placed on evaporating sand surfaces enhanced evaporative mass losses (relative to a flat surface) by up to 300% for high density of elements and high wind velocity, similar to observations reported in the literature. Concepts from drag partitioning were used to generalize the model and upscale predictions to evaporation from surfaces with multiple obstacles for potential applications to natural bluff-rough surfaces.
PAQ: Partition Analysis of Quasispecies.
Baccam, P; Thompson, R J; Fedrigo, O; Carpenter, S; Cornette, J L
2001-01-01
The complexities of genetic data may not be accurately described by any single analytical tool. Phylogenetic analysis is often used to study the genetic relationship among different sequences, invoking evolutionary models and assumptions to reconstruct trees that describe the phylogenetic relationship among sequences. Genetic databases are rapidly accumulating large numbers of sequences. Newly acquired sequences, which have not yet been characterized, may require preliminary genetic exploration in order to build models describing the evolutionary relationships among sequences. Clustering techniques that rely less on models of evolution may thus provide useful exploratory tools for identifying genetic similarities. Some of the more commonly used clustering methods perform better when data can be grouped into mutually exclusive groups. Genetic data from viral quasispecies, which consist of closely related variants that differ by small changes, however, may best be partitioned by overlapping groups. We have developed an intuitive exploratory program, Partition Analysis of Quasispecies (PAQ), which utilizes a non-hierarchical technique to partition sequences that are genetically similar. PAQ was used to analyze a data set of human immunodeficiency virus type 1 (HIV-1) envelope sequences isolated from different regions of the brain and another data set consisting of the equine infectious anemia virus (EIAV) regulatory gene rev. Analysis of the HIV-1 data set by PAQ was consistent with phylogenetic analysis of the same data, and the EIAV rev variants were partitioned into two overlapping groups. PAQ provides an additional tool which can be used to glean information from genetic data and can be used in conjunction with other tools to study genetic similarities and genetic evolution of viral quasispecies.
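A toy version of the overlapping, non-hierarchical grouping PAQ performs: choose each variant as a candidate group center and collect every sequence within a distance threshold, so a variant can fall into more than one group. The distance matrix and threshold are invented, and PAQ's actual criteria differ in detail.

    import numpy as np

    # Hypothetical pairwise genetic distances among 5 variants (symmetric matrix).
    D = np.array([[0, 1, 2, 6, 7],
                  [1, 0, 2, 5, 6],
                  [2, 2, 0, 3, 4],
                  [6, 5, 3, 0, 1],
                  [7, 6, 4, 1, 0]], dtype=float)
    threshold = 3.0

    # One group per variant-as-center; groups within the threshold may overlap.
    groups = {c: set(np.flatnonzero(D[c] <= threshold)) for c in range(len(D))}
    unique = {frozenset(g) for g in groups.values()}
    print([sorted(g) for g in unique])  # variant 2 appears in overlapping groups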
Modeling of adipose/blood partition coefficient for environmental chemicals.
Papadaki, K C; Karakitsios, S P; Sarigiannis, D A
2017-12-01
A Quantitative Structure Activity Relationship (QSAR) model was developed in order to predict the adipose/blood partition coefficient of environmental chemical compounds. The first step of QSAR modeling was the collection of inputs. Input data included the experimental values of the adipose/blood partition coefficient and two sets of molecular descriptors for 67 organic chemical compounds: a) descriptors from the Linear Free Energy Relationship (LFER) and b) the PaDEL descriptors. The data were split into training and prediction sets and analysed using two statistical methods: Genetic Algorithm based Multiple Linear Regression (GA-MLR) and Artificial Neural Networks (ANN). The models with LFER and PaDEL descriptors, coupled with ANN, produced satisfactory performance results. The fitting performance (R^2) of the models using LFER and PaDEL descriptors was 0.94 and 0.96, respectively. The Applicability Domain (AD) of the models was assessed, and the models were then applied to a large number of chemical compounds with unknown values of the adipose/blood partition coefficient. In conclusion, the proposed models were checked for fitting, validity and applicability. It was demonstrated that they are stable, reliable and capable of predicting the adipose/blood partition coefficient of "data poor" chemical compounds that fall within the applicability domain. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Bernard, Julien; Eychenne, Julia; Le Pennec, Jean-Luc; Narváez, Diego
2016-08-01
How and how much the mass of juvenile magma is split between vent-derived tephra, PDC deposits and lavas (i.e., mass partition) is related to eruption dynamics and style. Estimating such mass partitioning budgets may prove important for hazard evaluation purposes. We calculated the volume of each product emplaced during the August 2006 paroxysmal eruption of Tungurahua volcano (Ecuador) and converted it into masses using high-resolution grainsize, componentry and density data. This data set is one of the first complete descriptions of mass partitioning associated with a VEI 3 andesitic event. The scoria fall deposit, near-vent agglutinate and lava flow include 28, 16 and 12 wt. % of the erupted juvenile mass, respectively. Much (44 wt. %) of the juvenile material fed Pyroclastic Density Currents (i.e., dense flows, dilute surges and co-PDC plumes), highlighting that tephra fall deposits do not adequately depict the size and fragmentation processes of moderate PDC-forming events. The main parameters controlling the mass partitioning are the type of magmatic fragmentation, the conditions of magma ascent, and the crater area topography. Comparisons of our data set with other PDC-forming eruptions of different style and magma composition suggest that moderate andesitic eruptions are proportionally more prone to produce PDCs than any other eruption type. This finding may be explained by the relatively low magmatic fragmentation efficiency of moderate andesitic eruptions. These mass partitioning data reveal important trends that may be critical for hazard assessment, notably at frequently active andesitic edifices.
Krajewski, C; Fain, M G; Buckley, L; King, D G
1999-11-01
Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogeneous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that this heterogeneity includes variation in base composition and transition bias as well as substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogeneous data sets. Copyright 1999 Academic Press.
NASA Astrophysics Data System (ADS)
Chen, B.; Chehdi, K.; De Oliveria, E.; Cariou, C.; Charbonnier, B.
2015-10-01
In this paper a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground truth data constitute serious handicaps, especially over the large areas that airborne or satellite images can cover. The developed classification approach allows i) successive partitioning of the data into several levels, in which the main classes are identified first; ii) automatic estimation of the number of classes at each level without any end-user help; iii) nonsystematic subdivision of the classes of a partition Pj to form a partition Pj+1; and iv) a stable partitioning result for the same data set from one run of the method to another. The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised: it estimates, at each level, the optimal number of classes and the final partition without any end-user intervention.
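A schematic of the top-down idea, not the authors' algorithm: recursively bisect clusters with 2-means and accept a split only if it reduces within-cluster scatter enough, so the number of classes at each level emerges from the data. The data and the acceptance ratio are invented.

    import numpy as np
    from sklearn.cluster import KMeans

    def split(X, depth=0, max_depth=3, gain=0.5):
        """Recursively bisect a cluster while the 2-means split reduces scatter enough."""
        sse_parent = ((X - X.mean(axis=0)) ** 2).sum()
        if depth == max_depth or len(X) < 4:
            return [X]
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        if km.inertia_ > gain * sse_parent:  # split does not help enough: stop here
            return [X]
        return [c for lbl in (0, 1)
                for c in split(X[km.labels_ == lbl], depth + 1, max_depth, gain)]

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 0.2, size=(50, 4)) for m in (0.0, 1.0, 2.0)])  # 3 classes
    print([len(c) for c in split(X)])  # cluster sizes found without fixing their number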
Abraham, Michael H; Gola, Joelle M R; Ibrahim, Adam; Acree, William E; Liu, Xiangli
2014-07-01
There is considerable interest in the blood-tissue distribution of agrochemicals, and a number of researchers have developed experimental methods for in vitro distribution. These methods involve the determination of saline-blood and saline-tissue partitions; not only are they indirect, but they do not yield the required in vivo distribution. The authors set out equations for gas-tissue and blood-tissue distribution, for partition from water into skin and for permeation from water through human skin. Together with Abraham descriptors for the agrochemicals, these equations can be used to predict values for all of these processes. The present predictions compare favourably with experimental in vivo blood-tissue distribution where available. The predictions require no more than simple arithmetic. The present method represents a much easier and much more economic way of estimating blood-tissue partitions than the method that uses saline-blood and saline-tissue partitions. It has the added advantages of yielding the required in vivo partitions and being easily extended to the prediction of partition of agrochemicals from water into skin and permeation from water through skin. © 2013 Society of Chemical Industry.
3d expansions of 5d instanton partition functions
NASA Astrophysics Data System (ADS)
Nieri, Fabrizio; Pan, Yiwen; Zabzine, Maxim
2018-04-01
We propose a set of novel expansions of Nekrasov's instanton partition functions. Focusing on 5d supersymmetric pure Yang-Mills theory with unitary gauge group on C^2_{q,t^{-1}} × S^1, we show that the instanton partition function admits expansions in terms of partition functions of unitary gauge theories living on the 3d subspaces C_q × S^1, C_{t^{-1}} × S^1 and their intersection along S^1. These new expansions are natural from the BPS/CFT viewpoint, as they can be matched with W_{q,t} correlators involving an arbitrary number of screening charges of two kinds. Our constructions generalize and interpolate existing results in the literature.
NASA Astrophysics Data System (ADS)
Hatch, Spencer M.; LaBelle, James; Chaston, Christopher C.
2018-01-01
We review the role of Alfvén waves in magnetosphere-ionosphere coupling during geomagnetically active periods, and use three years of high-latitude FAST satellite observations of inertial Alfvén waves (IAWs) together with 55 years of tabulated measurements of the Dst index to answer the following questions: 1) How do global rates of IAW-related energy deposition, electron precipitation, and ion outflow during storm main phase and storm recovery phase compare with global rates during geomagnetically quiet periods? 2) What fraction of net IAW-related energy deposition, electron precipitation, and ion outflow is associated with storm main phase and storm recovery phase; that is, how are these budgets partitioned by storm phase? We find that during the period between October 1996 and November 1999, rates of IAW-related energy deposition, electron precipitation, and ion outflow during storm phases are greater than those during geomagnetically quiet periods by factors of 4-5. We also find that ∼62-68% of the net Alfvénic energy deposition, electron precipitation, and ion outflow in the auroral ionosphere occurred during storm main and recovery phases, despite storm phases comprising only 31% of this period. In particular, storm main phase, which comprised less than 14% of the three-year period, was associated with roughly a third of the total Alfvénic energy input and ion outflow in the auroral ionosphere. Measures of geomagnetic activity during the IAW study period fall near the corresponding 55-year median values. We therefore conclude that, in the long term, the fraction of the Alfvénic energy, precipitation, and outflow budgets in the auroral ionosphere associated with each storm phase is probably as great as or greater than the fraction associated with geomagnetic quiescence, except possibly during times when geomagnetic activity is protractedly weak, such as solar minimum. These results suggest that the budgets of IAW-related energy deposition, electron precipitation, and ion outflow are roughly equally partitioned by geomagnetic storm phase.
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
A Novel Method for Discovering Fuzzy Sequential Patterns Using the Simple Fuzzy Partition Method.
ERIC Educational Resources Information Center
Chen, Ruey-Shun; Hu, Yi-Chung
2003-01-01
Discusses sequential patterns, data mining, knowledge acquisition, and fuzzy sequential patterns described by natural language. Proposes a fuzzy data mining technique to discover fuzzy sequential patterns by using the simple partition method which allows the linguistic interpretation of each fuzzy set to be easily obtained. (Author/LRW)
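A small sketch of what a simple fuzzy partition looks like in practice: a numeric attribute is covered by overlapping triangular membership functions with linguistic labels, so each raw value maps to graded memberships. The labels and breakpoints are invented.

    def triangle(x, a, b, c):
        """Triangular membership function peaking at b, zero outside [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Simple fuzzy partition of a purchase-amount attribute into three linguistic terms.
    partition = {"low": (0, 0, 50), "medium": (0, 50, 100), "high": (50, 100, 100)}

    def fuzzify(x):
        return {label: round(triangle(x, *abc), 2) for label, abc in partition.items()}

    print(fuzzify(30))  # e.g., {'low': 0.4, 'medium': 0.6, 'high': 0.0}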
Finding and testing network communities by lumped Markov chains.
Piccardi, Carlo
2011-01-01
Identifying communities (or clusters), namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. Yet, there is a lack of formal criteria for defining communities and for testing their significance. We propose a sharp definition that is based on a quality threshold. By means of a lumped Markov chain model of a random walker, a quality measure called "persistence probability" is associated with a cluster, which is then defined as an "α-community" if this probability is not smaller than α. Consistently, a partition composed of α-communities is an "α-partition." These definitions turn out to be very effective for finding and testing communities. If a set of candidate partitions is available, setting the desired α-level allows one to immediately select the α-partition with the finest decomposition. Simultaneously, the persistence probabilities quantify the quality of each single community. Given its ability to individually assess each single cluster, this approach can also disclose single well-defined communities even in networks that overall do not possess a definite clusterized structure.
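A numeric sketch of the persistence probability: lump the random walker's Markov chain over a candidate cluster and compute the stationary probability of remaining inside it for one step. The small graph and the α threshold are invented.

    import numpy as np

    # Adjacency of a 5-node toy network; random walk transition matrix is row-stochastic.
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
    P = A / A.sum(axis=1, keepdims=True)

    # Stationary distribution of an undirected walk is degree-proportional.
    pi = A.sum(axis=1) / A.sum()

    def persistence(cluster):
        """Probability that a walker in the cluster at stationarity stays one more step."""
        c = list(cluster)
        return (pi[c] @ P[np.ix_(c, c)].sum(axis=1)) / pi[c].sum()

    alpha = 0.7
    for cl in [{0, 1, 2}, {3, 4}, {0, 3}]:
        p = persistence(cl)
        print(sorted(cl), round(p, 3), "alpha-community" if p >= alpha else "below threshold")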
ERIC Educational Resources Information Center
Narli, Serkan; Ozgen, Kemal; Alkan, Huseyin
2011-01-01
The present study aims to identify the relationship between individuals' multiple intelligence areas and their learning styles with mathematical clarity using the concept of rough sets which is used in areas such as artificial intelligence, data reduction, discovery of dependencies, prediction of data significance, and generating decision…
A Mathematical Approach in Evaluating Biotechnology Attitude Scale: Rough Set Data Analysis
ERIC Educational Resources Information Center
Narli, Serkan; Sinan, Olcay
2011-01-01
Individuals' thoughts and attitudes towards biotechnology have been investigated in many countries. A Likert-type scale is the most commonly used scale to measure attitude. However, a weakness of the Likert-type scale is that different responses may produce the same score. The rough set method has been proposed to address this shortcoming. A…
Dynamic connectivity regression: Determining state-related changes in brain connectivity
Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.
2014-01-01
Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408
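DCR couples change-point search with sparse graph estimation (graphical-lasso fits scored by information criteria, with permutation and bootstrap inference); the toy sketch below keeps only the split-and-compare core, scoring each admissible split of the time course by how much the two segment covariances differ. All names and the scoring rule are our simplification, not the authors' code.

```python
import numpy as np

def best_split(X, min_len=20):
    """Score every admissible change point of a (time x ROI) series by the
    Frobenius distance between the covariance matrices estimated on the two
    sides of the split; return the best-scoring time index."""
    T = X.shape[0]
    scores = {t: np.linalg.norm(np.cov(X[:t].T) - np.cov(X[t:].T))
              for t in range(min_len, T - min_len)}
    return max(scores, key=scores.get)
```

Recursing on each side of a detected split yields a temporal partition; the connectivity graph for each partition would then come from a sparse precision-matrix fit on that segment.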
NASA Astrophysics Data System (ADS)
Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil
2001-07-01
We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array detector (FPA) and a Michelson step scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA used consists of 64x64 pixels, each 61 μm², and has a spectral range of 2-10.5 μm. Each imaging data set was collected at 16 cm⁻¹ resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, which demonstrate the successful partitioning of distinct tissue type domains. Additionally, analysis of differences in the centroid spectra corresponding to different tissue types provides an insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.
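For reference, here is a minimal Fuzzy C-Means iteration (the standard algorithm, not the authors' code); each pixel's spectrum would be a row of X, and the cluster centres play the role of the centroid spectra discussed above.

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, n_iter=100, seed=0):
    """Alternate the two classical FCM updates: centroids as membership-
    weighted means, memberships as inverse-distance ratios (fuzzifier m)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # centroid "spectra"
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Classifying a new data set with centroids from the original one, as in the paper's last experiment, amounts to computing only the membership update against fixed centres.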
Rough Play: One of the Most Challenging Behaviors
ERIC Educational Resources Information Center
Carlson, Frances M.
2011-01-01
Most children engage in rough play, and research demonstrates its physical, social, emotional, and cognitive value. Early childhood education settings have the responsibility to provide children with what best serves their developmental needs. One of the best ways teachers can support rough play is by modeling it for children. When adults model…
NASA Astrophysics Data System (ADS)
Topping, D. O.; Lowe, D.; McFiggans, G.; Zaveri, R. A.
2016-12-01
Gas to particle partitioning of atmospheric compounds occurs through disequilibrium mass transfer rather than through instantaneous equilibrium. However, it is common to treat only the inorganic compounds as partitioning dynamically whilst organic compounds, represented by the Volatility Basis Set (VBS), are partitioned instantaneously. In this study we implement a more realistic dynamic partitioning of organic compounds in a regional framework and assess the impact on aerosol mass and microphysics. It is also common to assume condensed phase water is only associated with inorganic components. We thus also assess sensitivity to assuming all organics are hygroscopic according to their prescribed molecular weight. For this study we use WRF-Chem v3.4.1, focusing on anthropogenic dominated North-Western Europe. Gas-phase chemistry is represented using CBM-Z whilst aerosol dynamics are simulated using the 8-section MOSAIC scheme, including a 9-bin volatility basis set (VBS) treatment of organic aerosol. Results indicate that predicted mass loadings can vary significantly. Without gas phase ageing of higher volatility compounds, dynamic partitioning always results in lower mass loadings downwind of emission sources. The inclusion of condensed phase water in both partitioning models increases the predicted PM mass, resulting from a larger contribution from higher volatility organics, if present. If gas phase ageing of VBS compounds is allowed to occur in a dynamic model, this can often lead to higher predicted mass loadings, contrary to expected behaviour from a simple non-reactive gas phase box model. As descriptions of aerosol phase processes improve within regional models, the baseline descriptions of partitioning should retain the ability to treat dynamic partitioning of organic compounds. Using our simulations, we discuss whether derived sensitivities to aerosol processes in existing models may be inherently biased. This work was supported by the Natural Environment Research Council within the RONOCO (NE/F004656/1) and CCN-Vol (NE/L007827/1) projects.
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.
Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method
NASA Astrophysics Data System (ADS)
Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao
2016-09-01
To provide an accurate surface defect inspection method, and to make automated, robust delineation of image regions of interest (ROI) a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system is mainly tied to surface quality inspection for strip, billet, and slab surfaces. In this work we take into account the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough set model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, by which the boundary region can be delineated through the RFC region-competition classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.
Surface roughness measurement in the submicrometer range using laser scattering
NASA Astrophysics Data System (ADS)
Wang, S. H.; Quan, Chenggen; Tay, C. J.; Shang, H. M.
2000-06-01
A technique for measuring surface roughness in the submicrometer range is developed. The principle of the method is based on laser scattering from a rough surface. A telecentric optical setup that uses a laser diode as a light source is used to record the light field scattered from the surface of a rough object. The light intensity distribution of the scattered band, which is correlated to the surface roughness, is recorded by a linear photodiode array and analyzed using a single-chip microcomputer. Several sets of test surfaces prepared by different machining processes are measured and a method for the evaluation of surface roughness is proposed.
Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning
NASA Astrophysics Data System (ADS)
Schumacher, André; Haanpää, Harri
We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with
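The paper's distributed algorithm is analysed against Chvátal's set-cover greedy; as a centralized point of reference (not the authors' primal-dual variant), the greedy for minimum weight dominating set treats each vertex's closed neighbourhood as a set and repeatedly buys the cheapest coverage. The toy graph below is ours.

```python
def greedy_mwds(adj, weight):
    """Greedy MWDS: pick the vertex minimizing weight per newly dominated
    vertex until every vertex is dominated (Chvatal's set-cover greedy on
    closed neighbourhoods)."""
    uncovered, chosen = set(adj), set()
    while uncovered:
        def price(v):
            gain = len(({v} | adj[v]) & uncovered)
            return weight[v] / gain if gain else float("inf")
        v = min(adj, key=price)
        chosen.add(v)
        uncovered -= {v} | adj[v]
    return chosen

# Star graph: the centre dominates everything despite its higher weight.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_mwds(adj, {0: 1.5, 1: 1.0, 2: 1.0, 3: 1.0}))  # {0}
```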
Generalization of multifractal theory within quantum calculus
NASA Astrophysics Data System (ADS)
Olemskoi, A.; Shuda, I.; Borisyuk, V.
2010-03-01
On the basis of the deformed series in quantum calculus, we generalize the partition function and the mass exponent of a multifractal, as well as the average of a random variable distributed over a self-similar set. For the partition function, such an expansion is shown to be determined by binomial-type combinations of the Tsallis entropies related to manifold deformations, while the mass exponent expansion generalizes the known relation τ_q = D_q(q - 1). We find the equation for the set of averages related to ordinary, escort, and generalized probabilities in terms of the deformed expansion as well. Multifractals related to the Cantor binomial set, currency exchange series, and porous-surface condensates are considered as examples.
An intelligent knowledge mining model for kidney cancer using rough set theory.
Durai, M A Saleem; Acharjya, D P; Kannan, A; Iyengar, N Ch Sriman Narayana
2012-01-01
Medical diagnosis processes vary in the degree to which they attempt to deal with different complicating aspects of diagnosis, such as the relative importance of symptoms, varied symptom patterns, and the relations between diseases themselves. The rough set approach has two major advantages over other methods. First, it can handle different types of data, such as categorical and numerical data. Second, it does not make any assumptions, such as a probability distribution function in stochastic modeling or a membership grade function in fuzzy set theory. It involves pattern recognition through logical computational rules rather than approximating them through smooth mathematical functional forms. In this paper we use rough set theory as a data mining tool to derive useful patterns and rules for kidney cancer diagnosis. In particular, historical data from twenty-five research hospitals and medical colleges are used for validation, and the results show the practical viability of the proposed approach.
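The core rough-set machinery such papers rely on is small enough to state in code. Here is a minimal sketch of equivalence-based lower and upper approximations over a toy decision table; the data are invented for illustration only.

```python
def approximations(table, target):
    """table: object -> tuple of condition-attribute values (e.g. symptoms);
    target: set of objects in a decision class (e.g. a diagnosis).
    Returns the lower approximation (certain members) and the upper
    approximation (possible members) induced by indiscernibility."""
    blocks = {}
    for obj, vals in table.items():
        blocks.setdefault(vals, set()).add(obj)   # indiscernibility classes
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper

table = {1: ('high', 'yes'), 2: ('high', 'yes'),
         3: ('low', 'no'),  4: ('low', 'no')}
print(approximations(table, {1, 2, 4}))  # ({1, 2}, {1, 2, 3, 4})
```

The boundary region {3, 4} is exactly where indiscernible cases have different outcomes; certain rules are read off the lower approximation.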
Jin, Ke-ming; Cao, Xue-jun; Su, Jin; Ma, Li; Zhuang, Ying-ping; Chu, Ju; Zhang, Si-liang
2008-03-01
Immobilized penicillin acylase was used for bioconversion of penicillin G (PG) into 6-APA in aqueous two-phase systems consisting of a light-sensitive polymer, PNBC, and a pH-sensitive polymer, PADB. The partition coefficient of 6-APA was found to be about 5.78 in the presence of 1% NaCl. Enzyme kinetics showed that the reaction reached equilibrium at roughly 7 h. The 6-APA mole yield was 85.3% (pH 7.8, 20 degrees C), about 20% higher than for the reaction in a single-phase aqueous buffer. The partition coefficient of PG (Na) varied scarcely, while those of the products, 6-APA and phenylacetic acid (PA), varied significantly due to the Donnan effect of the phase systems and the hydrophobicity of the products. The variation of the partition coefficients of the products also affected the bioconversion yield. In the aqueous two-phase systems, the substrate PG and the products 6-APA and PA partitioned preferentially into the top phase, while the immobilized penicillin acylase partitioned completely into the bottom phase. The substrate entered the bottom phase, where it was catalyzed into 6-APA and PA, which then entered the top phase. Inhibition by substrate and products was thereby relieved, improving the product yield; the immobilized enzyme showed higher efficiency than immobilized cells and occupied a smaller volume. Compared with the free enzyme, the immobilized enzyme had greater stability and a longer lifetime, and, being completely partitioned in the bottom phase, could be recycled. Bioconversion in two-phase systems using immobilized penicillin acylase thus shows an outstanding advantage. The light-sensitive copolymer forming the aqueous two-phase systems could be recovered by laser radiation at 488 nm or filtered 450-nm light, while the pH-sensitive polymer PADB could be recovered at its isoelectric point (pH 4.1). The recovery of the two copolymers was between 95% and 99%.
Partitioning error components for accuracy-assessment of near-neighbor methods of imputation
Albert R. Stage; Nicholas L. Crookston
2007-01-01
Imputation is applied for two quite different purposes: to supply missing data to complete a data set for subsequent modeling analyses or to estimate subpopulation totals. Error properties of the imputed values have different effects in these two contexts. We partition errors of imputation derived from similar observation units as arising from three sources:...
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, Albert C.R., E-mail: albert@fisica.ufjf.br; Takakura, Flavio I., E-mail: takakura@fisica.ufjf.br; Abreu, Everton M.C., E-mail: evertonabreu@ufrrj.br
In this work we have obtained a higher-derivative Lagrangian for a charged fluid coupled with the electromagnetic field, and the Dirac constraint analysis was discussed. A set of first-class constraints fixed by a noncovariant gauge condition was obtained. The path integral formalism was used to obtain the partition function for the corresponding higher-derivative Hamiltonian, and the Faddeev–Popov ansatz was used to construct an effective Lagrangian. Through the partition function, a Stefan–Boltzmann type law was obtained. - Highlights: • Higher-derivative Lagrangian for a charged fluid. • Electromagnetic coupling and Dirac's constraint analysis. • Partition function through path integral formalism. • Stefan–Boltzmann-type law through the partition function.
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. While growing a single tree is subject to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from random sampling and the restricted set of input variables available for selection. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
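The abstract walks through R functions; purely as an illustration of recursive binary partitioning itself (our choice of tooling, not the paper's code), here is a scikit-learn analogue in Python:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5).fit(X_tr, y_tr)
print(export_text(tree))            # the fitted recursive binary partition
print(tree.score(X_te, y_te))       # held-out accuracy

# A forest addresses the single tree's instability to training-data changes.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(forest.score(X_te, y_te))
```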
NASA Astrophysics Data System (ADS)
Kruglov, V. E.; Malyshev, D. S.; Pochinka, O. V.
2018-01-01
Studying the dynamics of a flow on surfaces by partitioning the phase space into cells with the same limit behaviour of trajectories within a cell goes back to the classical papers of Andronov, Pontryagin, Leontovich and Maier. The types of cells (the number of which is finite) and how the cells adjoin one another completely determine the topological equivalence class of a flow with finitely many special trajectories. If one trajectory is chosen in every cell of a rough flow without periodic orbits, then the cells are partitioned into so-called triangular regions of the same type. A combinatorial description of such a partition gives rise to the three-colour Oshemkov-Sharko graph, the vertices of which correspond to the triangular regions, and the edges to separatrices connecting them. Oshemkov and Sharko proved that such flows are topologically equivalent if and only if the three-colour graphs of the flows are isomorphic, and described an algorithm of distinguishing three-colour graphs. But their algorithm is not efficient with respect to graph theory. In the present paper, we describe the dynamics of Ω-stable flows without periodic trajectories on surfaces in the language of four-colour graphs, present an efficient algorithm for distinguishing such graphs, and develop a realization of a flow from some abstract graph. Bibliography: 17 titles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burant, Aniela; Thompson, Christopher; Lowry, Gregory V.
2016-05-17
Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch reactor system with dual spectroscopic detectors: a near infrared spectrometer for measuring the organic analyte in the CO2 phase, and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published poly-parameter linear free energy relationship and to develop five new linear free energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models targeted a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use the vapor pressure and aqueous solubility of the organic compound at 25 °C and the CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound class models provide better estimates of partitioning behavior for compounds in that class than the model built for the entire dataset.
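As a sketch of the kind of relationship described (the functional form matches the description above, but every number below is invented, not the measured data):

```python
import numpy as np

# log K (water -> sc-CO2) modeled from log vapor pressure and log aqueous
# solubility at 25 C plus CO2 density; the coefficients come out of the fit.
logPvap = np.array([-1.2, -0.8, -2.1, -1.5, -0.5])   # hypothetical values
logSw   = np.array([-2.0, -1.1, -2.8, -1.9, -0.9])
rhoCO2  = np.array([0.45, 0.60, 0.75, 0.60, 0.50])   # g/cm^3
logK    = np.array([0.9, 0.7, 1.6, 1.1, 0.5])

A = np.column_stack([np.ones_like(logK), logPvap, logSw, rhoCO2])
coef, *_ = np.linalg.lstsq(A, logK, rcond=None)
print(coef)   # intercept plus the three regression coefficients
```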
Multi-scale modularity and motif distributional effect in metabolic networks.
Gao, Shang; Chen, Alan; Rahmani, Ali; Zeng, Jia; Tan, Mehmet; Alhajj, Reda; Rokne, Jon; Demetrick, Douglas; Wei, Xiaohui
2016-01-01
Metabolism is a set of fundamental processes that play important roles in a plethora of biological and medical contexts. It is understood that the topological information of reconstructed metabolic networks, such as modular organization, has crucial implications on biological functions. Recent interpretations of modularity in network settings provide a view of multiple network partitions induced by different resolution parameters. Here we ask the question: How do multiple network partitions affect the organization of metabolic networks? Since network motifs are often interpreted as the super families of evolved units, we further investigate their impact under multiple network partitions and investigate how the distribution of network motifs influences the organization of metabolic networks. We studied Homo sapiens, Saccharomyces cerevisiae and Escherichia coli metabolic networks; we analyzed the relationship between different community structures and motif distribution patterns. Further, we quantified the degree to which motifs participate in the modular organization of metabolic networks.
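Multi-scale modularity in the sense used above can be made concrete with the resolution-parameter form of the quality function; a compact numpy sketch of the standard definition (our formulation):

```python
import numpy as np

def modularity(A, labels, gamma=1.0):
    """Newman-Girvan modularity with resolution parameter gamma; sweeping
    gamma induces the multiple partitions (coarse for small gamma, fine for
    large gamma) whose interplay with motif distribution the paper studies."""
    k = A.sum(axis=1)
    two_m = k.sum()
    same = labels[:, None] == labels[None, :]
    return (A - gamma * np.outer(k, k) / two_m)[same].sum() / two_m
```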
Measuring Skew in Average Surface Roughness as a Function of Surface Preparation
NASA Technical Reports Server (NTRS)
Stahl, Mark
2015-01-01
Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2003-01-01
A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
Surface roughness of composite resin veneer after application of herbal and non-herbal toothpaste
NASA Astrophysics Data System (ADS)
Nuraini, S.; Herda, E.; Irawan, B.
2017-08-01
The aim of this study was to determine the surface roughness of composite resin veneer after brushing. In this study, 24 specimens of composite resin veneer were divided into three subgroups: brushed without toothpaste, brushed with non-herbal toothpaste, and brushed with herbal toothpaste. Brushing was performed for one set of 5,000 strokes and continued for a second set of 5,000 strokes. Roughness of the composite resin veneer was determined using a surface roughness tester. The results were statistically analyzed using the nonparametric Kruskal-Wallis test and post hoc Mann-Whitney tests. The results indicate that the highest difference among the Ra values occurred within the subgroup brushed with the herbal toothpaste. In conclusion, the herbal toothpaste produced a rougher surface on composite resin veneer compared to the non-herbal toothpaste.
Characterization of Ice Roughness From Simulated Icing Encounters
NASA Technical Reports Server (NTRS)
Anderson, David N.; Shin, Jaiwon
1997-01-01
Detailed measurements of the size of roughness elements on ice accreted on models in the NASA Lewis Icing Research Tunnel (IRT) were made in a previous study. Only limited data from that study have been published, but included were the roughness element height, diameter and spacing. In the present study, the height and spacing data were found to correlate with the element diameter, and the diameter was found to be a function primarily of the non-dimensional parameters freezing fraction and accumulation parameter. The width of the smooth zone which forms at the leading edge of the model was found to decrease with increasing accumulation parameter. Although preliminary, the success of these correlations suggests that it may be possible to develop simple relationships between ice roughness and icing conditions for use in ice-accretion-prediction codes. These codes now require an ice-roughness estimate to determine convective heat transfer. Studies using a 7.6-cm-diameter cylinder and a 53.3-cm-chord NACA 0012 airfoil were also performed in which a 1/2-min icing spray at an initial set of conditions was followed by a 9-1/2-min spray at a second set of conditions. The resulting ice shape was compared with that from a full 10-min spray at the second set of conditions. The initial ice accumulation appeared to have no effect on the final ice shape. From this result, it would appear the accreting ice is affected very little by the initial roughness or shape features.
ERIC Educational Resources Information Center
Narli, Serkan; Yorek, Nurettin; Sahin, Mehmet; Usak, Muhammet
2010-01-01
This study investigates the possibility of analyzing educational data using the theory of rough sets which is mostly employed in the fields of data analysis and data mining. Data were collected using an open-ended conceptual understanding test of the living things administered to first-year high school students. The responses of randomly selected…
NASA Astrophysics Data System (ADS)
Rutigliani, Vito; Lorusso, Gian Francesco; De Simone, Danilo; Lazzarino, Frederic; Rispens, Gijsbert; Papavieros, George; Gogolides, Evangelos; Constantoudis, Vassilios; Mack, Chris A.
2018-03-01
Power spectral density (PSD) analysis is playing an increasingly critical role in the understanding of line-edge roughness (LER) and linewidth roughness (LWR) in a variety of applications across the industry. It is an essential step in obtaining an unbiased LWR estimate, as well as an extremely useful tool for process and material characterization. However, the PSD estimate can be affected by both random and systematic artifacts caused by image acquisition and measurement settings, which could irremediably alter its information content. In this paper, we report on the impact of various setting parameters (smoothing image processing filters, pixel size, and SEM noise levels) on the PSD estimate. We also discuss the use of the PSD analysis tool in a variety of cases. Looking beyond the basic roughness estimate, we use PSD and autocorrelation analysis to characterize resist blur [1], as well as low- and high-frequency roughness content, and we apply this technique to guide the EUV material stack selection. Our results clearly indicate that, if properly used, the PSD methodology is a very sensitive tool for investigating material and process variations.
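A PSD estimate of a measured edge is essentially a periodogram; the sketch below (our scaling convention) also shows why pixel size and noise matter: the pixel pitch dx sets the highest observable frequency, and white SEM noise adds a flat floor to the spectrum.

```python
import numpy as np

def edge_psd(edge, dx):
    """One-sided periodogram PSD of a line-edge profile sampled every dx.
    With edge in nm and dx in nm, the PSD has units of nm^3 and its
    integral over frequency (with the usual one-sided doubling) recovers
    the roughness variance."""
    z = edge - edge.mean()
    F = np.fft.rfft(z)
    psd = (np.abs(F) ** 2) * dx / len(z)
    freqs = np.fft.rfftfreq(len(z), d=dx)
    return freqs, psd
```

Averaging the periodograms of many independent lines reduces estimator variance; subtracting the fitted white noise floor is one route to the unbiased roughness estimate mentioned above.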
NASA Astrophysics Data System (ADS)
Chung, Juyeon; Hagishima, Aya; Ikegaya, Naoki; Tanimoto, Jun
2015-11-01
We report the results of a wind-tunnel experiment to measure the scalar transfer efficiency of three types of surfaces, wet street surfaces of cube arrays, wet smooth surfaces with dry patches, and fully wet smooth surfaces, to examine the effects of roughness topography and scalar source allocation. Scalar transfer coefficients defined by the source area, C_{E,wet}, for an underlying wet street surface of dry block arrays show a convex trend against the block density λ_p. Comparison with past data, and results for wet smooth surfaces including dry patches, reveal that the positive peak of C_{E,wet} with increasing λ_p is caused by reduced horizontal advection due to block roughness and enhanced evaporation due to a heterogeneous scalar source distribution. In contrast, scalar transfer coefficients defined by a lot-area including wet and dry areas, C_{E,lot}, for smooth surfaces with dry patches indicate enhanced evaporation compared to the fully wet smooth surface (the oasis effect) for all three conditions of dry plan-area ratio up to 31%. Relationships between the local Sherwood and Reynolds numbers derived from the experimental data suggest that the attenuation of C_{E,wet} for a wet street of cube arrays with streamwise distance is weaker than for a wet smooth surface because of canopy flow around the blocks. Values of the relevant parameter B^{-1}, the ratio of the roughness length for momentum to that for scalar, were calculated from the observational data. The results imply that B^{-1} possibly increases with block roughness, and decreases with the partitioning of the scalar boundary layer because of dry patches.
Rough set approach for accident chains exploration.
Wong, Jinn-Tsai; Chung, Yi-Shih
2007-05-01
This paper presents a novel non-parametric methodology--rough set theory--for accident occurrence exploration. The rough set theory allows researchers to analyze accidents in multiple dimensions and to model accident occurrence as factor chains. Factor chains are composed of driver characteristics, trip characteristics, driver behavior and environment factors that imply typical accident occurrence. A real-world database (2003 Taiwan single auto-vehicle accidents) is used as an example to demonstrate the proposed approach. The results show that although most accident patterns are unique, some accident patterns are significant and worth noting. Student drivers who are young and less experienced exhibit a relatively high possibility of being involved in off-road accidents on roads with a speed limit between 51 and 79 km/h under normal driving circumstances. Notably, for bump-into-facility accidents, wet surface is a distinctive environmental factor.
Babiarz, Christopher; Hoffmann, Stephen; Wieben, Ann; Hurley, James; Andren, Anders; Shafer, Martin; Armstrong, David
2012-02-01
Knowledge of the partitioning and sources of mercury are important to understanding the human impact on mercury levels in Lake Superior wildlife. Fluvial fluxes of total mercury (Hg(T)) and methylmercury (MeHg) were compared to discharge and partitioning trends in 20 sub-basins having contrasting land uses and geological substrates. The annual tributary yield was correlated with watershed characteristics and scaled up to estimate the basin-wide loading. Tributaries with clay sediments and agricultural land use had the largest daily yields with maxima observed near the peak in water discharge. Roughly 42% of Hg(T) and 57% of MeHg was delivered in the colloidal phase. Tributary inputs, which are confined to near-shore zones of the lake, may be more important to the food-web than atmospheric sources. The annual basin-wide loading from tributaries was estimated to be 277 kg yr(-1) Hg(T) and 3.4 kg yr(-1) MeHg (5.5 and 0.07 mg km(-2) d(-1), respectively). Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Foda, O.; Welsh, T. A.
2016-04-01
We study the Andrews-Gordon-Bressoud (AGB) generalisations of the Rogers-Ramanujan q-series identities in the context of cylindric partitions. We recall the definition of r-cylindric partitions, and provide a simple proof of Borodin's product expression for their generating functions, which can be regarded as a limiting case of an unpublished proof by Krattenthaler. We also recall the relationships between the r-cylindric partition generating functions, the principal characters of the affine algebras ŝl_r, the minimal model characters M_{r,r+d} of the W_r algebras, and the r-string abaci generating functions, providing simple proofs for each. We then set r = 2, and use two-cylindric partitions to re-derive the AGB identities as follows. Firstly, we use Borodin's product expression for the generating functions of the two-cylindric partitions with infinitely long parts to obtain the product sides of the AGB identities, times a factor (q;q)_∞^{-1}, which is the generating function of ordinary partitions. Next, we obtain a bijection from the two-cylindric partitions, via two-string abaci, into decorated versions of Bressoud's restricted lattice paths. Extending Bressoud's method of transforming between restricted paths that obey different restrictions, we obtain sum expressions, with manifestly non-negative coefficients, for the generating functions of the two-cylindric partitions, which contain a factor (q;q)_∞^{-1}. Equating the product and sum expressions of the same two-cylindric partitions, and cancelling a factor of (q;q)_∞^{-1} on each side, we obtain the AGB identities.
Convection from Hemispherical and Conical Model Ice Roughness Elements in Stagnation Region Flows
NASA Technical Reports Server (NTRS)
Hughes, Michael T.; Shannon, Timothy A.; McClain, Stephen T.; Vargas, Mario; Broeren, Andy
2016-01-01
To improve ice accretion prediction codes, more data regarding ice roughness and its effects on convective heat transfer are required. The Vertical Icing Studies Tunnel (VIST) at NASA Glenn Research Center was used to model realistic ice roughness in the stagnation region of a NACA 0012 airfoil. In the VIST, a test plate representing the leading 2% chord of the airfoil was subjected to flows of 7.62 m/s (25 ft/s), 12.19 m/s (40 ft/s), and 16.76 m/s (55 ft/s). The test plate was fitted with multiple surfaces, or sets of roughness panels, each with a different representation of ice roughness. The sets of roughness panels were constructed using two element distribution patterns that were created based on a laser scan of an iced airfoil acquired in the Icing Research Tunnel at NASA Glenn. For both roughness patterns, surfaces were constructed using plastic hemispherical elements, plastic conical elements, and aluminum conical elements. Infrared surface thermometry data from tests run in the VIST were used to calculate area-averaged heat transfer coefficient values. The values from the roughness surfaces were compared to the smooth control surface, showing convective enhancement as high as 400% in some cases. The data gathered during this study will ultimately be used to improve the physical modeling in LEWICE or other ice accretion codes and produce predictions of in-flight ice accretion on aircraft surfaces with greater confidence.
Common y-intercept and single compound regressions of gas-particle partitioning data vs 1/T
NASA Astrophysics Data System (ADS)
Pankow, James F.
Confidence intervals are placed around the log K_p vs 1/T correlation equations obtained using simple linear regressions (SLR) with the gas-particle partitioning data set of Yamasaki et al. [(1982) Environ. Sci. Technol. 16, 189-194]. The compounds and groups of compounds studied include the polycyclic aromatic hydrocarbons phenanthrene + anthracene, me-phenanthrene + me-anthracene, fluoranthene, pyrene, benzo[a]fluorene + benzo[b]fluorene, chrysene + benz[a]anthracene + triphenylene, benzo[b]fluoranthene + benzo[k]fluoranthene, and benzo[a]pyrene + benzo[e]pyrene (note: me = methyl). For any given compound, at equilibrium, the partition coefficient K_p equals (F/TSP)/A, where F is the particulate-matter associated concentration (ng m^-3), A is the gas-phase concentration (ng m^-3), and TSP is the concentration of particulate matter (μg m^-3). At temperatures more than 10 °C from the mean sampling temperature of 17 °C, the confidence intervals are quite wide. Since theory predicts that similar compounds sorbing on the same particulate matter should possess very similar y-intercepts, the data set was also fitted using a special common y-intercept regression (CYIR). For most of the compounds, the CYIR equations fell inside the SLR 95% confidence intervals. The CYIR y-intercept value is -18.48, and is reasonably close to the type of value that can be predicted for PAH compounds. The set of CYIR regression equations is probably more reliable than the set of SLR equations. For example, the CYIR-derived desorption enthalpies are much more highly correlated with vaporization enthalpies than are the SLR-derived desorption enthalpies. It is recommended that the CYIR approach be considered whenever analysing temperature-dependent gas-particle partitioning data.
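The CYIR fit described above is an ordinary least-squares problem with one shared intercept column and one 1/T column per compound. A minimal sketch (our construction; data shapes illustrative):

```python
import numpy as np

def cyir(invT, logKp, compound):
    """Common y-intercept regression: log Kp = b0 + m_c / T, with the
    intercept b0 shared by all compounds and a separate slope m_c for each
    compound c, fitted jointly by least squares."""
    invT, logKp, compound = map(np.asarray, (invT, logKp, compound))
    comps = sorted(set(compound))
    X = np.zeros((len(logKp), 1 + len(comps)))
    X[:, 0] = 1.0                                  # shared intercept column
    for j, c in enumerate(comps):
        X[compound == c, 1 + j] = invT[compound == c]
    beta, *_ = np.linalg.lstsq(X, logKp, rcond=None)
    return beta[0], dict(zip(comps, beta[1:]))     # b0, per-compound slopes
```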
What are the structural features that drive partitioning of proteins in aqueous two-phase systems?
Wu, Zhonghua; Hu, Gang; Wang, Kui; Zaslavsky, Boris Yu; Kurgan, Lukasz; Uversky, Vladimir N
2017-01-01
Protein partitioning in aqueous two-phase systems (ATPSs) represents a convenient, inexpensive, and easy to scale-up protein separation technique. Since the partition behavior of a protein dramatically depends on the ATPS composition, it would be highly beneficial to have reliable means for (even qualitative) prediction of the partitioning of a target protein under different conditions. Our aim was to understand which structural features of proteins contribute to the partitioning of a query protein in a given ATPS. We undertook a systematic empirical analysis of the relations between 57 numerical structural descriptors, derived from the corresponding amino acid sequences and crystal structures of 10 well-characterized proteins, and the partition behavior of these proteins in 29 different ATPSs. This analysis revealed that just a few structural characteristics of proteins can accurately determine the behavior of these proteins in a given ATPS. However, partition behavior of proteins in different ATPSs relies on different structural features. In other words, we could not find a unique set of protein structural features derived from their crystal structures that could be used for the description of the partition behavior of all proteins in all ATPSs analyzed in this study. We likely need to gain better insight into the relationships between protein-solvent interactions and protein structure peculiarities, in particular given the limitations of the crystal structures used here, to be able to construct a model that accurately predicts protein partition behavior across all ATPSs. Copyright © 2016 Elsevier B.V. All rights reserved.
Chen, Xuewu; Wei, Ming; Wu, Jingxian; Hou, Xianyao
2014-01-01
Most traditional mode choice models are based on the principle of random utility maximization derived from econometric theory. Alternatively, mode choice modeling can be regarded as a pattern recognition problem reflected from the explanatory variables of determining the choices between alternatives. The paper applies the knowledge discovery technique of rough sets theory to model travel mode choices incorporating household and individual sociodemographics and travel information, and to identify the significance of each attribute. The study uses the detailed travel diary survey data of Changxing county which contains information on both household and individual travel behaviors for model estimation and evaluation. The knowledge is presented in the form of easily understood IF-THEN statements or rules which reveal how each attribute influences mode choice behavior. These rules are then used to predict travel mode choices from information held about previously unseen individuals and the classification performance is assessed. The rough sets model shows high robustness and good predictive ability. The most significant condition attributes identified to determine travel mode choices are gender, distance, household annual income, and occupation. Comparative evaluation with the MNL model also proves that the rough sets model gives superior prediction accuracy and coverage on travel mode choice modeling. PMID:25431585
Prosperi, Mattia C F; De Luca, Andrea; Di Giambenedetto, Simona; Bracciale, Laura; Fabbiani, Massimiliano; Cauda, Roberto; Salemi, Marco
2010-10-25
Phylogenetic methods produce hierarchies of molecular species, inferring knowledge about taxonomy and evolution. However, there is not yet a consensus methodology that provides a crisp partition of taxa, desirable when considering the problem of intra/inter-patient quasispecies classification or infection transmission event identification. We introduce the threshold bootstrap clustering (TBC), a new methodology for partitioning molecular sequences that does not require a phylogenetic tree estimation. The TBC is an incremental partition algorithm, inspired by the stochastic Chinese restaurant process, and takes advantage of resampling techniques and models of sequence evolution. TBC uses as input a multiple alignment of molecular sequences and its output is a crisp partition of the taxa into an automatically determined number of clusters. By varying initial conditions, the algorithm can produce different partitions. We describe a procedure that selects a prime partition among a set of candidate ones and calculates a measure of cluster reliability. TBC was successfully tested for the identification of type-1 human immunodeficiency and hepatitis C virus subtypes, and compared with previously established methodologies. It was also evaluated in the problem of HIV-1 intra-patient quasispecies clustering, and for transmission cluster identification, using a set of sequences from patients with known transmission event histories. TBC has been shown to be effective for the subtyping of HIV and HCV, and for identifying intra-patient quasispecies. To some extent, the algorithm was also able to infer clusters corresponding to events of infection transmission. The computational complexity of TBC is quadratic in the number of taxa, lower than that of other established methods; in addition, TBC has been enhanced with a measure of cluster reliability. The TBC can be useful to characterise molecular quasispecies in a broad context.
Fuzzy-Rough Nearest Neighbour Classification
NASA Astrophysics Data System (ADS)
Jensen, Richard; Cornelis, Chris
A new fuzzy-rough nearest neighbour (FRNN) classification algorithm is presented in this paper, as an alternative to Sarkar's fuzzy-rough ownership function (FRNN-O) approach. In contrast to the latter, our method uses the nearest neighbours to construct lower and upper approximations of decision classes, and classifies test instances based on their membership of these approximations. In the experimental analysis, we evaluate our approach both with classical fuzzy-rough approximations (based on an implicator and a t-norm) and with the recently introduced vaguely quantified rough sets. Preliminary results are very good, and in general FRNN outperforms FRNN-O, as well as the traditional fuzzy nearest neighbour (FNN) algorithm.
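A compact rendering of the FRNN decision rule as described, with the Kleene-Dienes implicator and the minimum t-norm standing in for the classical fuzzy-rough choices; the similarity function and all names are our assumptions.

```python
import numpy as np

def frnn_classify(X_tr, y_tr, x, k=5):
    """Membership of x in the lower approximation of class c uses the
    implicator I(a, b) = max(1 - a, b); membership in the upper uses the
    t-norm T(a, b) = min(a, b), both over the k nearest neighbours.
    Predict the class with the largest mean of the two memberships."""
    d = np.linalg.norm(X_tr - x, axis=1)
    nn = np.argsort(d)[:k]
    sim = 1.0 / (1.0 + d[nn])               # similarity to the neighbours
    scores = {}
    for c in np.unique(y_tr):
        member = (y_tr[nn] == c).astype(float)
        lower = np.min(np.maximum(1.0 - sim, member))
        upper = np.max(np.minimum(sim, member))
        scores[c] = 0.5 * (lower + upper)
    return max(scores, key=scores.get)
```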
Gaskins, J T; Daniels, M J
2016-01-02
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
Hydraulic geometry of the Platte River in south-central Nebraska
Eschner, T.R.
1982-01-01
At-a-station hydraulic geometry of the Platte River in south-central Nebraska is complex. The range of exponents of simple power-function relations is large, both between different reaches of the river and among different sections within a given reach. The at-a-station exponents plot in several fields of the b-f-m diagram, suggesting that morphologic and hydraulic changes with increasing discharge vary considerably. Systematic changes in the plotting positions of the exponents with time indicate that, in general, the width exponent has decreased, although trends are not readily apparent in the other exponents. Plots of the hydraulic-geometry relations indicate that simple power functions are not the proper model in all instances. For these sections, breaks in the slopes of the hydraulic-geometry relations serve to partition the data sets. Power functions fit separately to the partitioned data described the width-, depth-, and velocity-discharge relations more accurately than did a single power function. Plotting positions on b-f-m diagrams of the exponents from hydraulic-geometry relations of partitioned data sets indicate that much of the apparent variation in the plotting positions of single power functions arises because the single power functions compromise both subsets of partitioned data. For several sections, the shape of the channel primarily accounts for the better fit of two power functions to partitioned data than of a single power function over the entire range of data. These non-log-linear relations may have significance for channel maintenance. (USGS)
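Fitting the partitioned relations amounts to searching for the break that minimizes the combined log-log least-squares error; a sketch under our own conventions (w is width, Q is discharge):

```python
import numpy as np

def two_segment_power_fit(Q, w, min_pts=5):
    """Fit w = a * Q**b separately above and below each candidate break in
    discharge Q, working in log-log space; return the breakpoint minimizing
    the total squared error, to compare against a single power function."""
    x, y = np.log(Q), np.log(w)
    order = np.argsort(x)
    x, y = x[order], y[order]

    def sse(xs, ys):
        b, a = np.polyfit(xs, ys, 1)           # slope (exponent), intercept
        return np.sum((ys - (a + b * xs)) ** 2)

    errs = {i: sse(x[:i], y[:i]) + sse(x[i:], y[i:])
            for i in range(min_pts, len(x) - min_pts)}
    i_best = min(errs, key=errs.get)
    return np.exp(x[i_best]), errs[i_best]     # breakpoint discharge, SSE
```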
NASA Astrophysics Data System (ADS)
Hopcroft, Peter O.; Gallagher, Kerry; Pain, Christopher C.
2009-08-01
Collections of suitably chosen borehole profiles can be used to infer large-scale trends in ground-surface temperature (GST) histories for the past few hundred years. These reconstructions are based on a large database of carefully selected borehole temperature measurements from around the globe. Since non-climatic thermal influences are difficult to identify, representative temperature histories are derived by averaging individual reconstructions to minimize the influence of these perturbing factors. This may lead to three potentially important drawbacks: the net signal of non-climatic factors may not be zero, meaning that the average does not reflect the best estimate of past climate; the averaging over large areas restricts the useful amount of more local climate change information available; and the inversion methods used to reconstruct the past temperatures at each site must be mathematically identical and are therefore not necessarily best suited to all data sets. In this work, we avoid these issues by using a Bayesian partition model (BPM), which is computed using a trans-dimensional form of a Markov chain Monte Carlo algorithm. This then allows the number and spatial distribution of different GST histories to be inferred from a given set of borehole data by partitioning the geographical area into discrete partitions. Profiles that are heavily influenced by non-climatic factors will be partitioned separately. Conversely, profiles with climatic information, which is consistent with neighbouring profiles, will then be inferred to lie in the same partition. The geographical extent of these partitions then leads to information on the regional extent of the climatic signal. In this study, three case studies are described using synthetic and real data. The first demonstrates that the Bayesian partition model method is able to correctly partition a suite of synthetic profiles according to the inferred GST history. In the second, more realistic case, a series of temperature profiles are calculated using surface air temperatures of a global climate model simulation. In the final case, 23 real boreholes from the United Kingdom, previously used for climatic reconstructions, are examined and the results compared with a local instrumental temperature series and the previous estimate derived from the same borehole data. The results indicate that the majority (17) of the 23 boreholes are unsuitable for climatic reconstruction purposes, at least without including other thermal processes in the forward model.
Crystal-chemistry and partitioning of REE in whitlockite
NASA Technical Reports Server (NTRS)
Colson, R. O.; Jolliff, B. L.
1993-01-01
Partitioning of Rare Earth Elements (REE) in whitlockite is complicated by the fact that two or more charge-balancing substitutions are involved and by the fact that concentrations of REE in natural whitlockites are sufficiently high that simple partition coefficients are not expected to be constant even if mixing in the system is completely ideal. The present study combines preexisting REE partitioning data in whitlockites with new experiments in the same compositional system and at the same temperature (approximately 1030 C) to place additional constraints on the complex variations of REE partition coefficients and to test theoretical models for how REE partitioning should vary with REE concentration and other compositional variables. With this data set, and by combining crystallographic and thermochemical constraints with a SAS simultaneous-equation best-fitting routine, it is possible to infer answers to the following questions: what is the speciation on the individual sites Ca(B), Mg, and Ca(IIA) (where the ideal structural formula is Ca(B)_18 Mg_2 Ca(IIA)_2 P_14 O_56); how are REEs charge-balanced in the crystal; and is mixing of REE in whitlockite ideal or non-ideal. This understanding is necessary in order to extrapolate derived partition coefficients to other compositional systems and provides a broadened understanding of the crystal chemistry of whitlockite.
Wang, Li Kun; Heng, Paul Wan Sia; Liew, Celine Valeria
2015-04-01
Bottom spray fluid-bed coating is a common technique for coating multiparticulates. Under the quality-by-design framework, particle recirculation within the partition column is one of the main variability sources affecting particle coating and coat uniformity. However, the occurrence and mechanism of particle recirculation within the partition column of the coater are not well understood. The purpose of this study was to visualize and define particle recirculation within the partition column. Based on different combinations of partition gap setting, air accelerator insert diameter, and particle size fraction, particle movements within the partition column were captured using a high-speed video camera. The particle recirculation probability and voidage information were mapped using a visiometric process analyzer. High-speed images showed that particles contributing to the recirculation phenomenon were behaving as clustered colonies. Fluid dynamics analysis indicated that particle recirculation within the partition column may be attributed to the combined effect of cluster formation and drag reduction. Both visiometric process analysis and particle coating experiments showed that smaller particles had greater propensity toward cluster formation than larger particles. The influence of cluster formation on coating performance and possible solutions to cluster formation were further discussed. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Raevsky, O A; Grigor'ev, V J; Raevskaja, O E; Schaper, K-J
2006-06-01
QSPR analyses of a data set containing experimental partition coefficients in the three systems octanol-water, water-gas, and octanol-gas for 98 chemicals have shown that it is possible to calculate any partition coefficient in the system 'gas phase/octanol/water' by three different approaches: (1) from experimental partition coefficients obtained in the corresponding two other subsystems; however, in many cases these data may not be available. Therefore, one may turn to (2), a traditional QSPR analysis based on e.g. HYBOT descriptors (hydrogen bond acceptor and donor factors, ΣC_a and ΣC_d, together with polarisability α, a steric bulk effect descriptor) supplemented with substructural indicator variables; or (3), a very promising approach that combines the similarity concept with QSPR based on HYBOT descriptors. In this approach, observed partition coefficients of the structurally nearest neighbours of a compound-of-interest are used. In addition, contributions arising from differences in α, ΣC_a, and ΣC_d values between the compound-of-interest and its nearest neighbour(s), respectively, are considered. In this investigation highly significant relationships were obtained by approaches (1) and (3) for the octanol-gas partition coefficient (log L_og).
Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model
Ellefsen, Karl J.; Smith, David
2016-01-01
Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
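For a feel of the recursive two-component split (the paper fits its Bayesian mixture with Hamiltonian Monte Carlo and checks each posterior mode against independent geology; scikit-learn's fit below is only a stand-in for the partitioning step, and the data are synthetic):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 3)),      # synthetic "geochemistry"
               rng.normal(3.0, 1.0, (80, 3))])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gm.predict(X)                               # top level of the hierarchy
# Recurse: refit a 2-component mixture inside each cluster for the next level.
for c in (0, 1):
    sub = GaussianMixture(n_components=2, random_state=0).fit(X[labels == c])
```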
On twelve types of covering-based rough sets.
Safari, Samira; Hooshmandasl, Mohammad Reza
2016-01-01
Covering approximation spaces are a generalization of equivalence-based rough set theories. In this paper, we consider twelve types of covering-based approximation operators by combining four types of covering lower approximation operators and three types of covering upper approximation operators. Then we study the properties of these new pairs and show that they have most of the common properties of existing covering approximation pairs. Finally, the relation between these new pairs is studied.
Hybrid machine learning technique for forecasting Dhaka stock market timing decisions.
Banik, Shipra; Khodadad Khan, A F M; Anwer, Mohammad
2014-01-01
Forecasting the stock market has been a difficult job for applied researchers owing to the nature of the data, which is very noisy and time varying. Nevertheless, a number of empirical studies have shown that machine learning techniques can be applied effectively to forecast stock markets. This paper studies stock prediction for the use of investors, who often incur losses because of uncertain investment objectives and poor visibility of assets. It proposes a rough set model, a neural network model, and a hybrid neural network and rough set model to find the optimal buy and sell times for a share on the Dhaka stock exchange. Experimental findings demonstrate that the proposed hybrid model has higher precision than the single rough set model and the neural network model. We believe these findings will help stock investors decide on optimal buy and/or sell times on the Dhaka stock exchange.
Evolving bipartite authentication graph partitions
Pope, Aaron Scott; Tauritz, Daniel Remy; Kent, Alexander D.
2017-01-16
As large scale enterprise computer networks become more ubiquitous, finding the appropriate balance between user convenience and user access control is an increasingly challenging proposition. Suboptimal partitioning of users' access and available services contributes to the vulnerability of enterprise networks. Previous edge-cut partitioning methods unduly restrict users' access to network resources. This paper introduces a novel method of network partitioning, superior to the current state-of-the-art, which minimizes user impact by providing alternate avenues for access that reduce vulnerability. Networks are modeled as bipartite authentication access graphs and a multi-objective evolutionary algorithm is used to simultaneously minimize the size of large connected components while minimizing overall restrictions on network users. Lastly, results are presented on a real world data set that demonstrate the effectiveness of the introduced method compared to previous naive methods.
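The two competing objectives are easy to state concretely. The sketch below evaluates a candidate set of access restrictions on a toy bipartite user-service graph; the evolutionary search itself (the paper uses a multi-objective evolutionary algorithm) is not shown, and all names are illustrative.

    import networkx as nx

    def objectives(G, removed_edges):
        """Return (largest connected component size, number of restrictions)."""
        H = G.copy()
        H.remove_edges_from(removed_edges)
        largest = max(len(c) for c in nx.connected_components(H))
        return largest, len(removed_edges)

    users = [f"u{i}" for i in range(4)]
    services = [f"s{j}" for j in range(3)]
    G = nx.Graph([(u, s) for u in users for s in services])  # everyone reaches everything
    print(objectives(G, []))                                 # (7, 0)
    print(objectives(G, [("u0", "s0"), ("u0", "s1")]))       # still one component: (7, 2)
    print(objectives(G, [("u0", s) for s in services]))      # u0 isolated: (6, 3)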
Platinum Partitioning at Low Oxygen Fugacity: Implications for Core Formation Processes
NASA Technical Reports Server (NTRS)
Medard, E.; Martin, A. M.; Righter, K.; Lanziroti, A.; Newville, M.
2016-01-01
Highly siderophile elements (HSE = Au, Re, and the Pt-group elements) are tracers of silicate / metal interactions during planetary processes. Since most core-formation models involve some state of equilibrium between liquid silicate and liquid metal, understanding the partitioning of HSE between silicate and metallic melts is a key issue for models of core / mantle equilibria and for core formation scenarios. However, partitioning models for HSE are still inaccurate due to the lack of sufficient experimental constraints to describe the variations of partitioning with key variables like temperature, pressure, and oxygen fugacity. In this abstract, we describe a self-consistent set of experiments aimed at determining the valence of platinum, one of the HSE, in silicate melts. This is key information required to parameterize the evolution of platinum partitioning with oxygen fugacity.
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are always characterized by nonlinear and uncertain system properties; therefore, a conventional single model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Then, multiple local GPR models are developed, one for each local input variable set. When new test data arrive, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
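The combination step can be sketched with scikit-learn Gaussian process regressors. Here bootstrap resamples stand in for the paper's PMI-selected variable subsets, and predictive-variance weights stand in for the Bayesian posterior probabilities; both substitutions are assumptions made for brevity.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 10, size=(80, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

    # Local models trained on bootstrap resamples of the batch data.
    models = []
    for _ in range(5):
        idx = rng.integers(0, len(X), len(X))
        models.append(GaussianProcessRegressor().fit(X[idx], y[idx]))

    def predict(x):
        x = np.atleast_2d(x)
        mu, sd = zip(*(m.predict(x, return_std=True) for m in models))
        mu, sd = np.array(mu), np.array(sd)
        w = 1.0 / np.maximum(sd, 1e-9) ** 2   # more-certain local models get more say
        return (w * mu).sum(axis=0) / w.sum(axis=0)

    print(predict([5.0, 5.0]))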
Partitioning behavior of aromatic components in jet fuel into diverse membrane-coated fibers.
Baynes, Ronald E; Xia, Xin-Rui; Barlow, Beth M; Riviere, Jim E
2007-11-01
Jet fuel components are known to partition into skin and produce occupational irritant contact dermatitis (OICD) and potentially adverse systemic effects. The purpose of this study was to determine how jet fuel components partition (1) from solvent mixtures into diverse membrane-coated fibers (MCFs) and (2) from biological media into MCFs to predict tissue distribution. Three diverse MCFs, polydimethylsiloxane (PDMS, lipophilic), polyacrylate (PA, polarizable), and carbowax (CAR, polar), were selected to simulate the physicochemical properties of skin in vivo. Following an appropriate equilibrium time between the MCF and dosing solutions, the MCF was injected directly into a gas chromatograph/mass spectrometer (GC-MS) to quantify the amount that partitioned into the membrane. Three vehicles (water, 50% ethanol-water, and albumin-containing media solution) were studied for selected jet fuel components. The more hydrophobic the component, the greater was the partitioning into the membranes across all MCF types, especially from water. The presence of ethanol as a surrogate solvent resulted in significantly reduced partitioning into the MCFs with discernible differences across the three fibers based on their chemistries. The presence of a plasma substitute (media) also reduced partitioning into the MCF, with the CAR MCF system being better correlated to the predicted partitioning of aromatic components into skin. This study demonstrated that a single or multiple set of MCF fibers may be used as a surrogate for octanol/water systems and skin to assess partitioning behavior of nine aromatic components frequently formulated with jet fuels. These diverse inert fibers were able to assess solute partitioning from a blood substitute such as media into a membrane possessing physicochemical properties similar to human skin. This information may be incorporated into physiologically based pharmacokinetic (PBPK) models to provide a more accurate assessment of tissue dosimetry of related toxicants.
NASA Astrophysics Data System (ADS)
Langel, Christopher Michael
A computational investigation has been performed to better understand the impact of surface roughness on the flow over a contaminated surface. This thesis highlights the implementation and development of the roughness amplification model in the flow solver OVERFLOW-2. The model, originally proposed by Dassler, Kozulovic, and Fiala, introduces an additional scalar roughness amplification field. This value is explicitly set at rough wall boundaries using surface roughness parameters and local flow quantities. The additional transport equation allows non-local effects of surface roughness to be accounted for downstream of rough sections. The roughness amplification variable is coupled with the Langtry-Menter model and used to modify the criteria for transition. Results from flat plate test cases show good agreement with experimental transition behavior for flows over varying sand grain roughness heights. Additional validation studies were performed on a NACA 0012 airfoil with leading edge roughness. The computationally predicted boundary layer development demonstrates good agreement with experimental results. New tests using varying roughness configurations are being carried out at the Texas A&M Oran W. Nicks Low Speed Wind Tunnel to provide further calibration of the roughness amplification method. An overview and preliminary results of this concurrent experimental investigation are provided.
A Sharp methodology for VLSI layout
NASA Astrophysics Data System (ADS)
Bapat, Shekhar
1993-01-01
The layout problem for VLSI circuits is recognized as a very difficult problem and has traditionally been decomposed into the seemingly independent sub-problems of placement, global routing, and detailed routing. Although this structure achieves a reduction in programming complexity, it is also typically accompanied by a reduction in solution quality. Most current placement research recognizes that the separation is artificial, and that the placement and routing problems should ideally be solved in tandem. We propose a new interconnection model, Sharp, and an associated partitioning algorithm. The Sharp interconnection model uses a partitioning shape that roughly resembles the musical sharp (number sign) and makes extensive use of pre-computed rectilinear Steiner trees. The model is designed to generate strategic routing information along with the partitioning results. Additionally, the Sharp model also generates estimates of the routing congestion. We also propose the Sharp layout heuristic, which solves the layout problem in its entirety and makes extensive use of the Sharp partitioning model. The use of precomputed Steiner tree forms enables the method to accurately model net characteristics. For example, the Steiner tree forms can model both the length of a net and, more importantly, its route; the tree forms are also appropriate for modeling the timing delays of nets. The Sharp heuristic works to minimize both the total layout area, by minimizing total net length (thus reducing the total wiring area), and the congestion imbalances in the various channels (thus reducing the unused or wasted channel area). Our heuristic uses circuit element movements amongst the different partitioning blocks and selection of alternate minimal Steiner tree forms to achieve this goal. The objective function for the algorithm can readily be modified to include other important circuit constraints like propagation delays. The layout technique first computes a very high-level approximation of the layout solution (i.e., the positions of the circuit elements and the associated net routes). The approximate solution is then iteratively refined with respect to the objective function. The technique creates well defined sub-problems and offers intermediary steps that can be solved in parallel, as well as a parallel mechanism to merge the sub-problem solutions.
Rational design of polymer-based absorbents: application to the fermentation inhibitor furfural.
Nwaneshiudu, Ikechukwu C; Schwartz, Daniel T
2015-01-01
Reducing the amount of water-soluble fermentation inhibitors like furfural is critical for downstream bio-processing steps to biofuels. A theoretical approach for tailoring absorption polymers to reduce these pretreatment contaminants would be useful for optimal bioprocess design. Experiments were performed to measure aqueous furfural partitioning into polymer resins of bisphenol A diglycidyl ether (epoxy) and polydimethylsiloxane (PDMS). Experimentally measured partitioning of furfural between water and PDMS, the more hydrophobic polymer, showed poor performance, with the logarithm of the PDMS-to-water partition coefficient falling between -0.62 and -0.24 (95% confidence). In contrast, the fast-setting epoxy was found to effectively partition furfural, with the logarithm of the epoxy-to-water partition coefficient falling between 0.41 and 0.81 (95% confidence). Flory-Huggins theory is used to predict the partitioning of furfural into diverse polymer absorbents and accounts for these results. We show that Flory-Huggins theory can be adapted to guide the selection of polymer absorbents for the separation of low molecular weight organic species from aqueous solutions. This work lays the groundwork for the general design of polymers for the separation of a wide range of inhibitory compounds in biomass pretreatment streams.
Theoretical and experimental models of the diffuse radar backscatter from Mars
NASA Technical Reports Server (NTRS)
England, A. W.
1995-01-01
The general objective for this work was to develop a theoretically and experimentally consistent explanation for the diffuse component of radar backscatter from Mars. The strength, variability, and wavelength independence of Mars' diffuse backscatter are unique among our Moon and the terrestrial planets. This diffuse backscatter is generally attributed to wavelength-scale surface roughness and to rock clasts within the Martian regolith. Through the combination of theory and experiment, the authors attempted to bound the range of surface characteristics that could produce the observed diffuse backscatter. Through these bounds they gained a limited capability for data inversion. Within this umbrella, specific objectives were: (1) To better define the statistical roughness parameters of Mars' surface so that they are consistent with observed radar backscatter data, and with the physical and chemical characteristics of Mars' surface as inferred from Mariner 9, the Viking probes, and Earth-based spectroscopy; (2) To better understand the partitioning between surface and volume scattering in the Mars regolith; (3) To develop computational models of Mars' radio emission that incorporate frequency dependent, surface and volume scattering.
Evaluation of Hierarchical Clustering Algorithms for Document Datasets
2002-06-03
This report evaluates agglomerative hierarchical clustering schemes for document datasets, including single-link, complete-link, and group-average (UPGMA) merging, together with a new set of merging criteria derived from six partitional criterion functions. The UPGMA (group-average) scheme overcomes the shortcomings of the single-link and complete-link approaches by measuring the similarity of two clusters as the average pairwise similarity of their documents.
Effect of truncated cone roughness element density on hydrodynamic drag
NASA Astrophysics Data System (ADS)
Womack, Kristofer; Schultz, Michael; Meneveau, Charles
2017-11-01
An experimental study was conducted on rough-wall, turbulent boundary layer flow with roughness elements whose idealized shape models the barnacles that cause hydrodynamic drag in many applications. Varying planform densities of truncated cone roughness elements were investigated; element densities studied ranged from 10% to 79%. Detailed turbulent boundary layer velocity statistics were recorded with a two-component LDV system on a three-axis traverse. Hydrodynamic roughness length (z0) and skin-friction coefficient (Cf) were determined and compared with estimates from existing roughness element drag prediction models, including Macdonald et al. (1998) and other recent models. The roughness elements used in this work model idealized barnacles, so implications of this data set for ship powering are considered. This research was supported by the Office of Naval Research and by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
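Extracting z0 and Cf from a measured mean-velocity profile reduces to a straight-line fit in log coordinates. A minimal sketch, with an assumed von Karman constant of 0.40 and made-up profile values:

    import numpy as np

    kappa = 0.40                                   # von Karman constant (assumed)
    z = np.array([0.01, 0.02, 0.04, 0.08, 0.16])   # heights above the wall, m
    U = np.array([4.1, 4.8, 5.5, 6.2, 6.9])        # mean velocities, m/s (illustrative)

    # Log law: U = (u_tau / kappa) * ln(z / z0), linear in ln(z).
    slope, intercept = np.polyfit(np.log(z), U, 1)
    u_tau = kappa * slope
    z0 = np.exp(-intercept / slope)
    Cf = 2.0 * (u_tau / U[-1]) ** 2                # crude free-stream surrogate
    print(f"u_tau={u_tau:.3f} m/s  z0={z0:.2e} m  Cf={Cf:.4f}")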
A new fiber optic sensor for inner surface roughness measurement
NASA Astrophysics Data System (ADS)
Xu, Xiaomei; Liu, Shoubin; Hu, Hong
2009-11-01
In order to measure the inner surface roughness of small holes nondestructively, a new fiber optic sensor is researched and developed. Firstly, a new model for surface roughness measurement is proposed, based on intensity-modulated fiber optic sensing and scattering models of rough surfaces. Secondly, a fiber optic measurement system is designed and set up. With the help of new techniques, the fiber optic sensor can be miniaturized. Furthermore, the use of a micro prism turns the light through 90 degrees, so the inner side-surface roughness of small holes can be measured. Thirdly, the fiber optic sensor is calibrated against standard surface roughness specimens, and a series of measurement experiments have been carried out. The measurement results are compared with those obtained by a TR220 Surface Roughness Instrument and a Form Talysurf Laser 635, and the validity of the developed fiber optic sensor is verified. Finally, the precision of the fiber optic sensor and its influencing factors are analyzed.
NASA Astrophysics Data System (ADS)
Gan, Chee Kwan; Challacombe, Matt
2003-05-01
Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center of mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 out of 64 processors, while for a 110 molecule water cluster at standard density it achieved a speedup of 113 out of 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26 atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.
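The essence of equal time partitioning, redistributing the smallest timed work units so per-processor totals even out, can be illustrated with a greedy longest-processing-time heuristic; the paper's actual scheme exploits the smooth iteration-to-iteration variation of the grid, so treat this only as a sketch of the load-balancing step.

    import heapq

    def balance(times, nproc):
        """Assign each timed sub-volume to the currently least-loaded processor."""
        heap = [(0.0, p) for p in range(nproc)]
        heapq.heapify(heap)
        owner = [None] * len(times)
        for i in sorted(range(len(times)), key=lambda i: -times[i]):
            load, p = heapq.heappop(heap)
            owner[i] = p
            heapq.heappush(heap, (load + times[i], p))
        return sorted(heap), owner

    times = [5.0, 3.0, 3.0, 2.0, 2.0, 1.0]  # measured grid-work per box (seconds)
    loads, owner = balance(times, 3)
    print(loads)  # loads of roughly 5-6 per processor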
2010-01-01
Background: Comparative genomics methods such as phylogenetic profiling can mine powerful inferences from inherently noisy biological data sets. We introduce Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL), a method that applies the Partial Phylogenetic Profiling (PPP) approach locally within a protein sequence to discover short sequence signatures associated with functional sites. The approach is based on the basic scoring mechanism employed by PPP, namely the use of binomial distribution statistics to optimize sequence similarity cutoffs during searches of partitioned training sets. Results: Here we illustrate and validate the ability of the SIMBAL method to find functionally relevant short sequence signatures by application to two well-characterized protein families. In the first example, we partitioned a family of ABC permeases using a metabolic background property (urea utilization). Thus, the TRUE set for this family comprised members whose genome of origin encoded a urea utilization system. By moving a sliding window across the sequence of a permease, and searching each subsequence in turn against the full set of partitioned proteins, the method found which local sequence signatures best correlated with the urea utilization trait. Mapping of SIMBAL "hot spots" onto crystal structures of homologous permeases reveals that the significant sites are gating determinants on the cytosolic face rather than, say, docking sites for the substrate-binding protein on the extracellular face. In the second example, we partitioned a protein methyltransferase family using gene proximity as a criterion. In this case, the TRUE set comprised those methyltransferases encoded near the gene for the substrate RF-1. SIMBAL identifies sequence regions that map onto the substrate-binding interface while ignoring regions involved in the methyltransferase reaction mechanism in general. Neither method for training set construction requires any prior experimental characterization. Conclusions: SIMBAL shows that, in functionally divergent protein families, selected short sequences often significantly outperform their full-length parent sequence for making functional predictions by sequence similarity, suggesting avenues for improved functional classifiers. When combined with structural data, SIMBAL affords the ability to localize and model functional sites. PMID:20102603
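The PPP-style scoring that SIMBAL applies at each window position can be sketched with a binomial tail. Given search results ordered by similarity and flagged TRUE or FALSE by the partitioned training set, the score at each depth is the probability of seeing at least that many TRUE hits by chance; parameter names here are illustrative.

    import math
    from scipy.stats import binom

    def ppp_score(is_true_hit, p_true):
        """Best -log10 binomial tail over all similarity cutoffs."""
        best, true_so_far = 0.0, 0
        for n, hit in enumerate(is_true_hit, start=1):
            true_so_far += hit
            tail = binom.sf(true_so_far - 1, n, p_true)  # P(>= true_so_far of n)
            best = max(best, -math.log10(max(tail, 1e-300)))
        return best

    hits = [True, True, True, False, True, True, False, False]
    print(ppp_score(hits, p_true=0.2))  # high score: TRUE hits are front-loaded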
An in situ approach to study trace element partitioning in the laser heated diamond anvil cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petitgirard, S.; Mezouar, M.; Borchert, M.
2012-01-15
Data on the partitioning behavior of elements between different phases at in situ conditions are crucial for understanding element mobility, especially for geochemical studies. Here, we present results of in situ partitioning of trace elements (Zr, Pd, and Ru) between silicate and iron melts, up to 50 GPa and 4200 K, using a modified laser heated diamond anvil cell (DAC). This new experimental set up allows simultaneous collection of x-ray fluorescence (XRF) and x-ray diffraction (XRD) data as a function of time using the high pressure beamline ID27 (ESRF, France). The technique enables the simultaneous detection of sample melting, based on the appearance of diffuse scattering in the XRD pattern characteristic of the structure factor of liquids, and measurement of the elemental partitioning in the sample using XRF before, during, and after laser heating in the DAC. We were able to detect element concentrations as low as the few-ppm level (2-5 ppm) in standard solutions. The in situ measurements are complemented by mapping the chemical partitioning of the trace elements in the quenched samples after laser heating to constrain the partitioning data. Our first results indicate a strong partitioning of Pd and Ru into the metallic phase, while Zr remains clearly incompatible with iron. This novel approach extends the pressure and temperature range of partitioning experiments beyond that of quenched samples from large volume presses and could bring new insight into the early history of Earth.
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate which partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible; however, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
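A single dichotomic question of the kind the paper studies asks whether the base at a chosen codon position lies in a chosen two-element set; any such question splits the 64 codons 32/32, and chaining questions refines the partition. The particular questions below are illustrative, not taken from the paper.

    from itertools import product

    CODONS = ["".join(c) for c in product("UCAG", repeat=3)]

    def bda(position, base_pair):
        """One dichotomic question: is the base at `position` in `base_pair`?"""
        cls0 = {c for c in CODONS if c[position] in base_pair}
        return cls0, set(CODONS) - cls0

    def refine(partition, position, base_pair):
        """Apply a further dichotomic question to every class of a partition."""
        out = []
        for cls in partition:
            yes = {c for c in cls if c[position] in base_pair}
            out.extend([yes, cls - yes])
        return [cls for cls in out if cls]

    parts = refine(list(bda(1, "CU")), 0, "GC")
    print([len(p) for p in parts])  # [16, 16, 16, 16]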
NASA Astrophysics Data System (ADS)
Pye, Havala O. T.; Zuend, Andreas; Fry, Juliane L.; Isaacman-VanWertz, Gabriel; Capps, Shannon L.; Wyat Appel, K.; Foroutan, Hosein; Xu, Lu; Ng, Nga L.; Goldstein, Allen H.
2018-01-01
Several models were used to describe the partitioning of ammonia, water, and organic compounds between the gas and particle phases for conditions in the southeastern US during summer 2013. Existing equilibrium models and frameworks were found to be sufficient, although additional improvements in terms of estimating pure-species vapor pressures are needed. Thermodynamic model predictions were consistent, to first order, with a molar ratio of ammonium to sulfate of approximately 1.6 to 1.8 (ratio of ammonium to 2 × sulfate, RN/2S ≈ 0.8 to 0.9) with approximately 70 % of total ammonia and ammonium (NHx) in the particle. Southeastern Aerosol Research and Characterization Network (SEARCH) gas and aerosol and Southern Oxidant and Aerosol Study (SOAS) Monitor for AeRosols and Gases in Ambient air (MARGA) aerosol measurements were consistent with these conditions. CMAQv5.2 regional chemical transport model predictions did not reflect these conditions due to a factor of 3 overestimate of the nonvolatile cations. In addition, gas-phase ammonia was overestimated in the CMAQ model leading to an even lower fraction of total ammonia in the particle. Chemical Speciation Network (CSN) and aerosol mass spectrometer (AMS) measurements indicated less ammonium per sulfate than SEARCH and MARGA measurements and were inconsistent with thermodynamic model predictions. Organic compounds were predicted to be present to some extent in the same phase as inorganic constituents, modifying their activity and resulting in a decrease in [H+]air (H+ in µg m-3 air), increase in ammonia partitioning to the gas phase, and increase in pH compared to complete organic vs. inorganic liquid-liquid phase separation. In addition, accounting for nonideal mixing modified the pH such that a fully interactive inorganic-organic system had a pH roughly 0.7 units higher than predicted using traditional methods (pH = 1.5 vs. 0.7). Particle-phase interactions of organic and inorganic compounds were found to increase partitioning towards the particle phase (vs. gas phase) for highly oxygenated (O : C ≥ 0.6) compounds including several isoprene-derived tracers as well as levoglucosan but decrease particle-phase partitioning for low O : C, monoterpene-derived species.
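To first order, the fraction of a semivolatile organic compound found in the particle phase follows the standard absorptive-partitioning relation; the paper refines this with activity coefficients and phase-separation treatments, so the sketch below is only the textbook baseline.

    def particle_fraction(Kp, C_OA):
        """Absorptive partitioning: Kp in m3/ug, C_OA = organic aerosol in ug/m3."""
        return Kp * C_OA / (1.0 + Kp * C_OA)

    for Kp in (0.001, 0.01, 0.1):
        print(Kp, round(particle_fraction(Kp, C_OA=10.0), 3))  # 0.01, 0.091, 0.5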
NASA Astrophysics Data System (ADS)
McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.
2017-03-01
We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibrated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.
Sharpe, Jennifer B.; Soong, David T.
2015-01-01
This study used the National Land Cover Dataset (NLCD) and developed an automated process for determining the area of the three land cover types, thereby allowing faster updating of future models, and for evaluating land cover changes by use of historical NLCD datasets. The study also carried out a raingage partitioning analysis so that the segmentation of land cover and rainfall in each modeled unit is directly applicable to the HSPF modeling. Historical and existing impervious, grass, and forest land acreages partitioned by percentages covered by two sets of raingages for the Lake Michigan diversion SCAs, gaged basins, and ungaged basins are presented.
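The automated land-cover accounting reduces to counting raster pixels per class group and converting to acreage. A minimal sketch with numpy; the NLCD-style class codes and the grouping into the three cover types are assumptions for illustration.

    import numpy as np

    GROUPS = {"impervious": [21, 22, 23, 24], "forest": [41, 42, 43], "grass": [71, 81]}
    PIXEL_ACRES = 30 * 30 / 4046.86  # one 30 m NLCD pixel, in acres

    raster = np.array([[21, 41, 71],
                       [22, 41, 41],
                       [81, 23, 42]])  # stand-in for an NLCD tile

    for name, codes in GROUPS.items():
        n = np.isin(raster, codes).sum()
        print(f"{name}: {n * PIXEL_ACRES:.3f} acres")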
NASA Astrophysics Data System (ADS)
Kassem, M.; Soize, C.; Gagliardini, L.
2011-02-01
In a recent work [Journal of Sound and Vibration 323 (2009) 849-863] the authors presented an energy-density field approach for the vibroacoustic analysis of complex structures in the low and medium frequency ranges. In this approach, a local vibroacoustic energy model as well as a simplification of this model were constructed. In this paper, firstly, an extension of the previous theory is performed in order to include the case of general input forces; secondly, a structural partitioning methodology is presented along with a set of tools used for the construction of a partitioning. Finally, an application is presented for an automotive vehicle.
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems
2005-05-01
Dissertation by Gary W. Kinney Jr., The University of Texas at Austin, May 2005. Approved for public release; distribution unlimited.
Adaptively loaded IM/DD optical OFDM based on set-partitioned QAM formats.
Zhao, Jian; Chen, Lian-Kuan
2017-04-17
We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
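Generic adaptive bit loading gives each subcarrier as many bits as its estimated SNR supports under an SNR-gap approximation; the paper's loading is tailored to SP QAM's improved minimum distance, so the sketch below with an assumed 6 dB gap is only the conventional baseline it is compared against.

    import numpy as np

    def bit_load(snr_linear, gap_db=6.0, max_bits=6):
        """SNR-gap loading: b = floor(log2(1 + SNR/gap)) per subcarrier."""
        gap = 10 ** (gap_db / 10)
        bits = np.floor(np.log2(1 + snr_linear / gap))
        return np.clip(bits, 0, max_bits).astype(int)

    snr_db = np.array([25, 18, 9, 3, 14, 22])  # per-subcarrier SNR estimates
    print(bit_load(10 ** (snr_db / 10)))       # [6 4 1 0 2 5]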
Partitioning and packing mathematical simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Arpasi, D. J.; Milner, E. J.
1986-01-01
The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
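The packing step is a bin-packing problem: place equation groups with known computation times onto as few processors as possible without exceeding the per-frame budget. First-fit decreasing, shown below, is a standard heuristic for it; the paper's algorithm may differ in detail.

    def first_fit_decreasing(costs, capacity):
        """Pack computation costs into a minimal number of processors."""
        bins = []
        for c in sorted(costs, reverse=True):
            for b in bins:
                if sum(b) + c <= capacity:
                    b.append(c)
                    break
            else:
                bins.append([c])
        return bins

    costs = [7, 5, 4, 4, 3, 2, 2, 1]  # per-group compute times (illustrative units)
    print(first_fit_decreasing(costs, capacity=10))  # [[7, 3], [5, 4, 1], [4, 2, 2]]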
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on the integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), which adds encryption within the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, a nonlinear inverse operation, the Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
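The property that makes IWT suitable for joint lossless coding is exact integer reversibility, visible already in the one-dimensional integer Haar (S-transform) lifting step; the paper's scheme builds a full 2-D IWT plus SPIHT on top of this idea.

    def haar_forward(x):
        """Integer Haar lifting on pairs; exactly invertible in integers."""
        s, d = [], []
        for a, b in zip(x[0::2], x[1::2]):
            dj = b - a
            d.append(dj)
            s.append(a + (dj >> 1))  # >> 1 is floor division by 2
        return s, d

    def haar_inverse(s, d):
        x = []
        for sj, dj in zip(s, d):
            a = sj - (dj >> 1)
            x.extend([a, dj + a])
        return x

    x = [5, 9, 2, 2, 7, 3]
    s, d = haar_forward(x)
    assert haar_inverse(s, d) == x  # lossless round trip
    print(s, d)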
Wang, Thanh; Han, Shanlong; Yuan, Bo; Zeng, Lixi; Li, Yingming; Wang, Yawei; Jiang, Guibin
2012-12-01
Short chain chlorinated paraffins (SCCPs) are semi-volatile chemicals that are considered persistent in the environment, potentially toxic, and subject to long-range transport. This study investigates the concentrations and gas-particle partitioning of SCCPs at an urban site in Beijing during summer and wintertime. Total atmospheric SCCP levels ranged from 1.9 to 33.0 ng/m(3) during wintertime; significantly higher levels were found during the summer (range 112-332 ng/m(3)). The average fraction of total SCCPs in the particle phase (ϕ) was 0.67 during wintertime but decreased significantly during the summer (ϕ = 0.06). The ten and eleven carbon chain homologues with five to eight chlorine atoms were the predominant SCCP formula groups in air. Significant linear correlations were found between the gas-particle partition coefficients and the predicted subcooled vapor pressures and octanol-air partition coefficients. The gas-particle partitioning of SCCPs was further investigated and compared with both the Junge-Pankow adsorption and K(oa)-based absorption models. Copyright © 2012 Elsevier Ltd. All rights reserved.
Rough flows and homogenization in stochastic turbulence
NASA Astrophysics Data System (ADS)
Bailleul, I.; Catellier, R.
2017-10-01
We provide in this work a tool-kit for the study of the homogenisation of random ordinary differential equations, in the form of a user-friendly black box based on the technology of rough flows. We illustrate the use of this setting on the example of stochastic turbulence.
Empirical Constraints on Proton and Electron Heating in the Fast Solar Wind
NASA Technical Reports Server (NTRS)
Cranmer, Steven R.; Matthaeus, William H.; Breech, Benjamin A.; Kasper, Justin C.
2009-01-01
This paper presents analyses of measured proton and electron temperatures in the high-speed solar wind that are used to calculate the separate rates of heat deposition for protons and electrons. It was found that the protons receive about 60% of the total plasma heating in the inner heliosphere, and that this fraction increases to approximately 80% by the orbit of Jupiter. The empirically derived partitioning of heat between protons and electrons is in rough agreement with theoretical predictions from a model of linear Vlasov wave damping. For a modeled power spectrum consisting only of Alfvenic fluctuations, the best agreement was found for a distribution of wavenumber vectors that evolves toward isotropy as distance increases.
Surface roughness manifestations of deep-seated landslide processes
NASA Astrophysics Data System (ADS)
Booth, A. M.; Roering, J. J.; Lamb, M. P.
2012-12-01
In many mountainous drainage basins, deep-seated landslides evacuate large volumes of sediment from small surface areas, leaving behind a strong topographic signature that sets landscape roughness over a range of spatial scales. At long spatial wavelengths of hundreds to thousands of meters, landslides tend to inhibit channel incision and limit topographic relief, effectively smoothing the topography at this length scale. However, at short spatial wavelengths on the order of meters, deformation of deep-seated landslides generates surface roughness that allows expert mappers or automated algorithms to distinguish landslides from the surrounding terrain. Here, we directly connect the characteristic spatial wavelengths and amplitudes of this fine scale surface roughness to the underlying landslide deformation processes. We utilize the two-dimensional wavelet transform with high-resolution, airborne LiDAR-derived digital elevation models to systematically document the characteristic length scales and amplitudes of different kinematic units within slow moving earthflows, a common type of deep-seated landslide. In earthflow source areas, discrete slumped blocks generate high surface roughness, reflecting an extensional deformation regime. In earthflow transport zones, where material translates with minimal surface deformation, roughness decreases as other surface processes quickly smooth short wavelength features. In earthflow depositional toes, compression folds and thrust faults again increase short wavelength surface roughness. When an earthflow becomes inactive, roughness in all of these kinematic zones systematically decreases with time, allowing relative dating of earthflow deposits. We also document how each of these roughness expressions depends on earthflow velocity, using sub-pixel change detection software (COSI-Corr) and pairs of orthorectified aerial photographs to determine spatially extensive landslide surface displacements. In source areas, the wavelength of slumped blocks tends to correlate with velocity as predicted by a simple sliding block model, but the amplitude is insensitive to velocity, suggesting that landslide depth rather than velocity sets this characteristic block amplitude. In both transport zones and depositional toes, the amplitude of the surface roughness is higher where the longitudinal gradient in velocity is higher, confirming that differential movement generates and maintains this fine scale roughness.
Selection of representative embankments based on rough set - fuzzy clustering method
NASA Astrophysics Data System (ADS)
Bin, Ou; Lin, Zhi-xiang; Fu, Shu-yan; Gao, Sheng-song
2018-02-01
The premise of a comprehensive evaluation of embankment safety is the selection of representative unit embankments; on the basis of dividing the levee into units, the influencing factors and the classification of the unit embankments are drafted. Based on rough set - fuzzy clustering, the influencing factors of the unit embankments are measured by quantitative and qualitative indexes. A fuzzy similarity matrix of the standard embankments is constructed, and the fuzzy equivalence matrix is calculated from the fuzzy similarity matrix by the square method. By setting a threshold on the fuzzy equivalence matrix, the unit embankments are clustered, and representative unit embankments are selected from the resulting classification.
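The "square method" named above is the max-min transitive closure: compose the fuzzy similarity matrix with itself until it stabilizes, then cut at a threshold. A minimal numpy sketch with an illustrative 3x3 matrix:

    import numpy as np

    def maxmin_square(R):
        """One composition: R2[i, j] = max_k min(R[i, k], R[k, j])."""
        return np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)

    def transitive_closure(R):
        while True:
            R2 = maxmin_square(R)
            if np.allclose(R2, R):
                return R
            R = R2

    R = np.array([[1.0, 0.8, 0.3],
                  [0.8, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
    T = transitive_closure(R)
    print((T >= 0.5).astype(int))  # lambda-cut at 0.5 gives the clustering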
An IDS Alerts Aggregation Algorithm Based on Rough Set Theory
NASA Astrophysics Data System (ADS)
Zhang, Ru; Guo, Tao; Liu, Jianyi
2018-03-01
Within a system in which several IDS have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. Using basic concepts from rough set theory, the importance of the attributes in an alert is calculated first. From the attribute importances, we compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated. Time intervals are also taken into consideration: the allowed time interval is computed individually for each type of alert, since different alert types may have different time gaps between alerts. At the end of this paper, we apply the proposed scheme to the DARPA98 dataset; the experimental results show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of a security system can avoid wasting time on useless alerts.
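The aggregation test itself is compact: a weighted attribute match score compared against a threshold, gated by a per-type time window. The weights below are placeholders for what the rough-set importance calculation would produce.

    def can_aggregate(a, b, weights, threshold=0.7, window_s=60):
        """Aggregate two alerts if weighted attribute similarity passes the
        threshold and the alerts fall within the allowed time interval."""
        if abs(a["time"] - b["time"]) > window_s:
            return False
        score = sum(w for attr, w in weights.items() if a.get(attr) == b.get(attr))
        return score / sum(weights.values()) >= threshold

    weights = {"src_ip": 0.4, "dst_ip": 0.3, "sig_id": 0.3}  # made-up importances
    a1 = {"time": 100, "src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "sig_id": 2003}
    a2 = {"time": 130, "src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "sig_id": 2004}
    print(can_aggregate(a1, a2, weights))  # True: 0.7 of the weight matches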
The prefabricated building risk decision research of DM technology on the basis of Rough Set
NASA Astrophysics Data System (ADS)
Guo, Z. L.; Zhang, W. B.; Ma, L. H.
2017-08-01
With growing resource crises and increasingly serious pollution, green building has been strongly advocated by many countries and has become a new building style in the construction field. Compared with traditional building, prefabricated building has its own irreplaceable advantages but is influenced by many uncertainties. So far, the majority of scholars worldwide have approached it through qualitative research. This paper expounds the significance of prefabricated building. On the premise of existing research methods, and combined with rough set theory, it redefines the factors that affect prefabricated building risk. Moreover, it quantifies the risk factors and establishes an expert knowledge base through assessment. The risk factors are then reduced with respect to redundant attributes and attribute values, finally forming the simplest decision rules. These simplest decision rules, based on DM technology with rough set theory, provide prefabricated building with a controllable new decision-making method.
Tannenbaum, David; Doctor, Jason N; Persell, Stephen D; Friedberg, Mark W; Meeker, Daniella; Friesema, Elisha M; Goldstein, Noah J; Linder, Jeffrey A; Fox, Craig R
2015-03-01
Healthcare professionals are rapidly adopting electronic health records (EHRs). Within EHRs, seemingly innocuous menu design configurations can influence provider decisions for better or worse. The purpose of this study was to examine whether the grouping of menu items systematically affects prescribing practices among primary care providers. We surveyed 166 primary care providers in a research network of practices in the greater Chicago area, of whom 84 responded (51% response rate). Respondents and non-respondents were similar on all observable dimensions except that respondents were more likely to work in an academic setting. The questionnaire consisted of seven clinical vignettes. Each vignette described typical signs and symptoms for acute respiratory infections, and providers chose treatments from a menu of options. For each vignette, providers were randomly assigned to one of two menu partitions. For antibiotic-inappropriate vignettes, the treatment menu either listed over-the-counter (OTC) medications individually while grouping prescriptions together, or displayed the reverse partition. For antibiotic-appropriate vignettes, the treatment menu either listed narrow-spectrum antibiotics individually while grouping broad-spectrum antibiotics, or displayed the reverse partition. The main outcome was provider treatment choice. For antibiotic-inappropriate vignettes, we categorized responses as prescription drugs or OTC-only options. For antibiotic-appropriate vignettes, we categorized responses as broad- or narrow-spectrum antibiotics. Across vignettes, there was an 11.5 percentage point reduction in choosing aggressive treatment options (e.g., broad-spectrum antibiotics) when aggressive options were grouped compared to when those same options were listed individually (95% CI: 2.9 to 20.1%; p = .008). Provider treatment choice appears to be influenced by the grouping of menu options, suggesting that the layout of EHR order sets is not an arbitrary exercise. The careful crafting of EHR order sets can serve as an important opportunity to improve patient care without constraining physicians' ability to prescribe what they believe is best for their patients.
A rough set-based measurement model study on high-speed railway safety operation.
Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun
2018-01-01
Aiming to solve the safety problems of high-speed railway operation and management, a new method is urgently needed, constructed on the basis of rough set theory and uncertainty measurement theory. The method should carefully consider every factor of high-speed railway operation and realize measurement indexes for its safe operation. After analyzing in detail the factors that influence high-speed railway operation safety, a rough measurement model is constructed to describe the operation process. Based on the above considerations, this paper redistricts the safety influence factors of high-speed railway operation into 16 measurement indexes, covering staff, vehicle, equipment and environment, and provides a reasonable and effective theoretical method for the multiple-attribute measurement of safety in high-speed railway operation. Analyzing the operation data of 10 pivotal railway lines in China, this paper uses both the rough set-based measurement model and a value function model (a model for calculating the safety value) to calculate operation safety values. The calculation results show that the safety-value curve obtained with the proposed method has smaller error and greater stability than that of the value function method, which verifies its feasibility and effectiveness.
On the Effects of Surface Roughness on Boundary Layer Transition
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Li, Fei; Chang, Chau-Lyan; Edwards, Jack
2009-01-01
Surface roughness can influence laminar-turbulent transition in many different ways. This paper outlines selected analyses performed at the NASA Langley Research Center, ranging in speed from subsonic to hypersonic Mach numbers and highlighting the beneficial as well as adverse roles of the surface roughness in technological applications. The first theme pertains to boundary-layer tripping on the forebody of a hypersonic airbreathing configuration via a spanwise periodic array of trip elements, with the goal of understanding the physical mechanisms underlying roughness-induced transition in a high-speed boundary layer. The effect of an isolated, finite amplitude roughness element on a supersonic boundary layer is considered next. The other set of flow configurations examined herein corresponds to roughness based laminar flow control in subsonic and supersonic swept wing boundary layers. A common theme to all of the above configurations is the need to apply higher fidelity, physics based techniques to develop reliable predictions of roughness effects on laminar-turbulent transition.
Measuring Skew in Average Surface Roughness as a Function of Surface Preparation
NASA Technical Reports Server (NTRS)
Stahl, Mark T.
2015-01-01
Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces grinding, saving both time and money, and allows the science requirements to be better defined. In this study, various materials are polished from a fine grind to a fine polish. Each sample's RMS surface roughness is measured at 81 locations in a 9x9 square grid using a Zygo white light interferometer at regular intervals during the polishing process. Each data set is fit with various standard distributions and tested for goodness of fit. We show that the skew in the RMS data changes as a function of polishing time.
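scipy makes the per-sample analysis concrete: compute the skew of the 81 RMS values and compare a normal fit against a skew-normal fit with a Kolmogorov-Smirnov test. The data here are synthetic stand-ins for interferometer measurements.

    from scipy import stats

    rms = stats.skewnorm.rvs(a=4, loc=2.0, scale=0.5, size=81, random_state=0)
    print("sample skew:", stats.skew(rms))
    for dist in (stats.norm, stats.skewnorm):
        params = dist.fit(rms)
        pval = stats.kstest(rms, dist.name, args=params).pvalue
        print(dist.name, "KS p-value:", round(pval, 3))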
NASA Astrophysics Data System (ADS)
Arab Bafrani, Hamidreza; Ebrahimi, Mahdi; Bagheri Shouraki, Saeed; Moshfegh, Alireza Z.
2018-01-01
Memristor devices have attracted tremendous interest due to applications ranging from nonvolatile data storage to neuromorphic computing units. Exploring the role of the surface roughness of the bottom electrode (BE)/active layer interface provides useful guidelines for optimizing memristor switching performance. This study focuses on the effect of the surface roughness of the BE on the switching characteristics of Au/TiO2/Au three-layer memristor devices. An optimized wet-etching treatment condition was found to modify the surface roughness of the Au BE, and the measurement results indicate that the roughness of the Au BE is affected by both the duration and the solution concentration of the wet-etching process. We then fabricated arrays of TiO2-based nanostructured memristors sandwiched between two sets of cross-bar Au electrode lines (junction area 900 μm2). The results revealed a reduction in the working voltages in the current-voltage characteristics of the device when the surface roughness at the Au(BE)/TiO2 active layer interface was increased. The set voltage of the device (Vset) decreased significantly from 2.26 V to 1.93 V when we increased the interface roughness from 4.2 nm to 13.1 nm. The present work provides information for better understanding the switching mechanism of titanium-dioxide-based devices, and it can be inferred that enhancing the roughness of the Au BE/TiO2 active layer interface leads to a localized non-uniform electric field distribution that plays a vital role in reducing the energy consumption of the device.
A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method
NASA Astrophysics Data System (ADS)
Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.
2007-09-01
Trigonometrically fitted symplectic partitioned Runge-Kutta (EFSPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the set of functions sin(wx), cos(wx), w ∈ R. We modify a fifth order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two body problem.
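The structure of a symplectic partitioned Runge-Kutta method, separate update stages for positions and momenta, is already visible in the classic two-stage Stormer-Verlet scheme, shown here on a harmonic oscillator. The paper's fifth-order, six-stage, trigonometrically fitted method is considerably more elaborate.

    import math

    def verlet(q, p, h, steps, dVdq):
        """Stormer-Verlet: a 2-stage symplectic partitioned RK method (unit mass)."""
        for _ in range(steps):
            p -= 0.5 * h * dVdq(q)  # half kick
            q += h * p              # drift
            p -= 0.5 * h * dVdq(q)  # half kick
        return q, p

    # Harmonic oscillator V = q**2 / 2, exact solution q(t) = cos(t).
    q, p = verlet(1.0, 0.0, h=0.01, steps=1000, dVdq=lambda q: q)
    print(q, math.cos(10.0))  # close agreement; energy error stays bounded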
LaTourrette, T.Z.; Burnett, D.S.; Bacon, C.R.
1991-01-01
Crystal-liquid partitioning in Fe-Ti oxides and zircon was studied in partially melted granodiorite blocks ejected during the climactic eruption of Mt. Mazama (Crater Lake), Oregon. The blocks, which contain up to 33% rhyolite glass (75 wt% SiO2), are interpreted to be portions of the magma chamber walls that were torn off during eruption. The glass is clear and well homogenized for all measured elements except Zr. Results for Fe-Ti oxides give DUoxide/liq ≲ 0.1. Partitioning of Mg, Mn, Al, Si, V, and Cr in Fe-Ti oxides indicates that grains surrounded by glass are moderately well equilibrated with the melt for many of the minor elements, while those that are inclusions in relict plagioclase are not. Uranium and ytterbium inhomogeneities in zircons indicate that the zircons have only partially equilibrated with the melt and that uranium appears to have been diffusing out of the zircons faster than the zircons were dissolving. Minimum U, Y, and P concentrations in zircons give maximum DUzrc/liq = 13, DYzrc/liq = 23, and DPzrc/liq = 1, but these are considerably lower than reported by other workers for U and Y. Based on our measurements, and given their low abundances in most rocks, Fe-Ti oxides probably do not play a major role in U-Th fractionation during partial melting. The partial melts were undersaturated with zircon and apatite, but both phases are present in our samples. This demonstrates an actual case of non-equilibrium source retention of accessory phases, which in general could be an important trace-element fractionation mechanism. Our results do not support the hypothesis that liquid structure is the dominant factor controlling trace-element partitioning in high-silica rhyolites. Rough calculations based on Zr gradients in the glass indicate that the samples could have been partially molten for 800 to 8000 years. © 1991.
An, Yan; Zou, Zhihong; Li, Ranran
2014-01-01
A large number of parameters are acquired during practical water quality monitoring. If all the parameters are used in water quality assessment, the computational complexity will definitely increase. In order to reduce the input space dimensions, a fuzzy rough set was introduced to perform attribute reduction. Then, an attribute recognition theoretical model and entropy method were combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October in 2012. Fuzzy rough set was applied to reduce the ten parameters to four parameters: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show a good consistency with those of Reduct A, and this means that DO is not always necessary to assess water quality. The results with attribute reduction are not exactly the same as those without attribute reduction, which can be attributed to the α value decided by subjective experience. The assessment results gained by the fuzzy rough set obviously reduce computational complexity, and are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system. PMID:24675643
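The entropy method referenced above assigns larger weights to indicators whose values vary more across monitoring samples. A minimal sketch of that weighting step (the fuzzy rough-set reduction that precedes it is not shown, and the sample values are made up):

    import numpy as np

    def entropy_weights(X):
        """X: samples x indicators, positive values. Returns indicator weights."""
        P = X / X.sum(axis=0)                            # column proportions
        E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(X.shape[0])
        d = 1.0 - E                                      # degree of diversification
        return d / d.sum()

    # Rows: samples; columns: e.g. BOD5, NH3-N, TP, F. coli (illustrative values).
    X = np.array([[3.1, 0.5, 0.10,  900.0],
                  [4.0, 0.8, 0.12, 1500.0],
                  [2.8, 0.4, 0.30,  400.0]])
    print(entropy_weights(X).round(3))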
A three-way approach for protein function classification.
Ur Rehman, Hafeez; Azam, Nouman; Yao, JingTao; Benso, Alfredo
2017-01-01
The knowledge of protein functions plays an essential role in understanding biological cells and has a significant impact on human life in areas such as personalized medicine, better crops and improved therapeutic interventions. Due to the expense and inherent difficulty of biological experiments, intelligent methods are generally relied upon for automatic assignment of functions to proteins. Technological advancements in the field of biology are improving our understanding of biological processes and are regularly yielding new features and characteristics that better describe the role of proteins. These anticipated features cannot simply be neglected or overlooked in designing more effective classification techniques. A key issue in this context, which is not being sufficiently addressed, is how to build effective classification models for protein function prediction that incorporate and take advantage of ever-evolving biological information. In this article, we propose a three-way decision making approach which makes provision for seeking and incorporating future information. We considered probabilistic rough set based models such as Game-Theoretic Rough Sets (GTRS) and Information-Theoretic Rough Sets (ITRS) for inducing three-way decisions. An architecture for protein function classification with probabilistic rough set based three-way decisions is proposed and explained. Experiments are carried out on a Saccharomyces cerevisiae species dataset obtained from the Uniprot database, with the corresponding functional classes extracted from the Gene Ontology (GO) database. The results indicate that as the level of biological information increases, the number of deferred cases is reduced while a similar level of accuracy is maintained.
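The accept/reject/defer logic that GTRS and ITRS induce can be pictured with a minimal sketch. The thresholds and example probabilities below are illustrative assumptions, not values from the paper; GTRS and ITRS differ precisely in how they derive the (α, β) pair.

```python
# Minimal sketch of a probabilistic rough set three-way decision rule.
# The thresholds alpha and beta are illustrative; in GTRS/ITRS they are
# computed from game-theoretic or information-theoretic criteria.

def three_way_decision(prob, alpha=0.75, beta=0.35):
    """Decide on a function assignment given Pr(function | protein)."""
    if prob >= alpha:
        return "accept"   # positive region: assign the function
    if prob <= beta:
        return "reject"   # negative region: rule the function out
    return "defer"        # boundary region: wait for more information

# A deferred case can move to a definite decision as evidence accumulates.
for p in (0.9, 0.5, 0.2):
    print(p, "->", three_way_decision(p))
```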
Adaptive hybrid simulations for multiscale stochastic reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods comprises hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption can be applied to approximate the dynamics of fast subnetworks for certain classes of networks. However, as the dynamics of an SRN evolve, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reaction and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention, and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time period of interest.
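For reference, the discrete simulation that such hybrid schemes accelerate is Gillespie's SSA. Below is a minimal sketch for a birth-death network; the rate constants are illustrative assumptions, and the paper's contribution, adaptive partitioning with the quasi-stationary approximation, is not reproduced here.

```python
import math
import random

# Minimal Gillespie SSA for a birth-death network (0 -> X, X -> 0).
# Hybrid methods replace parts of this exact discrete simulation with
# a cheaper continuous description for fast/high-copy-number species.

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0):
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a_birth, a_death = k_birth, k_death * x   # reaction propensities
        a_total = a_birth + a_death
        t += -math.log(1.0 - random.random()) / a_total  # exponential wait
        x += 1 if random.random() * a_total < a_birth else -1
        path.append((t, x))
    return path

print(ssa_birth_death()[-1])   # final (time, copy number) of one path
```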
Fayyoumi, Ebaa; Oommen, B John
2009-10-01
We consider the microaggregation problem (MAP), which involves partitioning a set of individual records in a microdata file into a number of mutually exclusive and exhaustive groups. This problem, which seeks the best partition of the microdata file, is known to be NP-hard and has been tackled with many heuristic solutions. In this paper, we present the first reported fixed-structure-stochastic-automata-based solution to this problem. The newly proposed method leads to a lower value of the information loss (IL), obtains a better tradeoff between the IL and the disclosure risk (DR) when compared with state-of-the-art methods, and leads to a superior value of the scoring index, a criterion combining the IL and the DR. The scheme has been implemented, tested, and evaluated for different real-life and simulated data sets. The results clearly demonstrate the applicability of learning automata to the MAP and their ability to yield a solution that obtains the best tradeoff between IL and DR when compared with the state of the art.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
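The refinement phase can be pictured with a standard Moore-style partition refinement, sketched below on a toy DFA. This is a generic illustration of the "split blocks until stable" idea only; the paper's backward-depth coarse partition and hash-table refinement are not reproduced.

```python
# Generic partition refinement for DFA minimization (Moore-style).
# States are split by a signature: their current block plus the blocks
# reached on each input symbol, until no further splits occur.

def minimize(states, alphabet, delta, accepting):
    block = {s: (s in accepting) for s in states}   # coarse partition
    while True:
        sig = {s: (block[s],) + tuple(block[delta[s][a]] for a in alphabet)
               for s in states}
        if len(set(sig.values())) == len(set(block.values())):
            break                                   # partition is stable
        block = sig
    groups = {}
    for s in states:
        groups.setdefault(block[s], set()).add(s)
    return list(groups.values())                    # minimal-DFA states

# Toy DFA over {'a'}: states 1 and 3 are equivalent and get merged.
delta = {0: {'a': 1}, 1: {'a': 2}, 2: {'a': 2}, 3: {'a': 2}}
print(minimize({0, 1, 2, 3}, ['a'], delta, {2}))
```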
Mueller, R F; Characklis, W G; Jones, W L; Sears, J T
1992-05-01
The processes leading to bacterial colonization of solid-water interfaces are adsorption, desorption, growth, and erosion. These processes have been measured individually in situ in a flowing system in real time using image analysis. Four different substrata (copper, silicon, 316 stainless steel and glass) and two different bacterial species (Pseudomonas aeruginosa and Pseudomonas fluorescens) were used in the experiments. The flow was laminar (Re = 1.4) and the shear stress was kept constant during all experiments at 0.75 N m^-2. The surface roughness varied among the substrata from 0.002 µm (for silicon) to 0.015 µm (for copper). Surface free energies varied from 25.1 dynes cm^-1 for silicon to 31.2 dynes cm^-1 for copper. Cell surface hydrophobicity, reported as hydrocarbon partitioning values, ranged from 0.67 for Ps. fluorescens to 0.97 for Ps. aeruginosa. The adsorption rate coefficient varied by as much as a factor of 10 among the combinations of bacterial strain and substratum material, and was positively correlated with the surface free energy and surface roughness of the substratum and the hydrophobicity of the cells. The probability of desorption decreased with increasing surface free energy and surface roughness of the substratum. Cell growth was inhibited on copper, but replication of cells overlying an initial cell layer was observed with increased exposure time to the cell-containing bulk water. A mathematical model describing cell accumulation on a substratum is presented.
Reconstructing photorealistic 3D models from image sequence using domain decomposition method
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei
2009-11-01
In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through 3D scanning. Structured light and photogrammetry are the two main methods for acquiring 3D information, and both are expensive. Even when these expensive instruments are used, photorealistic 3D models are seldom obtained. In this paper, a new method for reconstructing photorealistic 3D models using a single camera is proposed. A square plate glued with coded marks is used to hold the object, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into combinations of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularizations are expressed as a finite element formulation, which can be solved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through this domain decomposition finite element method. Textures are assigned to each mesh element, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the results are encouraging.
Rispin, Karen; Wee, Joy
2015-07-01
This study was conducted to compare the performance of three types of chairs in a low-resource setting. The larger goal was to provide information enabling more effective use of limited funds by wheelchair manufacturers and suppliers in low-resource settings. The Motivation Rough Terrain and Whirlwind Rough Rider were compared in six skills tests, which participants completed in one wheelchair type and then, a day later, in the other. A hospital-style folding transport wheelchair was also included in one test. For all skills, participants rated the ease or difficulty on a visual analogue scale. For all tracks, distance traveled and the physiological cost index were recorded. Data were analyzed using repeated measures analysis of variance. The Motivation wheelchair outperformed the Whirlwind wheelchair on the rough and smooth tracks, and in some metrics on the tight-spaces track. The Motivation and Whirlwind wheelchairs significantly outperformed the hospital transport wheelchair in all metrics on the rough-track skills test. This comparative study provides data that are valuable for manufacturers and for those who provide wheelchairs to users. The comparison with the hospital-style transport chair confirms the cost to users of inappropriate wheelchair provision. Implications for Rehabilitation: For those with compromised lower limb function, wheelchairs are essential to enable full participation and improved quality of life. Therefore, provision of wheelchairs that effectively enable mobility in the cultures and environments in which people with disabilities live is crucial. This includes low-resource settings, where the need for appropriate seating is especially urgent. A repeated measures study measuring wheelchair performance in everyday skills in the setting where the wheelchairs are used gives information on the quality of mobility those wheelchairs provide. This study highlights differences in the performance of three types of wheelchairs often distributed in low-resource settings. This information can improve mobility for wheelchair users in those settings by enabling wheelchair manufacturers to optimize wheelchair design and providers to optimize the use of limited funds.
NASA Astrophysics Data System (ADS)
Ogée, J.; Peylin, P.; Ciais, P.; Bariac, T.; Brunet, Y.; Berbigier, P.; Roche, C.; Richard, P.; Bardoux, G.; Bonnefond, J.-M.
2003-06-01
The current emphasis on global climate studies has led the scientific community to set up a number of sites for measuring the long-term biosphere-atmosphere net CO2 exchange (net ecosystem exchange, NEE). Partitioning this flux into its elementary components, net assimilation (FA) and respiration (FR), remains necessary in order to gain a better understanding of biosphere functioning and to design better surface exchange models. Noting that FR and FA have different isotopic signatures, we evaluate the potential of isotopic 13CO2 measurements in the air (combined with CO2 flux and concentration measurements) for partitioning NEE into FR and FA on a routine basis. The study is conducted at a temperate coniferous forest where intensive isotopic measurements in air, soil, and biomass were performed in summer 1997. The multilayer soil-vegetation-atmosphere transfer model MuSICA is adapted to compute 13CO2 flux and concentration profiles. Using MuSICA as a "perfect" simulator and taking advantage of the very dense spatiotemporal resolution of the isotopic data set (341 flasks over a 24-hour period) enables us to test each hypothesis and estimate the performance of the method. The partitioning works best in midafternoon, when isotopic disequilibrium is strong. With only 15 flasks, i.e., two 13CO2 nighttime profiles (to estimate the isotopic signature of FR) and five daytime measurements (to perform the partitioning), we get mean daily estimates of FR and FA that agree with the model within 15-20%. However, knowledge of the mesophyll conductance seems crucial and may be a limitation of the method.
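The underlying arithmetic is a two-end-member isotopic mass balance: NEE and its isotopic composition are expressed as flux-weighted sums over FR and FA, giving two equations in two unknowns. The sketch below uses invented numbers; the paper's MuSICA-based analysis additionally resolves isotopic disequilibrium and within-canopy transport.

```python
# Two-end-member isotopic mass balance for partitioning NEE:
#   FR + FA = NEE
#   delta_r*FR + delta_a*FA = delta_nee*NEE
# All numbers below are illustrative (fluxes in umol m-2 s-1, deltas
# in permil); they are not values from the study.

def partition_nee(nee, delta_nee, delta_r, delta_a):
    fr = nee * (delta_nee - delta_a) / (delta_r - delta_a)
    fa = nee - fr
    return fr, fa

fr, fa = partition_nee(nee=-10.0, delta_nee=-24.25,
                       delta_r=-27.5, delta_a=-25.0)
print(f"FR = {fr:.2f} (respiratory release), FA = {fa:.2f} (net uptake)")
```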
Boundaries on Range-Range Constrained Admissible Regions for Optical Space Surveillance
NASA Astrophysics Data System (ADS)
Gaebler, J. A.; Axelrad, P.; Schumacher, P. W., Jr.
We propose a new type of admissible-region analysis for track initiation in multi-satellite problems when apparent angles measured at known stations are the only observable. The goal is to create an efficient and parallelizable algorithm for computing initial candidate orbits for a large number of new targets. It takes at least three angles-only observations to establish an orbit by traditional means. Thus one is faced with a problem that requires N-choose-3 sets of calculations to test every possible combination of the N observations. An alternative approach is to reduce the number of combinations by making hypotheses of the range to a target along the observed line of sight. If realistic bounds on the range are imposed, consistent with a given partition of the space of orbital elements, a pair of range possibilities can be evaluated via Lambert's method to find candidate orbits for that partition, which then requires N-choose-2 times M-choose-2 combinations, where M is the average number of range hypotheses per observation. The contribution of this work is a set of constraints that establish bounds on the range-range hypothesis region for a given element-space partition, thereby minimizing M. Two effective constraints were identified which, together, confine the hypothesis region in range-range space to nearly the true admissible region based on an orbital partition. The first constraint is based on the geometry of the vacant orbital focus. The second constraint is based on time of flight and Lagrange's form of Kepler's equation. A complete and efficient parallelization of the problem is possible with this approach because the element partitions can be arbitrary and can be handled independently of each other.
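The time-of-flight constraint rests on Kepler's equation, M = E − e sin E, which must be solved numerically for the eccentric anomaly. A generic Newton iteration is sketched below; this is background machinery, not the paper's range-range bounding procedure.

```python
import math

# Newton iteration for Kepler's equation M = E - e*sin(E).
def solve_kepler(mean_anomaly, ecc, tol=1e-12):
    E = mean_anomaly if ecc < 0.8 else math.pi   # common initial guess
    for _ in range(50):
        f = E - ecc * math.sin(E) - mean_anomaly
        E -= f / (1.0 - ecc * math.cos(E))       # Newton step
        if abs(f) < tol:
            break
    return E

E = solve_kepler(mean_anomaly=1.0, ecc=0.3)
print(E, E - 0.3 * math.sin(E))   # second value recovers M = 1.0
```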
Influence of Silicate Melt Composition on Metal/Silicate Partitioning of W, Ge, Ga and Ni
NASA Technical Reports Server (NTRS)
Singletary, S. J.; Domanik, K.; Drake, M. J.
2005-01-01
The depletion of the siderophile elements in the Earth's upper mantle relative to the chondritic meteorites is a geochemical imprint of core segregation. Therefore, metal/silicate partition coefficients (Dm/s) for siderophile elements are essential to investigations of core formation when used in conjunction with the pattern of elemental abundances in the Earth's mantle. The partitioning of siderophile elements is controlled by temperature, pressure, oxygen fugacity, and the compositions of the metal and silicate phases. Several recent studies have shown the importance of silicate melt composition for the partitioning of siderophile elements between silicate and metallic liquids. It has been demonstrated that many elements display increased solubility in less polymerized (mafic) melts. However, the importance of silicate melt composition was believed to be minor compared with the influence of oxygen fugacity until studies showed that melt composition is an important factor at high pressures and temperatures. Melt composition was also found to be important for the partitioning of high-valency siderophile elements. Atmospheric-pressure experiments were conducted, varying only silicate melt composition, to assess its importance for the partitioning of W, Co and Ga, and it was found that the valence of the dissolving species plays an important role in determining the effect of composition on solubility. In this study, we extend the data set to higher pressures and investigate the role of silicate melt composition in the partitioning of the siderophile elements W, Ge, Ga and Ni between metallic and silicate liquids.
Spatial coding-based approach for partitioning big spatial data in Hadoop
NASA Astrophysics Data System (ADS)
Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai
2017-09-01
Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, the skewed distribution of spatial data and the varying volume of spatial vector objects pose a significant challenge to ensuring both optimal performance of spatial operations and data balance in the cluster. To tackle this problem, we propose a spatial coding-based approach for partitioning big spatial data in Hadoop. This approach first compresses the whole body of big spatial data based on a spatial coding matrix to create a sensing information set (SIS), including spatial code, size, count and other information. The SIS is then employed to build a spatial partitioning matrix, which is finally used to split all spatial objects into different partitions in the cluster. With our approach, neighbouring spatial objects can be partitioned into the same block, while data skew in the Hadoop distributed file system (HDFS) is minimized. The presented approach is compared, in a case study, against random sampling based partitioning using three measurement standards, namely, spatial index quality, data skew in HDFS, and range query performance. The experimental results show that our method based on the spatial coding technique can improve the query performance of big spatial data, as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it can also efficiently support other distributed big spatial data systems.
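The details of the paper's coding matrix and sensing information set are specific to its implementation, but the underlying idea, giving nearby objects nearby codes so that code ranges form balanced, locality-preserving partitions, can be sketched with a Z-order (Morton) code. Everything below is an illustrative assumption, not the authors' scheme.

```python
# Locality-preserving partitioning via Z-order (Morton) codes: sort
# points by interleaved-bit code, then cut into equal-count ranges so
# partitions are balanced and neighbours tend to share a block.

def morton(x, y, bits=16):
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def partition(points, n_parts):
    coded = sorted(points, key=lambda p: morton(p[0], p[1]))
    size = -(-len(coded) // n_parts)        # ceiling division
    return [coded[i:i + size] for i in range(0, len(coded), size)]

pts = [(3, 5), (4, 5), (100, 200), (101, 201), (7, 2), (99, 199)]
for block in partition(pts, 2):
    print(block)   # nearby points land in the same partition
```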
Measuring uncertainty by extracting fuzzy rules using rough sets
NASA Technical Reports Server (NTRS)
Worm, Jeffrey A.
1991-01-01
Despite the advancements in the computer industry over the past 30 years, there is still one major deficiency: computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. The methods of statistical analysis, Dempster-Shafer theory, rough set theory, and fuzzy set theory are examined as candidate solutions. The fundamentals of these theories are combined to provide a possibly optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly each rule is believed is constructed. From this, the question of how well a fuzzy diagnosis is definable in terms of a set of fuzzy attributes is studied.
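The distinction between certain and possible rules comes from the rough set lower and upper approximations of a concept: equivalence classes wholly inside the concept support certain rules, while classes that merely overlap it support possible rules. A minimal sketch follows; the data and the diagnosis concept are invented for illustration.

```python
# Rough set lower/upper approximations of a target concept. Objects in
# the same equivalence class are indiscernible on the chosen attributes.

def approximations(equiv_classes, target):
    lower = set().union(*(c for c in equiv_classes if c <= target))
    upper = set().union(*(c for c in equiv_classes if c & target))
    return lower, upper

classes = [{1, 2}, {3, 4}, {5, 6}]   # toy indiscernibility classes
faulty = {1, 2, 3}                   # toy diagnosis concept

lower, upper = approximations(classes, faulty)
print("certain rules cover: ", lower)   # {1, 2}
print("possible rules cover:", upper)   # {1, 2, 3, 4}
```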
NASA Astrophysics Data System (ADS)
Carlyle-Moses, D. E.; Schooling, J. T.
2014-12-01
Urban tree canopy processes affect the volume and biogeochemistry of inputs to the hydrological cycle in cities. We studied stemflow from 37 isolated deciduous trees in an urban park in Kamloops, British Columbia, which has a semi-arid climate dominated by small precipitation events. Precipitation and stemflow were measured on an event basis from June 12, 2012 to November 3, 2013. To clarify the effect of canopy traits on stemflow thresholds, rates, yields, percentages, and funneling ratios, we analyzed branch angles, bark roughness, tree size, cover, leaf size, and branch and leader counts. High branch angles promoted stemflow in all trees, while bark roughness influenced stemflow differently for single- and multi-leader trees. The association between stemflow and numerous leaders deserves further study. Columnar-form trees often partitioned a large percentage of precipitation into stemflow, with event-scale values as high as 27.9% recorded for an Armstrong Freeman maple (Acer x freemanii 'Armstrong'). Under growing-season conditions, funneling ratios as high as 196.9 were derived for an American beech (Fagus grandifolia) individual. Among meteorological variables, rain depth was strongly correlated with stemflow yields; intra-storm break duration, rainfall intensity, rainfall inclination, wind speed, and vapour pressure deficit also played roles. Greater stemflow was associated with leafless canopies and with rain or mixed events rather than snow. The results can inform climate-sensitive selection and siting of urban trees for integrated rainwater management. For example, previous studies suggest that the reduction in stormwater generation by urban trees is accomplished through canopy interception loss alone. However, trees that partition large quantities of precipitation as stemflow, draining the canopy to the base of the trunk where the water can infiltrate into the soil rather than falling on impervious surfaces as throughfall, may help reduce stormwater flow.
Solving Multi-variate Polynomial Equations in a Finite Field
2013-06-01
Algebraic Background: In this section, some algebraic and graph-theoretic definitions and basics are discussed as they pertain to this research. For a more detailed treatment, consult a graph theory text such as [10]. A graph G is a k-partite graph if V(G) can be partitioned into k subsets V1, V2, ..., Vk such that uv is an edge of G only if u and v belong to different partite sets. If, in addition, every pair of vertices in different partite sets is adjacent, G is a complete k-partite graph.
Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation
NASA Astrophysics Data System (ADS)
Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.
2018-05-01
Typical quantifiers of Tsallis' statistical mechanics, such as the partition function Z and the mean energy 〈U〉, exhibit poles. The poles appear at distinctive values of Tsallis' characteristic real parameter q, on a numerable set of rational numbers of the q-line. These poles are dealt with using dimensional regularization. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitational potential.
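For orientation, the q-generalized partition function whose q-dependence produces these poles has the standard Tsallis form given below; the dimensional regularization of the poles themselves is the subject of the paper and is not reproduced here.

```latex
% Standard Tsallis q-exponential partition function; the Boltzmann-Gibbs
% expression is recovered in the q -> 1 limit.
\[
  Z_q \;=\; \sum_i \bigl[\,1 - (1-q)\,\beta E_i\,\bigr]^{\frac{1}{1-q}},
  \qquad
  \lim_{q \to 1} Z_q \;=\; \sum_i e^{-\beta E_i}.
\]
```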
NASA Astrophysics Data System (ADS)
Armadi, A. S.; Usman, M.; Suprastiwi, E.
2017-08-01
The aim of this study was to determine the surface roughness of composite resin veneers after brushing. Twenty-four specimens of composite resin veneer were divided into three subgroups: brushed without toothpaste, brushed with non-herbal toothpaste, and brushed with herbal toothpaste. Brushing was performed for one set of 5,000 strokes and continued for a second set of 5,000 strokes. The roughness of the composite resin veneers was determined using a surface roughness tester. The results were statistically analyzed using the nonparametric Kruskal-Wallis test and post hoc Mann-Whitney tests. The results indicate that the largest difference in Ra values occurred within the subgroup brushed with the herbal toothpaste. In conclusion, the herbal toothpaste produced a rougher surface on composite resin veneer than the non-herbal toothpaste.
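The analysis pipeline is standard and easy to reproduce. The sketch below uses SciPy with invented Ra values for the three subgroups; the actual measurements are in the paper.

```python
from scipy.stats import kruskal, mannwhitneyu

# Invented Ra values (um) for the three brushing subgroups.
no_paste   = [0.21, 0.25, 0.23, 0.22]
non_herbal = [0.28, 0.31, 0.30, 0.29]
herbal     = [0.39, 0.42, 0.40, 0.44]

h, p = kruskal(no_paste, non_herbal, herbal)   # omnibus nonparametric test
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

u, p_pair = mannwhitneyu(herbal, non_herbal)   # post hoc pairwise test
print(f"Mann-Whitney U = {u:.1f}, p = {p_pair:.4f}")
```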
Simulation for Rough Mill Options
Janice K. Wiedenbeck
1992-01-01
How is rough mill production affected by lumber length? Lumber grade? Cutting quality? Cutting sizes? How would equipment purchase plans be prioritized? How do personnel shifts affect system productivity? What effect would a reduction in machine set-up time have on material flow? Simulation modeling is being widely used in many industries to provide valuable insight...
Benchmarking performance measurement and lean manufacturing in the rough mill
Dan Cumbo; D. Earl Kline; Matthew S. Bumgardner
2006-01-01
Lean manufacturing represents a set of tools and a stepwise strategy for achieving smooth, predictable product flow, maximum product flexibility, and minimum system waste. While lean manufacturing principles have been successfully applied to some components of the secondary wood products value stream (e.g., moulding, turning, assembly, and finishing), the rough mill is...
Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J.
2018-01-01
Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and for actively collecting data from audience interaction. These sounds were embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results from the concert setting were replicated in a controlled laboratory environment to corroborate the findings. The results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications and evaluate the audience interface.
A Granular Self-Organizing Map for Clustering and Gene Selection in Microarray Data.
Ray, Shubhra Sankar; Ganivada, Avatharam; Pal, Sankar K
2016-09-01
A new granular self-organizing map (GSOM) is developed by integrating the concept of a fuzzy rough set with the SOM. While training the GSOM, the weights of a winning neuron and the neighborhood neurons are updated through a modified learning procedure. The neighborhood is newly defined using the fuzzy rough sets. The clusters (granules) evolved by the GSOM are presented to a decision table as its decision classes. Based on the decision table, a method of gene selection is developed. The effectiveness of the GSOM is shown in both clustering samples and developing an unsupervised fuzzy rough feature selection (UFRFS) method for gene selection in microarray data. While the superior results of the GSOM, as compared with the related clustering methods, are provided in terms of β-index, DB-index, Dunn-index, and fuzzy rough entropy, the genes selected by the UFRFS are not only better in terms of classification accuracy and a feature evaluation index, but also statistically more significant than the related unsupervised methods. The C-codes of the GSOM and UFRFS are available online at http://avatharamg.webs.com/software-code.
Two-lattice models of trace element behavior: A response
NASA Astrophysics Data System (ADS)
Ellison, Adam J. G.; Hess, Paul C.
1990-08-01
The two-lattice melt component models of Bottinga and Weill (1972), Nielsen and Drake (1979), and Nielsen (1985) are applied to major and trace element partitioning between the coexisting immiscible liquids studied by Ryerson and Hess (1978) and Watson (1976). The results show that (1) the set of components most successful in one system is not necessarily portable to another system; (2) solution non-ideality within a sublattice severely limits the applicability of two-lattice models; (3) rigorous application of two-lattice melt components may yield effective partition coefficients for major element components with no physical interpretation; and (4) the distinction between network-forming and network-modifying components in the sense of the two-lattice models is not clear cut. The algebraic description of two-lattice models is such that they most successfully limit the compositional dependence of major and trace element solution behavior when the effective partition coefficient of the component of interest is essentially the same as the bulk partition coefficient of all other components within its sublattice.
Toward prediction of alkane/water partition coefficients.
Toulmin, Anita; Wood, J Matthew; Kenny, Peter W
2008-07-10
Partition coefficients were measured for 47 compounds in the hexadecane/water (Phxd) and 1-octanol/water (Poct) systems. Some types of hydrogen bond acceptor presented by these compounds to the partitioning systems are not well represented in the literature on alkane/water partitioning. The difference, ΔlogP, between logPoct and logPhxd is a measure of the hydrogen bonding potential of a molecule and is identified as a target for predictive modeling. The minimized molecular electrostatic potential (Vmin) was shown to be an effective predictor of the contribution of hydrogen bond acceptors to ΔlogP. Carbonyl oxygen atoms were found to be stronger hydrogen bond acceptors, for their electrostatic potential, than heteroaromatic nitrogen or oxygen bound to hypervalent sulfur or nitrogen. Values of Vmin calculated for hydrogen-bonded complexes were used to explore polarization effects. Predicted logPhxd and ΔlogP were shown to be more effective than logPoct for modeling brain penetration for a data set of 18 compounds.
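The modeling target reduces to a one-descriptor regression. The sketch below fits ΔlogP against Vmin by least squares with invented numbers; the actual descriptor values and fitted coefficients are in the paper.

```python
import numpy as np

# Illustrative one-descriptor linear model: DeltalogP ~ Vmin.
# All numbers are invented for demonstration.
v_min = np.array([-35.0, -42.0, -50.0, -58.0, -63.0])   # kcal/mol
delta_logp = np.array([1.1, 1.6, 2.2, 2.9, 3.3])

slope, intercept = np.polyfit(v_min, delta_logp, 1)     # least squares
print(f"DeltalogP = {slope:.3f} * Vmin + {intercept:.2f}")
print("residuals:", np.round(delta_logp - (slope * v_min + intercept), 2))
```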
Sharing the cell's bounty - organelle inheritance in yeast.
Knoblach, Barbara; Rachubinski, Richard A
2015-02-15
Eukaryotic cells replicate and partition their organelles between the mother cell and the daughter cell at cytokinesis. Polarized cells, notably the budding yeast Saccharomyces cerevisiae, are well suited for the study of organelle inheritance, as they facilitate an experimental dissection of organelle transport and retention processes. Much progress has been made in defining the molecular players involved in organelle partitioning in yeast. Each organelle uses a distinct set of factors - motor, anchor and adaptor proteins - that ensures its inheritance by future generations of cells. We propose that all organelles, regardless of origin or copy number, are partitioned by the same fundamental mechanism involving division and segregation. Thus, the mother cell keeps, and the daughter cell receives, their fair and equitable share of organelles. This mechanism of partitioning moreover facilitates the segregation of organelle fragments that are not functionally equivalent. In this Commentary, we describe how this principle of organelle population control affects peroxisomes and other organelles, and outline its implications for yeast life span and rejuvenation.
Partitioning of polar and non-polar neutral organic chemicals into human and cow milk.
Geisler, Anett; Endo, Satoshi; Goss, Kai-Uwe
2011-10-01
The aim of this work was to develop a predictive model for milk/water partition coefficients of neutral organic compounds. Batch experiments were performed for 119 diverse organic chemicals in human milk and in raw and processed cow milk at 37°C. No differences (<0.3 log units) between the partition coefficients of these types of milk were observed. A polyparameter linear free energy relationship model fit the calibration data well (SD = 0.22 log units). An experimental validation data set including hormones and hormone-active compounds was predicted satisfactorily by the model. An alternative modelling approach based on log K(ow) showed poorer performance. The model presented here provides a significant improvement in predicting the enrichment of potentially hazardous chemicals in milk. In combination with physiologically based pharmacokinetic modelling, this improvement in the estimation of milk/water partition coefficients may allow a better risk assessment for a wide range of neutral organic chemicals.
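A polyparameter LFER of this kind typically takes the Abraham form shown below; the system coefficients (c, e, s, a, b, v) are fitted to the calibration data and are specific to the milk/water system (the fitted values are in the paper, not reproduced here).

```latex
% General Abraham-type polyparameter LFER:
%   E: excess molar refraction, S: dipolarity/polarizability,
%   A/B: hydrogen-bond acidity/basicity, V: McGowan volume.
\[
  \log K_{\mathrm{milk/water}} \;=\; c + eE + sS + aA + bB + vV
\]
```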
The "p"-Median Model as a Tool for Clustering Psychological Data
ERIC Educational Resources Information Center
Kohn, Hans-Friedrich; Steinley, Douglas; Brusco, Michael J.
2010-01-01
The "p"-median clustering model represents a combinatorial approach to partition data sets into disjoint, nonhierarchical groups. Object classes are constructed around "exemplars", that is, manifest objects in the data set, with the remaining instances assigned to their closest cluster centers. Effective, state-of-the-art implementations of…
The Backscattering Phase Function for a Sphere with a Two-Scale Relief of Rough Surface
NASA Astrophysics Data System (ADS)
Klass, E. V.
2017-12-01
The backscattering of light from spherical surfaces characterized by one- and two-scale roughness reliefs has been investigated. The analysis is performed using the three-dimensional Monte Carlo program POKS-RG (geometrical-optics approximation), which makes it possible to take into account the roughness of the objects under study by introducing local geometries at different levels. The geometric module of the program describes objects by equations of second-order surfaces. One-scale roughness is set as an ensemble of geometric figures (convex or concave halves of ellipsoids or cones). Two-scale roughness is modeled by convex halves of ellipsoids whose surfaces contain ellipsoidal pores. It is shown that a spherical surface with one-scale convex inhomogeneities has a flatter backscattering phase function than a surface with concave inhomogeneities (pores). For a sphere with two-scale roughness, the backscattering intensity is found to be determined mostly by the lower-level inhomogeneities. The influence of roughness on the backscattering from different spatial regions of the spherical surface is analyzed.
NASA Astrophysics Data System (ADS)
Haghighi, Erfan; Gianotti, Daniel J.; Rigden, Angela J.; Salvucci, Guido D.; Kirchner, James W.; Entekhabi, Dara
2017-04-01
Being located in the transitional zone between dry and wet climates, semiarid ecosystems (and their associated ecohydrological processes) play a critical role in controlling climate change and global warming. Land evapotranspiration (ET), which is a central process in the climate system and a nexus of the water, energy and carbon cycles, typically accounts for up to 95% of the water budget in semiarid areas. Thus, the manner in which ET is partitioned into soil evaporation and plant transpiration in these settings is of practical importance for water and carbon cycling and their feedbacks to the climate system. ET (and its partitioning) in these regions is primarily controlled by surface soil moisture, which varies episodically under stochastic precipitation inputs. Important as the ET-soil moisture relationship is, it remains empirical, and the physical mechanisms governing its nature and dynamics are underexplored. The objective of this study is therefore twofold: (1) to provide observational evidence for the influence of surface cover conditions on ET-soil moisture coupling in semiarid regions, using soil moisture data from NASA's SMAP satellite mission combined with independent observationally based ET estimates; and (2) to develop a relatively simple mechanistic modeling platform improving our physical understanding of interactions between micro- and macroscale processes controlling ET and its partitioning in partially vegetated areas. To this end, we invoked concepts from recent progress in mechanistic modeling of turbulent energy flux exchange over bluff-rough surfaces, and developed a physically based ET model that explicitly accounts for how vegetation-induced turbulence in the near-surface region influences soil drying and thus ET rates and dynamics. Model predictions revealed nonlinearities in the strength of the ET-soil moisture relationship (i.e., ∂ET/∂θ) as vegetation cover fraction increases, accounted for by the nonlinearity of surface-cover-dependent turbulent interactions. We identified a (predictable) critical vegetation cover fraction (as a function of vegetation stature and environmental conditions) that yields the strongest ET-soil moisture relationship under prescribed atmospheric conditions. Overall, the results suggest that ∂ET/∂θ varies more widely in regions with tall-stature woody vegetation, which experience higher rates of change in turbulence intensity as the cover fraction increases. Our results facilitate a mathematically tractable description of ∂ET/∂θ, which is a core component of models that seek to predict hydrology-climate feedback processes in a changing climate.
Exact deconstruction of the 6D (2,0) theory
NASA Astrophysics Data System (ADS)
Hayling, J.; Papageorgakis, C.; Pomoni, E.; Rodríguez-Gómez, D.
2017-06-01
The dimensional-deconstruction prescription of Arkani-Hamed, Cohen, Kaplan, Karch and Motl provides a mechanism for recovering the A-type (2,0) theories on T^2, starting from a four-dimensional N=2 circular-quiver theory. We put this conjecture to the test using two exact-counting arguments: in the decompactification limit, we compare the Higgs-branch Hilbert series of the 4D N=2 quiver to the "half-BPS" limit of the (2,0) superconformal index. We also compare the full partition function for the 4D quiver on S^4 to the (2,0) partition function on S^4 × T^2. In both cases we find exact agreement. The partition function calculation sets up a dictionary between exact results in 4D and 6D.
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first- and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure being analyzed. Also presented are example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
Metatranscriptome analyses indicate resource partitioning between diatoms in the field.
Alexander, Harriet; Jenkins, Bethany D; Rynearson, Tatiana A; Dyhrman, Sonya T
2015-04-28
Diverse communities of marine phytoplankton carry out half of global primary production. The vast diversity of the phytoplankton has long perplexed ecologists because these organisms coexist in an isotropic environment while competing for the same basic resources (e.g., inorganic nutrients). Differential niche partitioning of resources is one hypothesis to explain this "paradox of the plankton," but it is difficult to quantify and track variation in phytoplankton metabolism in situ. Here, we use quantitative metatranscriptome analyses to examine pathways of nitrogen (N) and phosphorus (P) metabolism in diatoms that cooccur regularly in an estuary on the east coast of the United States (Narragansett Bay). Expression of known N and P metabolic pathways varied between diatoms, indicating apparent differences in resource utilization capacity that may prevent direct competition. Nutrient amendment incubations skewed N/P ratios, elucidating nutrient-responsive patterns of expression and facilitating a quantitative comparison between diatoms. The resource-responsive (RR) gene sets deviated in composition from the metabolic profile of the organism, being enriched in genes associated with N and P metabolism. Expression of the RR gene set varied over time and differed significantly between diatoms, resulting in opposite transcriptional responses to the same environment. Apparent differences in metabolic capacity and the expression of that capacity in the environment suggest that diatom-specific resource partitioning was occurring in Narragansett Bay. This high-resolution approach highlights the molecular underpinnings of diatom resource utilization and how cooccurring diatoms adjust their cellular physiology to partition their niche space.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Gong, Yadong; Wang, Jinsheng
2013-11-01
Current research on micro-grinding mainly focuses on optimal processing technology for different materials. However, the material removal mechanism in micro-grinding is the basis for achieving a high-quality processed surface. Therefore, a novel method for predicting surface roughness in micro-grinding of hard brittle materials, considering the grain protrusion topography of the micro-grinding tool, is proposed in this paper. The differences in material removal mechanism between the conventional grinding process and the micro-grinding process are analyzed. Topography characterization has been performed on micro-grinding tools fabricated by electroplating. Models of grain density generation and grain interval are built, and a new model for predicting micro-grinding surface roughness is developed. In order to verify the precision and applicability of the proposed surface roughness prediction model, an orthogonal micro-grinding experiment on soda-lime glass is designed and conducted. A series of micro-machined surfaces of this brittle material, with roughness from 78 nm to 0.98 μm, is achieved. The experimental roughness results and the predicted roughness data show evident agreement, and the component variable describing the size effect in the prediction model is calculated to be 1.5×10^7 by an inverse method based on the experimental results. The proposed model builds a set of distributions to account for grain densities at different protrusion heights. Finally, the micro-grinding tools used in the experiment are characterized based on this distribution set. A significant agreement between surface predictions from the proposed model and measurements from the experiments is found, demonstrating the effectiveness of the model. This paper thus provides a theoretical and experimental reference for the material removal mechanism in micro-grinding of soda-lime glass.
Comparison of two metrological approaches for the prediction of human haptic perception
NASA Astrophysics Data System (ADS)
Neumann, Annika; Frank, Daniel; Vondenhoff, Thomas; Schmitt, Robert
2016-06-01
Haptic perception is regarded as a key component of customer appreciation and acceptance for various products. The prediction of customers' haptic perception is of interest during both product development and production phases. This paper presents the results of a multivariate analysis between perceived roughness and texture-related surface measurements, to examine whether perceived roughness can be accurately predicted using technical measurements. Studies have shown that standardized measurement parameters, such as the roughness coefficients (e.g. Rz or Ra), do not show a one-dimensional linear correlation with human perception (of roughness). Thus, an alternative measurement method was compared to standard roughness measurements with regard to its capability to predict perceived roughness from technical measurements. To estimate perceived roughness, an experimental study was conducted in which 102 subjects evaluated four sets of 12 different geometrical surface structures with respect to their relative perceived roughness. The two metrological procedures were examined in relation to their capability to predict the perceived roughness reported by the subjects in the study. The standardized measurements of surface roughness were made using a structured-light 3D scanner. As an alternative method, surface-induced vibrations were measured by a finger-like sensor during robot-controlled traversal of a surface. The presented findings provide a better understanding of the predictability of human haptic perception using technical measurements.
The role of connectedness in haptic object perception.
Plaisier, Myrthe A; van Polanen, Vonne; Kappers, Astrid M L
2017-03-02
We can efficiently detect whether there is a rough object among a set of smooth objects using our sense of touch. We can also quickly determine the number of rough objects in our hand. In this study, we investigated whether the perceptual processing of rough and smooth objects is influenced if these objects are connected. In Experiment 1, participants were asked to identify whether there were exactly two rough target spheres among smooth distractor spheres, while we recorded their response times. The spheres were connected to form pairs: rough spheres were paired together and smooth spheres were paired together ('within pairs arrangement'), or a rough and a smooth sphere were connected ('between pairs arrangement'). Participants responded faster when the spheres in a pair were identical. In Experiment 2, we found that the advantage for within pairs arrangements was not driven by feature saliency. Overall our results show that haptic information is processed faster when targets were connected together compared to when targets were connected to distractors.
K-Partite RNA Secondary Structures
NASA Astrophysics Data System (ADS)
Jiang, Minghui; Tejada, Pedro J.; Lasisi, Ramoni O.; Cheng, Shanhong; Fechser, D. Scott
RNA secondary structure prediction is a fundamental problem in structural bioinformatics. The prediction problem is difficult because RNA secondary structures may contain pseudoknots formed by crossing base pairs. We introduce k-partite secondary structures as a simple classification of RNA secondary structures with pseudoknots. An RNA secondary structure is k-partite if it is the union of k pseudoknot-free sub-structures. Most known RNA secondary structures are either bipartite or tripartite. We show that there exists a constant number k such that any secondary structure can be modified into a k-partite secondary structure with approximately the same free energy. This offers a partial explanation of the prevalence of k-partite secondary structures with small k. We give a complete characterization of the computational complexity of recognizing k-partite secondary structures for all k ≥ 2, and show that this recognition problem is essentially the same as the k-colorability problem on circle graphs. We present two simple heuristics, iterated peeling and first-fit packing, for finding k-partite RNA secondary structures. For maximizing the number of base pair stackings, our iterated peeling heuristic achieves a constant approximation ratio of at most k for 2 ≤ k ≤ 5, and at most 6/(1 − (1 − 6/k)^k) ≤ 6/(1 − e^{-6}) < 6.01491 for k ≥ 6. Experiments on sequences from PseudoBase show that our first-fit packing heuristic outperforms the leading method HotKnots in predicting RNA secondary structures with pseudoknots. Source code, data set, and experimental results are available at
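The core of first-fit packing can be sketched as page assignment of arcs: each base pair goes to the first pseudoknot-free subset ("page") in which it crosses no previously placed pair. This is our reading of the heuristic's basic idea, without the energy bookkeeping of the paper's implementation.

```python
# First-fit packing of base pairs into pseudoknot-free "pages".
# Two arcs (i, j) and (k, l) cross iff i < k < j < l.

def crossing(a, b):
    (i, j), (k, l) = sorted((a, b))
    return i < k < j < l

def first_fit(pairs):
    pages = []
    for arc in sorted(pairs):
        for page in pages:
            if not any(crossing(arc, other) for other in page):
                page.append(arc)
                break
        else:
            pages.append([arc])    # no page fits: open a new one
    return pages

# (1,10) and (5,15) cross (a pseudoknot), so two pages are needed.
print(first_fit([(1, 10), (2, 9), (5, 15), (11, 14)]))
```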
ROMI-RIP: Rough Mill RIP-first simulator user's guide
R. Edward Thomas
1995-01-01
The ROugh Mill RIP-first simulator (ROMI-RIP) is a computer software package for IBM compatible personal computers that simulates current industrial practices for gang-ripping lumber. This guide shows the user how to set and examine the results of simulations regarding current or proposed mill practices. ROMI-RIP accepts cutting bills with up to 300 different part...
Weights and topology: a study of the effects of graph construction on 3D image segmentation.
Grady, Leo; Jolly, Marie-Pierre
2008-01-01
Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process in each of these algorithms is to use the image content to generate a set of weights for the graph and then to set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
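The weighting heuristic in question is typically a Gaussian on intensity differences, with the decay parameter and the neighborhood connectivity as the free choices under study. A minimal sketch, with an assumed β value, follows.

```python
import math

# Common intensity-based graph weight: w_ij = exp(-beta * (I_i - I_j)^2).
# beta and the neighborhood connectivity (e.g., 6 vs. 26 neighbors in 3D)
# are exactly the construction choices whose effect is being studied.

def edge_weight(intensity_i, intensity_j, beta=0.05):
    return math.exp(-beta * (intensity_i - intensity_j) ** 2)

print(edge_weight(100, 102))   # similar voxels -> weight near 1
print(edge_weight(100, 160))   # strong edge   -> weight near 0
```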
NASA Astrophysics Data System (ADS)
Kaboli, Shirin; McDermid, Joseph R.
2014-08-01
A galvanizing simulator was used to determine the effect of galvanizing bath antimony (Sb) content, substrate surface roughness, and cooling rate on the microstructural development of metallic zinc coatings. Substrate surface roughness was varied through the use of relatively rough hot-rolled and relatively smooth bright-rolled steels, cooling rates were varied from 0.1 to 10 K/s, and bulk bath Sb levels were varied from 0 to 0.1 wt pct. In general, it was found that increasing bath Sb content resulted in coatings with a larger grain size and strongly promoted the development of coatings with the close-packed {0002} basal plane parallel to the substrate surface. Increasing substrate surface roughness tended to decrease the coating grain size and promoted a more random coating crystallographic texture, except in the case of the highest Sb content bath (0.1 wt pct Sb), where substrate roughness had no significant effect on grain size except at higher cooling rates (10 K/s). Increased cooling rates tended to decrease the coating grain size and promote the {0002} basal orientation. Calculations showed that increasing the bath Sb content from 0 to 0.1 wt pct Sb increased the dendrite tip growth velocity from 0.06 to 0.11 cm/s by decreasing the solid-liquid interface surface energy from 0.77 to 0.45 J/m2. Increased dendrite tip velocity only partially explains the formation of larger zinc grains at higher Sb levels. It was also found that the classic nucleation theory cannot completely explain the present experimental observations, particularly the effect of increasing the bath Sb, where the classical theory predicts increased nucleation and a finer grain size. In this case, the "poisoning" theory of nucleation sites by segregated Sb may provide a partial explanation. However, any analysis is greatly hampered by the lack of fundamental thermodynamic information such as partition coefficients and surface energies and by a lack of fundamental structural studies. Overall, it was concluded that the fundamental mechanisms behind the microstructural development of solidified metallic zinc coatings have yet to be completely elucidated and require further investigation.
NASA Astrophysics Data System (ADS)
Johnson, J. P.; Aronovitz, A. C.
2012-12-01
We conducted laboratory flume experiments to quantify changes in multiple factors leading to mountain river bed stability (i.e., minimal bed changes in space and time), and to understand how stable beds respond to perturbations in sediment supply. Experiments were run in a small flume 4 m long by 0.1 m wide. We imposed an initial well-graded size distribution of sediment (from coarse sand to up to 4 cm clasts), a steady water discharge (0.9 L/s), and initial bed surface slopes (8% and 12%). We measured outlet sediment flux and size distribution, bed topography and surface size distributions, and water depths; from these we calculated total shear stress, form drag and skin friction stress partitioning, and hydraulic roughness. The bed was initially allowed to stabilize with no imposed upstream sediment flux. This stabilization occurred due to significant changes in all of the factors listed in the title, and resulted in incipient step-pool-like bed morphologies. In addition, this study was designed to explore possible long-term effects of gravel augmentation on mountain channel morphology and surface grain size. While the short-term goal of gravel augmentation is usually to cause fining of surface sediment patches, we find that the long-term effects may be opposite. We perturbed the stabilized channels by temporarily imposing an upstream sediment flux of the finest sediment size fraction (sand to granules). Median surface sizes initially decreased due to fine sediment deposition, although transport rates of intermediate-sized grains increased. When the fine sediment supply was stopped, beds evolved to be both rougher and coarser than they had been previously, because the largest grains remained on the bed but intermediate-sized grains were preferentially transported out, leaving higher fractions of larger grains on the surface. Existing models for mixed grain size transport actually predict changes in mobilization reasonably well, but do not explicitly account for surface roughness evolution. Our results indicate a nonlinear relationship between surface median grain size and bed roughness.
Dynamic mortar finite element method for modeling of shear rupture on frictional rough surfaces
NASA Astrophysics Data System (ADS)
Tal, Yuval; Hager, Bradford H.
2017-09-01
This paper presents a mortar-based finite element formulation for modeling the dynamics of shear rupture on rough interfaces governed by slip-weakening and rate and state (RS) friction laws, focusing on the dynamics of earthquakes. The method utilizes the dual Lagrange multipliers and the primal-dual active set strategy concepts, together with a consistent discretization and linearization of the contact forces and constraints, and the friction laws to obtain a semi-smooth Newton method. The discretization of the RS friction law involves a procedure to condense out the state variables, thus eliminating the addition of another set of unknowns into the system. Several numerical examples of shear rupture on frictional rough interfaces demonstrate the efficiency of the method and examine the effects of the different time discretization schemes on the convergence, energy conservation, and the time evolution of shear traction and slip rate.
Rough set soft computing cancer classification and network: one stone, two birds.
Zhang, Yue
2010-07-15
Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article.
Rough Set Approach to Incomplete Multiscale Information System
Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu
2014-01-01
The multiscale information system is a new knowledge representation system for expressing knowledge at different levels of granulation. In this paper, by considering unknown values, which can be seen everywhere in real-world applications, the incomplete multiscale information system is investigated for the first time. The descriptor technique is employed to construct rough sets at different scales for analyzing the hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, the reduct descriptors are formulated to simplify decision rules, which can be derived from different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852
A discrete scattering series representation for lattice embedded models of chain cyclization
NASA Astrophysics Data System (ADS)
Fraser, Simon J.; Winnik, Mitchell A.
1980-01-01
In this paper we develop a lattice-based model of chain cyclization in the presence of a set of occupied sites V in the lattice. We show that within the approximation of a Markovian chain propagator the effect of V on the partition function for the system can be written as a time-ordered exponential series in which V behaves like a scattering potential and chainlength is the timelike parameter. The discrete and finite nature of this model allows us to obtain rigorous upper and lower bounds to the series limit. We adapt these formulas to calculation of the partition functions and cyclization probabilities of terminally and globally cyclizing chains. Two classes of cyclization are considered: in the first model the target set H may be visited repeatedly (the Markovian model); in the second case vertices in H may be visited at most once (the non-Markovian or taboo model). This formulation depends on two fundamental combinatorial structures, namely the inclusion-exclusion principle and the set of subsets of a set. We have tried to interpret these abstract structures with physical analogies throughout the paper.
Fragment-based prediction of skin sensitization using recursive partitioning
NASA Astrophysics Data System (ADS)
Lu, Jing; Zheng, Mingyue; Wang, Yong; Shen, Qiancheng; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian
2011-09-01
Skin sensitization is an important toxic endpoint in the risk assessment of chemicals. In this paper, structure-activity relationship analysis was performed on the skin sensitization potential of 357 compounds with local lymph node assay data. Structural fragments were extracted by GASTON (GrAph/Sequence/Tree extractiON) from the training set. Eight fragments with accuracy significantly higher than 0.73 (p < 0.1) were retained to make up an indicator fragment descriptor. The fragment descriptor and eight other physicochemical descriptors closely related to the endpoint were calculated to construct the recursive partitioning tree (RP tree) for classification. The balanced accuracies of the training set, test set I, and test set II in the leave-one-out model were 0.846, 0.800, and 0.809, respectively. The results highlight that the fragment-based RP tree is a preferable method for identifying skin sensitizers. Moreover, the selected fragments provide useful structural information for exploring sensitization mechanisms, and the RP tree creates a graphic tree to identify the most important properties associated with skin sensitization. They can provide some guidance for the design of drugs with lower sensitization levels.
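As a rough illustration of the classification step, the sketch below trains a depth-limited decision tree (a common stand-in for an RP tree) on a hypothetical design matrix of one fragment-indicator column plus eight physicochemical descriptors, evaluated leave-one-out as in the abstract; the data here are random placeholders, not the LLNA set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 2, size=357),    # indicator: any of the 8 fragments present
    rng.normal(size=(357, 8)),       # eight physicochemical descriptors
])
y = rng.integers(0, 2, size=357)     # sensitizer (1) vs. non-sensitizer (0)

tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=0)
acc = cross_val_score(tree, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.3f}")
```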
A knowledge based system for scientific data visualization
NASA Technical Reports Server (NTRS)
Senay, Hikmet; Ignatius, Eve
1992-01-01
A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis, and medical imaging.
Partitioning in parallel processing of production systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oflazer, K.
1987-01-01
This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft’s algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
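A simplified sketch of the two-phase idea follows, assuming a complete transition function: the coarse partition keys each state on (backward depth, acceptance), and the refinement hashes transition signatures until the partition is stable. This illustrates the strategy, not the authors' exact algorithm.

```python
from collections import deque

def minimize_dfa(states, alphabet, delta, accepting):
    """Return a state -> block-id map whose blocks are the minimal DFA states."""
    # Phase 1: backward depth = shortest distance to an accepting state,
    # computed by BFS over the reversed transition relation.
    rev = {s: [] for s in states}
    for s in states:
        for a in alphabet:
            rev[delta[s][a]].append(s)
    depth, queue = {s: None for s in states}, deque(accepting)
    for s in accepting:
        depth[s] = 0
    while queue:
        s = queue.popleft()
        for p in rev[s]:
            if depth[p] is None:
                depth[p] = depth[s] + 1
                queue.append(p)
    block = {s: (depth[s], s in accepting) for s in states}  # coarse partition
    # Phase 2: refine blocks by hashing each state's transition signature.
    while True:
        sig = {s: (block[s], tuple(block[delta[s][a]] for a in alphabet))
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values()), key=str))}
        new_block = {s: ids[sig[s]] for s in states}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block  # partition stable: no block was split
        block = new_block
```

Backward depth never splits equivalent states (equivalent states accept the same language, hence share the same shortest accepted suffix), so when depths already separate most states, few refinement passes are needed.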
Harnessing the Bethe free energy
Bapst, Victor
2016-01-01
A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k-SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [Krzakala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ω^n for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178
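In the standard factor-graph notation (a textbook formulation, not quoted from the paper), the Gibbs measure and its partition function read

\[
  \mu(\sigma) \;=\; \frac{1}{Z}\prod_{a}\psi_a\!\bigl(\sigma_{\partial a}\bigr),
  \qquad
  Z \;=\; \sum_{\sigma\in\Omega^{n}}\;\prod_{a}\psi_a\!\bigl(\sigma_{\partial a}\bigr),
\]

where each weight function \(\psi_a\) encodes one constraint and \(\sigma_{\partial a}\) denotes the values of the variables it binds.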
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Sloma, Michael F.; Mathews, David H.
2016-01-01
RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924
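The underlying bookkeeping is the standard Boltzmann-weighted ensemble (a textbook identity, not a formula quoted from the paper): the probability of any structural feature is the partition-function-weighted share of the structures containing it,

\[
  Q \;=\; \sum_{s} e^{-\Delta G^{\circ}(s)/RT},
  \qquad
  P(\ell) \;=\; \frac{1}{Q}\sum_{s \,\ni\, \ell} e^{-\Delta G^{\circ}(s)/RT},
\]

where the sum in \(P(\ell)\) runs over all secondary structures \(s\) that contain the loop or helix \(\ell\).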
Hall, Matthew; Woolhouse, Mark; Rambaut, Andrew
2015-01-01
The use of genetic data to reconstruct the transmission tree of infectious disease epidemics and outbreaks has been the subject of an increasing number of studies, but previous approaches have usually either made assumptions that are not fully compatible with phylogenetic inference, or, where they have based inference on a phylogeny, have employed a procedure that requires this tree to be fixed. At the same time, the coalescent-based models of the pathogen population that are employed in the methods usually used for time-resolved phylogeny reconstruction are a considerable simplification of the epidemic process, as they assume that pathogen lineages mix freely. Here, we contribute a new method that is simultaneously a phylogeny reconstruction method for isolates taken from an epidemic, and a procedure for transmission tree reconstruction. We observe that, if one or more samples is taken from each host in an epidemic or outbreak and these are used to build a phylogeny, a transmission tree is equivalent to a partition of the set of nodes of this phylogeny, such that each partition element is a set of nodes that is connected in the full tree and contains all the tips corresponding to samples taken from one and only one host. We then implement a Markov chain Monte Carlo (MCMC) procedure for simultaneous sampling from the spaces of both trees, utilising a newly designed set of phylogenetic tree proposals that also respect node partitions. We calculate the posterior probability of these partitioned trees based on a model that acknowledges the population structure of an epidemic by employing an individual-based disease transmission model and a coalescent process taking place within each host. We demonstrate our method, first using simulated data, and then with sequences taken from the H7N7 avian influenza outbreak that occurred in the Netherlands in 2003. We show that it is superior to established coalescent methods for reconstructing the topology and node heights of the phylogeny and performs well for transmission tree reconstruction when the phylogeny is well-resolved by the genetic data, but caution that this will often not be the case in practice and that existing genetic and epidemiological data should be used to configure such analyses whenever possible. This method is available for use by the research community as part of BEAST, one of the most widely-used packages for reconstruction of dated phylogenies. PMID:26717515
Daniel, J B; Friggens, N C; van Laar, H; Ingvartsen, K L; Sauvant, D
2018-06-01
The control of nutrient partitioning is complex and affected by many factors, among them physiological state and production potential. Therefore, the current model aims to provide for dairy cows a dynamic framework to predict a consistent set of reference performance patterns (milk component yields, body composition change, dry-matter intake) sensitive to physiological status across a range of milk production potentials (within and between breeds). Flows and partition of net energy toward maintenance, growth, gestation, body reserves and milk components are described in the model. The structure of the model is characterized by two sub-models, a regulating sub-model of homeorhetic control which sets dynamic partitioning rules along the lactation, and an operating sub-model that translates this into animal performance. The regulating sub-model describes lactation as the result of three driving forces: (1) use of previously acquired resources through mobilization, (2) acquisition of new resources with a priority of partition towards milk and (3) subsequent use of resources towards body reserves gain. The dynamics of these three driving forces were adjusted separately for fat (milk and body), protein (milk and body) and lactose (milk). Milk yield is predicted from lactose and protein yields with an empirical equation developed from literature data. The model predicts desired dry-matter intake as an outcome of net energy requirements for a given dietary net energy content. The parameters controlling milk component yields and body composition changes were calibrated using two data sets in which the diet was the same for all animals. Weekly data from Holstein dairy cows was used to calibrate the model within-breed across milk production potentials. A second data set was used to evaluate the model and to calibrate it for breed differences (Holstein, Danish Red and Jersey) on the mobilization/reconstitution of body composition and on the yield of individual milk components. These calibrations showed that the model framework was able to adequately simulate milk yield, milk component yields, body composition changes and dry-matter intake throughout lactation for primiparous and multiparous cows differing in their production level.
Various forms of indexing HDMR for modelling multivariate classification problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksu, Çağrı; Tunga, M. Alper
2014-12-10
The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems having real-world data. Mostly, we do not know all possible class values in the domain of the given problem, that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the given problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real-life classification problems. To test these different forms, several well-known multivariate classification problems given in the UCI Machine Learning Repository were used and it was observed that the accuracy results lie between 80% and 95%, which are very satisfactory.
McDonald, Kieran; Carroll, Kenneth C; Brusseau, Mark L
2016-07-01
Two different methods are currently used for measuring interfacial areas between immiscible fluids within 3-D porous media, high-resolution microtomographic imaging and interfacial partitioning tracer tests (IPTT). Both methods were used in this study to measure non-wetting/wetting interfacial areas for a natural sand. The microtomographic imaging was conducted on the same packed columns that were used for the IPTTs. This is in contrast to prior studies comparing the two methods, for which in all cases different samples were used for the two methods. In addition, the columns were imaged before and after the IPTTs to evaluate the potential impacts of the tracer solution on fluid configuration and attendant interfacial area. The interfacial areas measured using IPTT are ~5 times larger than the microtomographic-measured values, which is consistent with previous work. Analysis of the image data revealed no significant impact of the tracer solution on NAPL configuration or interfacial area. Other potential sources of error were evaluated, and all were demonstrated to be insignificant. The disparity in measured interfacial areas between the two methods is attributed to the limitation of the microtomography method to characterize interfacial area associated with microscopic surface roughness due to resolution constraints.
Inner-ear sound pressures near the base of the cochlea in chinchilla: Further investigation
Ravicz, Michael E.; Rosowski, John J.
2013-01-01
The middle-ear pressure gain GMEP, the ratio of sound pressure in the cochlear vestibule PV to sound pressure at the tympanic membrane PTM, is a descriptor of middle-ear sound transfer and the cochlear input for a given stimulus in the ear canal. GMEP and the cochlear partition differential pressure near the cochlear base ΔPCP, which determines the stimulus for cochlear partition motion and has been linked to hearing ability, were computed from simultaneous measurements of PV, PTM, and the sound pressure in scala tympani near the round window PST in chinchilla. GMEP magnitude was approximately 30 dB between 0.1 and 10 kHz and decreased sharply above 20 kHz, which is not consistent with an ideal transformer or a lossless transmission line. The GMEP phase was consistent with a roughly 50-μs delay between PV and PTM. GMEP was little affected by the inner-ear modifications necessary to measure PST. GMEP is a good predictor of ΔPCP at low and moderate frequencies where PV ⪢ PST but overestimates ΔPCP above a few kilohertz where PV ≈ PST. The ratio of PST to PV provides insight into the distribution of sound pressure within the cochlear scalae. PMID:23556590
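In the notation of the abstract, the two quantities are a simple pressure ratio and difference (restated here for clarity):

\[
  G_{\mathrm{MEP}} \;=\; \frac{P_{\mathrm{V}}}{P_{\mathrm{TM}}},
  \qquad
  \Delta P_{\mathrm{CP}} \;=\; P_{\mathrm{V}} - P_{\mathrm{ST}},
\]

so ΔPCP ≈ GMEP · PTM whenever PV dominates PST, which is why the prediction degrades above a few kilohertz where PV ≈ PST.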
Equal Graph Partitioning on Estimated Infection Network as an Effective Epidemic Mitigation Measure
Hadidjojo, Jeremy; Cheong, Siew Ann
2011-01-01
Controlling severe outbreaks remains the most important problem in the infectious disease area. With time, this problem will only become more severe as population density in urban centers grows. Social interactions play a very important role in determining how infectious diseases spread, and the organization of people along social lines gives rise to non-spatial networks in which the infections spread. Infection networks are different for diseases with different transmission modes, but are likely to be identical or highly similar for diseases that spread the same way. Hence, infection networks estimated from common infections can be useful to contain epidemics of a more severe disease with the same transmission mode. Here we present a proof-of-concept study demonstrating the effectiveness of epidemic mitigation based on such estimated infection networks. We first generate artificial social networks of different sizes and average degrees, but with roughly the same clustering characteristic. We then start SIR epidemics on these networks, censor the simulated incidences, and use them to reconstruct the infection network. We then efficiently fragment the estimated network by removing the smallest number of nodes identified by a graph partitioning algorithm. Finally, we demonstrate the effectiveness of this targeted strategy, by comparing it against traditional untargeted strategies, in slowing down and reducing the size of advancing epidemics. PMID:21799777
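A minimal sketch of the fragmentation step, using NetworkX's Kernighan-Lin bisection as a stand-in for the paper's (unspecified) partitioning algorithm and a toy small-world graph in place of an estimated infection network; the node-scoring heuristic here is illustrative.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def nodes_to_remove(G, budget=10):
    """Rank nodes by how many edges they carry across a balanced bisection;
    removing the top-ranked nodes fragments the network most cheaply."""
    part_a, part_b = kernighan_lin_bisection(G, seed=42)
    side = {n: 0 for n in part_a}
    side.update({n: 1 for n in part_b})
    cross = {n: 0 for n in G}
    for u, v in G.edges():
        if side[u] != side[v]:          # edge crosses the partition boundary
            cross[u] += 1
            cross[v] += 1
    return sorted(cross, key=cross.get, reverse=True)[:budget]

G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)   # toy contact network
print(nodes_to_remove(G))
```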
ROMI 4.0: Rough mill simulator 4.0 users manual
R. Edward Thomas; Timo Grueneberg; Urs Buehlmann
2015-01-01
The Rough MIll simulator (ROMI Version 4.0) is a computer software package for personal computers (PCs) that simulates current industrial practices for rip-first, chop-first, and rip and chop-first lumber processing. This guide shows how to set up the software; design, implement, and execute simulations; and examine the results. ROMI 4.0 accepts cutting bills with as...
The fission yeast cytokinetic contractile ring regulates septum shape and closure
Thiyagarajan, Sathish; Munteanu, Emilia Laura; Arasada, Rajesh; Pollard, Thomas D.; O'Shaughnessy, Ben
2015-01-01
During cytokinesis, fission yeast and other fungi and bacteria grow a septum that divides the cell in two. In fission yeast closure of the circular septum hole by the β-glucan synthases (Bgs) and other glucan synthases in the plasma membrane is tightly coupled to constriction of an actomyosin contractile ring attached to the membrane. It is unknown how septum growth is coordinated over scales of several microns to maintain septum circularity. Here, we documented the shapes of ingrowing septum edges by measuring the roughness of the edges, a measure of the deviation from circularity. The roughness was small, with spatial correlations indicative of spatially coordinated growth. We hypothesized that Bgs-mediated septum growth is mechanosensitive and coupled to contractile ring tension. A mathematical model showed that ring tension then generates almost circular septum edges by adjusting growth rates in a curvature-dependent fashion. The model reproduced experimental roughness statistics and showed that septum synthesis sets the mean closure rate. Our results suggest that the fission yeast cytokinetic ring tension does not set the constriction rate but regulates septum closure by suppressing roughness produced by inherently stochastic molecular growth processes. PMID:26240178
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in 40-50% saving in overall rendering time.
NASA Astrophysics Data System (ADS)
Martin, A. C. H.; Boutin, J.; Hauser, D.; Dinnat, E. P.
2014-08-01
The impact of the ocean surface roughness on the ocean L-band emissivity is investigated using simultaneous airborne measurements from an L-band radiometer (CAROLS) and from a C-band scatterometer (STORM) acquired in the Bay of Biscay (off the French Atlantic coast) in November 2010. Two synergetic approaches are used to investigate the impact of surface roughness on the L-band brightness temperature (Tb). First, wind derived from the scatterometer measurements is used to analyze the roughness contribution to Tb as a function of wind and compare it with the one simulated by SMOS and Aquarius roughness models. Then residuals from this mean relationship are analyzed in terms of mean square slope derived from the STORM instrument. We show improvement of new radiometric roughness models derived from SMOS and Aquarius satellite measurements in comparison with prelaunch models. The influence of wind azimuth on Tb could not be established from our data set. However, we point out the importance of taking into account large roughness scales (>20 cm), in addition to the small roughness scales (5 cm) that are rapidly affected by wind, when interpreting radiometric measurements far from nadir. This was made possible thanks to simultaneous estimates of large and small roughness scales using STORM at small (7-16°) and large (30°) incidence angles.
NASA Astrophysics Data System (ADS)
Domínguez, Noemí; Castilla, Pau; Linzoain, María Eugenia; Durand, Géraldine; García, Cristina; Arasa, Josep
2018-04-01
This work presents the validation study of a method developed to measure contact angles with a confocal device on a set of hydrophobic samples. The use of this device allows the evaluation of the roughness of the surface and the determination of the contact angle in the same area of the sample. Furthermore, a theoretical evaluation, based on Wenzel's model, of the impact of the roughness of a nonsmooth surface on the calculated contact angle when that roughness is not taken into account is also presented.
Fingerprinting the type of line edge roughness
NASA Astrophysics Data System (ADS)
Fernández Herrero, A.; Pflüger, M.; Scholze, F.; Soltwisch, V.
2017-06-01
Lamellar gratings are widely used diffractive optical elements and are prototypes of structural elements in integrated electronic circuits. EUV scatterometry is very sensitive to structure details and imperfections, which makes it suitable for the characterization of nanostructured surfaces. Compared to X-ray methods, EUV scattering allows for steeper angles of incidence, which is highly preferable for the investigation of small measurement fields on semiconductor wafers. For the control of the lithographic manufacturing process, a rapid in-line characterization of nanostructures is indispensable. Numerous studies on the determination of regular geometry parameters of lamellar gratings from optical and Extreme Ultraviolet (EUV) scattering have also investigated the impact of roughness on the respective results. The challenge is to appropriately model the influence of structure roughness on the diffraction intensities used for the reconstruction of the surface profile. The impact of roughness has already been studied analytically, but only for gratings with periodic pseudoroughness, owing to practical restrictions on the computational domain. Our investigation aims at a better understanding of the scattering caused by line roughness. We designed a set of nine lamellar Si gratings to be studied by EUV scatterometry. It includes one reference grating with no artificial roughness added, four gratings with a periodic roughness distribution (two with a prevailing line edge roughness (LER) and another two with line width roughness (LWR)), and four gratings with a stochastic roughness distribution (two with LER and two with LWR). We show that the type of line roughness has a strong impact on the angular distribution of the diffuse scatter. Our experimental results are not described well by the present modelling approach based on small, periodically repeated domains.
47 CFR 101.1415 - Partitioning and disaggregation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... American Datum (NAD83). (d) Unjust enrichment. 12 GHz licensees that received a bidding credit and... be subject to the provisions concerning unjust enrichment as set forth in § 1.2111 of this chapter...
Clustering, Seriation, and Subset Extraction of Confusion Data
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2006-01-01
The study of confusion data is a well established practice in psychology. Although many types of analytical approaches for confusion data are available, among the most common methods are the extraction of 1 or more subsets of stimuli, the partitioning of the complete stimulus set into distinct groups, and the ordering of the stimulus set. Although…
Foreign Language Analysis and Recognition (FLARe)
2016-10-08
... Rates (CERs) were obtained with each feature set: (1) 19.2%, (2) 17.3%, and (3) 15.3%. Based on these results, a GMM-HMM speech recognition system ... These systems were evaluated on the HUB4 and HKUST test partitions. Table 7 shows the CER obtained on each test set. Whereas including the HKUST data ...
Reppas-Chrysovitsinos, Efstathios; Sobek, Anna; MacLeod, Matthew
2016-06-15
Polymeric materials flowing through the technosphere are repositories of organic chemicals throughout their life cycle. Equilibrium partition ratios of organic chemicals between these materials and air (KMA) or water (KMW) are required for models of fate and transport, high-throughput exposure assessment and passive sampling. KMA and KMW have been measured for a growing number of chemical/material combinations, but significant data gaps still exist. We assembled a database of 363 KMA and 910 KMW measurements for 446 individual compounds and nearly 40 individual polymers and biopolymers, collected from 29 studies. We used the EPI Suite and ABSOLV software packages to estimate physicochemical properties of the compounds and we employed an empirical correlation based on Trouton's rule to adjust the measured KMA and KMW values to a standard reference temperature of 298 K. Then, we used a thermodynamic triangle with Henry's law constant to calculate a complete set of 1273 KMA and KMW values. Using simple linear regression, we developed a suite of single-parameter linear free energy relationship (spLFER) models to estimate KMA from the EPI Suite-estimated octanol-air partition ratio (KOA) and KMW from the EPI Suite-estimated octanol-water partition ratio (KOW). Similarly, using multiple linear regression, we developed a set of polyparameter linear free energy relationship (ppLFER) models to estimate KMA and KMW from ABSOLV-estimated Abraham solvation parameters. We explored the two LFER approaches to investigate (1) their performance in estimating partition ratios, and (2) uncertainties associated with treating all different polymers as a single "bulk" polymeric material compartment. The models we have developed are suitable for screening assessments of the tendency for organic chemicals to be emitted from materials, and for use in multimedia models of the fate of organic chemicals in the indoor environment. In screening applications we recommend that KMA and KMW be modeled as 0.06 × KOA and 0.06 × KOW respectively, with an uncertainty range of a factor of 15.
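The closing recommendation translates directly into a screening calculation; a small helper, with names of our own choosing, might look like this:

```python
def screen_material_partition(K_OA, K_OW, factor=0.06, uncertainty=15.0):
    """Screening estimates per the paper's recommendation: K_MA ~ 0.06 K_OA and
    K_MW ~ 0.06 K_OW, each uncertain to roughly a factor of 15."""
    out = {}
    for name, k in (("K_MA", factor * K_OA), ("K_MW", factor * K_OW)):
        out[name] = {"estimate": k, "low": k / uncertainty, "high": k * uncertainty}
    return out

# Hypothetical compound with log10(K_OA) = 9 and log10(K_OW) = 5
print(screen_material_partition(K_OA=1e9, K_OW=1e5))
```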
NASA Astrophysics Data System (ADS)
Abatzoglou, John T.; Ficklin, Darren L.
2017-09-01
The geographic variability in the partitioning of precipitation into surface runoff (Q) and evapotranspiration (ET) is fundamental to understanding regional water availability. The Budyko equation suggests this partitioning is strictly a function of aridity, yet observed deviations from this relationship for individual watersheds impede using the framework to model surface water balance in ungauged catchments and under future climate and land use scenarios. A set of climatic, physiographic, and vegetation metrics were used to model the spatial variability in the partitioning of precipitation for 211 watersheds across the contiguous United States (CONUS) within Budyko's framework through the free parameter ω. A generalized additive model found that four widely available variables, precipitation seasonality, the ratio of soil water holding capacity to precipitation, topographic slope, and the fraction of precipitation falling as snow, explained 81.2% of the variability in ω. The ω model applied to the Budyko equation explained 97% of the spatial variability in long-term Q for an independent set of watersheds. The ω model was also applied to estimate the long-term water balance across the CONUS for both contemporary and mid-21st century conditions. The modeled partitioning of observed precipitation to Q and ET compared favorably across the CONUS with estimates from more sophisticated land-surface modeling efforts. For mid-21st century conditions, the model simulated an increase in the fraction of precipitation used by ET across the CONUS with declines in Q for much of the eastern CONUS and mountainous watersheds across the western United States.
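The one-parameter Budyko curve referred to here is commonly written in Fu's form (assuming that is the variant used, which the abstract does not state explicitly):

\[
  \frac{ET}{P} \;=\; 1 + \frac{PET}{P}
  - \left[\,1 + \left(\frac{PET}{P}\right)^{\omega}\right]^{1/\omega},
  \qquad
  \frac{Q}{P} \;=\; 1 - \frac{ET}{P},
\]

so fitting ω from precipitation seasonality, soil water capacity, slope, and snow fraction fixes the full long-term partitioning of P into ET and Q.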
NASA Astrophysics Data System (ADS)
Lowe, Douglas; Topping, David; McFiggans, Gordon
2017-04-01
Gas to particle partitioning of atmospheric compounds occurs through disequilibrium mass transfer rather than through instantaneous equilibrium. However, it is common to treat only the inorganic compounds as partitioning dynamically whilst organic compounds, represented by the Volatility Basis Set (VBS), are partitioned instantaneously. In this study we implement a more realistic dynamic partitioning of organic compounds in a regional framework and assess impact on aerosol mass and microphysics. It is also common to assume condensed phase water is only associated with inorganic components. We thus also assess sensitivity to assuming all organics are hygroscopic according to their prescribed molecular weight. For this study we use WRF-Chem v3.4.1, focusing on anthropogenic dominated North-Western Europe. Gas-phase chemistry is represented using CBM-Z whilst aerosol dynamics are simulated using the 8-section MOSAIC scheme, including a 9-bin VBS treatment of organic aerosol. Results indicate that predicted mass loadings can vary significantly. Without gas phase ageing of higher volatility compounds, dynamic partitioning always results in lower mass loadings downwind of emission sources. The inclusion of condensed phase water in both partitioning models increases the predicted PM mass, resulting from a larger contribution from higher volatility organics, if present. If gas phase ageing of VBS compounds is allowed to occur in a dynamic model, this can often lead to higher predicted mass loadings, contrary to expected behaviour from a simple non-reactive gas phase box model. As descriptions of aerosol phase processes improve within regional models, the baseline descriptions of partitioning should retain the ability to treat dynamic partitioning of organics compounds. Using our simulations, we discuss whether derived sensitivities to aerosol processes in existing models may be inherently biased. This work was supported by the Natural Environment Research Council within the RONOCO (NE/F004656/1) and CCN-Vol (NE/L007827/1) projects.
Optimizing the Determination of Roughness Parameters for Model Urban Canopies
NASA Astrophysics Data System (ADS)
Huq, Pablo; Rahman, Auvi
2018-05-01
We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification, the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters: displacement height d, roughness length z_0, and friction velocity u_*. Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_*. The methodology has been verified against laboratory datasets of flow above model urban canopies.
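The profile being fitted is the standard logarithmic law for a rough-wall boundary layer (restated from the parameters named above):

\[
  U(z) \;=\; \frac{u_{*}}{\kappa}\,\ln\!\left(\frac{z - d}{z_{0}}\right),
\]

with κ ≈ 0.40 the von Kármán constant; the optimization searches over d, then reads z_0 and u_* off the intercept and slope of each generated profile.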
Non-Contact Surface Roughness Measurement by Implementation of a Spatial Light Modulator
Aulbach, Laura; Salazar Bloise, Félix; Lu, Min; Koch, Alexander W.
2017-01-01
The surface structure, especially the roughness, has a significant influence on numerous parameters, such as friction and wear, and therefore determines the quality of technical systems. In the last decades, a broad variety of surface roughness measurement methods has been developed. A destructive measurement procedure or the lack of feasibility of online monitoring are the crucial drawbacks of most of these methods. This article proposes a new non-contact method for measuring the surface roughness that is straightforward to implement and easy to extend to online monitoring processes. The key element is a liquid-crystal-based spatial light modulator, integrated in an interferometric setup. By varying the imprinted phase of the modulator, a correlation between the imprinted phase and the fringe visibility of an interferogram is measured, and the surface roughness can be derived. This paper presents the theoretical approach of the method and first simulation and experimental results for a set of surface roughnesses. The experimental results are compared with values obtained by an atomic force microscope and a stylus profiler. PMID:28294990
A rough set-based association rule approach implemented on a brand trust evaluation model
NASA Astrophysics Data System (ADS)
Liao, Shu-Hsien; Chen, Yin-Ju
2017-09-01
In commerce, businesses use branding to differentiate their product and service offerings from those of their competitors. The brand incorporates a set of product or service features that are associated with that particular brand name and identifies the product/service segmentation in the market. This study proposes a new data mining approach, rough set-based association rule induction, implemented on a brand trust evaluation model. The approach offers one way to deal with data uncertainty when analysing ratio-scale data, while creating predictive if-then rules that generalise data values to the retail region. As such, this study applies the algorithm to investigate brand trust recall for alcoholic beverages. Finally, a discussion and conclusions are presented, along with further managerial implications.
City traffic flow breakdown prediction based on fuzzy rough set
NASA Astrophysics Data System (ADS)
Yang, Xu; Da-wei, Hu; Bing, Su; Duo-jia, Zhang
2017-05-01
In city traffic management, traffic breakdown is a very important issue, which is defined as a speed drop of a certain amount within a dense traffic situation. In order to predict city traffic flow breakdown accurately, in this paper we propose a novel city traffic flow breakdown prediction algorithm based on the fuzzy rough set. Firstly, we illustrate the city traffic flow breakdown problem, in which three definitions are given, that is, 1) the pre-breakdown flow rate, 2) the rate, density, and speed of the traffic flow breakdown, and 3) the duration of the traffic flow breakdown. Moreover, we define a hazard function to represent the probability of the breakdown ending at a given time point. Secondly, as there are many redundant and irrelevant attributes in city traffic flow breakdown prediction, we propose an attribute reduction algorithm using the fuzzy rough set. Thirdly, we discuss how to predict the city traffic flow breakdown based on attribute reduction and an SVM classifier. Finally, experiments are conducted by collecting data from the I-405 Freeway, which is located in Irvine, California. Experimental results demonstrate that the proposed algorithm is able to achieve a lower average error rate of city traffic flow breakdown prediction.
Rough sets and Laplacian score based cost-sensitive feature selection.
Yu, Shenglong; Zhao, Hong
2018-01-01
Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Recently, most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features with locality preservation of Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects out a predetermined number of "good" features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms.
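For reference, the Laplacian score itself (He et al., 2005) can be computed in a few lines; the sketch below uses a dense heat-kernel k-NN affinity and illustrates the score alone, not the authors' combined rough-set criterion.

```python
import numpy as np

def laplacian_scores(X, k=5, t=1.0, eps=1e-12):
    """Laplacian score per feature; lower scores indicate better locality
    preservation. X has shape (n_samples, n_features)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    S = np.zeros((n, n))
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]               # k nearest neighbors
    for i in range(n):
        S[i, nn[i]] = np.exp(-d2[i, nn[i]] / t)           # heat-kernel weights
    S = np.maximum(S, S.T)                                # symmetrize the graph
    D = S.sum(axis=1)
    L = np.diag(D) - S                                    # graph Laplacian
    scores = []
    for f in X.T:
        f_t = f - (f @ D) / D.sum()                       # remove trivial component
        scores.append((f_t @ L @ f_t) / ((f_t ** 2 * D).sum() + eps))
    return np.array(scores)
```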
ROMI-3: Rough-Mill Simulator Version 3.0: User's Guide
Joel M. Weiss; R. Edward Thomas; R. Edward Thomas
2005-01-01
ROMI-3 Rough-Mill Simulator is a software package that simulates current industrial practices for rip-first and chop-first lumber processing. This guide shows the user how to set up and examine the results of simulations of current or proposed mill practices. ROMI-3 accepts cutting bills with as many as 600 combined solid and/or panel part sizes. Plots of processed...
NASA Astrophysics Data System (ADS)
Zanchi, Andrea; Zanchetta, Stefano; Balini, Marco; Ghassemi, Mohammad Reza
2014-05-01
The Lower-Middle Triassic Aghdarband Basin, NE Iran, consists of a strongly deformed arc-related marine succession deposited along the southern margin of Eurasia (Turan domain) in a highly mobile tectonic context. The marine deposits are unconformably covered by Upper Triassic continental beds, marking the Cimmerian collision of Iran with Eurasia. The Aghdarband Basin is a key area for the study of the Cimmerian events, as the Triassic units were severely folded and thrust a short time after the collision and were unconformably covered by the gently deformed Middle Jurassic succession which seals the Cimmerian structures. The Triassic deposits form a north-verging thrust stack interacting with an important left-lateral strike-slip shear zone exposed in the northernmost part of the basin. Transpressional structures such as strike-slip faults and vertical folds are here associated with high-angle reverse faults forming intricate positive flower structures. The systematic asymmetry of major and parasitic folds, as well as their geometrical features, indicates that they were generated in a left-lateral transpressional regime roughly coeval with thrust imbrication to the south, as a consequence of marked strain partitioning. The aim of this presentation is to describe in detail the deformational structures of the Aghdarband region, based on structural mapping and detailed original mesoscopic field analyses, building on the excellent work performed in the '70s by Ruttner (1991). Our work is focused on the pre-mid-Jurassic structures, which can be related to the final stages of the Cimmerian deformation resulting from the oblique collision of the Iranian microplate with the southern margin of Eurasia, the so-called Turan domain. We finally discuss the kinematic significance of the Late Triassic oblique convergence zone of Aghdarband in the frame of strain partitioning in transpressional deformation. Structural weakness favouring strain partitioning can be related to the inversion of syn-sedimentary faults active during the Triassic, resulting from the reactivation of previous Palaeozoic structural lineaments which characterize the Turan domain. A right-lateral reactivation of the main left-lateral fault zone followed during the Neogene and Quaternary as a consequence of the Arabia collision to the south.
Role of roughness parameters on the tribology of randomly nano-textured silicon surface.
Gualtieri, E; Pugno, N; Rota, A; Spagni, A; Lepore, E; Valeri, S
2011-10-01
This experimental work aims to contribute to the understanding of the relationship between surface roughness parameters and the tribological properties of lubricated surfaces; it is well known that these surface properties are closely related, but a complete comprehension of such correlations is still far from being reached. For this purpose, a mechanical polishing procedure was optimized in order to induce different, but well controlled, morphologies on Si(100) surfaces. The use of different abrasive papers and slurries enabled the formation of a wide spectrum of topographical irregularities (from the submicro- to the nano-scale) and a broad range of surface profiles. An AFM-based morphological and topographical campaign was carried out to characterize each silicon rough surface through a set of parameters. Samples were subsequently water-lubricated and tribologically characterized through ball-on-disk tribometer measurements. The wettability of each surface was also investigated by measuring the water droplet contact angle, which revealed a hydrophilic character for all the surfaces, even if no clear correlation with roughness emerged. Nevertheless, this observation is relevant here, as it allows us to exclude that the differences in surface profile affect lubrication. It is thus possible to link the dynamic friction coefficient of rough Si samples exclusively to the appropriate set of surface roughness parameters that exhaustively describe both height amplitude variations (Ra, Rdq) and profile periodicity (Rsk, Rku, Ic), which influence asperity-asperity interactions and hydrodynamic lift in different ways. For this reason they cannot be treated separately, but require a joint approach, through which it was possible to explain even counterintuitive results: the unexpected decrease of the friction coefficient with increasing Ra is explained by a concurrently larger increase of the kurtosis Rku.
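Most of the parameters named above follow directly from the height profile; here is a minimal sketch of the standard amplitude and slope definitions (the correlation length Ic is omitted) for a sampled 1-D profile:

```python
import numpy as np

def roughness_params(z, dx=1.0):
    """Standard roughness parameters of a 1-D profile z sampled at spacing dx."""
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                                  # reference to the mean line
    Rq = np.sqrt((z ** 2).mean())                     # RMS roughness
    return {
        "Ra": np.abs(z).mean(),                       # arithmetic mean deviation
        "Rq": Rq,
        "Rsk": (z ** 3).mean() / Rq ** 3,             # skewness of heights
        "Rku": (z ** 4).mean() / Rq ** 4,             # kurtosis of heights
        "Rdq": np.sqrt((np.gradient(z, dx) ** 2).mean()),  # RMS slope
    }
```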
Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F
2017-08-01
Comparability of topographical data of implant surfaces in the literature is low, and their clinical relevance is often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results, and to evaluate these data based on predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50 μm and 5 μm cut-off filters), respectively. Data were statistically compared by one-way ANOVA and the Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Depending on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the purely statistical approach, the categorizing evaluation resulted in many more similarities between the two methods. This study suggests reconsidering current approaches to the topographical evaluation of implant surfaces and further seeking proper experimental settings. Furthermore, the specific role of different roughness parameters in the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Gene Selection and Cancer Classification: A Rough Sets Based Approach
NASA Astrophysics Data System (ADS)
Sun, Lijun; Miao, Duoqian; Zhang, Hongyun
Identification of informative gene subsets responsible for discerning between available samples of gene expression data is an important task in bioinformatics. Reducts, from rough set theory, corresponding to minimal sets of genes essential for discerning samples, are an efficient tool for gene selection. Due to the computational complexity of the existing reduct algorithms, feature ranking is usually used to narrow down the gene space as a first step, and top-ranked genes are selected. In this paper, we define a novel criterion for scoring genes, based on the between-class difference in expression level and the gene's contribution to classification, and present an algorithm for generating all possible reducts from the informative genes. The algorithm takes the whole attribute set into account and finds short reducts with a significant reduction in computational complexity. An exploration of this approach on benchmark gene expression data sets demonstrates that it is successful in selecting highly discriminative genes, and the classification accuracy is impressive.
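A minimal sketch of the ranking step, scoring genes by the separation of their class-wise expression levels before reduct search; the signal-to-noise-style score below is an illustrative assumption, not the criterion defined in the paper:

    import numpy as np

    def gene_scores(X, y):
        """Score genes by between-class separation of expression levels.

        X: (n_samples, n_genes) expression matrix; y: binary class labels.
        Higher scores mark more discriminative genes. The exact score is
        an assumption standing in for the paper's criterion.
        """
        X0, X1 = X[y == 0], X[y == 1]
        diff = np.abs(X0.mean(axis=0) - X1.mean(axis=0))
        spread = X0.std(axis=0) + X1.std(axis=0) + 1e-12
        return diff / spread

    # Keep only top-ranked genes before searching for reducts, e.g.:
    # top = np.argsort(gene_scores(X, y))[::-1][:50]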
Entropy Based Feature Selection for Fuzzy Set-Valued Information Systems
NASA Astrophysics Data System (ADS)
Ahmed, Waseem; Sufyan Beg, M. M.; Ahmad, Tanvir
2018-06-01
In Set-valued Information Systems (SIS), several objects contain more than one value for some attributes. The tolerance relation used for handling SIS sometimes leads to loss of certain information. To surmount this problem, the fuzzy rough model was introduced. However, in some cases, SIS may contain real or continuous set-values. Therefore, the existing fuzzy rough model for handling information systems with fuzzy set-values needs some changes. In this paper, the Fuzzy Set-valued Information System (FSIS) is proposed and a fuzzy similarity relation for FSIS is defined. Yager's relative conditional entropy was studied to find the significance measure of a candidate attribute of FSIS. Later, using these significance values, three greedy forward algorithms are discussed for finding the reduct and relative reduct of the proposed FSIS. An experiment was conducted on a sample population of a real dataset, and a comparison of the classification accuracies of the proposed FSIS with the existing SIS and single-valued Fuzzy Information Systems was made, which demonstrated the effectiveness of the proposed FSIS.
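A hedged sketch of the greedy forward skeleton that such reduct algorithms share; the significance callable below stands in for an entropy-based measure (such as one built on Yager's relative conditional entropy) and is not the paper's exact definition:

    def greedy_reduct(attributes, significance):
        """Greedy forward selection of a reduct.

        significance(subset) returns a quality value for a list of
        attributes; it is a generic stand-in for the entropy-based
        measure. Attributes are added while they improve the measure.
        """
        selected, remaining = [], set(attributes)
        best = significance(selected)
        while remaining:
            gains = {a: significance(selected + [a]) for a in remaining}
            a_best = max(gains, key=gains.get)
            if gains[a_best] <= best:  # no candidate improves the measure
                break
            selected.append(a_best)
            remaining.remove(a_best)
            best = gains[a_best]
        return selected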
Effects of partitioning and scheduling sparse matrix factorization on communication and load balance
NASA Technical Reports Server (NTRS)
Venugopal, Sesh; Naik, Vijay K.
1991-01-01
A block-based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block-based method results in lower communication cost, whereas the wrap-mapped scheme gives better load balance.
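For orientation, the 'wrap mapped' baseline assigns matrix columns to processors in round-robin fashion; a toy sketch of the assignment rule (illustrative, not the paper's code):

    def wrap_owner(column, n_procs):
        """Round-robin ('wrap mapped') column-to-processor assignment."""
        return column % n_procs

    # Columns 0..7 on 3 processors -> owners [0, 1, 2, 0, 1, 2, 0, 1]
    print([wrap_owner(c, 3) for c in range(8)])

Round-robin spreads columns evenly, which is why it balances load well, but it scatters neighbouring columns across processors and thus pays the higher communication cost the paper measures.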
TriageTools: tools for partitioning and prioritizing analysis of high-throughput sequencing data.
Fimereli, Danai; Detours, Vincent; Konopka, Tomasz
2013-04-01
High-throughput sequencing is becoming a popular research tool but carries with it considerable costs in terms of computation time, data storage and bandwidth. Meanwhile, some research applications focusing on individual genes or pathways do not necessitate processing of a full sequencing dataset. Thus, it is desirable to partition a large dataset into smaller, manageable, but relevant pieces. We present a toolkit for partitioning raw sequencing data that includes a method for extracting reads that are likely to map onto pre-defined regions of interest. We show the method can be used to extract information about genes of interest from DNA or RNA sequencing samples in a fraction of the time and disk space required to process and store a full dataset. We report speedup factors between 2.6 and 96, depending on settings and samples used. The software is available at http://www.sourceforge.net/projects/triagetools/.
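A toy illustration of the general triage idea, retaining only reads that share short subsequences with a region of interest; the k-mer matching below is an assumed stand-in for the tool's selection step, not the TriageTools implementation:

    def kmers(seq, k=20):
        """All k-length substrings of a sequence."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def triage_reads(reads, region_seq, k=20, min_hits=1):
        """Keep reads sharing at least min_hits k-mers with the region."""
        index = kmers(region_seq, k)
        return [r for r in reads
                if sum(km in index for km in kmers(r, k)) >= min_hits]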
Anonymous quantum nonlocality.
Liang, Yeong-Cherng; Curchod, Florian John; Bowles, Joseph; Gisin, Nicolas
2014-09-26
We investigate the phenomenon of anonymous quantum nonlocality, which refers to the existence of multipartite quantum correlations that are nonlocal, in the sense of being Bell-inequality-violating, but where the nonlocality, due to the correlations' biseparability with respect to all bipartitions, is seemingly nowhere to be found. Such correlations can be produced by nonlocal collaboration involving definite subset(s) of parties, but to an outsider, the identity of these nonlocally correlated parties is completely anonymous. For all n≥3, we present an example of an n-partite quantum correlation exhibiting anonymous nonlocality derived from the n-partite Greenberger-Horne-Zeilinger state. An explicit biseparable decomposition of these correlations is provided for any partitioning of the n parties into two groups. Two applications of these anonymous Greenberger-Horne-Zeilinger correlations in the device-independent setting are discussed: multipartite secret sharing between any two groups of parties and bipartite quantum key distribution that is robust against nearly arbitrary leakage of information.
Sloma, Michael F; Mathews, David H
2016-12-01
RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. © 2016 Sloma and Mathews; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
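For orientation, the loop probabilities described follow the standard Boltzmann-ensemble relation (background notation, not restated from the paper):

    p(\ell) = \frac{1}{Z} \sum_{s \ni \ell} e^{-\Delta G(s)/RT},
    \qquad Z = \sum_{s} e^{-\Delta G(s)/RT},

where the first sum runs over secondary structures s containing the loop ℓ and the second over all structures; the partition function machinery evaluates both sums by dynamic programming without enumerating structures.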
NASA Astrophysics Data System (ADS)
Krieger, Ulrich; Marcolli, Claudia; Siegrist, Franziska
2015-04-01
The production of secondary organic aerosol (SOA) by gas-to-particle partitioning is generally represented by an equilibrium partitioning model. A key physical parameter which governs gas-particle partitioning is the pure component vapor pressure, which is difficult to measure for low- and semivolatile compounds. For typical atmospheric compounds such as citric acid or tartaric acid, vapor pressures reported in the literature differ by up to six orders of magnitude [Huisman et al., 2013]. Here, we report vapor pressures of a homologous series of polyethylene glycols (triethylene glycol to octaethylene glycol) determined by measuring the evaporation rate of single, levitated aerosol particles in an electrodynamic balance. We propose using these as a reference data set for validating different vapor pressure measurement techniques. With each addition of an (O-CH2-CH2) group the vapor pressure is lowered by about one order of magnitude, which makes it easy to detect the lower limit of vapor pressures accessible with a particular technique, down to a pressure of 10^-8 Pa at room temperature. Reference: Huisman, A. J., Krieger, U. K., Zuend, A., Marcolli, C., and Peter, T., Atmos. Chem. Phys., 13, 6647-6662, 2013.
NASA Astrophysics Data System (ADS)
Mehrishal, Seyedahmad; Sharifzadeh, Mostafa; Shahriar, Korosh; Song, Jae-Jon
2017-04-01
In relation to the shearing of rock joints, the precise and continuous evaluation of asperity interlocking, dilation, and basic friction properties has been the most important task in the modeling of shear strength. In this paper, in order to investigate these controlling factors, two types of limestone joint samples were prepared and CNL direct shear tests were performed on these joints under various shear conditions. One set of samples was travertine and the other was onyx marble; slickensided surfaces, surfaces ground to #80, and rough surfaces were tested. Direct shear experiments conducted on slickensided and ground surfaces of limestone indicated that, by increasing the applied normal stress under different shearing rates, the basic friction coefficient decreased. Moreover, in the shear tests under constant normal stress and shearing rate, the basic friction coefficient remained constant for the different contact sizes. The second series of direct shear experiments in this research was conducted on tension joint samples to evaluate the effect of surface roughness on the shear behavior of rough joints. This paper deals with dilation and roughness interlocking using a method that characterizes the surface roughness of the joint based on a fundamental combined surface roughness concept. The application of stress-dependent basic friction and quantitative roughness parameters in the continuous modeling of the shear behavior of rock joints is an important aspect of this research.
NASA Technical Reports Server (NTRS)
Joseph, A.T.; Lang, R.; O'Neill, P.E.; van der Velde, R.; Gish, T.
2008-01-01
A representative soil surface roughness parameterization needed for the retrieval of soil moisture from active microwave satellite observations is difficult to obtain through either in-situ measurements or remote sensing-based inversion techniques. Typically, for the retrieval of soil moisture, temporal variations in surface roughness are assumed to be negligible. Although previous investigations have suggested that this assumption might be reasonable for natural vegetation covers (Moran et al. 2002, Thoma et al. 2006), in-situ measurements over plowed agricultural fields (Callens et al. 2006) have shown that soil surface roughness can change considerably over time. This paper reports on the temporal stability of surface roughness effects on radar observations, and on soil moisture retrieved from radar observations collected once a week during a corn growth cycle (May 10th - October 2002). The data set employed was collected during the Optimizing Production Inputs for Economic and Environmental Enhancement (OPE3) field campaign covering the 2002 corn growth cycle, and consists of dual-polarized (HH and VV) L-band (1.6 GHz) radar data acquired at view angles of 15, 35, and 55 degrees. Cross-polarized L-band radar data were also collected as part of this experiment, but are not used in the analysis reported here. After accounting for vegetation effects on the radar observations, time-invariant optimum roughness parameters were determined using the Integral Equation Method (IEM) and radar observations acquired over bare soil and cropped conditions (the complete radar data set covers the entire corn growth cycle). The optimum roughness parameters, the soil moisture retrieval uncertainty, and the temporal distribution of retrieval errors and its relationship with weather conditions (e.g. rainfall and wind speed) have been analyzed. It is shown that over the corn growth cycle, temporal roughness variations due to weathering by rain are responsible for almost 50% of the soil moisture retrieval uncertainty, depending on the sensing configuration. The effects of surface roughness variations are found to be smallest for observations acquired at a view angle of 55 degrees and HH polarization. A possible explanation for this result is that at 55 degrees and HH polarization the effect of vertical surface height changes on the observed radar response is limited, because the microwaves travel parallel to the incident plane and as a result do not interact directly with vertically oriented soil structures.
Effect of Blade-surface Finish on Performance of a Single-stage Axial-flow Compressor
NASA Technical Reports Server (NTRS)
Moses, Jason J; Serovy, George K
1951-01-01
A set of modified NACA 5509-34 rotor and stator blades was investigated with rough-machined, hand-filed, and highly polished surface finishes over a range of weight flows at six equivalent tip speeds from 672 to 1092 feet per second to determine the effect of blade-surface finish on the performance of a single-stage axial-flow compressor. Surface-finish effects decreased with increasing compressor speed and with decreasing flow at a given speed. In general, finishing blade surfaces below the roughness that may be considered aerodynamically smooth on the basis of an admissible-roughness formula will have no effect on compressor performance.
NASA Technical Reports Server (NTRS)
Kenny, R. Jeremy; Casiano, Matthew; Fischbach, Sean; Hulka, James R.
2012-01-01
Liquid rocket engine combustion stability assessments are traditionally broken into three categories: dynamic stability, spontaneous stability, and rough combustion. This work focuses on comparing the spontaneous stability and rough combustion assessments for several liquid engine programs. The techniques used are those developed at Marshall Space Flight Center (MSFC) for the J-2X Workhorse Gas Generator program. Stability assessment data from the Integrated Powerhead Demonstrator (IPD), FASTRAC, and Common Extensible Cryogenic Engine (CECE) programs are compared against previously processed J-2X Gas Generator data. Prior metrics for spontaneous stability assessments are updated based on the compilation of all data sets.
Float polishing of optical materials.
Bennett, J M; Shaffer, J J; Shibano, Y; Namba, Y
1987-02-15
The float-polishing technique has been studied to determine its suitability for producing supersmooth surfaces on optical materials, yielding a roughness of <2 Å rms. An attempt was made to polish six different materials including fused quartz, Zerodur, and sapphire. The low surface roughness was achieved on fused quartz, Zerodur, and Corning experimental glass-ceramic materials, and a surface roughness of <1 Å rms was obtained on O-cut single-crystal sapphire. Presumably, similar surface finishes can also be obtained on CerVit and ULE quartz, which could not be polished satisfactorily in this set of experiments because of a mismatch between sample mounting and machine configuration.
Efficient bulk-loading of gridfiles
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Nicol, David M.
1994-01-01
This paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient and able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. Utilizing a sort of the entire data set, it creates a gridfile without incurring any overflows.
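A minimal sketch in the spirit of rectilinear bulk-loading, choosing grid boundaries from per-attribute quantiles so buckets receive roughly equal record counts; the paper's heuristic is more elaborate and targets bucket overflow directly:

    import numpy as np

    def rectilinear_boundaries(data, splits_per_dim):
        """Pick interior grid-line positions per attribute from quantiles.

        data: (n_records, n_attrs) array. Equal-frequency quantiles are
        an illustrative choice, not the paper's overflow-driven search.
        """
        boundaries = []
        for dim in range(data.shape[1]):
            qs = np.linspace(0, 1, splits_per_dim + 1)[1:-1]
            boundaries.append(np.quantile(data[:, dim], qs))
        return boundaries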
Henneberger, Luise; Goss, Kai-Uwe; Endo, Satoshi
2016-07-05
The in vivo partitioning behavior of ionogenic organic chemicals (IOCs) is of paramount importance for their toxicokinetics and bioaccumulation. Among other proteins, structural proteins including muscle proteins could be an important sorption phase for IOCs, because of their abundance in the bodies of humans and other animals and because of their polar nature. Binding data for IOCs to structural proteins are, however, severely limited. Therefore, in this study muscle protein-water partition coefficients (KMP/w) of 51 systematically selected organic anions and cations were determined experimentally. A comparison of the measured KMP/w with bovine serum albumin (BSA)-water partition coefficients showed that anionic chemicals sorb more strongly to BSA than to muscle protein (by up to 3.5 orders of magnitude), while cations sorb similarly to both proteins. Sorption isotherms of selected IOCs to muscle protein are linear (i.e., KMP/w is concentration independent), and KMP/w is only marginally influenced by pH value and salt concentration. Using the obtained data set of KMP/w, a polyparameter linear free energy relationship (PP-LFER) model was established. The derived equation fits the data well (R(2) = 0.89, RMSE = 0.29). Finally, it was demonstrated that the in vitro measured KMP/w values of this study have the potential to be used to evaluate tissue-plasma partitioning of IOCs in vivo.
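For background, PP-LFER models of this kind are typically written in the Abraham solvation-equation form (the general form is standard; the exact variant fitted in the study, e.g. with additional descriptors for ionic species, is not restated here):

    \log K_{\mathrm{MP/w}} = c + eE + sS + aA + bB + vV,

where E, S, A, B and V are solute descriptors (excess molar refraction, dipolarity/polarizability, hydrogen-bond acidity and basicity, and McGowan volume) and the lowercase coefficients are obtained by regression against the measured partition coefficients.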
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute a large data set of a cardiac model to a distributed memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of the computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to achieve optimal load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at a 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or simply taking the magnitude of the resulting negative coordinates, we were able to create 14 data sets of the same anatomy with different orientations and positions in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100, to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512-processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
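A minimal sketch of the ORB idea, recursively splitting weighted elements along the longest axis at the load median so both halves carry roughly equal computational load (an illustration, not the Blue Gene/L implementation):

    import numpy as np

    def orb(points, loads, n_parts):
        """Orthogonal recursive bisection of load-weighted points.

        points: (n, 3) element coordinates; loads: per-element load.
        Returns index arrays, one per partition; each split halves the
        remaining load along the currently longest axis.
        """
        def split(indices, parts):
            if parts == 1:
                return [indices]
            pts = points[indices]
            axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))
            order = indices[np.argsort(pts[:, axis])]
            cum = np.cumsum(loads[order])
            cut = np.searchsorted(cum, cum[-1] / 2)
            return (split(order[:cut], parts // 2)
                    + split(order[cut:], parts - parts // 2))

        return split(np.arange(len(points)), n_parts)

Because the cut positions depend on where the load sits along each axis, rotating or translating the anatomy changes the cuts, which is exactly the orientation sensitivity the study measures.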
NASA Astrophysics Data System (ADS)
Lau Sheng, Annie; Ismail, Izwan; Nur Aqida, Syarifah
2018-03-01
This study presents the effects of laser parameters on the surface roughness of laser-modified tool steel after thermal cyclic loading. A pulsed-mode Nd:YAG laser was used to perform the laser surface modification of AISI H13 tool steel samples. Samples were then subjected to thermal cyclic loading experiments, which involved alternate immersion in molten aluminium (800°C) and water (27°C) for 553 cycles. A full factorial design of experiments (DOE) was developed to perform the investigation. The factors for the DOE are the laser parameters, namely overlap rate (η), pulse repetition frequency (fPRF) and peak power (Ppeak), while the response is the surface roughness after thermal cyclic loading. Results indicate that the surface roughness of the laser-modified surface after thermal cyclic loading is significantly affected by the laser parameter settings.
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balance load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy five percent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivasseau, Vincent, E-mail: vincent.rivasseau@th.u-psud.fr; Tanasa, Adrian, E-mail: adrian.tanasa@ens-lyon.org
The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.
Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets.
Levnajić, Zoran; Mezić, Igor
2015-05-01
We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)], and is realized as a computational algorithms for visualization of periodic and quasi-periodic sets in the phase space. The complement of periodic phase space partition contains chaotic zone, and we show how to identify it. The range of method's applicability is illustrated using well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.
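For background, the Fourier (harmonic) time average underlying the method is, for an observable f, a map T and a frequency ω:

    f^{*}_{\omega}(x) = \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} e^{i 2\pi n \omega}\, f(T^{n}(x)).

Plotting f*_ω over a grid of initial conditions x yields the Fourier mesochronic plot; roughly speaking, the average converges to nonzero values on (quasi)periodic sets resonant with ω and vanishes in the chaotic zone.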
A Study of Energy Partitioning Using A Set of Related Explosive Formulations
NASA Astrophysics Data System (ADS)
Lieber, Mark; Foster, Joseph C., Jr.; Stewart, D. Scott
2011-06-01
Condensed phase high explosives convert potential energy stored in the electro-magnetic field structure of complex molecules to kinetic energy during the detonation process. This energy is manifest in the internal thermodynamic energy and the translational flow of the products. Historically, the explosive design problem has focused on intramolecular stoichiometry providing prompt reactions based on transport physics at the molecular scale. Modern material design has evolved to approaches that employ intermolecular ingredients to alter the spatial and temporal distribution of energy release. CHEETA has been used to produce data for a set of fictitious explosive formulations based on C-4 to study the partitioning of the available energy between internal and flow energy in the detonation. The equation of state information from CHEETA has been used in ALE3D to develop an understanding of the relationship between variations in the formulation parameters and the internal energy cycle in the products.
NASA Astrophysics Data System (ADS)
Huang, C. W.; Pockman, W.; Litvak, M. E.
2017-12-01
Although it is well-established that land cover change influences water and carbon cycles across different spatiotemporal scales, the impact of climate-driven mortality events on site energy and water balance, and subsequently on vegetation dynamics, is more variable among studies. In semi-arid ecosystems globally, mortality events following severe drought are increasingly common. We used long-term observations (i.e., from 2009 to present) in two piñon-juniper (i.e., Pinus edulis and Juniperus monosperma) woodlands located in central New Mexico, USA, to explore the consequences of mortality events in such water-stressed environments. We compared a piñon-juniper woodland site where girdling was used to mimic mortality of adult piñon (PJG) with a nearby untreated woodland site (PJC). Our primary goal is to disentangle the reduction in water loss via the biological pathway (i.e., leaf and sapwood area) introduced by the girdling manipulation from other effects contributing to the response of surviving trees, such as modifications in surface reflectivity (i.e., albedo and emissivity) and surface roughness, impacting the partitioning between components of both the energy and water balance at canopy level. To achieve this goal, we directly measured sap flux, environmental factors and ecosystem-atmosphere exchange of carbon, water and energy fluxes using eddy-covariance systems at both sites. We found that 1) for each component of the energy balance the difference between PJC and PJG was surprisingly negligible, such that the canopy-level surface temperature (i.e., both radiometric and aerodynamic temperature) remains nearly identical for the two sites; 2) the surface reflectivity and roughness are mainly dominated by the soil surface, especially when the foliage coverage in semi-arid regions is small; 3) the increase in soil evaporation after the girdling manipulation outcompetes the surviving trees for the use of water in the soil. These results suggest that the so-called 'water release hypothesis' may not hold in such water-stressed environments, and the surviving trees may become less resilient to further drought conditions, mainly due to the reduction in soil water availability. Keywords: drought resilience, tree mortality, partitioning in energy and water balance, water release hypothesis
Local Table Condensation in Rough Set Approach for Jumping Emerging Pattern Induction
NASA Astrophysics Data System (ADS)
Terlecki, Pawel; Walczak, Krzysztof
This paper extends the rough set approach to JEP induction based on the notion of a condensed decision table. The original transaction database is transformed to a relational form and patterns are induced by means of local reducts. The transformation employs an item aggregation obtained by coloring a graph that reflects conflicts among items. For efficiency reasons we propose to perform this preprocessing locally, i.e. at the transaction level, to achieve a higher dimensionality gain. A special maintenance strategy is also used to avoid graph rebuilds. Both the global and the local approach have been tested and discussed for dense and synthetically generated sparse datasets.
Modification of surface morphology of Ti6Al4V alloy manufactured by Laser Sintering
NASA Astrophysics Data System (ADS)
Draganovská, Dagmar; Ižariková, Gabriela; Guzanová, Anna; Brezinová, Janette; Koncz, Juraj
2016-06-01
The paper deals with the evaluation of the relation between roughness parameters of Ti6Al4V alloy produced by DMLS and modified by abrasive blasting. Two types of blasting abrasive were used - white corundum and Zirblast - at three levels of air pressure. The effect of pressure on the value of individual roughness parameters, and the influence of the blasting media on the parameters for samples blasted by white corundum and Zirblast, were evaluated by ANOVA. Based on the measured values, the correlation matrix was constructed and the statistical significance of the correlations between the monitored parameters was determined from it. The correlation coefficient was also computed.
Exploring KM Features of High-Performance Companies
NASA Astrophysics Data System (ADS)
Wu, Wei-Wen
2007-12-01
To respond to an increasingly competitive business environment, many companies emphasize the importance of knowledge management (KM). Exploring and learning from the KM features of high-performance companies is a favorable way to do this. However, finding the critical KM features of high-performance companies is a qualitative analysis problem. The rough set approach is suitable for handling this kind of problem because it is based on data-mining techniques that discover knowledge without rigorous statistical assumptions. Thus, this paper explored the KM features of high-performance companies using the rough set approach. The results show that high-performance companies stress the importance of both tacit and explicit knowledge, and consider incentives and evaluations essential to implementing KM.
NASA Astrophysics Data System (ADS)
Wei, Xiao-Ran; Zhang, Yu-He; Geng, Guo-Hua
2016-09-01
In this paper, we examined how to print hollow objects without infill via fused deposition modeling, one of the most widely used 3D-printing technologies, by partitioning the objects into shell parts. More specifically, we linked the partition to the exact cover problem. Given an input watertight mesh shape S, we developed region-growing schemes to derive a set of candidate parts whose inside surfaces are printable without support. We then employed Monte Carlo tree search over the candidate parts to obtain the optimal set cover. All possible candidate subsets of the exact cover were then obtained from the optimal set cover, and a bounded tree search was used to find the optimal exact cover. We oriented each shell part to the optimal position to guarantee that the inside surface was printed without support, while the outside surface was printed with minimum support. Our solution can be applied to a variety of models, closed-hollowed or semi-closed, with or without holes, as evidenced by experiments and performance evaluation of our proposed algorithm.
Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca
2011-01-01
A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I(SET)). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule, using topological indices, with the MlogP method. The efficiency and applicability of the I(SET) in calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds, of the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods for predicting log P.
Panagopoulos, Dimitri; Jahnke, Annika; Kierkegaard, Amelie; MacLeod, Matthew
2015-10-20
The sorption of cyclic volatile methyl siloxanes (cVMS) to organic matter has a strong influence on their fate in the aquatic environment. We report new measurements of the partition ratios between freshwater sediment organic carbon and water (KOC) and between Aldrich humic acid dissolved organic carbon and water (KDOC) for three cVMS, and for three polychlorinated biphenyls (PCBs) that were used as reference chemicals. Our measurements were made using a purge-and-trap method that employs benchmark chemicals to calibrate mass transfer at the air/water interface in a fugacity-based multimedia model. The measured log KOC of octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), and dodecamethylcyclohexasiloxane (D6) were 5.06, 6.12, and 7.07, and log KDOC were 5.05, 6.13, and 6.79. To our knowledge, our measurements for KOC of D6 and KDOC of D4 and D6 are the first reported. Polyparameter linear free energy relationships (PP-LFERs) derived from training sets of empirical data that did not include cVMS generally did not predict our measured partition ratios of cVMS accurately (root-mean-squared-error (RMSE) for logKOC 0.76 and for logKDOC 0.73). We constructed new PP-LFERs that accurately describe partition ratios for the cVMS as well as for other chemicals by including our new measurements in the existing training sets (logKOC RMSEcVMS: 0.09, logKDOC RMSEcVMS: 0.12). The PP-LFERs we have developed here should be further evaluated and perhaps recalibrated when experimental data for other siloxanes become available.
Dai, D; Barranco, F T; Illangasekare, T H
2001-12-15
Research on the use of partitioning and interfacial tracers has led to the development of techniques for estimating subsurface NAPL amount and NAPL-water interfacial area. Although these techniques have been utilized with some success at field sites, current application is limited largely to NAPL at residual saturation, such as for the case of post-remediation settings where mobile NAPL has been removed through product recovery. The goal of this study was to fundamentally evaluate partitioning and interfacial tracer behavior in controlled column-scale test cells for a range of entrapment configurations varying in NAPL saturation, with the results serving as a determinant of technique efficacy (and design protocol) for use with complexly distributed NAPLs, possibly at high saturation, in heterogeneous aquifers. Representative end members of the range of entrapment configurations observed under conditions of natural heterogeneity (an occurrence with residual NAPL saturation [discontinuous blobs] and an occurrence with high NAPL saturation [continuous free-phase LNAPL lens]) were evaluated. Study results indicated accurate prediction (using measured tracer retardation and equilibrium-based computational techniques) of NAPL amount and NAPL-water interfacial area for the case of residual NAPL saturation. For the high-saturation LNAPL lens, results indicated that NAPL-water interfacial area, but not NAPL amount (underpredicted by 35%), can be reasonably determined using conventional computational techniques. Underprediction of NAPL amount led to an erroneous prediction of NAPL distribution, as indicated by the NAPL morphology index. In light of these results, careful consideration should be given to technique design and critical assumptions before applying equilibrium-based partitioning tracer methodology to settings where NAPLs are complexly entrapped, such as in naturally heterogeneous subsurface formations.
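For context, estimates of this kind typically rest on the textbook equilibrium relation between partitioning-tracer retardation and NAPL saturation (background, not a formula restated from this paper):

    R = 1 + \frac{K_{nw}\, S_n}{1 - S_n},

where R is the retardation factor of the partitioning tracer relative to a conservative tracer, K_nw is the tracer's NAPL-water partition coefficient, and S_n is the NAPL saturation; a measured R with known K_nw thus yields S_n, and the study probes where this equilibrium assumption breaks down.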
Effect of video server topology on contingency capacity requirements
NASA Astrophysics Data System (ADS)
Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.
1996-03-01
Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
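The telephone-system blocking model referred to is commonly the Erlang-B formula; a short sketch of its numerically stable recursion (treating Erlang B as the model family is our assumption, and the numbers are illustrative):

    def erlang_b(erlangs, servers):
        """Blocking probability for an offered load (in Erlangs) on a
        number of servers (here, stream slots), via the recursion
        B(0) = 1, B(m) = E*B(m-1) / (m + E*B(m-1))."""
        b = 1.0
        for m in range(1, servers + 1):
            b = erlangs * b / (m + erlangs * b)
        return b

    # One server of 120 slots at load 100 vs. two partitions of 60 at 50:
    print(erlang_b(100, 120), erlang_b(50, 60))

The comparison illustrates the economies of scale analyzed in the paper: at equal utilization, a single large server blocks noticeably less often than partitioned half-size servers.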
Bezold, Franziska; Weinberger, Maria E; Minceva, Mirjana
2017-03-31
Tocopherols are a class of molecules with vitamin E activity. Among those, α-tocopherol is the most important vitamin E source in the human diet. The purification of tocopherols involving biphasic liquid systems can be challenging since these vitamins are poorly soluble in water. Deep eutectic solvents (DES) can be used to form water-free biphasic systems and have already proven applicable for centrifugal partition chromatography separations. In this work, a computational solvent system screening was performed using the predictive thermodynamic model COSMO-RS. Liquid-liquid equilibria of solvent systems composed of alkanes, alcohols and DES, as well as partition coefficients of α-tocopherol, β-tocopherol, γ-tocopherol, and σ-tocopherol in these biphasic solvent systems, were calculated. From the results the best-suited biphasic solvent system, namely heptane/ethanol/choline chloride-1,4-butanediol, was chosen, and a batch injection of a tocopherol mixture, mainly consisting of α- and γ-tocopherol, was performed using a centrifugal partition chromatography setup (SCPE 250-BIO). A separation factor of 1.74 was achieved for α- and γ-tocopherol. Copyright © 2017 Elsevier B.V. All rights reserved.
Minimum nonuniform graph partitioning with unrelated weights
NASA Astrophysics Data System (ADS)
Makarychev, K. S.; Makarychev, Yu S.
2017-12-01
We give a bi-criteria approximation algorithm for the Minimum Nonuniform Graph Partitioning problem, recently introduced by Krauthgamer, Naor, Schwartz and Talwar. In this problem, we are given a graph $G=(V,E)$ and $k$ numbers $\rho_1,\dots,\rho_k$. The goal is to partition $V$ into $k$ disjoint sets (bins) $P_1,\dots,P_k$ satisfying $|P_i| \le \rho_i |V|$ for all $i$, so as to minimize the number of edges cut by the partition. Our bi-criteria algorithm gives an $O(\sqrt{\log|V| \log k})$ approximation for the objective function in general graphs and an $O(1)$ approximation in graphs excluding a fixed minor. The approximate solution satisfies the relaxed capacity constraints $|P_i| \le (5+\varepsilon)\rho_i |V|$. This algorithm is an improvement upon the $O(\log|V|)$-approximation algorithm by Krauthgamer, Naor, Schwartz and Talwar. We extend our results to the case of 'unrelated weights' and to the case of 'unrelated $d$-dimensional weights'. A preliminary version of this work was presented at the 41st International Colloquium on Automata, Languages and Programming (ICALP 2014). Bibliography: 7 titles.
Al-Nawas, B; Groetz, K A; Goetz, H; Duschner, H; Wagner, W
2008-01-01
Test of favourable conditions for osseointegration with respect to optimum bone-implant contact (BIC) in a loaded animal model. The varied parameters were surface roughness and surface topography of commercially available dental implants. Thirty-two implants of six types of macro- and microstructure were included in the study (total 196). The different types were: minimally rough control: Brånemark machined Mk III; oxidized surface: TiUnite MkIII and MkIV; ZL Ticer; blasted and etched surface: Straumann SLA; rough control: titanium plasma sprayed (TPS). Sixteen beagle dogs were implanted with the whole set of the above implants. After a healing period of 8 weeks, implants were loaded for 3 months. For the evaluation of the BIC areas, adequately sectioned biopsies were visualized by subsurface scans with confocal laser scanning microscopy (CLSM). The primary statistical analysis testing BIC of the moderately rough implants (mean 56.1±13.0%) vs. the minimally rough and the rough controls (mean 53.9±11.2%) did not reveal a significant difference (P=0.57). Mean values of 50-70% BIC were found for all implant types. Moderately rough oxidized implants show a median BIC which is 8% higher than that of their minimally rough turned counterparts. The intraindividual difference between the TPS and the blasted and etched counterparts revealed no significant difference. The turned and the oxidized implants show median resonance frequency values [implant stability quotients (ISQ)] over 60; the non-self-tapping blasted and etched and TPS implants show median values below 60. In conclusion, the benefit of rough surfaces relative to minimally rough ones in this loaded animal model was confirmed histologically. The comparison of different surface treatment modalities revealed no significant differences between the modern moderately rough surfaces. Resonance frequency analysis seems to be influenced to a major extent by the transducer used, thus prohibiting the comparison of different implant systems.
Analysis of accuracy in photogrammetric roughness measurements
NASA Astrophysics Data System (ADS)
Olkowicz, Marcin; Dąbrowski, Marcin; Pluymakers, Anne
2017-04-01
Regarding permeability, one of the most important features of shale gas reservoirs is the effective aperture of cracks opened during hydraulic fracturing, both propped and unpropped. In a propped fracture, the aperture is controlled mostly by proppant size and its embedment, and fracture surface roughness has only a minor influence. In contrast, in an unpropped fracture the aperture is controlled by the fracture roughness and the wall displacement. To measure fracture surface roughness, we have used the photogrammetric method, since it is time- and cost-efficient. To estimate the accuracy of this method we compare the photogrammetric measurements with reference measurements taken with a White Light Interferometer (WLI). Our photogrammetric setup is based on a high-resolution 50 Mpx camera combined with a focus-stacking technique. The first step for photogrammetric measurements is to determine the optimal camera positions and lighting. We compare multiple scans of one sample, taken with different settings of lighting and camera positions, with the reference WLI measurement. The second step is to perform measurements of all studied fractures with the parameters that produced the best results in the first step. To compare photogrammetric and WLI measurements, we regrid both data sets onto a regular 10 μm grid and determine the best fit, followed by a calculation of the difference between the measurements. The first results of the comparison show that for 90% of measured points the absolute vertical distance between WLI and photogrammetry is less than 10 μm, while the mean absolute vertical distance is 5 μm. This proves that our setup can be used for fracture roughness measurements in shales.
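A minimal sketch of the comparison step, resampling scattered surface heights onto a common regular grid before differencing (function and parameter choices are illustrative):

    import numpy as np
    from scipy.interpolate import griddata

    def regrid(points_xy, heights, step=10e-6):
        """Resample scattered surface heights onto a regular grid."""
        (x0, y0), (x1, y1) = points_xy.min(axis=0), points_xy.max(axis=0)
        gx, gy = np.meshgrid(np.arange(x0, x1, step),
                             np.arange(y0, y1, step))
        return gx, gy, griddata(points_xy, heights, (gx, gy),
                                method='linear')

    # After regridding both data sets onto the same grid:
    # diff = z_photo - z_wli
    # print(np.nanmean(np.abs(diff)))  # mean absolute vertical distance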
Snyder, David A; Montelione, Gaetano T
2005-06-01
An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains" (locally well-defined regions that are not aligned in global superimpositions) complicates RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition, we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to partition the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.
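A toy version of the core-atom idea, ranking atoms by positional spread across the superimposed ensemble and keeping the well-ordered ones; the paper's structurally defined order parameter and kurtosis-based epsilon-value are more sophisticated than this sketch:

    import numpy as np

    def core_atoms(ensemble, cutoff=1.0):
        """Select well-ordered atoms from an NMR ensemble.

        ensemble: (n_models, n_atoms, 3) coordinates, assumed already
        superimposed. Atoms whose RMS deviation from their mean position
        is below cutoff (angstroms) form the core set; the threshold is
        an illustrative assumption.
        """
        mean = ensemble.mean(axis=0)
        spread = np.sqrt(((ensemble - mean) ** 2).sum(axis=2).mean(axis=0))
        return np.where(spread < cutoff)[0]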
Counterintuitive effects of substrate roughness on PDCs
NASA Astrophysics Data System (ADS)
Andrews, B. J.; Manga, M.
2012-12-01
We model dilute pyroclastic density currents (PDCs) using scaled, warm, particle-laden density currents in a 6 m long, 0.6 m wide, 1.8 m tall air-filled tank. In this set of experiments, we ran currents over substrates with characteristic roughness scales, hr, ranging over ~3 orders of magnitude: smooth, 250 μm sandpaper, and 0.1-, 1-, 2-, 5-, and 10-cm hemispheres. As substrate roughness increases, runout distance increases until a critical roughness height, hrc, is reached; further increases in roughness height decrease runout. The critical roughness height appears to be 0.25-0.5 htb, the thickness of the turbulent lower layer of the density currents. The dependence of runout on hr is most likely the result of increases in substrate roughness decreasing the average current velocity and converting that energy into increased turbulence intensity. Small values of hr thus result in increased runout, as sedimentation is inhibited by the increased turbulence intensity. At larger values of hr, current behavior is controlled by much larger decreases in average current velocity, even though sedimentation decreases. Scaling our experiments up to the size of real volcanic eruptions suggests that landscapes must have characteristic roughness hr > 10 m to reduce the runout of natural PDCs; smaller roughness scales can increase runout. Comparison of relevant bulk parameters (Reynolds number, densimetric and thermal Richardson numbers, excess buoyant thermal energy density) and turbulent parameters (Stokes and settling numbers) between our experiments and natural dilute PDCs indicates that we are accurately modeling at least the large-scale behaviors and dynamics of dilute PDCs.
Distribution of Diverse Escherichia coli between Cattle and Pasture.
NandaKafle, Gitanjali; Seale, Tarren; Flint, Toby; Nepal, Madhav; Venter, Stephanus N; Brözel, Volker S
2017-09-27
Escherichia coli is widely considered not to survive for extended periods outside the intestines of warm-blooded animals; however, recent studies demonstrated that E. coli strains maintain populations in soil and water without any known fecal contamination. The objective of this study was to investigate whether niche partitioning of E. coli occurs between cattle and their pasture. We attempted to clarify whether E. coli from bovine feces differs phenotypically and genotypically from isolates maintaining a population in pasture soil over winter. Soil, bovine fecal, and run-off samples were collected before and after the introduction of cattle to the pasture. Isolates (363) were genotyped by uidA and mutS sequences and phylogrouping, and evaluated for curli formation (Red, Dry, And Rough, or RDAR). Three types of clusters emerged, viz. bovine-associated clusters, clusters devoid of cattle isolates and representing isolates endemic to the pasture environment, and clusters with both. All isolates clustered with strains of E. coli sensu stricto, distinct from the cryptic species Clades I, III, IV, and V. The pasture soil endemic and bovine fecal populations had very different phylogroup distributions, indicating niche partitioning. The soil endemic population was largely comprised of phylogroup B1 and had a higher average RDAR score than other isolates. These results indicate the existence of environmental E. coli strains that are phylogenetically distinct from bovine fecal isolates, and that have the ability to maintain populations in the soil environment.
Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.
Cheng, Ching-Hsue; Liu, Wei-Xiang
2018-05-28
Population aging has become a worldwide phenomenon, causing many serious problems. Medical issues related to degenerative brain disease have gradually become a concern. Magnetic Resonance Imaging is one of the most advanced methods for medical imaging and is especially suitable for brain scans. From the literature, although automatic segmentation methods are less laborious and time-consuming, they are restricted to several specific types of images. In addition, hybrid segmentation techniques improve on the shortcomings of single segmentation methods. Therefore, this study proposed a hybrid segmentation method combined with a rough set classifier and the wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image processing method to enhance the accuracy of brain disease classification. In the first stage, this study used the proposed hybrid segmentation algorithms to segment the brain ROI (region of interest). In the second stage, the wavelet packet was used to conduct the image decomposition and calculate the feature values. In the final stage, the rough set classifier was utilized to identify the degenerative brain disease. For verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and to compare it with the TV-seg (total variation segmentation) algorithm, the Discrete Cosine Transform, and the listed classifiers. Overall, the results indicated that the proposed method outperforms the listed methods.
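A minimal sketch of the second-stage feature extraction, wavelet packet decomposition of a segmented ROI followed by per-subband energy features, using the PyWavelets library (the wavelet and depth are illustrative assumptions):

    import numpy as np
    import pywt

    def wavelet_packet_features(roi, wavelet='db4', level=2):
        """Energy of each wavelet packet subband of a 2-D ROI image."""
        wp = pywt.WaveletPacket2D(data=roi, wavelet=wavelet,
                                  mode='symmetric', maxlevel=level)
        return {node.path: float(np.sum(node.data ** 2))
                for node in wp.get_level(level)}

    # features = wavelet_packet_features(segmented_brain_roi)
    # The resulting subband energies are the feature values handed to
    # the rough set classifier in the final stage.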
Tourah, Anita; Moshaverinia, Alireza; Chee, Winston W
2014-02-01
Surface roughness and irregularities are important properties of dental investment materials that can affect the fit of a restoration. Whether setting under air pressure affects the surface irregularities of gypsum-bonded and phosphate-bonded investment materials is unknown. The purpose of this study was to investigate the effect of air pressure on the pore size and surface irregularities of investment materials immediately after pouring. Three dental investments, 1 gypsum-bonded investment and 2 phosphate-bonded investments, were investigated. They were vacuum mixed according to the manufacturers' recommendations, then poured into a ringless casting system. The prepared specimens were divided into 2 groups: one allowed to bench set and the other placed in a pressure pot at 172 kPa. After 45 minutes of setting, the rings were removed and the investments were cut at a right angle to the long axis with a diamond disk. The surfaces of the investments were steam cleaned, dried with an air spray, and observed with a stereomicroscope. A profilometer was used to evaluate the surface roughness (μm) of the castings. The number of surface pores was counted for 8 specimens from each group, and the means and standard deviations were reported. Two-way ANOVA was used to compare the data. Specimens that set under atmospheric air pressure had a significantly higher number of pores than specimens that set under increased pressure (P<.05). No statistically significant differences in surface roughness were found (P=.078). Also, no significant difference was observed among the 3 different types of materials tested (P>.05). Specimens set under positive pressure in a pressure chamber presented fewer surface bubbles than specimens set under atmospheric pressure. Positive pressure is effective and, therefore, is recommended for both gypsum-bonded and phosphate-bonded investment materials. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Lin, Zhenyi; Li, Wei; Gatebe, Charles; Poudyal, Rajesh; Stamnes, Knut
2016-02-20
An optimized discrete-ordinate radiative transfer model (DISORT3) with a pseudo-two-dimensional bidirectional reflectance distribution function (BRDF) is used to simulate and validate ocean glint reflectances at an infrared wavelength (1036 nm) by matching model results with a complete set of BRDF measurements obtained from the NASA cloud absorption radiometer (CAR) deployed on an aircraft. The surface roughness is then obtained through a retrieval algorithm and is used to extend the simulation into the visible spectral range where diffuse reflectance becomes important. In general, the simulated reflectances and surface roughness information are in good agreement with the measurements, and the diffuse reflectance in the visible, ignored in current glint algorithms, is shown to be important. The successful implementation of this new treatment of ocean glint reflectance and surface roughness in DISORT3 will help improve glint correction algorithms in current and future ocean color remote sensing applications.
Quantitative three-dimensional ice roughness from scanning electron microscopy
NASA Astrophysics Data System (ADS)
Butterfield, Nicholas; Rowe, Penny M.; Stewart, Emily; Roesel, David; Neshyba, Steven
2017-03-01
We present a method for inferring surface morphology of ice from scanning electron microscope images. We first develop a novel functional form for the backscattered electron intensity as a function of ice facet orientation; this form is parameterized using smooth ice facets of known orientation. Three-dimensional representations of rough surfaces are retrieved at approximately micrometer resolution using Gauss-Newton inversion within a Bayesian framework. Statistical analysis of the resulting data sets permits characterization of ice surface roughness with a much higher statistical confidence than previously possible. A survey of results in the range -39°C to -29°C shows that characteristics of the roughness (e.g., Weibull parameters) are sensitive not only to the degree of roughening but also to the symmetry of the roughening. These results suggest that roughening characteristics obtained by remote sensing and in situ measurements of atmospheric ice clouds can potentially provide more facet-specific information than has previously been appreciated.
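As a rough illustration of the statistical step, the Weibull parameters the abstract mentions can be obtained with a standard maximum-likelihood fit; the sketch below uses synthetic stand-in values, since the real inputs would be the retrieved micrometer-scale surface data:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for a retrieved roughness sample (e.g., local facet tilts).
roughness = np.random.default_rng(0).weibull(1.5, size=10_000)

# Two-parameter Weibull fit with the location fixed at zero.
shape, loc, scale = stats.weibull_min.fit(roughness, floc=0)
print(f"Weibull shape={shape:.2f}, scale={scale:.2f}")
```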
Cleaning of optical surfaces by capacitively coupled RF discharge plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadav, P. K., E-mail: praveenyadav@rrcat.gov.in; Rai, S. K.; Nayak, M.
2014-04-24
In this paper, we report the cleaning of a carbon-capped molybdenum (Mo) thin film by an in-house developed radio frequency (RF) plasma reactor at different powers and exposure times. Carbon-capped Mo films were exposed to oxygen plasma for different durations at three different power settings, at a constant pressure. After each exposure, the thickness of the carbon layer and the roughness of the film were determined by hard x-ray reflectivity measurements. It was observed that most of the carbon film was removed in the first 15 minutes of exposure. A high density layer formed on top of the Mo film was also observed, and it was noted that this layer cannot be removed by successive exposures at different powers. A significant improvement in interface roughness, with a slight improvement in top film roughness, was observed. The surface roughness of the exposed and unexposed samples was also confirmed by atomic force microscopy measurements.
Pirkle, Catherine M; Wu, Yan Yan; Zunzunegui, Maria-Victoria; Gómez, José Fernando
2018-01-01
Objective Conceptual models underpinning much epidemiological research on ageing acknowledge that environmental, social and biological systems interact to influence health outcomes. Recursive partitioning is a data-driven approach that allows for concurrent exploration of distinct mixtures, or clusters, of individuals that have a particular outcome. Our aim is to use recursive partitioning to examine risk clusters for metabolic syndrome (MetS) and its components, in order to identify vulnerable populations. Study design Cross-sectional analysis of baseline data from a prospective longitudinal cohort called the International Mobility in Aging Study (IMIAS). Setting IMIAS includes sites from three middle-income countries—Tirana (Albania), Natal (Brazil) and Manizales (Colombia)—and two from Canada—Kingston (Ontario) and Saint-Hyacinthe (Quebec). Participants Community-dwelling male and female adults, aged 64–75 years (n=2002). Primary and secondary outcome measures We apply recursive partitioning to investigate social and behavioural risk factors for MetS and its components. Model-based recursive partitioning (MOB) was used to cluster participants into age-adjusted risk groups based on variability in: study site, sex, education, living arrangements, childhood adversities, adult occupation, current employment status, income, perceived income sufficiency, smoking status and weekly minutes of physical activity. Results 43% of participants had MetS. Using MOB, the primary partitioning variable was participant sex. Among women from middle-income sites, the predicted proportion with MetS ranged from 58% to 68%. Canadian women with limited physical activity had elevated predicted proportions of MetS (49%, 95% CI 39% to 58%). Among men, MetS ranged from 26% to 41% depending on childhood social adversity and education. Clustering for MetS components differed from the syndrome and across components. Study site was a primary partitioning variable for all components except HDL cholesterol. Sex was important for most components. Conclusion MOB is a promising technique for identifying disease risk clusters (eg, vulnerable populations) in modestly sized samples. PMID:29500203
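MOB itself is usually run via R packages such as partykit; as a loose, hypothetical analogue in Python, a shallow decision tree over the same kinds of covariates conveys the idea of data-driven risk clustering (variable names and placeholder data are illustrative only, not from the study):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical covariates: sex, study site, education, weekly physical activity.
X = rng.integers(0, 3, size=(2002, 4))
y = rng.integers(0, 2, size=2002)  # 1 = MetS present (placeholder labels)

# Shallow tree: each leaf is a candidate risk cluster.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
print(export_text(tree, feature_names=["sex", "site", "education", "activity"]))
```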
NASA Astrophysics Data System (ADS)
Parrish, Robert M.; Sherrill, C. David
2014-07-01
We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.
[Analytic methods for seed models with genotype x environment interactions].
Zhu, J
1996-01-01
Genetic models with genotype effect (G) and genotype x environment interaction effect (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can likewise be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components, and GmE into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2 and backcrosses. A set of parents, their reciprocal F1, and F2 seeds is applicable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components, and unbiased estimates of covariance components between two traits can also be obtained by this method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of estimated variance and covariance components and of predicted genetic effects, which can further be used in t-tests of the parameters. Unbiasedness and efficiency for estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.
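Collected in display form, the nested partition described above reads:

```latex
\begin{align*}
G  &= G_0 + C + G_m,    & G_0  &= A + D,   & G_m  &= A_m + D_m,\\
GE &= G_0E + CE + G_mE, & G_0E &= AE + DE, & G_mE &= A_mE + D_mE.
\end{align*}
```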
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series, and casts estimation of a generating partition as the minimization of this objective. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective; the difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
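The algorithm alternates symbol assignments with updates of the reconstruction values so that the discrepancy never increases. A heavily simplified, k-means-flavoured sketch of that alternation on delay vectors (not the authors' code; `dim`, `lag`, and the Euclidean discrepancy are assumptions) is:

```python
import numpy as np

def symbolize(x, dim=2, lag=1, n_symbols=4, iters=50, seed=0):
    """Alternate nearest-centre symbol assignment and centre updates on
    delay vectors; each step is non-increasing in within-cluster discrepancy."""
    n = len(x) - (dim - 1) * lag
    V = np.stack([x[i:i + n] for i in range(0, dim * lag, lag)], axis=1)
    centres = V[np.random.default_rng(seed).choice(n, n_symbols, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(V[:, None, :] - centres[None, :, :], axis=2)
        s = d.argmin(axis=1)  # nearest-neighbour symbol assignment
        for k in range(n_symbols):
            if np.any(s == k):
                centres[k] = V[s == k].mean(axis=0)
    return s
```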
Partitioning Ocean Wave Spectra Obtained from Radar Observations
NASA Astrophysics Data System (ADS)
Delaye, Lauriane; Vergely, Jean-Luc; Hauser, Daniele; Guitton, Gilles; Mouche, Alexis; Tison, Celine
2016-08-01
2D wave spectra of ocean waves can be partitioned into several wave components to better characterize the scene. We present here two methods of component detection: one based on a watershed algorithm and the other based on a Bayesian approach. We tested both methods on a set of simulated SWIM data; SWIM is the Ku-band real aperture radar embarked on the CFOSAT (China-France Oceanography Satellite) mission, whose launch is planned for mid-2018. We present the results and the limits of both approaches and show that the Bayesian method can also be applied to other kinds of wave spectra observations, such as those obtained with the radar KuROS, an airborne radar wave spectrometer.
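For the watershed-based detector, a minimal sketch of how a 2D wave spectrum could be split into components (using scikit-image; the peak spacing and energy floor are assumed tuning parameters, not values from the paper):

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def partition_wave_spectrum(E, min_distance=5, floor=0.05):
    """Label wave systems in a 2D spectrum E: seed one marker per spectral
    peak, then run watershed on the inverted spectrum within an energy mask."""
    markers = np.zeros(E.shape, dtype=int)
    for i, (r, c) in enumerate(peak_local_max(E, min_distance=min_distance), 1):
        markers[r, c] = i
    return watershed(-E, markers, mask=E > floor * E.max())
```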
Determining the Effect of Material Hardness During the Hard Turning of AISI4340 Steel
NASA Astrophysics Data System (ADS)
Kambagowni, Venkatasubbaiah; Chitla, Raju; Challa, Suresh
2018-05-01
In present-day manufacturing industries, hardened steels are most widely used in applications such as tool and mould design, which broadens the application range of hard turning of hardened steels. This study discusses the impact of workpiece hardness, feed, and depth of cut on arithmetic mean roughness (Ra), root mean square roughness (Rq), mean depth of roughness (Rz), and total roughness (Rt) during hard turning. Experiments were planned according to the Box-Behnken design and conducted on hardened AISI4340 steel at 45, 50 and 55 HRC with wiper ceramic cutting inserts. Cutting speed was kept constant during this study. Analysis of variance was used to determine the effects of the machining parameters. 3-D response surface plots based on RSM were used to establish the input-output relationships. The results indicated that feed rate is the most significant parameter for Ra, Rq and Rz, while hardness is the most significant parameter for Rt. Further, hardness shows its influence over all the surface roughness characteristics.
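The four amplitude parameters studied here have standard definitions over a mean-line-referenced profile; a short sketch (with an assumed five-segment split for Rz):

```python
import numpy as np

def roughness_params(z, n_segments=5):
    """Amplitude parameters from a filtered profile z (heights in micrometres)."""
    z = z - z.mean()                          # reference to the mean line
    Ra = np.mean(np.abs(z))                   # arithmetic mean roughness
    Rq = np.sqrt(np.mean(z ** 2))             # root mean square roughness
    Rt = z.max() - z.min()                    # total height of the profile
    segments = np.array_split(z, n_segments)  # sampling lengths for Rz
    Rz = np.mean([s.max() - s.min() for s in segments])
    return Ra, Rq, Rz, Rt
```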
ESTimating plant phylogeny: lessons from partitioning
de la Torre, Jose EB; Egan, Mary G; Katari, Manpreet S; Brenner, Eric D; Stevenson, Dennis W; Coruzzi, Gloria M; DeSalle, Rob
2006-01-01
Background While Expressed Sequence Tags (ESTs) have proven a viable and efficient way to sample genomes, particularly those for which whole-genome sequencing is impractical, phylogenetic analysis using ESTs remains difficult. Sequencing errors and orthology determination are the major problems when using ESTs as a source of characters for systematics. Here we develop methods to incorporate EST sequence information in a simultaneous analysis framework to address controversial phylogenetic questions regarding the relationships among the major groups of seed plants. We use an automated, phylogenetically derived approach to orthology determination called OrthologID to generate a phylogeny based on 43 process partitions, many of which are derived from ESTs, and examine several measures of support to assess the utility of EST data for phylogenies. Results A maximum parsimony (MP) analysis resulted in a single tree with relatively high support at all nodes in the tree despite rampant conflict among trees generated from the separate analysis of individual partitions. In a comparison of broader-scale groupings based on cellular compartment (i.e., chloroplast, mitochondrial, or nuclear) or function, only the nuclear partition tree (based largely on EST data) was found to be topologically identical to the tree based on the simultaneous analysis of all data. Despite topological conflict among the broader-scale groupings examined, only the tree based on morphological data showed statistically significant differences. Conclusion Based on the amount of character support contributed by EST data, which make up a majority of the nuclear data set, and the lack of conflict of the nuclear data set with the simultaneous analysis tree, we conclude that the inclusion of EST data does provide a viable and efficient approach to addressing phylogenetic questions within a parsimony framework on a genomic scale, if problems of orthology determination and potential sequencing errors can be overcome. In addition, approaches that examine conflict and support in a simultaneous analysis framework allow for a more precise understanding of the evolutionary history of individual process partitions and may be a novel way to understand functional aspects of different kinds of cellular classes of gene products. PMID:16776834
Evapotranspiration partitioning in a semi-arid African savanna using stable isotopes of water vapor
NASA Astrophysics Data System (ADS)
Soderberg, K.; Good, S. P.; O'Connor, M.; King, E. G.; Caylor, K. K.
2012-04-01
Evapotranspiration (ET) represents a major flux of water out of semi-arid ecosystems. Thus, understanding ET dynamics is central to the study of African savanna health and productivity. At our study site in central Kenya (Mpala Research Centre), we have been using stable isotopes of water vapor to partition ET into its constituent parts of plant transpiration (T) and soil evaporation (E). This effort includes continuous measurement (1 Hz) of δ2H and δ18O in water vapor using a portable water vapor isotope analyzer mounted on a 22.5 m eddy covariance flux tower. The flux tower has been collecting data since early 2010. The isotopic end-member δET is calculated using a Keeling plot approach, whereas δT and δE are measured directly via a leaf chamber and tubing buried in the soil, respectively. Here we report on two recent sets of measurements for partitioning ET in the Kenya Long-term Exclosure Experiment (KLEE) and a nearby grassland, combining leaf-level measurements of photosynthesis and water use with canopy-scale isotope measurements. In the KLEE experiment we compare ET partitioning in a 4 ha plot that has seen only cattle grazing for the past 15 years with an adjacent plot that has undergone grazing by both cattle and wild herbivores (antelope, elephants, giraffe). These results are compared with a detailed study of ET in an artificially watered grassland.
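The Keeling-plot and two-source mixing steps mentioned above are standard; a compact sketch (symbols follow the abstract, the regression form is the usual one):

```python
import numpy as np

def keeling_intercept(c, delta):
    """delta_ET: intercept of a regression of vapor isotope ratio on 1/concentration."""
    slope, intercept = np.polyfit(1.0 / np.asarray(c), np.asarray(delta), 1)
    return intercept

def transpiration_fraction(d_et, d_e, d_t):
    """Two-source mixing: fraction of ET contributed by transpiration."""
    return (d_et - d_e) / (d_t - d_e)
```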
Unsupervised segmentation of MRI knees using image partition forests
NASA Astrophysics Data System (ADS)
Marčan, Marija; Voiculescu, Irina
2016-03-01
Nowadays many people are affected by arthritis, a condition of the joints with limited prevention measures but various treatment options, the most radical of which is surgical. For surgery to be successful, it can make use of careful analysis of patient-based models generated from medical images, usually by manual segmentation. In this work we show how to automate the segmentation of a crucial and complex joint -- the knee. To achieve this goal we rely on our novel way of representing a 3D voxel volume as a hierarchical structure of partitions, which we have named the Image Partition Forest (IPF). The IPF contains several partition layers of increasing coarseness, with partitions nested across layers in the form of adjacency graphs. On the basis of a set of properties (size, mean intensity, coordinates) of each node in the IPF, we classify nodes into different features. Values indicating whether or not any particular node belongs to the femur or tibia are assigned through node filtering and node-based region growing. So far we have evaluated our method on 15 MRI knee images. Our unsupervised segmentation, compared against a hand-segmented gold standard, has achieved an average Dice similarity coefficient of 0.95 for the femur and 0.93 for the tibia, and an average symmetric surface distance of 0.98 mm for the femur and 0.73 mm for the tibia. The paper also discusses ways to introduce stricter morphological and spatial conditioning in the bone labelling process.
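The evaluation relies on the Dice similarity coefficient against hand-segmented masks; for reference, a one-function sketch of that metric:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks a and b."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```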
Making sense of metacommunities: dispelling the mythology of a metacommunity typology.
Brown, Bryan L; Sokol, Eric R; Skelton, James; Tornwall, Brett
2017-03-01
Metacommunity ecology has rapidly become a dominant framework through which ecologists understand the natural world. Unfortunately, persistent misunderstandings regarding metacommunity theory and the methods for evaluating hypotheses based on the theory are common in the ecological literature. Since its beginnings, four major paradigms (species sorting, mass effects, neutrality, and patch dynamics) have been associated with metacommunity ecology. The Big 4 have been misconstrued to represent the complete set of metacommunity dynamics. As a result, many investigators attempt to evaluate community assembly processes as strictly belonging to one of the Big 4 types, rather than embracing the full scope of metacommunity theory. The Big 4 were never intended to represent the entire spectrum of metacommunity dynamics and were rather examples of historical paradigms that fit within the new framework. We argue that perpetuation of the Big 4 typology hurts community ecology and we encourage researchers to embrace the full inference space of metacommunity theory. A related, but distinct issue is that the technique of variation partitioning is often used to evaluate the dynamics of metacommunities. This methodology has produced its own set of misunderstandings, some of which are directly a product of the Big 4 typology and others of which are simply the product of poor study design or statistical artefacts. However, variation partitioning is a potentially powerful technique when used appropriately, and we identify several strategies for its successful utilization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdy, R.
A hierarchical model consisting of quantitative structure-activity relationships (QSARs) based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model comprises QSARs based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included the octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizability, atomic radical superdelocalizability, the lowest unoccupied molecular orbital (LUMO) energy of the hypothesized intermediate nitrenium ion of primary aromatic amines, the difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thio-containing chemicals, so a QSAR for these chemicals was developed. The model modified after the test set correctly predicted 94% of the carcinogens in the test set. This model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the writing of this article.
Identifying finite-time coherent sets from limited quantities of Lagrangian data.
Williams, Matthew O; Rypina, Irina I; Rowley, Clarence W
2015-08-01
A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that "leak" from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, "data rich" test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or "mesh-free" methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.
Liang, Chao; Han, Shu-ying; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin
2014-11-01
A strategy to utilize neutral model compounds for lipophilicity measurement of ionizable basic compounds by reversed-phase high-performance liquid chromatography is proposed in this paper. The applicability of the novel protocol was justified by theoretical derivation. Meanwhile, linear relationships between the logarithm of apparent n-octanol/water partition coefficients (logKow'') and the logarithm of retention factors corresponding to a 100% aqueous mobile phase (logkw) were established for a basic training set, a neutral training set, and a mixed training set of the two. As proved in theory, the good linearity and external validation results indicated that the logKow''-logkw relationships obtained from a neutral model training set are reliable regardless of mobile phase pH. Afterwards, the above relationships were adopted to determine the logKow of harmaline, a weakly dissociable alkaloid. As far as we know, this is the first report of experimental logKow data for harmaline (logKow = 2.28 ± 0.08). Introducing neutral compounds into a basic model training set, or using neutral model compounds alone, is recommended for measuring the lipophilicity of weakly ionizable basic compounds, especially those with high hydrophobicity, for the advantages of more suitable model compound choices and convenient mobile phase pH control.
Mars radar clutter and surface roughness characteristics from MARSIS data
NASA Astrophysics Data System (ADS)
Campbell, Bruce A.; Schroeder, Dustin M.; Whitten, Jennifer L.
2018-01-01
Radar sounder studies of icy, sedimentary, and volcanic settings can be affected by reflections from surface topography surrounding the sensor nadir location. These off-nadir 'clutter' returns appear at similar time delays to subsurface echoes and complicate geologic interpretation. Additionally, broadening of the radar echo in delay by surface returns sets a limit on the detectability of subsurface interfaces. We use MARSIS 4 MHz data to study variations in the nadir and off-nadir clutter echoes, from about 300 km to 1000 km altitude, R, for a wide range of surface roughness. This analysis uses a new method of characterizing ionospheric attenuation to merge observations over a range of solar zenith angle and date. Mirror-like reflections should scale as R^-2, but the observed 4 MHz nadir echoes often decline by a somewhat smaller power-law factor because MARSIS on-board processing increases the number of summed pulses with altitude. Prior predictions of the contributions from clutter suggest a steeper decline with R than the nadir echoes, but in very rough areas the ratio of off-nadir returns to nadir echoes instead shows an increase of about R^1/2 with altitude. This is likely due in part to an increase in backscatter from the surface as the radar incidence angle at some round-trip time delay declines with increasing R. It is possible that nadir and clutter echo properties in other planetary sounding observations, including RIME and REASON flyby data for Europa, will vary in the same way with altitude, but there may be differences in the nature and scale of target roughness (e.g., icy versus rocky surfaces). We present global maps of the ionosphere- and altitude-corrected nadir echo strength, and of a 'clutter' parameter based on the ratio of off-nadir to nadir echoes. The clutter map offers a view of surface roughness at ∼75 m length scale, bridging the spatial-scale gap between SHARAD roughness estimates and MOLA-derived parameters.
Grant, Andrew; Grant, Gwyneth; Gagné, Jean; Blanchette, Carl; Comeau, Émilie; Brodeur, Guillaume; Dionne, Jonathon; Ayite, Alphonse; Synak, Piotr; Wroblewski, Jakub; Apanowitz, Cas
2001-01-01
The patient-centred electronic patient record enables retrospective analysis of practice patterns as one means to assist clinicians in adjusting and improving their practice. An interrogation of the data warehouse linking test use to Diagnostic Related Group (DRG) for one year's data of the Sherbrooke University Hospital showed that one-third of patients used two-thirds of these diagnostic tests. Using rough sets analysis, zones of repeated tests were demonstrated where results remained within stable limits. It was concluded that 30% of fluid and electrolyte testing was probably unnecessary. These findings led to an endorsement of changing the test request formats in the hospital information system from profiles to individual tests requiring justification.
Enhanced Trajectory Based Similarity Prediction with Uncertainty Quantification
2014-10-02
[Fragmentary record] The report describes a data-driven prognostics method that obtained the highest score in the PHM08 challenge for predicting the RUL of a turbofan engine (Saxena & Goebel, PHM08), together with a process for multi-regime health assessment. Multi-regime partitioning is illustrated by applying the k-means clustering algorithm to the "Turbofan Engine Degradation Simulation" data set.
Synthesis, Interdiction, and Protection of Layered Networks
2009-09-01
[Fragmentary record] Recovered fragments include: figure listings for an Al Qaeda network from the Sageman database and for interdiction resources versus closeness centrality; a description of partitioning a feasible set S (which may be a polyhedron, a set with discrete variables, a set with nonlinearities, or so on) into two mutually exclusive subsets; and a caveat that the network data are based on Dr. Sageman's 2004 publication and may be dated.
Measuring Constraint-Set Utility for Partitional Clustering Algorithms
NASA Technical Reports Server (NTRS)
Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato
2006-01-01
Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.
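One common reading of the informativeness measure is the fraction of constraints that the unconstrained algorithm's output violates; a sketch under that assumption (not the authors' code):

```python
def informativeness(constraints, labels):
    """Fraction of must-link ('ML') / cannot-link ('CL') constraints violated
    by a baseline clustering given as an array of cluster labels."""
    violated = 0
    for i, j, kind in constraints:
        same = labels[i] == labels[j]
        if (kind == "ML" and not same) or (kind == "CL" and same):
            violated += 1
    return violated / len(constraints)
```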
NASA Astrophysics Data System (ADS)
Juneja, Anurag; Brasseur, James G.
1999-10-01
Large-eddy simulation (LES) of the atmospheric boundary layer (ABL) using eddy viscosity subgrid-scale (SGS) models is known to poorly predict mean shear at the first few grid cells near the ground, a rough surface with no viscous sublayer. It has recently been shown that convective motions carry this localized error vertically to infect the entire ABL, and that the error is more a consequence of the SGS model than grid resolution in the near-surface inertial layer. Our goal was to determine what first-order errors in the predicted SGS terms lead to spurious expectation values, and what basic dynamics in the filtered equation for resolved scale (RS) velocity must be captured by SGS models to correct the deficiencies. Our analysis is of general relevance to LES of rough-wall high Reynolds number boundary layers, where the essential difficulty in the closure is the importance of the SGS acceleration terms, a consequence of necessary under-resolution of relevant energy-containing motions at the first few grid levels, leading to potentially strong couplings between the anisotropies in resolved velocity and predicted SGS dynamics. We analyze these two issues (under-resolution and anisotropy) in the absence of a wall using two direct numerical simulation datasets of homogeneous turbulence with very different anisotropic structure characteristic of the near-surface ABL: shear- and buoyancy-generated turbulence. We uncover three important issues which should be addressed in the design of SGS closures near rough walls and we provide a priori tests for the SGS model. First, we identify a strong spurious coupling between the anisotropic structure of the resolved velocity field and predicted SGS dynamics which can create a feedback loop to incorrectly enhance certain components of the predicted velocity field. Second, we find that eddy viscosity and "similarity" SGS models do not contain enough degrees of freedom to capture, at a sufficient level of accuracy, both RS-SGS energy flux and SGS-RS dynamics. Third, to correctly capture pressure transport near a wall, closures must be made more flexible to accommodate proper partitioning between SGS stress divergence and SGS pressure gradient.
NASA Astrophysics Data System (ADS)
Mehri, Tahar; Kemppinen, Osku; David, Grégory; Lindqvist, Hannakaisa; Tyynelä, Jani; Nousiainen, Timo; Rairoux, Patrick; Miffre, Alain
2018-05-01
Our understanding of the contribution of mineral dust to the Earth's radiative budget is limited by the complexity of these particles, which present a wide range of sizes, are highly irregularly shaped, and are present in the atmosphere in the form of particle mixtures. To address the spatial distribution of mineral dust and atmospheric dust mass concentrations, polarization lidars are nowadays frequently used, with partitioning algorithms that allow the contribution of mineral dust to be discerned in two- or three-component particle external mixtures. In this paper, we investigate the dependence of the retrieved dust backscattering (βd) vertical profiles on the dust particle size and shape. For that, new light-scattering numerical simulations are performed on real atmospheric mineral dust particles of determined mineralogy (CAL, DOL, AGG, SIL), derived from stereogrammetry (stereo-particles), with potential surface roughness, and are compared to the widely used spheroidal mathematical shape model. For each dust shape model (smooth stereo-particles, rough stereo-particles, spheroids), the dust depolarization, backscattering Ångström exponent, and lidar ratio are computed for two size distributions representative of mineral dust after long-range transport. As an output, two Saharan dust outbreaks involving mineral dust in two-, then three-component particle mixtures are studied with the Lyon (France) UV-VIS polarization lidar. While the dust size matters most, under certain circumstances βd can vary by approximately 67% when real dust stereo-particles are used instead of spheroids, corresponding to variations in the dust backscattering coefficient as large as 2 Mm⁻¹·sr⁻¹. Moreover, the influence of surface roughness in polarization lidar retrievals is for the first time discussed. Finally, dust mass-extinction conversion factors (ηd) are evaluated for each assigned shape model and dust mass concentrations are retrieved from polarization lidar measurements. From spheroids to stereo-particles, ηd increases by about 30%. We believe these results may be useful for our understanding of the spatial distribution of mineral dust contained in an aerosol external mixture and to better quantify dust mass concentrations from polarization lidar experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Satake, Shin-ichi; Kanamori, Hiroyuki; Kunugi, Tomoaki
2007-02-01
We have developed a parallel algorithm for micro-digital-holographic particle-tracking velocimetry. The algorithm is used in (1) numerical reconstruction of a particle image computed from a digital hologram and (2) searching for particles. The numerical reconstruction from the digital hologram makes use of the Fresnel diffraction equation and the FFT (fast Fourier transform), whereas the particle search algorithm looks for local maximum gradation in a reconstruction field represented by a 3D matrix. To achieve high performance computing for both calculations (reconstruction and particle search), two memory partitions are allocated to the 3D matrix. In this matrix, the reconstruction part consists of horizontally placed 2D memory partitions on the x-y plane for the FFT, whereas the particle search part consists of vertically placed 2D memory partitions set along the z axis. Consequently, scalability is obtained in proportion to the number of processor elements, where the benchmarks are carried out for parallel computation on an SGI Altix machine.
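The reconstruction step pairs the Fresnel diffraction equation with FFTs; a minimal single-plane sketch using the Fresnel transfer-function method (parameters such as the pixel pitch `dx` and distance `z` are illustrative):

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, dx, z):
    """Reconstruct a complex field at distance z from a digital hologram
    via the Fresnel transfer function applied in the Fourier domain."""
    n, m = hologram.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(m, d=dx), np.fft.fftfreq(n, d=dx))
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```

Stacking such reconstructions over a range of z values yields the 3D matrix in which local intensity maxima are then sought.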
Shell use and partitioning of two sympatric species of hermit crabs on a tropical mudflat
NASA Astrophysics Data System (ADS)
Teoh, Hong Wooi; Chong, Ving Ching
2014-02-01
Shell use and partitioning of two sympatric hermit crab species (Diogenes moosai and Diogenes lopochir), as determined by shell shape, size and availability, were examined from August 2009 to March 2011 on a tropical mudflat (Malaysia). Shells of 14 gastropod species were used, but > 85% comprised shells of Cerithidea cingulata, Nassarius cf. olivaceus, Nassarius jacksonianus, and Thais malayensis. Shell partitioning between hermit crab species, sexes, and developmental stages was evident from occupied shells of different species, shapes, and sizes. The extreme bias in shell use patterns by males and females of both hermit crab species suggests that shell shape, which depends on shell species, is the major determinant of shell use. The hermit crab must however fit well into the shell, so compatibility between crab size and shell size becomes crucial. Although shell availability possibly influenced shell use and hermit crab distribution, this is not critical in a tropical setting of high gastropod diversity and abundance.
Partitioning an object-oriented terminology schema.
Gu, H; Perl, Y; Halper, M; Geller, J; Kuo, F; Cimino, J J
2001-07-01
Controlled medical terminologies are increasingly becoming strategic components of various healthcare enterprises. However, the typical medical terminology can be difficult to exploit due to its extensive size and high density. The schema of a medical terminology offered by an object-oriented representation is a valuable tool in providing an abstract view of the terminology, enhancing comprehensibility and making it more usable. However, schemas themselves can be large and unwieldy. We present a methodology for partitioning a medical terminology schema into manageably sized fragments that promote increased comprehension. Our methodology has a refinement process for the subclass hierarchy of the terminology schema. The methodology is carried out by a medical domain expert in conjunction with a computer. The expert is guided by a set of three modeling rules, which guarantee that the resulting partitioned schema consists of a forest of trees. This makes it easier to understand and consequently use the medical terminology. The application of our methodology to the schema of the Medical Entities Dictionary (MED) is presented.
NASA Astrophysics Data System (ADS)
Gibbard, Philip L.; Lewin, John
2016-11-01
We review the historical purposes and procedures for stratigraphical division and naming within the Quaternary, and summarize the current requirements for formal partitioning through the International Commission on Stratigraphy (ICS). A raft of new data and evidence has impacted traditional approaches: quasi-continuous records from ocean sediments and ice cores, new numerical dating techniques, and alternative macro-models, such as those provided through Sequence Stratigraphy and Earth-System Science. The practical usefulness of division remains, but there is now greater appreciation of complex Quaternary detail and the modelling of time continua, the latter also extending into the future. There are problems both of commission (what is done, but could be done better) and of omission (what gets left out) in partitioning the Quaternary. These include the challenge set by the use of unconformities as stage boundaries, how to deal with multiphase records in ocean and terrestrial sediments, what happened at the 'Early-Mid- (Middle) Pleistocene Transition', dealing with trends that cross phase boundaries, and the current controversial focus on how to subdivide the Holocene and formally define an 'Anthropocene'.
Huhn, Carolin; Pyell, Ute
2008-07-11
It is investigated whether those relationships derived within an optimization scheme developed previously to optimize separations in micellar electrokinetic chromatography can be used to model effective electrophoretic mobilities of analytes strongly differing in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.
Estimating Grass-Soil Bioconcentration of Munitions Compounds from Molecular Structure.
Torralba Sanchez, Tifany L; Liang, Yuzhen; Di Toro, Dominic M
2017-10-03
A partitioning-based model is presented to estimate the bioconcentration of five munitions compounds and two munition-like compounds in grasses. The model uses polyparameter linear free energy relationships (pp-LFERs) to estimate the partition coefficients between soil organic carbon and interstitial water and between interstitial water and the plant cuticle, a lipid-like plant component. Inputs for the pp-LFERs are a set of numerical descriptors, computed from molecular structure only, that characterize the molecular properties determining the interaction with soil organic carbon, interstitial water, and plant cuticle. The model is validated by predicting concentrations measured in the whole plant during independent uptake experiments, with a root-mean-square error (log predicted plant concentration − log observed plant concentration) of 0.429. This highlights the dominant role of partitioning between the exposure medium and the plant cuticle in the bioconcentration of these compounds. The pp-LFERs can be used to assess the environmental risk of munitions compounds and munition-like compounds using only their molecular structure as input.
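The partitioning chain itself is simple once the two coefficients are known; a hypothetical sketch (the paper's pp-LFERs would supply log Koc and the cuticle-water log Kcw from molecular descriptors; names and units here are assumptions):

```python
def plant_concentration(c_soil, f_oc, log_koc, log_kcw):
    """Soil -> interstitial water -> cuticle equilibrium partitioning chain."""
    c_water = c_soil / (f_oc * 10 ** log_koc)  # interstitial water concentration
    return c_water * 10 ** log_kcw             # cuticle (plant lipid) concentration
```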
Olafsson, Valur T; Noll, Douglas C; Fessler, Jeffrey A
2018-02-01
Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.
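For concreteness, a separate-penalty quadratic roughness term of the kind described can be written with first differences and independent regularization parameters for the two parts (a sketch; the paper's penalty matrix and FFT-based impulse-response calculation are not reproduced here):

```python
import numpy as np

def roughness_penalty(x, beta_real, beta_imag):
    """Quadratic first-difference roughness penalty applied separately to the
    real and imaginary parts of a complex image x."""
    def quad(u):
        return np.sum(np.diff(u, axis=0) ** 2) + np.sum(np.diff(u, axis=1) ** 2)
    return beta_real * quad(x.real) + beta_imag * quad(x.imag)
```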
Investigation of roughing machining simulation by using visual basic programming in NX CAM system
NASA Astrophysics Data System (ADS)
Hafiz Mohamad, Mohamad; Nafis Osman Zahid, Muhammed
2018-03-01
This paper outlines a simulation study investigating the characteristics of roughing machining simulation in 4th-axis milling processes by utilizing Visual Basic programming in the NX CAM system. The selection and optimization of cutting orientation in rough milling operations is critical in 4th-axis machining. The main purpose of the roughing operation is to approximately shape the machined parts into finished form by removing the bulk of material from workpieces. In this paper, the simulations are executed by manipulating a set of different cutting orientations to generate the estimated volume removed from the machined parts. The cutting orientation with the highest volume removal is denoted as the optimum value and chosen to execute the roughing operation. To run the simulation, customized software was developed to assist the routines. Operation build-up instructions in the NX CAM interface are translated into programming code via advanced tools available in Visual Basic Studio. The code is customized and equipped with decision-making tools to run and control the simulations, and it permits integration with independent program files to execute specific operations. This paper aims to discuss the simulation program and to identify optimum cutting orientations for roughing processes. The output of this study will broaden the simulation routines performed in NX CAM systems.
A study on thick plate forming for hollow-partitioned steam turbine nozzle
NASA Astrophysics Data System (ADS)
Kwak, Bong-Seok; Kang, Byeong-Kwon; Yoon, Mahn-Jung; Jeon, Jae-Young; Kang, Beom-Soo; Ku, Tae-Wan
2017-10-01
In thermal and nuclear power plants, the steam turbine system used to generate electric power is composed of turbine rotor assemblies for the high-pressure (HP) and low-pressure (LP) turbines, its main shaft, turbine nozzle diaphragms, and so forth. In particular, the turbine nozzle diaphragm consists of many turbine nozzles with three-dimensional asymmetric shapes and complicated surface curvatures at each turbine stage. In this study, the main goal is tool design and fabrication, and their application to thick plate cold forming, so as to replace the solid-type turbine nozzle manufactured by a series of metal forging processes with a hollow-partitioned one obtained from cold forming. The hollow-partitioned turbine nozzle (stator) has asymmetric curvature contours, so it is hard to adopt a series of draw beads or a blank holder. Thus, the thick plate blank experiences unstable and non-uniform contact on the tool surfaces in the die cavity. To ease this unstable positioning restraint in thick plate forming, the shoulder angles of the forming punch and the lower die are selected as the geometric process parameters to control the blank position in the die cavity. The thick plate material is 409L stainless steel (SUS409L) with an initial thickness of 5.00 mm, a length of about 980.00 mm, and a width of roughly 372.60 mm. Uniaxial tensile tests on the initial blank material of SUS409L are performed to verify the mechanical properties, including the anisotropic characteristics, and finite element simulations are carried out using ABAQUS Explicit/Implicit. As the summarized results, suitable shoulder angle combinations of the lower die and the punch were verified as (30°, 90°) and (45°, 90°), and the transverse blank direction (TD) of the SUS409L thick plate was found to be well matched.
Frederiksen, Rikard; Boyer, Nicholas P; Nickle, Benjamin; Chakrabarti, Kalyan S; Koutalos, Yiannis; Crouch, Rosalie K; Oprian, Daniel; Cornwall, M Carter
2012-06-01
We report experiments designed to test the hypothesis that the aqueous solubility of 11-cis-retinoids plays a significant role in the rate of visual pigment regeneration. Therefore, we have compared the aqueous solubility and the partition coefficients in photoreceptor membranes of native 11-cis-retinal and an analogue retinoid, 11-cis 4-OH retinal, which has a significantly higher solubility in aqueous medium. We have then correlated these parameters with the rates of pigment regeneration and sensitivity recovery that are observed when bleached intact salamander rod photoreceptors are treated with physiological solutions containing these retinoids. We report the following results: (a) 11-cis 4-OH retinal is more soluble in aqueous buffer than 11-cis-retinal. (b) Both 11-cis-retinal and 11-cis 4-OH retinal have extremely high partition coefficients in photoreceptor membranes, though the partition coefficient of 11-cis-retinal is roughly 50-fold greater than that of 11-cis 4-OH retinal. (c) Intact bleached isolated rods treated with solutions containing equimolar amounts of 11-cis-retinal or 11-cis 4-OH retinal form functional visual pigments that promote full recovery of dark current, sensitivity, and response kinetics. However, rods treated with 11-cis 4-OH retinal regenerated on average fivefold faster than rods treated with 11-cis-retinal. (d) Pigment regeneration from recombinant and wild-type opsin in solution is slower when treated with 11-cis 4-OH retinal than with 11-cis-retinal. Based on these observations, we propose a model in which aqueous solubility of cis-retinoids within the photoreceptor cytosol can place a limit on the rate of visual pigment regeneration in vertebrate photoreceptors. We conclude that the cytosolic gap between the plasma membrane and the disk membranes presents a bottleneck for retinoid flux that results in slowed pigment regeneration and dark adaptation in rod photoreceptors.
Votano, Joseph R; Parham, Marc; Hall, L Mark; Hall, Lowell H; Kier, Lemont B; Oloff, Scott; Tropsha, Alexander
2006-11-30
Four modeling techniques, using topological descriptors to represent molecular structure, were employed to produce models of human serum protein binding (% bound) on a data set of 1008 experimental values carefully screened from publicly available sources. To our knowledge, this is the largest data set on human serum protein binding reported for QSAR modeling. The data were partitioned into a training set of 808 compounds and an external validation test set of 200 compounds. Partitioning was accomplished by clustering the compounds in a structure descriptor space, so that random sampling of 20% of the whole data set produced an external test set that is a good representative of the training set with respect to both structure and protein binding values. The four modeling techniques include multiple linear regression (MLR), artificial neural networks (ANN), k-nearest neighbors (kNN), and support vector machines (SVM). With the exception of the MLR model, the ANN, kNN, and SVM QSARs were ensemble models. Training set correlation coefficients and mean absolute errors ranged from r2=0.90 and MAE=7.6 for ANN to r2=0.61 and MAE=16.2 for MLR. Prediction results for the validation set yielded correlation coefficients and mean absolute errors ranging from r2=0.70 and MAE=14.1 for ANN to a low of r2=0.59 and MAE=18.3 for the SVM model. Structure descriptors that contribute significantly to the models are discussed and compared with those found in other published models. For the ANN model, structure descriptor trends with respect to their effects on predicted protein binding can assist the chemist in structure modification during the drug design process.
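The cluster-then-sample split described above can be sketched as follows (scikit-learn k-means is an assumed stand-in for the clustering actually used):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_based_split(X, test_frac=0.2, n_clusters=10, seed=0):
    """Draw a representative external test set by sampling a fixed fraction
    from each cluster in structure-descriptor space."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    rng = np.random.default_rng(seed)
    test = np.concatenate([
        rng.choice(np.flatnonzero(labels == k),
                   size=max(1, int(test_frac * np.sum(labels == k))), replace=False)
        for k in range(n_clusters)
    ])
    train = np.setdiff1d(np.arange(len(X)), test)
    return train, test
```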
Bed roughness of palaeo-ice streams: insights and implications for contemporary ice sheet dynamics
NASA Astrophysics Data System (ADS)
Falcini, Francesca; Rippin, David; Selby, Katherine; Krabbendam, Maarten
2017-04-01
Bed roughness is the vertical variation of elevation along a horizontal transect. It is an important control on ice stream location and dynamics, with a correspondingly important role in determining the behaviour of ice sheets. Previous studies of bed roughness have been limited to insights derived from Radio Echo Sounding (RES) profiles across parts of Antarctica and Greenland. Such an approach has been necessary due to the inaccessibility of the underlying bed. This approach has led to important insights, such as identifying a general link between smooth beds and fast ice flow, as well as rough beds and slow ice flow. However, these insights are mainly derived from relatively coarse datasets, so that links between roughness and flow are generalised and rather simplistic. Here, we explore the use of DTMs from the well-preserved footprints of palaeo-ice streams, coupled with high resolution models of palaeo-ice flow, as a tool for investigating basal controls on the behaviour of contemporary, active ice streams in much greater detail. Initially, artificial transects were set up across the Minch palaeo-ice stream (NW Scotland) to mimic RES flight lines from past studies in Antarctica. We then explored how increasing data-resolution impacted upon the roughness measurements that were derived. Our work on the Minch palaeo-ice stream indicates that different roughness signatures are associated with different glacial landforms, and we discuss the potential for using these insights to infer, from RES-based roughness measurements, the occurrence of particular landform assemblages that may exist beneath contemporary ice sheets.
Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael
2015-05-15
We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures—thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures including the mean out degree and variance of out degrees can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
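A compact sketch of the transformation as described: embed the series with dimension m and lag tau, map each ordinal (rank-order) pattern to a node, and add an edge for every temporal succession of patterns. The parameter values are placeholders, not those of the paper.

    import numpy as np
    from collections import Counter

    def ordinal_network(x, m=4, tau=5):
        n = len(x) - (m - 1) * tau
        # ordinal pattern = ranking of the m time-lagged samples at each step
        patterns = [tuple(np.argsort(x[i:i + m * tau:tau])) for i in range(n)]
        edges = Counter(zip(patterns[:-1], patterns[1:]))  # temporal succession
        return set(patterns), edges

    t = np.linspace(0, 100, 5000)
    x = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
    nodes, edges = ordinal_network(x)
    print(len(nodes), len(edges))

Network measures such as the mean out-degree follow directly from the edge counter.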
Design of a Dual Waveguide Normal Incidence Tube (DWNIT) Utilizing Energy and Modal Methods
NASA Technical Reports Server (NTRS)
Betts, Juan F.; Jones, Michael G. (Technical Monitor)
2002-01-01
This report investigates the partition design of the proposed Dual Waveguide Normal Incidence Tube (DWNIT). Some advantages provided by the DWNIT are (1) Assessment of coupling relationships between resonators in close proximity, (2) Evaluation of "smart liners", (3) Experimental validation for parallel element models, and (4) Investigation of effects of simulated angles of incidence of acoustic waves. Energy models of the two chambers were developed to determine the Sound Pressure Level (SPL) drop across the two chambers, through the use of an intensity transmission function for the chamber's partition. The models allowed the chamber's lengthwise end samples to vary. The initial partition design (2" high, 16" long, 0.25" thick) was predicted to provide at least a 160 dB SPL drop across the partition with a compressive model, and at least a 240 dB SPL drop with a bending model using a damping loss factor of 0.01. The end chamber sample transmission coefficients were set to 0.1. Since these results predicted more SPL drop than required, a plate thickness optimization algorithm was developed. The results of the algorithm routine indicated that a plate with the same height and length, but with a thickness of 0.1" and a structural damping loss of 0.05, would provide adequate SPL isolation between the chambers.
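For orientation, the SPL drop across a partition is tied to its intensity transmission coefficient by the standard transmission-loss relation (an aside for the reader; the report's energy models are more detailed than this single formula):

    \Delta\mathrm{SPL} = 10\,\log_{10}\!\left(\frac{1}{\tau}\right)

On its own, the end chamber sample transmission coefficient of 0.1 would correspond to a 10 dB drop; the far larger predicted drops across the partition correspond to extremely small values of τ.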
Effects of low urea concentrations on protein-water interactions.
Ferreira, Luisa A; Povarova, Olga I; Stepanenko, Olga V; Sulatskaya, Anna I; Madeira, Pedro P; Kuznetsova, Irina M; Turoverov, Konstantin K; Uversky, Vladimir N; Zaslavsky, Boris Y
2017-01-01
Solvent properties of aqueous media (dipolarity/polarizability, hydrogen bond donor acidity, and hydrogen bond acceptor basicity) were measured in the coexisting phases of Dextran-PEG aqueous two-phase systems (ATPSs) containing 0.5 and 2.0 M urea. The differences between the electrostatic and hydrophobic properties of the phases in the ATPSs were quantified by analysis of partitioning of the homologous series of sodium salts of dinitrophenylated amino acids with aliphatic alkyl side chains. Furthermore, partitioning of eleven different proteins in the ATPSs was studied. The analysis of protein partition behavior in a set of ATPSs with protective osmolytes (sorbitol, sucrose, trehalose, and TMAO) at the concentration of 0.5 M, in osmolyte-free ATPS, and in ATPSs with 0.5 or 2.0 M urea in terms of the solvent properties of the phases was performed. The results show unambiguously that even at the urea concentration of 0.5 M, this denaturant affects partitioning of all proteins (except concanavalin A) through direct urea-protein interactions and via its effect on the solvent properties of the media. The direct urea-protein interactions seem to prevail over the urea effects on the solvent properties of water at the concentration of 0.5 M urea and appear to be completely dominant at 2.0 M urea concentration.
Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping
2015-10-20
Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, the primary material studied here, relative standard deviations of results among five data sets are 5%-22% for K and 42%-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to obtain unique estimates and does not require assumed correlations for D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty for D, and smaller uncertainty for K. However, considering uncertainties associated with sampling and chemical analysis, and the impact of environmental factors, the results are acceptable for engineering applications. This was supported by good agreement between model prediction and measurement. Sensitivity analysis indicated that effective diffusion distance, contact time of materials with primary sources, and depth of measured concentrations are critical for determining D, and PCB concentration in primary sources is critical for K.
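As an illustration of estimating D and K from a concentration-depth profile, one can assume a semi-infinite slab exposed to a constant source concentration C0, for which the profile follows a complementary error function; the published C-depth method differs in detail, and every number below is invented.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    C0 = 1.0                      # assumed constant source concentration
    t = 40 * 365 * 24 * 3600.0    # assumed exposure time (~40 years), s

    def profile(x, D, K):
        # semi-infinite slab solution: C(x) = K * C0 * erfc(x / (2 sqrt(D t)))
        return K * C0 * erfc(x / (2.0 * np.sqrt(D * t)))

    depth = np.linspace(0.0, 0.05, 10)                   # depth in m
    meas = profile(depth, 1e-13, 300.0)                  # synthetic "data"
    meas *= 1 + 0.1 * np.random.default_rng(2).normal(size=depth.size)
    (D_fit, K_fit), _ = curve_fit(profile, depth, meas, p0=(1e-12, 100.0))
    print(D_fit, K_fit)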
A set partitioning reformulation for the multiple-choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Voß, Stefan; Lalla-Ruiz, Eduardo
2016-05-01
The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions for solving it or being partially included in algorithmic schemes is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
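To make the problem statement concrete, here is a tiny brute-force MMKP instance (choose exactly one item per group, maximize profit under a multidimensional capacity); the paper's set partitioning reformulation is an exact model, not enumeration, and the data below are invented.

    from itertools import product

    groups = [  # each item: (profit, (weight_dim1, weight_dim2))
        [(10, (3, 4)), (6, (1, 2))],
        [(8, (2, 2)), (12, (4, 5))],
        [(7, (2, 3)), (9, (3, 1))],
    ]
    capacity = (8, 8)

    best = None
    for choice in product(*groups):  # exactly one item per group
        load = tuple(sum(w[d] for _, w in choice) for d in range(len(capacity)))
        if all(l <= c for l, c in zip(load, capacity)):
            profit = sum(p for p, _ in choice)
            if best is None or profit > best[0]:
                best = (profit, choice)
    print(best)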
Cumulants, free cumulants and half-shuffles
Ebrahimi-Fard, Kurusch; Patras, Frédéric
2015-01-01
Free cumulants were introduced as the proper analogue of classical cumulants in the theory of free probability. There is a mix of similarities and differences, when one considers the two families of cumulants. Whereas the combinatorics of classical cumulants is well expressed in terms of set partitions, that of free cumulants is described and often introduced in terms of non-crossing set partitions. The formal series approach to classical and free cumulants also largely differs. The purpose of this study is to put forward a different approach to these phenomena. Namely, we show that cumulants, whether classical or free, can be understood in terms of the algebra and combinatorics underlying commutative as well as non-commutative (half-)shuffles and (half-) unshuffles. As a corollary, cumulants and free cumulants can be characterized through linear fixed point equations. We study the exponential solutions of these linear fixed point equations, which display well the commutative, respectively non-commutative, character of classical and free cumulants.
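The combinatorial contrast mentioned above can be written out explicitly: classical moments expand over all set partitions P(n), whereas free moments expand over non-crossing partitions NC(n) (standard formulas, quoted for orientation):

    m_n = \sum_{\pi \in P(n)} \prod_{B \in \pi} \kappa_{|B|},
    \qquad
    \varphi(a^n) = \sum_{\pi \in NC(n)} \prod_{B \in \pi} \kappa_{|B|}(a)

with the κ's the classical and free cumulants, respectively.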
NASA Astrophysics Data System (ADS)
Zhang, X. Y.; Zhu, J. W.; Xie, J. C.; Liu, J. L.; Jiang, R. G.
2017-08-01
According to the characteristics and existing problems of water ecological civilization in water-shortage cities, an evaluation index system of water ecological civilization was established using a grey rough set. From six aspects (water resources, water security, water environment, water ecology, water culture and water management), this study established the prime framework of the evaluation system, including 28 items, and used rough set theory to undertake optimal selection of the index system. Grey correlation theory was then used for weightings so that the integrated evaluation index system for the water ecological civilization of water-shortage cities could be constituted. Xi’an City was taken as an example, for which the results showed that 20 evaluation indexes could be obtained after optimal selection of the preliminary framework of evaluation indexes. The most influential indices were the water-resource category index and the water environment category index. The leakage rate of the public water supply pipe network, as well as the disposal, treatment and usage rate of polluted water, the urban water surface area ratio, the water quality of the main rivers, and so on are also important. It was demonstrated that the evaluation index could provide an objective reflection of regional features and key points for the development of water ecological civilization for cities with scarce water resources. The application example is considered to have universal applicability.
Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong
2011-01-21
Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time-consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough sets theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked by a dataset of 904 alanine-mutated residues and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, percentage of the change of accessible surface area, size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots through analyzing the distribution of amino acids.
Ductility normalized-strainrange partitioning life relations for creep-fatigue life predictions
NASA Technical Reports Server (NTRS)
Halford, G. R.; Saltsman, J. F.; Hirschberg, M. H.
1977-01-01
Procedures based on Strainrange Partitioning (SRP) are presented for estimating the effects of environment and other influences on the high temperature, low cycle, creep-fatigue resistance of alloys. It is proposed that the plastic and creep ductilities determined from conventional tensile and creep rupture tests conducted in the environment of interest be used in a set of ductility normalized equations for making a first order approximation of the four SRP inelastic strainrange life relations. Different levels of sophistication in the application of the procedures are presented by means of illustrative examples with several high temperature alloys. Predictions of cyclic lives generally agree with observed lives within factors of three.
Definitions and Omissions of Heroism
ERIC Educational Resources Information Center
Martens, Jeffrey W.
2005-01-01
This article presents comments on "The Heroism of Women and Men" by Selwyn W. Becker and Alice H. Eagly. Their article specifically addressed the "cultural association of heroism with men and masculinity . . . in natural settings." Becker and Eagly evidenced roughly equivalent rates of heroism by women and men in a variety of settings. However,…
Intrusion detection using rough set classification.
Zhang, Lian-hua; Zhang, Guan-hua; Zhang, Jie; Bai, Ying-cai
2004-09-01
Recently, machine learning-based intrusion detection approaches have been the subject of extensive research because they can detect both misuse and anomaly. In this paper, rough set classification (RSC), a modern learning algorithm, is used to rank the features extracted for detecting intrusions and to generate intrusion detection models. Feature ranking is a very critical step when building the model. RSC performs feature ranking before generating rules, and converts the feature ranking to a minimal hitting set problem, which is addressed using a genetic algorithm (GA). In classical approaches this is done using a Support Vector Machine (SVM) by executing many iterations, each of which removes one useless feature. Compared with those methods, our method can avoid many iterations. In addition, a hybrid genetic algorithm is proposed to increase the convergence speed and decrease the training time of RSC. The models generated by RSC take the form of "IF-THEN" rules, which have the advantage of being readily interpretable. Tests and comparison of RSC with SVM on DARPA benchmark data showed that for Probe and DoS attacks both RSC and SVM yielded highly accurate results (greater than 99% accuracy on the testing set).
Rough Set Theory based prognostication of life expectancy for terminally ill patients.
Gil-Herrera, Eleazar; Yalcin, Ali; Tsalatsanis, Athanasios; Barnes, Laura E; Djulbegovic, Benjamin
2011-01-01
We present a novel knowledge discovery methodology that relies on Rough Set Theory to predict the life expectancy of terminally ill patients in an effort to improve the hospice referral process. Life expectancy prognostication is particularly valuable for terminally ill patients since it enables them and their families to initiate end-of-life discussions and choose the most desired management strategy for the remainder of their lives. We utilize retrospective data from 9105 patients to demonstrate the design and implementation details of a series of classifiers developed to identify potential hospice candidates. Preliminary results confirm the efficacy of the proposed methodology. We envision our work as a part of a comprehensive decision support system designed to assist terminally ill patients in making end-of-life care decisions.
NASA Astrophysics Data System (ADS)
Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang
2017-11-01
To improve the reliability of communication service in the smart distribution grid (SDG), an access selection algorithm based on dynamic network status and different service types for heterogeneous wireless networks is proposed. The network performance index values are obtained in real time by a multimode terminal, and the variation trend of the index values is analyzed using a growth matrix. The index weights are calculated by the entropy-weight method and then modified by a rough set to get the final weights. Grey relational analysis is then used to rank the candidate networks, and the optimum communication network is selected. Simulation results show that the proposed algorithm can implement dynamic access selection in the heterogeneous wireless networks of SDG effectively and reduce the network blocking probability.
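A minimal sketch of the entropy-weight step, assuming a decision matrix of candidate networks (rows) by benefit-type performance indices (columns); the rough set correction and the grey relational ranking used in the paper are replaced here by a plain weighted score.

    import numpy as np

    X = np.array([[0.8, 0.6, 0.9],    # placeholder index values per network
                  [0.5, 0.9, 0.7],
                  [0.7, 0.7, 0.6]])

    P = X / X.sum(axis=0)                     # column-normalized proportions
    k = 1.0 / np.log(X.shape[0])
    E = -k * (P * np.log(P)).sum(axis=0)      # entropy of each index
    w = (1 - E) / (1 - E).sum()               # entropy weights
    score = (X / X.max(axis=0)) @ w           # simple weighted ranking
    print(w, int(score.argmax()))             # weights and chosen network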
The total position-spread tensor: Spin partition
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Khatib, Muammar, E-mail: elkhatib@irsamc.ups-tlse.fr; Evangelisti, Stefano, E-mail: stefano@irsamc.ups-tlse.fr; Leininger, Thierry, E-mail: Thierry.Leininger@irsamc.ups-tlse.fr
2015-03-07
The Total Position Spread (TPS) tensor, defined as the second moment cumulant of the position operator, is a key quantity to describe the mobility of electrons in a molecule or an extended system. In the present investigation, the partition of the TPS tensor according to spin variables is derived and discussed. It is shown that, while the spin-summed TPS gives information on charge mobility, the spin-partitioned TPS tensor becomes a powerful tool that provides information about spin fluctuations. The case of the hydrogen molecule is treated, both analytically, by using a 1s Slater-type orbital, and numerically, at Full Configuration Interaction (FCI) level with a V6Z basis set. It is found that, for very large inter-nuclear distances, the partitioned tensor grows quadratically with the distance in some of the low-lying electronic states. This fact is related to the presence of entanglement in the wave function. Non-dimerized open chains described by a model Hubbard Hamiltonian and linear hydrogen chains H_n (n ≥ 2), composed of equally spaced atoms, are also studied at FCI level. The hydrogen systems show the presence of marked maxima for the spin-summed TPS (corresponding to a high charge mobility) when the inter-nuclear distance is about 2 bohrs. This fact can be associated with the presence of a Mott transition occurring in this region. The spin-partitioned TPS tensor, on the other hand, grows quadratically at long distances, a fact that corresponds to the high spin mobility in a magnetic system.
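Schematically, with R̂ the total position operator, the TPS tensor and its spin partition read

    \Lambda_{\alpha\beta} = \langle\Psi|\hat{R}_\alpha \hat{R}_\beta|\Psi\rangle
                          - \langle\Psi|\hat{R}_\alpha|\Psi\rangle \langle\Psi|\hat{R}_\beta|\Psi\rangle,
    \qquad
    \hat{R} = \hat{R}_\uparrow + \hat{R}_\downarrow

so the spin-summed tensor splits into up-up, down-down and mixed-spin blocks; this is a simplified paraphrase of the definitions, not the paper's exact notation.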
Dubarry, Nelly; Pasta, Franck; Lane, David
2006-01-01
Most bacterial chromosomes carry an analogue of the parABS systems that govern plasmid partition, but their role in chromosome partition is ambiguous. parABS systems might be particularly important for orderly segregation of multipartite genomes, where their role may thus be easier to evaluate. We have characterized parABS systems in Burkholderia cenocepacia, whose genome comprises three chromosomes and one low-copy-number plasmid. A single parAB locus and a set of ParB-binding (parS) centromere sites are located near the origin of each replicon. ParA and ParB of the longest chromosome are phylogenetically similar to analogues in other multichromosome and monochromosome bacteria but are distinct from those of smaller chromosomes. The latter form subgroups that correspond to the taxa of their hosts, indicating evolution from plasmids. The parS sites on the smaller chromosomes and the plasmid are similar to the “universal” parS of the main chromosome but with a sequence specific to their replicon. In an Escherichia coli plasmid stabilization test, each parAB exhibits partition activity only with the parS of its own replicon. Hence, parABS function is based on the independent partition of individual chromosomes rather than on a single communal system or network of interacting systems. Stabilization by the smaller chromosome and plasmid systems was enhanced by mutation of parS sites and a promoter internal to their parAB operons, suggesting autoregulatory mechanisms. The small chromosome ParBs were found to silence transcription, a property relevant to autoregulation.
Allowable SEM noise for unbiased LER measurement
NASA Astrophysics Data System (ADS)
Papavieros, George; Constantoudis, Vassilios; Gogolides, Evangelos
2018-03-01
Recently, a novel method for the calculation of unbiased Line Edge Roughness (LER) based on Power Spectral Density (PSD) analysis has been proposed. In this paper, an alternative method is first discussed and investigated, utilizing the Height-Height Correlation Function (HHCF) of edges. The HHCF-based method enables the unbiased determination of the whole triplet of LER parameters, including, besides the rms, the correlation length and the roughness exponent. The key to both methods is the sensitivity of the PSD and HHCF to noise at high frequencies and short distances, respectively. Second, we elaborate a testbed of synthesized SEM images with controlled LER and noise to justify the effectiveness of the proposed unbiased methods. Our main objective is to find the boundaries of the method with respect to noise levels and roughness characteristics for which the method remains reliable, i.e., the maximum amount of noise allowed for which the output results agree with the known, controlled inputs. At the same time, we also set the extremes of the roughness parameters for which the methods hold their accuracy.
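A minimal HHCF computation for a detected line edge, assuming edge positions sampled at unit pixel spacing; uncorrelated SEM noise of variance s² offsets H(r) by roughly 2s² at the shortest distances, which is the sensitivity the unbiased estimation exploits.

    import numpy as np

    def hhcf(h, max_lag=200):
        """H(r) = <(h(x+r) - h(x))^2> for lags r = 1 .. max_lag-1."""
        lags = np.arange(1, max_lag)
        return lags, np.array([np.mean((h[r:] - h[:-r]) ** 2) for r in lags])

    rng = np.random.default_rng(3)
    edge = 0.05 * np.cumsum(rng.normal(size=4000))   # synthetic correlated edge
    noisy = edge + rng.normal(scale=1.0, size=edge.size)
    lags, H = hhcf(noisy)
    print(H[:3])   # short-distance values carry the noise offset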
Hajnal, A.
1971-01-01
If the continuum hypothesis is assumed, there is a graph G whose vertices form an ordered set of type ω₁²; G does not contain triangles or complete even graphs of the form [ℵ₀, ℵ₀], and there is no independent subset of vertices of type ω₁².
Method development estimating ambient mercury concentration from monitored mercury wet deposition
NASA Astrophysics Data System (ADS)
Chen, S. M.; Qiu, X.; Zhang, L.; Yang, F.; Blanchard, P.
2013-05-01
Speciated atmospheric mercury data have recently been monitored at multiple locations in North America, but the spatial coverage is far smaller than that of the long-established mercury wet deposition network. The present study describes a first attempt at linking ambient concentration with wet deposition using Beta distribution fitting of a ratio estimate. The mean, median, mode, standard deviation, and skewness of the fitted Beta distribution parameters were generated using data collected in 2009 at 11 monitoring stations. Comparison of the normalized histogram with the fitted density function shows that the empirical and fitted Beta distributions of the ratio agree closely. The estimated ambient mercury concentration was further partitioned into reactive gaseous mercury and particulate-bound mercury using the linear regression model developed by Amos et al. (2012). The method presented here can be used to roughly estimate ambient mercury concentration at locations and/or times where such measurement is not available but where wet deposition is monitored.
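A sketch of the distribution-fitting step, assuming the concentration-to-deposition ratios have already been scaled into (0, 1); the sample below is synthetic, not station data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    ratio = rng.beta(2.0, 5.0, size=500)   # placeholder monitored ratios

    a, b, loc, scale = stats.beta.fit(ratio, floc=0, fscale=1)
    mean = stats.beta.mean(a, b)
    mode = (a - 1) / (a + b - 2) if min(a, b) > 1 else float("nan")
    print(a, b, mean, mode)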
Bioconcentration of chlorinated hydrocarbons from sediment by oligochaetes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connell, D.W.; Bowman, M.; Hawker, D.W.
1988-12-01
Previously published data on the accumulation of 15 chlorinated hydrocarbons from sediment by oligochaetes have been interpreted on the basis of bioconcentration from interstitial water. Calculation of the interstitial water concentration allowed determination of uptake and clearance rate constants together with bioconcentration factors (KB) for these compounds. These three factors each exhibited a systematic relationship to the octanol/water partition coefficient (KOW). The log KB versus log KOW relationship was roughly linear over the log KOW range from 4.4 to 6.4 and displayed an increasing nonlinear deviation for log KOW values greater than 6.4. These relationships are qualitatively similar to those established for other aquatic organisms where bioconcentration from water was the mechanism involved. This suggests that interstitial water may be the phase from which lipophilic compounds in sediment are bioconcentrated by oligochaetes. An expression relating the bioconcentration factor to the biotic concentration and various sediment characteristics has also been developed.
Benefits and Pitfalls of GRACE Terrestrial Water Storage Data Assimilation
NASA Technical Reports Server (NTRS)
Girotto, Manuela
2018-01-01
Satellite observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) mission have a coarse resolution in time (monthly) and space (roughly 150,000 sq km at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Nonetheless, data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This presentation illustrates some of the benefits and drawbacks of assimilating TWS observations from GRACE into a land surface model over the continental United States and India. The assimilation scheme yields improved skill metrics for groundwater compared to the no-assimilation simulations. A smaller impact is seen for surface and root-zone soil moisture. Further, GRACE observes TWS depletion associated with anthropogenic groundwater extraction. Results from the assimilation emphasize the importance of representing anthropogenic processes in land surface modeling and data assimilation systems.
Independence and totalness of subspaces in phase space methods
NASA Astrophysics Data System (ADS)
Vourdas, A.
2018-04-01
The concepts of independence and totalness of subspaces are introduced in the context of quasi-probability distributions in phase space, for quantum systems with finite-dimensional Hilbert space. It is shown that due to the non-distributivity of the lattice of subspaces, there are various levels of independence, from pairwise independence up to (full) independence. Pairwise totalness, totalness and other intermediate concepts are also introduced, which roughly express that the subspaces overlap strongly among themselves, and they cover the full Hilbert space. A duality between independence and totalness, that involves orthocomplementation (logical NOT operation), is discussed. Another approach to independence is also studied, using Rota's formalism on independent partitions of the Hilbert space. This is used to define informational independence, which is proved to be equivalent to independence. As an application, the pentagram (used in discussions on contextuality) is analysed using these concepts.
NASA Astrophysics Data System (ADS)
Mia, Mozammel; Bashir, Mahmood Al; Dhar, Nikhil Ranjan
2016-07-01
Hard turning is gradually replacing the time-consuming conventional turning process, which is typically followed by grinding, by producing surface quality comparable to grinding. The hard-turned surface roughness depends on the cutting parameters, machining environment and tool insert configuration. In this article, the variation of the surface roughness of the produced surfaces with changes in tool insert configuration, use of coolant and different cutting parameters (cutting speed, feed rate) has been investigated. This investigation was performed in machining AISI 1060 steel, hardened to 56 HRC by heat treatment, using coated carbide inserts under two different machining environments. The depth of cut, fluid pressure and material hardness were kept constant. A Design of Experiments (DOE) was performed to determine the number and combinations of the different cutting parameters. A full factorial analysis was performed to examine the effect of the main factors as well as the interaction effects of factors on surface roughness. A statistical analysis of variance (ANOVA) was employed to determine the combined effect of cutting parameters, environment and tool configuration. The result of this analysis reveals that environment has the most significant impact on surface roughness, followed by feed rate and tool configuration.
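A sketch of the full factorial ANOVA step, assuming a tidy table with a roughness response and the factors named in the abstract; all column names and values are invented for illustration.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(5)
    df = pd.DataFrame({
        "env":  np.repeat(["dry", "coolant"], 8),
        "feed": np.tile(np.repeat([0.10, 0.14], 4), 2),
        "tool": np.tile(["insert_A", "insert_B"], 8),
    })
    df["Ra"] = (0.8 + 0.4 * (df.env == "dry") + 2.0 * df.feed
                + rng.normal(scale=0.05, size=len(df)))  # synthetic roughness

    model = ols("Ra ~ C(env) * C(tool) + feed", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))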
Effects of bio-inspired microscale roughness on macroscale flow structures
NASA Astrophysics Data System (ADS)
Bocanegra Evans, Humberto; Hamed, Ali M.; Gorumlu, Serdar; Doosttalab, Ali; Aksak, Burak; Chamorro, Leonardo P.; Castillo, Luciano
2016-11-01
The interaction between rough surfaces and flows is a complex physical situation that produces rich flow phenomena. While random roughness typically increases drag, properly engineered roughness patterns may produce positive results, e.g. dimples in a golf ball. Here we present a set of PIV measurements in an index-matched facility of the effect of a bio-inspired surface that consists of an array of mushroom-shaped micro-pillars. The experiments are carried out, under fully wetted conditions, in a flow with an adverse pressure gradient, triggering flow separation. The introduction of the micro-pillars dramatically decreases the size of the recirculation bubble; the area with backflow is reduced by approximately 60%. This suggests a positive impact on the form drag generated by the fluid. Furthermore, a negligible effect is seen on the turbulence production terms. The micro-pillars affect the flow by generating low and high pressure perturbations at the interface between the bulk and roughness layer, in a fashion comparable to that of synthetic jets. The passive approach, however, facilitates the implementation of this coating. As the mechanism does not rely on surface hydrophobicity, it is well suited for underwater applications and its functionality should not degrade over time.
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
Raster Data Partitioning for Supporting Distributed GIS Processing
NASA Astrophysics Data System (ADS)
Nguyen Thai, B.; Olasz, A.
2015-08-01
In the geospatial sector, the big data concept has already had an impact. Several studies apply techniques originally from computer science to GIS processing of huge amounts of geospatial data. In other research studies, geospatial data is treated as if it had always been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, in not only the amount but also the spectral, spatial and temporal resolution of raw data. A significant portion of big data is geospatial data, and the size of such data is growing rapidly at least by 20% every year (Dasgupta, 2013). The increasing volume of raw data arrives in different formats, representations and purposes, and the wealth of information derived from these data sets is what represents valuable results. However, computing capability and processing speed face limitations, even when semi-automatic or automatic procedures are aimed at complex geospatial data (Kristóf et al., 2014). In recent times, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing approaches. Cloud computing all the more requires appropriate processing algorithms that can be distributed and handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capabilities to process non-GIS big data. But it is sometimes inconvenient or inefficient to rewrite existing algorithms for the Map-Reduce programming model, and GIS data cannot be partitioned like text-based data by lines or bytes. Hence, we would like to find an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting, or with only minor modifications. This paper focuses on a technical overview of currently available distributed computing environments, as well as GIS (raster) data partitioning, distribution and distributed processing of GIS algorithms. A proof-of-concept implementation has been made for raster data partitioning, distribution and processing. The first performance results have been compared against the commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods heavily depend on application areas, therefore we may consider data partitioning as a preprocessing step before applying processing services to the data. As a proof of concept, we have implemented a simple tile-based partitioning method splitting an image into smaller grids (NxM tiles) and comparing the processing time to existing methods by NDVI calculation. The concept is demonstrated using our own open source processing framework.
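A proof-of-concept of the tile-based partitioning plus NDVI step, assuming in-memory numpy arrays for the red and near-infrared bands; a production version would read raster blocks through a GIS library and hand each tile to a worker.

    import numpy as np

    def tile_slices(shape, n, m):
        """Yield 2-D slices that split an array into an n x m grid of tiles."""
        H, W = shape
        hs, ws = H // n, W // m
        for i in range(n):
            for j in range(m):
                yield np.s_[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]

    rng = np.random.default_rng(6)
    red = rng.uniform(0.0, 0.3, size=(1024, 1024))
    nir = rng.uniform(0.2, 0.6, size=(1024, 1024))

    ndvi = np.empty_like(red)
    for sl in tile_slices(red.shape, 4, 4):   # each tile is independent work
        ndvi[sl] = (nir[sl] - red[sl]) / (nir[sl] + red[sl])
    print(float(ndvi.mean()))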
Assessment of Ice Shape Roughness Using a Self-Organizing Map Approach
NASA Technical Reports Server (NTRS)
Mcclain, Stephen T.; Kreeger, Richard E.
2013-01-01
Self-organizing maps are neural-network techniques for representing noisy, multidimensional data aligned along a lower-dimensional and nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. Prior investigations of ice shapes have focused on using self-organizing maps to characterize mean ice forms. The Icing Research Branch has recently acquired a high-resolution three-dimensional scanner system capable of resolving ice shape surface roughness. A method is presented for the evaluation of surface roughness variations using high-resolution surface scans based on a self-organizing map representation of the mean ice shape. The new method is demonstrated for 1) an 18-in. NACA 23012 airfoil at 2° AOA just after the initial ice coverage of the leading 5% of the suction surface of the airfoil, 2) a 21-in. NACA 0012 at 0° AOA following coverage of the leading 10% of the airfoil surface, and 3) a cold-soaked 21-in. NACA 0012 airfoil without ice. The SOM method resulted in descriptions of the statistical coverage limits and a quantitative representation of the early stages of ice roughness formation on the airfoils. Limitations of the SOM method are explored, and the uncertainty limits of the method are investigated using the non-iced NACA 0012 airfoil measurements.
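A minimal self-organizing map in the spirit described above: codebook vectors are pulled toward nearby data with a decaying learning rate and a neighborhood kernel, so the one-dimensional chain of nodes aligns with the mean form of a noisy two-dimensional point cloud (a stand-in for an ice-shape cross-section; all settings are illustrative).

    import numpy as np

    rng = np.random.default_rng(7)
    theta = rng.uniform(0, np.pi, 2000)          # noisy half-circle "ice shape"
    data = np.c_[np.cos(theta), np.sin(theta)]
    data += rng.normal(scale=0.05, size=data.shape)

    k, epochs = 20, 20
    codebook = data[rng.choice(len(data), k, replace=False)].copy()
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)          # decaying learning rate
        for x in data[rng.permutation(len(data))]:
            w = np.argmin(((codebook - x) ** 2).sum(axis=1))   # winner node
            for j in range(k):                   # neighborhood pull
                h = np.exp(-((j - w) ** 2) / (2 * 2.0 ** 2))
                codebook[j] += lr * h * (x - codebook[j])
    print(codebook[:3])                          # nodes now trace the mean form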
Mengucci, P; Gatto, A; Bassoli, E; Denti, L; Fiori, F; Girardin, E; Bastianoni, P; Rutkowski, B; Czyrska-Filemonowicz, A; Barucca, G
2017-07-01
Direct Metal Laser Sintering (DMLS) technology was used to produce tensile and flexural samples based on the Ti-6Al-4V biomedical composition. Tensile samples were produced in three different orientations in order to investigate the effect of building direction on the mechanical behavior. Flexural samples, on the other hand, were submitted to thermal treatments to simulate the firing cycle commonly used to veneer metallic devices with ceramics in dental applications. Roughness and hardness measurements as well as tensile and flexural mechanical tests were performed to study the mechanical response of the alloy, while X-ray diffraction (XRD), electron microscopy (SEM, TEM, STEM) techniques and microanalysis (EDX) were used to investigate sample microstructure. Results evidenced a difference in the mechanical response of tensile samples built in orthogonal directions. In terms of microstructure, samples not submitted to the firing cycle show a single-phase acicular α' (hcp) structure typical of metal parts subjected to high cooling rates. After the firing cycle, samples show a reduction of hardness and strength due to the formation of laths of the β (bcc) phase at the boundaries of the primary α' plates as well as to variation of the lattice parameters of the hcp phase. Element partitioning during the firing cycle gives rise to a high concentration of V atoms (up to 20 wt%) at the plate boundaries where the β phase preferentially forms.
Distribution of Diverse Escherichia coli between Cattle and Pasture
NandaKafle, Gitanjali; Seale, Tarren; Flint, Toby; Nepal, Madhav; Venter, Stephanus N.; Brözel, Volker S.
2017-01-01
Escherichia coli is widely considered to not survive for extended periods outside the intestines of warm-blooded animals; however, recent studies demonstrated that E. coli strains maintain populations in soil and water without any known fecal contamination. The objective of this study was to investigate whether the niche partitioning of E. coli occurs between cattle and their pasture. We attempted to clarify whether E. coli from bovine feces differs phenotypically and genotypically from isolates maintaining a population in pasture soil over winter. Soil, bovine fecal, and run-off samples were collected before and after the introduction of cattle to the pasture. Isolates (363) were genotyped by uidA and mutS sequences and phylogrouping, and evaluated for curli formation (Rough, Dry, And Red, or RDAR). Three types of clusters emerged, viz. bovine-associated, clusters devoid of cattle isolates and representing isolates endemic to the pasture environment, and clusters with both. All isolates clustered with strains of E. coli sensu stricto, distinct from the cryptic species Clades I, III, IV, and V. Pasture soil endemic and bovine fecal populations had very different phylogroup distributions, indicating niche partitioning. The soil endemic population was largely comprised of phylogroup B1 and had a higher average RDAR score than other isolates. These results indicate the existence of environmental E. coli strains that are phylogenetically distinct from bovine fecal isolates, and that have the ability to maintain populations in the soil environment.
Yan, Bo; Pan, Chongle; Olman, Victor N; Hettich, Robert L; Xu, Ying
2004-01-01
Mass spectrometry is one of the most popular analytical techniques for identification of individual proteins in a protein mixture, one of the basic problems in proteomics. It identifies a protein through identifying its unique mass spectral pattern. While the problem is theoretically solvable, it remains a challenging problem computationally. One of the key challenges comes from the difficulty in distinguishing the N- and C-terminus ions, mostly b- and y-ions respectively. In this paper, we present a graph algorithm for solving the problem of separating b- from y-ions in a set of mass spectra. We represent each spectral peak as a node and consider two types of edges: a type-1 edge connects two peaks possibly of the same ion type and a type-2 edge connects two peaks possibly of different ion types, predicted based on local information. The ion-separation problem is then formulated and solved as a graph partition problem, which is to partition the graph into three subgraphs, namely b-ions, y-ions and others, so as to maximize the total weight of type-1 edges while minimizing the total weight of type-2 edges within each subgraph. We have developed a dynamic programming algorithm for rigorously solving this graph partition problem and implemented it as a computer program PRIME. We have tested PRIME on 18 data sets of highly accurate FT-ICR tandem mass spectra and found that it achieved ~90% accuracy for separation of b- and y-ions.
Study on the rheological properties and volatile release of cold-set emulsion-filled protein gels.
Mao, Like; Roos, Yrjö H; Miao, Song
2014-11-26
Emulsion-filled protein gels (EFP gels) were prepared through a cold-set gelation process, and they were used to deliver volatile compounds. An increase in the whey protein isolate (WPI) content from 4 to 6% w/w did not show a significant effect on the gelation time, whereas an increase in the oil content from 5 to 20% w/w resulted in an earlier onset of gelation. Gels with a higher WPI content had a higher storage modulus and water-holding capacity (WHC), and they presented a higher force and strain at breaking, indicating that a more compact gel network was formed. An increase in the oil content contributed to gels with a higher storage modulus and force at breaking; however, this increase did not affect the WHC of the gels, and gels with a higher oil content became more brittle, resulting in a decreased strain at breaking. GC headspace analysis showed that volatiles were released at lower rates and had lower air-gel partition coefficients in EFP gels than in their ungelled counterparts. Gels with a higher WPI content had lower release rates and partition coefficients of the volatiles. A change in the oil content significantly modified the partition of volatiles at equilibrium, but it produced a minor effect on the release rate of the volatiles. The findings indicated that EFP gels could potentially be used to modulate volatile release by varying the rheological properties of the gel.
Dueri, Sibylle; Castro-Jiménez, Javier; Comenges, José-Manuel Zaldívar
2008-09-15
A review of experimental data has been performed to study the relationships between the concentrations in water, pore water and sediments for different families of organic contaminants. The objective was to determine whether it is possible to set EQS for sediments from the EQS defined for surface waters in the Daughter Directive of the European Parliament (COM (2006) 397). The analysis of experimental data showed that even though in some specific cases there is a coupling between the water column and sediments, this coupling is rather the exception. Therefore it is not advisable to use water column data to assess the chemical quality status of sediments, and it is necessary to measure in both media. At the moment, EQS have been defined for the water column and will assess only compliance with the good chemical status of surface waters. Since sediment toxicity depends on the dissolved pore water concentration, the EQS developed for water could be applied to pore water (interstitial water); hence, there would be no need to develop another set of EQS. The partitioning approach has been proposed as a solution to calculate sediment EQS from water EQS, but the partitioning coefficient strongly depends on sediment characteristics and its use introduces an important uncertainty in the definition of sediment EQS. Therefore, the direct measurement of pore water concentration is regarded as the better option.
Hou, Tingjun; Xu, Xiaojie
2002-12-01
In this study, the relationships between the brain-blood concentration ratio of 96 structurally diverse compounds and a large number of structurally derived descriptors were investigated. The linear models were based on molecular descriptors that can be calculated for any compound simply from a knowledge of its molecular structure. The linear correlation coefficients of the models were optimized by genetic algorithms (GAs), and the descriptors used in the linear models were automatically selected from 27 structurally derived descriptors. The GA optimizations resulted in a group of linear models with three or four molecular descriptors with good statistical significance. The change of descriptor use as the evolution proceeds demonstrates that the octanol/water partition coefficient and the partial negative solvent-accessible surface area multiplied by the negative charge are crucial to blood-brain barrier permeability. Moreover, we found that predictions using multiple QSPR models from the GA optimization gave quite good results in spite of the diversity of structures, better than predictions using the best single model. The predictions for the two external sets with 37 diverse compounds using multiple QSPR models indicate that the best linear models with four descriptors are sufficiently effective for predictive use. Considering the ease of computation of the descriptors, the linear models may be used as general utilities to screen the blood-brain barrier partitioning of drugs in a high-throughput fashion.
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
Parameterized Spectral Bathymetric Roughness Using the Nonequispaced Fast Fourier Transform
NASA Astrophysics Data System (ADS)
Fabre, David Hanks
The ocean and acoustic modeling community has specifically asked for roughness from bathymetry. An effort has been undertaken to provide what can be thought of as the high-frequency content of bathymetry. By contrast, the low-frequency content of bathymetry is the set of contours. The two-dimensional amplitude spectrum calculated with the nonequispaced fast Fourier transform (Kunis, 2006) is exploited as the statistic to provide several parameters of roughness following the method of Fox (1996). When an area is uniformly rough, it is termed isotropically rough. When an area exhibits lineation effects (like in a trough or a ridge line in the bathymetry), the term anisotropically rough is used. A predominant spatial azimuth of lineation summarizes anisotropic roughness. The power law model fit produces a roll-off parameter that also provides insight into the roughness of the area. These four parameters give rise to several derived parameters. Algorithmic accomplishments include reviving Fox's method (1985, 1996) and improving the method with the possibly geophysically more appropriate nonequispaced fast Fourier transform. A new composite parameter, simply the overall integral length of the nonlinear parameterizing function, is used to make within-dataset comparisons. A synthetic dataset and six multibeam datasets covering practically all depth regimes have been analyzed with the tools that have been developed. Data-specific contributions include possibly discovering an aspect ratio isotropic cutoff level (less than 1.2) and showing a range of spectral fall-off values, from about -0.5 for a sandy-bottomed Gulf of Mexico area to about -1.8 for a coral reef area just outside of Saipan harbor. We also rank the targeted type of dataset, the best-resolution gridded datasets, from smoothest to roughest using a factor based on the kernel dimensions and a percentage from the windowing operation, all multiplied by the overall integration length.
NASA Astrophysics Data System (ADS)
Reygondeau, Gabriel; Olivier Irisson, Jean; Guieu, Cecile; Gasparini, Stephane; Ayata, Sakina; Koubbi, Philippe
2013-04-01
In recent decades, it has been found useful to ecoregionalise the pelagic environment, assuming that within each partition environmental conditions are distinguishable and unique. Indeed, each proposed partition of the ocean aims to delineate the main oceanographic and ecological patterns, in order to provide a geographical framework of marine ecosystems for ecological studies and management purposes. The aim of the present work is to integrate and process existing data on the pelagic environment of the Mediterranean Sea in order to define biogeochemical regions. Open-access databases including remote sensing observations, oceanographic campaign data and physical modeling simulations are used. These various data sets allow the multidisciplinary view required to understand the interactions between climate and Mediterranean marine ecosystems. The first step of our study consisted of a statistical selection of a set of crucial environmental factors, to propose the most parsimonious biogeographical approach that allows detection of the main oceanographic structure of the Mediterranean Sea. Second, based on the identified set of environmental parameters, both non-hierarchical and hierarchical clustering algorithms were tested. Outputs from each methodology were then inter-compared to propose a robust map of the biotopes (unique ranges of environmental parameters) of the area. Each biotope was then modeled using a non-parametric environmental niche method to infer a dynamic biogeochemical partition. Last, the seasonal, interannual and long-term spatial changes of each biogeochemical region were investigated. The future of this work will be to perform a second partition to subdivide the biogeochemical regions according to biotic features of the Mediterranean Sea (ecoregions). This second level of division will then be used as a geographical framework to identify ecosystems that have been altered by human activities (i.e. pollution, fishery, invasive species) for the European project PERSEUS (Protecting EuRopean Seas and borders through the intelligent Use of Surveillance) and the French program MERMEX (Marine Ecosystems Response in the Mediterranean Experiment).
Chan, An-Wen; Fung, Kinwah; Tran, Jennifer M; Kitchen, Jessica; Austin, Peter C; Weinstock, Martin A; Rochon, Paula A
2016-10-01
Keratinocyte carcinoma (nonmelanoma skin cancer) accounts for substantial burden in terms of high incidence and health care costs but is excluded by most cancer registries in North America. Administrative health insurance claims databases offer an opportunity to identify these cancers using diagnosis and procedural codes submitted for reimbursement purposes. To apply recursive partitioning to derive and validate a claims-based algorithm for identifying keratinocyte carcinoma with high sensitivity and specificity. Retrospective study using population-based administrative databases linked to 602 371 pathology episodes from a community laboratory for adults residing in Ontario, Canada, from January 1, 1992, to December 31, 2009. The final analysis was completed in January 2016. We used recursive partitioning (classification trees) to derive an algorithm based on health insurance claims. The performance of the derived algorithm was compared with 5 prespecified algorithms and validated using an independent academic hospital clinic data set of 2082 patients seen in May and June 2011. Sensitivity, specificity, positive predictive value, and negative predictive value using the histopathological diagnosis as the criterion standard. We aimed to achieve maximal specificity, while maintaining greater than 80% sensitivity. Among 602 371 pathology episodes, 131 562 (21.8%) had a diagnosis of keratinocyte carcinoma. Our final derived algorithm outperformed the 5 simple prespecified algorithms and performed well in both community and hospital data sets in terms of sensitivity (82.6% and 84.9%, respectively), specificity (93.0% and 99.0%, respectively), positive predictive value (76.7% and 69.2%, respectively), and negative predictive value (95.0% and 99.6%, respectively). Algorithm performance did not vary substantially during the 18-year period. This algorithm offers a reliable mechanism for ascertaining keratinocyte carcinoma for epidemiological research in the absence of cancer registry data. Our findings also demonstrate the value of recursive partitioning in deriving valid claims-based algorithms.
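A sketch of the recursive partitioning step, assuming binary claims-code features linked to a pathology gold standard; the features, labels and tree settings are invented stand-ins, not the study's algorithm.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(8)
    n = 20000
    X = rng.integers(0, 2, size=(n, 5))                  # claims-code flags
    y = ((X[:, 0] & (X[:, 1] | X[:, 2])) | (rng.random(n) < 0.02)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=200)
    tree.fit(X_tr, y_tr)

    pred = tree.predict(X_te).astype(bool)
    truth = y_te.astype(bool)
    sens = (pred & truth).sum() / truth.sum()
    spec = (~pred & ~truth).sum() / (~truth).sum()
    print(f"sensitivity={sens:.3f} specificity={spec:.3f}")
    print(export_text(tree, feature_names=[f"code_{i}" for i in range(5)]))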
NASA Astrophysics Data System (ADS)
Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.
2017-04-01
Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by the roughness coefficient values on hydraulic models in flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model, HEC-RAS, is selected and linked to Monte Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM for input data uncertainty minimisation and to improve determination accuracy on the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated on their accuracy to represent the estimated roughness values. Finally, Latin Hypercube Sampling has been used for generation of different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data, from an extreme historical flash flood event, are used for validation of the method. The calibration process is based on a binary wet-dry reasoning with the use of the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
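A sketch of the roughness-sampling step, assuming a lognormal Manning's n distribution per reach (the study fits and evaluates several theoretical distributions); each Latin Hypercube row would parameterise one hydraulic-model run.

    import numpy as np
    from scipy.stats import qmc, lognorm

    n_runs, n_reaches = 500, 3
    sampler = qmc.LatinHypercube(d=n_reaches, seed=0)
    u = sampler.random(n_runs)                 # stratified uniforms in [0, 1)

    # illustrative per-reach lognormal parameters (median n, shape sigma)
    medians, sigmas = [0.035, 0.050, 0.030], [0.25, 0.30, 0.20]
    manning = np.column_stack([
        lognorm.ppf(u[:, j], s=sigmas[j], scale=medians[j])
        for j in range(n_reaches)
    ])
    print(manning.min(axis=0), manning.max(axis=0))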
Accelerated aging effects on surface hardness and roughness of lingual retainer adhesives.
Ramoglu, Sabri Ilhan; Usumez, Serdar; Buyukyilmaz, Tamer
2008-01-01
To test the null hypothesis that accelerated aging has no effect on the surface microhardness and roughness of two light-cured lingual retainer adhesives. Ten samples of light-cured materials, Transbond Lingual Retainer (3M Unitek) and Light Cure Retainer (Reliance), were cured with a halogen light for 40 seconds. Vickers hardness and surface roughness were measured before and after accelerated aging of 300 hours in a weathering tester. Differences between mean values were analyzed for statistical significance using a t-test. The level of statistical significance was set at P < .05. The mean Vickers hardness of Transbond Lingual Retainer was 62.8 +/- 3.5 and 79.6 +/- 4.9 before and after aging, respectively. The mean Vickers hardness of Light Cure Retainer was 40.3 +/- 2.6 and 58.3 +/- 4.3 before and after aging, respectively. Differences in both groups were statistically significant (P < .001). Following aging, mean surface roughness changed from 0.039 µm to 0.121 µm and from 0.021 µm to 0.031 µm for Transbond Lingual Retainer and Light Cure Retainer, respectively. The roughening of Transbond Lingual Retainer with aging was statistically significant (P < .05), while the change in the surface roughness of Light Cure Retainer was not (P > .05). Accelerated aging significantly increased the surface microhardness of both light-cured retainer adhesives tested. It also significantly increased the surface roughness of the Transbond Lingual Retainer.
Rough case-based reasoning system for continuous casting
NASA Astrophysics Data System (ADS)
Su, Wenbin; Lei, Zhufeng
2018-04-01
Continuous casting occupies a pivotal position in the iron and steel industry. Rough set theory and case-based reasoning (CBR) were combined in the research and implementation of quality assurance for continuous casting billets, to improve the efficiency and accuracy of determining the processing parameters. The object-oriented method was applied to represent the continuous casting cases. The weights of the attributes were calculated by an algorithm based on rough set theory, and a retrieval mechanism for the continuous casting cases was designed. Test cases were used to exercise the retrieval mechanism, and analysis of the results revealed how the retrieval attributes influence the determination of the processing parameters. A comprehensive evaluation model was established using attribute recognition theory. According to the features of the defects, different methods were adopted to describe the quality condition of the continuous casting billet. With this system, knowledge is not only inherited but also applied to adjust the processing parameters through case-based reasoning, so as to assure the quality of the continuous casting and improve the intelligence level of the continuous casting process.
General view, south fourth-floor (attic) room, center block, looking northeast. Originally two rooms, the partition wall was likely removed when a cistern was installed, formerly set on the platform at the center of this view. - Lazaretto Quarantine Station, Wanamaker Avenue and East Second Street, Essington, Delaware County, PA
Team Training for Command and Control Systems. Volume II. Recommendations for Research Program.
1982-04-01
between individual and group goals and how they are set, the roles of hedonistic individual orientation and altruistic commitment to a group, and... boredom. They are not intended to be definitive, comprehensive, or exhaustive. They are also not mutually exclusive, and other partitionings of the
USDA-ARS?s Scientific Manuscript database
Fugacity and bioavailability concepts can be challenging topics to communicate effectively in the timeframe of an academic laboratory course setting. In this experiment, students observe partitioning of the residues over time into an artificial biological matrix. The three compounds utilized are o...
The Development of the Speaker Independent ARM Continuous Speech Recognition System
1992-01-01
spoken airborne reconnaissance reports using a speech recognition system based on phoneme-level hidden Markov models (HMMs). Previous versions of the ARM...will involve automatic selection from multiple model sets, corresponding to different speaker types, and that the most rudimentary partition of a...The vocabulary size for the ARM task is 497 words. These words are related to the phoneme-level symbols corresponding to the models in the model set
NASA Astrophysics Data System (ADS)
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno
2018-03-01
This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
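A minimal sketch (not the authors' code) of the HPC timing idea: build the annual cumulative curve of temperature minus melt threshold and take the first day of positive melt energy as an approximate ablation onset. The data are synthetic, and the onset criterion is a crude stand-in for the paper's cumulative-curve analysis.

```python
# Sketch: cumulative positive degree-day curve and an approximate ablation onset.
import numpy as np

days = np.arange(365)
temp = 5.0 * np.sin((days - 100) / 365.0 * 2.0 * np.pi) + 2.0  # synthetic daily mean temp, deg C
t_melt = 0.0                                                   # melt threshold temperature

cum_dd = np.cumsum(np.maximum(temp - t_melt, 0.0))   # cumulative positive degree-days
melting = np.diff(cum_dd) > 0                        # days contributing melt energy
onset = int(np.argmax(melting))                      # first such day (toy criterion)
print("approximate ablation onset: day", onset)
```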
Retrieval of Soil Moisture and Roughness from the Polarimetric Radar Response
NASA Technical Reports Server (NTRS)
Sarabandi, Kamal; Ulaby, Fawwaz T.
1997-01-01
The main objective of this investigation was the characterization of soil moisture using imaging radars. In order to accomplish this task, a number of intermediate steps had to be undertaken. In this investigation, the theoretical, numerical, and experimental aspects of electromagnetic scattering from natural surfaces were considered, with emphasis on remote sensing of soil moisture. In the general case, the microwave backscatter from natural surfaces is mainly influenced by three major factors: (1) the roughness statistics of the soil surface, (2) soil moisture content, and (3) soil surface cover. First, the scattering problem from bare-soil surfaces was considered, and a hybrid model that relates the radar backscattering coefficient to soil moisture and surface roughness was developed. This model is based on extensive experimental measurements of the radar polarimetric backscatter response of bare soil surfaces at microwave frequencies over a wide range of moisture conditions and roughness scales, in conjunction with existing theoretical surface scattering models in limiting cases (small perturbation, physical optics, and geometrical optics models). Also, a simple inversion algorithm capable of providing accurate estimates of soil moisture content and surface rms height from single-frequency multi-polarization radar observations was developed. The accuracy of the model and its inversion algorithm is demonstrated using independent data sets. Next, the hybrid model for bare-soil surfaces was made fully polarimetric by incorporating the parameters of the co- and cross-polarized phase difference into the model. Experimental data in conjunction with numerical simulations were used to relate the soil moisture content and surface roughness to the phase difference statistics. For this purpose, a novel numerical scattering simulation for inhomogeneous dielectric random surfaces was developed. Finally, the scattering problem of short vegetation cover above a rough soil surface was considered. A general scattering model for grass blades of arbitrary cross section was developed and incorporated in a first-order random media model. The vegetation model and the bare-soil model were combined, and the accuracy of the combined model was evaluated against experimental observations from a wheat field over the entire growing season. A complete set of ground-truth data and polarimetric backscatter data was collected. Also, an inversion algorithm for estimating soil moisture and surface roughness from multi-polarized multi-frequency observations of vegetation-covered ground was developed.
Multiscale 3D Shape Analysis using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2013-01-01
Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data. PMID:16685992
Multiscale 3D shape analysis using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen R
2005-01-01
Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.
Partitioning of functional gene expression data using principal points.
Kim, Jaehee; Kim, Haseong
2017-10-12
DNA microarrays offer motivation and hope for the simultaneous study of variations in multiple genes. Gene expression is a temporal process that allows variations in expression levels with a characterized gene function over a period of time. Temporal gene expression curves can be treated as functional data, since they are considered independent realizations of a stochastic process. This process requires appropriate models to identify patterns of gene functions. Partitioning of the functional data can find homogeneous subgroups of entities among the massive numbers of genes within the inherent biological networks, so it can be a useful technique for the analysis of time-course gene expression data. We propose a new self-consistent partitioning method of functional coefficients for individual expression profiles based on an orthonormal basis system. A principal-points-based functional partitioning method is proposed for time-course gene expression data. The method explores the relationship between genes using Legendre coefficients as principal points to extract the features of gene functions. Our proposed method yields highly connected clusters for simulated data and finds significant subsets of genes with increased connectivity. Our approach has the comparative advantages that fewer coefficients are used from the functional data and that the principal points are self-consistent for partitioning. As real-data applications, we find partitioned genes in budding yeast and Escherichia coli gene expression data. The proposed method benefits from the use of principal points, dimension reduction, and the choice of orthogonal basis system, and provides appropriately connected genes in the resulting subsets. We illustrate our method by applying it to sets of cell-cycle-regulated time-course yeast genes and E. coli genes. The proposed method is able to identify highly connected genes and to explore the complex dynamics of biological systems in functional genomics.
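The feature-extraction idea is easy to prototype: fit each expression profile with a low-order Legendre basis and partition the genes in coefficient space. In the sketch below, k-means is a stand-in for the principal-points partitioning; the synthetic profiles, basis order, and cluster count are assumptions.

```python
# Sketch: Legendre coefficients as features for clustering time-course profiles.
import numpy as np
from numpy.polynomial import legendre
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(-1, 1, 24)                       # 24 time points, scaled to [-1, 1]
genes = np.vstack([np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
                   for f in rng.choice([0.5, 1.0, 2.0], size=90)])

order = 5
coeffs = np.array([legendre.legfit(t, g, deg=order) for g in genes])  # (90, 6)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coeffs)
print(np.bincount(labels))
```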
Fabbrizio, Alessandro; Stalder, Roland; Hametner, Kathrin; Günther, Detlef
2013-01-01
Cl partition coefficients between forsterite, enstatite and coexisting Cl-bearing aqueous fluids were determined in a series of high pressure and temperature piston cylinder experiments at 2 GPa between 900 and 1300 °C in the system MgO–SiO2–H2O–NaCl–BaO–C±CaCl2±TiO2±Al2O3±F. Diamond aggregates were added to the experimental capsule set-up in order to separate the fluid from the solid residue and enable in situ analysis of the quenched solute by LA-ICP-MS. The chlorine content of forsterite and enstatite was measured by electron microprobe, and the nature of hydrous defects was investigated by infrared spectroscopy. Partition coefficients show similar incompatibility for Cl in forsterite and enstatite, with D_Cl(fo/fl) = 0.0012 ± 0.0006, D_Cl(en/fl) = 0.0018 ± 0.0008 and D_Cl(fo/en) = 1.43 ± 0.71. The values determined for mineral/fluid partitioning are very similar to previously determined values for mineral/melt. Applying the new mineral/fluid partition coefficients to fluids in subduction zones, a contribution between 0.15% and 20% of the total chlorine from the nominally anhydrous minerals is estimated. Infrared spectra of experimental forsterite show absorption bands at 3525 and 3572 cm−1 that are characteristic for hydroxyl point defects associated with trace Ti substitutions, and strongly suggest that the TiO2 content of the system can influence the chlorine and OH incorporation via the stabilization of Ti-clinohumite-like point defects. The water contents of coexisting forsterite and enstatite in some runs were determined using unpolarized IR spectra, and calculated water partition coefficients D_H2O(fo/en) are between 0.01 and 0.5. PMID:25843971
Quantifying alkane emissions in the Eagle Ford Shale using boundary layer enhancement
NASA Astrophysics Data System (ADS)
Roest, Geoffrey; Schade, Gunnar
2017-09-01
The Eagle Ford Shale in southern Texas is home to a booming unconventional oil and gas industry, the climate and air quality impacts of which remain poorly quantified due to uncertain emission estimates. We used the atmospheric enhancement of alkanes from Texas Commission on Environmental Quality volatile organic compound monitors across the shale, in combination with back trajectory and dispersion modeling, to quantify C2-C4 alkane emissions for a region in southern Texas, including the core of the Eagle Ford, for a set of 68 days from July 2013 to December 2015. Emissions were partitioned into raw natural gas and liquid storage tank sources using gas and headspace composition data, respectively, and observed enhancement ratios. We also estimate methane emissions based on typical ethane-to-methane ratios in gaseous emissions. The median emission rate from raw natural gas sources in the shale, calculated as a percentage of the total produced natural gas in the upwind region, was 0.7 % with an interquartile range (IQR) of 0.5-1.3 %, below the US Environmental Protection Agency's (EPA) current estimates. However, storage tanks contributed 17 % of methane emissions, 55 % of ethane, 82 % of propane, 90 % of n-butane, and 83 % of isobutane emissions. The inclusion of liquid storage tank emissions results in a median emission rate of 1.0 % (IQR of 0.7-1.6 %) relative to produced natural gas, overlapping the current EPA estimate of roughly 1.6 %. We conclude that emissions from liquid storage tanks are likely a major source for the observed non-methane hydrocarbon enhancements in the Northern Hemisphere.
NASA Astrophysics Data System (ADS)
Weber, R. J.; Guo, H.; Russell, A. G.; Nenes, A.
2015-12-01
pH is a critical aerosol property that impacts many atmospheric processes, including biogenic secondary organic aerosol formation, gas-particle phase partitioning, and mineral dust or redox metal mobilization. Particle pH has also been linked to adverse health effects. Using a comprehensive data set from the Southern Oxidant and Aerosol Study (SOAS) as the basis for thermodynamic modeling, we have shown that particles are currently highly acidic in the southeastern US, with pH between 0 and 2. Sulfate and ammonium are the main acid-base components that determine particle pH in this region; however, they have different sources and their concentrations are changing. Over 15 years of network data show that sulfur dioxide emission reductions have resulted in a roughly 70 percent decrease in sulfate, whereas ammonia emissions, mainly linked to agricultural activities, have been largely steady, as have gas phase ammonia concentrations. This has led to the view that particles are becoming more neutralized. However, sensitivity analysis, based on thermodynamic modeling, to changing sulfate concentrations indicates that particles have remained highly acidic over the past decade, despite the large reductions in sulfate. Furthermore, anticipated continued reductions of sulfate and relatively constant ammonia emissions into the future will not significantly change particle pH until sulfate drops to clean continental background levels. The result reshapes our expectation of future particle pH and implies that atmospheric processes and adverse health effects linked to particle acidity will remain unchanged for some time into the future.
NASA Astrophysics Data System (ADS)
Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter
2011-12-01
At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in R^n, n ∈ N, combined with smooth convex partition of unity for several general types of partitions of these domains were proposed in [1]. All of these constructions were based on a new type of B-splines, proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we shall provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these two bases are well-known; the new part of the construction is the combined use of these bases for the derivation of a new basis which enjoys having all above-said interpolation and unity partition properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases in the construction; now we shall put the main emphasis on the case when these bases consist of tensor-product B-splines. This selection provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction becomes a far-going extension of tensor-product constructions. We shall provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we shall consider also replacement of ERBS with simpler generalized ERBS (GERBS) [2], namely, their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS). One advantage of using BFBS instead of ERBS is the simplified computation, since BFBS are piecewise polynomial, which ERBS are not. One disadvantage of using BFBS in the place of ERBS in this construction is that the necessary selection of the degree of BFBS imposes constraints on the maximal possible multiplicity of the Hermite interpolation.
NASA Astrophysics Data System (ADS)
Ditsche, Petra; Hicks, Madeline; Truong, Lisa; Linkem, Christina; Summers, Adam
2017-04-01
The Northern clingfish is a small, Eastern North Pacific fish that can attach to rough, fouled rocks in the intertidal. Their ability to attach to surfaces has been measured previously in the laboratory, and in this study we characterize the roughness and fouling of the natural habitat of these fish. We introduce a new method for measuring the surface roughness of natural substrates with time-limited accessibility. We expect this method to be broadly applicable in studies of animal/substrate surface interactions in habitats that are difficult to characterize. Our roughness measurements demonstrate that the fish's ability to attach to very coarse roughness is required in its natural environment; some of the rocks showed even coarser roughness than the fish could attach to in the lab setting. We also characterized the clingfish's preference with respect to other habitat descriptors such as rock size, biofilm and Aufwuchs (macroalgae, encrusting invertebrates) cover, and the grain size of the underlying substrate. Northern clingfish seek shelter under rocks of 15-45 cm in size. These rocks have variable Aufwuchs cover, and gravel is the main underlying substrate type. In the intertidal, environmental conditions change with the tides, and for clingfish the daily time under water (DTUW%) was a key parameter explaining distribution. Rather than location being determined by intertidal zonation, the fish required roughly 80% DTUW, a finer-scale measure of tidal inundation. We expect that this is likely because the mobility of the fish allows them to more closely track the ideal inundation in the marine intertidal.
NASA Technical Reports Server (NTRS)
Schwandt, C. S.; McKay, G. A.
1996-01-01
Determining the petrogenesis of eucrites (basaltic achondrites) and diogenites (orthopyroxenites) and the possible links between the meteorite types was initiated 30 years ago by Mason. Since then, most investigators have worked on this question. A few contrasting theories have emerged, with the important distinction being whether or not there is a direct genetic link between eucrites and diogenites. One theory suggests that diogenites are cumulates resulting from the fractional crystallization of a parent magma, with the eucrites crystallizing from the residual magma after separation from the diogenite cumulates. Another model proposes that diogenites are cumulates formed from partial melts derived from a source region depleted by the prior generation of eucrite melts. It has also been proposed that the diogenites may not be directly linked to the eucrites and that they are cumulates derived from melts that are more orthopyroxene normative than the eucrites. This last theory has recently received more analytical and experimental support. One of the difficulties with petrogenetic modeling is that it requires appropriate partition coefficients, which are dependent on temperature, pressure, and composition. For this reason, we set out to determine minor- and trace-element partition coefficients for diogenite-like orthopyroxene. We have accomplished this task and now have enstatite/melt partition coefficients for Al, Cr, Ti, La, Ce, Nd, Sm, Eu, Dy, Er, Yb, and La.
Multiphase complete exchange: A theoretical analysis
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
Complete Exchange requires each of N processors to send a unique message to each of the remaining N-1 processors. For a circuit switched hypercube with N = 2^d processors, the Direct and Standard algorithms for Complete Exchange are optimal for very large and very small message sizes, respectively. For intermediate sizes, a hybrid Multiphase algorithm is better. This carries out Direct exchanges on a set of subcubes whose dimensions are a partition of the integer d. The best such algorithm for a given message size m could hitherto only be found by enumerating all partitions of d. The Multiphase algorithm is analyzed assuming a high performance communication network. It is proved that only algorithms corresponding to equipartitions of d (partitions in which the maximum and minimum elements differ by at most 1) can possibly be optimal. The run times of these algorithms plotted against m form a hull of optimality. It is proved that, although there is an exponential number of partitions, (1) the number of faces on this hull is Theta(sqrt(d)), (2) the hull can be found in Theta(sqrt(d)) time, and (3) once it has been found, the optimal algorithm for any given m can be found in Theta(log d) time. These results provide a very fast technique for minimizing communication overhead in many important applications, such as matrix transpose, Fast Fourier transform, and ADI.
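Since only equipartitions can be optimal, the search space collapses from all partitions of d to just d candidates, one per phase count k. The sketch below picks the cheapest schedule for a message size m under an assumed linear startup-plus-bandwidth cost model with illustrative constants; the paper derives the optimal choice analytically rather than by search.

```python
# Toy cost model (assumed, not from the paper): a Direct exchange on a
# di-dimensional subcube takes 2^di - 1 exchanges, each carrying the message
# pieces destined for the 2^(d - di) processors reached through other phases.
def equipartition(d, k):
    q, r = divmod(d, k)
    return [q + 1] * r + [q] * (k - r)      # parts differ by at most one

def schedule_cost(parts, d, m, startup=50.0, per_unit=1.0):
    return sum((2 ** di - 1) * (startup + per_unit * m * 2 ** (d - di))
               for di in parts)

def best_schedule(d, m):
    # Only the d equipartitions (k = 1 .. d) need to be examined.
    return min((schedule_cost(equipartition(d, k), d, m), equipartition(d, k))
               for k in range(1, d + 1))

print(best_schedule(8, 1))      # small m favors more, smaller phases
print(best_schedule(8, 4096))   # large m: a single phase (Direct) wins
```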
Surface roughness evaluation on mandrels and mirror shells for future X-ray telescopes
NASA Astrophysics Data System (ADS)
Sironi, Giorgia; Spiga, D.
2008-07-01
Several X-ray missions that will be operating in the near future, in particular SIMBOL-X, e-Rosita, Con-X/HXT, SVOM/XIAO and Polar-X, will be based on focusing optics manufactured by means of the Ni electroforming replication technique. This production method has already been successfully exploited for SAX, XMM and Swift-XRT. Optical surfaces for X-ray reflection have to be as smooth as possible, also at high spatial frequencies. Hence it will be crucial to keep microroughness under control in order to reduce scattering effects: a high rms microroughness would cause degradation of the angular resolution and loss of effective area. Stringent requirements therefore have to be set for mirror-shell surface roughness, depending on the specific energy range investigated, and roughness evolution has to be carefully monitored during the subsequent steps of mirror-shell fabrication. This means studying the roughness evolution along the chain from mandrel to mirror shell and multilayer deposition, as well as the degradation of mandrel roughness after repeated replicas. Such a study allows inferring which phases of production are mainly responsible for roughness growth and could help find solutions that optimize the processes involved. The study presented here is carried out in the context of the technological consolidation related to SIMBOL-X, along with a systematic metrological study of mandrels and mirror shells. To monitor the roughness increase following each replica, a multi-instrumental approach was adopted: microprofiles were analysed by means of their Power Spectral Density (PSD) in the spatial frequency range 1000-0.01 μm. This enables the direct comparison of roughness data taken with instruments characterized by different operative frequency ranges, in particular optical interferometers and Atomic Force Microscopes. The performed analysis allowed us to set realistic specifications on the mandrel roughness to be achieved, and to suggest a limit for the maximum number of replicas a mandrel can undergo before being refurbished.
Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory.
Delgado-Friedrichs, Olaf; Robins, Vanessa; Sheppard, Adrian
2015-03-01
We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling.
Integrated data lookup and replication scheme in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Chen, Kai; Nahrstedt, Klara
2001-11-01
Accessing remote data is a challenging task in mobile ad hoc networks. Two problems have to be solved: (1) how to learn about available data in the network; and (2) how to access desired data even when the original copy of the data is unreachable. In this paper, we develop an integrated data lookup and replication scheme to solve these problems. In our scheme, a group of mobile nodes collectively host a set of data to improve data accessibility for all members of the group. They exchange data availability information by broadcasting advertising (ad) messages to the group using an adaptive sending rate policy. The ad messages are used by other nodes to derive a local data lookup table, and to reduce data redundancy within a connected group. Our data replication scheme predicts group partitioning based on each node's current location and movement patterns, and replicates data to other partitions before partitioning occurs. Our simulations show that data availability information can quickly propagate throughout the network, and that the successful data access ratio of each node is significantly improved.
In this study, modeled gas- and aerosol-phase ammonia, nitric acid, and hydrogen chloride are compared to measurements taken during a field campaign conducted in northern Colorado in February and March 2011. We compare the modeled and observed gas-particle partitioning, and assess potential reasons for discrepancies between the model and measurements. This data set contains scripts and data used for each figure in the associated manuscript. Figures are generated using the R project statistical programming language. Data files are in either comma-separated value (CSV) format or netCDF, a standard self-describing binary data format commonly used in the earth and atmospheric sciences. This dataset is associated with the following publication: Kelly, J., K. Baker, C. Nolte, S. Napelenok, W.C. Keene, and A.A.P. Pszenny. Simulating the phase partitioning of NH3, HNO3, and HCl with size-resolved particles over northern Colorado in winter. Atmospheric Environment, 131: 67-77 (2016).
A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.
Brusco, Michael J; Shireman, Emilie; Steinley, Douglas
2017-09-01
The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
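A compact way to see the K-means versus K-median contrast on dichotomous data is to implement the L1/median centroid update directly. The sketch below is a toy stand-in for the paper's simulation design; the data generator, noise rate, and cluster count are arbitrary assumptions.

```python
# Sketch: k-means vs. a minimal k-median (L1 distance, median update) on
# noisy copies of binary prototype profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
proto = rng.integers(0, 2, size=(3, 12))                  # 3 latent binary profiles
y = rng.integers(0, 3, size=300)
X = np.where(rng.random((300, 12)) < 0.15, 1 - proto[y], proto[y])  # 15% bit flips

def k_median(X, k, iters=25, rng=rng):
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)  # L1 distances
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)       # median update
    return labels

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
kmed = k_median(X, 3)
print(np.bincount(km), np.bincount(kmed))
```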
New gap-filling and partitioning technique for H2O eddy fluxes measured over forests
NASA Astrophysics Data System (ADS)
Kang, Minseok; Kim, Joon; Malla Thakuri, Bindu; Chun, Junghwa; Cho, Chunho
2018-01-01
The continuous measurement of H2O fluxes using the eddy covariance (EC) technique is still challenging for forests because of large amounts of wet canopy evaporation (EWC), which occur during and following rain events when the EC systems rarely work correctly. We propose a new gap-filling and partitioning technique for the H2O fluxes: a model-statistics hybrid (MSH) method. It enables the recovery of the missing EWC in the traditional gap-filling method and the partitioning of the evapotranspiration (ET) into transpiration and (wet canopy) evaporation. We tested and validated the new method using the data sets from two flux towers, which are located at forests in hilly and complex terrains. The MSH reasonably recovered the missing EWC of 16-41 mm yr−1 and separated it from the ET (14-23 % of the annual ET). Additionally, we illustrated certain advantages of the proposed technique which enable us to understand better how ET responds to environmental changes and how the water cycle is connected to the carbon cycle in a forest ecosystem.
Black holes in higher spin supergravity
NASA Astrophysics Data System (ADS)
Datta, Shouvik; David, Justin R.
2013-07-01
We study black hole solutions in Chern-Simons higher spin supergravity based on the superalgebra sl(3|2). These black hole solutions have a U(1) gauge field and a spin 2 hair in addition to the spin 3 hair. These additional fields correspond to the R-symmetry charges of the supergroup sl(3|2). Using the relation between the bulk field equations and the Ward identities of a CFT with N = 2 super-W_3 symmetry, we identify the bulk charges and chemical potentials with those of the boundary CFT. From these identifications we see that a suitable set of variables to study this black hole is in terms of the charges present in three decoupled bosonic sub-algebras of the N = 2 super-W_3 algebra. The entropy and the partition function of these R-charged black holes are then evaluated in terms of the charges of the bulk theory as well as in terms of its chemical potentials. We then compute the partition function in the dual CFT and find exact agreement with the bulk partition function.
Clustering Financial Time Series by Network Community Analysis
NASA Astrophysics Data System (ADS)
Piccardi, Carlo; Calatroni, Lisa; Bertoni, Fabio
In this paper, we describe a method for clustering financial time series which is based on community analysis, a recently developed approach for partitioning the nodes of a network (graph). A network with N nodes is associated to the set of N time series. The weight of the link (i, j), which quantifies the similarity between the two corresponding time series, is defined according to a metric based on symbolic time series analysis, which has recently proved effective in the context of financial time series. Then, searching for network communities allows one to identify groups of nodes (and then time series) with strong similarity. A quantitative assessment of the significance of the obtained partition is also provided. The method is applied to two distinct case-studies concerning the US and Italy Stock Exchange, respectively. In the US case, the stability of the partitions over time is also thoroughly investigated. The results favorably compare with those obtained with the standard tools typically used for clustering financial time series, such as the minimal spanning tree and the hierarchical tree.
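The pipeline described here (time series, then a similarity-weighted graph, then community detection) can be prototyped in a few lines. In this sketch, correlation stands in for the paper's symbolic-analysis similarity metric, and greedy modularity optimization stands in for their community analysis; both substitutions are assumptions.

```python
# Sketch: cluster return series via communities of a similarity-weighted graph.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(3)
returns = rng.standard_normal((20, 500))          # 20 synthetic return series
returns[:10] += 0.5 * rng.standard_normal(500)    # shared factor -> one group

corr = np.corrcoef(returns)
G = nx.Graph()
for i in range(20):
    for j in range(i + 1, 20):
        w = max(corr[i, j], 0.0)                  # similarity weight in [0, 1]
        if w > 0:
            G.add_edge(i, j, weight=w)

parts = community.greedy_modularity_communities(G, weight="weight")
print([sorted(p) for p in parts])                 # groups of similar series
```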
NASA Astrophysics Data System (ADS)
Faribault, Alexandre; Tschirhart, Hugo; Muller, Nicolas
2016-05-01
In this work we present a determinant expression for the domain-wall boundary condition partition function of rational (XXX) Richardson-Gaudin models which, in addition to N-1 spins 1/2, contains one arbitrarily large spin S. The proposed determinant representation is written in terms of a set of variables which, from previous work, are known to define eigenstates of the quantum integrable models belonging to this class as solutions to quadratic Bethe equations. Such a determinant can be useful numerically since systems of quadratic equations are much simpler to solve than the usual highly nonlinear Bethe equations. It can therefore offer significant gains in stability and computation speed.
NASA Astrophysics Data System (ADS)
Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel
2010-12-01
A double-atom partitioning of the molecular one-electron density matrix is used to describe atoms and bonds. All calculations are performed in Hilbert space. The concept of atomic weight functions (familiar from Hirshfeld analysis of the electron density) is extended to atomic weight matrices. These are constructed to be orthogonal projection operators on atomic subspaces, which has significant advantages in the interpretation of the bond contributions. In close analogy to the iterative Hirshfeld procedure, self-consistency is built in at the level of atomic charges and occupancies. The method is applied to a test set of about 67 molecules, representing various types of chemical binding. A close correlation is observed between the atomic charges and the Hirshfeld-I atomic charges.
Density of states, Potts zeros, and Fisher zeros of the Q-state Potts model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seung-Yeon; Creswick, Richard J.
2001-06-01
The Q-state Potts model can be extended to noninteger and even complex Q by expressing the partition function in the Fortuin-Kasteleyn (F-K) representation. In the F-K representation the partition function Z(Q, a) is a polynomial in Q and v = a - 1 (a = e^{beta J}), and the coefficients of this polynomial, Phi(b, c), are the number of graphs on the lattice consisting of b bonds and c connected clusters. We introduce the random-cluster transfer matrix to compute Phi(b, c) exactly on finite square lattices with several types of boundary conditions. Given the F-K representation of the partition function, we begin by studying the critical Potts model Z_CP = Z(Q, a_c(Q)), where a_c(Q) = 1 + sqrt(Q). We find a set of zeros in the complex w = sqrt(Q) plane that map to (or close to) the Beraha numbers for real positive Q. We also identify Q~_c(L), the value of Q for a lattice of width L above which the locus of zeros in the complex p = v/sqrt(Q) plane lies on the unit circle. By finite-size scaling we find that 1/Q~_c(L) -> 0 as L -> infinity. We then study zeros of the antiferromagnetic (AF) Potts model in the complex Q plane and determine Q_c(a), the largest value of Q for a fixed value of a below which there is AF order. We find excellent agreement with Baxter's conjecture Q_c^AF(a) = (1 - a)(a + 3). We also investigate the locus of zeros of the ferromagnetic Potts model in the complex Q plane and confirm that Q_c^FM(a) = (a - 1)^2. We show that the edge singularity in the complex Q plane approaches Q_c as Q_c(L) ~ Q_c + A L^{-y_q}, and determine the scaling exponent y_q for several values of Q. Finally, by finite-size scaling of the Fisher zeros near the antiferromagnetic critical point we determine the thermal exponent y_t as a function of Q in the range 2 <= Q <= 3. Using data for lattices of size 3 <= L <= 8 we find that y_t is a smooth function of Q and is well fitted by y_t = (1 + Au + Bu^2)/(C + Du), where u = -(2/pi) cos^{-1}(sqrt(Q)/2). For Q = 3 we find y_t ~ 0.6; however, if we include lattices up to L = 12 we find y_t ~ 0.50(8), in rough agreement with a recent result of Ferreira and Sokal [J. Stat. Phys. 96, 461 (1999)].
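The F-K expansion itself is easy to verify by brute force on a tiny lattice: summing v^b Q^c over all bond subsets reproduces the spin-sum partition function for integer Q, while remaining a polynomial in Q that extends to noninteger values. The 2x2 lattice and parameter values below are illustrative only; the paper's transfer-matrix method computes the coefficients Phi(b, c) efficiently on much larger strips.

```python
# Brute-force check of Z(Q, v) = sum over bond subsets of v^b * Q^c, where
# c counts connected components (including isolated sites).
from itertools import combinations, product
import math
import networkx as nx

G = nx.grid_2d_graph(2, 2)                 # 2x2 lattice: 4 sites, 4 bonds
nodes, edges = list(G.nodes()), list(G.edges())

def Z_fk(Q, v):
    total = 0.0
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            H = nx.Graph()
            H.add_nodes_from(nodes)
            H.add_edges_from(subset)
            total += (v ** r) * (Q ** nx.number_connected_components(H))
    return total

def Z_spin(Q, v):
    # Spin-sum definition: e^{beta J delta} = 1 + v * delta, with v = e^{beta J} - 1
    total = 0.0
    for s in product(range(Q), repeat=len(nodes)):
        spin = dict(zip(nodes, s))
        total += math.prod(1 + v * (spin[i] == spin[j]) for i, j in edges)
    return total

print(Z_fk(3, 1.5), Z_spin(3, 1.5))   # agree for integer Q
print(Z_fk(2.37, 1.5))                # the F-K form extends to noninteger Q
```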
Automated extraction of decision rules for leptin dynamics--a rough sets approach.
Brtka, Vladimir; Stokić, Edith; Srdić, Biljana
2008-08-01
A significant area in the field of medical informatics is concerned with the learning of medical models from low-level data. The goals of inducing models from data are twofold: analysis of the structure of the models so as to gain new insight into the unknown phenomena, and development of classifiers or outcome predictors for unseen cases. In this paper, we employ an approach based on the indiscernibility relation and rough sets theory to study certain questions concerning the design of a model based on if-then rules, from low-level data comprising 36 parameters, one of them leptin. To generate a model that is easy to read, interpret, and inspect, we have used the ROSETTA software system. The main goal of this work is to gain new insight into the phenomenon of leptin levels as they interplay with other risk factors in obesity.
Impact of the ongoing Amazonian deforestation on local precipitation: A GCM simulation study
NASA Technical Reports Server (NTRS)
Walker, G. K.; Sud, Y. C.; Atlas, R.
1995-01-01
Numerical simulation experiments were conducted to delineate the influence of in situ deforestation data on episodic rainfall by comparing two ensembles of five 5-day integrations performed with a recent version of the Goddard Laboratory for Atmospheres General Circulation Model (GCM) that has a simple biosphere model (SiB). The first set, called control cases, used the standard SiB vegetation cover (comprising 12 biomes) and assumed a fully forested Amazonia, while the second set, called deforestation cases, distinguished the partially deforested regions of Amazonia as savanna. Except for this difference, all other initial and prescribed boundary conditions were kept identical in both sets of integrations. The differential analyses of these five cases show the following local effects of deforestation. (1) A discernible decrease in evapotranspiration of about 0.80 mm/d (roughly 18%) that is quite robust in the averages for 1-, 2-, and 5-day forecasts. (2) A decrease in precipitation of about 1.18 mm/d (roughly 8%) that begins to emerge even in 1-2 day averages and exhibits complex evolution that extends downstream with the winds. (3) A significant decrease in the surface drag force (as a consequence of reduced surface roughness of deforested regions) that, in turn, affects the dynamical structure of moisture convergence and circulation. The surface winds increase significantly during the first day, and thereafter the increase is well maintained even in the 2- and 5-day averages.
NASA Astrophysics Data System (ADS)
Vidal Vázquez, E.; Miranda, J. G. V.; Mirás-Avalos, J. M.; Díaz, M. C.; Paz-Ferreiro, J.
2009-04-01
Mathematical description of the spatial characteristics of soil surface microrelief remains a challenge. Soil surface roughness parameters are required for modelling overland flow and erosion. The objective of this work was to evaluate the potential of multifractal analysis for describing the decay of initial surface roughness induced by natural rainfall under different soil tillage systems. Field experiments were performed on an Oxisol at Campinas, São Paulo State (Brazil). Six tillage treatments, namely, disc harrow, disc plow, chisel plow, disc harrow + disc level, disc plow + disc level and chisel plow + disc level, were tested. In each plot, soil surface microrelief was measured four times, with increasing amounts of natural rainfall, using a pinmeter. The sampling scheme was a square grid with 25 x 25 mm point spacing and a plot size of 1350 x 1350 mm, so that each data set consisted of 3025 individual elevation points. Duplicate measurements were taken per treatment and date, yielding a total of 48 experimental data sets. All the investigated microrelief data sets exhibited, in general, scaling properties, and the degree of multifractality showed wide differences between them. Multifractal analysis distinguished two different patterns of soil surface microrelief: the first has features close to monofractal spectra and the second clearly indicates multifractal behavior. Both singularity spectra and generalized dimension spectra allow differentiation between soil tillage systems. In general, changes in the values of multifractal parameters with rainfall showed little or no correspondence with the evolution of the vertical microrelief component described by indices such as the standard deviation of the point height measurements. Multifractal parameters provided valuable information for characterizing the spatial features of soil surface microrelief, as they were able to discriminate data sets with similar values of the vertical roughness component.
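For readers unfamiliar with the machinery, generalized dimension spectra of the kind used here can be estimated with a standard moment (box-counting) method: coarse-grain a normalised surface measure at several box sizes, fit the mass exponents tau(q) from log-log slopes, and set D_q = tau(q)/(q - 1). The sketch below uses a synthetic field; the grid size, box sizes, and q values are arbitrary choices, not the study's settings.

```python
# Sketch: moment-based estimate of generalized dimensions D_q for a 2D measure.
import numpy as np

rng = np.random.default_rng(5)
z = rng.lognormal(0.0, 1.0, size=(128, 128))   # stand-in surface measure
z /= z.sum()                                   # normalise to a probability measure

def box_measures(p, s):
    # Sum the measure within non-overlapping s x s boxes.
    n = p.shape[0] // s
    return p[:n*s, :n*s].reshape(n, s, n, s).sum(axis=(1, 3)).ravel()

qs = np.array([-2.0, -1.0, 0.0, 1.01, 2.0, 3.0])   # 1.01 sidesteps the q = 1 limit
sizes = np.array([2, 4, 8, 16, 32])
log_chi = np.empty((len(qs), len(sizes)))
for j, s in enumerate(sizes):
    p = box_measures(z, s)
    p = p[p > 0]
    log_chi[:, j] = [np.log(np.sum(p ** q)) for q in qs]

eps = np.log(sizes / 128.0)                        # log of relative box size
tau = np.array([np.polyfit(eps, row, 1)[0] for row in log_chi])  # mass exponents
D_q = tau / (qs - 1.0)
print(dict(zip(qs.tolist(), np.round(D_q, 3).tolist())))
```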
Efficient Interconnection Schemes for VLSI and Parallel Computation
1989-08-01
Definition: Let R be a routing network. A set S of wires in R is a (directed) cut if it partitions the network into two sets of processors A and B ... such that every path from a processor in A to a processor in B contains a wire in S. The capacity cap(S) is the number of wires in the cut. For a set of ... messages M, define the load load(M, S) of M on a cut S to be the number of messages in M from a processor in A to a processor in B. The load factor
Procedure of Partitioning Data Into Number of Data Sets or Data Group - A Review
NASA Astrophysics Data System (ADS)
Kim, Tai-Hoon
The goal of clustering is to decompose a dataset into similar groups based on an objective function. Several well-established algorithms exist for data clustering. The objective of these data clustering algorithms is to divide the data points of the feature space into a number of groups (or classes) so that a predefined set of criteria is satisfied. This article presents a comparative study of the effectiveness and efficiency of traditional data clustering algorithms. To evaluate the performance of the clustering algorithms, the Minkowski score is used on different data sets.
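The Minkowski score referred to here is commonly computed from co-membership matrices: the normalised distance between the pairwise "same cluster" indicator of the solution and that of the reference labels, with 0 indicating perfect agreement. A minimal sketch of that formula, under the assumption that this co-membership form is the variant intended:

```python
# Sketch: Minkowski score between a clustering and reference labels
# (0 is perfect; lower is better).
import numpy as np

def minkowski_score(labels_true, labels_pred):
    t = np.equal.outer(labels_true, labels_true).astype(float)  # reference co-membership
    c = np.equal.outer(labels_pred, labels_pred).astype(float)  # solution co-membership
    np.fill_diagonal(t, 0.0); np.fill_diagonal(c, 0.0)          # ignore self-pairs
    return np.linalg.norm(c - t) / np.linalg.norm(t)

print(minkowski_score([0, 0, 1, 1], [0, 0, 1, 1]))   # 0.0: identical partitions
print(minkowski_score([0, 0, 1, 1], [0, 1, 0, 1]))   # > 0: disagreement penalised
```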
Ferguson, V L
2009-08-01
The relative contributions of elastic, plastic, and viscous material behavior are poorly described by the separate extraction and analysis of the plane strain modulus E′, the contact hardness Hc (a hybrid parameter encompassing both elastic and plastic behavior), and various viscoelastic material constants. A multiple-element mechanical model enables the partitioning of a single indentation response into its fundamental elastic, plastic, and viscous deformation components. The objective of this study was to apply deformation partitioning to explore the role of hydration, tissue type, and degree of mineralization in bone and calcified cartilage. Wet, ethanol-dehydrated, and PMMA-embedded equine cortical bone samples and PMMA-embedded human femoral head tissues were analyzed for contributions of elastic, plastic and viscous deformation to the overall nanoindentation response at each site. While the alteration of hydration state had little effect on any measure of deformation, unembedded tissues demonstrated significantly greater resistance to plastic deformation than PMMA-embedded tissues. The PMMA appeared to mechanically stabilize the tissues and prevent extensive permanent deformation within the bone material. Increasing mineral volume fraction correlated with positive changes in E′, Hc, and the resistance to plastic deformation, H; however, the partitioned deformation components were generally unaffected by mineralization. The contribution of viscous deformation was minimal and may only play a significant role in poorly mineralized tissues. Deformation partitioning enables a detailed interpretation of the elastic, plastic, and viscous contributions to the nanomechanical behavior of mineralized tissues that is not possible when examining modulus and contact hardness alone. Varying experimental or biological factors, such as hydration or mineralization level, enables the understanding of potential mechanisms for specific mechanical behavior patterns that would otherwise be hidden within a more complex set of material property parameters.
Robakowski, Piotr; Bielinis, Ernest; Sendall, Kerrie
2018-05-01
This study addressed whether competition under different light environments was reflected by changes in leaf absorbed light energy partitioning, photosynthetic efficiency, relative growth rate and biomass allocation in invasive and native competitors. Additionally, a potential allelopathic effect of mulching with invasive Prunus serotina leaves on native Quercus petraea growth and photosynthesis was tested. The effect of light environment on leaf absorbed light energy partitioning and photosynthetic characteristics was more pronounced than the effects of interspecific competition and allelopathy. The quantum yield of PSII of invasive P. serotina increased in the presence of a competitor, indicating a higher plasticity in energy partitioning for the invasive over the native Q. petraea, giving it a competitive advantage. The most striking difference between the two study species was the higher crown-level net CO2 assimilation rates (A_crown) of P. serotina compared with Q. petraea. At the juvenile life stage, higher relative growth rate and higher biomass allocation to foliage allowed P. serotina to absorb and use light energy for photosynthesis more efficiently than Q. petraea. Species-specific strategies of growth, biomass allocation, light energy partitioning and photosynthetic efficiency varied with the light environment and gave an advantage to the invader over its native competitor in competition for light. However, higher biomass allocation to roots in Q. petraea allows for greater belowground competition for water and nutrients as compared to P. serotina. This niche differentiation may compensate for the lower aboveground competitiveness of the native species and explain its ability to co-occur with the invasive competitor in natural forest settings.
Experimental constraints on the sulfur content in the Earth's core
NASA Astrophysics Data System (ADS)
Fei, Y.; Huang, H.; Leng, C.; Hu, X.; Wang, Q.
2015-12-01
Any core formation model would lead to the incorporation of sulfur (S) into the Earth's core, based on cosmochemical/geochemical constraints, sulfur's chemical affinity for iron (Fe), and the low eutectic melting temperature in the Fe-FeS system. Preferential partitioning of S into the melt also provides a petrologic constraint on the density difference between the liquid outer and solid inner cores. Therefore, the central issue is to constrain the amount of sulfur in the core. Geochemical constraints usually place 2-4 wt.% S in the core after accounting for its volatility, whereas more S is allowed in models based on mineral physics data. Here we re-examine the constraints on the S content in the core using both petrologic and mineral physics data. We have measured S partitioning between solid and liquid iron in the multi-anvil apparatus and the laser-heated diamond anvil cell, evaluating the effect of pressure on melting temperature and partition coefficient. In addition, we have conducted shockwave experiments on Fe-11.8wt%S using a two-stage light gas gun up to 211 GPa. The new shockwave experiments yield Hugoniot densities and the longitudinal sound velocities. The measurements provide the longitudinal sound velocity before melting and the bulk sound velocity of liquid. The measured sound velocities clearly show melting of the Fe-FeS mix with 11.8wt%S at a pressure between 111 and 129 GPa. The sound velocities at pressures above 129 GPa represent the bulk sound velocities of Fe-11.8wt%S liquid. The combined data set, including density, sound velocity, melting temperature, and S partitioning, places a tight constraint on the sulfur partition coefficient required to produce the density and velocity jumps and on the bulk sulfur content in the core.
NASA Astrophysics Data System (ADS)
Mann, Ute; Frost, Daniel J.; Rubie, David C.; Becker, Harry; Audétat, Andreas
2012-05-01
The apparent overabundance of the highly siderophile elements (HSEs: Pt-group elements, Re and Au) in the mantles of Earth, Moon and Mars has not been satisfactorily explained. Although late accretion of a chondritic component seems to provide the most plausible explanation, metal-silicate equilibration in a magma ocean cannot be ruled out due to a lack of HSE partitioning data suitable for extrapolations to the relevant high pressure and high temperature conditions. We provide a new data set of partition coefficients simultaneously determined for Ru, Rh, Pd, Re, Ir and Pt over a range of 3.5-18 GPa and 2423-2773 K. In multianvil experiments, molten peridotite was equilibrated in MgO single crystal capsules with liquid Fe-alloy that contained bulk HSE concentrations of 53.2-98.9 wt% (XFe = 0.03-0.67) such that oxygen fugacities of IW - 1.5 to IW + 1.6 (i.e. logarithmic units relative to the iron-wüstite buffer) were established at run conditions. To analyse trace concentrations of the HSEs in the silicate melt with LA-ICP-MS, two silicate glass standards (1-119 ppm Ru, Rh, Pd, Re, Ir, Pt) were produced and evaluated for this study. Using an asymmetric regular solution model we have corrected experimental partition coefficients to account for the differences between HSE metal activities in the multicomponent Fe-alloys and infinite dilution. Based on the experimental data, the P and T dependence of the partition coefficients (D) was parameterized. The partition coefficients of all HSEs studied decrease with increasing pressure and to a greater extent with increasing temperature. Except for Pt, the decrease with pressure is stronger below ˜6 GPa and much weaker in the range 6-18 GPa. This change might result from pressure induced coordination changes in the silicate liquid. Extrapolating the D values over a large range of potential P-T conditions in a terrestrial magma ocean (peridotite liquidus at P ⩽ 60-80 GPa) we conclude that the P-T-induced decrease of D would not have been sufficient to explain HSE mantle abundances by metal-silicate equilibration at a common set of P-T-oxygen fugacity conditions. Therefore, the mantle concentrations of most HSEs cannot have been established during core formation. The comparatively less siderophile Pd might have been partly retained in the magma ocean if effective equilibration pressures reached 35-50 GPa. To a much smaller extent this could also apply to Pt and Rh providing that equilibration pressures reached ⩾60 GPa in the late stage of accretion. With most of the HSE partition coefficients at 60 GPa still differing by 0.5-3 orders of magnitude, metal-silicate equilibration alone cannot have produced the observed near-chondritic HSE abundances of the mantles of the Earth as well as of the Moon or Mars. Our results show that an additional process, such as the accretion of a late veneer composed of some type of chondritic material, was required. The results, therefore, support recent hybrid models, which propose that the observed HSE signatures are a combined result of both metal-silicate partitioning as well as an overprint by late accretion.
Cortical Signatures of Heard and Imagined Speech Envelopes
2013-08-01
music has a great rhythm, and 6) The government sought authorization of his...a training set and a testing set. This partitioning was performed to prevent circular inference...and to facilitate the classification procedures described in the next few sections. The training
Multi-level trellis coded modulation and multi-stage decoding
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu
1990-01-01
Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.
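As a concrete illustration of set partitioning on a PSK signal set, the sketch below performs classical Ungerboeck-style binary splits of 8-PSK, where each level of the partition increases the minimum intra-subset Euclidean distance. This is a generic textbook construction offered for orientation, not the specific generalized set partitioning method of the paper.

```python
# Sketch: binary set partitioning of 8-PSK; each split keeps every other point,
# so the minimum Euclidean distance within a subset grows level by level.
import numpy as np

pts = np.exp(2j * np.pi * np.arange(8) / 8)        # unit-circle 8-PSK constellation

def split(indices):
    # Alternate points of the ordered subset go to the two children.
    return [indices[0::2], indices[1::2]]

def min_dist(indices):
    if len(indices) < 2:
        return np.inf
    return min(abs(pts[a] - pts[b])
               for i, a in enumerate(indices) for b in indices[i + 1:])

level = [list(range(8))]
for depth in range(3):
    print(depth, [f"{min_dist(s):.3f}" for s in level])  # 0.765 -> 1.414 -> 2.000
    level = [sub for s in level for sub in split(s)]
```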
Tribological Properties of PVD Ti/C-N Nanocoatings
NASA Astrophysics Data System (ADS)
Leitans, A.; Lungevics, J.; Rudzitis, J.; Filipovs, A.
2017-04-01
The present paper discusses and analyses tribological properties of various coatings that increase surface wear resistance. Four Ti/C-N nanocoatings with different coating deposition settings are analysed. Tribological and metrological tests on the samples are performed: 2D and 3D surface roughness parameters are measured with a modern profilometer, and the friction coefficient is measured with CSM Instruments equipment. The roughness parameters Ra, Sa, Sz, Str, Sds, Vmp and Vmc and the friction coefficient at 6 N load are determined during the experiment. The examined samples have many pores, which is the main reason for the relatively large values of the roughness parameters. Slight wear is identified in all four samples as well; the friction coefficient values range from 0.21 to 0.29. Wear rate values are not calculated for the investigated coatings, as no pronounced tribotracks are detected on the coating surface.
Reduction of Surface Roughness by Means of Laser Processing over Additive Manufacturing Metal Parts.
Alfieri, Vittorio; Argenio, Paolo; Caiazzo, Fabrizia; Sergi, Vincenzo
2016-12-31
Optimization of processing parameters and exposure strategies is usually performed in additive manufacturing to set up the process; nevertheless, standards for roughness may not be evenly matched on a single complex part, since surface features depend on the building direction of the part. This paper aims to evaluate post processing treating via laser surface modification by means of scanning optics and beam wobbling to process metal parts resulting from selective laser melting of stainless steel in order to improve surface topography. The results are discussed in terms of roughness, geometry of the fusion zone in the cross-section, microstructural modification, and microhardness so as to assess the effects of laser post processing. The benefits of beam wobbling over linear scanning processing are shown, as heat effects in the base metal are proven to be lower.
NASA Astrophysics Data System (ADS)
Garrett, S. J.; Cooper, A. J.; Harris, J. H.; Özkan, M.; Segalini, A.; Thomas, P. J.
2016-01-01
We summarise results of a theoretical study investigating the distinct convective instability properties of steady boundary-layer flow over rough rotating disks. A generic roughness pattern of concentric circles with sinusoidal surface undulations in the radial direction is considered. The goal is to compare, for the first time, predictions obtained by means of two alternative, and fundamentally different, modelling approaches for surface roughness. The motivating rationale is to identify commonalities and isolate results that might potentially represent artefacts associated with the particular methodologies underlying one of the two modelling approaches. The most significant result of practical relevance is that both approaches predict an overall stabilising effect on the type I instability mode of rotating-disk flow. This mode leads to transition of the rotating-disk boundary layer and, more generally, the transition of boundary layers with a cross-flow profile. Stabilisation of the type I mode means that it may be possible to exploit surface roughness for laminar-flow control in boundary layers with a cross-flow component. However, we also find differences between the two sets of model predictions, some subtle and some substantial. These will represent criteria for establishing which of the two alternative approaches is more suitable to correctly describe experimental data when these become available.
Mad Tea Party Cyclic Partitions
ERIC Educational Resources Information Center
Bekes, Robert; Pedersen, Jean; Shao, Bin
2012-01-01
Martin Gardner's "The Annotated Alice," and Robin Wilson's "Lewis Carroll in Numberland" led the authors to put this article in a fantasy setting. Alice, the March Hare, the Hatter, and the Dormouse describe a straightforward, elementary algorithm for counting the number of ways to fit "n" identical objects into "k" cups arranged in a circle. The…
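One standard way to verify such counts (not necessarily the elementary algorithm the characters describe) is Burnside's lemma: average, over all rotations of the circle, the number of arrangements each rotation leaves fixed. A minimal sketch:

```python
from math import comb, gcd

def cyclic_compositions(n, k):
    """Ways to put n identical objects into k cups in a circle, up to rotation."""
    total = 0
    for r in range(k):
        g = gcd(k, r)            # rotation by r splits the cups into g cycles
        cycle_len = k // g       # ...each of this length
        if n % cycle_len == 0:   # a fixed arrangement repeats on every cycle
            total += comb(n // cycle_len + g - 1, g - 1)
    return total // k            # Burnside: average over the k rotations

print(cyclic_compositions(4, 3))  # -> 5 distinct arrangements
```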
Minimum Sample Size Requirements for Mokken Scale Analysis
ERIC Educational Resources Information Center
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
Tsallis p, q-deformed Touchard polynomials and Stirling numbers
NASA Astrophysics Data System (ADS)
Herscovici, O.; Mansour, T.
2017-01-01
In this paper, we develop and investigate a new two-parametrized deformation of the Touchard polynomials, based on the definition of the NEXT q-exponential function of Tsallis. We obtain new generalizations of the Stirling numbers of the second kind and of the binomial coefficients and represent two new statistics for the set partitions.
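For orientation, the classical (undeformed) Stirling numbers of the second kind, the set-partition counts that the paper's two-parameter versions generalize, follow the standard recurrence; a minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """S(n, k): number of partitions of an n-set into k non-empty blocks."""
    if n == k:
        return 1                 # all singleton blocks (also covers n = k = 0)
    if k == 0 or k > n:
        return 0
    # Element n either forms a singleton block or joins one of k blocks.
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

print([stirling2(5, k) for k in range(6)])  # [0, 1, 15, 25, 10, 1]
```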
A Hierarchical Bayesian Procedure for Two-Mode Cluster Analysis
ERIC Educational Resources Information Center
DeSarbo, Wayne S.; Fong, Duncan K. H.; Liechty, John; Saxton, M. Kim
2004-01-01
This manuscript introduces a new Bayesian finite mixture methodology for the joint clustering of row and column stimuli/objects associated with two-mode asymmetric proximity, dominance, or profile data. That is, common clusters are derived which partition both the row and column stimuli/objects simultaneously into the same derived set of clusters.…
NASA Astrophysics Data System (ADS)
Decker, K. T.; Everett, M. E.
2009-12-01
The Edwards aquifer lies in the structurally complex Balcones fault zone and supplies water to the growing city of San Antonio. To ensure that future demands for water are met, the hydrological and geophysical properties of the aquifer must be well understood. In most settings, fracture lengths and displacements occur in power-law distributions, and fracture distribution plays an important role in determining electrical and hydraulic current flowpaths. We develop 1-D synthetic models of the controlled-source electromagnetic (CSEM) response for layered models with a fractured layer at depth described by the roughness parameter βV, such that 0 ≤ βV < 1, associated with the power-law length-scale dependence of electrical conductivity. A value of βV = 0 represents homogeneous, continuous media, while a value of 0 < βV < 1 indicates that roughness exists. The Seco Creek frequency-domain helicopter electromagnetic survey data set is analyzed by introducing the similarly defined roughness parameter βH to detect lateral roughness along survey lines. Fourier transforming the apparent resistivity as a function of position along the flight line into the wavenumber domain using a 256-point sliding window gives the power spectral density (PSD) plot for each line. The value of βH is the slope of the least-squares regression of the PSD in each 256-point window. Changes in βH with distance along the flight line are plotted. Large values of βH are found near well-known large fractures, and maps of βH produced by interpolating values of βH along survey lines suggest previously undetected structure at depth.
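A rough sketch of that sliding-window slope estimate (assumptions: uniformly spaced samples and a plain periodogram; the survey's actual processing details may differ):

```python
import numpy as np

def beta_from_window(signal, dx=1.0):
    """Least-squares slope of log PSD vs log wavenumber for one window."""
    window = signal - signal.mean()                  # remove the DC offset
    psd = np.abs(np.fft.rfft(window)) ** 2
    k = np.fft.rfftfreq(len(window), d=dx)
    mask = k > 0                                     # skip zero wavenumber
    slope, _ = np.polyfit(np.log(k[mask]), np.log(psd[mask]), 1)
    return -slope                                    # PSD ~ k**(-beta)

def sliding_beta(profile, width=256, step=32):
    """One beta estimate per 256-sample window along a flight line."""
    return [beta_from_window(profile[i:i + width])
            for i in range(0, len(profile) - width + 1, step)]

rng = np.random.default_rng(0)
line = np.cumsum(rng.standard_normal(2048))  # synthetic rough profile (beta ~ 2)
print(np.round(sliding_beta(line)[:5], 2))
```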
SPH modelling of energy partitioning during impacts on Venus
NASA Technical Reports Server (NTRS)
Takata, T.; Ahrens, T. J.
1993-01-01
Impact cratering of the Venusian planetary surface by meteorites was investigated numerically using the Smoothed Particle Hydrodynamics (SPH) method. Venus presently has a dense atmosphere, so vigorous transfer of energy between impacting meteorites, the planetary surface, and the atmosphere is expected during impact events. The investigation concentrated on the effects of the atmosphere on energy partitioning and the flow of ejecta and gas. The SPH method is particularly suitable for studying complex motion, especially because of its ability to be extended to three dimensions. In our simulations, particles representing impactors and targets are initially set to a uniform density, and those of the atmosphere are set to be in hydrostatic equilibrium. Target, impactor, and atmosphere are represented by 9800, 80, and 4200 particles, respectively. A Tillotson equation of state for granite is assumed for the target and impactor, and an ideal gas with constant specific heat ratio is used for the atmosphere. Two-dimensional axisymmetric geometry was assumed, and normal impacts of 10 km diameter projectiles with velocities of 5, 10, 20, and 40 km/s, both with and without an atmosphere present, were modeled.
Study of energy partitioning using a set of related explosive formulations
NASA Astrophysics Data System (ADS)
Lieber, Mark; Foster, Joseph C.; Stewart, D. Scott
2012-03-01
Condensed phase high explosives convert potential energy stored in the electro-magnetic field structure of complex molecules to high power output during the detonation process. Historically, the explosive design problem has focused on intramolecular energy storage. The molecules of interest are derived via molecular synthesis providing near stoichiometric balance on the physical scale of the molecule. This approach provides prompt reactions based on transport physics at the molecular scale. Modern material design has evolved to approaches that employ intermolecular ingredients to alter the spatial and temporal distribution of energy release. State of the art continuum methods have been used to study this approach to the materials design. Cheetah has been used to produce data for a set of fictitious explosive formulations based on C-4 to study the partitioning of the available energy between internal and kinetic energy in the detonation. The equation of state information from Cheetah has been used in ALE3D to develop an understanding of the relationship between variations in the formulation parameters and the internal energy cycle in the products.
Comparative study of feature selection with ensemble learning using SOM variants
NASA Astrophysics Data System (ADS)
Filali, Ameni; Jlassi, Chiraz; Arous, Najet
2017-03-01
Ensemble learning has improved the stability and accuracy of clustering, but the runtime of ensemble methods prohibits them from scaling up to real-world applications. This study addresses the problem of selecting, for every cluster, a subset of the most pertinent features from a dataset. The proposed method is an extension of the Random Forests approach, using self-organizing map (SOM) variants on unlabeled data, that estimates out-of-bag feature importance from a set of partitions. Every partition is created using a different bootstrap sample and a random subset of the features. We show that the internal estimates used to measure variable importance in Random Forests are also applicable to feature selection in unsupervised learning. This approach aims at dimensionality reduction, visualization and cluster characterization at the same time. We provide empirical results on nineteen benchmark data sets indicating that RFS can lead to significant improvements in clustering accuracy, over several state-of-the-art unsupervised methods, with a very limited subset of features. The approach shows promise for very broad domains.
On the ordinary quiver of the symmetric group over a field of characteristic 2
NASA Astrophysics Data System (ADS)
Martin, Stuart; Russell, Lee
1997-11-01
Let 𝔖ₙ and 𝔄ₙ denote the symmetric and alternating groups of degree n ∈ ℕ respectively. Let p be a prime number and let F be an arbitrary field of characteristic p. We say that a partition of n is p-regular if no p (non-zero) parts of it are equal; otherwise we call it p-singular. Let S^λ_F denote the Specht module corresponding to λ. For λ a p-regular partition of n, let D^λ_F denote the unique irreducible top factor of S^λ_F. Denote by Δ^λ_F = D^λ_F ↓ 𝔄ₙ its restriction to 𝔄ₙ. Recall also that, over F, the ordinary quiver of the modular group algebra FG is a finite directed graph defined as follows: the vertices are labelled by the set of all simple FG-modules, L₁, …, L_r, and the number of arrows from L_i to L_j equals dim_F Ext¹_FG(L_i, L_j). The quiver gives important information about the block structure of G.
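The p-regularity condition above is mechanical to check; a minimal sketch:

```python
from collections import Counter

def is_p_regular(partition, p):
    """True if no non-zero part of the partition occurs p or more times."""
    counts = Counter(part for part in partition if part != 0)
    return all(c < p for c in counts.values())

print(is_p_regular([4, 2, 2, 1], 2))   # False: the part 2 repeats
print(is_p_regular([5, 3, 2, 1], 2))   # True: all parts distinct
```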
Improving RNA nearest neighbor parameters for helices by going beyond the two-state model.
Spasic, Aleksandar; Berger, Kyle D; Chen, Jonathan L; Seetin, Matthew G; Turner, Douglas H; Mathews, David H
2018-06-01
RNA folding free energy change nearest neighbor parameters are widely used to predict folding stabilities of secondary structures. They were determined by linear regression to datasets of optical melting experiments on small model systems. Traditionally, the optical melting experiments are analyzed assuming a two-state model, i.e. a structure is either complete or denatured. Experimental evidence, however, shows that structures exist in an ensemble of conformations. Partition functions calculated with existing nearest neighbor parameters predict that secondary structures can be partially denatured, which also directly conflicts with the two-state model. Here, a new approach for determining RNA nearest neighbor parameters is presented. Available optical melting data for 34 Watson-Crick helices were fit directly to a partition function model that allows an ensemble of conformations. Fitting parameters were the enthalpy and entropy changes for helix initiation, terminal AU pairs, stacks of Watson-Crick pairs and disordered internal loops. The resulting set of nearest neighbor parameters shows a 38.5% improvement in the sum of residuals in fitting the experimental melting curves compared to the current literature set.
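For context, here is a sketch of the two-state baseline the paper moves beyond: the folded fraction of a unimolecular hairpin computed from assumed (purely illustrative) folding enthalpy and entropy changes.

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal/(mol*K)

def folded_fraction(temp_k, dh, ds):
    """Two-state folded fraction: K = exp(-dG/RT), f = K / (1 + K)."""
    dg = dh - temp_k * ds            # folding free energy change, kcal/mol
    k_eq = np.exp(-dg / (R * temp_k))
    return k_eq / (1.0 + k_eq)

dh, ds = -40.0, -0.110               # assumed dH (kcal/mol) and dS (kcal/(mol*K))
temps = np.linspace(300, 400, 5)
print(np.round(folded_fraction(temps, dh, ds), 3))
print("Tm =", round(dh / ds, 1), "K")  # temperature where half are folded
```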
NASA Astrophysics Data System (ADS)
Paz-Ferreiro, J.; Bertol, I.; Vidal Vázquez, E.
2008-07-01
Changes in soil surface microrelief with cumulative rainfall under different tillage systems and crop cover conditions were investigated in southern Brazil. Surface cover was none (fallow) or the crop succession maize followed by oats. Tillage treatments were: 1) conventional tillage on bare soil (BS), 2) conventional tillage (CT), 3) minimum tillage (MT) and 4) no tillage (NT) under maize and oats. Measurements were taken with a manual relief meter on small rectangular grids of 0.234 and 0.156 m², throughout the growing seasons of maize and oats, respectively. Each data set consisted of 200 point height readings, the size of the smallest cells being 3×5 cm during the maize and 2×5 cm during the oats growth periods. Random roughness (RR), limiting difference (LD), limiting slope (LS) and two fractal parameters, fractal dimension (D) and crossover length (l), were estimated from the measured microtopographic data sets. Indices describing the vertical component of soil roughness such as RR, LD and l generally decreased with cumulative rain in the BS treatment, left fallow, and in the CT and MT treatments under maize and oats canopy. However, these indices were not substantially affected by cumulative rain in the NT treatment, whose surface was protected with previous crop residues. Roughness decay from initial values was larger in the BS treatment than in the CT and MT treatments. Moreover, roughness decay generally tended to be faster under maize than under oats. The RR and LD indices decreased quadratically, while the l index decreased exponentially in the tilled BS, CT and MT treatments. Crossover length was sensitive to differences in soil roughness conditions, allowing a description of microrelief decay due to rainfall in the tilled treatments, although better correlations between cumulative rainfall and the most commonly used indices RR and LD were obtained. At the studied scale, the parameters l and D were found to be useful in interpreting the configuration properties of the soil surface microrelief.
Visualizing phylogenetic tree landscapes.
Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A
2017-02-02
Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once, in 2 or 3 dimensions, the relationship among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationship among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperform the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationship among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationship among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationship among the trees being compared.
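As a baseline for comparison (this is plain classical MDS, not the CCA + SGD method the paper recommends), a tree-to-tree distance matrix can be projected as follows; the toy distances are invented:

```python
import numpy as np

def classical_mds(d, dim=3):
    """Embed a symmetric distance matrix d into `dim` coordinates."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1][:dim]      # keep the largest eigenvalues
    lam = np.clip(eigvals[order], 0.0, None)
    return eigvecs[:, order] * np.sqrt(lam)

# Toy example: pairwise Robinson-Foulds-style distances among 4 trees.
d = np.array([[0, 2, 6, 6],
              [2, 0, 6, 6],
              [6, 6, 0, 2],
              [6, 6, 2, 0]], dtype=float)
print(np.round(classical_mds(d), 2))
```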
Gowin, Ewelina; Januszkiewicz-Lewandowska, Danuta; Słowiński, Roman; Błaszczyński, Jerzy; Michalak, Michał; Wysocki, Jacek
2017-01-01
Differential diagnosis of bacterial and viral meningitis remains an important clinical problem. A number of methods to assist in the diagnosis of meningitis have been developed, but none of them has been found to have high specificity with 100% sensitivity. We conducted a retrospective analysis of the medical records of 148 children hospitalized in St. Joseph Children's Hospital in Poznań. In this study, we applied for the first time the original methodology of the dominance-based rough set approach (DRSA) to diagnostic patterns of meningitis data and represented them by decision rules useful in discriminating between bacterial and viral meningitis. The induction algorithm is called VC-DomLEM; it has been implemented in a software package called jMAF (http://www.cs.put.poznan.pl/jblaszczynski/Site/jRS.html), based on the java Rough Set (jRS) library. In the studied group, there were 148 patients (78 boys and 70 girls), and the mean age was 85 months. We analyzed 14 attributes, of which only 4 were used to generate the 6 rules, with C-reactive protein (CRP) being the most valuable. Factors associated with bacterial meningitis were: CRP level ≥86 mg/L, number of leukocytes in cerebrospinal fluid (CSF) ≥4481 μL⁻¹, symptom duration no longer than 2 days, or age less than 1 month. Factors associated with viral meningitis were a CRP level not higher than 19 mg/L, or a CRP level not higher than 84 mg/L in a patient older than 11 months with no more than 1100 μL⁻¹ leukocytes in CSF. We established the minimum set of attributes significant for the classification of patients with meningitis. This is a new set of rules which, although intuitively anticipated by some clinicians, has not been formally demonstrated until now. PMID:28796045
Contact stiffness of regularly patterned multi-asperity interfaces
NASA Astrophysics Data System (ADS)
Li, Shen; Yao, Quanzhou; Li, Qunyang; Feng, Xi-Qiao; Gao, Huajian
2018-02-01
Contact stiffness is a fundamental mechanical index of solid surfaces and relevant to a wide range of applications. Although the correlation between contact stiffness, contact size and load has long been explored for single-asperity contacts, our understanding of the contact stiffness of rough interfaces is less clear. In this work, the contact stiffness of hexagonally patterned multi-asperity interfaces is studied using a discrete asperity model. We confirm that the elastic interaction among asperities is critical in determining the mechanical behavior of rough contact interfaces. More importantly, in contrast to the common wisdom that the interplay of asperities is solely dictated by the inter-asperity spacing, we show that the number of asperities in contact (or equivalently, the apparent size of contact) also plays an indispensable role. Based on the theoretical analysis, we propose a new parameter for gauging the closeness of asperities. Our theoretical model is validated by a set of experiments. To facilitate the application of the discrete asperity model, we present a general equation for contact stiffness estimation of regularly rough interfaces, which is further proved to be applicable for interfaces with single-scale random roughness.
Jadhav, Vivek Dattatray; Motwani, Bhagwan K.; Shinde, Jitendra; Adhapure, Prasad
2017-01-01
Aims: The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. Settings and Design: This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques in the vertical direction was checked, in Part II, the fit of sectional metal crowns in the horizontal direction made by both casting techniques was checked, and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. Materials and Methods: A conventional technique was compared with an accelerated technique. In Part I of the study, the marginal fit of the full metal crowns as well as in Part II, the horizontal fit of sectional metal crowns made by both casting techniques was determined, and in Part III, the surface roughness of castings made with the same techniques was compared. Statistical Analysis Used: The results of the t-test and independent sample test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. Results: For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Conclusions: Accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness. PMID:29042726
NASA Technical Reports Server (NTRS)
Li, Fei; Choudhari, Meelan M.; Carpenter, Mark H.; Malik, Mujeeb R.; Eppink, Jenna; Chang, Chau-Lyan; Streett, Craig L.
2010-01-01
A high fidelity transition prediction methodology has been applied to a swept airfoil design at a Mach number of 0.75 and chord Reynolds number of approximately 17 million, with the dual goal of an assessment of the design for the implementation and testing of roughness based crossflow transition control and continued maturation of such methodology in the context of realistic aerodynamic configurations. Roughness based transition control involves controlled seeding of suitable, subdominant crossflow modes in order to weaken the growth of naturally occurring, linearly more unstable instability modes via a nonlinear modification of the mean boundary layer profiles. Therefore, a synthesis of receptivity, linear and nonlinear growth of crossflow disturbances, and high-frequency secondary instabilities becomes desirable to model this form of control. Because experimental data is currently unavailable for passive crossflow transition control for such high Reynolds number configurations, a holistic computational approach is used to assess the feasibility of roughness based control methodology. Potential challenges inherent to this control application as well as associated difficulties in modeling this form of control in a computational setting are highlighted. At high Reynolds numbers, a broad spectrum of stationary crossflow disturbances amplify and, while it may be possible to control a specific target mode using Discrete Roughness Elements (DREs), nonlinear interaction between the control and target modes may yield strong amplification of the difference mode that could have an adverse impact on the transition delay using spanwise periodic roughness elements.
Borri, Marco; Schmidt, Maria A; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M; Partridge, Mike; Bhide, Shreerang A; Nutting, Christopher M; Harrington, Kevin J; Newbold, Katie L; Leach, Martin O
2015-01-01
To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted imaging MRI data from a cohort of squamous cell carcinoma of the head and neck patients. Cumulative distributions of voxels, containing pre and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes.
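A condensed sketch of the partition-and-validate loop under stated assumptions: rows stand in for per-voxel feature vectors, and the silhouette score stands in for the paper's cluster-validation criteria.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
voxels = rng.standard_normal((500, 4))      # placeholder multi-parametric data

x = StandardScaler().fit_transform(voxels)  # put features on a common scale
x = PCA(n_components=2).fit_transform(x)    # inspect data composition

scores = {}
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(x)
    scores[k] = silhouette_score(x, labels)  # cluster-validation index
best_k = max(scores, key=scores.get)
print(scores, "-> optimal k:", best_k)
```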
NASA Astrophysics Data System (ADS)
Lefebvre, A.; Thompson, C. E. L.; Amos, C. L.
2010-09-01
Seagrasses develop extensive or patchy underwater meadows in coastal areas around the world, forming complex, highly productive ecosystems. Seagrass canopies exert strong effects on water flow inside and around them, thereby affecting flow structure, sediment transport and benthic ecology. The influence of Zostera marina canopies on flow velocity, turbulence, hydraulic roughness and sediment movement was evaluated through laboratory experiments in two flumes, using live Z. marina and a mobile sand bed. Profiles of instantaneous velocities were measured, and sediment movement was identified, upstream, within and downstream of patches of different sizes and shoot densities and at different free-stream velocities. Flow structure was characterised by time-averaged velocity, turbulence intensity and turbulent kinetic energy (TKE). When velocity data were available above the canopy, they were fitted to the Law of the Wall, and shear velocities and roughness lengths were calculated. When a seagrass canopy was present, three layers were distinguishable in the water column: (1) within the canopy, characterised by low velocities and high turbulence; (2) a transition zone around the height of the canopy, where velocities increased, turbulence decreased and TKE was high; and (3) above the canopy, where velocities were equal to or higher than free-stream velocities and turbulence and TKE were lower than below. Shoot density and patch width influenced this partitioning of the flow when the canopy was long enough (based on the flume experiments, at least 1 m long). The enhanced TKE observed at the canopy/water interface suggests that large-scale turbulence is generated at the canopy surface. These oscillations, likely related to the canopy undulations, are then broken down within the canopy, and high-frequency turbulence takes place near the bed. This turbulence 'cascade' through the canopy may have an important impact on biogeochemical processes. The velocity above the canopy generally followed a logarithmic profile. Roughness lengths were higher above the canopy than over bare sand and increased with increasing distance from the leading edge of the canopy; however, they were still small (<1 cm) compared to other studies in the literature. Within and downstream of the canopy, sediment movement was observed at velocities below the threshold of motion, likely caused by the increased turbulence at those positions. This has large implications for sediment transport in coastal zones where seagrass beds develop.
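A minimal sketch of the Law-of-the-Wall fit used to extract shear velocity and roughness length (von Kármán constant 0.40 assumed; the heights and velocities below are synthetic):

```python
import numpy as np

KAPPA = 0.40  # von Karman constant

def fit_log_law(z, u):
    """Fit u(z) = (u*/kappa) * ln(z / z0) to an above-canopy profile."""
    slope, intercept = np.polyfit(np.log(z), u, 1)
    u_star = KAPPA * slope
    z0 = np.exp(-intercept / slope)   # height where the fitted u reaches 0
    return u_star, z0

z = np.array([0.20, 0.25, 0.30, 0.35, 0.40])   # measurement heights (m)
u = 0.05 / KAPPA * np.log(z / 0.004)           # synthetic velocity profile
u_star, z0 = fit_log_law(z, u)
print(f"u* = {u_star:.3f} m/s, z0 = {z0 * 100:.2f} cm")
```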
Lin, Junfang; Cao, Wenxi; Wang, Guifeng; Hu, Shuibo
2013-06-20
Using a data set of 1333 samples, we assess the spectral absorption relationships of different wave bands for phytoplankton (ph) and particles. We find that a nonlinear model (second-order quadratic equations) delivers good performance in describing their spectral characteristics. Based on these spectral relationships, we develop a method for partitioning the total absorption coefficient into the contributions attributable to phytoplankton [a(ph)(λ)], colored dissolved organic material [CDOM; a(CDOM)(λ)], and nonalgal particles [NAP; a(NAP)(λ)]. This method is validated using a data set that contains 550 simultaneous measurements of phytoplankton, CDOM, and NAP from the NASA bio-Optical Marine Algorithm Dataset. We find that our method is highly efficient and robust, with significant accuracy: the relative root-mean-square errors (RMSEs) are 25.96%, 38.30%, and 19.96% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively. The performance is still satisfactory when the method is applied to water samples from the northern South China Sea as a regional case. The computed and measured absorption coefficients (167 samples) agree well with the RMSEs, i.e., 18.50%, 32.82%, and 10.21% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively. Finally, the partitioning method is applied directly to an independent data set (1160 samples) derived from the Bermuda Bio-Optics Project that contains relatively low absorption values, and we also obtain good inversion accuracy [RMSEs of 32.37%, 32.57%, and 11.52% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively]. Our results indicate that this partitioning method delivers satisfactory performance for the retrieval of a(ph), a(CDOM), and a(NAP). Therefore, this may be a useful tool for extracting absorption coefficients from in situ measurements or remotely sensed ocean-color data.
Abrasive wear of resin composites as related to finishing and polishing procedures.
Turssi, Cecilia P; Ferracane, Jack L; Serra, Mônica C
2005-07-01
Finishing and polishing procedures may cause topographical changes and introduce subsurface microcracks in dental composite restoratives. Since both of these effects may contribute toward the kinetics of wear, the purpose of this study was to assess and correlate the wear and surface roughness of minifilled and nanofilled composites finished and polished by different methods. Specimens (n=10) made of a minifilled and a nanofilled composite were finished and polished with one of four sequences: (1) tungsten carbide burs plus Al₂O₃-impregnated brush (CbBr), (2) tungsten carbide burs plus diamond-impregnated cup (CbCp), (3) diamond burs plus brush (DmBr) or (4) diamond burs plus cup (DmCp). As a control, abrasive papers were used. After surface roughness had been quantified, three-body abrasion was simulated using the OHSU wear machine. The wear facets were then scanned to measure wear depth and post-testing roughness. All sets of data were subjected to ANOVA and Tukey's tests (α=0.05). Pearson's correlation test was applied to check for the existence of a relationship between pre-testing roughness and wear. Significantly smoother surfaces were attained with the sequences CbBr and CbCp, whereas DmCp yielded the roughest surface. Regardless of the finishing/polishing technique, the nanofilled composite exhibited the lowest pre-testing roughness and wear. There was no correlation between the surface roughness achieved after finishing/polishing procedures and wear (p=0.3899). Nano-sized materials may have improved abrasive wear resistance over minifilled composites. The absence of correlation between wear and surface roughness produced by different finishing/polishing methods suggests that the latter negligibly influences material loss due to three-body abrasion.
Flury, Simon; Diebold, Elisabeth; Peutzfeldt, Anne; Lussi, Adrian
2017-06-01
Because of the different composition of resin-ceramic computer-aided design and computer-aided manufacturing (CAD-CAM) materials, their polishability and their micromechanical properties vary. Moreover, depending on the composition of the materials, their surface roughness and micromechanical properties are likely to change with time. The purpose of this in vitro study was to investigate the effect of artificial toothbrushing and water storage on the surface roughness (Ra and Rz) and the micromechanical properties, surface hardness (Vickers [VHN]) and indentation modulus (E_IT), of 5 different tooth-colored CAD-CAM materials when polished with 2 different polishing systems. Specimens (n=40 per material) were cut from a composite resin (Paradigm MZ100; 3M ESPE), a feldspathic ceramic (Vitablocs Mark II; Vita Zahnfabrik), a resin nanoceramic (Lava Ultimate; 3M ESPE), a hybrid dental ceramic (Vita Enamic; Vita Zahnfabrik), and a nanocomposite resin (Ambarino High-Class; Creamed). All specimens were roughened in a standardized manner and polished either with Sof-Lex XT discs or the Vita Polishing Set Clinical. Surface roughness, VHN, and E_IT were measured after polishing and after storage for 6 months (tap water, 37°C) with periodic, artificial toothbrushing. The surface roughness, VHN, and E_IT results were analyzed with a nonparametric ANOVA followed by Kruskal-Wallis and exact Wilcoxon rank sum tests (α=.05). Irrespective of polishing system and of artificial toothbrushing and storage, Lava Ultimate generally showed the lowest surface roughness and Vitablocs Mark II the highest. As regards micromechanical properties, the following ranking of the CAD-CAM materials was found (from highest VHN/E_IT to lowest): Vitablocs Mark II > Vita Enamic > Paradigm MZ100 > Lava Ultimate > Ambarino High-Class. Irrespective of material and of artificial toothbrushing and storage, polishing with Sof-Lex XT discs resulted in lower surface roughness than the Vita Polishing Set Clinical (P≤.016). However, the polishing system generally had no influence on the micromechanical properties (P>.05). The effect of artificial toothbrushing and storage on surface roughness depended on the material and the polishing system: Ambarino High-Class was most sensitive to storage, Lava Ultimate and Vita Enamic were least sensitive. Artificial toothbrushing and storage generally resulted in a decrease in VHN and E_IT for Paradigm MZ100, Lava Ultimate, and Ambarino High-Class but not for Vita Enamic and Vitablocs Mark II. Tooth-colored CAD-CAM materials with lower VHN and E_IT generally showed better polishability. However, these materials were more prone to degradation by artificial toothbrushing and water storage than materials with higher VHN and E_IT. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Unal, Omer Kays; Poyanli, Oguz Sukru; Unal, Ulku Sur; Mutlu, Hasan Huseyin; Ozkut, Afsar Timucin; Esenkaya, Irfan
2018-05-16
We set out to reveal the effects of repeated sterilization, using different methods, on the carbon fiber rods of external fixator systems. Forty-four unused, unsterilized, and identical carbon fiber rods (11 × 200 mm) were randomly assigned to two groups: unsterilized (US) (4 rods) and sterilized (40 rods). The sterilized rods were divided into two groups, those sterilized in an autoclave (AC) and those sterilized by hydrogen peroxide (HP). These were further divided into five subgroups based on the number of sterilization repetitions to which the rods were subjected (25-50-75-100-200). A bending test was conducted to measure the maximum bending force (MBF), maximum deflection (MD), flexural strength (FS), maximum bending moment (MBM) and bending rigidity (BR). We also measured the surface roughness of the rods. An increase in the number of sterilization repetitions led to a decrease in MBF, MBM, FS and BR, but increased MD and surface roughness (p < 0.01). The effect of the number of sterilization repetitions was more prominent in the HP group. This study revealed that the sterilization method and the number of sterilization repetitions influence the strength of the carbon fiber rods. Increasing the number of sterilization repetitions degrades the strength of the rods and worsens their surface roughness.
Perspectives on condom breakage: a qualitative study of female sex workers in Bangalore, India.
Gurav, Kaveri; Bradley, Janet; Chandrashekhar Gowda, G; Alary, Michel
2014-01-01
A qualitative study was conducted to obtain a detailed understanding of two key determinants of condom breakage - 'rough sex' and poor condom fit - identified in a recent telephone survey of female sex workers, in Bangalore, India. Transcripts from six focus-group discussions involving 35 female sex workers who reported condom breakage during the telephone survey were analysed. Rough sex in different forms, from over-exuberance to violence, was often described by sex workers as a result of clients' inebriation and use of sexual stimulants, which, they report, cause tumescence, excessive thrusting and sex that lasts longer than usual, thereby increasing the risk of condom breakage. Condom breakage in this setting is the result of a complex set of social situations involving client behaviours and power dynamics that has the potential to put the health and personal lives of sex workers at risk. These findings and their implications for programme development are discussed.
Ye, Bixiong; E, Xueli; Zhang, Lan
2015-01-01
To optimize the non-regular drinking water quality indices (except Giardia and Cryptosporidium) of urban drinking water, several methods were applied, including the rate at which drinking water quality exceeds the standard, the risk of exceeding the standard, the frequency of detecting concentrations below the detection limit, a comprehensive water quality index evaluation method, and the attribute reduction algorithm of rough set theory. Redundant water quality indicators were eliminated, and the control factors that play a leading role in drinking water safety were identified. The optimization results showed that, of the 62 unconventional water quality monitoring indicators for urban drinking water, 42 could be removed through comprehensive evaluation combined with attribute reduction based on rough sets. Optimizing the water quality monitoring indicators and reducing the number of indicators and the monitoring frequency can ensure the safety of drinking water quality while lowering monitoring costs and reducing the monitoring burden on sanitation supervision departments.
Preference Mining Using Neighborhood Rough Set Model on Two Universes.
Zeng, Kai
2016-01-01
Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Some classical methods are not available for the cold-start problem when the user or the item is new. In this paper, we propose a new model, called parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used for defining the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method.
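A minimal sketch of the neighborhood lower approximation at the core of the model, simplified to a single universe with invented toy data (NRSTU itself is defined across two universes):

```python
import numpy as np

def neighborhood_lower_approximation(x, labels, target, delta):
    """Indices whose delta-neighborhood lies entirely in the target class."""
    lower = []
    for i in range(len(x)):
        dists = np.linalg.norm(x - x[i], axis=1)
        neighbors = np.where(dists <= delta)[0]    # the delta-neighborhood
        if np.all(labels[neighbors] == target):
            lower.append(i)
    return lower

features = np.array([[0.1], [0.2], [0.8], [0.9], [0.5]])
liked = np.array([1, 1, 0, 0, 1])                  # user preference labels
print(neighborhood_lower_approximation(features, liked, 1, delta=0.15))  # [0, 1, 4]
```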
Research on classified real-time flood forecasting framework based on K-means cluster and rough set.
Xu, Wei; Peng, Yong
2015-01-01
This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods.
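A skeletal sketch of the classify-then-forecast idea (hypothetical flood features and placeholder parameter values; the actual framework calibrates conceptual hydrological model parameters per category with a genetic algorithm):

```python
import numpy as np
from sklearn.cluster import KMeans

# Historical floods described by, e.g., mean rainfall intensity and duration.
history = np.array([[5.0, 2.0], [6.0, 2.5], [30.0, 1.0],
                    [28.0, 1.2], [12.0, 6.0], [11.0, 5.5]])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(history)

# One calibrated parameter set per flood category (placeholder values).
params_by_category = {0: {"cn": 70}, 1: {"cn": 85}, 2: {"cn": 78}}

new_event = np.array([[29.0, 1.1]])            # incoming flood information
category = int(km.predict(new_event)[0])       # pick the matching category
print("category", category, "-> parameters", params_by_category[category])
```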
Processing and filtrating of driver fatigue characteristic parameters based on rough set
NASA Astrophysics Data System (ADS)
Ye, Wenwu; Zhao, Xuyang
2018-05-01
With the rapid development of the economy, people have become increasingly affluent and cars have become a common means of transportation in daily life. However, the problem of traffic safety is becoming more and more serious, and fatigue driving is one of the main causes of traffic accidents. It is therefore of great importance to study the detection of fatigue driving in order to improve traffic safety. In determining whether the driver is fatigued, characteristic quantities related to the steering angle of the steering wheel and to the driver's pulse are all important indicators. Fuzzy c-means clustering is used to discretize these indices. Because the characteristic parameters are numerous and heterogeneous, a rough set is used to filter them. Finally, this paper identifies the characteristics with the highest correlation with fatigue driving. It is shown that these selected characteristics are of great significance for the evaluation of fatigue driving.
High-frequency Born synthetic seismograms based on coupled normal modes
Pollitz, Fred F.
2011-01-01
High-frequency and full waveform synthetic seismograms on a 3-D laterally heterogeneous earth model are simulated using the theory of coupled normal modes. The set of coupled integral equations that describes the 3-D response is simplified into a set of uncoupled integral equations by using the Born approximation to calculate scattered wavefields and the pure-path approximation to modulate the phase of incident and scattered wavefields. This depends upon a decomposition of the aspherical structure into smooth and rough components. The uncoupled integral equations are discretized and solved in the frequency domain, and time domain results are obtained by inverse Fourier transform. Examples show the utility of the normal mode approach to synthesize the seismic wavefields resulting from interaction with a combination of rough and smooth structural heterogeneities. This approach is applied to ∼4 Hz shallow crustal wave propagation around the site of the San Andreas Fault Observatory at Depth (SAFOD).
Semi-active suspension for automotive application
NASA Astrophysics Data System (ADS)
Venhovens, Paul J. T.; Devlugt, Alex R.
Theoretical considerations for evaluating semi-active damping systems, with respect to semi-active suspension and Kalman filtering, are discussed in terms of the software, and some prototype hardware developments are proposed. A significant improvement in ride comfort performance can be obtained, as indicated by root mean square body acceleration values and frequency responses, using a switchable damper system with two settings. Nevertheless, the improvement is accompanied by an increase in dynamic tire load variations. The main benefit of semi-active suspensions is the potential for changing the low-frequency section of the transfer function; in practice this will support the impression of extra driving stability. It is advisable to apply an adaptive control strategy like the (extended) skyhook version, switching to the 'comfort' setting for straight running on smooth to moderately rough roads and to 'road holding' for handling maneuvers and possibly for rough roads and discrete, severe events like potholes.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
Data approximation using a blending type spline construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalmo, Rune; Bratlie, Jostein
2014-11-18
Generalized expo-rational B-splines (GERBS) is a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
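The partition-and-blend idea can be imitated in 1-D with a generic partition-of-unity approximation (a sketch only, not the actual GERBS basis; knots, weights and test data below are arbitrary):

```python
import numpy as np

def smooth_step(t):
    """C^1 blending weight rising from 0 to 1 on [0, 1]."""
    t = np.clip(t, 0.0, 1.0)
    return 3 * t ** 2 - 2 * t ** 3

def blended_fit(x, y, knots, degree=2):
    """Evaluate local least-squares fits blended between successive knots."""
    fits = [np.polyfit(x, y, degree, w=np.exp(-((x - k) / 0.5) ** 2))
            for k in knots]                     # one weighted local fit per knot
    result = np.zeros_like(x)
    for i in range(len(knots) - 1):
        t = (x - knots[i]) / (knots[i + 1] - knots[i])
        inside = (t >= 0) & (t <= 1)
        w = smooth_step(t[inside])
        result[inside] = ((1 - w) * np.polyval(fits[i], x[inside])
                          + w * np.polyval(fits[i + 1], x[inside]))
    return result

x = np.linspace(0, 4, 200)
y = np.sin(2 * x) + 0.05 * np.random.default_rng(2).standard_normal(200)
approx = blended_fit(x, y, knots=np.array([0.0, 1.0, 2.0, 3.0, 4.0]))
print(f"max abs error vs noise-free curve: {np.max(np.abs(approx - np.sin(2 * x))):.3f}")
```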
Standardized Sky Partitioning for the Next Generation Astronomy and Space Science Archives
NASA Technical Reports Server (NTRS)
Lal, Nand (Technical Monitor); McLean, Brian
2004-01-01
The Johns Hopkins University and the Space Telescope Science Institute are working together on this project to develop a library of standard software for data archives that will benefit the wider astronomical community. The ultimate goal was to develop and distribute a software library aimed at providing a common system for partitioning and indexing the sky into manageable-sized regions and providing complex queries on the objects stored in this system. Whilst ongoing maintenance work will continue, the primary goal has been completed. Most of the next-generation sky surveys at different wavelengths, such as 2MASS, GALEX, SDSS, GSC-II, DPOSS and FIRST, have agreed on this common set of utilities. In this final report, we summarize work on the work elements assigned to the STScI project team.
Mordell integrals and Giveon-Kutasov duality
NASA Astrophysics Data System (ADS)
Giasemidis, Georgios; Tierz, Miguel
2016-01-01
We solve, for finite N, the matrix model of supersymmetric U(N) Chern-Simons theory coupled to N_f massive hypermultiplets of R-charge 1/2, together with a Fayet-Iliopoulos term. We compute the partition function by identifying it with a determinant of a Hankel matrix, whose entries are parametric derivatives (of order N_f − 1) of Mordell integrals. We obtain finite Gauss sum expressions for the partition functions. We also apply these results to obtain an exhaustive test of Giveon-Kutasov (GK) duality in the N = 3 setting, by systematic computation of the matrix models involved. The phase factor that arises in the duality is then obtained explicitly. We give an expression characterized by modular arithmetic (mod 4) behavior that holds for all tested values of the parameters (checked up to N_f = 12 flavours).
Task partitioning in a ponerine ant.
Theraulaz, Guy; Bonabeau, Eric; Sole, Ricard V; Schatz, Bertrand; Deneubourg, Jean-Louis
2002-04-21
This paper reports a study of the task partitioning observed in the ponerine ant Ectatomma ruidum, where prey-foraging behaviour can be subdivided into two categories: stinging and transporting. Stingers kill live prey and transporters carry prey corpses back to the nest. Stinging and transporting behaviours are released by certain stimuli through response thresholds; the respective stimuli for stinging and transporting appear to be the number of live prey and the number of prey corpses. A response threshold model, the parameters of which are all measured empirically, reproduces a set of non-trivial colony-level dynamical patterns observed in the experiments. This combination of modelling and empirical work connects explicitly the level of individual behaviour with colony-level patterns of work organization. Copyright 2002 Elsevier Science Ltd. All rights reserved.
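A small simulation in the spirit of the response-threshold model described above, with a generic threshold function and invented parameters (the paper measures its parameters empirically):

```python
import numpy as np

rng = np.random.default_rng(3)

def response_probability(stimulus, theta, n=2):
    """Classic threshold response: rises steeply once stimulus exceeds theta."""
    return stimulus ** n / (stimulus ** n + theta ** n)

live_prey, corpses = 10, 0
stingers = rng.uniform(1, 5, size=20)      # per-ant stinging thresholds
transporters = rng.uniform(1, 5, size=20)  # per-ant transporting thresholds

for step in range(30):
    # Stinging is stimulated by live prey; it converts prey to corpses.
    stings = rng.random(20) < response_probability(live_prey, stingers)
    killed = min(int(stings.sum()), live_prey)
    live_prey -= killed
    corpses += killed
    # Transporting is stimulated by corpses; it removes them to the nest.
    carries = rng.random(20) < response_probability(corpses, transporters)
    corpses = max(corpses - int(carries.sum()), 0)

print("live prey:", live_prey, "| corpses left:", corpses)
```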
Structural methodologies for auditing SNOMED.
Wang, Yue; Halper, Michael; Min, Hua; Perl, Yehoshua; Chen, Yan; Spackman, Kent A
2007-10-01
SNOMED is one of the leading health care terminologies being used worldwide. As such, quality assurance is an important part of its maintenance cycle. Methodologies for auditing SNOMED based on structural aspects of its organization are presented. In particular, automated techniques for partitioning SNOMED into smaller groups of concepts based primarily on relationships patterns are defined. Two abstraction networks, the area taxonomy and p-area taxonomy, are derived from the partitions. The high-level views afforded by these abstraction networks form the basis for systematic auditing. The networks tend to highlight errors that manifest themselves as irregularities at the abstract level. They also support group-based auditing, where sets of purportedly similar concepts are focused on for review. The auditing methodologies are demonstrated on one of SNOMED's top-level hierarchies. Errors discovered during the auditing process are reported.
Cenozoic tectonics of western North America controlled by evolving width of Farallon slab.
Schellart, W P; Stegman, D R; Farrington, R J; Freeman, J; Moresi, L
2010-07-16
Subduction of oceanic lithosphere occurs through two modes: subducting plate motion and trench migration. Using a global subduction zone data set and three-dimensional numerical subduction models, we show that slab width (W) controls these modes and the partitioning of subduction between them. Subducting plate velocity scales with W^(2/3), whereas trench velocity scales with 1/W. These findings explain the Cenozoic slowdown of the Farallon plate and the decrease in subduction partitioning by its decreasing slab width. The change from Sevier-Laramide orogenesis to Basin and Range extension in North America is also explained by slab width; shortening occurred during wide-slab subduction and overriding-plate-driven trench retreat, whereas extension occurred during intermediate- to narrow-slab subduction and slab-driven trench retreat.
Boron Partitioning Coefficient above Unity in Laser Crystallized Silicon.
Lill, Patrick C; Dahlinger, Morris; Köhler, Jürgen R
2017-02-16
Boron pile-up at the maximum melt depth for laser melt annealing of implanted silicon has been reported in numerous papers. The present contribution examines the boron accumulation in a laser doping setting, without dopants initially incorporated in the silicon wafer. Our numerical simulation models laser-induced melting as well as dopant diffusion, and excellently reproduces the boron profiles measured by secondary ion mass spectroscopy. We determine a partitioning coefficient k_p above unity, with k_p = 1.25 ± 0.05, and a thermally-activated diffusivity D_B of boron in liquid silicon, with a value D_B(1687 K) = (3.53 ± 0.44) × 10⁻⁴ cm²·s⁻¹.
JTRS/SCA and Custom/SDR Waveform Comparison
NASA Technical Reports Server (NTRS)
Oldham, Daniel R.; Scardelletti, Maximilian C.
2007-01-01
This paper compares two waveform implementations generating the same RF signal using the same SDR development system. Both waveforms implement a satellite modem using QPSK modulation at a 1 Mbps data rate with rate-1/2 convolutional encoding. Both waveforms are partitioned the same way across the general purpose processor (GPP) and the field programmable gate array (FPGA), and both implement the same equivalent set of radio functions on the GPP and FPGA: the GPP implements the majority of the radio functions, and the FPGA implements the final digital RF modulator stage. One waveform is implemented directly on the SDR development system, and the second waveform is implemented using the JTRS/SCA model. This paper contrasts the amount of resources required to implement both waveforms and demonstrates the importance of waveform partitioning across the SDR development system.
Applications of CCSDS recommendations to Integrated Ground Data Systems (IGDS)
NASA Technical Reports Server (NTRS)
Mizuta, Hiroshi; Martin, Daniel; Kato, Hatsuhiko; Ihara, Hirokazu
1993-01-01
This paper describes an application of the CCSDS Principal Network (CPN) service model to the communications network elements of a postulated Integrated Ground Data System (IGDS). Functions are drawn principally from COSMICS (Cosmic Information and Control System), an integrated space control infrastructure, and the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). From these functional requirements, the paper derives a set of five communications network partitions which, taken together, support proposed space control infrastructures and data distribution systems. Our functional analysis indicates that the five network partitions should effectively interconnect the users, centers, processors, and other architectural elements of an IGDS. The paper thus illustrates a useful application of the CCSDS (Consultative Committee for Space Data Systems) Recommendations to ground data system development.
NASA Astrophysics Data System (ADS)
Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud
2017-08-01
Fuzzy time series forecasting models have been proposed since 1993 to cater for data expressed in linguistic values. Many improvements and modifications have been made to the model, such as enhancements to the interval length and to the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms as discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study: first-order and second-order relations. The proposed model can produce forecasted values under different degrees of confidence.
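As background on the representation, a trapezoidal fuzzy number is defined by four abscissae and a piecewise-linear membership function; the minimal Python sketch below (with an illustrative unemployment-rate term, not values from the paper) shows the idea.

```python
from dataclasses import dataclass

@dataclass
class TrapezoidalFuzzyNumber:
    a: float  # left foot (membership rises from 0)
    b: float  # left shoulder (membership reaches 1)
    c: float  # right shoulder (membership still 1)
    d: float  # right foot (membership falls back to 0)

    def membership(self, x: float) -> float:
        if x <= self.a or x >= self.d:
            return 0.0
        if self.b <= x <= self.c:
            return 1.0
        if x < self.b:
            return (x - self.a) / (self.b - self.a)
        return (self.d - x) / (self.d - self.c)

# E.g. a linguistic term "around 3-4%" for an unemployment-rate interval:
low = TrapezoidalFuzzyNumber(2.5, 3.0, 4.0, 4.5)
print(low.membership(3.5), low.membership(2.75))
```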
Normalized Cut Algorithm for Automated Assignment of Protein Domains
NASA Technical Reports Server (NTRS)
Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present a novel computational method for the automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful in image-partitioning applications. This graph-theory-based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly connected components. A computer implementation of our method, tested on the standard comparison set of proteins from the literature, shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as its reliance on few adjustable parameters, linear run time with respect to the size of the protein, and reduced complexity compared to other graph-theory-based algorithms, make it an attractive tool for structural biologists.
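For context, a normalized-cut bipartition is typically obtained from the second-smallest eigenvector of the normalized graph Laplacian (the standard spectral relaxation). The Python sketch below shows that generic construction on a toy affinity matrix; it is not the authors' protein-specific pipeline, where the graph would instead encode residue contacts.

```python
import numpy as np

def normalized_cut_split(W):
    """W: symmetric nonnegative affinity matrix (n x n)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L_sym)     # ascending eigenvalues
    fiedler = D_inv_sqrt @ eigvecs[:, 1]         # second-smallest eigvec
    return fiedler >= 0                          # sign gives the 2-way cut

# Toy affinity: two 3-node cliques joined by one weak edge.
W = np.zeros((6, 6))
for grp in ((0, 1, 2), (3, 4, 5)):
    for i in grp:
        for j in grp:
            if i != j:
                W[i, j] = 1.0
W[2, 3] = W[3, 2] = 0.1
print(normalized_cut_split(W))   # e.g. [ True  True  True False False False]
```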
Experimental test of an online ion-optics optimizer
NASA Astrophysics Data System (ADS)
Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.
2018-07-01
A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion-optical beamline, according to a defined optimization algorithm, until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes that satisfy a given set of optical properties for an ion-optical system. The optimization approach is based on the particle swarm method and is entirely model independent, so the success of the optimization does not depend on the accuracy of an extant ion-optical model of the system being optimized. Initial test runs of a first-order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full-beam optimization run times of roughly two hours. Improved tunes were found both for quasi-local and quasi-global optimizations, indicating that the optimizer can find a solution with or without a well-defined set of initial multipole settings.
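As background, a minimal particle swarm optimizer looks like the Python sketch below; the objective function, swarm size, and coefficients are illustrative assumptions, not the NSCL setup, where each evaluation would instead set the nine quadrupoles and measure a beam-quality figure of merit.

```python
import numpy as np

def pso(objective, n_dims=9, n_particles=20, n_iters=25, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, n_dims))   # candidate settings
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([objective(p) for p in x])
    g_best = p_best[p_val.argmin()]
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia plus pulls toward each particle's best and the swarm's best.
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[p_val.argmin()]
    return g_best, p_val.min()

# Stand-in objective: distance from an optimum unknown to the optimizer.
target = np.linspace(-0.5, 0.5, 9)
best, best_val = pso(lambda q: float(np.sum((q - target) ** 2)))
print(best_val)
```

With 20 particles and 25 iterations, this sketch spends 500 objective evaluations, comparable to the roughly 500 trial solutions the abstract reports per optimization run.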