Sample records for partitioning analysis classification

  1. Overlapped Partitioning for Ensemble Classifiers of P300-Based Brain-Computer Interfaces

    PubMed Central

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance. PMID:24695550
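
    As a rough illustration of the difference between naive and overlapped partitioning, the following Python sketch (synthetic data and scikit-learn's LinearDiscriminantAnalysis, not the authors' P300 pipeline) trains one LDA classifier per partition and averages their decision scores; with overlapped partitioning the partitions share samples, so each base classifier sees more training data.

      # Minimal sketch: naive vs. overlapped partitioning for an LDA ensemble
      # on synthetic data (illustrative only, not the authors' implementation).
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X_all, y_all = make_classification(n_samples=1200, n_features=30,
                                         n_informative=10, random_state=0)
      X, X_test, y, y_test = train_test_split(X_all, y_all, train_size=900,
                                              random_state=0)

      def ensemble_lda(X, y, n_parts=5, overlap=0.0):
          """One LDA per partition; partitions overlap by the given fraction."""
          n = len(X)
          idx = rng.permutation(n)
          part_size = int(n / n_parts * (1.0 + overlap))  # overlapped parts are larger
          models = []
          for k in range(n_parts):
              start = int(k * n / n_parts)
              sel = idx[np.arange(start, start + part_size) % n]  # wrap around
              models.append(LinearDiscriminantAnalysis().fit(X[sel], y[sel]))
          return models

      def ensemble_predict(models, X):
          scores = np.mean([m.decision_function(X) for m in models], axis=0)
          return (scores > 0).astype(int)

      for overlap in (0.0, 0.5):  # 0.0 = naive partitioning, 0.5 = 50% extra overlap
          models = ensemble_lda(X, y, overlap=overlap)
          acc = np.mean(ensemble_predict(models, X_test) == y_test)
          print(f"overlap={overlap:.1f}  test accuracy={acc:.3f}")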

  2. Overlapped partitioning for ensemble classifiers of P300-based brain-computer interfaces.

    PubMed

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance.

  3. Target Detection and Classification Using Seismic and PIR Sensors

    DTIC Science & Technology

    2012-06-01

    time series analysis via wavelet-based partitioning,” Signal Process...regard, this paper presents a wavelet-based method for target detection and classification. The proposed method has been validated on data sets of...The work reported in this paper makes use of a wavelet-based feature extraction method, called Symbolic Dynamic Filtering (SDF) [12]–[14]. The

  4. Unsupervised hierarchical partitioning of hyperspectral images: application to marine algae identification

    NASA Astrophysics Data System (ADS)

    Chen, B.; Chehdi, K.; De Oliveria, E.; Cariou, C.; Charbonnier, B.

    2015-10-01

    In this paper, a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground truth data constitute serious handicaps, especially over large areas which can be covered by airborne or satellite images. The developed classification approach allows i) a successive partitioning of data into several levels or partitions in which the main classes are first identified, ii) an automatic estimation of the number of classes at each level without any end user help, iii) a nonsystematic subdivision of all classes of a partition Pj to form a partition Pj+1, iv) a stable partitioning result of the same data set from one run of the method to another. The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised. It estimates, at each level, the optimal number of classes and the final partition without any end user intervention.
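
    The top-down idea can be caricatured in a few lines: recursively split a partition in two and keep the split only if a simple internal criterion supports it. The sketch below uses k-means and a silhouette threshold as stand-ins for the paper's automatic class-number estimation, on synthetic blobs rather than hyperspectral pixels.

      # Minimal sketch of top-down hierarchical partitioning: recursively split a
      # partition with 2-means and keep the split only if it looks justified
      # (silhouette criterion used here as a simple stand-in for the paper's
      # automatic estimation of the number of classes).
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score
      from sklearn.datasets import make_blobs

      def split_recursively(X, idx, labels, next_label, min_sil=0.35, min_size=20):
          if len(idx) < 2 * min_size:
              return next_label
          km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[idx])
          if silhouette_score(X[idx], km.labels_) < min_sil:
              return next_label                      # leave this partition as one class
          left, right = idx[km.labels_ == 0], idx[km.labels_ == 1]
          labels[right] = next_label                 # new class id for the right child
          next_label = split_recursively(X, left, labels, next_label + 1, min_sil, min_size)
          next_label = split_recursively(X, right, labels, next_label, min_sil, min_size)
          return next_label

      X, _ = make_blobs(n_samples=600, centers=5, cluster_std=1.0, random_state=3)
      labels = np.zeros(len(X), dtype=int)
      split_recursively(X, np.arange(len(X)), labels, next_label=1)
      print("estimated number of classes:", len(np.unique(labels)))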

  5. Recursive Partitioning Analysis for New Classification of Patients With Esophageal Cancer Treated by Chemoradiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, Motoo, E-mail: excell@hkg.odn.ne.jp; Department of Clinical Oncology, Aichi Cancer Center Hospital, Nagoya; Department of Radiation Oncology, Aichi Cancer Center Hospital, Nagoya

    2012-11-01

    Background: The 7th edition of the American Joint Committee on Cancer staging system does not include lymph node size in the guidelines for staging patients with esophageal cancer. The objectives of this study were to determine the prognostic impact of the maximum metastatic lymph node diameter (ND) on survival and to develop and validate a new staging system for patients with esophageal squamous cell cancer who were treated with definitive chemoradiotherapy (CRT). Methods: Information on 402 patients with esophageal cancer undergoing CRT at two institutions was reviewed. Univariate and multivariate analyses of data from one institution were used to assess the impact of clinical factors on survival, and recursive partitioning analysis was performed to develop the new staging classification. To assess its clinical utility, the new classification was validated using data from the second institution. Results: By multivariate analysis, gender, T, N, and ND stages were independently and significantly associated with survival (p < 0.05). The resulting new staging classification was based on the T and ND stages. The four new stages led to good separation of survival curves in both the developmental and validation datasets (p < 0.05). Conclusions: Our results showed that lymph node size is a strong independent prognostic factor and that the new staging system, which incorporated lymph node size, provided good prognostic power and discriminated effectively among patients with esophageal cancer undergoing CRT.

  6. Surveillance system and method having an operating mode partitioned fault classification model

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.

  7. An Introduction to Recursive Partitioning: Rationale, Application, and Characteristics of Classification and Regression Trees, Bagging, and Random Forests

    ERIC Educational Resources Information Center

    Strobl, Carolin; Malley, James; Tutz, Gerhard

    2009-01-01

    Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Especially random forests, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and…
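
    For readers who want a concrete starting point, the scikit-learn estimators below are conventional stand-ins for the three methods discussed (a single CART-style tree, bagged trees, and a random forest), compared by cross-validation on a synthetic dataset.

      # Minimal sketch contrasting a single classification tree, bagging, and a
      # random forest on synthetic data (scikit-learn stand-ins for the methods
      # discussed in the article).
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.ensemble import BaggingClassifier, RandomForestClassifier

      X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                                 random_state=0)

      models = {
          "single tree (CART)": DecisionTreeClassifier(random_state=0),
          "bagged trees": BaggingClassifier(DecisionTreeClassifier(),
                                            n_estimators=100, random_state=0),
          "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=5)
          print(f"{name:20s} mean CV accuracy = {scores.mean():.3f}")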

  8. Predicting cannabis abuse screening test (CAST) scores: a recursive partitioning analysis using survey data from Czech Republic, Italy, the Netherlands and Sweden.

    PubMed

    Blankers, Matthijs; Frijns, Tom; Belackova, Vendula; Rossi, Carla; Svensson, Bengt; Trautmann, Franz; van Laar, Margriet

    2014-01-01

    Cannabis is Europe's most commonly used illicit drug. Some users do not develop dependence or other problems, whereas others do. Many factors are associated with the occurrence of cannabis-related disorders. This makes it difficult to identify key risk factors and markers to profile at-risk cannabis users using traditional hypothesis-driven approaches. Therefore, the use of a data-mining technique called binary recursive partitioning is demonstrated in this study by creating a classification tree to profile at-risk users. 59 variables on cannabis use and drug market experiences were extracted from an internet-based survey dataset collected in four European countries (Czech Republic, Italy, Netherlands and Sweden), n = 2617. These 59 potential predictors of problematic cannabis use were used to partition individual respondents into subgroups with low and high risk of having a cannabis use disorder, based on their responses on the Cannabis Abuse Screening Test. Both a generic model for the four countries combined and four country-specific models were constructed. Of the 59 variables included in the first analysis step, only three variables were required to construct a generic partitioning model to classify high risk cannabis users with 65-73% accuracy. Based on the generic model for the four countries combined, the highest risk for cannabis use disorder is seen in participants reporting a cannabis use on more than 200 days in the last 12 months. In comparison to the generic model, the country-specific models led to modest, non-significant improvements in classification accuracy, with an exception for Italy (p = 0.01). Using recursive partitioning, it is feasible to construct classification trees based on only a few variables with acceptable performance to classify cannabis users into groups with low or high risk of meeting criteria for cannabis use disorder. The number of cannabis use days in the last 12 months is the most relevant variable. The identified variables may be considered for use in future screeners for cannabis use disorders.

  9. Brain Network Regional Synchrony Analysis in Deafness

    PubMed Central

    Xu, Lei; Liang, Mao-Jin

    2018-01-01

    Deafness, the most common auditory disease, has greatly affected people for a long time. The major treatment for deafness is cochlear implantation (CI). However, to date, there is still a lack of an objective and precise indicator for evaluating the effectiveness of cochlear implantation. The goal of this EEG-based study is to effectively distinguish CI children from prelingually deafened children without cochlear implantation. The proposed method is based on functional connectivity analysis, which focuses on brain network regional synchrony. Specifically, we compute the functional connectivity between each channel pair first. Then, we quantify the brain network synchrony among regions of interest (ROIs), where both intraregional synchrony and interregional synchrony are computed. Finally, the synchrony values are concatenated to form the feature vector for the SVM classifier. In addition, we develop a new ROI partition method for the 128-channel EEG recording system. That is, both the existing ROI partition method and the proposed ROI partition method are used in the experiments. Compared with the existing EEG signal classification methods, our proposed method has achieved significant improvements as large as 87.20% and 86.30% when the existing ROI partition method and the proposed ROI partition method are used, respectively. It further demonstrates that the new ROI partition method is comparable to the existing ROI partition method. PMID:29854776
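
    A simplified version of the feature pipeline can be sketched as follows: channel-pair functional connectivity (plain Pearson correlation is used here as a stand-in for the paper's connectivity measure), averaged within and between assumed ROIs, concatenated into a feature vector, and passed to an SVM. The channel-to-ROI assignment and the synthetic recordings are illustrative only.

      # Minimal sketch of ROI synchrony features + SVM on synthetic "EEG".
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_subjects, n_channels, n_samples = 40, 16, 512
      rois = {"frontal": range(0, 4), "temporal": range(4, 8),
              "parietal": range(8, 12), "occipital": range(12, 16)}

      def synchrony_features(eeg):
          """Intra- and inter-ROI mean connectivity from one recording (channels x time)."""
          conn = np.corrcoef(eeg)                      # channel-by-channel connectivity
          feats, names = [], list(rois)
          for i, a in enumerate(names):
              for b in names[i:]:                      # includes intra-ROI (a == b)
                  block = conn[np.ix_(list(rois[a]), list(rois[b]))]
                  feats.append(block.mean())
          return np.array(feats)

      X = np.stack([synchrony_features(rng.standard_normal((n_channels, n_samples)))
                    for _ in range(n_subjects)])
      y = rng.integers(0, 2, n_subjects)               # e.g. CI vs. non-CI children
      print("feature vector length:", X.shape[1])
      print("CV accuracy (random data, ~chance):",
            cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())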

  10. Molecular phylogeny of the aquatic beetle family Noteridae (Coleoptera: Adephaga) with an emphasis on data partitioning strategies.

    PubMed

    Baca, Stephen M; Toussaint, Emmanuel F A; Miller, Kelly B; Short, Andrew E Z

    2017-02-01

    The first molecular phylogenetic hypothesis for the aquatic beetle family Noteridae is inferred using DNA sequence data from five gene fragments (mitochondrial and nuclear): COI, H3, 16S, 18S, and 28S. Our analysis is the most comprehensive phylogenetic reconstruction of Noteridae to date, and includes 53 species representing all subfamilies, tribes and 16 of the 17 genera within the family. We examine the impact of data partitioning on phylogenetic inference by comparing two different algorithm-based partitioning strategies: one using predefined subsets of the dataset, and another recently introduced method, which uses the k-means algorithm to iteratively divide the dataset into clusters of sites evolving at similar rates across sampled loci. We conducted both maximum likelihood and Bayesian inference analyses using these different partitioning schemes. Resulting trees are strongly incongruent with prior classifications of Noteridae. We recover variant tree topologies and support values among the implemented partitioning schemes. Bayes factors calculated with marginal likelihoods of Bayesian analyses support a priori partitioning over k-means and unpartitioned data strategies. Our study substantiates the importance of data partitioning in phylogenetic inference, and underscores the use of comparative analyses to determine optimal analytical strategies. Our analyses recover Noterini Thomson to be paraphyletic with respect to three other tribes. The genera Suphisellus Crotch and Hydrocanthus Say are also recovered as paraphyletic. Following the results of the preferred partitioning scheme, we here propose a revised classification of Noteridae, comprising two subfamilies, three tribes and 18 genera. The following taxonomic changes are made: Notomicrinae sensu n. (= Phreatodytinae syn. n.) is expanded to include the tribe Phreatodytini; Noterini sensu n. (= Neohydrocoptini syn. n., Pronoterini syn. n., Tonerini syn. n.) is expanded to include all genera of the Noterinae; The genus Suphisellus Crotch is expanded to include species of Pronoterus Sharp syn. n.; and the former subgenus Sternocanthus Guignot stat. rev. is resurrected from synonymy and elevated to genus rank. Copyright © 2016 Elsevier Inc. All rights reserved.
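
    The k-means partitioning strategy mentioned above can be illustrated independently of any particular phylogenetics package: cluster alignment sites by an estimated per-site evolutionary rate so that each cluster can later receive its own substitution model. The per-site rates below are synthetic; real pipelines estimate them from the alignment.

      # Minimal sketch of k-means data partitioning by per-site rate (synthetic rates).
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      site_rates = np.concatenate([rng.gamma(1.0, 0.2, 2500),    # slow sites
                                   rng.gamma(4.0, 0.5, 1500)])   # fast sites

      for k in (2, 3, 4):
          km = KMeans(n_clusters=k, n_init=10, random_state=0)
          subsets = km.fit_predict(site_rates.reshape(-1, 1))
          sizes = np.bincount(subsets)
          means = [site_rates[subsets == i].mean() for i in range(k)]
          print(f"k={k}: subset sizes {sizes.tolist()}, "
                f"mean rates {[round(m, 2) for m in means]}")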

  11. Systems and methods for knowledge discovery in spatial data

    DOEpatents

    Obradovic, Zoran; Fiez, Timothy E.; Vucetic, Slobodan; Lazarevic, Aleksandar; Pokrajac, Dragoljub; Hoskinson, Reed L.

    2005-03-08

    Systems and methods are provided for knowledge discovery in spatial data as well as to systems and methods for optimizing recipes used in spatial environments such as may be found in precision agriculture. A spatial data analysis and modeling module is provided which allows users to interactively and flexibly analyze and mine spatial data. The spatial data analysis and modeling module applies spatial data mining algorithms through a number of steps. The data loading and generation module obtains or generates spatial data and allows for basic partitioning. The inspection module provides basic statistical analysis. The preprocessing module smoothes and cleans the data and allows for basic manipulation of the data. The partitioning module provides for more advanced data partitioning. The prediction module applies regression and classification algorithms on the spatial data. The integration module enhances prediction methods by combining and integrating models. The recommendation module provides the user with site-specific recommendations as to how to optimize a recipe for a spatial environment such as a fertilizer recipe for an agricultural field.

  12. Prediction of passive blood-brain partitioning: straightforward and effective classification models based on in silico derived physicochemical descriptors

    PubMed Central

    Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano

    2010-01-01

    The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of log BB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (log P), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental log BB data had been determined in vivo. In particular, since molecules with log BB > 0.3 cross the blood-brain barrier (BBB) readily while molecules with log BB < −1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the log BB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. PMID:20427217
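
    A minimal sketch of a descriptor-based LDA classifier of this kind is given below, with a handful of made-up (log P, TPSA, acidic + basic atom count) rows standing in for the 307-compound training set; the descriptor values and the candidate compound are purely illustrative.

      # Minimal sketch of an LDA classifier for blood-brain partitioning built on
      # three physicochemical descriptors (illustrative values, not the paper's data).
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # columns: logP, TPSA, number of acidic + basic atoms
      X = np.array([[3.1,  20.0, 0], [2.4,  35.0, 1], [1.8,  45.0, 1],   # BBB+ examples
                    [0.2, 110.0, 3], [-0.5, 95.0, 2], [0.8, 130.0, 4]])  # BBB- examples
      y = np.array([1, 1, 1, 0, 0, 0])   # 1 = log BB > 0.3 (penetrant), 0 = log BB < -1

      lda = LinearDiscriminantAnalysis().fit(X, y)
      candidate = np.array([[2.0, 60.0, 1]])         # hypothetical new compound
      print("predicted class:", lda.predict(candidate)[0])
      print("P(BBB+):", lda.predict_proba(candidate)[0, 1].round(2))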

  13. Prediction of passive blood-brain partitioning: straightforward and effective classification models based on in silico derived physicochemical descriptors.

    PubMed

    Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano

    2010-06-01

    The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of logBB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (logP), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental logBB data had been determined in vivo. In particular, since molecules with logBB>0.3 cross the blood-brain barrier (BBB) readily while molecules with logBB<-1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the logBB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. Published by Elsevier Inc.

  14. Color Image Classification Using Block Matching and Learning

    NASA Astrophysics Data System (ADS)

    Kondo, Kazuki; Hotta, Seiji

    In this paper, we propose block matching and learning for color image classification. In our method, training images are partitioned into small blocks. Given a test image, it is also partitioned into small blocks, and mean-blocks corresponding to each test block are calculated with neighbor training blocks. Our method classifies a test image into the class that has the shortest total sum of distances between mean blocks and test ones. We also propose a learning method for reducing memory requirement. Experimental results show that our classification outperforms other classifiers such as support vector machine with bag of keypoints.
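
    The block matching rule can be sketched with NumPy alone: partition every image into small blocks, compute a per-position mean block for each class from the training blocks, and assign a test image to the class with the smallest total block distance. The two synthetic image classes below are illustrative, and the authors' learning step for reducing memory is omitted.

      # Minimal sketch of block matching classification on toy 16x16 images.
      import numpy as np

      rng = np.random.default_rng(0)
      H = W = 16
      B = 4                                            # block size

      def blocks(img):
          return np.array([img[i:i+B, j:j+B]
                           for i in range(0, H, B) for j in range(0, W, B)])

      def sample(cls, n):
          """Two synthetic classes: bright-left vs. bright-right images."""
          imgs = rng.normal(0.0, 0.3, (n, H, W))
          if cls == 0:
              imgs[:, :, :W // 2] += 1.0               # bright left half
          else:
              imgs[:, :, W // 2:] += 1.0               # bright right half
          return imgs

      train = {c: np.array([blocks(im) for im in sample(c, 30)]) for c in (0, 1)}

      def classify(img):
          test_blocks = blocks(img)
          totals = {}
          for c, tb in train.items():
              mean_blocks = tb.mean(axis=0)            # per-position mean block of class c
              totals[c] = np.sum((test_blocks - mean_blocks) ** 2)
          return min(totals, key=totals.get)

      test_imgs = [(sample(c, 1)[0], c) for c in (0, 1) for _ in range(10)]
      print("toy accuracy:", np.mean([classify(im) == c for im, c in test_imgs]))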

  15. Scalable clustering algorithms for continuous environmental flow cytometry.

    PubMed

    Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill

    2016-02-01

    Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classification of such large-scale, environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. hyrkas@cs.washington.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
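
    The core "partition, then fit a Gaussian mixture per section" idea can be sketched on a toy stream with scikit-learn; the contiguous sections below stand in for quasi-homogeneous stretches of ocean, and the Hadoop-scale machinery of the actual system is not reproduced.

      # Minimal sketch: partition a toy cytometry stream into sections and fit a
      # Gaussian mixture model per section.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)

      def section(shift):
          """Toy 2-population cytometry section (e.g. scatter vs. fluorescence)."""
          a = rng.normal([1.0 + shift, 2.0], 0.2, (500, 2))
          b = rng.normal([3.0 + shift, 1.0], 0.3, (500, 2))
          return np.vstack([a, b])

      stream = [section(shift) for shift in (0.0, 0.5, 1.0)]    # drifting conditions

      for i, X in enumerate(stream):
          gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
          labels = gmm.predict(X)
          print(f"section {i}: component means =", np.round(gmm.means_, 2).tolist(),
                "sizes =", np.bincount(labels).tolist())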

  16. Recursive partitioning analysis (RPA) classification predicts survival in patients with brain metastases from sarcoma.

    PubMed

    Grossman, Rachel; Ram, Zvi

    2014-12-01

    Sarcoma rarely metastasizes to the brain, and there are no specific treatment guidelines for these tumors. The recursive partitioning analysis (RPA) classification is a well-established prognostic scale used in many malignancies. In this study we assessed the clinical characteristics of metastatic sarcoma to the brain and the validity of the RPA classification system in a subset of 21 patients who underwent surgical resection of metastatic sarcoma to the brain. We retrospectively analyzed the medical, radiological, surgical, pathological, and follow-up clinical records of 21 patients who were operated on for metastatic sarcoma to the brain between 1996 and 2012. Gliosarcomas, sarcomas of the head and neck with local extension into the brain, and metastatic sarcomas to the spine were excluded from this reported series. The patients' mean age was 49.6 ± 14.2 years (range, 25-75 years) at the time of diagnosis. Sixteen patients had a known history of systemic sarcoma, mostly in the extremities, and had previously received systemic chemotherapy and radiation therapy for their primary tumor. The mean maximal tumor diameter in the brain was 4.9 ± 1.7 cm (range 1.7-7.2 cm). The group's median preoperative Karnofsky Performance Scale was 80, with 14 patients presenting with a Karnofsky Performance Scale score of 70 or greater. The median overall survival was 7 months (range 0.2-204 months). The median survival times stratified by the Radiation Therapy Oncology Group RPA classes were 31, 7, and 2 months for RPA class I, II, and III, respectively (P = 0.0001). This analysis is the first to support the prognostic utility of the Radiation Therapy Oncology Group RPA classification for sarcoma brain metastases and may be used as a treatment guideline tool in this rare disease. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Computer-based classification of bacteria species by analysis of their colonies Fresnel diffraction patterns

    NASA Astrophysics Data System (ADS)

    Suchwalko, Agnieszka; Buzalewicz, Igor; Podbielska, Halina

    2012-01-01

    In this paper, an optical system with converging spherical wave illumination for the classification of bacteria species is proposed. It allows for compression of the observation space, observation of Fresnel patterns, diffraction pattern scaling, and a low level of optical aberrations, properties not offered by other optical configurations. The experimental results show that colonies of specific bacteria species generate unique diffraction signatures. Analysis of Fresnel diffraction patterns of bacteria colonies can be a fast and reliable method for classification and recognition of bacteria species. To determine the unique features of bacteria colony diffraction patterns, an image processing analysis is proposed. Classification can be performed by analyzing the spatial structure of the diffraction patterns, which can be characterized by a set of concentric rings whose characteristics depend on the bacteria species. In the paper, the influence of basic features and of the ring partitioning number on the bacteria classification is analyzed. It is demonstrated that Fresnel patterns can be used for classification of the following species: Salmonella enteritidis, Staphylococcus aureus, Proteus mirabilis and Citrobacter freundii. Image processing is performed with the free ImageJ software, for which a special macro with human interaction was written. LDA classification, the CV method, ANOVA and PCA visualizations, preceded by image data extraction, were conducted using the free software R.

  18. The process and utility of classification and regression tree methodology in nursing research

    PubMed Central

    Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda

    2014-01-01

    Aim This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Background Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Design Discussion paper. Data sources English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984–2013. Discussion Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Implications for Nursing Research Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Conclusion Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. PMID:24237048

  19. The process and utility of classification and regression tree methodology in nursing research.

    PubMed

    Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda

    2014-06-01

    This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Discussion paper. English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984-2013. Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. © 2013 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.

  20. Aneurysmal subarachnoid hemorrhage prognostic decision-making algorithm using classification and regression tree analysis.

    PubMed

    Lo, Benjamin W Y; Fukuda, Hitoshi; Angle, Mark; Teitelbaum, Jeanne; Macdonald, R Loch; Farrokhyar, Forough; Thabane, Lehana; Levine, Mitchell A H

    2016-01-01

    Classification and regression tree analysis involves the creation of a decision tree by recursive partitioning of a dataset into more homogeneous subgroups. Thus far, there is scarce literature on using this technique to create clinical prediction tools for aneurysmal subarachnoid hemorrhage (SAH). The classification and regression tree analysis technique was applied to the multicenter Tirilazad database (3551 patients) in order to create the decision-making algorithm. In order to elucidate prognostic subgroups in aneurysmal SAH, neurologic, systemic, and demographic factors were taken into account. The dependent variable used for analysis was the dichotomized Glasgow Outcome Score at 3 months. Classification and regression tree analysis revealed seven prognostic subgroups. Neurological grade, occurrence of post-admission stroke, occurrence of post-admission fever, and age represented the explanatory nodes of this decision tree. Split sample validation revealed classification accuracy of 79% for the training dataset and 77% for the testing dataset. In addition, the occurrence of fever at 1-week post-aneurysmal SAH is associated with increased odds of post-admission stroke (odds ratio: 1.83, 95% confidence interval: 1.56-2.45, P < 0.01). A clinically useful classification tree was generated, which serves as a prediction tool to guide bedside prognostication and clinical treatment decision making. This prognostic decision-making algorithm also shed light on the complex interactions between a number of risk factors in determining outcome after aneurysmal SAH.

  1. Study design in high-dimensional classification analysis.

    PubMed

    Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen

    2016-10-01

    Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. An updated evolutionary classification of CRISPR–Cas systems

    PubMed Central

    Makarova, Kira S.; Wolf, Yuri I.; Alkhnbashi, Omer S.; Costa, Fabrizio; Shah, Shiraz A.; Saunders, Sita J.; Barrangou, Rodolphe; Brouns, Stan J. J.; Charpentier, Emmanuelle; Haft, Daniel H.; Horvath, Philippe; Moineau, Sylvain; Mojica, Francisco J. M.; Terns, Rebecca M.; Terns, Michael P.; White, Malcolm F.; Yakunin, Alexander F.; Garrett, Roger A.; van der Oost, John; Backofen, Rolf; Koonin, Eugene V.

    2017-01-01

    The evolution of CRISPR–cas loci, which encode adaptive immune systems in archaea and bacteria, involves rapid changes, in particular numerous rearrangements of the locus architecture and horizontal transfer of complete loci or individual modules. These dynamics complicate straightforward phylogenetic classification, but here we present an approach combining the analysis of signature protein families and features of the architecture of cas loci that unambiguously partitions most CRISPR–cas loci into distinct classes, types and subtypes. The new classification retains the overall structure of the previous version but is expanded to now encompass two classes, five types and 16 subtypes. The relative stability of the classification suggests that the most prevalent variants of CRISPR–Cas systems are already known. However, the existence of rare, currently unclassifiable variants implies that additional types and subtypes remain to be characterized. PMID:26411297

  3. Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery

    NASA Astrophysics Data System (ADS)

    Mahdianpari, Masoud; Salehi, Bahram; Mohammadimanesh, Fariba; Motagh, Mahdi

    2017-08-01

    Wetlands are important ecosystems around the world, although they are being degraded by both anthropogenic and natural processes. Newfoundland is among the richest Canadian provinces in terms of different wetland classes. Herbaceous wetlands cover extensive areas of the Avalon Peninsula, which are the habitat of a number of animal and plant species. In this study, a novel hierarchical object-based Random Forest (RF) classification approach is proposed for discriminating between different wetland classes in a sub-region located in the northeastern portion of the Avalon Peninsula. In particular, multi-polarization and multi-frequency SAR data, including X-band TerraSAR-X single polarized (HH), L-band ALOS-2 dual polarized (HH/HV), and C-band RADARSAT-2 fully polarized images, were applied in different classification levels. First, a SAR backscatter analysis of different land cover types was performed using training data and used in the Level-I classification to separate water from non-water classes. This was followed by Level-II classification, wherein the water class was further divided into shallow- and deep-water classes, and the non-water class was partitioned into herbaceous and non-herbaceous classes. In Level-III classification, the herbaceous class was further divided into bog, fen, and marsh classes, while the non-herbaceous class was subsequently partitioned into urban, upland, and swamp classes. In Level-II and -III classifications, different polarimetric decomposition approaches, including Cloude-Pottier, Freeman-Durden, Yamaguchi decompositions, and Kennaugh matrix elements were extracted to aid the RF classifier. The overall accuracy and kappa coefficient were determined in each classification level for evaluating the classification results. The importance of input features was also determined using the variable importance obtained by RF. It was found that the Kennaugh matrix elements, Yamaguchi, and Freeman-Durden decompositions were the most important parameters for wetland classification in this study. Using this new hierarchical RF classification approach, an overall accuracy of up to 94% was obtained for classifying different land cover types in the study area.
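
    The level-wise logic can be sketched generically: a first random forest separates water from non-water, and a second forest, trained only on non-water samples, resolves the finer classes. The synthetic features below merely stand in for the SAR backscatter and polarimetric decomposition features used in the study.

      # Minimal sketch of a two-level hierarchical random forest classification.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n, d = 3000, 12
      X = rng.standard_normal((n, d))
      fine = rng.integers(0, 4, n)           # 0 water, 1 bog, 2 fen, 3 upland (toy labels)
      X += fine[:, None] * 0.8               # give classes some separation
      coarse = (fine > 0).astype(int)        # level I: water (0) vs. non-water (1)

      Xtr, Xte, ftr, fte, ctr, cte = train_test_split(X, fine, coarse,
                                                      test_size=0.3, random_state=0)

      level1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ctr)
      mask = ftr > 0                                        # non-water training samples
      level2 = RandomForestClassifier(n_estimators=200,
                                      random_state=0).fit(Xtr[mask], ftr[mask])

      pred = level1.predict(Xte)                            # 0 = water
      nonwater = pred == 1
      pred_fine = pred.copy()
      pred_fine[nonwater] = level2.predict(Xte[nonwater])   # resolve the finer classes
      print("overall accuracy:", np.mean(pred_fine == fte).round(3))
      print("level-II feature importances:", np.round(level2.feature_importances_, 2))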

  4. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    Electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stage classification mainly depend on the analysis of EEG signals in the time or frequency domain to obtain a high classification accuracy. In this paper, statistical features in the time domain, the structural graph similarity and the K-means (SGSKM) are combined to identify six sleep stages using single channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments. The size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method.
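
    The first stages of the pipeline, partitioning an EEG segment into sub-segments and extracting time-domain statistics from each, can be sketched as follows; the sampling rate, sub-segment length, and feature list are assumptions, and the structural graph similarity and K-means steps are not reproduced.

      # Minimal sketch: sub-segment partitioning and time-domain feature extraction.
      import numpy as np

      rng = np.random.default_rng(0)
      fs = 100                                  # assumed sampling rate (Hz)
      segment = rng.standard_normal(30 * fs)    # one 30-second epoch of single-channel EEG
      sub_len = 5 * fs                          # sub-segment size, chosen empirically

      subs = segment.reshape(-1, sub_len)       # partition into 6 sub-segments

      def time_domain_features(x):
          return [x.mean(), x.std(), x.min(), x.max(),
                  np.mean(np.abs(np.diff(x))),          # mean absolute first difference
                  ((x[:-1] * x[1:]) < 0).mean()]        # zero-crossing rate

      features = np.array([time_domain_features(s) for s in subs])
      print("feature matrix shape:", features.shape)    # (sub-segments, features)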

  5. Gender, Race, and Survival: A Study in Non-Small-Cell Lung Cancer Brain Metastases Patients Utilizing the Radiation Therapy Oncology Group Recursive Partitioning Analysis Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Videtic, Gregory M.M., E-mail: videtig@ccf.or; Reddy, Chandana A.; Chao, Samuel T.

    Purpose: To explore whether gender and race influence survival in non-small-cell lung cancer (NSCLC) in patients with brain metastases, using our large single-institution brain tumor database and the Radiation Therapy Oncology Group recursive partitioning analysis (RPA) brain metastases classification. Methods and materials: A retrospective review of a single-institution brain metastasis database for the interval January 1982 to September 2004 yielded 835 NSCLC patients with brain metastases for analysis. Patient subsets based on combinations of gender, race, and RPA class were then analyzed for survival differences. Results: Median follow-up was 5.4 months (range, 0-122.9 months). There were 485 male patients (M) (58.4%) and 346 female patients (F) (41.6%). Of the 828 evaluable patients (99%), 143 (17%) were black/African American (B) and 685 (83%) were white/Caucasian (W). Median survival time (MST) from time of brain metastasis diagnosis for all patients was 5.8 months. Median survival time by gender (F vs. M) and race (W vs. B) was 6.3 months vs. 5.5 months (p = 0.013) and 6.0 months vs. 5.2 months (p = 0.08), respectively. For patients stratified by RPA class, gender, and race, MST significantly favored BFs over BMs in Class II: 11.2 months vs. 4.6 months (p = 0.021). On multivariable analysis, significant variables were gender (p = 0.041, relative risk [RR] 0.83) and RPA class (p < 0.0001, RR 0.28 for I vs. III; p < 0.0001, RR 0.51 for II vs. III) but not race. Conclusions: Gender significantly influences NSCLC brain metastasis survival. Race trended to significance in overall survival but was not significant on multivariable analysis. Multivariable analysis identified gender and RPA classification as significant variables with respect to survival.

  6. Various forms of indexing HDMR for modelling multivariate classification problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksu, Çağrı; Tunga, M. Alper

    2014-12-10

    The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems with real-world data. Mostly, we do not know all possible class values in the domain of the given problem, that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the given problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real-life classification problems. To test these different forms, several well-known multivariate classification problems given in the UCI Machine Learning Repository were used, and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.

  7. Modular Digital Missile Guidance, Phase I2

    DTIC Science & Technology

    1976-01-28

    at OMR. The revised study plan was formally approved by ONR on 29 May 1975, confining the simulation analysis work to a Class II missile with...functions analyzed in the Phase I and Phase II studies. It can be seen that where practicable (based on the results of function partitioning trade...As a result of this study, three generic missile families have been established and, relative to this classification, on

  8. Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover

    NASA Astrophysics Data System (ADS)

    Li, Mengmeng; Bijker, Wietske; Stein, Alfred

    2015-04-01

    Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows which in turn involves a two-step procedure. The first step is a preliminarily image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.

  9. Classifying Measures of Biological Variation

    PubMed Central

    Gregorius, Hans-Rolf; Gillet, Elizabeth M.

    2015-01-01

    Biological variation is commonly measured at two basic levels: variation within individual communities, and the distribution of variation over communities or within a metacommunity. We develop a classification for the measurement of biological variation on both levels: Within communities into the categories of dispersion and diversity, and within metacommunities into the categories of compositional differentiation and partitioning of variation. There are essentially two approaches to characterizing the distribution of trait variation over communities in that individuals with the same trait state or type tend to occur in the same community (describes differentiation tendencies), and individuals with different types tend to occur in different communities (describes apportionment tendencies). Both approaches can be viewed from the dual perspectives of trait variation distributed over communities (CT perspective) and community membership distributed over trait states (TC perspective). This classification covers most of the relevant descriptors (qualified measures) of biological variation, as is demonstrated with the help of major families of descriptors. Moreover, the classification is shown to open ways to develop new descriptors that meet current needs. Yet the classification also reveals the misclassification of some prominent and widely applied descriptors: Dispersion is often misclassified as diversity, particularly in cases where dispersion descriptors allow for the computation of effective numbers; the descriptor GST of population genetics is commonly misclassified as compositional differentiation and confused with partitioning-oriented differentiation, whereas it actually measures partitioning-oriented apportionment; descriptors of β-diversity are ambiguous about the differentiation effects they are supposed to represent and therefore require conceptual reconsideration. PMID:25807558

  10. Prognostic Classification Factors Associated With Development of Multiple Autoantibodies, Dysglycemia, and Type 1 Diabetes—A Recursive Partitioning Analysis

    PubMed Central

    Krischer, Jeffrey P.

    2016-01-01

    OBJECTIVE To define prognostic classification factors associated with the progression from single to multiple autoantibodies, multiple autoantibodies to dysglycemia, and dysglycemia to type 1 diabetes onset in relatives of individuals with type 1 diabetes. RESEARCH DESIGN AND METHODS Three distinct cohorts of subjects from the Type 1 Diabetes TrialNet Pathway to Prevention Study were investigated separately. A recursive partitioning analysis (RPA) was used to determine the risk classes. Clinical characteristics, including genotype, antibody titers, and metabolic markers were analyzed. RESULTS Age and GAD65 autoantibody (GAD65Ab) titers defined three risk classes for progression from single to multiple autoantibodies. The 5-year risk was 11% for those subjects >16 years of age with low GAD65Ab titers, 29% for those ≤16 years of age with low GAD65Ab titers, and 45% for those subjects with high GAD65Ab titers regardless of age. Progression to dysglycemia was associated with islet antigen 2 Ab titers, and 2-h glucose and fasting C-peptide levels. The 5-year risk is 28%, 39%, and 51% for respective risk classes defined by the three predictors. Progression to type 1 diabetes was associated with the number of positive autoantibodies, peak C-peptide level, HbA1c level, and age. Four risk classes defined by RPA had a 5-year risk of 9%, 33%, 62%, and 80%, respectively. CONCLUSIONS The use of RPA offered a new classification approach that could predict the timing of transitions from one preclinical stage to the next in the development of type 1 diabetes. Using these RPA classes, new prevention techniques can be tailored based on the individual prognostic risk characteristics at different preclinical stages. PMID:27208341

  11. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    PubMed

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis for early stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification with each group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions are selected from 239 mammograms, and the measurements of Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
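
    A simplified, non-fuzzy version of the grouping idea can be sketched with standard components: an EM-fitted Gaussian mixture partitions the sample space into groups, one SVM is trained per group, and a test sample is routed to the SVM of its most likely group. Synthetic data and plain SVC stand in for mammographic features and the fuzzy SVM.

      # Minimal sketch: EM-based sample space partition + one SVM per group.
      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.svm import SVC
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=1500, n_features=10, n_informative=6,
                                 n_clusters_per_class=3, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

      n_groups = 3
      gmm = GaussianMixture(n_components=n_groups, random_state=0).fit(Xtr)
      group_tr = gmm.predict(Xtr)

      svms = {}
      for g in range(n_groups):
          Xg, yg = Xtr[group_tr == g], ytr[group_tr == g]
          if len(np.unique(yg)) < 2:                    # degenerate group: no SVM
              svms[g] = None
          else:
              svms[g] = SVC(kernel="rbf", gamma="scale").fit(Xg, yg)

      group_te = gmm.predict(Xte)
      pred = np.array([svms[g].predict(x.reshape(1, -1))[0] if svms[g] is not None
                       else np.bincount(ytr).argmax()   # fall back to majority class
                       for g, x in zip(group_te, Xte)])
      print("grouped-SVM test accuracy:", np.mean(pred == yte).round(3))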

  12. Segmentation schema for enhancing land cover identification: A case study using Sentinel 2 data

    NASA Astrophysics Data System (ADS)

    Mongus, Domen; Žalik, Borut

    2018-04-01

    Land monitoring is performed increasingly using high and medium resolution optical satellites, such as Sentinel-2. However, optical data is inevitably subjected to the variable operational conditions under which it was acquired. Overlapping of features caused by shadows, soft transitions between shadowed and non-shadowed regions, and temporal variability of the observed land-cover types require radiometric corrections. This study examines a new approach to enhancing the accuracy of land cover identification that resolves this problem. The proposed method constructs an ensemble-type classification model with weak classifiers tuned to the particular operational conditions under which the data was acquired. Iterative segmentation over the learning set is applied for this purpose, where feature space is partitioned according to the likelihood of misclassifications introduced by the classification model. As these are a consequence of overlapping features, such partitioning avoids the need for radiometric corrections of the data, and divides land cover types implicitly into subclasses. As a result, improved performance was measured for all tested classification approaches during the validation conducted on Sentinel-2 data. The highest accuracies in terms of F1-scores were achieved using the Naive Bayes Classifier as the weak classifier, while supplementing original spectral signatures with normalised difference vegetation index and texture analysis features, namely, average intensity, contrast, homogeneity, and dissimilarity. In total, an F1-score of nearly 95% was achieved in this way, with F1-scores of each particular land cover type reaching above 90%.
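
    A loose sketch of the partitioning idea follows: fit a global weak classifier, partition the learning set according to where it tends to misclassify (out-of-fold predictions are used here as a simple proxy for the paper's iterative segmentation), train one classifier per partition, and route new samples with a separate partition predictor. All data and component choices are illustrative.

      # Minimal sketch: partition the learning set by misclassification likelihood,
      # then train one weak classifier (Gaussian Naive Bayes) per partition.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split, cross_val_predict

      X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                                 flip_y=0.05, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

      # 1. where does a single global NB model go wrong? (out-of-fold predictions)
      oof = cross_val_predict(GaussianNB(), Xtr, ytr, cv=5)
      partition = (oof != ytr).astype(int)          # 0 = easy region, 1 = hard region

      # 2. one weak classifier per partition
      experts = {p: GaussianNB().fit(Xtr[partition == p], ytr[partition == p])
                 for p in (0, 1)}

      # 3. a router predicts which partition a new sample falls into
      router = KNeighborsClassifier(n_neighbors=15).fit(Xtr, partition)
      route = router.predict(Xte)
      pred = np.array([experts[p].predict(x.reshape(1, -1))[0]
                       for p, x in zip(route, Xte)])

      baseline = GaussianNB().fit(Xtr, ytr).score(Xte, yte)
      print("single NB accuracy:     ", round(baseline, 3))
      print("partitioned NB accuracy:", round(np.mean(pred == yte), 3))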

  13. Classification of bacteria by simultaneous methylation-solid phase microextraction and gas chromatography/mass spectrometry analysis of fatty acid methyl esters.

    PubMed

    Lu, Yao; Harrington, Peter B

    2010-08-01

    Direct methylation and solid-phase microextraction (SPME) were used as a sample preparation technique for classification of bacteria based on fatty acid methyl ester (FAME) profiles. Methanolic tetramethylammonium hydroxide was applied as a dual-function reagent to saponify and derivatize whole-cell bacterial fatty acids into FAMEs in one step, and SPME was used to extract the bacterial FAMEs from the headspace. Compared with traditional alkaline saponification and sample preparation using liquid-liquid extraction, the method presented in this work avoids using comparatively large amounts of inorganic and organic solvents and greatly decreases the sample preparation time as well. Characteristic gas chromatography/mass spectrometry (GC/MS) of FAME profiles was achieved for six bacterial species. The difference between Gram-positive and Gram-negative bacteria was clearly visualized with the application of principal component analysis of the GC/MS data of bacterial FAMEs. A cross-validation study using ten bootstrap Latin partitions and the fuzzy rule building expert system demonstrated 87 +/- 3% correct classification efficiency.

  14. Classification, disease, and diagnosis.

    PubMed

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  15. Discovery of novel SERCA inhibitors by virtual screening of a large compound library.

    PubMed

    Elam, Christopher; Lape, Michael; Deye, Joel; Zultowsky, Jodie; Stanton, David T; Paula, Stefan

    2011-05-01

    Two screening protocols based on recursive partitioning and computational ligand docking methodologies, respectively, were employed for virtual screens of a compound library with 345,000 entries for novel inhibitors of the enzyme sarco/endoplasmic reticulum calcium ATPase (SERCA), a potential target for cancer chemotherapy. A total of 72 compounds that were predicted to be potential inhibitors of SERCA were tested in bioassays and 17 displayed inhibitory potencies at concentrations below 100 μM. The majority of these inhibitors were composed of two phenyl rings tethered to each other by a short link of one to three atoms. Putative interactions between SERCA and the inhibitors were identified by inspection of docking-predicted poses and some of the structural features required for effective SERCA inhibition were determined by analysis of the classification pattern employed by the recursive partitioning models. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  16. Recursive partitioning identifies greater than 4 U of packed red blood cells per hour as an improved massive transfusion definition.

    PubMed

    Moren, Alexis Marika; Hamptom, David; Diggs, Brian; Kiraly, Laszlo; Fox, Erin E; Holcomb, John B; Rahbar, Mohammad Hossein; Brasel, Karen J; Cohen, Mitchell Jay; Bulger, Eileen M; Schreiber, Martin A

    2015-12-01

    Massive transfusion (MT) is classically defined as greater than 10 U of packed red blood cells (PRBCs) in 24 hours. This fails to capture the most severely injured patients. Extending the previous work of Savage and Rahbar, a rolling hourly rate-based definition of MT may more accurately define critically injured patients requiring early, aggressive resuscitation. The Prospective Observational Multicenter Major Trauma Transfusion (PROMMTT) trial collected data from 10 Level 1 trauma centers. Patients were placed into rate-based transfusion groups by maximal number of PRBCs transfused in any hour within the first 6 hours. A nonparametric analysis using classification trees partitioned data according to mortality at 24 hours using a predictor variable of maximum number PRBC units transfused in an hour. Dichotomous variables significant in previous scores and models as predictors of MT were used to identify critically ill patients: a positive finding on Focused Assessment with Sonography in Trauma (FAST) examination, Glasgow Coma Scale (GCS) score less than 8, heart rate greater than 120 beats/min, systolic blood pressure less than 90 mm Hg, penetrating mechanism of injury, international normalized ratio greater than 1.5, hemoglobin less than 11, and base deficit greater than 5. These critical indicators were then compared among the nodes of the classification tree. Patients omitted included those who did not receive PRBCs (n = 24) and those who did not have all eight critical indicators reported (n = 449). In a population of 1,245 patients, the classification tree included 772 patients. Analysis by recursive partitioning showed increased mortality among patients receiving greater than 13 U/h (73.9%, p < 0.01). In those patients receiving less than or equal to 13 U/h, mortality was greater in patients who received more than 4 U/h (16.7% vs. 6.0%, p < 0.01) (Fig. 1). Nodal analysis showed that the median number of critical indicators for each node was 3 (2-4) (≤4 U/h), 4 (3-5) (>4 U/h and ≤13 U/h), and 5 (4-5.5) (>13 U/h). A rate-based transfusion definition identifies a difference in mortality in patients who receive greater than 4 U/h of PRBCs. Redefining MT to greater than 4 U/h allows early identification of patients with a significant mortality risk who may be missed by the current definition. Prognostic/epidemiologic study, level III.

  17. Relation between financial market structure and the real economy: comparison between clustering methods.

    PubMed

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging [corrected].
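
    As a rough illustration of the workflow (not the authors' DBHT implementation), the sketch below builds a correlation-based distance matrix from simulated returns, applies average linkage as a stand-in clustering method, and scores agreement with a hypothetical benchmark sector partition.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(1)
        returns = rng.normal(size=(500, 40))        # 500 days x 40 hypothetical stocks
        sectors = rng.integers(0, 5, size=40)       # hypothetical benchmark sectors

        corr = np.corrcoef(returns, rowvar=False)
        dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))  # common correlation-to-distance map
        np.fill_diagonal(dist, 0.0)

        Z = linkage(squareform(dist, checks=False), method="average")
        labels = fcluster(Z, t=5, criterion="maxclust")
        print("adjusted Rand index vs. sectors:", adjusted_rand_score(sectors, labels))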

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.

    Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.

  19. Relation between Financial Market Structure and the Real Economy: Comparison between Clustering Methods

    PubMed Central

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T.

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging. PMID:25786703

  20. Using local climate zone classifications to assess the influence of urban morphology on the urban heat island effect

    NASA Astrophysics Data System (ADS)

    Satcher, P. S.; Brunsell, N. A.

    2017-12-01

    Alterations to land cover resulting from urbanization interact with the atmospheric boundary layer inducing elevated surface and air temperatures, changes to the surface energy balance (SEB), and modifications to regional circulations and climates. These changes pose risks to public health and ecological systems and have the potential to affect economic interests. We used Google Earth Engine's Landsat archive to classify local climate zones (LCZ) that consist of ten urban and seven non-urban classes to examine the influence of urban morphology on the urban heat island (UHI) effect. We used geostatistical methods to determine the significance of the spatial distributions of LCZs with respect to land surface temperature (LST) and normalized difference vegetation index (NDVI) products from the Moderate Resolution Imaging Spectroradiometer (MODIS). We used the triangle method to assess the variability of SEB partitioning in relation to high, medium, and low density LCZ classes. Fractional vegetation cover (Fr) was calculated using NDVI data. Linear regressions of observations in Fr-LST space for select LCZ classes were compared for selected eight-day periods to determine changes in energy partitioning and relative soil moisture availability. The magnitude of each flux is not needed to determine changes to the SEB: the regressions examine near-surface soil moisture, which is indicative of how much radiation is partitioned into evaporation. To compare changes occurring over one decade, we used MODIS NDVI and LST data from 2005 and 2015. Results indicated that variations in the SEB can be detected using the LCZ classification method. The results from analysis in Fr-LST space of the annual cycles over several years can be used to detect changes in the SEB as urbanization increases.
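
    The fractional-vegetation-cover and Fr-LST regression steps can be written down compactly. The sketch below is a generic illustration: the NDVI and LST arrays are simulated, and the scaled-NDVI formulation of Fr and the endpoint values are assumptions rather than the exact choices of this study.

        import numpy as np

        rng = np.random.default_rng(2)
        ndvi = rng.uniform(0.1, 0.8, size=1000)                      # pixels within one LCZ class
        lst = 320.0 - 40.0 * ndvi + rng.normal(0.0, 2.0, size=1000)  # simulated LST in kelvin

        ndvi_min, ndvi_max = 0.05, 0.85    # scene-dependent bare-soil / full-cover endpoints
        fr = ((ndvi - ndvi_min) / (ndvi_max - ndvi_min)) ** 2        # one common Fr formulation

        slope, intercept = np.polyfit(fr, lst, deg=1)                # regression in Fr-LST space
        print(f"LST ~ {slope:.1f} * Fr + {intercept:.1f}")           # slope and intercept are then
                                                                     # interpreted in terms of SEB
                                                                     # partitioning and soil moisture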

  1. A Classification of Designated Logic Systems

    DTIC Science & Technology

    1988-02-01

    List of figures (recoverable excerpt): Two Classifications of Designated Logic Systems; Two Partitions of Two-valued Logic Systems.

  2. Associating Liver Partition and Portal Vein Ligation for Primary Hepatobiliary Malignancies and Non-Colorectal Liver Metastases.

    PubMed

    Björnsson, B; Sparrelid, E; Hasselgren, K; Gasslander, T; Isaksson, B; Sandström, P

    2016-09-01

    Associating liver partition and portal vein ligation for staged hepatectomy may increase the possibility of radical resection in the case of liver malignancy. Concerns have been raised about the high morbidity and mortality associated with the procedure, particularly when applied for diagnoses other than colorectal liver metastases. The aim of this study was to analyze the initial experience with associating liver partition and portal vein ligation for staged hepatectomy in cases of non-colorectal liver metastases and primary hepatobiliary malignancies in Scandinavia. A retrospective analysis of all associating liver partition and portal vein ligation for staged hepatectomy procedures performed at two Swedish university hospitals for non-colorectal liver metastases and primary hepatobiliary malignancies was performed. The primary focus was on the safety of the procedure. Ten patients were included: four had hepatocellular cancer, three had intrahepatic cholangiocarcinoma, one had a Klatskin tumor, one had ocular melanoma metastasis, and one had a metastasis from a Wilms' tumor. All patients completed both operations, and the highest grade of complication (according to the Clavien-Dindo classification) was 3A, which was observed in one patient. No 90-day mortality was observed. Radical resection (R0) was achieved in nine patients, while the resection was R2 in one patient. The low morbidity and mortality observed in this cohort compared with those of earlier reports on associating liver partition and portal vein ligation for staged hepatectomy for diagnoses other than colorectal liver metastases may be related to the selection of patients with limited comorbidity. In addition, procedures other than associating liver partition and portal vein ligation for staged hepatectomy had been avoided in most of the patients. In conclusion, associating liver partition and portal vein ligation for staged hepatectomy can be applied to primary hepatobiliary malignancies and non-colorectal liver metastases with acceptable rates of morbidity and mortality. © The Finnish Surgical Society 2016.

  3. Verification of hydrologic landscape derived basin-scale classifications in the Pacific Northwest

    Treesearch

    Keith Sawicz

    2016-01-01

    The interaction between the physical and climatic attributes of a basin (form) control how water is partitioned, stored, and conveyed through a catchment (function). Hydrologic Landscapes (HLs) were previously...

  4. Prognostic Classification Factors Associated With Development of Multiple Autoantibodies, Dysglycemia, and Type 1 Diabetes-A Recursive Partitioning Analysis.

    PubMed

    Xu, Ping; Krischer, Jeffrey P

    2016-06-01

    To define prognostic classification factors associated with the progression from single to multiple autoantibodies, multiple autoantibodies to dysglycemia, and dysglycemia to type 1 diabetes onset in relatives of individuals with type 1 diabetes. Three distinct cohorts of subjects from the Type 1 Diabetes TrialNet Pathway to Prevention Study were investigated separately. A recursive partitioning analysis (RPA) was used to determine the risk classes. Clinical characteristics, including genotype, antibody titers, and metabolic markers were analyzed. Age and GAD65 autoantibody (GAD65Ab) titers defined three risk classes for progression from single to multiple autoantibodies. The 5-year risk was 11% for those subjects >16 years of age with low GAD65Ab titers, 29% for those ≤16 years of age with low GAD65Ab titers, and 45% for those subjects with high GAD65Ab titers regardless of age. Progression to dysglycemia was associated with islet antigen 2 Ab titers, and 2-h glucose and fasting C-peptide levels. The 5-year risk is 28%, 39%, and 51% for respective risk classes defined by the three predictors. Progression to type 1 diabetes was associated with the number of positive autoantibodies, peak C-peptide level, HbA1c level, and age. Four risk classes defined by RPA had a 5-year risk of 9%, 33%, 62%, and 80%, respectively. The use of RPA offered a new classification approach that could predict the timing of transitions from one preclinical stage to the next in the development of type 1 diabetes. Using these RPA classes, new prevention techniques can be tailored based on the individual prognostic risk characteristics at different preclinical stages. © 2016 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  5. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the accuracy of EWB of 59.86%. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
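
    A compressed illustration of the idea, under simplifying assumptions, is shown below: a single simulated continuous attribute is discretised either with equal-width bins or with cut points tuned by simulated annealing, and each partition is scored by the accuracy of a per-bin majority-class rule (a crude stand-in for the rough set rule accuracy used in the paper).

        import numpy as np

        rng = np.random.default_rng(4)
        x = rng.normal(size=2000)                                      # one simulated attribute
        y = (x + rng.normal(scale=0.8, size=2000) > 0.3).astype(int)   # toy binary outcome

        def accuracy(edges):
            """Accuracy of a majority-class rule applied within each partition."""
            bins = np.digitize(x, np.sort(edges))
            pred = np.zeros_like(y)
            for b in np.unique(bins):
                mask = bins == b
                pred[mask] = int(np.round(y[mask].mean()))
            return float((pred == y).mean())

        # Equal-width-bin baseline: 4 partitions -> 3 interior cut points.
        ew_edges = np.linspace(x.min(), x.max(), 5)[1:-1]
        print("equal-width accuracy:", accuracy(ew_edges))

        # Simulated annealing over the interior cut points.
        edges, cur, temp = ew_edges.copy(), accuracy(ew_edges), 1.0
        best_edges, best_acc = edges.copy(), cur
        for _ in range(2000):
            cand = edges + rng.normal(scale=0.1, size=edges.size)
            acc = accuracy(cand)
            if acc > cur or rng.random() < np.exp((acc - cur) / temp):
                edges, cur = cand, acc
                if cur > best_acc:
                    best_edges, best_acc = edges.copy(), cur
            temp *= 0.998
        print("annealed accuracy:", best_acc)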

  6. Partition functions for heterotic WZW conformal field theories

    NASA Astrophysics Data System (ADS)

    Gannon, Terry

    1993-08-01

    Thus far in the search for, and classification of, "physical" modular invariant partition functions Σ_{L,R} N_{LR} χ_L χ_R*, the attention has been focused on the symmetric case where the holomorphic and anti-holomorphic sectors, and hence the characters χ_L and χ_R, are associated with the same Kac-Moody algebras ĝ_L = ĝ_R and levels κ_L = κ_R. In this paper we consider the more general possibility where (ĝ_L, κ_L) may not equal (ĝ_R, κ_R). We discuss which choices of algebras and levels may correspond to well-defined conformal field theories, we find the "smallest" such heterotic (i.e. asymmetric) partition functions, and we give a method, generalizing the Roberts-Terao-Warner lattice method, for explicitly constructing many other modular invariants. We conclude the paper by proving that this new lattice method will succeed in generating all the heterotic partition functions, for all choices of algebras and levels.
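
    For orientation, the genus-one consistency conditions behind this classification problem are usually stated roughly as follows (a compact, convention-dependent summary, not specific to this paper's construction):

        Z(\tau,\bar\tau) = \sum_{L,R} N_{LR}\, \chi_L(\tau)\, \chi_R(\bar\tau)^{*},
        \qquad N_{LR} \in \mathbb{Z}_{\ge 0}, \qquad N_{00} = 1,

        S^{(L)} N = N\, S^{(R)}, \qquad
        h_L - h_R - \frac{c_L - c_R}{24} \in \mathbb{Z} \quad \text{whenever } N_{LR} \neq 0 .

    The first line asks for non-negative integer multiplicities and a unique vacuum; the second expresses invariance under the modular S and T transformations, the T condition in particular forcing c_L - c_R to be a multiple of 24 once the vacuum couples to itself.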

  7. Non-crystallographic nets: characterization and first steps towards a classification.

    PubMed

    Moreira de Oliveira, Montauban; Eon, Jean Guillaume

    2014-05-01

    Non-crystallographic (NC) nets are periodic nets characterized by the existence of non-trivial bounded automorphisms. Such automorphisms cannot be associated with any crystallographic symmetry in realizations of the net by crystal structures. It is shown that bounded automorphisms of finite order form a normal subgroup F(N) of the automorphism group of NC nets (N, T). As a consequence, NC nets are unstable nets (they display vertex collisions in any barycentric representation) and, conversely, stable nets are crystallographic nets. The labelled quotient graphs of NC nets are characterized by the existence of an equivoltage partition (a partition of the vertex set that preserves label vectors over edges between cells). A classification of NC nets is proposed on the basis of (i) their relationship to the crystallographic net with a homeomorphic barycentric representation and (ii) the structure of the subgroup F(N).

  8. Environmental clustering of lakes to evaluate performance of a macrophyte index of biotic integrity

    USGS Publications Warehouse

    Vondracek, Bruce C.; Vondracek, Bruce; Hatch, Lorin K.

    2013-01-01

    Proper classification of sites is critical for the use of biological indices that can distinguish between natural and human-induced variation in biological response. The macrophyte-based index of biotic integrity (IBI) was developed to assess the condition of Minnesota lakes in relation to anthropogenic stressors, but macrophyte community composition varies naturally across the state. The goal of the study was to identify environmental characteristics that naturally influence macrophyte index response and establish a preliminary lake classification scheme for biological assessment (bioassessment). Using a comprehensive set of environmental variables, we identified similar groups of lakes by clustering using flexible beta classification. Variance partitioning analysis of IBI response indicated that evaluating similar lake clusters could improve the ability of the macrophyte index to identify community change in response to anthropogenic stressors, although lake groups did not fully account for the natural variation in macrophyte composition. Diagnostic capabilities of the index could be improved when evaluating lakes with similar environmental characteristics, suggesting the index has potential for accurate bioassessment provided comparable groups of lakes are evaluated.

  9. Using near infrared spectroscopy to classify soybean oil according to expiration date.

    PubMed

    da Costa, Gean Bezerra; Fernandes, David Douglas Sousa; Gomes, Adriano A; de Almeida, Valber Elias; Veras, Germano

    2016-04-01

    A rapid and non-destructive methodology is proposed for the screening of edible vegetable oils according to conservation state (expiration date) employing near infrared (NIR) spectroscopy and chemometric tools. A total of fifty samples of soybean vegetable oil, of different brands and lots, were used in this study; these included thirty expired and twenty non-expired samples. The oil oxidation was measured by peroxide index. NIR spectra were employed in raw form and preprocessed by offset baseline correction and Savitzky-Golay derivative procedure, followed by PCA exploratory analysis, which showed that NIR spectra would be suitable for the classification task of soybean oil samples. The classification models were based on SPA-LDA (Linear Discriminant Analysis coupled with the Successive Projection Algorithm) and PLS-DA (Partial Least Squares Discriminant Analysis). The set of samples (50) was partitioned into two groups of training (35 samples: 15 non-expired and 20 expired) and test samples (15 samples: 5 non-expired and 10 expired) using sample-selection approaches: (i) Kennard-Stone, (ii) Duplex, and (iii) Random, in order to evaluate the robustness of the models. The obtained results for the independent test set (in terms of correct classification rate) were 96% and 98% for SPA-LDA and PLS-DA, respectively, indicating that the NIR spectra can be used as an alternative to evaluate the degree of oxidation of soybean oil samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
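
    The sample-partitioning and PLS-DA steps can be sketched as follows; the spectra are simulated, PLS-DA is emulated by regressing a 0/1 class indicator (a common implementation choice), and the Kennard-Stone routine below is a generic textbook version rather than the authors' code.

        import numpy as np
        from scipy.spatial.distance import cdist
        from sklearn.cross_decomposition import PLSRegression

        def kennard_stone(X, n_select):
            d = cdist(X, X)
            i, j = np.unravel_index(np.argmax(d), d.shape)
            selected = [i, j]
            while len(selected) < n_select:
                rest = [k for k in range(len(X)) if k not in selected]
                # pick the sample farthest from its nearest already-selected neighbour
                next_idx = rest[np.argmax(d[rest][:, selected].min(axis=1))]
                selected.append(next_idx)
            return np.array(selected)

        rng = np.random.default_rng(5)
        X = rng.normal(size=(50, 200))                 # 50 simulated spectra x 200 wavelengths
        y = np.array([0] * 20 + [1] * 30)              # 0 = non-expired, 1 = expired

        train = kennard_stone(X, 35)
        test = np.setdiff1d(np.arange(50), train)

        pls = PLSRegression(n_components=5).fit(X[train], y[train])
        pred = (pls.predict(X[test]).ravel() > 0.5).astype(int)
        print("correct classification rate:", (pred == y[test]).mean())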

  10. Learning semantic histopathological representation for basal cell carcinoma classification

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo

    2013-03-01

    Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows one to describe histopathological concepts suitable for classification. The approach herein identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrence between atoms, while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. Those images fed a Support Vector Machine classifier per class, using 120 images as a train set and the remaining ones for testing, maintaining the same proportion of each concept in the train and test sets. The obtained classification results, averaged from 100 random partitions of training and test sets, show that our approach is, on average, almost 6% more sensitive than the bag-of-features representation.
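
    A highly simplified sketch of this pipeline is given below: a dictionary is learned from random patches, each image is summarised by a histogram over its dominant atoms (a simplification of the distance-penalised co-occurrence counting described above), and a linear SVM performs the tissue classification. All sizes, labels, and data are illustrative.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.svm import SVC

        rng = np.random.default_rng(6)
        patches = rng.normal(size=(2000, 64))        # flattened 8x8 patches, simulated
        dico = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="omp",
                                           transform_n_nonzero_coefs=3, random_state=0)
        codes = dico.fit(patches).transform(patches) # sparse code of every patch

        def image_descriptor(img_codes):
            # dominant atom per patch, then a normalised atom histogram
            atoms = np.abs(img_codes).argmax(axis=1)
            hist = np.bincount(atoms, minlength=32).astype(float)
            return hist / hist.sum()

        # pretend each "field of view" contributes 100 consecutive patches
        X = np.array([image_descriptor(codes[i:i + 100]) for i in range(0, 2000, 100)])
        y = rng.integers(0, 2, size=len(X))          # hypothetical tissue labels
        clf = SVC(kernel="linear").fit(X, y)
        print("training accuracy:", clf.score(X, y))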

  11. Computation offloading for real-time health-monitoring devices.

    PubMed

    Kalantarian, Haik; Sideris, Costas; Tuan Le; Hosseini, Anahita; Sarrafzadeh, Majid

    2016-08-01

    Among the major challenges in the development of real-time wearable health monitoring systems is to optimize battery life. One of the major techniques with which this objective can be achieved is computation offloading, in which portions of computation can be partitioned between the device and other resources such as a server or cloud. In this paper, we describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data between the wearable device and mobile application as a function of desired classification accuracy.

  12. Cluster Analysis in Nursing Research: An Introduction, Historical Perspective, and Future Directions.

    PubMed

    Dunn, Heather; Quinn, Laurie; Corbridge, Susan J; Eldeirawi, Kamal; Kapella, Mary; Collins, Eileen G

    2017-05-01

    The use of cluster analysis in the nursing literature is limited to the creation of classifications of homogeneous groups and the discovery of new relationships. As such, it is important to provide clarity regarding its use and potential. The purpose of this article is to provide an introduction to distance-based, partitioning-based, and model-based cluster analysis methods commonly utilized in the nursing literature, provide a brief historical overview on the use of cluster analysis in nursing literature, and provide suggestions for future research. An electronic search included three bibliographic databases, PubMed, CINAHL and Web of Science. Key terms were cluster analysis and nursing. The use of cluster analysis in the nursing literature is increasing and expanding. The increased use of cluster analysis in the nursing literature is positioning this statistical method to result in insights that have the potential to change clinical practice.

  13. Congenital portosystemic shunts in children: a new anatomical classification correlated with surgical strategy.

    PubMed

    Blanc, Thomas; Guerin, Florent; Franchi-Abella, Stéphanie; Jacquemin, Emmanuel; Pariente, Danièle; Soubrane, Olivier; Branchereau, Sophie; Gauthier, Frédéric

    2014-07-01

    To propose an anatomical classification of congenital portosystemic shunts (CPSs) correlating with conservative surgery. CPSs entail a risk of life-threatening complications because of poor portal inflow, which may be prevented or cured by their closure. Current classifications based on portal origin of the shunt are not helpful for planning conservative surgery. Twenty-three patients who underwent at least 1 surgical procedure to close the CPSs were included in this retrospective study (1997-2012). We designed a classification according to the ending of the shunt in the caval system. We analyzed the results and outcomes of surgery according to this classification. Two patients had an extrahepatic portosystemic shunt, 17 had a portacaval shunt [subdivided in 5 end-to-side-like portal-caval, 7 side-to-side-like portal-caval, and 5 H-shaped (H-type portal-caval)], 2 had portal-to-hepatic vein shunts (portohepatic), and 2 had a persistent ductus venosus. All extrahepatic portosystemic shunts, H-type portal-caval, portohepatic, and patent ductus venosus patients had a successful 1-stage ligation. All 5 end-to-side-like portal-caval patients had a threadlike intrahepatic portal venous system; a 2-stage complete closure was successfully achieved for 4 and a partial closure for 1. The first 2 side-to-side-like portal-caval patients had a successful 2-stage closure whereas the 5 others had a 1-stage longitudinal caval partition. All patients are alive and none needed a liver transplantation. Our classification correlates the anatomy of CPSs and the surgical strategy: outcomes are good provided end-to-side-like portal-caval shunts patients have a 2-stage closure, side-to-side portal-caval shunts patients have a 1-stage caval partition, and the others have a 1-stage ligation.

  14. Hyperspectral image segmentation using a cooperative nonparametric approach

    NASA Astrophysics Data System (ADS)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

    In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel and intermediate classification results are evaluated and fused to get the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach relies firstly on its local adaptation to the type of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduced various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications using, respectively, a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97% and for the second application the average correct classification rate (ACCR) is over 99%.
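
    A toy version of the band-by-band scheme is sketched below. KMeans stands in for the FCM/LBG pair, the hyperspectral cube is simulated, and the fusion step is reduced to a majority vote; the paper's multi-level evaluation, validation, and texture handling are not reproduced.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        cube = rng.normal(size=(64, 64, 10))          # rows x cols x spectral bands
        n_classes = 3

        per_band_labels = []
        for b in range(cube.shape[2]):
            band = cube[:, :, b].reshape(-1, 1)       # one band, pixels as samples
            labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(band)
            per_band_labels.append(labels)

        # Naive fusion by per-pixel majority vote. A real system must first align
        # cluster labels across bands before voting, which is part of what the
        # intermediate evaluation steps described above take care of.
        stacked = np.stack(per_band_labels)           # bands x pixels
        fused = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stacked)
        segmentation = fused.reshape(64, 64)
        print(segmentation.shape)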

  15. Inference of the Activity Timeline of Cattle Foraging on a Mediterranean Woodland Using GPS and Pedometry

    PubMed Central

    Ungar, Eugene D.; Schoenbaum, Iris; Henkin, Zalmen; Dolev, Amit; Yehuda, Yehuda; Brosh, Arieh

    2011-01-01

    The advent of the Global Positioning System (GPS) has transformed our ability to track livestock on rangelands. However, GPS data use would be greatly enhanced if we could also infer the activity timeline of an animal. We tested how well animal activity could be inferred from data provided by Lotek GPS collars, alone or in conjunction with IceRobotics IceTag pedometers. The collars provide motion and head position data, as well as location. The pedometers count steps, measure activity levels, and differentiate between standing and lying positions. We gathered synchronized data at 5-min resolution, from GPS collars, pedometers, and human observers, for free-grazing cattle (n = 9) at the Hatal Research Station in northern Israel. Equations for inferring activity during 5-min intervals (n = 1,475), classified as Graze, Rest (or Lie and Stand separately), and Travel were derived by discriminant and partition (classification tree) analysis of data from each device separately and from both together. When activity was classified as Graze, Rest and Travel, the lowest overall misclassification rate (10%) was obtained when data from both devices together were subjected to partition analysis; separate misclassification rates were 8, 12, and 3% for Graze, Rest and Travel, respectively. When Rest was subdivided into Lie and Stand, the lowest overall misclassification rate (10%) was again obtained when data from both devices together were subjected to partition analysis; misclassification rates were 6, 1, 26, and 17% for Graze, Lie, Stand, and Travel, respectively. The primary problem was confusion between Rest (or Stand) and Graze. Overall, the combination of Lotek GPS collars with IceRobotics IceTag pedometers was found superior to either device alone in inferring animal activity. PMID:22346582

  16. Verification of Hydrologic Landscape Derived Basin-Scale Classifications in the Pacific Northwest

    EPA Science Inventory

    The interaction between the physical properties of a catchment (form) and climatic forcing of precipitation and energy control how water is partitioned, stored, and conveyed through a catchment (function). Hydrologic Landscapes (HLs) were previously developed across Oregon and de...

  17. SLOCC classification of n qubits invoking the proportional relationships for spectrums and standard Jordan normal forms

    NASA Astrophysics Data System (ADS)

    Li, Dafa

    2018-01-01

    We investigate the proportional relationships for spectrums and standard Jordan normal forms (SJNFs) of the 4 by 4 matrices constructed from coefficient matrices of two SLOCC (stochastic local operations and classical communication) equivalent states of n qubits. The proportional relationships permit a reduction of the SLOCC classification of n (≥ 4) qubits to a classification of 4 by 4 complex matrices. Invoking the proportional relationships for spectrums and SJNFs, pure states of n (≥ 4) qubits are partitioned into at most 12 groups and at most 34 families under SLOCC, respectively. In particular, the same holds for four qubits.

  18. Analyzing Sub-Classifications of Glaucoma via SOM Based Clustering of Optic Nerve Images.

    PubMed

    Yan, Sanjun; Abidi, Syed Sibte Raza; Artes, Paul Habib

    2005-01-01

    We present a data mining framework to cluster optic nerve images obtained by Confocal Scanning Laser Tomography (CSLT) in normal subjects and patients with glaucoma. We use self-organizing maps and expectation maximization methods to partition the data into clusters that provide insights into potential sub-classification of glaucoma based on morphological features. We conclude that our approach provides a first step towards a better understanding of morphological features in optic nerve images obtained from glaucoma patients and healthy controls.

  19. Semantic Classification of Diseases in Discharge Summaries Using a Context-aware Rule-based Classifier

    PubMed Central

    Solt, Illés; Tikk, Domonkos; Gál, Viktor; Kardkovács, Zsolt T.

    2009-01-01

    Objective Automated and disease-specific classification of textual clinical discharge summaries is of great importance in human life science, as it helps physicians to make medical studies by providing statistically relevant data for analysis. This can be further facilitated if, at the labeling of discharge summaries, semantic labels are also extracted from text, such as whether a given disease is present, absent, questionable in a patient, or is unmentioned in the document. The authors present a classification technique that successfully solves the semantic classification task. Design The authors introduce a context-aware rule-based semantic classification technique for use on clinical discharge summaries. The classification is performed in subsequent steps. First, some misleading parts are removed from the text; then the text is partitioned into positive, negative, and uncertain context segments, then a sequence of binary classifiers is applied to assign the appropriate semantic labels. Measurement For evaluation the authors used the documents of the i2b2 Obesity Challenge and adopted its evaluation measures: F1-macro and F1-micro for measurements. Results On the two subtasks of the Obesity Challenge (textual and intuitive classification) the system performed very well, and achieved a F1-macro = 0.80 for the textual and F1-macro = 0.67 for the intuitive tasks, and obtained second place at the textual and first place at the intuitive subtasks of the challenge. Conclusions The authors show in the paper that a simple rule-based classifier can tackle the semantic classification task more successfully than machine learning techniques, if the training data are limited and some semantic labels are very sparse. PMID:19390101
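
    The core idea of context partitioning can be illustrated with a few lines of rule-based code. The trigger lists, clause splitting, and labels below are toy assumptions, far smaller than the rule sets used for the i2b2 challenge.

        import re

        NEGATION = ("no evidence of", "denies", "without", "negative for")
        UNCERTAIN = ("possible", "questionable", "cannot rule out", "suspected")

        def semantic_label(summary, disease):
            text = summary.lower()
            if disease not in text:
                return "unmentioned"
            # restrict the context to the clause in which the disease is mentioned
            clause = next(c for c in re.split(r"[;.,]", text) if disease in c)
            if any(trigger in clause for trigger in NEGATION):
                return "absent"
            if any(trigger in clause for trigger in UNCERTAIN):
                return "questionable"
            return "present"

        sentence = "Patient denies chest pain; possible asthma noted."
        print(semantic_label(sentence, "chest pain"))   # absent
        print(semantic_label(sentence, "asthma"))       # questionable
        print(semantic_label(sentence, "obesity"))      # unmentioned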

  20. Identification and classification of traditional Chinese medicine syndrome types among senior patients with vascular mild cognitive impairment using latent tree analysis.

    PubMed

    Fu, Chen; Zhang, Nevin Lianwen; Chen, Bao-Xin; Chen, Zhou Rong; Jin, Xiang Lan; Guo, Rong-Juan; Chen, Zhi-Gang; Zhang, Yun-Ling

    2017-05-01

    To treat patients with vascular mild cognitive impairment (VMCI) using traditional Chinese medicine (TCM), it is necessary to classify the patients into TCM syndrome types and to apply different treatments to different types. In this paper, we investigate how to properly carry out the classification for patients with VMCI aged 50 or above using a novel data-driven method known as latent tree analysis (LTA). A cross-sectional survey on VMCI was carried out in several regions in Northern China between February 2008 and February 2012 which resulted in a data set that involves 803 patients and 93 symptoms. LTA was performed on the data to reveal symptom co-occurrence patterns, and the patients were partitioned into clusters in multiple ways based on the patterns. The patient clusters were matched up with syndrome types, and population statistics of the clusters are used to quantify the syndrome types and to establish classification rules. Eight syndrome types are identified: Qi deficiency, Qi stagnation, Blood deficiency, Blood stasis, Phlegm-dampness, Fire-heat, Yang deficiency, and Yin deficiency. The prevalence and symptom occurrence characteristics of each syndrome type are determined. Quantitative classification rules are established for determining whether a patient belongs to each of the syndrome types. A solution for the TCM syndrome classification problem for patients with VMCI and aged 50 or above is established based on the LTA of unlabeled symptom survey data. The results can be used as a reference in clinic practice to improve the quality of syndrome differentiation and to reduce diagnosis variances across physicians. They can also be used for patient selection in research projects aimed at finding biomarkers for the syndrome types and in randomized control trials aimed at determining the efficacy of TCM treatments of VMCI.

  1. Ensemble Sparse Classification of Alzheimer’s Disease

    PubMed Central

    Liu, Manhua; Zhang, Daoqiang; Shen, Dinggang

    2012-01-01

    The high-dimensional pattern classification methods, e.g., support vector machines (SVM), have been widely investigated for analysis of structural and functional brain images (such as magnetic resonance imaging (MRI)) to assist the diagnosis of Alzheimer’s disease (AD) including its prodromal stage, i.e., mild cognitive impairment (MCI). Most existing classification methods extract features from neuroimaging data and then construct a single classifier to perform classification. However, due to noise and small sample size of neuroimaging data, it is challenging to train only a global classifier that can be robust enough to achieve good classification performance. In this paper, instead of building a single global classifier, we propose a local patch-based subspace ensemble method which builds multiple individual classifiers based on different subsets of local patches and then combines them for more accurate and robust classification. Specifically, to capture the local spatial consistency, each brain image is partitioned into a number of local patches and a subset of patches is randomly selected from the patch pool to build a weak classifier. Here, the sparse representation-based classification (SRC) method, which has shown effective for classification of image data (e.g., face), is used to construct each weak classifier. Then, multiple weak classifiers are combined to make the final decision. We evaluate our method on 652 subjects (including 198 AD patients, 225 MCI and 229 normal controls) from Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using MR images. The experimental results show that our method achieves an accuracy of 90.8% and an area under the ROC curve (AUC) of 94.86% for AD classification and an accuracy of 87.85% and an AUC of 92.90% for MCI classification, respectively, demonstrating a very promising performance of our method compared with the state-of-the-art methods for AD/MCI classification using MR images. PMID:22270352
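
    For a single weak learner, sparse representation-based classification can be sketched as below: the test sample is coded over the training samples (the dictionary) and assigned to the class whose atoms give the smallest reconstruction residual. Everything here is simulated, and the OMP coder is one possible choice of sparse solver, not necessarily the one used in the paper.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(8)
        n_per_class, dim = 30, 50
        X_train = np.vstack([rng.normal(loc=c, size=(n_per_class, dim)) for c in (0.0, 1.0)])
        y_train = np.repeat([0, 1], n_per_class)
        x_test = rng.normal(loc=1.0, size=dim)          # a sample drawn near class 1

        # Dictionary = column-normalised training samples (one column per sample).
        D = X_train.T / np.linalg.norm(X_train, axis=1)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(D, x_test)
        alpha = omp.coef_

        residuals = []
        for c in (0, 1):
            alpha_c = np.where(y_train == c, alpha, 0.0)   # keep only this class's coefficients
            residuals.append(np.linalg.norm(x_test - D @ alpha_c))
        # prints the class with the smaller residual (almost surely class 1 for this draw)
        print("predicted class:", int(np.argmin(residuals)))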

  2. Efficiency of International Classification of Diseases, Ninth Revision, Billing Code Searches to Identify Emergency Department Visits for Blood or Body Fluid Exposures through a Statewide Multicenter Database

    PubMed Central

    Rosen, Lisa M.; Liu, Tao; Merchant, Roland C.

    2016-01-01

    BACKGROUND Blood and body fluid exposures are frequently evaluated in emergency departments (EDs). However, efficient and effective methods for estimating their incidence are not yet established. OBJECTIVE Evaluate the efficiency and accuracy of estimating statewide ED visits for blood or body fluid exposures using International Classification of Diseases, Ninth Revision (ICD-9), code searches. DESIGN Secondary analysis of a database of ED visits for blood or body fluid exposure. SETTING EDs of 11 civilian hospitals throughout Rhode Island from January 1, 1995, through June 30, 2001. PATIENTS Patients presenting to the ED for possible blood or body fluid exposure were included, as determined by prespecified ICD-9 codes. METHODS Positive predictive values (PPVs) were estimated to determine the ability of 10 ICD-9 codes to distinguish ED visits for blood or body fluid exposure from ED visits that were not for blood or body fluid exposure. Recursive partitioning was used to identify an optimal subset of ICD-9 codes for this purpose. Random-effects logistic regression modeling was used to examine variations in ICD-9 coding practices and styles across hospitals. Cluster analysis was used to assess whether the choice of ICD-9 codes was similar across hospitals. RESULTS The PPV for the original 10 ICD-9 codes was 74.4% (95% confidence interval [CI], 73.2%–75.7%), whereas the recursive partitioning analysis identified a subset of 5 ICD-9 codes with a PPV of 89.9% (95% CI, 88.9%–90.8%) and a misclassification rate of 10.1%. The ability, efficiency, and use of the ICD-9 codes to distinguish types of ED visits varied across hospitals. CONCLUSIONS Although an accurate subset of ICD-9 codes could be identified, variations across hospitals related to hospital coding style, efficiency, and accuracy greatly affected estimates of the number of ED visits for blood or body fluid exposure. PMID:22561713

  3. Identification of extremely premature infants at high risk of rehospitalization.

    PubMed

    Ambalavanan, Namasivayam; Carlo, Waldemar A; McDonald, Scott A; Yao, Qing; Das, Abhik; Higgins, Rosemary D

    2011-11-01

    Extremely low birth weight infants often require rehospitalization during infancy. Our objective was to identify at the time of discharge which extremely low birth weight infants are at higher risk for rehospitalization. Data from extremely low birth weight infants in Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network centers from 2002-2005 were analyzed. The primary outcome was rehospitalization by the 18- to 22-month follow-up, and secondary outcome was rehospitalization for respiratory causes in the first year. Using variables and odds ratios identified by stepwise logistic regression, scoring systems were developed with scores proportional to odds ratios. Classification and regression-tree analysis was performed by recursive partitioning and automatic selection of optimal cutoff points of variables. A total of 3787 infants were evaluated (mean ± SD birth weight: 787 ± 136 g; gestational age: 26 ± 2 weeks; 48% male, 42% black). Forty-five percent of the infants were rehospitalized by 18 to 22 months; 14.7% were rehospitalized for respiratory causes in the first year. Both regression models (area under the curve: 0.63) and classification and regression-tree models (mean misclassification rate: 40%-42%) were moderately accurate. Predictors for the primary outcome by regression were shunt surgery for hydrocephalus, hospital stay of >120 days for pulmonary reasons, necrotizing enterocolitis stage II or higher or spontaneous gastrointestinal perforation, higher fraction of inspired oxygen at 36 weeks, and male gender. By classification and regression-tree analysis, infants with hospital stays of >120 days for pulmonary reasons had a 66% rehospitalization rate compared with 42% without such a stay. The scoring systems and classification and regression-tree analysis models identified infants at higher risk of rehospitalization and might assist planning for care after discharge.

  4. Identification of Extremely Premature Infants at High Risk of Rehospitalization

    PubMed Central

    Carlo, Waldemar A.; McDonald, Scott A.; Yao, Qing; Das, Abhik; Higgins, Rosemary D.

    2011-01-01

    OBJECTIVE: Extremely low birth weight infants often require rehospitalization during infancy. Our objective was to identify at the time of discharge which extremely low birth weight infants are at higher risk for rehospitalization. METHODS: Data from extremely low birth weight infants in Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network centers from 2002–2005 were analyzed. The primary outcome was rehospitalization by the 18- to 22-month follow-up, and secondary outcome was rehospitalization for respiratory causes in the first year. Using variables and odds ratios identified by stepwise logistic regression, scoring systems were developed with scores proportional to odds ratios. Classification and regression-tree analysis was performed by recursive partitioning and automatic selection of optimal cutoff points of variables. RESULTS: A total of 3787 infants were evaluated (mean ± SD birth weight: 787 ± 136 g; gestational age: 26 ± 2 weeks; 48% male, 42% black). Forty-five percent of the infants were rehospitalized by 18 to 22 months; 14.7% were rehospitalized for respiratory causes in the first year. Both regression models (area under the curve: 0.63) and classification and regression-tree models (mean misclassification rate: 40%–42%) were moderately accurate. Predictors for the primary outcome by regression were shunt surgery for hydrocephalus, hospital stay of >120 days for pulmonary reasons, necrotizing enterocolitis stage II or higher or spontaneous gastrointestinal perforation, higher fraction of inspired oxygen at 36 weeks, and male gender. By classification and regression-tree analysis, infants with hospital stays of >120 days for pulmonary reasons had a 66% rehospitalization rate compared with 42% without such a stay. CONCLUSIONS: The scoring systems and classification and regression-tree analysis models identified infants at higher risk of rehospitalization and might assist planning for care after discharge. PMID:22007016

  5. Healthcare Text Classification System and its Performance Evaluation: A Source of Better Intelligence by Characterizing Healthcare Text.

    PubMed

    Srivastava, Saurabh Kumar; Singh, Sandeep Kumar; Suri, Jasjit S

    2018-04-13

    A machine learning (ML)-based text classification system has several classifiers. The performance evaluation (PE) of the ML system is typically driven by the training data size and the partition protocols used. Such systems lead to low accuracy because the text classification systems lack the ability to model the input text data in terms of noise characteristics. This research study proposes a concept of misrepresentation ratio (MRR) on input healthcare text data and models the PE criteria for validating the hypothesis. Further, such a novel system provides a platform to amalgamate several attributes of the ML system such as: data size, classifier type, partitioning protocol and percentage MRR. Our comprehensive data analysis consisted of five types of text data sets (TwitterA, WebKB4, Disease, Reuters (R8), and SMS); five kinds of classifiers (support vector machine with linear kernel (SVM-L), MLP-based neural network, AdaBoost, stochastic gradient descent and decision tree); and five types of training protocols (K2, K4, K5, K10 and JK). Using the decreasing order of MRR, our ML system demonstrates mean classification accuracies of 70.13 ± 0.15%, 87.34 ± 0.06%, 93.73 ± 0.03%, 94.45 ± 0.03% and 97.83 ± 0.01%, respectively, using all the classifiers and protocols. The corresponding AUC is 0.98 for SMS data using the Multi-Layer Perceptron (MLP)-based neural network. Among all the classifiers, the best accuracy of 91.84 ± 0.04% was achieved by the MLP-based neural network, which is 6% better than previously published results. Further, we observed that as MRR decreases, the system robustness increases, as validated by the standard deviations. The overall text system accuracy using all data types, classifiers, and protocols is 89%, thereby showing the entire ML system to be novel, robust and unique. The system is also tested for stability and reliability.
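
    One cell of such an evaluation grid, namely a single data set, one classifier, and one partition protocol, can be reproduced in a few lines; the toy corpus and labels below are illustrative, and K5 is taken here to mean 5-fold cross-validation.

        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        texts = ["chest pain and shortness of breath", "free entry win a prize now",
                 "patient reports persistent cough", "claim your reward today",
                 "mild fever and headache", "limited offer click here"] * 10
        labels = [1, 0, 1, 0, 1, 0] * 10               # 1 = healthcare text, 0 = other

        model = make_pipeline(TfidfVectorizer(), LinearSVC())
        scores = cross_val_score(model, texts, labels, cv=5)
        print("K5 mean accuracy:", scores.mean().round(3))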

  6. Innovative Bayesian and Parsimony Phylogeny of Dung Beetles (Coleoptera, Scarabaeidae, Scarabaeinae) Enhanced by Ontology-Based Partitioning of Morphological Characters

    PubMed Central

    Tarasov, Sergei; Génier, François

    2015-01-01

    Scarabaeine dung beetles are the dominant dung feeding group of insects and are widely used as model organisms in conservation, ecology and developmental biology. Due to the conflicts among 13 recently published phylogenies dealing with the higher-level relationships of dung beetles, the phylogeny of this lineage remains largely unresolved. In this study, we conduct rigorous phylogenetic analyses of dung beetles, based on an unprecedented taxon sample (110 taxa) and detailed investigation of morphology (205 characters). We provide the description of morphology and thoroughly illustrate the used characters. Along with parsimony, traditionally used in the analysis of morphological data, we also apply the Bayesian method with a novel approach that uses anatomy ontology for matrix partitioning. This approach allows for heterogeneity in evolutionary rates among characters from different anatomical regions. Anatomy ontology generates a number of parameter-partition schemes which we compare using Bayes factor. We also test the effect of inclusion of autapomorphies in the morphological analysis, which hitherto has not been examined. Generally, schemes with more parameters were favored in the Bayesian comparison suggesting that characters located on different body regions evolve at different rates and that partitioning of the data matrix using anatomy ontology is reasonable; however, trees from the parsimony and all the Bayesian analyses were quite consistent. The hypothesized phylogeny reveals many novel clades and provides additional support for some clades recovered in previous analyses. Our results provide a solid basis for a new classification of dung beetles, in which the taxonomic limits of the tribes Dichotomiini, Deltochilini and Coprini are restricted and many new tribes must be described. Based on the consistency of the phylogeny with biogeography, we speculate that dung beetles may have originated in the Mesozoic contrary to the traditional view pointing to a Cenozoic origin. PMID:25781019

  7. Greedy feature selection for glycan chromatography data with the generalized Dirichlet distribution

    PubMed Central

    2013-01-01

    Background Glycoproteins are involved in a diverse range of biochemical and biological processes. Changes in protein glycosylation are believed to occur in many diseases, particularly during cancer initiation and progression. The identification of biomarkers for human disease states is becoming increasingly important, as early detection is key to improving survival and recovery rates. To this end, the serum glycome has been proposed as a potential source of biomarkers for different types of cancers. High-throughput hydrophilic interaction liquid chromatography (HILIC) technology for glycan analysis allows for the detailed quantification of the glycan content in human serum. However, the experimental data from this analysis is compositional by nature. Compositional data are subject to a constant-sum constraint, which restricts the sample space to a simplex. Statistical analysis of glycan chromatography datasets should account for their unusual mathematical properties. As the volume of glycan HILIC data being produced increases, there is a considerable need for a framework to support appropriate statistical analysis. Proposed here is a methodology for feature selection in compositional data. The principal objective is to provide a template for the analysis of glycan chromatography data that may be used to identify potential glycan biomarkers. Results A greedy search algorithm, based on the generalized Dirichlet distribution, is carried out over the feature space to search for the set of “grouping variables” that best discriminate between known group structures in the data, modelling the compositional variables using beta distributions. The algorithm is applied to two glycan chromatography datasets. Statistical classification methods are used to test the ability of the selected features to differentiate between known groups in the data. Two well-known methods are used for comparison: correlation-based feature selection (CFS) and recursive partitioning (rpart). CFS is a feature selection method, while recursive partitioning is a learning tree algorithm that has been used for feature selection in the past. Conclusions The proposed feature selection method performs well for both glycan chromatography datasets. It is computationally slower, but results in a lower misclassification rate and a higher sensitivity rate than both correlation-based feature selection and the classification tree method. PMID:23651459

  8. Deep Learning Accurately Predicts Estrogen Receptor Status in Breast Cancer Metabolomics Data.

    PubMed

    Alakwaa, Fadhl M; Chaudhary, Kumardeep; Garmire, Lana X

    2018-01-05

    Metabolomics holds promise as a new technology to diagnose highly heterogeneous diseases. Conventionally, metabolomics data analysis for diagnosis is done using various statistical and machine learning based classification methods. However, it remains unknown if deep neural networks, a class of increasingly popular machine learning methods, are suitable to classify metabolomics data. Here we use a cohort of 271 breast cancer tissues, 204 estrogen receptor positive (ER+) and 67 estrogen receptor negative (ER-), to test the accuracies of feed-forward networks, a deep learning (DL) framework, as well as six widely used machine learning models, namely random forest (RF), support vector machines (SVM), recursive partitioning and regression trees (RPART), linear discriminant analysis (LDA), prediction analysis for microarrays (PAM), and generalized boosted models (GBM). The DL framework has the highest area under the curve (AUC) of 0.93 in classifying ER+/ER- patients, compared to the other six machine learning algorithms. Furthermore, the biological interpretation of the first hidden layer reveals eight commonly enriched significant metabolomics pathways (adjusted P-value <0.05) that cannot be discovered by other machine learning methods. Among them, protein digestion and absorption and ATP-binding cassette (ABC) transporters pathways are also confirmed in integrated analysis between metabolomics and gene expression data in these samples. In summary, the deep learning method shows advantages for metabolomics based breast cancer ER status classification, with both the highest prediction accuracy (AUC = 0.93) and better revelation of disease biology. We encourage the adoption of feed-forward networks based deep learning methods in the metabolomics research community for classification.
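
    A baseline feed-forward network for this kind of task can be set up as below with scikit-learn's MLPClassifier; the study used its own deep learning framework, so the architecture, the train/test split, and the simulated metabolite matrix are all assumptions.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(9)
        X = rng.normal(size=(271, 150))                # 271 tissues x 150 simulated metabolite features
        y = np.array([1] * 204 + [0] * 67)             # 1 = ER+, 0 = ER- (counts from the abstract)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
        scaler = StandardScaler().fit(X_tr)
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
        clf.fit(scaler.transform(X_tr), y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(scaler.transform(X_te))[:, 1])
        print("test AUC:", round(auc, 3))              # near 0.5 here, since the data are random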

  9. Classification and regression tree analysis vs. multivariable linear and logistic regression methods as statistical tools for studying haemophilia.

    PubMed

    Henrard, S; Speybroeck, N; Hermans, C

    2015-11-01

    Haemophilia is a rare genetic haemorrhagic disease characterized by partial or complete deficiency of coagulation factor VIII, for haemophilia A, or IX, for haemophilia B. As in any other medical research domain, the field of haemophilia research is increasingly concerned with finding factors associated with binary or continuous outcomes through multivariable models. Traditional models include multiple logistic regressions, for binary outcomes, and multiple linear regressions for continuous outcomes. Yet these regression models are at times difficult to implement, especially for non-statisticians, and can be difficult to interpret. The present paper sought to didactically explain how, why, and when to use classification and regression tree (CART) analysis for haemophilia research. The CART method is non-parametric and non-linear, based on the repeated partitioning of a sample into subgroups according to a certain criterion; Breiman developed this method in 1984. Classification trees (CTs) are used to analyse categorical outcomes and regression trees (RTs) to analyse continuous ones. The CART methodology has become increasingly popular in the medical field, yet only a few examples of studies using this methodology specifically in haemophilia have to date been published. Two examples using CART analysis and previously published in this field are didactically explained in detail. There is increasing interest in using CART analysis in the health domain, primarily due to its ease of implementation, use, and interpretation, thus facilitating medical decision-making. This method should be promoted for analysing continuous or categorical outcomes in haemophilia, when applicable. © 2015 John Wiley & Sons Ltd.

  10. Authenticity identification and classification of Rhodiola species in traditional Tibetan medicine based on Fourier transform near-infrared spectroscopy and chemometrics analysis.

    PubMed

    Li, Tao; Su, Chen

    2018-06-02

    Rhodiola is an increasingly widely used traditional Tibetan medicine and traditional Chinese medicine in China. The composition profiles of bioactive compounds differ considerably among species, which makes it crucial to identify authentic Rhodiola species accurately so as to ensure the clinical application of Rhodiola. In this paper, a nondestructive, rapid, and efficient method for the classification of Rhodiola was developed using Fourier transform near-infrared (FT-NIR) spectroscopy combined with chemometrics analysis. A total of 160 batches of raw spectra were obtained from four different species of Rhodiola by FT-NIR, namely Rhodiola crenulata, Rhodiola fastigiata, Rhodiola kirilowii, and Rhodiola brevipetiolata. After excluding the outliers, the performances of 3 sample dividing methods, 12 spectral preprocessing methods, 2 wavelength selection methods, and 2 modeling evaluation methods were compared. The results indicated that the combination superior to all others in the authenticity identification analysis was FT-NIR with sample set partitioning based on joint x-y distances (SPXY), standard normal variate transformation (SNV) + Norris-Williams (NW) + 2nd derivative, competitive adaptive reweighted sampling (CARS), and kernel extreme learning machine (KELM). The accuracy (ACCU), sensitivity (SENS), and specificity (SPEC) of the optimal model were all 1, which showed that this combination of FT-NIR and chemometrics methods had the optimal authenticity identification performance. The classification performance of the partial least squares discriminant analysis (PLS-DA) model was slightly lower than that of the KELM model, with PLS-DA results of ACCU = 0.97, SENS = 0.93, and SPEC = 0.98, respectively. It can be concluded that FT-NIR combined with chemometrics analysis has great potential in authenticity identification and classification of Rhodiola, which can provide a valuable reference for the safety and effectiveness of clinical application of Rhodiola. Copyright © 2018 Elsevier B.V. All rights reserved.
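
    The scatter-correction part of such a preprocessing pipeline is easy to write down. The sketch below applies SNV followed by a Savitzky-Golay second derivative as a stand-in for the Norris-Williams derivative used in the paper; the spectra and filter parameters are illustrative.

        import numpy as np
        from scipy.signal import savgol_filter

        rng = np.random.default_rng(10)
        spectra = rng.normal(size=(160, 500)) + np.linspace(0, 2, 500)   # 160 samples x 500 wavenumbers

        # SNV: centre and scale each spectrum individually to remove scatter effects.
        snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

        # Savitzky-Golay second derivative (window and polynomial order are assumed).
        d2 = savgol_filter(snv, window_length=15, polyorder=3, deriv=2, axis=1)
        print(d2.shape)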

  11. Perceived Organizational Support for Enhancing Welfare at Work: A Regression Tree Model

    PubMed Central

    Giorgi, Gabriele; Dubin, David; Perez, Javier Fiz

    2016-01-01

    When trying to examine outcomes such as welfare and well-being, research tends to focus on main effects and to take into account limited numbers of variables at a time. There are a number of techniques that may help address this problem. For example, many statistical packages available in R provide easy-to-use methods for modeling complicated analyses such as classification and regression trees (i.e., recursive partitioning). The present research illustrates the value of recursive partitioning in the prediction of perceived organizational support in a sample of more than 6000 Italian bankers. Utilizing the tree function of the party package in R, we estimated a regression tree model predicting perceived organizational support from a multitude of job characteristics including job demand, lack of job control, lack of supervisor support, training, etc. The resulting model appears particularly helpful in pointing out several interactions in the prediction of perceived organizational support. In particular, training is the dominant factor. Another dimension that seems to influence organizational support is reporting (perceived communication about safety and stress concerns). Results are discussed from a theoretical and methodological point of view. PMID:28082924

  12. Fragment-based prediction of skin sensitization using recursive partitioning

    NASA Astrophysics Data System (ADS)

    Lu, Jing; Zheng, Mingyue; Wang, Yong; Shen, Qiancheng; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2011-09-01

    Skin sensitization is an important toxic endpoint in the risk assessment of chemicals. In this paper, structure-activity relationship analysis was performed on the skin sensitization potential of 357 compounds with local lymph node assay data. Structural fragments were extracted by GASTON (GrAph/Sequence/Tree extractiON) from the training set. Eight fragments with accuracy significantly higher than 0.73 (p < 0.1) were retained to make up an indicator fragment descriptor. The fragment descriptor and eight other physicochemical descriptors closely related to the endpoint were calculated to construct the recursive partitioning tree (RP tree) for classification. The balanced accuracy of the training set, test set I, and test set II in the leave-one-out model were 0.846, 0.800, and 0.809, respectively. The results highlight that the fragment-based RP tree is a preferable method for identifying skin sensitizers. Moreover, the selected fragments provide useful structural information for exploring sensitization mechanisms, and the RP tree creates a graphic tree to identify the most important properties associated with skin sensitization. These can provide some guidance for the design of drugs with a lower sensitization level.

  13. A classification tree for the prediction of benign versus malignant disease in patients with small renal masses.

    PubMed

    Rendon, Ricardo A; Mason, Ross J; Kirkland, Susan; Lawen, Joseph G; Abdolell, Mohamed

    2014-08-01

    To develop a classification tree for the preoperative prediction of benign versus malignant disease in patients with small renal masses. This is a retrospective study including 395 consecutive patients who underwent surgical treatment for a renal mass < 5 cm in maximum diameter between July 1st 2001 and June 30th 2010. A classification tree to predict the risk of having a benign renal mass preoperatively was developed using recursive partitioning analysis for repeated measures outcomes. Age, sex, volume on preoperative imaging, tumor location (central/peripheral), degree of endophytic component (1%-100%), and tumor axis position were used as potential predictors to develop the model. Forty-five patients (11.4%) were found to have a benign mass postoperatively. A classification tree has been developed which can predict the risk of benign disease with an accuracy of 88.9% (95% CI: 85.3 to 91.8). The significant prognostic factors in the classification tree are tumor volume, degree of endophytic component and symptoms at diagnosis. As an example of its utilization, a renal mass with a volume of < 5.67 cm3 that is < 45% endophytic has a 52.6% chance of having benign pathology. Conversely, a renal mass with a volume ≥ 5.67 cm3 that is ≥ 35% endophytic has only a 5.3% possibility of being benign. A classification tree to predict the risk of benign disease in small renal masses has been developed to aid the clinician when deciding on treatment strategies for small renal masses.
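
    The abstract quotes two example terminal nodes of the published tree; a toy helper encoding just those two quoted leaves is sketched below. The remaining branches depend on symptoms at diagnosis and are not reproduced, so the function returns None for them; this is an illustration, not the full published model.

        def benign_probability(volume_cm3, pct_endophytic):
            """Return the benign-disease probability for the two example leaves
            quoted in the abstract; other branches of the published tree are not
            reproduced here, so None is returned for them."""
            if volume_cm3 < 5.67 and pct_endophytic < 45:
                return 0.526
            if volume_cm3 >= 5.67 and pct_endophytic >= 35:
                return 0.053
            return None  # remaining branches depend on symptoms at diagnosis

        print(benign_probability(4.0, 30))   # 0.526
        print(benign_probability(10.0, 60))  # 0.053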

  14. Stroke Risk Stratification and its Validation using Ultrasonic Echolucent Carotid Wall Plaque Morphology: A Machine Learning Paradigm.

    PubMed

    Araki, Tadashi; Jain, Pankaj K; Suri, Harman S; Londhe, Narendra D; Ikeda, Nobutaka; El-Baz, Ayman; Shrivastava, Vimal K; Saba, Luca; Nicolaides, Andrew; Shafique, Shoaib; Laird, John R; Gupta, Ajay; Suri, Jasjit S

    2017-01-01

    Stroke risk stratification based on grayscale morphology of the ultrasound carotid wall has recently shown promise in the classification of high-risk versus low-risk or symptomatic versus asymptomatic plaques. In previous studies, this stratification has been mainly based on analysis of the far wall of the carotid artery. Due to the multifocal nature of atherosclerotic disease, plaque growth is not restricted to the far wall alone. This paper presents a new approach for stroke risk assessment by integrating assessment of both the near and far walls of the carotid artery using grayscale morphology of the plaque. Further, this paper presents a scientific validation system for stroke risk assessment. Both these innovations have never been presented before. The methodology consists of an automated segmentation system for the near-wall and far-wall regions in grayscale carotid B-mode ultrasound scans. Sixteen grayscale texture features are computed and fed into the machine learning system. The training system utilizes the lumen diameter to create ground-truth labels for the stratification of stroke risk. The cross-validation procedure is adopted to obtain the machine learning testing classification accuracy using three partition protocols (5-fold, 10-fold, and jackknife). The mean classification accuracy over all partition protocols for the automated system in the far and near walls is 95.08% and 93.47%, respectively. The corresponding accuracies for the manual system are 94.06% and 92.02%, respectively. The precision-of-merit of the automated machine learning system, compared against the manual risk assessment system, is 98.05% and 97.53% for the far and near walls, respectively. The area under the ROC curve of the risk assessment system for the far and near walls is close to 1.0, demonstrating high accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
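
    A minimal sketch of averaging accuracy over the three partition protocols mentioned above, interpreted here as 5-fold, 10-fold and leave-one-out (jackknife) splits. The learner (an SVM), the synthetic feature matrix and the labels are assumptions for illustration, not the paper's system.

        import numpy as np
        from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
        from sklearn.svm import SVC

        X = np.random.rand(200, 16)          # 16 grayscale texture features (illustrative)
        y = np.random.randint(0, 2, 200)     # high-risk vs low-risk labels (illustrative)

        clf = SVC(kernel="rbf")              # the paper's learner is not specified here
        protocols = {"K5": KFold(5, shuffle=True, random_state=0),
                     "K10": KFold(10, shuffle=True, random_state=0),
                     "JackKnife": LeaveOneOut()}

        scores = {name: cross_val_score(clf, X, y, cv=cv).mean()
                  for name, cv in protocols.items()}
        print(scores, "mean over protocols:", np.mean(list(scores.values())))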

  15. Short-Term Global Horizontal Irradiance Forecasting Based on Sky Imaging and Pattern Recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Feng, Cong; Cui, Mingjian

    Accurate short-term forecasting is crucial for solar integration in the power grid. In this paper, a classification forecasting framework based on pattern recognition is developed for 1-hour-ahead global horizontal irradiance (GHI) forecasting. Three sets of models in the forecasting framework are trained by the data partitioned from the preprocessing analysis. The first two sets of models forecast GHI for the first four daylight hours of each day. Then the GHI values in the remaining hours are forecasted by an optimal machine learning model determined based on a weather pattern classification model in the third model set. The weather pattern is determined by a support vector machine (SVM) classifier. The developed framework is validated by the GHI and sky imaging data from the National Renewable Energy Laboratory (NREL). Results show that the developed short-term forecasting framework outperforms the persistence benchmark by 16% in terms of the normalized mean absolute error and 25% in terms of the normalized root mean square error.

  16. A 250 plastome phylogeny of the grass family (Poaceae): topological support under different data partitions

    PubMed Central

    Burke, Sean V.; Wysocki, William P.; Clark, Lynn G.

    2018-01-01

    The systematics of grasses has advanced through applications of plastome phylogenomics, although studies have been largely limited to subfamilies or other subgroups of Poaceae. Here we present a plastome phylogenomic analysis of 250 complete plastomes (179 genera) sampled from 44 of the 52 tribes of Poaceae. Plastome sequences were determined from high throughput sequencing libraries and the assemblies represent over 28.7 Mbases of sequence data. Phylogenetic signal was characterized in 14 partitions, including (1) complete plastomes; (2) protein coding regions; (3) noncoding regions; and (4) three loci commonly used in single and multi-gene studies of grasses. Each of the four main partitions was further refined, alternatively including or excluding positively selected codons and also the gaps introduced by the alignment. All 76 protein coding plastome loci were found to be predominantly under purifying selection, but specific codons were found to be under positive selection in 65 loci. The loci that have been widely used in multi-gene phylogenetic studies had among the highest proportions of positively selected codons, suggesting caution in the interpretation of these earlier results. Plastome phylogenomic analyses confirmed the backbone topology for Poaceae with maximum bootstrap support (BP). Among the 14 analyses, 82 clades out of 309 resolved were maximally supported in all trees. Analyses of newly sequenced plastomes were in agreement with current classifications. Five of seven partitions in which alignment gaps were removed retrieved Panicoideae as sister to the remaining PACMAD subfamilies. Alternative topologies were recovered in trees from partitions that included alignment gaps. This suggests that ambiguities in aligning these uncertain regions might introduce a false signal. Resolution of these and other critical branch points in the phylogeny of Poaceae will help to better understand the selective forces that drove the radiation of the BOP and PACMAD clades comprising more than 99.9% of grass diversity. PMID:29416954

  17. Partition dataset according to amino acid type improves the prediction of deleterious non-synonymous SNPs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jing; Li, Yuan-Yuan; Shanghai Center for Bioinformation Technology, Shanghai 200235

    2012-03-02

    Highlights: • Proper dataset partition can improve the prediction of deleterious nsSNPs. • Partition according to the original residue type at the nsSNP site is a good criterion. • A similar strategy is expected to be promising in other machine learning problems. -- Abstract: Many non-synonymous SNPs (nsSNPs) are associated with diseases, and numerous machine learning methods have been applied to train classifiers for sorting disease-associated nsSNPs from neutral ones. The continuously accumulated nsSNP data allows us to further explore better prediction approaches. In this work, we partitioned the training data into 20 subsets according to either the original or the substituted amino acid type at the nsSNP site. Using support vector machine (SVM), training classification models on each subset resulted in an overall accuracy of 76.3% or 74.9% depending on the two different partition criteria, while training on the whole dataset obtained an accuracy of only 72.6%. Moreover, the dataset was also randomly divided into 20 subsets, but the corresponding accuracy was only 73.2%. Our results demonstrated that partitioning the whole training dataset into subsets properly, i.e., according to the residue type at the nsSNP site, will improve the performance of the trained classifiers significantly, which should be valuable in developing better tools for predicting the disease-association of nsSNPs.
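
    A hedged sketch of the partition-then-train idea described above: split the training data into subsets by the original residue at the nsSNP site and fit one SVM per subset. Feature names, thresholds and the fallback model are illustrative assumptions; the inputs are expected to be NumPy arrays.

        import numpy as np
        from sklearn.svm import SVC

        AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

        def train_per_residue(X, y, orig_residue):
            """Fit one SVM per original amino acid at the nsSNP site.

            X: feature matrix, y: disease-associated (1) vs neutral (0),
            orig_residue: array of one-letter codes. All inputs are hypothetical.
            """
            models = {}
            for aa in AMINO_ACIDS:
                mask = orig_residue == aa
                if mask.sum() > 10:                 # skip subsets that are too small
                    models[aa] = SVC(kernel="rbf").fit(X[mask], y[mask])
            return models

        def predict(models, x, aa, fallback):
            """Predict with the residue-specific model, or a global fallback model."""
            model = models.get(aa, fallback)
            return model.predict(x.reshape(1, -1))[0]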

  18. Recent developments in tissue-type imaging (TTI) for planning and monitoring treatment of prostate cancer.

    PubMed

    Feleppa, Ernest J; Porter, Christopher R; Ketterling, Jeffrey; Lee, Paul; Dasgupta, Shreedevi; Urban, Stella; Kalisz, Andrew

    2004-07-01

    Because current methods of imaging prostate cancer are inadequate, biopsies cannot be effectively guided and treatment cannot be effectively planned and targeted. Therefore, our research is aimed at ultrasonically characterizing cancerous prostate tissue so that we can image it more effectively and thereby provide improved means of detecting, treating and monitoring prostate cancer. We base our characterization methods on spectrum analysis of radiofrequency (rf) echo signals combined with clinical variables such as prostate-specific antigen (PSA). Tissue typing using these parameters is performed by artificial neural networks. We employed and evaluated different approaches to data partitioning into training, validation, and test sets and different neural network configuration options. In this manner, we sought to determine what neural network configuration is optimal for these data and also to assess possible bias that might exist due to correlations among different data entries among the data for a given patient. The classification efficacy of each neural network configuration and data-partitioning method was measured using relative-operating-characteristic (ROC) methods. Neural network classification based on spectral parameters combined with clinical data generally produced ROC-curve areas of 0.80 compared to curve areas of 0.64 for conventional transrectal ultrasound imaging combined with clinical data. We then used the optimal neural network configuration to generate lookup tables that translate local spectral parameter values and global clinical-variable values into pixel values in tissue-type images (TTIs). TTIs continue to show cancerous regions successfully, and may prove to be particularly useful clinically in combination with other ultrasonic and nonultrasonic methods, e.g., magnetic-resonance spectroscopy.

  19. Recent Developments in Tissue-Type Imaging (TTI) for Planning and Monitoring Treatment of Prostate Cancer

    PubMed Central

    Feleppa, Ernest J.; Porter, Christopher R.; Ketterling, Jeffrey; Lee, Paul; Dasgupta, Shreedevi; Urban, Stella; Kalisz, Andrew

    2006-01-01

    Because current methods of imaging prostate cancer are inadequate, biopsies cannot be effectively guided and treatment cannot be effectively planned and targeted. Therefore, our research is aimed at ultrasonically characterizing cancerous prostate tissue so that we can image it more effectively and thereby provide improved means of detecting, treating and monitoring prostate cancer. We base our characterization methods on spectrum analysis of radio frequency (rf) echo signals combined with clinical variables such as prostate-specific antigen (PSA). Tissue typing using these parameters is performed by artificial neural networks. We employed and evaluated different approaches to data partitioning into training, validation, and test sets and different neural network configuration options. In this manner, we sought to determine what neural network configuration is optimal for these data and also to assess possible bias that might exist due to correlations among different data entries among the data for a given patient. The classification efficacy of each neural network configuration and data-partitioning method was measured using relative-operating-characteristic (ROC) methods. Neural network classification based on spectral parameters combined with clinical data generally produced ROC-curve areas of 0.80 compared to curve areas of 0.64 for conventional transrectal ultrasound imaging combined with clinical data. We then used the optimal neural network configuration to generate lookup tables that translate local spectral parameter values and global clinical-variable values into pixel values in tissue-type images (TTIs). TTIs continue to show cancerous regions successfully, and may prove to be particularly useful clinically in combination with other ultrasonic and nonultrasonic methods, e.g., magnetic-resonance spectroscopy. PMID:15754797

  20. ISS groups: are we speaking the same language?

    PubMed

    Rozenfeld, Michael; Radomislensky, Irina; Freedman, Laurence; Givon, Adi; Novikov, Iliya; Peleg, Kobi

    2014-10-01

    Despite ISS being a widely accepted tool for measuring injury severity, many researchers and practitioners use different partitions of ISS into severity groups. The lack of uniformity in ISS use inhibits proper comparisons between different studies. Creation of ISS group boundaries based on single AIS value squares and their sums was proposed in 1988 during the Major Trauma Outcome Study (MTOS) in the USA, but was not validated by analysis of large databases. We performed a validation study analysing 316,944 patients in the Israeli National Trauma Registry (INTR) and 249,150 patients in the American National Trauma Data Bank (NTDB). A binary algorithm (Classification and Regression Trees (CART)) was used to detect the most significantly different ISS groups and was also applied to the original MTOS data. The division of ISS into groups by the CART algorithm was identical in both trauma registries and very similar to the original division in the MTOS. For most samples, the recommended groups are 1-8, 9-14, 16-24 and 25-75, while in very large samples or in studies specifically targeting critical patients there is a possibility to divide the last group into 25-48 and 50-75 groups, with an option for further division into 50-66 and 75 groups. Using a statistical analysis of two very large databases of trauma patients, we have found that partitioning ISS into groups based on their association with patient mortality enables us to establish clear cut-off points for these groups. We propose that the suggested partition of ISS into severity groups be adopted as a standard in order to have a common language when discussing injury severity. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
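
    A small helper encoding the recommended cut-off points quoted above, as a sketch; the optional split of the critical group follows the abstract's suggestion for very large samples. The function name and interface are assumptions for illustration.

        def iss_group(iss, split_critical=False):
            """Assign an ISS value to the severity groups recommended above.

            With split_critical=True the 25-75 group is divided into 25-48 and
            50-75, as suggested for very large samples."""
            if iss <= 8:
                return "1-8"
            if iss <= 14:
                return "9-14"
            if iss <= 24:
                return "16-24"
            if not split_critical:
                return "25-75"
            return "25-48" if iss <= 48 else "50-75"

        print([iss_group(s) for s in (4, 9, 17, 29)])  # ['1-8', '9-14', '16-24', '25-75']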

  1. Implementation of spectral clustering with partitioning around medoids (PAM) algorithm on microarray data of carcinoma

    NASA Astrophysics Data System (ADS)

    Cahyaningrum, Rosalia D.; Bustamam, Alhadi; Siswantining, Titin

    2017-03-01

    Microarray technology has become one of the essential tools in the life sciences for observing gene expression levels, including the gene expression of patients with carcinoma. Carcinoma is a cancer that forms in epithelial tissue. These data can be analyzed, for example, to identify hereditary gene expression and to build classifications that can improve the diagnosis of carcinoma. Microarray data are usually high dimensional, so most methods require long computing times for grouping. Therefore, this study uses the spectral clustering method, which can work with any kind of object and reduces the dimension. Spectral clustering is a method based on the spectral decomposition of a matrix representing the data in the form of a graph. After the data dimensions are reduced, the data are partitioned. One well-known partitioning method is Partitioning Around Medoids (PAM), which minimizes the objective function by iteratively exchanging non-medoid points with medoid points until convergence. The objective of this research is to implement spectral clustering and the PAM partitioning algorithm to obtain groups of 7457 carcinoma-related genes based on their similarity values. The result of this study is two groups of genes associated with carcinoma.
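
    A minimal sketch of the two-step pipeline described above (spectral embedding for dimension reduction, then a PAM partition), assuming scikit-learn's SpectralEmbedding and the KMedoids implementation from the third-party scikit-learn-extra package; the data matrix, number of components and other parameters are illustrative, not those of the study.

        import numpy as np
        from sklearn.manifold import SpectralEmbedding
        from sklearn_extra.cluster import KMedoids   # third-party PAM implementation

        # Hypothetical carcinoma microarray matrix: genes x samples
        expression = np.random.rand(7457, 36)

        # Step 1: spectral decomposition of the affinity graph reduces the dimension
        embedding = SpectralEmbedding(n_components=10, affinity="nearest_neighbors")
        reduced = embedding.fit_transform(expression)

        # Step 2: partition the embedded genes with PAM (k-medoids), k = 2 groups
        pam = KMedoids(n_clusters=2, method="pam", random_state=0)
        labels = pam.fit_predict(reduced)
        print(np.bincount(labels))   # sizes of the two gene groups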

  2. K-Partite RNA Secondary Structures

    NASA Astrophysics Data System (ADS)

    Jiang, Minghui; Tejada, Pedro J.; Lasisi, Ramoni O.; Cheng, Shanhong; Fechser, D. Scott

    RNA secondary structure prediction is a fundamental problem in structural bioinformatics. The prediction problem is difficult because RNA secondary structures may contain pseudoknots formed by crossing base pairs. We introduce k-partite secondary structures as a simple classification of RNA secondary structures with pseudoknots. An RNA secondary structure is k-partite if it is the union of k pseudoknot-free sub-structures. Most known RNA secondary structures are either bipartite or tripartite. We show that there exists a constant number k such that any secondary structure can be modified into a k-partite secondary structure with approximately the same free energy. This offers a partial explanation of the prevalence of k-partite secondary structures with small k. We give a complete characterization of the computational complexities of recognizing k-partite secondary structures for all k ≥ 2, and show that this recognition problem is essentially the same as the k-colorability problem on circle graphs. We present two simple heuristics, iterated peeling and first-fit packing, for finding k-partite RNA secondary structures. For maximizing the number of base pair stackings, our iterated peeling heuristic achieves a constant approximation ratio of at most k for 2 ≤ k ≤ 5, and at most 6/(1-(1-6/k)^k) ≤ 6/(1-e^{-6}) < 6.01491 for k ≥ 6. Experiment on sequences from PseudoBase shows that our first-fit packing heuristic outperforms the leading method HotKnots in predicting RNA secondary structures with pseudoknots. Source code, data set, and experimental results are available at http://www.cs.usu.edu/~mjiang/rna/kpartite/.

  3. An Unequal Secure Encryption Scheme for H.264/AVC Video Compression Standard

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Wang, Jidong; Ikenaga, Takeshi; Tsunoo, Yukiyasu; Goto, Satoshi

    H.264/AVC is the newest video coding standard. It contains many new features that can readily be used for video encryption. In this paper, we propose a new scheme for video encryption under the H.264/AVC video compression standard. We define Unequal Secure Encryption (USE) as an approach that applies different encryption schemes (with different security strengths) to different parts of the compressed video data. The USE scheme includes two parts: video data classification and unequal secure video data encryption. Firstly, we classify the video data into two partitions: an important data partition and an unimportant data partition. The important data partition is small and receives strong protection, while the unimportant data partition is large and receives lighter protection. Secondly, we use AES as a block cipher to encrypt the important data partition and LEX as a stream cipher to encrypt the unimportant data partition. AES is the most widely used symmetric cipher and ensures high security. LEX is a new stream cipher based on AES whose computational cost is much lower than that of AES. In this way, our scheme achieves both high security and low computational cost. Besides the USE scheme, we propose a low-cost design of a hybrid AES/LEX encryption module. Our experimental results show that the computational cost of the USE scheme is low (about 25% of naive encryption at Level 0 with VEA used). The hardware cost of the hybrid AES/LEX module is 4678 gates and the AES encryption throughput is about 50 Mbps.
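
    A hedged sketch of the unequal-protection idea using Python's cryptography package: the important partition is encrypted with AES in CTR mode, the unimportant partition with a lighter stream cipher. Since LEX is not available in common libraries, ChaCha20 is used here purely as a stand-in stream cipher; this substitution, the function name and the example byte strings are assumptions, not the paper's design.

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def use_encrypt(important: bytes, unimportant: bytes, key: bytes):
            """Encrypt the important partition with AES-CTR and the unimportant
            partition with ChaCha20 (a stand-in for the LEX stream cipher)."""
            aes = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16))).encryptor()
            chacha = Cipher(algorithms.ChaCha20(key, os.urandom(16)), mode=None).encryptor()
            return (aes.update(important) + aes.finalize(),
                    chacha.update(unimportant) + chacha.finalize())

        key = os.urandom(32)
        enc_important, enc_unimportant = use_encrypt(b"headers and motion vectors",
                                                     b"residual coefficients", key)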

  4. Beating the Odds: Trees to Success in Different Countries

    ERIC Educational Resources Information Center

    Finch, W. Holmes; Marchant, Gregory J.

    2017-01-01

    A recursive partitioning model approach in the form of classification and regression trees (CART) was used with 2012 PISA data for five countries (Canada, Finland, Germany, Singapore-China, and the United States). The objective of the study was to determine demographic and educational variables that differentiated between low SES students that were…

  5. Entanglement in General Multipartite Quantum Systems and Its Role in Quantum Information Processing Tasks

    NASA Astrophysics Data System (ADS)

    Gielerak, Roman

    The major role played by entanglement of quantum states in several present-day applications of genuine quantum technologies is briefly reviewed. Additionally, the notion and classification of multipartite entanglement are presented. New measures of multipartite entanglement that are monotone under (S)LOCC operations are defined and discussed briefly.

  6. Consensus embedding: theory, algorithms and application to segmentation and classification of biomedical data

    PubMed Central

    2012-01-01

    Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103

  7. Multifractal Analysis for Nutritional Assessment

    PubMed Central

    Park, Youngja; Lee, Kichun; Ziegler, Thomas R.; Martin, Greg S.; Hebbar, Gautam; Vidakovic, Brani; Jones, Dean P.

    2013-01-01

    The concept of multifractality is currently used to describe self-similar and complex scaling properties observed in numerous biological signals. Fractals are geometric objects or dynamic variations which exhibit some degree of similarity (irregularity) to the original object in a wide range of scales. This approach determines the irregularity of a biologic signal as an indicator of adaptability, the capability to respond to unpredictable stress, and health. In the present work, we propose the application of multifractal analysis of wavelet-transformed proton nuclear magnetic resonance (1H NMR) spectra of plasma to determine nutritional insufficiency. For validation of this method on the 1H NMR signal of human plasma, the standard deviation from the classical statistical approach and the Hurst exponent (H), left slope and partition function from multifractal analysis were extracted from 1H NMR spectra to test whether multifractal indices could discriminate healthy subjects from unhealthy, intensive care unit patients. After validation, the multifractal approach was applied to spectra of plasma from a modified crossover study of sulfur amino acid insufficiency and tested for associations with blood lipids. The results showed that standard deviation and H, but not left slope, were significantly different for sulfur amino acid sufficiency and insufficiency. Quadratic discriminant analysis of H, left slope and the partition function showed 78% overall classification accuracy according to sulfur amino acid status. Triglycerides and apolipoprotein C3 were significantly correlated with a multifractal model containing H, left slope, and standard deviation, and cholesterol and high-sensitivity C-reactive protein were significantly correlated to H. In conclusion, multifractal analysis of 1H NMR spectra provides a new approach to characterize nutritional status. PMID:23990878

  8. Postoperative staging of the neck dissection using extracapsular spread and lymph node ratio as prognostic factors in HPV-negative head and neck squamous cell carcinoma patients.

    PubMed

    Majercakova, Katarina; Valero, Cristina; López, Montserrat; García, Jacinto; Farré, Nuria; Quer, Miquel; León, Xavier

    2018-02-01

    The presence of nodes with extracapsular spread (ECS) and the lymph node ratio (LNR) have prognostic value in the pathologic evaluation of patients with a head and neck squamous cell carcinoma (HNSCC) treated with a neck dissection. The purpose of this study is to assess the effect of ECS and LNR on the prognosis of HPV-negative HNSCC patients treated with neck dissection and to compare the result with the 8th edition TNM/AJCC classification. We carried out a retrospective study of 1383 patients with HNSCC treated with a neck dissection between 1985 and 2013. We developed a classification of the patients according to the presence of nodes with ECS and the LNR value using a recursive partitioning analysis (RPA) model. We obtained a classification tree with four terminal nodes: for patients without ECS (including patients pN0) the cut-off point for LNR was 1.6%, while for patients with lymph nodes with ECS it was 11.4%. The 5-year disease-specific survival for patients without ECS/LNR < 1.6% was 83.3%; for patients without ECS/LNR ≥ 1.6% it was 61.5%; for patients with ECS/LNR < 11.4% it was 33.7%; and for patients with ECS/LNR ≥ 11.4% it was 18.5%. The classification obtained with RPA discriminated better between categories than the 8th edition of the TNM/AJCC classification. ECS status and LNR value showed high prognostic capacity in the pathological evaluation of the neck dissection. The combination of ECS and LNR improved the predictive capacity of the 8th edition of the TNM/AJCC classification in HPV-negative HNSCC patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
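
    The four terminal nodes and their reported 5-year disease-specific survival rates can be written down directly from the abstract; a small illustrative helper is sketched below. The function name and interface are assumptions, and the figures are simply those quoted above.

        def ecs_lnr_group(has_ecs, lnr_percent):
            """Map a neck-dissection result to the four RPA terminal nodes above,
            returning the reported 5-year disease-specific survival. Cut-offs and
            survival figures are taken directly from the abstract."""
            if not has_ecs:
                return ("no ECS, LNR < 1.6%", 0.833) if lnr_percent < 1.6 \
                    else ("no ECS, LNR >= 1.6%", 0.615)
            return ("ECS, LNR < 11.4%", 0.337) if lnr_percent < 11.4 \
                else ("ECS, LNR >= 11.4%", 0.185)

        print(ecs_lnr_group(False, 0.9))   # ('no ECS, LNR < 1.6%', 0.833)
        print(ecs_lnr_group(True, 20.0))   # ('ECS, LNR >= 11.4%', 0.185)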

  9. Molecular classification of pesticides including persistent organic pollutants, phenylurea and sulphonylurea herbicides.

    PubMed

    Torrens, Francisco; Castellano, Gloria

    2014-06-05

    Pesticide residues in wine were analyzed by liquid chromatography-tandem mass spectrometry. Retentions are modelled by structure-property relationships. Bioplastic evolution is an evolutionary perspective conjugating the effect of acquired characters with the principles of evolutionary indeterminacy, morphological determination and natural selection; its application to design a co-ordination index barely improves correlations. Fractal dimensions and the partition coefficient differentiate pesticides. Classification algorithms are based on information entropy and its production. Pesticides allow a structural classification by nonplanarity and by the number of O, S, N and Cl atoms and cycles; different behaviours depend on the number of cycles. The novelty of the approach is that the structural parameters are related to retentions. When applying the procedures to moderate-sized sets, excessive results appear, compatible with the data suffering a combinatorial explosion. However, the equipartition conjecture selects the criterion resulting from classification between hierarchical trees. Information entropy permits classifying the compounds in agreement with principal component analyses. Periodic classification shows that pesticides in the same group present similar properties; those also in the same period show maximum resemblance. The advantage of the classification is to predict the retentions of molecules not included in the categorization. The classification extends to phenyl/sulphonylureas and will be applied to predict their retentions.

  10. Automated structural classification of lipids by machine learning.

    PubMed

    Taylor, Ryan; Miller, Ryan H; Miller, Ryan D; Porter, Michael; Dalgleish, James; Prince, John T

    2015-03-01

    Modern lipidomics is largely dependent upon structural ontologies because of the great diversity exhibited in the lipidome, but no automated lipid classification exists to facilitate this partitioning. The size of the putative lipidome far exceeds the number currently classified, despite a decade of work. Automated classification would benefit ongoing classification efforts by decreasing the time needed and increasing the accuracy of classification while providing classifications for mass spectral identification algorithms. We introduce a tool that automates classification into the LIPID MAPS ontology of known lipids with >95% accuracy and novel lipids with 63% accuracy. The classification is based upon simple chemical characteristics and modern machine learning algorithms. The decision trees produced are intelligible and can be used to clarify implicit assumptions about the current LIPID MAPS classification scheme. These characteristics and decision trees are made available to facilitate alternative implementations. We also discovered many hundreds of lipids that are currently misclassified in the LIPID MAPS database, strongly underscoring the need for automated classification. Source code and chemical characteristic lists as SMARTS search strings are available under an open-source license at https://www.github.com/princelab/lipid_classifier. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
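
    The abstract notes that the characteristics are distributed as SMARTS search strings; a hedged sketch of turning SMARTS matches into features for a decision tree is given below, using RDKit. The SMARTS patterns, SMILES strings and class labels here are illustrative placeholders, not the published characteristic list or the LIPID MAPS ontology.

        from rdkit import Chem
        from sklearn.tree import DecisionTreeClassifier

        # Illustrative SMARTS characteristics (placeholders, not the published list)
        SMARTS = {"ester": "[CX3](=O)[OX2][#6]",
                  "phosphate": "P(=O)(O)(O)",
                  "amide": "C(=O)N"}
        PATTERNS = {name: Chem.MolFromSmarts(s) for name, s in SMARTS.items()}

        def characteristics(smiles):
            """Binary vector: which SMARTS characteristics the molecule matches."""
            mol = Chem.MolFromSmiles(smiles)
            return [int(mol.HasSubstructMatch(p)) for p in PATTERNS.values()]

        # Hypothetical training pairs: SMILES string -> lipid category label
        train = [("CCCCCCCCCCCCCCCC(=O)OC", "fatty ester"),
                 ("CCCCCCCCCCCCCCCC(=O)N", "fatty amide")]
        X = [characteristics(smi) for smi, _ in train]
        y = [label for _, label in train]
        clf = DecisionTreeClassifier().fit(X, y)

    The fitted tree remains intelligible in the same sense the abstract describes: each split corresponds to the presence or absence of a named chemical characteristic.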

  11. Understanding continental megathrust earthquake potential through geological mountain building processes: an example in Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Zhang, Zhen; Wang, Liangshu; Leroy, Yves; shi, Yaolin

    2017-04-01

    How to reconcile continent megathrust earthquake characteristics, for instances, mapping the large-great earthquake sequences into geological mountain building process, as well as partitioning the seismic-aseismic slips, is fundamental and unclear. Here, we scope these issues by focusing a typical continental collisional belt, the great Nepal Himalaya. We first prove that refined Nepal Himalaya thrusting sequences, with accurately defining of large earthquake cycle scale, provide new geodynamical hints on long-term earthquake potential in association with, either seismic-aseismic slip partition up to the interpretation of the binary interseismic coupling pattern on the Main Himalayan Thrust (MHT), or the large-great earthquake classification via seismic cycle patterns on MHT. Subsequently, sequential limit analysis is adopted to retrieve the detailed thrusting sequences of Nepal Himalaya mountain wedge. Our model results exhibit apparent thrusting concentration phenomenon with four thrusting clusters, entitled as thrusting 'families', to facilitate the development of sub-structural regions respectively. Within the hinterland thrusting family, the total aseismic shortening and the corresponding spatio-temporal release pattern are revealed by mapping projection. Whereas, in the other three families, mapping projection delivers long-term large (M<8)-great (M>8) earthquake recurrence information, including total lifespans, frequencies and large-great earthquake alternation information by identifying rupture distances along the MHT. In addition, this partition has universality in continental-continental collisional orogenic belt with identified interseismic coupling pattern, while not applicable in continental-oceanic megathrust context.

  12. Critical assessment of digital PCR for the detection and quantification of genetically modified organisms.

    PubMed

    Demeke, Tigst; Dobnik, David

    2018-07-01

    The number of genetically modified organisms (GMOs) on the market is steadily increasing. Because of regulation of cultivation and trade of GMOs in several countries, there is pressure for their accurate detection and quantification. Today, DNA-based approaches are more popular for this purpose than protein-based methods, and real-time quantitative PCR (qPCR) is still the gold standard in GMO analytics. However, digital PCR (dPCR) offers several advantages over qPCR, making this new technique appealing also for GMO analysis. This critical review focuses on the use of dPCR for the purpose of GMO quantification and addresses parameters which are important for achieving accurate and reliable results, such as the quality and purity of DNA and reaction optimization. Three critical factors are explored and discussed in more depth: correct classification of partitions as positive, correctly determined partition volume, and dilution factor. This review could serve as a guide for all laboratories implementing dPCR. Most of the parameters discussed are applicable to fields other than purely GMO testing. Graphical abstract: There are generally three different options for absolute quantification of genetically modified organisms (GMOs) using digital PCR: droplet- or chamber-based and droplets in chambers. All have in common the distribution of reaction mixture into several partitions, which are all subjected to PCR and scored at the end-point as positive or negative. Based on these results GMO content can be calculated.
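
    The three critical factors named above (partition classification, partition volume, dilution factor) enter directly into the standard Poisson formula for absolute dPCR quantification. A minimal sketch follows; the function name and the example droplet counts and volume are illustrative, not values from the review.

        import math

        def dpcr_copies_per_ul(positives, total_partitions, partition_volume_nl,
                               dilution_factor=1.0):
            """Standard dPCR Poisson quantification.

            lambda = -ln(1 - p), where p is the fraction of positive partitions;
            concentration = lambda / partition volume, scaled by the dilution factor."""
            p = positives / total_partitions
            lam = -math.log(1.0 - p)                 # mean copies per partition
            copies_per_nl = lam / partition_volume_nl
            return copies_per_nl * 1000.0 * dilution_factor   # nl -> ul

        # Illustrative droplet-dPCR run: 20,000 droplets of 0.85 nl, 4,200 positives
        print(round(dpcr_copies_per_ul(4200, 20000, 0.85), 1))

    Misclassified partitions change p, a wrong partition volume rescales the concentration, and an unrecorded dilution factor shifts the final result, which is exactly why the review singles out these three parameters.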

  13. An Investigation of Document Partitions.

    ERIC Educational Resources Information Center

    Shaw, W. M., Jr.

    1986-01-01

    Empirical significance of document partitions is investigated as a function of index term-weight and similarity thresholds. Results show that the same empirically preferred partitions can be detected by two independent strategies: a cluster-based retrieval analysis and an analysis of regularities in the underlying structure of the document…

  14. Recursive partition analysis of peritoneal and systemic recurrence in patients with gastric cancer who underwent D2 gastrectomy: Implications for neoadjuvant therapy consideration.

    PubMed

    Chang, Jee Suk; Kim, Kyung Hwan; Keum, Ki Chang; Noh, Sung Hoon; Lim, Joon Seok; Kim, Hyo Song; Rha, Sun Young; Lee, Yong Chan; Hyung, Woo Jin; Koom, Woong Sub

    2016-12-01

    To classify patients with nonmetastatic advanced gastric cancer who underwent D2-gastrectomy into prognostic groups based on peritoneal and systemic recurrence risks. Between 2004 and 2007, 1,090 patients with T3-4 or N+ gastric cancer were identified from our registry. Recurrence rates were estimated using a competing-risk analysis. Different prognostic groups were defined using recursive partitioning analysis (RPA). Median follow-up was 7 years. In the RPA-model for peritoneal recurrence risk, the initial node was split by T stage, indicating that differences between patients with T1-3 and T4 cancer were the greatest. The 5-year peritoneal recurrence rates for patients with T4 (n = 627) and T1-3 (n = 463) disease were 34.3% and 9.1%, respectively. N stage and neural invasion had an additive impact on high-risk patients. The RPA model for systemic relapse incorporated N stage alone and gave two terminal nodes: N0-2 (n = 721) and N3 (n = 369). The 5-year cumulative incidences were 7.7% and 24.5%, respectively. We proposed risk stratification models of peritoneal and systemic recurrence in patients undergoing D2-gastrectomy. This classification could be used for stratification protocols in future studies evaluating adjuvant therapies such as preoperative chemoradiotherapy. J. Surg. Oncol. 2016;114:859-864. © 2016 Wiley Periodicals, Inc.

  15. Constraining Aggregate-Scale Solar Energy Partitioning in Arctic Sea Ice Through Synthesis of Remote Sensing and Autonomous In-Situ Observations.

    NASA Astrophysics Data System (ADS)

    Wright, N.; Polashenski, C. M.; Deeb, E. J.; Morriss, B. F.; Song, A.; Chen, J.

    2015-12-01

    One of the key processes controlling sea ice mass balance in the Arctic is the partitioning of solar energy between reflection back to the atmosphere and absorption into the ice and upper ocean. We investigate the solar energy balance in the ice-ocean system using in-situ data collected from Arctic Observing Network (AON) sea ice sites and imagery from high resolution optical satellites. AON assets, including ice mass balance buoys and ice tethered profilers, monitor the storage and fluxes of heat in the ice-ocean system. High resolution satellite imagery, processed using object-based image classification techniques, allows us to quantify the evolution of surrounding ice conditions, including melt pond coverage and floe size distribution, at aggregate scale. We present results from regionally representative sites that constrain the partitioning of absorbed solar energy between ice melt and ocean storage, and quantify the strength of the ice-albedo feedback. We further demonstrate how the results can be used to validate model representations of the physical processes controlling ice-albedo feedbacks. The techniques can be extended to understand solar partitioning across the Arctic basin using additional sites and model based data integration.

  16. Three list scheduling temporal partitioning algorithm of time space characteristic analysis and compare for dynamic reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Chen, Naijin

    2013-03-01

    Level Based Partitioning (LBP), Cluster Based Partitioning (CBP) and Enhanced Static List (ESL) temporal partitioning algorithms based on the adjacency matrix and adjacency table are designed and implemented in this paper. The partitioning time and memory occupation of the three algorithms are also compared. Experimental results show that the LBP algorithm has the shortest partitioning time and better parallelism; as far as memory occupation and partitioning time are concerned, the algorithms based on the adjacency table have shorter partitioning times and smaller memory occupation.

  17. Classification of visible and infrared hyperspectral images based on image segmentation and edge-preserving filtering

    NASA Astrophysics Data System (ADS)

    Cui, Binge; Ma, Xiudan; Xie, Xiaoyun; Ren, Guangbo; Ma, Yi

    2017-03-01

    The classification of hyperspectral images with a few labeled samples is a major challenge which is difficult to meet unless some spatial characteristics can be exploited. In this study, we proposed a novel spectral-spatial hyperspectral image classification method that exploited spatial autocorrelation of hyperspectral images. First, image segmentation is performed on the hyperspectral image to assign each pixel to a homogeneous region. Second, the visible and infrared bands of hyperspectral image are partitioned into multiple subsets of adjacent bands, and each subset is merged into one band. Recursive edge-preserving filtering is performed on each merged band which utilizes the spectral information of neighborhood pixels. Third, the resulting spectral and spatial feature band set is classified using the SVM classifier. Finally, bilateral filtering is performed to remove "salt-and-pepper" noise in the classification result. To preserve the spatial structure of hyperspectral image, edge-preserving filtering is applied independently before and after the classification process. Experimental results on different hyperspectral images prove that the proposed spectral-spatial classification approach is robust and offers more classification accuracy than state-of-the-art methods when the number of labeled samples is small.

  18. Acoustic, genetic and morphological variations within the katydid Gampsocleis sedakovii (Orthoptera, Tettigonioidea)

    PubMed Central

    Zhang, Xue; Wen, Ming; Li, Junjian; Zhu, Hui; Wang, Yinliang; Ren, Bingzhong

    2015-01-01

    In an attempt to explain the variation within this species and clarify the subspecies classification, an analysis of the genetic, calling song, and morphological variation within the species Gampsocleis sedakovii from Inner Mongolia, China, is presented. Recordings of the male calling songs were compared and selected acoustic variables were analysed. This analysis is combined with sequencing of the mtDNA COI gene and examination of morphological traits to perform cluster analyses. The trees constructed from the different datasets were structurally similar, bisecting the six geographical populations studied. Based on two large branches in the analysis, the species Gampsocleis sedakovii was partitioned into two subspecies, Gampsocleis sedakovii sedakovii (Fischer von Waldheim, 1846) and Gampsocleis sedakovii obscura (Walker, 1869). Comparing all the traits, the individuals from Elunchun (ELC) represented an intermediate type within this species according to the acoustic, genetic, and morphological characteristics. This study provides evidence for insect acoustic signal divergence and the process of subspeciation. PMID:26692795

  19. Automated Classification of Thermal Infrared Spectra Using Self-organizing Maps

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; Hogan, Robert

    2006-01-01

    Existing and planned space missions to a variety of planetary and satellite surfaces produce an ever increasing volume of spectral data. Understanding the scientific informational content in this large data volume is a daunting task. Fortunately, various statistical approaches are available to assess such data sets. Here we discuss an automated classification scheme based on Kohonen Self-Organizing Maps (SOM) we have developed. The SOM process produces an output layer where spectra having similar properties lie in close proximity to each other. One major effort is partitioning this output layer into appropriate regions. This is performed by defining closed regions based upon the strength of the boundaries between adjacent cells in the SOM output layer. We use the Davies-Bouldin index as a measure of the inter-class similarities and intra-class dissimilarities that determines the optimum partition of the output layer, and hence the number of SOM clusters. This allows us to identify the natural number of clusters formed from the spectral data. Mineral spectral libraries prepared at Arizona State University (ASU) and Johns Hopkins University (JHU) are used to test and evaluate the classification scheme. We label the library sample spectra in a hierarchical scheme with class, subclass, and mineral group names. We use a portion of the spectra to train the SOM, i.e., produce the output layer, while the remaining spectra are used to test the SOM. The test spectra are presented to the SOM output layer and assigned membership to the appropriate cluster. We then evaluate these assignments to assess the scientific meaning and accuracy of the derived SOM classes as they relate to the labels. We demonstrate that unsupervised classification by SOMs can be a useful component in autonomous systems designed to identify mineral species from reflectance and emissivity spectra in the thermal IR.
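
    A hedged sketch of the train/test workflow described above, assuming the third-party MiniSom package as the SOM implementation; the map size, training length and synthetic spectra are illustrative, and the boundary-strength partitioning of the output layer with the Davies-Bouldin index is not reproduced here.

        import numpy as np
        from minisom import MiniSom   # third-party SOM implementation (assumption)

        # Hypothetical emissivity spectra: samples x spectral channels
        train_spectra = np.random.rand(300, 100)
        test_spectra = np.random.rand(50, 100)

        som = MiniSom(x=20, y=20, input_len=100, sigma=1.5, learning_rate=0.5,
                      random_seed=0)
        som.random_weights_init(train_spectra)
        som.train_random(train_spectra, num_iteration=5000)

        # Each test spectrum is assigned to its best-matching cell in the output layer;
        # cells would then be grouped into clusters (e.g., by a Davies-Bouldin search).
        cells = [som.winner(s) for s in test_spectra]
        print(cells[:5])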

  20. Construction and Analysis of Multi-Rate Partitioned Runge-Kutta Methods

    DTIC Science & Technology

    2012-06-01

    Master's thesis by Patrick R. Mugg, June 2012 (Thesis Advisor: Francis Giraldo; Second Reader: Hong …). Title: Construction and Analysis of Multi-Rate Partitioned Runge-Kutta Methods. From the text: the most widely known and used procedure for analyzing stability is the Von Neumann method, such that Von Neumann's stability analysis looks at…

  1. Idiopathic interstitial pneumonias and emphysema: detection and classification using a texture-discriminative approach

    NASA Astrophysics Data System (ADS)

    Fetita, C.; Chang-Chien, K. C.; Brillet, P. Y.; Prêteux, F.; Chang, R. F.

    2012-03-01

    Our study aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector computed tomography (MDCT). The proposed CAD system is based on three-dimensional (3-D) mathematical morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchic tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground glass (GDG). According to a preliminary evaluation on an extended database, the proposed method can overcome the drawbacks of a previously developed approach and achieve higher sensitivity and specificity.

  2. A Hybrid Template-Based Composite Classification System

    DTIC Science & Technology

    2009-02-01

    Excerpts from the table of contents: Hybrid Classifier: Forced Decision; 5.3.2 Forced Decision Experimental Results; 5.3.3 Test for Statistical Significance … Results; 5.4.2 Test for Statistical Significance: NDEC Option; 5.5 Implementing the Hybrid Classifier with OOL Targets. From the text: … complementary in nature. Complementary classifiers are observed by finding an optimal method for partitioning the problem space. For example, the…

  3. Dynamic Routing and Coordination in Multi-Agent Networks

    DTIC Science & Technology

    2016-06-10

    Supported by this project, we designed innovative routing, planning and coordination strategies for robotic networks and … tasks partitioned among robots, in what order are they to be performed, and along which deterministic routes or according to which stochastic rules do … individual robots move. The fundamental novelties and our recent breakthroughs supported by this project are manifold: (1) the application…

  4. Spectroscopic diagnosis of laryngeal carcinoma using near-infrared Raman spectroscopy and random recursive partitioning ensemble techniques.

    PubMed

    Teh, Seng Khoon; Zheng, Wei; Lau, David P; Huang, Zhiwei

    2009-06-01

    In this work, we evaluated the diagnostic ability of near-infrared (NIR) Raman spectroscopy associated with an ensemble recursive partitioning algorithm based on random forests for identifying cancer from normal tissue in the larynx. A rapid-acquisition NIR Raman system was utilized for tissue Raman measurements at 785 nm excitation, and 50 human laryngeal tissue specimens (20 normal; 30 malignant tumors) were used for NIR Raman studies. The random forests method was introduced to develop effective diagnostic algorithms for classification of Raman spectra of different laryngeal tissues. High-quality Raman spectra in the range of 800-1800 cm(-1) can be acquired from laryngeal tissue within 5 seconds. Raman spectra differed significantly between normal and malignant laryngeal tissues. Classification results obtained from the random forests algorithm on tissue Raman spectra yielded a diagnostic sensitivity of 88.0% and specificity of 91.4% for laryngeal malignancy identification. The random forests technique also provided variable importance measures that facilitate correlation of significant Raman spectral features with cancer transformation. This study shows that NIR Raman spectroscopy in conjunction with the random forests algorithm has great potential for the rapid diagnosis and detection of malignant tumors in the larynx.
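
    The study uses random forests on tissue Raman spectra; a minimal sketch of that setup with scikit-learn is shown below, including the variable-importance output mentioned in the abstract. The spectra here are synthetic placeholders and the forest size and fold count are assumptions, not the study's settings.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_predict

        # Hypothetical Raman dataset: 50 spectra over 800-1800 cm-1 (1000 bins)
        X = np.random.rand(50, 1000)
        y = np.array([0] * 20 + [1] * 30)        # 0 = normal, 1 = malignant

        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        pred = cross_val_predict(rf, X, y, cv=5)

        sens = ((pred == 1) & (y == 1)).sum() / (y == 1).sum()
        spec = ((pred == 0) & (y == 0)).sum() / (y == 0).sum()
        print(f"sensitivity={sens:.2f} specificity={spec:.2f}")

        # Variable importance highlights which Raman shifts drive the classification
        rf.fit(X, y)
        top_bins = np.argsort(rf.feature_importances_)[::-1][:10]
        print("most important spectral bins:", top_bins)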

  5. Application of the criteria for classification of existing chemicals as dangerous for the environment.

    PubMed

    Knacker, T; Schallnaß, H J; Klaschka, U; Ahlers, J

    1995-11-01

    The criteria for classification and labelling of substances as "dangerous for the environment" agreed upon within the European Union (EU) were applied to two sets of existing chemicals. One set (sample A) consisted of 41 randomly selected compounds listed in the European Inventory of Existing Chemical Substances (EINECS). The other set (sample B) comprised 115 substances listed in Annex I of Directive 67/548/EEC which were classified by the EU Working Group on Classification and Labelling of Existing Chemicals. The aquatic toxicity (fish mortality, Daphnia immobilisation, algal growth inhibition), ready biodegradability and n-octanol/water partition coefficient were measured for sample A by one and the same laboratory. For sample B, the available ecotoxicological data originated from many different sources and therefore was rather heterogeneous. In both samples, algal toxicity was the most sensitive effect parameter for most substances. Furthermore, it was found that classification based on a single aquatic test result differs in many cases from classification based on a complete data set, although a correlation exists between the biological end-points of the aquatic toxicity test systems.

  6. Application of Recursive Partitioning to Derive and Validate a Claims-Based Algorithm for Identifying Keratinocyte Carcinoma (Nonmelanoma Skin Cancer).

    PubMed

    Chan, An-Wen; Fung, Kinwah; Tran, Jennifer M; Kitchen, Jessica; Austin, Peter C; Weinstock, Martin A; Rochon, Paula A

    2016-10-01

    Keratinocyte carcinoma (nonmelanoma skin cancer) accounts for a substantial burden in terms of high incidence and health care costs but is excluded by most cancer registries in North America. Administrative health insurance claims databases offer an opportunity to identify these cancers using diagnosis and procedural codes submitted for reimbursement purposes. To apply recursive partitioning to derive and validate a claims-based algorithm for identifying keratinocyte carcinoma with high sensitivity and specificity. Retrospective study using population-based administrative databases linked to 602 371 pathology episodes from a community laboratory for adults residing in Ontario, Canada, from January 1, 1992, to December 31, 2009. The final analysis was completed in January 2016. We used recursive partitioning (classification trees) to derive an algorithm based on health insurance claims. The performance of the derived algorithm was compared with 5 prespecified algorithms and validated using an independent academic hospital clinic data set of 2082 patients seen in May and June 2011. Sensitivity, specificity, positive predictive value, and negative predictive value using the histopathological diagnosis as the criterion standard. We aimed to achieve maximal specificity, while maintaining greater than 80% sensitivity. Among 602 371 pathology episodes, 131 562 (21.8%) had a diagnosis of keratinocyte carcinoma. Our final derived algorithm outperformed the 5 simple prespecified algorithms and performed well in both community and hospital data sets in terms of sensitivity (82.6% and 84.9%, respectively), specificity (93.0% and 99.0%, respectively), positive predictive value (76.7% and 69.2%, respectively), and negative predictive value (95.0% and 99.6%, respectively). Algorithm performance did not vary substantially during the 18-year period. This algorithm offers a reliable mechanism for ascertaining keratinocyte carcinoma for epidemiological research in the absence of cancer registry data. Our findings also demonstrate the value of recursive partitioning in deriving valid claims-based algorithms.

  7. Identifying risk profiles for childhood obesity using recursive partitioning based on individual, familial, and neighborhood environment factors.

    PubMed

    Van Hulst, Andraea; Roy-Gagnon, Marie-Hélène; Gauvin, Lise; Kestens, Yan; Henderson, Mélanie; Barnett, Tracie A

    2015-02-15

    Few studies consider how risk factors within multiple levels of influence operate synergistically to determine childhood obesity. We used recursive partitioning analysis to identify unique combinations of individual, familial, and neighborhood factors that best predict obesity in children, and tested whether these predict 2-year changes in body mass index (BMI). Data were collected in 2005-2008 and in 2008-2011 for 512 Quebec youth (8-10 years at baseline) with a history of parental obesity (QUALITY study). CDC age- and sex-specific BMI percentiles were computed and children were considered obese if their BMI was ≥95th percentile. Individual (physical activity and sugar-sweetened beverage intake), familial (household socioeconomic status and measures of parental obesity including both BMI and waist circumference), and neighborhood (disadvantage, prestige, and presence of parks, convenience stores, and fast food restaurants) factors were examined. Recursive partitioning, a method that generates a classification tree predicting obesity based on combined exposure to a series of variables, was used. Associations between resulting varying risk group membership and BMI percentile at baseline and 2-year follow up were examined using linear regression. Recursive partitioning yielded 7 subgroups with a prevalence of obesity equal to 8%, 11%, 26%, 28%, 41%, 60%, and 63%, respectively. The 2 highest risk subgroups comprised i) children not meeting physical activity guidelines, with at least one BMI-defined obese parent and 2 abdominally obese parents, living in disadvantaged neighborhoods without parks and, ii) children with these characteristics, except with access to ≥1 park and with access to ≥1 convenience store. Group membership was strongly associated with BMI at baseline, but did not systematically predict change in BMI. Findings support the notion that obesity is predicted by multiple factors in different settings and provide some indications of potentially obesogenic environments. Alternate group definitions as well as longer duration of follow up should be investigated to predict change in obesity.

  8. Skin injury model classification based on shape vector analysis

    PubMed Central

    2012-01-01

    Background: Skin injuries can be crucial in judicial decision making. Forensic experts base their classification on subjective opinions. This study investigates whether known classes of simulated skin injuries are correctly classified statistically based on 3D surface models and derived numerical shape descriptors. Methods: Skin injury surface characteristics are simulated with plasticine. Six injury classes (abrasions, incised wounds, gunshot entry wounds, smooth and textured strangulation marks, as well as patterned injuries), with 18 instances each, are used for a k-fold cross validation with six partitions. Deformed plasticine models are captured with a 3D surface scanner. Mean curvature is estimated for each polygon surface vertex. Subsequently, distance distributions and derived aspect ratios, convex hulls, concentric spheres, hyperbolic points and Fourier transforms are used to generate 1284-dimensional shape vectors. Subsequent descriptor reduction maximizing SNR (signal-to-noise ratio) results in an average of 41 descriptors (varying across k-folds). With a non-normal multivariate distribution of heteroskedastic data, the requirements for LDA (linear discriminant analysis) are not met. Thus, the shrinkage parameters of RDA (regularized discriminant analysis) are optimized, yielding the best performance with λ = 0.99 and γ = 0.001. Results: Receiver Operating Characteristic analysis of a descriptive RDA yields an ideal Area Under the Curve of 1.0 for all six categories. Predictive RDA results in an average CRR (correct recognition rate) of 97.22% under a six-partition k-fold. Adding uniform noise within the range of one standard deviation degrades the average CRR to 71.3%. Conclusions: Digitized 3D surface shape data can be used to automatically classify idealized shape models of simulated skin injuries. Deriving well established descriptors such as histograms, the saddle shape of hyperbolic points or convex hulls, with subsequent reduction of dimensionality while maximizing SNR, seems to work well for the data at hand, as predictive RDA results in a CRR of 97.22%. An objective basis for the discrimination of non-overlapping hypotheses or categories is a major issue in medicolegal skin injury analysis, and that is where this method appears to be strong. Technical surface quality is important, in that adding noise clearly degrades the CRR. Trial registration: This study does not cover the results of a controlled health care intervention as only plasticine was used. Thus, there was no trial registration. PMID:23497357
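
    scikit-learn does not provide RDA with the (λ, γ) parametrization used above, so the sketch below uses a shrinkage LDA as a rough, hedged stand-in for a strongly pooled RDA; it only illustrates a six-partition cross-validated correct recognition rate on random placeholder shape vectors.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(2)
X = rng.normal(size=(108, 41))            # 6 classes x 18 instances, 41 descriptors
y = np.repeat(np.arange(6), 18)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # rough RDA stand-in
cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)        # per-fold correct recognition rate
print(scores.mean())
```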

  9. Partitioning-based mechanisms under personalized differential privacy.

    PubMed

    Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian

    2017-05-01

    Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t -round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms.
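
    An illustrative sketch only, not the paper's privacy-aware or utility-based mechanisms: records are grouped by their personal privacy budget and each group's count query is answered with Laplace noise calibrated to the smallest ε in that group, so no record receives weaker protection than it requested. The thresholds and budgets below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
epsilons = rng.choice([0.1, 0.5, 1.0], size=1000)        # personal privacy budgets

def partitioned_noisy_count(epsilons, edges=(0.0, 0.3, 0.7, float("inf"))):
    """Noisy total count from budget-based partitions (illustrative only)."""
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (epsilons >= lo) & (epsilons < hi)
        if not mask.any():
            continue
        eps_min = epsilons[mask].min()                    # strictest budget in group
        total += mask.sum() + rng.laplace(scale=1.0 / eps_min)  # count sensitivity 1
    return total

print(partitioned_noisy_count(epsilons), "true count:", len(epsilons))
```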

  10. Partitioning-based mechanisms under personalized differential privacy

    PubMed Central

    Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian

    2017-01-01

    Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms. PMID:28932827

  11. Adinkra (in)equivalence from Coxeter group representations: A case study

    NASA Astrophysics Data System (ADS)

    Chappell, Isaac; Gates, S. James; Hübsch, T.

    2014-02-01

    Using a Mathematica™ code, we present a straightforward numerical analysis of the 384-dimensional solution space of signed permutation 4×4 matrices, which, in sets of four, provide representations of the 𝒢ℛ(4, 4) algebra, closely related to the 𝒩 = 1 (simple) supersymmetry algebra in four-dimensional space-time. Following ideas discussed in previous papers about automorphisms and the classification of adinkras and corresponding supermultiplets, we make a new and alternative proposal to use equivalence classes of the (unsigned) permutation group S4 to define distinct representations of higher-dimensional spin bundles within the context of adinkras. For this purpose, the definition of a dual operator akin to the well-known Hodge star is found to partition the space of these 𝒢ℛ(4, 4) representations into three suggestive classes.

  12. The threshold bootstrap clustering: a new approach to find families or transmission clusters within molecular quasispecies.

    PubMed

    Prosperi, Mattia C F; De Luca, Andrea; Di Giambenedetto, Simona; Bracciale, Laura; Fabbiani, Massimiliano; Cauda, Roberto; Salemi, Marco

    2010-10-25

    Phylogenetic methods produce hierarchies of molecular species, inferring knowledge about taxonomy and evolution. However, there is not yet a consensus methodology that provides a crisp partition of taxa, which is desirable when considering the problem of intra/inter-patient quasispecies classification or infection transmission event identification. We introduce the threshold bootstrap clustering (TBC), a new methodology for partitioning molecular sequences that does not require a phylogenetic tree estimation. The TBC is an incremental partition algorithm, inspired by the stochastic Chinese restaurant process, and takes advantage of resampling techniques and models of sequence evolution. TBC uses as input a multiple alignment of molecular sequences and its output is a crisp partition of the taxa into an automatically determined number of clusters. By varying initial conditions, the algorithm can produce different partitions. We describe a procedure that selects a prime partition among a set of candidate ones and calculates a measure of cluster reliability. TBC was successfully tested for the identification of type-1 human immunodeficiency and hepatitis C virus subtypes, and compared with previously established methodologies. It was also evaluated on the problem of HIV-1 intra-patient quasispecies clustering, and for transmission cluster identification, using a set of sequences from patients with known transmission event histories. TBC has been shown to be effective for the subtyping of HIV and HCV, and for identifying intra-patient quasispecies. To some extent, the algorithm was also able to infer clusters corresponding to events of infection transmission. The computational complexity of TBC is quadratic in the number of taxa, lower than that of other established methods; in addition, TBC has been enhanced with a measure of cluster reliability. The TBC can be useful to characterise molecular quasispecies in a broad context.

  13. A Locally Optimal Algorithm for Estimating a Generating Partition from an Observed Time Series and Its Application to Anomaly Detection.

    PubMed

    Ghalyan, Najah F; Miller, David J; Ray, Asok

    2018-06-12

    Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition via the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
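
    The sketch below is a toy one-dimensional analogue of the nearest-neighbor symbol assignment idea, not the Hirata et al. objective or the authors' algorithm: each sample is assigned the symbol of its nearest reconstruction value, the reconstruction values are then updated, and the two steps are iterated so the discrepancy cannot increase.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sin(np.linspace(0, 40, 2000)) + 0.05 * rng.normal(size=2000)  # toy series

k = 4                                              # alphabet size
recon = np.quantile(x, np.linspace(0.1, 0.9, k))   # initial reconstruction values
for _ in range(20):
    symbols = np.argmin(np.abs(x[:, None] - recon[None, :]), axis=1)  # NN assignment
    for s in range(k):
        if np.any(symbols == s):
            recon[s] = x[symbols == s].mean()      # update reconstruction values
print(recon, np.bincount(symbols, minlength=k))
```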

  14. Attribute-based Decision Graphs: A framework for multiclass data classification.

    PubMed

    Bertini, João Roberto; Nicoletti, Maria do Carmo; Zhao, Liang

    2017-01-01

    Graph-based algorithms have been successfully applied in machine learning and data mining tasks. A simple but widely used approach to build graphs from vector-based data is to consider each data instance as a vertex and connect pairs of them using a similarity measure. Although this abstraction presents some advantages, such as arbitrary shape representation of the original data, it still has some drawbacks; for example, it depends on the choice of a pre-defined distance metric and is biased by the local information among data instances. Aiming at exploring alternative ways to build graphs from data, this paper proposes an algorithm for constructing a new type of graph, called Attribute-based Decision Graph-AbDG. Given a vector-based data set, an AbDG is built by partitioning each data attribute range into disjoint intervals and representing each interval as a vertex. The edges are then established between vertices from different attributes according to a pre-defined pattern. Classification is performed through a matching process between the attribute values of the new instance and the AbDG. Moreover, AbDG provides an inner mechanism to handle missing attribute values, which contributes to expanding its applicability. Results of classification tasks have shown that AbDG is a competitive approach when compared to well-known multiclass algorithms. The main contribution of the proposed framework is the combination of the advantages of attribute-based and graph-based techniques to perform robust pattern-matching data classification, while permitting the analysis of the input data considering only a subset of its attributes. Copyright © 2016 Elsevier Ltd. All rights reserved.
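
    A hedged sketch of only the first AbDG construction step, with equal-width intervals as an assumption: each attribute's range is partitioned into disjoint intervals and each interval becomes a vertex; the pre-defined edge pattern and the matching-based classification described above are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))                      # toy vector-based data set

def attribute_vertices(X, n_intervals=4):
    """Return (attribute_index, lower, upper) tuples, one per interval vertex."""
    vertices = []
    for j in range(X.shape[1]):
        edges = np.linspace(X[:, j].min(), X[:, j].max(), n_intervals + 1)
        vertices += [(j, lo, hi) for lo, hi in zip(edges[:-1], edges[1:])]
    return vertices

for v in attribute_vertices(X)[:4]:
    print(v)
```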

  15. ADMET Evaluation in Drug Discovery. 16. Predicting hERG Blockers by Combining Multiple Pharmacophores and Machine Learning Approaches.

    PubMed

    Wang, Shuangquan; Sun, Huiyong; Liu, Hui; Li, Dan; Li, Youyong; Hou, Tingjun

    2016-08-01

    Blockade of human ether-à-go-go related gene (hERG) channel by compounds may lead to drug-induced QT prolongation, arrhythmia, and Torsades de Pointes (TdP), and therefore reliable prediction of hERG liability in the early stages of drug design is quite important to reduce the risk of cardiotoxicity-related attritions in the later development stages. In this study, pharmacophore modeling and machine learning approaches were combined to construct classification models to distinguish hERG active from inactive compounds based on a diverse data set. First, an optimal ensemble of pharmacophore hypotheses that had good capability to differentiate hERG active from inactive compounds was identified by the recursive partitioning (RP) approach. Then, the naive Bayesian classification (NBC) and support vector machine (SVM) approaches were employed to construct classification models by integrating multiple important pharmacophore hypotheses. The integrated classification models showed improved predictive capability over any single pharmacophore hypothesis, suggesting that the broad binding polyspecificity of hERG can only be well characterized by multiple pharmacophores. The best SVM model achieved the prediction accuracies of 84.7% for the training set and 82.1% for the external test set. Notably, the accuracies for the hERG blockers and nonblockers in the test set reached 83.6% and 78.2%, respectively. Analysis of significant pharmacophores helps to understand the multimechanisms of action of hERG blockers. We believe that the combination of pharmacophore modeling and SVM is a powerful strategy to develop reliable theoretical models for the prediction of potential hERG liability.

  16. Systemic treatment after whole-brain radiotherapy may improve survival in RPA class II/III breast cancer patients with brain metastasis.

    PubMed

    Zhang, Qian; Chen, Jian; Yu, Xiaoli; Ma, Jinli; Cai, Gang; Yang, Zhaozhi; Cao, Lu; Chen, Xingxing; Guo, Xiaomao; Chen, Jiayi

    2013-09-01

    Whole brain radiotherapy (WBRT) is the most widely used treatment for brain metastasis (BM), especially for patients with multiple intracranial lesions. The purpose of this study was to examine the efficacy of systemic treatments following WBRT in breast cancer patients with BM who had different clinical characteristics, based on the classification of the Radiation Therapy Oncology Group recursive partitioning analysis (RPA) and the breast cancer-specific Graded Prognostic Assessment (Breast-GPA). One hundred and one breast cancer patients with BM treated between 2006 and 2010 were analyzed. The median interval between breast cancer diagnosis and identification of BM in the triple-negative patients was shorter than in the luminal A subtype (26 vs. 36 months, respectively; P = 0.021). Univariate analysis indicated that age at BM diagnosis, Karnofsky performance status/recursive partitioning analysis (KPS/RPA) classes, number of BMs, primary tumor control, extracranial metastases and systemic treatment following WBRT were significant prognostic factors for overall survival (OS) (P < 0.05). Multivariate analysis revealed that KPS/RPA classes and systemic treatments following WBRT remained the significant prognostic factors for OS. For RPA class I, the median survival with and without systemic treatments following WBRT was 25 and 22 months, respectively (P = 0.819), while for RPA class II/III systemic treatments significantly improved OS from 7 and 2 months to 11 and 5 months, respectively (P < 0.05). Our results suggested that triple-negative patients had a shorter interval between initial diagnosis and the development of BM than luminal A patients. Systemic treatments following WBRT improved the survival of RPA class II/III patients.
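
    A hedged sketch with synthetic survival times (not the study data), using the lifelines package: overall survival of RPA class II/III patients with versus without systemic treatment after WBRT is compared with a Kaplan-Meier estimate and a log-rank test.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
t_sys = rng.exponential(11, 60)       # months, WBRT + systemic treatment (synthetic)
t_none = rng.exponential(5, 40)       # months, WBRT alone (synthetic)
e_sys = np.ones_like(t_sys)           # 1 = death observed
e_none = np.ones_like(t_none)

kmf = KaplanMeierFitter()
kmf.fit(t_sys, event_observed=e_sys, label="WBRT + systemic")
print(kmf.median_survival_time_)

res = logrank_test(t_sys, t_none, event_observed_A=e_sys, event_observed_B=e_none)
print(res.p_value)
```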

  17. Mammographic images segmentation based on chaotic map clustering algorithm

    PubMed Central

    2014-01-01

    Background This work investigates the applicability of a novel clustering approach to the segmentation of mammographic digital images. The chaotic map clustering algorithm is used to group together similar subsets of image pixels resulting in a medically meaningful partition of the mammography. Methods The image is divided into pixels subsets characterized by a set of conveniently chosen features and each of the corresponding points in the feature space is associated to a map. A mutual coupling strength between the maps depending on the associated distance between feature space points is subsequently introduced. On the system of maps, the simulated evolution through chaotic dynamics leads to its natural partitioning, which corresponds to a particular segmentation scheme of the initial mammographic image. Results The system provides a high recognition rate for small mass lesions (about 94% correctly segmented inside the breast) and the reproduction of the shape of regions with denser micro-calcifications in about 2/3 of the cases, while being less effective on identification of larger mass lesions. Conclusions We can summarize our analysis by asserting that due to the particularities of the mammographic images, the chaotic map clustering algorithm should not be used as the sole method of segmentation. It is rather the joint use of this method along with other segmentation techniques that could be successfully used for increasing the segmentation performance and for providing extra information for the subsequent analysis stages such as the classification of the segmented ROI. PMID:24666766

  18. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.

    PubMed

    Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong

    2018-05-11

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
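
    A minimal sketch of the feature-construction step described above, with assumed shapes and a random stand-in for the VMD mode matrix: the mode matrix is partitioned into submatrices along the time axis and each submatrix's singular values form one row of the singular value vector matrix.

```python
import numpy as np

rng = np.random.default_rng(7)
mode_matrix = rng.normal(size=(6, 1200))           # 6 VMD modes x signal length (toy)

def singular_value_vectors(mode_matrix, n_parts=10):
    parts = np.array_split(mode_matrix, n_parts, axis=1)    # partition along time
    return np.array([np.linalg.svd(p, compute_uv=False) for p in parts])

svv = singular_value_vectors(mode_matrix)          # shape: (n_parts, n_modes)
print(svv.shape)
```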

  19. Acoustic structures in the alarm calls of Gunnison's prairie dogs.

    PubMed

    Slobodchikoff, C N; Placer, J

    2006-05-01

    Acoustic structures of sound in Gunnison's prairie dog alarm calls are described, showing how these acoustic structures may encode information about three different predator species (red-tailed hawk-Buteo jamaicensis; domestic dog-Canis familaris; and coyote-Canis latrans). By dividing each alarm call into 25 equal-sized partitions and using resonant frequencies within each partition, commonly occurring acoustic structures were identified as components of alarm calls for the three predators. Although most of the acoustic structures appeared in alarm calls elicited by all three predator species, the frequency of occurrence of these acoustic structures varied among the alarm calls for the different predators, suggesting that these structures encode identifying information for each of the predators. A classification analysis of alarm calls elicited by each of the three predators showed that acoustic structures could correctly classify 67% of the calls elicited by domestic dogs, 73% of the calls elicited by coyotes, and 99% of the calls elicited by red-tailed hawks. The different distributions of acoustic structures associated with alarm calls for the three predator species suggest a duality of function, one of the design elements of language listed by Hockett [in Animal Sounds and Communication, edited by W. E. Lanyon and W. N. Tavolga (American Institute of Biological Sciences, Washington, DC, 1960), pp. 392-430].
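
    An illustrative sketch, not the authors' pipeline: a call is divided into 25 equal partitions and a dominant (resonant) frequency is taken from each partition's spectrum, giving a crude 25-element descriptor. The synthetic tone and sample rate are assumptions.

```python
import numpy as np

fs = 44100                                             # sample rate (assumed)
t = np.arange(int(0.5 * fs)) / fs
call = np.sin(2 * np.pi * 3000 * t) + 0.3 * np.sin(2 * np.pi * 5500 * t)  # toy "call"

def dominant_frequencies(signal, fs, n_parts=25):
    feats = []
    for part in np.array_split(signal, n_parts):
        spectrum = np.abs(np.fft.rfft(part))
        freqs = np.fft.rfftfreq(part.size, d=1.0 / fs)
        feats.append(freqs[spectrum.argmax()])         # peak frequency per partition
    return np.array(feats)

print(dominant_frequencies(call, fs))
```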

  20. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN

    PubMed Central

    Cheng, Gang; Chen, Xihui

    2018-01-01

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671

  1. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, many challenges and problems remain to be fixed. In this paper, several techniques are summarized and analyzed; they all relate to handling facial expression recognition and poses, such as a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust-statistics face formalization.

  2. Importance of partitioning membranes of the brain and the influence of the neck in head injury modelling.

    PubMed

    Kumaresan, S; Radhakrishnan, S

    1996-01-01

    A head injury model consisting of the skull, the CSF, the brain and its partitioning membranes and the neck region is simulated by considering its near actual geometry. Three-dimensional finite-element analysis is carried out to investigate the influence of the partitioning membranes of the brain and the neck in head injury analysis through free-vibration analysis and transient analysis. In free-vibration analysis, the first five modal frequencies are calculated, and in transient analysis intracranial pressure and maximum shear stress in the brain are determined for a given occipital impact load.

  3. The nature and barium partitioning between immiscible melts - A comparison of experimental and natural systems with reference to lunar granite petrogenesis

    NASA Technical Reports Server (NTRS)

    Neal, C. R.; Taylor, L. A.

    1989-01-01

    Elemental partitioning between immiscible melts has been studied using experimental liquid-liquid Kds and those determined by analysis of immiscible glasses in basalt mesostases in order to investigate lunar granite petrogenesis. Experimental data show that Ba is partitioned into the basic immiscible melt, while probe analysis results show that Ba is partitioned into the granitic immiscible melt. It is concluded that lunar granite of significant size can only occur in a plutonic or deep hypabyssal environment.

  4. Automating the expert consensus paradigm for robust lung tissue classification

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Karwoski, Ronald A.; Raghunath, Sushravya; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for automating lung tissue classification are based on a single elusive disease-differentiating metric; this undermines their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of probability density function (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns: 5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble technique. The super clusters were validated against the consensus agreement of four clinical experts. The aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the proposed workflow could make automation of lung tissue classification a clinical reality.

  5. Rule groupings in expert systems using nearest neighbour decision rules, and convex hulls

    NASA Technical Reports Server (NTRS)

    Anastasiadis, Stergios

    1991-01-01

    Expert system shells are lacking in many areas of software engineering. Large rule-based systems are not semantically comprehensible, difficult to debug, and impossible to modify or validate. Partitioning a set of rules found in CLIPS (C Language Integrated Production System) into groups of rules which reflect the underlying semantic subdomains of the problem will adequately address the concerns stated above. Techniques are introduced to structure a CLIPS rule base into groups of rules that inherently have common semantic information. The concepts involved are imported from the fields of A.I., Pattern Recognition, and Statistical Inference. Techniques focus on the areas of feature selection, classification, and a criterion for how 'good' the classification technique is, based on Bayesian Decision Theory. A variety of distance metrics are discussed for measuring the 'closeness' of CLIPS rules, and various Nearest Neighbor classification algorithms are described based on these metrics.

  6. Segmentation and classification of cell cycle phases in fluorescence imaging.

    PubMed

    Ersoy, Ilker; Bunyak, Filiz; Chagin, Vadim; Cardoso, M Christina; Palaniappan, Kannappan

    2009-01-01

    Current chemical biology methods for studying spatiotemporal correlation between biochemical networks and cell cycle phase progression in live-cells typically use fluorescence-based imaging of fusion proteins. Stable cell lines expressing fluorescently tagged protein GFP-PCNA produce rich, dynamically varying sub-cellular foci patterns characterizing the cell cycle phases, including the progress during the S-phase. Variable fluorescence patterns, drastic changes in SNR, shape and position changes and abundance of touching cells require sophisticated algorithms for reliable automatic segmentation and cell cycle classification. We extend the recently proposed graph partitioning active contours (GPAC) for fluorescence-based nucleus segmentation using regional density functions and dramatically improve its efficiency, making it scalable for high content microscopy imaging. We utilize surface shape properties of GFP-PCNA intensity field to obtain descriptors of foci patterns and perform automated cell cycle phase classification, and give quantitative performance by comparing our results to manually labeled data.

  7. An advanced method for classifying atmospheric circulation types based on prototypes connectivity graph

    NASA Astrophysics Data System (ADS)

    Zagouras, Athanassios; Argiriou, Athanassios A.; Flocas, Helena A.; Economou, George; Fotopoulos, Spiros

    2012-11-01

    Classification of weather maps at various isobaric levels has been used for many years as a methodological tool in several problems related to meteorology, climatology, atmospheric pollution and other fields. Initially the classification was performed manually. The criteria used by the person performing the classification are features of isobars or isopleths of geopotential height, depending on the type of maps to be classified. Although manual classifications integrate the perceptual experience and other unquantifiable qualities of the meteorology specialists involved, they are typically subjective and time consuming. Furthermore, during the last years different automated methods for atmospheric circulation classification have been proposed, which provide so-called objective classifications. In this paper a new method of atmospheric circulation classification of isobaric maps is presented. The method is based on graph theory. It starts with an intelligent prototype selection using an over-partitioning mode of the fuzzy c-means (FCM) algorithm, proceeds to a graph formulation for the entire dataset and produces the clusters based on the contemporary dominant sets clustering method. Graph theory is a novel mathematical approach, allowing a more efficient representation of spatially correlated data compared to the classical Euclidean space representation approaches used in conventional classification methods. The method has been applied to the classification of 850 hPa atmospheric circulation over the Eastern Mediterranean. The evaluation of the automated methods is performed by statistical indexes; results indicate that the classification is adequately comparable with other state-of-the-art automated map classification methods, for a variable number of clusters.

  8. Authentication of Organically and Conventionally Grown Basils by Gas Chromatography/Mass Spectrometry Chemical Profiles

    PubMed Central

    Wang, Zhengfang; Chen, Pei; Yu, Liangli; Harrington, Peter de B.

    2013-01-01

    Basil plants cultivated by organic and conventional farming practices were accurately classified by pattern recognition of gas chromatography/mass spectrometry (GC/MS) data. A novel extraction procedure was devised to extract characteristic compounds from ground basil powders. Two in-house fuzzy classifiers, i.e., the fuzzy rule-building expert system (FuRES) and the fuzzy optimal associative memory (FOAM) for the first time, were used to build classification models. Two crisp classifiers, i.e., soft independent modeling by class analogy (SIMCA) and the partial least-squares discriminant analysis (PLS-DA), were used as control methods. Prior to data processing, baseline correction and retention time alignment were performed. Classifiers were built with the two-way data sets, the total ion chromatogram representation of data sets, and the total mass spectrum representation of data sets, separately. Bootstrapped Latin partition (BLP) was used as an unbiased evaluation of the classifiers. By using two-way data sets, average classification rates with FuRES, FOAM, SIMCA, and PLS-DA were 100 ± 0%, 94.4 ± 0.4%, 93.3 ± 0.4%, and 100 ± 0%, respectively, for 100 independent evaluations. The established classifiers were used to classify a new validation set collected 2.5 months later with no parametric changes except that the training set and validation set were individually mean-centered. For the new two-way validation set, classification rates with FuRES, FOAM, SIMCA, and PLS-DA were 100%, 83%, 97%, and 100%, respectively. Thereby, the GC/MS analysis was demonstrated as a viable approach for organic basil authentication. It is the first time that a FOAM has been applied to classification. A novel baseline correction method was used also for the first time. The FuRES and the FOAM are demonstrated as powerful tools for modeling and classifying GC/MS data of complex samples and the data pretreatments are demonstrated to be useful to improve the performance of classifiers. PMID:23398171

  9. Constraint Drive Generation of Vision Algorithms on an Elastic Infrastructure

    DTIC Science & Technology

    2014-10-01

    (Abstract only partially recoverable from the source record.) The data set is partitioned into training and validation slices, and images are annotated in descending order from the beginning of the key space.

  10. Information-Based Approach to Unsupervised Machine Learning

    DTIC Science & Technology

    2013-06-19

    (Abstract only partially recoverable from the source record.) The approach fits the test input density to a linear combination of class-wise input distributions under the Kullback-Leibler (KL) divergence.

  11. a Rough Set Decision Tree Based Mlp-Cnn for Very High Resolution Remotely Sensed Image Classification

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.

    2017-09-01

    Recent advances in remote sensing have produced a great number of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for effective processing, analysis and classification owing to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them target pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partitions them into correctness and incorrectness on the map. The correct classification regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, well capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way toward fully automatic and effective VHR image classification.

  12. Cell segmentation in phase contrast microscopy images via semi-supervised classification over optics-related features.

    PubMed

    Su, Hang; Yin, Zhaozheng; Huh, Seungil; Kanade, Takeo

    2013-10-01

    Phase-contrast microscopy is one of the most common and convenient imaging modalities to observe long-term multi-cellular processes, which generates images by the interference of lights passing through transparent specimens and background medium with different retarded phases. Despite many years of study, computer-aided phase contrast microscopy analysis on cell behavior is challenged by image qualities and artifacts caused by phase contrast optics. Addressing the unsolved challenges, the authors propose (1) a phase contrast microscopy image restoration method that produces phase retardation features, which are intrinsic features of phase contrast microscopy, and (2) a semi-supervised learning based algorithm for cell segmentation, which is a fundamental task for various cell behavior analysis. Specifically, the image formation process of phase contrast microscopy images is first computationally modeled with a dictionary of diffraction patterns; as a result, each pixel of a phase contrast microscopy image is represented by a linear combination of the bases, which we call phase retardation features. Images are then partitioned into phase-homogeneous atoms by clustering neighboring pixels with similar phase retardation features. Consequently, cell segmentation is performed via a semi-supervised classification technique over the phase-homogeneous atoms. Experiments demonstrate that the proposed approach produces quality segmentation of individual cells and outperforms previous approaches. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. A mutual information-Dempster-Shafer based decision ensemble system for land cover classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Pahlavani, Parham; Bigdeli, Behnaz

    2017-12-01

    Hyperspectral images contain extremely rich spectral information that offers great potential to discriminate between various land cover classes. However, these images are usually composed of tens or hundreds of spectrally close bands, which results in high redundancy and a great amount of computation time in hyperspectral classification. Furthermore, in the presence of mixed-coverage pixels, crisp classifiers produce omission and commission errors. This paper presents a mutual information-Dempster-Shafer system through an ensemble classification approach for the classification of hyperspectral data. First, mutual information is applied to split the data into a few independent partitions to overcome high dimensionality. Then, a fuzzy maximum likelihood classifier classifies each band subset. Finally, Dempster-Shafer is applied to fuse the results of the fuzzy classifiers. In order to assess the proposed method, a crisp ensemble system based on a support vector machine as the crisp classifier and weighted majority voting as the crisp fusion method is applied to the hyperspectral data. Furthermore, a dimension reduction system is utilized to assess the effectiveness of the mutual information band splitting of the proposed method. The proposed methodology provides interesting conclusions on the effectiveness and potentiality of mutual information-Dempster-Shafer based classification of hyperspectral data.
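
    A hedged sketch of the band-splitting idea only (not the paper's exact procedure or the Dempster-Shafer fusion): bands of a toy cube are discretized, pairwise normalized mutual information is computed, and average-linkage clustering on 1 - NMI yields band subsets that could each feed a separate classifier.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(8)
cube = rng.random((50, 50, 20))                        # toy hyperspectral cube
bands = cube.reshape(-1, 20).T                         # shape: (n_bands, n_pixels)
disc = np.digitize(bands, np.quantile(bands, [0.25, 0.5, 0.75]))  # discretize values

n = disc.shape[0]
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        nmi = normalized_mutual_info_score(disc[i], disc[j])
        dist[i, j] = dist[j, i] = 1.0 - nmi            # low NMI = dissimilar bands

Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=4, criterion="maxclust")
print(labels)                                          # band-subset index per band
```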

  14. Classification and regression tree (CART) analysis of endometrial carcinoma: Seeing the forest for the trees.

    PubMed

    Barlin, Joyce N; Zhou, Qin; St Clair, Caryn M; Iasonos, Alexia; Soslow, Robert A; Alektiar, Kaled M; Hensley, Martee L; Leitao, Mario M; Barakat, Richard R; Abu-Rustum, Nadeem R

    2013-09-01

    The objectives of the study are to evaluate which clinicopathologic factors influenced overall survival (OS) in endometrial carcinoma and to determine if the surgical effort to assess para-aortic (PA) lymph nodes (LNs) at initial staging surgery impacts OS. All patients diagnosed with endometrial cancer from 1/1993-12/2011 who had LNs excised were included. PALN assessment was defined by the identification of one or more PALNs on final pathology. A multivariate analysis was performed to assess the effect of PALNs on OS. A form of recursive partitioning called classification and regression tree (CART) analysis was implemented. Variables included: age, stage, tumor subtype, grade, myometrial invasion, total LNs removed, evaluation of PALNs, and adjuvant chemotherapy. The cohort included 1920 patients, with a median age of 62 years. The median number of LNs removed was 16 (range, 1-99). The removal of PALNs was not associated with OS (P=0.450). Using the CART hierarchically, stage I vs. stages II-IV and grades 1-2 vs. grade 3 emerged as predictors of OS. If the tree was allowed to grow, further branching was based on age and myometrial invasion. Total number of LNs removed and assessment of PALNs as defined in this study were not predictive of OS. This innovative CART analysis emphasized the importance of proper stage assignment and a binary grading system in impacting OS. Notably, the total number of LNs removed and specific evaluation of PALNs as defined in this study were not important predictors of OS. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Plant interspecies competition for sunlight: a mathematical model of canopy partitioning.

    PubMed

    Nevai, Andrew L; Vance, Richard R

    2007-07-01

    We examine the influence of canopy partitioning on the outcome of competition between two plant species that interact only by mutually shading each other. This analysis is based on a Kolmogorov-type canopy partitioning model for plant species with clonal growth form and fixed vertical leaf profiles (Vance and Nevai in J. Theor. Biol., 2007, to appear). We show that canopy partitioning is necessary for the stable coexistence of the two competing plant species. We also use implicit methods to show that, under certain conditions, the species' nullclines can intersect at most once. We use nullcline endpoint analysis to show that when the nullclines do intersect, and in such a way that they cross, then the resulting equilibrium point is always stable. We also construct surfaces that divide parameter space into regions within which the various outcomes of competition occur, and then study parameter dependence in the locations of these surfaces. The analysis presented here and in a companion paper (Nevai and Vance, The role of leaf height in plant competition for sunlight: analysis of a canopy partitioning model, in review) together shows that canopy partitioning is both necessary and, under appropriate parameter values, sufficient for the stable coexistence of two hypothetical plant species whose structure and growth are described by our model.

  16. Molecular systematics of the barklouse family Psocidae (Insecta: Psocodea: 'Psocoptera') and implications for morphological and behavioral evolution.

    PubMed

    Yoshizawa, Kazunori; Johnson, Kevin P

    2008-02-01

    We evaluated the higher level classification within the family Psocidae (Insecta: Psocodea: 'Psocoptera') based on combined analyses of nuclear 18S, Histone 3, wingless and mitochondrial 12S, 16S and COI gene sequences. Various analyses (inclusion/exclusion of incomplete taxa and/or rapidly evolving genes, data partitioning, and analytical method selection) all provided similar results, which were generally concordant with relationships inferred using morphological observations. Based on the phylogenetic trees estimated for Psocidae, we propose a revised higher level classification of this family, although uncertainty still exists regarding some aspects of this classification. This classification includes a basal division into two subfamilies, 'Amphigerontiinae' (possibly paraphyletic) and Psocinae. The Amphigerontiinae is divided into the tribes Kaindipsocini (new tribe), Blastini, Amphigerontini, and Stylatopsocini. Psocinae is divided into the tribes 'Ptyctini' (probably paraphyletic), Psocini, Atrichadenotecnini (new tribe), Sigmatoneurini, Metylophorini, and Thyrsophorini (the latter includes the taxon previously recognized as Cerastipsocini). We examined the evolution of symmetric/asymmetric male genitalia over this tree and found this character to be quite homoplasious.

  17. Applications of Some Artificial Intelligence Methods to Satellite Soundings

    NASA Technical Reports Server (NTRS)

    Munteanu, M. J.; Jakubowicz, O.

    1985-01-01

    Hard clustering of temperature profiles and regression temperature retrievals were used, and the method was refined using the probabilities of membership of each pattern vector in each of the clusters derived with discriminant analysis. In hard clustering, the maximum probability is taken, the corresponding cluster is considered the correct cluster, and the remaining probabilities are discarded. In fuzzy partitioned clustering these probabilities are kept and the final regression retrieval is a weighted combination of the regression retrievals of several clusters. This method was used in the clustering of brightness temperatures, where the purpose was to predict tropopause height. A further refinement is the division of temperature profiles into three major regions for classification purposes. The results are summarized in tables in which total r.m.s. errors are displayed. An approach based on fuzzy logic, which is intimately related to artificial intelligence methods, is recommended.
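
    An illustrative sketch under stated assumptions (synthetic data, hypothetical variable meanings): one regression retrieval is trained per cluster and the final retrieval is the membership-probability-weighted sum of the per-cluster predictions, mirroring the fuzzy-partitioned scheme described above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 8))                       # toy brightness temperatures
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=300)   # e.g. tropopause height
memberships = rng.dirichlet(np.ones(3), size=300)   # fuzzy cluster probabilities

models = [LinearRegression().fit(X, y, sample_weight=memberships[:, c]) for c in range(3)]
per_cluster = np.column_stack([m.predict(X) for m in models])
retrieval = (memberships * per_cluster).sum(axis=1)          # fuzzy-weighted retrieval
print(np.sqrt(np.mean((retrieval - y) ** 2)))                # total r.m.s. error
```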

  18. Comparison of modeling approaches for carbon partitioning: Impact on estimates of global net primary production and equilibrium biomass of woody vegetation from MODIS GPP

    NASA Astrophysics Data System (ADS)

    Ise, Takeshi; Litton, Creighton M.; Giardina, Christian P.; Ito, Akihiko

    2010-12-01

    Partitioning of gross primary production (GPP) to aboveground versus belowground, to growth versus respiration, and to short versus long-lived tissues exerts a strong influence on ecosystem structure and function, with potentially large implications for the global carbon budget. A recent meta-analysis of forest ecosystems suggests that carbon partitioning to leaves, stems, and roots varies consistently with GPP and that the ratio of net primary production (NPP) to GPP is conservative across environmental gradients. To examine influences of carbon partitioning schemes employed by global ecosystem models, we used this meta-analysis-based model and a satellite-based (MODIS) terrestrial GPP data set to estimate global woody NPP and equilibrium biomass, and then compared it to two process-based ecosystem models (Biome-BGC and VISIT) using the same GPP data set. We hypothesized that different carbon partitioning schemes would result in large differences in global estimates of woody NPP and equilibrium biomass. Woody NPP estimated by Biome-BGC and VISIT was 25% and 29% higher than the meta-analysis-based model for boreal forests, with smaller differences in temperate and tropics. Global equilibrium woody biomass, calculated from model-specific NPP estimates and a single set of tissue turnover rates, was 48 and 226 Pg C higher for Biome-BGC and VISIT compared to the meta-analysis-based model, reflecting differences in carbon partitioning to structural versus metabolically active tissues. In summary, we found that different carbon partitioning schemes resulted in large variations in estimates of global woody carbon flux and storage, indicating that stand-level controls on carbon partitioning are not yet accurately represented in ecosystem models.

  19. Systematic review and meta-analysis of feasibility, safety, and efficacy of a novel procedure: associating liver partition and portal vein ligation for staged hepatectomy.

    PubMed

    Schadde, Erik; Schnitzbauer, Andreas A; Tschuor, Christoph; Raptis, Dimitri A; Bechstein, Wolf O; Clavien, Pierre-Alain

    2015-09-01

    Associating liver partition and portal vein ligation for staged hepatectomy (ALPPS) is a novel strategy to resect liver tumors despite the small size of the liver remnant. It is a hepatectomy in two stages, with PVL and parenchymal transection during the first stage, which induces rapid growth of the remnant liver, exceeding that of any other technique. Despite high postoperative morbidity and mortality in most reports, the technique was adopted by a number of surgeons. This systematic review explores current data regarding the feasibility, safety, and oncologic efficacy of ALPPS; the search strategy has been published online. A meta-analysis of hypertrophy, feasibility (ALPPS stage 2 performed), mortality, complications, and R0 (complete) resection was performed. A literature search revealed a total of 13 publications that met the search criteria, reporting data from 295 patients. Evidence levels were low, with the highest Oxford evidence level being 2c. The most common indication was colorectal liver metastasis in 203 patients. Hypertrophy in the meta-analysis was 84 %, feasibility (ALPPS stage 2 performed) 97 % (CI 94-99 %), 90-day mortality 11 % (CI 8-16 %), and complications of grade IIIa or higher occurred in 44 % (CI 38-50 %) of patients. A standardized reporting format for complications is lacking despite the widespread use of the Clavien-Dindo classification. Oncological outcome is not well-documented. The most common topics in the selected published studies were technical feasibility and indications for the procedure. Publication bias due to case series and single-center reports is common. A systematic exploration of this novel operation with a rigid methodology, such as registry analyses and a randomized controlled trial, is highly advised.

  20. Adjuvant treatment may benefit patients with high-risk upper rectal cancer: A nomogram and recursive partitioning analysis of 547 patients.

    PubMed

    Wang, Xin; Jin, Jing; Yang, Yong; Liu, Wen-Yang; Ren, Hua; Feng, Yan-Ru; Xiao, Qin; Li, Ning; Deng, Lei; Fang, Hui; Jing, Hao; Lu, Ning-Ning; Tang, Yu; Wang, Jian-Yang; Wang, Shu-Lian; Wang, Wei-Hu; Song, Yong-Wen; Liu, Yue-Ping; Li, Ye-Xiong

    2016-10-04

    The role of adjuvant chemoradiotherapy (ACRT) or adjuvant chemotherapy (ACT) in treating patients with locally advanced upper rectal cancer (URC) after total mesorectal excision (TME) surgery remains unclear. We developed a clinical nomogram and a recursive partitioning analysis (RPA)-based risk stratification system for predicting 5-year cancer-specific survival (CSS) to determine whether these individuals require ACRT or ACT. This retrospective analysis included 547 patients with primary URC. A nomogram was developed based on the Cox regression model. The performance of the model was assessed by the concordance index (C-index) and calibration curve in internal validation with bootstrapping. RPA stratified patients into risk groups based on their tumor characteristics. Five independent prognostic factors (age, preoperative increased carcinoembryonic antigen and carcinoma antigen 19-9, positive lymph node [PLN] number, tumor deposit [TD], and pathological T classification) were identified and entered into the predictive nomogram. The bootstrap-corrected C-index was 0.757. RPA stratification into three prognostic groups showed markedly different prognoses. Only the high-risk group (patients with PLN ≤ 6 and TD, or PLN > 6) benefited from ACRT plus ACT when compared with surgery followed by ACRT or ACT, and surgery alone (5-year CSS: 70.8% vs. 57.8% vs. 15.6%, P < 0.001). Our nomogram predicts 5-year CSS after TME surgery for locally advanced rectal cancer, and the RPA-based stratification indicates that ACRT plus ACT after surgery may be an important treatment plan with potentially significant survival advantages in high-risk URC. This may help to select candidates for adjuvant treatment in prospective studies.
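
    A hedged sketch with simulated data (not the study cohort), using the lifelines package: a Cox model is fit on nomogram-style predictors and a concordance index analogous to the C-index reported above is computed.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(10)
n = 547
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),                   # hypothetical predictors
    "positive_lymph_nodes": rng.poisson(2, n),
    "tumor_deposit": rng.integers(0, 2, n),
    "time_months": rng.exponential(60, n),          # synthetic follow-up time
    "event": rng.integers(0, 2, n),                 # cancer-specific death indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
risk = cph.predict_partial_hazard(df[["age", "positive_lymph_nodes", "tumor_deposit"]])
print(concordance_index(df["time_months"], -risk, df["event"]))  # C-index
```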

  1. Relationship between tourism development and vegetated landscapes in Luya Mountain Nature Reserve, Shanxi, China.

    PubMed

    Cheng, Zhan-Hong; Zhang, Jin-Tun

    2005-09-01

    The relationship between tourism development and vegetated landscapes is analyzed in this study for the Luya Mountain Nature Reserve (LMNR), Shanxi, China. Indices such as Sensitive Level (SL), Landscape Importance Value (LIV), information index of biodiversity (H'), Shade-tolerant Species Proportion (SSP), and Tourism Influencing Index (TII) are used to characterize vegetated landscapes, the impact of tourism, and their relationship. Their relationship is studied by Two-Way Indicator Species Analysis (TWINSPAN) and Detrended Correspondence Analysis (DCA). TWINSPAN provides a correct and rapid partition for the classification, and DCA ordination shows the changing tendency of all vegetation types in relation to tourism development. These results reflect the ecological relationship between tourism development and vegetated landscapes. In the Luya Mountain Nature Reserve, most plant communities are in good or medium condition, which shows that these vegetated landscapes can support more tourism. However, the occurrence of poor condition shows that there is a serious conflict between tourism development and vegetated landscapes.

  2. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in the automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem, and seven classes (happiness, anger, sadness, disgust, surprise, fear and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is achieved on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and a good partitioning strategy are followed.
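
    A minimal sketch under assumptions (random arrays stand in for CK+/SFEW face crops): a uniform LBP histogram is extracted per image with scikit-image and fed to a linear-kernel SVM, mirroring the LBP + SVM pipeline described above; a real pipeline would first localize and partition the face.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
images = rng.random((140, 64, 64))                 # toy stand-ins for face crops
labels = rng.integers(0, 7, 140)                   # 7 expression classes

def lbp_histogram(img, P=8, R=1):
    img8 = (img * 255).astype(np.uint8)            # LBP expects integer intensities
    lbp = local_binary_pattern(img8, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.array([lbp_histogram(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))                       # chance-level on random data
```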

  3. Contextually guided very-high-resolution imagery classification with semantic segments

    NASA Astrophysics Data System (ADS)

    Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.

    2017-10-01

    Contextual information, revealing relationships and dependencies between image objects, is among the most important information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts, and then to assign semantic labels according to the properties of image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (for example, building roofs are usually partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) in order to represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN) and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (i.e., the Vaihingen and Beijing scenes) indicate that the proposed method is an improvement over existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).

  4. Hepatoprotective action of various partitions of methanol extract of Bauhinia purpurea leaves against paracetamol-induced liver toxicity: involvement of the antioxidant mechanisms.

    PubMed

    Zakaria, Zainul Amiruddin; Yahya, Farhana; Mamat, Siti Syariah; Mahmood, Nur Diyana; Mohtarrudin, Nurhafizah; Taher, Muhammad; Hamid, Siti Selina Abdul; Teh, Lay Kek; Salleh, Mohd Zaki

    2016-06-11

    The methanol extract of Bauhinia purpurea L. (family Fabaceae) (MEBP) possesses high antioxidant and anti-inflammatory activities and was recently reported to exert hepatoprotection against paracetamol (PCM)-induced liver injury in rats. In an attempt to identify the hepatoprotective bioactive compounds in MEBP, the extract was prepared in different partitions and subjected to the PCM-induced liver injury model in rats. Dried MEBP was partitioned successively to obtain petroleum ether (PEBP), ethyl acetate (EABP) and aqueous (AQBP) partitions, respectively. All partitions were subjected to in vitro antioxidant (i.e., total phenolic content (TPC), 2,2-diphenyl-1-picrylhydrazyl (DPPH)- and superoxide-radical scavenging assays, and the oxygen radical absorbance capacity (ORAC) assay) and anti-inflammatory (i.e., lipoxygenase (LOX) and xanthine oxidase (XO) assays) analyses. The partitions, prepared at doses of 50, 250 and 500 mg/kg, together with a vehicle (10 % DMSO) and a standard drug (200 mg/kg silymarin), were administered orally for 7 consecutive days prior to subjection to the 3 mg/kg PCM-induced liver injury model in rats. Following induction of hepatic injury, blood samples and livers were collected for the respective biochemical and histopathological studies. Body weight changes and liver weight were also recorded. The partitions were also subjected to phytochemical screening and HPLC analysis. Of all partitions, EABP possessed the highest TPC value and demonstrated remarkable antioxidant activity when assessed using the DPPH- and superoxide-radical scavenging assays as well as the ORAC assay, followed by AQBP and PEBP. All partitions also showed low anti-inflammatory activity via the LOX and XO pathways. In the hepatoprotective study, the effectiveness of the partitions was in the order EABP>AQBP>PEBP, which is supported by the microscopic analysis and histopathological scoring. In the biochemical analysis, EABP was also the most effective, reducing the serum levels of alanine transaminase (ALT) and aspartate transaminase (AST) at all doses tested in comparison to the other partitions. Phytochemical screening and HPLC analysis suggested the presence of flavonoids, condensed tannins and triterpenes in EABP; flavonoids, condensed tannins and saponins in PEBP; and only saponins in AQBP. EABP demonstrates the most effective hepatoprotection against PCM-induced liver injury in rats. This observation could be attributed to its remarkable antioxidant activity and the presence of flavonoids, which may act synergistically with other biocompounds to produce the hepatoprotection.

  5. High- and low-level hierarchical classification algorithm based on source separation process

    NASA Astrophysics Data System (ADS)

    Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber

    2016-10-01

    High-dimensional data applications have earned great attention in recent years. We focus on remote sensing data analysis on high-dimensional space like hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most of the hierarchical algorithms associate leaves to individual clusters, and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources which are represented according to mutually independent axes to specifically represent some land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results since the low-level agglomerative clustering is guided by the most relevant independent sources. Then at each new step we obtain a new finer partition that will participate in the clustering process to enhance semantic capabilities and give good identification rates.
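
    A minimal sketch of the preprocessing idea, independent component analysis followed by hierarchical clustering of the resulting sources, is given below using scikit-learn and SciPy. The number of sources, number of clusters, Ward linkage and the subsampling step are illustrative choices; the paper's combined divisive/agglomerative tree is not reproduced.

```python
# Hedged sketch: ICA as dimensionality reduction before hierarchical clustering of
# hyperspectral pixels. Parameter values are placeholders.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist

def ica_hierarchical_labels(cube, n_sources=8, n_clusters=6, sample=5000, seed=0):
    """cube: (rows, cols, bands) hyperspectral image -> one cluster label per pixel."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    # Independent component analysis as the dimensionality-reduction preprocessing step
    S = FastICA(n_components=n_sources, random_state=seed).fit_transform(X)
    # Agglomerative (Ward) clustering on a random subsample; full linkage is O(n^2)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(S), size=min(sample, len(S)), replace=False)
    Z = linkage(S[idx], method="ward")
    sub_labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    # Propagate labels to all pixels via the nearest cluster centroid in source space
    centroids = np.array([S[idx][sub_labels == k].mean(axis=0)
                          for k in range(1, n_clusters + 1)])
    return cdist(S, centroids).argmin(axis=1).reshape(rows, cols)
```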

  6. Segmentation, feature extraction, and multiclass brain tumor classification.

    PubMed

    Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal

    2013-12-01

    Multiclass brain tumor classification is performed using a diversified dataset of 428 post-contrast T1-weighted MR images from 55 patients. These images are of primary brain tumors, namely astrocytoma (AS), glioblastoma multiforme (GBM), childhood tumor-medulloblastoma (MED), meningioma (MEN), secondary tumor-metastatic (MET), and normal regions (NR). Eight hundred fifty-six segmented regions of interest (SROIs) are extracted by a content-based active contour model, and two hundred eighteen intensity and texture features are extracted from these SROIs. In this study, principal component analysis (PCA) is used to reduce the dimensionality of the feature space, and the six classes are then classified by an artificial neural network (ANN); hence, this approach is named the PCA-ANN approach. Three sets of experiments have been performed. In the first experiment, classification is performed with the ANN approach alone. In the second experiment, the PCA-ANN approach with random sub-sampling is used, in which SROIs from the same patient may be repeated during testing; the classification accuracy increases from 77 to 91 %. PCA-ANN delivers high accuracy for each class: AS-90.74 %, GBM-88.46 %, MED-85 %, MEN-90.70 %, MET-96.67 %, and NR-93.78 %. In the third experiment, to remove bias and to test the robustness of the proposed system, the data are partitioned such that SROIs from the same patient are not shared between the training and testing sets. In this case also, the proposed system performs well, delivering an overall accuracy of 85.23 %. The individual class accuracies are: AS-86.15 %, GBM-65.1 %, MED-63.36 %, MEN-91.5 %, MET-65.21 %, and NR-93.3 %. A computer-aided diagnostic system comprising the developed methods for segmentation, feature extraction, and classification of brain tumors can be beneficial to radiologists for precise localization, diagnosis, and interpretation of brain tumors on MR images.
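
    The third experiment's patient-independent partitioning can be sketched with a grouped split, as below: PCA for dimensionality reduction, a small neural network, and train/test splits grouped by patient so that no patient's SROIs appear in both sets. Feature extraction, the network architecture and the component count are placeholders, not the study's settings.

```python
# Hedged sketch of patient-wise data partitioning for a PCA + neural network classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GroupShuffleSplit

def patientwise_accuracy(X, y, patient_ids, n_components=30, seed=0):
    """X: feature matrix per SROI; y: tumor class; patient_ids: one id per SROI."""
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000,
                                        random_state=seed))
    # Grouping by patient keeps all of a patient's SROIs on one side of each split
    splitter = GroupShuffleSplit(n_splits=5, test_size=0.3, random_state=seed)
    scores = []
    for train, test in splitter.split(X, y, groups=patient_ids):
        model.fit(X[train], y[train])
        scores.append(model.score(X[test], y[test]))
    return float(np.mean(scores))
```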

  7. Outcomes of pancreatogastrostomy with gastric partition after pylorus-preserving pancreaticoduodenectomy with gastric partition.

    PubMed

    Sánchez Cabús, Santiago; Saavedra, David; Sampson, Jaime; Cubel, Marc; López-Boado, Miguel Ángel; Ferrer, Joana; Fernández-Cruz, Laureano

    2015-10-01

    Pylorus-preserving pancreatoduodenectomy with gastric partition (PPPD-GP) seems to be associated with a better postoperative outcome than conventional pancreaticojejunostomy in the setting of a prospective randomized study. The aim of this study is to further evaluate the surgical outcome in a series of 129 consecutive patients. Between 2007 and June 2013, 129 patients with periampullary tumors surgically treated with PPPD-GP were retrospectively analyzed. Surgical complications (Clavien-Dindo score), as well as pancreatic and non-pancreas-related complications, were analyzed. The overall postoperative complication rate was 77%, although 50% of complications were graded I-II by the Clavien-Dindo classification. The incidence of clinically relevant pancreatic fistula was 18% (ISGPF type B: 12%; type C: 6%). Other pancreas-specific complications, such as delayed gastric emptying and postpancreatectomy haemorrhage, occurred in 27% and 15% of patients, respectively, similar to results published in the literature. The overall perioperative mortality rate was 4.6%. The PPPD-GP results show that it is a technique with acceptable morbidity, low mortality and a pancreatic fistula rate similar to other currently described techniques of pancreaticoenteric reconstruction. Copyright © 2015 AEC. Publicado por Elsevier España, S.L.U. All rights reserved.

  8. Gas-particle partitioning of semi-volatile organics on organic aerosols using a predictive activity coefficient model: analysis of the effects of parameter choices on model performance

    NASA Astrophysics Data System (ADS)

    Chandramouli, Bharadwaj; Jang, Myoseon; Kamens, Richard M.

    The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the α-pinene-O3 reaction were augmented by carrying out smog chamber partitioning experiments on aerosols from meat cooking, and catalyzed and uncatalyzed gasoline engine exhaust. Model compositions for aerosols from meat cooking and gasoline combustion emissions were used to calculate activity coefficients for the SOCs in the organic aerosols, and the Pankow absorptive gas/particle partitioning model was used to calculate the partitioning coefficient Kp and quantitate the predictive improvements of using the activity coefficient. The slope of the log Kp vs. log p°L correlation for partitioning on aerosols from meat cooking improved from -0.81 to -0.94 after incorporation of activity coefficients γi,om. A stepwise regression analysis of the partitioning model revealed that, for the data set used in this study, partitioning predictions on α-pinene-O3 secondary aerosol and wood combustion aerosol showed statistically significant improvement after incorporation of γi,om, which can be attributed to their overall polarity. The partitioning model was sensitive to changes in aerosol composition when updated compositions for α-pinene-O3 aerosol and wood combustion aerosol were used. The effectiveness of the octanol-air partitioning coefficient (KOA) as a partitioning correlator over a variety of aerosol types was evaluated. The slope of the log Kp vs. log KOA correlation was not constant over the aerosol types and SOCs used in the study, and the use of KOA for partitioning correlations can potentially lead to significant deviations, especially for polar aerosols.
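
    For reference, a minimal sketch of the Pankow absorptive partitioning expression with an activity-coefficient term is shown below; the constant and units follow the commonly quoted form of the model, and the example input values are illustrative rather than values from this study.

```python
# Hedged sketch of the Pankow absorptive gas/particle partitioning model.
R = 8.206e-5  # gas constant, m^3 atm mol^-1 K^-1

def pankow_kp(T, f_om, mw_om, gamma_om, p_l_torr):
    """Absorptive partitioning coefficient Kp in m^3 per microgram.

    T         : temperature (K)
    f_om      : mass fraction of absorbing organic matter in the particle phase
    mw_om     : mean molecular weight of the organic matter phase (g/mol)
    gamma_om  : activity coefficient of the SOC in the organic phase
    p_l_torr  : sub-cooled liquid vapour pressure of the SOC (torr)
    """
    return (f_om * 760.0 * R * T) / (mw_om * gamma_om * p_l_torr * 1e6)

# Illustrative call: a semivolatile compound at 298 K in a purely organic particle
kp = pankow_kp(T=298.0, f_om=1.0, mw_om=200.0, gamma_om=1.5, p_l_torr=1e-6)
```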

  9. Catchment Classification: Connecting Climate, Structure and Function

    NASA Astrophysics Data System (ADS)

    Sawicz, K. A.; Wagener, T.; Sivapalan, M.; Troch, P. A.; Carrillo, G. A.

    2010-12-01

    Hydrology does not yet possess a generally accepted catchment classification framework. Such a classification framework needs to: [1] give names to things, i.e. the main classification step, [2] permit transfer of information, i.e. regionalization of information, [3] permit development of generalizations, i.e. to develop new theory, and [4] provide a first order environmental change impact assessment, i.e., the hydrologic implications of climate, land use and land cover change. One strategy is to create a catchment classification framework based on the notion of catchment functions (partitioning, storage, and release). Results of an empirical study presented here connect climate and structure to catchment function (in the form of select hydrologic signatures), based on an analysis of over 300 US catchments. Initial results indicate a wide assortment of signature relationships with properties of climate, geology, and vegetation. The uncertainty in the different regionalized signatures varies widely, and therefore there is variability in the robustness of classifying ungauged basins. This research provides insight into the controls on the hydrologic behavior of a catchment, and enables a classification framework applicable to gauged and ungauged basins across the study domain. This study sheds light on what we can expect to achieve in mapping climate, structure and function in a top-down manner. Results of this study complement work done using a bottom-up physically-based modeling framework to generalize this approach (Carrillo et al., this session).

  10. EXPLORATORY ANALYSIS OF THE EFFECTS OF PARTICULATE CHARACTERISTICS ON THE VARIATION IN PARTITIONING OF NONPOLAR ORGANIC CONTAMINANTS TO MARINE SEDIMENTS

    EPA Science Inventory

    The partitioning of nonpolar organic contaminants to marine sediments is considered to be controlled by the amount of organic carbon present. However, several studies propose that other characteristics of sediments may affect the partitioning of contaminants. For this exploratory...

  11. Evaluation of the use of partition coefficients and molecular surface properties as predictors of drug absorption: a provisional biopharmaceutical classification of the list of national essential medicines of Pakistan

    PubMed Central

    Shawahna, R.; Rahman, NU.

    2011-01-01

    Background and the purpose of the study Partition coefficients (log D and log P) and polar surface area (PSA) are potential predictors of the intestinal permeability of drugs. The aim of this investigation was to evaluate and compare these intestinal permeability indicators. Methods Aqueous solubility data were obtained from literature or calculated using ACD/Labs and ALOGPS. Permeability data were predicted based on log P, log D at pH 6.0 (log D6.0), and PSA. Results Metoprolol's log P, log D6.0, and a PSA of <65 Å² correctly predicted 55.9%, 50.8% and 54.2% of permeability classes, respectively. Labetalol's log P, log D6.0 and PSA correctly predicted 54.2%, 64.4% and 61% of permeability classes, respectively. Log D6.0 correlated well (81%) with Caco-2 permeability (Papp). Of the list of national essential medicines, 135 orally administered drugs were classified according to the biopharmaceutical classification system (BCS). Of these, 57 (42.2%), 28 (20.7%), 44 (32.6%), and 6 (4.4%) were class I, II, III and IV, respectively. Conclusion Log D6.0 showed better prediction capability than log P. Metoprolol as permeability internal standard was more conservative than labetalol. PMID:22615645
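
    A hedged sketch of a provisional BCS assignment in this spirit is shown below: solubility is judged through the dose number and permeability through a log D6.0 surrogate. The permeability cut-off and the example numbers are placeholders, not the thresholds calibrated in the study.

```python
# Hedged sketch of a provisional BCS class assignment. The permeability cut-off below
# is a placeholder (e.g. at or above a reference drug), not the authors' calibration.
def provisional_bcs(dose_mg, solubility_mg_per_ml, log_d6, perm_cutoff=1.0):
    """Return a provisional BCS class (I-IV)."""
    # Dose number: highest dose dissolved in 250 mL relative to aqueous solubility
    dose_number = (dose_mg / 250.0) / solubility_mg_per_ml
    high_solubility = dose_number <= 1.0
    high_permeability = log_d6 >= perm_cutoff
    if high_solubility and high_permeability:
        return "I"
    if high_permeability:
        return "II"
    if high_solubility:
        return "III"
    return "IV"

print(provisional_bcs(dose_mg=100, solubility_mg_per_ml=5.0, log_d6=1.3))  # -> "I"
```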

  12. What are the structural features that drive partitioning of proteins in aqueous two-phase systems?

    PubMed

    Wu, Zhonghua; Hu, Gang; Wang, Kui; Zaslavsky, Boris Yu; Kurgan, Lukasz; Uversky, Vladimir N

    2017-01-01

    Protein partitioning in aqueous two-phase systems (ATPSs) represents a convenient, inexpensive, and easy-to-scale-up protein separation technique. Since the partition behavior of a protein depends dramatically on the ATPS composition, it would be highly beneficial to have reliable means for (even qualitative) prediction of the partitioning of a target protein under different conditions. Our aim was to understand which structural features of proteins contribute to partitioning of a query protein in a given ATPS. We undertook a systematic empirical analysis of relations between 57 numerical structural descriptors derived from the corresponding amino acid sequences and crystal structures of 10 well-characterized proteins and the partition behavior of these proteins in 29 different ATPSs. This analysis revealed that just a few structural characteristics of proteins can accurately determine the behavior of these proteins in a given ATPS. However, partition behavior of proteins in different ATPSs relies on different structural features. In other words, we could not find a unique set of protein structural features derived from their crystal structures that could be used for the description of the protein partition behavior of all proteins in all ATPSs analyzed in this study. We likely need to gain better insight into the relationships between protein-solvent interactions and protein structure peculiarities, in particular given the limitations of the crystal structures used here, to be able to construct a model that accurately predicts protein partition behavior across all ATPSs. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. [On the partition of acupuncture academic schools].

    PubMed

    Yang, Pengyan; Luo, Xi; Xia, Youbing

    2016-05-01

    Nowadays, extensive attention has been paid to the research of acupuncture academic schools; however, a widely accepted method for the partition of acupuncture academic schools is still needed. In this paper, the historical methods of partitioning acupuncture academic schools are reviewed, and three typical methods, the "partition of five schools", the "partition of eighteen schools" and the "two-stage based partition", are summarized. After a deep analysis of the disadvantages and advantages of these three methods, a new method of partitioning acupuncture academic schools, called the "three-stage based partition", is proposed. In this method, the overall acupuncture academic schools are first divided into an ancient stage, a modern stage and a contemporary stage, and each school is then divided into its sub-school category. It is believed that this method of partition can remedy the weaknesses of current methods and also explore a new model of inheritance and development through the differentiation and interaction of acupuncture academic schools at the three stages.

  14. Partitioning Strategy Using Static Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Seo, Yongjin; Soo Kim, Hyeon

    2016-08-01

    Flight software is the software used in satellites' on-board computers and has requirements such as real-time behavior and reliability. The Integrated Modular Avionics (IMA) architecture is used to satisfy these requirements. The IMA architecture introduces the concept of partitions, which affects the configuration of flight software: software that previously ran on a single system is divided into many partitions when loaded. To address this issue, existing studies rely on experience-based partitioning methods, but such methods cannot be reused. In this respect, this paper proposes a partitioning method, based on static analysis techniques, that is reusable and consistent.

  15. A Summary of Precipitation Characteristics from the 2006–11 Northern Australian Wet Seasons as Revealed by ARM Disdrometer Research Facilities (Darwin, Australia)

    DOE PAGES

    Giangrande, Scott E.; Bartholomew, Mary Jane; Pope, Mick; ...

    2014-05-09

    The variability of rainfall and drop size distributions (DSDs) as a function of large-scale atmospheric conditions and storm characteristics is investigated using measurements from the Atmospheric Radiation Measurement (ARM) program facility at Darwin, Australia. Observations are obtained from an impact disdrometer with a near continuous record of operation over five consecutive wet seasons (2006-2011). We partition bulk rainfall characteristics according to diurnal accumulation, convective and stratiform precipitation classifications, objective monsoonal regime and MJO phase. Our findings support previous Darwin studies suggesting a significant diurnal and DSD parameter signal associated with both convective-stratiform and wet season monsoonal regime classification. Negligible MJO phase influence is determined for cumulative disdrometric statistics over the Darwin location.

  16. Discriminant analysis of fused positive and negative ion mobility spectra using multivariate self-modeling mixture analysis and neural networks.

    PubMed

    Chen, Ping; Harrington, Peter B

    2008-02-01

    A new method coupling multivariate self-modeling mixture analysis and pattern recognition has been developed to identify toxic industrial chemicals using fused positive and negative ion mobility spectra (dual scan spectra). A Smiths lightweight chemical detector (LCD), which can measure positive and negative ion mobility spectra simultaneously, was used to acquire the data. Simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) was used to separate the analytical peaks in the ion mobility spectra from the background reactant ion peaks (RIP). The SIMPLISMA analytical components of the positive and negative ion peaks were combined in a butterfly representation (i.e., negative spectra are reported with negative drift times and reflected with respect to the ordinate and juxtaposed with the positive ion mobility spectra). Temperature constrained cascade-correlation neural network (TCCCN) models were built to classify the toxic industrial chemicals. Seven common toxic industrial chemicals were used in this project to evaluate the performance of the algorithm. Ten bootstrapped Latin partitions demonstrated that the classification of neural networks using the SIMPLISMA components was statistically better than neural network models trained with fused ion mobility spectra (IMS).
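
    The validation idea, bootstrapped Latin partitions, can be sketched as repeated class-stratified splits in which every sample is predicted exactly once per bootstrap, as below. A generic scikit-learn classifier stands in for the TCCCN models, and the partition and bootstrap counts are illustrative.

```python
# Hedged sketch of bootstrapped Latin partitions: each bootstrap splits the data into
# stratified (class-balanced) partitions, every sample is predicted once per bootstrap,
# and the spread of accuracies supports statistical comparison of classifiers.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone

def bootstrapped_latin_partitions(clf, X, y, n_bootstraps=10, n_partitions=2, seed=0):
    accuracies = []
    for b in range(n_bootstraps):
        skf = StratifiedKFold(n_splits=n_partitions, shuffle=True,
                              random_state=seed + b)
        correct, total = 0, 0
        for train, test in skf.split(X, y):
            model = clone(clf).fit(X[train], y[train])
            correct += (model.predict(X[test]) == y[test]).sum()
            total += len(test)
        accuracies.append(correct / total)
    return np.array(accuracies)
```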

  17. Compressible fluids with Maxwell-type equations, the minimal coupling with electromagnetic field and the Stefan–Boltzmann law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendes, Albert C.R., E-mail: albert@fisica.ufjf.br; Takakura, Flavio I., E-mail: takakura@fisica.ufjf.br; Abreu, Everton M.C., E-mail: evertonabreu@ufrrj.br

    In this work we have obtained a higher-derivative Lagrangian for a charged fluid coupled with the electromagnetic field, and Dirac’s constraint analysis was discussed. A set of first-class constraints, fixed by a noncovariant gauge condition, was obtained. The path integral formalism was used to obtain the partition function for the corresponding higher-derivative Hamiltonian, and the Faddeev–Popov ansatz was used to construct an effective Lagrangian. Through the partition function, a Stefan–Boltzmann type law was obtained. - Highlights: • Higher-derivative Lagrangian for a charged fluid. • Electromagnetic coupling and Dirac’s constraint analysis. • Partition function through path integral formalism. • Stefan–Boltzmann-type law through the partition function.

  18. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  19. Cluster formation and drag reduction-proposed mechanism of particle recirculation within the partition column of the bottom spray fluid-bed coater.

    PubMed

    Wang, Li Kun; Heng, Paul Wan Sia; Liew, Celine Valeria

    2015-04-01

    Bottom spray fluid-bed coating is a common technique for coating multiparticulates. Under the quality-by-design framework, particle recirculation within the partition column is one of the main variability sources affecting particle coating and coat uniformity. However, the occurrence and mechanism of particle recirculation within the partition column of the coater are not well understood. The purpose of this study was to visualize and define particle recirculation within the partition column. Based on different combinations of partition gap setting, air accelerator insert diameter, and particle size fraction, particle movements within the partition column were captured using a high-speed video camera. The particle recirculation probability and voidage information were mapped using a visiometric process analyzer. High-speed images showed that particles contributing to the recirculation phenomenon were behaving as clustered colonies. Fluid dynamics analysis indicated that particle recirculation within the partition column may be attributed to the combined effect of cluster formation and drag reduction. Both visiometric process analysis and particle coating experiments showed that smaller particles had greater propensity toward cluster formation than larger particles. The influence of cluster formation on coating performance and possible solutions to cluster formation were further discussed. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  20. An iterative network partition algorithm for accurate identification of dense network modules

    PubMed Central

    Sun, Siqi; Dong, Xinran; Fu, Yao; Tian, Weidong

    2012-01-01

    A key step in network analysis is to partition a complex network into dense modules. Currently, modularity is one of the most popular benefit functions used to partition network modules. However, recent studies suggested that it has an inherent limitation in detecting dense network modules. In this study, we observed that despite the limitation, modularity has the advantage of preserving the primary network structure of the undetected modules. Thus, we have developed a simple iterative Network Partition (iNP) algorithm to partition a network. The iNP algorithm provides a general framework in which any modularity-based algorithm can be implemented in the network partition step. Here, we tested iNP with three modularity-based algorithms: multi-step greedy (MSG), spectral clustering and Qcut. Compared with the original three methods, iNP achieved a significant improvement in the quality of network partition in a benchmark study with simulated networks, identified more modules with significantly better enrichment of functionally related genes in both yeast protein complex network and breast cancer gene co-expression network, and discovered more cancer-specific modules in the cancer gene co-expression network. As such, iNP should have a broad application as a general method to assist in the analysis of biological networks. PMID:22121225
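
    A minimal sketch of an iterative, modularity-based partition loop in the spirit of iNP is given below using networkx; the density criterion, round limit and the choice of greedy modularity communities as the inner step are assumptions, not the published algorithm. For example, iterative_partition(nx.karate_club_graph()) returns a list of node sets covering the graph.

```python
# Hedged sketch: repeatedly apply a modularity-based partition, keep modules that are
# dense enough, and re-partition the rest. Thresholds are illustrative.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def iterative_partition(G, min_density=0.3, max_rounds=5):
    final_modules, frontier = [], [G]
    for _ in range(max_rounds):
        next_frontier = []
        for sub in frontier:
            if sub.number_of_edges() == 0:
                # No structure left to partition: treat each node as its own module
                final_modules.extend({n} for n in sub.nodes)
                continue
            for comm in greedy_modularity_communities(sub):
                sg = sub.subgraph(comm)
                if len(sg) <= 3 or nx.density(sg) >= min_density:
                    final_modules.append(set(sg.nodes))
                else:
                    next_frontier.append(sg)   # not dense enough: partition it again
        if not next_frontier:
            break
        frontier = next_frontier
    else:
        # Round limit reached: keep whatever is still unresolved as modules
        final_modules.extend(set(sg.nodes) for sg in frontier)
    return final_modules
```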

  1. Cortical Signatures of Heard and Imagined Speech Envelopes

    DTIC Science & Technology

    2013-08-01

    music has a great rhythm, and 6) The government sought authorization of his...a training set and a testing set. This partitioning was performed to prevent circular inference...and to facilitate the classification procedures described in the next few sections. The training

  2. Study of a monogamous entanglement measure for three-qubit quantum systems

    NASA Astrophysics Data System (ADS)

    Li, Qiting; Cui, Jianlian; Wang, Shuhao; Long, Gui-Lu

    2016-06-01

    The entanglement quantification and classification of multipartite quantum states is an important research area in quantum information. In this paper, in terms of the reduced density matrices corresponding to all possible partitions of the entire system, a bounded entanglement measure is constructed for arbitrary-dimensional multipartite quantum states. In particular, for three-qubit quantum systems, we prove that our entanglement measure satisfies the relation of monogamy. Furthermore, we present a necessary condition for characterizing maximally entangled states using our entanglement measure.
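
    The reduced density matrices the measure is built from can be computed with a simple partial-trace routine, as in the sketch below; the measure itself is not implemented, and the GHZ example is only a sanity check.

```python
# Hedged sketch: reduced density matrices for bipartitions of a three-qubit pure state.
import numpy as np

def reduced_density_matrix(psi, keep, dims=(2, 2, 2)):
    """Trace out all subsystems not listed in `keep` from the pure state psi."""
    psi = np.asarray(psi, dtype=complex).reshape(dims)
    n = len(dims)
    traced = [i for i in range(n) if i not in keep]
    # Move the kept axes first, then flatten kept and traced parts separately
    psi = np.transpose(psi, list(keep) + traced)
    d_keep = int(np.prod([dims[i] for i in keep]))
    d_rest = int(np.prod([dims[i] for i in traced])) if traced else 1
    psi = psi.reshape(d_keep, d_rest)
    return psi @ psi.conj().T          # rho on the kept subsystems

# Example: GHZ state (|000> + |111>)/sqrt(2); its single-qubit marginal is maximally mixed
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(np.round(reduced_density_matrix(ghz, keep=(0,)), 3))
```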

  3. Can texture analysis of tooth microwear detect within guild niche partitioning in extinct species?

    NASA Astrophysics Data System (ADS)

    Purnell, Mark; Nedza, Christopher; Rychlik, Leszek

    2017-04-01

    Recent work shows that tooth microwear analysis can be applied further back in time and deeper into the phylogenetic history of vertebrate clades than previously thought (e.g. niche partitioning in early Jurassic insectivorous mammals; Gill et al., 2014, Nature). Furthermore, quantitative approaches to analysis based on parameterization of surface roughness are increasing the robustness and repeatability of this widely used dietary proxy. Discriminating between taxa within dietary guilds has the potential to significantly increase our ability to determine resource use and partitioning in fossil vertebrates, but how sensitive is the technique? To address this question we analysed tooth microwear texture in sympatric populations of shrew species (Neomys fodiens, Neomys anomalus, Sorex araneus, Sorex minutus) from Białowieża Forest, Poland. These populations are known to exhibit varying degrees of niche partitioning (Churchfield & Rychlik, 2006, J. Zool.) with greatest overlap between the Neomys species. Sorex araneus also exhibits some niche overlap with N. anomalus, while S. minutus is the most specialised. Multivariate analysis based only on tooth microwear textures recovers the same pattern of niche partitioning. Our results also suggest that tooth textures track seasonal differences in diet. Projecting data from fossils into the multivariate dietary space defined using microwear from extant taxa demonstrates that the technique is capable of subtle dietary discrimination in extinct insectivores.

  4. The Partition of Multi-Resolution LOD Based on Qtm

    NASA Astrophysics Data System (ADS)

    Hou, M.-L.; Xing, H.-Q.; Zhao, X.-S.; Chen, J.

    2011-08-01

    The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. To address the problem that the partition hierarchy of QTM is limited by the capability of the computer hardware, a new method of multi-resolution LOD (Level of Detail) based on QTM is discussed in this paper. The method makes the resolution of the cells vary with the viewpoint position by partitioning the QTM cells and selecting the particular area according to the viewpoint; by dealing with the cracks caused by different subdivisions, it satisfies the requirement of unlimited partition within a local area.

  5. Analysis of red blood cell partitioning at bifurcations in simulated microvascular networks

    NASA Astrophysics Data System (ADS)

    Balogh, Peter; Bagchi, Prosenjit

    2018-05-01

    Partitioning of red blood cells (RBCs) at vascular bifurcations has been studied over many decades using in vivo, in vitro, and theoretical models. These studies have shown that RBCs usually do not distribute to the daughter vessels with the same proportion as the blood flow. Such disproportionality occurs, whereby the cell distribution fractions are either higher or lower than the flow fractions and have been referred to as classical partitioning and reverse partitioning, respectively. The current work presents a study of RBC partitioning based on, for the first time, a direct numerical simulation (DNS) of a flowing cell suspension through modeled vascular networks that are comprised of multiple bifurcations and have topological similarity to microvasculature in vivo. The flow of deformable RBCs at physiological hematocrits is considered through the networks, and the 3D dynamics of each individual cell are accurately resolved. The focus is on the detailed analysis of the partitioning, based on the DNS data, as it develops naturally in successive bifurcations, and the underlying mechanisms. We find that while the time-averaged partitioning at a bifurcation manifests in one of two ways, namely, the classical or reverse partitioning, the time-dependent behavior can cycle between these two types. We identify and analyze four different cellular-scale mechanisms underlying the time-dependent partitioning. These mechanisms arise, in general, either due to an asymmetry in the RBC distribution in the feeding vessels caused by the events at an upstream bifurcation or due to a temporary increase in cell concentration near capillary bifurcations. Using the DNS results, we show that a positive skewness in the hematocrit profile in the feeding vessel is associated with the classical partitioning, while a negative skewness is associated with the reverse one. We then present a detailed analysis of the two components of disproportionate partitioning as identified in prior studies, namely, plasma skimming and cell screening. The plasma skimming component is shown to under-predict the disproportionality, leaving the cell screening component to make up for the difference. The crossing of the separation surface by the cells is observed to be a dominant mechanism underlying the cell screening, which is shown to mitigate extreme heterogeneity in RBC distribution across the networks.

  6. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    NASA Astrophysics Data System (ADS)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To execute variance-based global sensitivity analysis efficiently, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then proposed on this basis. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from a single group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which ensures that the convergence condition of the space-partition approach is satisfied. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
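
    The space-partition idea can be sketched as below: a single Monte Carlo sample is sorted on each input, split into successive non-overlapping intervals, and the law of total variance gives the main-effect estimate. The bin count and the Ishigami test function are illustrative choices, not the paper's examples.

```python
# Hedged sketch: first-order sensitivity indices from one sample set by binning each
# input into successive intervals and applying the law of total variance.
import numpy as np

def main_effect_indices(X, y, n_bins=30):
    """S_i ~= Var_bins( E[y | X_i in bin] ) / Var(y) for each input column of X."""
    y = np.asarray(y, dtype=float)
    total_var = y.var()
    indices = []
    for j in range(X.shape[1]):
        order = np.argsort(X[:, j])
        chunks = np.array_split(y[order], n_bins)        # successive intervals of X_j
        bin_means = np.array([c.mean() for c in chunks])
        bin_sizes = np.array([len(c) for c in chunks])
        between_var = np.average((bin_means - y.mean()) ** 2, weights=bin_sizes)
        indices.append(between_var / total_var)
    return np.array(indices)

# Illustrative check on the Ishigami function, whose main effects are known analytically
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(100_000, 3))
y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])
print(main_effect_indices(X, y).round(3))   # roughly (0.31, 0.44, 0.00)
```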

  7. Covariance Partition Priors: A Bayesian Approach to Simultaneous Covariance Estimation for Longitudinal Data.

    PubMed

    Gaskins, J T; Daniels, M J

    2016-01-02

    The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.

  8. Automatic intelligibility classification of sentence-level pathological speech

    PubMed Central

    Kim, Jangwon; Kumar, Naveen; Tsiartas, Andreas; Li, Ming; Narayanan, Shrikanth S.

    2014-01-01

    Pathological speech usually refers to the condition of speech distortion resulting from atypicalities in voice and/or in the articulatory mechanisms owing to disease, illness or other physical or biological insult to the production system. Although automatic evaluation of speech intelligibility and quality could come in handy in these scenarios to assist experts in diagnosis and treatment design, the many sources and types of variability often make it a very challenging computational processing problem. In this work we propose novel sentence-level features to capture abnormal variation in the prosodic, voice quality and pronunciation aspects in pathological speech. In addition, we propose a post-classification posterior smoothing scheme which refines the posterior of a test sample based on the posteriors of other test samples. Finally, we perform feature-level fusions and subsystem decision fusion for arriving at a final intelligibility decision. The performances are tested on two pathological speech datasets, the NKI CCRT Speech Corpus (advanced head and neck cancer) and the TORGO database (cerebral palsy or amyotrophic lateral sclerosis), by evaluating classification accuracy without overlapping subjects’ data among training and test partitions. Results show that the feature sets of each of the voice quality subsystem, prosodic subsystem, and pronunciation subsystem, offer significant discriminating power for binary intelligibility classification. We observe that the proposed posterior smoothing in the acoustic space can further reduce classification errors. The smoothed posterior score fusion of subsystems shows the best classification performance (73.5% for unweighted, and 72.8% for weighted, average recalls of the binary classes). PMID:25414544

  9. Accurate potentiometric determination of lipid membrane-water partition coefficients and apparent dissociation constants of ionizable drugs: electrostatic corrections.

    PubMed

    Elsayed, Mustafa M A; Vierl, Ulrich; Cevc, Gregor

    2009-06-01

    To date, potentiometric lipid membrane-water partition coefficient studies have neglected electrostatic interactions; this leads to incorrect results. We herein show how to account properly for such interactions in potentiometric data analysis. We conducted potentiometric titration experiments to determine lipid membrane-water partition coefficients of four illustrative drugs: bupivacaine, diclofenac, ketoprofen and terbinafine. We then analyzed the results conventionally and with an improved analytical approach that considers Coulombic electrostatic interactions. The new analytical approach delivers robust partition coefficient values. In contrast, the conventional data analysis yields apparent partition coefficients of the ionized drug forms that depend on experimental conditions (mainly the lipid-drug ratio and the bulk ionic strength). This is due to changing electrostatic effects originating from bound drug and/or lipid charges. A membrane comprising 10 mol-% mono-charged molecules in a 150 mM (monovalent) electrolyte solution yields results that differ by a factor of 4 from the results for uncharged membranes. Allowance for the Coulombic electrostatic interactions is a prerequisite for accurate and reliable determination of lipid membrane-water partition coefficients of ionizable drugs from potentiometric titration data. The same conclusion applies to all analytical methods involving drug binding to a surface.

  10. Discriminating Drug-Like Compounds by Partition Trees with Quantum Similarity Indices and Graph Invariants.

    PubMed

    Julián-Ortiz, Jesus V de; Gozalbes, Rafael; Besalú, Emili

    2016-01-01

    The search for new drug candidates in databases is of paramount importance in pharmaceutical chemistry. The selection of molecular subsets is greatly optimized and much more promising when potential drug-like molecules are detected a priori. In this work, about one hundred thousand molecules are ranked following a new methodology: a drug/non-drug classifier constructed by a consensual set of classification trees. The classification trees arise from the stochastic generation of training sets, which in turn are used to estimate probability factors of test molecules to be drug-like compounds. Molecules were represented by Topological Quantum Similarity Indices and their Graph Theoretical counterparts. The contribution of the present paper consists of presenting an effective ranking method able to improve the probability of finding drug-like substances by using these types of molecular descriptors.
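
    The ranking procedure can be sketched as a consensus of decision trees trained on stochastically drawn subsets, with the vote fraction used as a drug-likeness probability, as below. Tree depth, the number of trees and the sampling fraction are placeholders, and descriptor computation (quantum similarity indices, graph invariants) is assumed to happen elsewhere.

```python
# Hedged sketch: consensus of classification trees built from stochastic training sets,
# used to rank candidate molecules by estimated drug-likeness.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def consensus_drug_score(X_train, y_train, X_candidates, n_trees=200, frac=0.6, seed=0):
    """y_train encodes 1 = drug-like, 0 = non-drug; returns a ranking and scores."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X_candidates))
    n = len(X_train)
    for _ in range(n_trees):
        idx = rng.choice(n, size=int(frac * n), replace=True)   # stochastic training set
        tree = DecisionTreeClassifier(max_depth=6,
                                      random_state=int(rng.integers(1_000_000)))
        tree.fit(X_train[idx], y_train[idx])
        votes += tree.predict(X_candidates)
    scores = votes / n_trees                 # vote fraction as probability estimate
    return np.argsort(-scores), scores
```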

  11. A matter of phylogenetic scale: Distinguishing incomplete lineage sorting from lateral gene transfer as the cause of gene tree discord in recent versus deep diversification histories.

    PubMed

    Knowles, L Lacey; Huang, Huateng; Sukumaran, Jeet; Smith, Stephen A

    2018-03-01

    Discordant gene trees are commonly encountered when sequences from thousands of loci are applied to estimate phylogenetic relationships. Several processes contribute to this discord. Yet, we have no methods that jointly model different sources of conflict when estimating phylogenies. An alternative to analyzing entire genomes or all the sequenced loci is to identify a subset of loci for phylogenetic analysis. If we can identify data partitions that are most likely to reflect descent from a common ancestor (i.e., discordant loci that indeed reflect incomplete lineage sorting [ILS], as opposed to some other process, such as lateral gene transfer [LGT]), we can analyze this subset using powerful coalescent-based species-tree approaches. Test data sets were simulated where discord among loci could arise from ILS and LGT. Data sets were analyzed using the newly developed program CLASSIPHY (Huang et al., ) to assess whether our ability to distinguish the cause of discord among loci varied when ILS and LGT occurred in the recent versus deep past and whether the accuracy of these inferences was affected by the mutational process. We show that the accuracy of probabilistic classification of individual loci by the cause of discord differed when ILS and LGT events occurred more recently compared with the distant past, and that the signal-to-noise ratio arising from the mutational process contributes to difficulties in inferring LGT data partitions. We discuss our findings in terms of the promise and limitations of identifying subsets of loci for species-tree inference that will not violate the underlying coalescent model (i.e., data partitions in which ILS, and not LGT, contributes to discord). We also discuss the empirical implications of our work given the many recalcitrant nodes in the tree of life (e.g., origins of angiosperms, amniotes, or Neoaves), and recent arguments for concatenating loci. © 2018 Botanical Society of America.

  12. Community Landscapes: An Integrative Approach to Determine Overlapping Network Module Hierarchy, Identify Key Nodes and Predict Network Dynamics

    PubMed Central

    Kovács, István A.; Palotai, Robin; Szalay, Máté S.; Csermely, Peter

    2010-01-01

    Background Network communities help the functional organization and evolution of complex networks. However, the development of a method, which is both fast and accurate, provides modular overlaps and partitions of a heterogeneous network, has proven to be rather difficult. Methodology/Principal Findings Here we introduce the novel concept of ModuLand, an integrative method family determining overlapping network modules as hills of an influence function-based, centrality-type community landscape, and including several widely used modularization methods as special cases. As various adaptations of the method family, we developed several algorithms, which provide an efficient analysis of weighted and directed networks, and (1) determine pervasively overlapping modules with high resolution; (2) uncover a detailed hierarchical network structure allowing an efficient, zoom-in analysis of large networks; (3) allow the determination of key network nodes and (4) help to predict network dynamics. Conclusions/Significance The concept opens a wide range of possibilities to develop new approaches and applications including network routing, classification, comparison and prediction. PMID:20824084

  13. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.

    2018-06-01

    The contextual-based convolutional neural network (CNN) with deep architecture and pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. In consequence, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters such as the uncertainty in object boundary partition and loss of useful fine spatial resolution detail were compensated. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively address the complicated problem of VFSR image classification.
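
    The fusion step can be illustrated with a simple confidence-gated rule, as in the sketch below: the CNN label is kept where its confidence is high and the MLP label is used otherwise. The threshold is a placeholder, and the published rule set is richer than this single test.

```python
# Hedged sketch of a confidence-driven decision fusion rule between a CNN and an MLP.
import numpy as np

def fuse_predictions(cnn_proba, mlp_proba, confidence_threshold=0.9):
    """cnn_proba, mlp_proba: (n_pixels, n_classes) class-membership probabilities."""
    cnn_conf = cnn_proba.max(axis=1)
    fused = np.where(cnn_conf >= confidence_threshold,
                     cnn_proba.argmax(axis=1),    # trust the CNN where it is confident
                     mlp_proba.argmax(axis=1))    # otherwise fall back to the MLP
    return fused
```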

  14. Refined 4-group classification of left ventricular hypertrophy based on ventricular concentricity and volume dilatation outlines distinct noninvasive hemodynamic profiles in a large contemporary echocardiographic population.

    PubMed

    Barbieri, Andrea; Rossi, Andrea; Gaibazzi, Nicola; Erlicher, Andrea; Mureddu, Gian Francesco; Frattini, Silvia; Faden, Giacomo; Manicardi, Marcella; Beraldi, Monica; Agostini, Francesco; Lazzarini, Valentina; Moreo, Antonella; Temporelli, Pier Luigi; Faggiano, Pompilio

    2018-05-23

    Left ventricular hypertrophy (LVH) may reflect a wide variety of physiologic and pathologic conditions. Thus, it can be misleading to consider all LVH to be homogeneous or similar. A refined 4-group classification of LVH based on ventricular concentricity and dilatation may be identified. To determine whether the 4-group classification of LVH identifies distinct phenotypes, we compared the associations of the groups with various noninvasive markers of cardiac stress. The cohort comprised unselected adult outpatients referred to seven tertiary-care echocardiographic laboratories for any indication within a 2-week period. We evaluated the LV geometric patterns using validated echocardiographic indexation methods and partition values. Standard echocardiography was performed in 1137 consecutive subjects, and LVH was found in 42%. The newly proposed 4-group classification of LVH was applicable in 88% of patients. The most common pattern was concentric LVH (19%). The worst functional and hemodynamic profile was associated with eccentric LVH, and those with mixed LVH had a higher prevalence of reduced EF than those with concentric LVH (P < .001 for all). The new 4-group LVH classification system showed distinct differences in cardiac function and noninvasive hemodynamics, allowing clinicians to distinguish different LV hemodynamic stress adaptations in patients with LVH. © 2018 Wiley Periodicals, Inc.

  15. Methods for selecting fixed-effect models for heterogeneous codon evolution, with comments on their application to gene and genome data.

    PubMed

    Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P

    2007-02-08

    Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selection of models by using backward elimination rather than AIC or AICc, (ii) use a stringent cut-off, e.g., p = 0.0001, and (iii) conduct sensitivity analysis of results. With thoughtful application, fixed-effect codon models should provide a useful tool for large scale multi-gene analyses.

  16. Automated classifications of topography from DEMs by an unsupervised nested-means algorithm and a three-part geometric signature

    NASA Astrophysics Data System (ADS)

    Iwahashi, Junko; Pike, Richard J.

    2007-05-01

    An iterative procedure that implements the classification of continuous topography as a problem in digital image-processing automatically divides an area into categories of surface form; three taxonomic criteria-slope gradient, local convexity, and surface texture-are calculated from a square-grid digital elevation model (DEM). The sequence of programmed operations combines twofold-partitioned maps of the three variables converted to greyscale images, using the mean of each variable as the dividing threshold. To subdivide increasingly subtle topography, grid cells sloping at less than mean gradient of the input DEM are classified by designating mean values of successively lower-sloping subsets of the study area (nested means) as taxonomic thresholds, thereby increasing the number of output categories from the minimum 8 to 12 or 16. Program output is exemplified by 16 topographic types for the world at 1-km spatial resolution (SRTM30 data), the Japanese Islands at 270 m, and part of Hokkaido at 55 m. Because the procedure is unsupervised and reflects frequency distributions of the input variables rather than pre-set criteria, the resulting classes are undefined and must be calibrated empirically by subsequent analysis. Maps of the example classifications reflect physiographic regions, geological structure, and landform as well as slope materials and processes; fine-textured terrain categories tend to correlate with erosional topography or older surfaces, coarse-textured classes with areas of little dissection. In Japan the resulting classes approximate landform types mapped from airphoto analysis, while in the Americas they create map patterns resembling Hammond's terrain types or surface-form classes; SRTM30 output for the United States compares favorably with Fenneman's physical divisions. Experiments are suggested for further developing the method; the Arc/Info AML and the map of terrain classes for the world are available as online downloads.
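
    A compact sketch of the nested-means thresholding is given below; slope, convexity and texture are assumed to be precomputed grids, and recomputing thresholds only for slope (not for convexity and texture) is a simplification of the published procedure.

```python
# Hedged sketch of unsupervised nested-means terrain classification from three grids.
import numpy as np

def nested_means_classes(slope, convexity, texture, extra_levels=2):
    """Return integer terrain classes (8, 12 or 16 classes for extra_levels = 0, 1, 2)."""
    conv_bit = (convexity > convexity.mean()).astype(int)
    text_bit = (texture > texture.mean()).astype(int)
    level = np.zeros_like(slope, dtype=int)
    remaining = np.ones(slope.shape, dtype=bool)
    threshold = slope.mean()
    for k in range(extra_levels + 1):
        above = remaining & (slope > threshold)
        level[above] = k
        remaining &= ~above
        if not remaining.any():
            break
        threshold = slope[remaining].mean()   # nested mean of the gentler subset
    level[remaining] = extra_levels + 1       # gentlest cells get the last slope level
    return level * 4 + conv_bit * 2 + text_bit   # 4 convexity/texture classes per level
```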

  17. Automated classifications of topography from DEMs by an unsupervised nested-means algorithm and a three-part geometric signature

    USGS Publications Warehouse

    Iwahashi, J.; Pike, R.J.

    2007-01-01

    An iterative procedure that implements the classification of continuous topography as a problem in digital image-processing automatically divides an area into categories of surface form; three taxonomic criteria-slope gradient, local convexity, and surface texture-are calculated from a square-grid digital elevation model (DEM). The sequence of programmed operations combines twofold-partitioned maps of the three variables converted to greyscale images, using the mean of each variable as the dividing threshold. To subdivide increasingly subtle topography, grid cells sloping at less than mean gradient of the input DEM are classified by designating mean values of successively lower-sloping subsets of the study area (nested means) as taxonomic thresholds, thereby increasing the number of output categories from the minimum 8 to 12 or 16. Program output is exemplified by 16 topographic types for the world at 1-km spatial resolution (SRTM30 data), the Japanese Islands at 270 m, and part of Hokkaido at 55 m. Because the procedure is unsupervised and reflects frequency distributions of the input variables rather than pre-set criteria, the resulting classes are undefined and must be calibrated empirically by subsequent analysis. Maps of the example classifications reflect physiographic regions, geological structure, and landform as well as slope materials and processes; fine-textured terrain categories tend to correlate with erosional topography or older surfaces, coarse-textured classes with areas of little dissection. In Japan the resulting classes approximate landform types mapped from airphoto analysis, while in the Americas they create map patterns resembling Hammond's terrain types or surface-form classes; SRTM30 output for the United States compares favorably with Fenneman's physical divisions. Experiments are suggested for further developing the method; the Arc/Info AML and the map of terrain classes for the world are available as online downloads. © 2006 Elsevier B.V. All rights reserved.

  18. Worldwide analysis of multiple microsatellites: language diversity has a detectable influence on DNA diversity.

    PubMed

    Belle, Elise M S; Barbujani, Guido

    2007-08-01

    Previous studies of the correlations between the languages spoken by human populations and the genes carried by the members of those populations have been limited by the small number of genetic markers available and by approximations in the treatment of linguistic data. In this study we analyzed a large collection of polymorphic microsatellite loci (377), distributed on all autosomes, and used Ruhlen's linguistic classification, to investigate the relative roles of geography and language in shaping the distribution of human DNA diversity at a worldwide scale. For this purpose, we performed three different kinds of analysis: (i) we partitioned genetic variances at three hierarchical levels of population subdivision according to language group by means of a molecular analysis of variance (AMOVA); (ii) we quantified by a series of Mantel's tests the correlation between measures of genetic and linguistic differentiation; and (iii) we tested whether linguistic differences are increased across known zones of increased genetic change between populations. Genetic differences appear to more closely reflect geographic than linguistic differentiation. However, our analyses show that language differences also have a detectable effect on DNA diversity at the genomic level, above and beyond the effects of geographic distance. © 2007 Wiley-Liss, Inc.
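
    The Mantel-test component of the analysis can be sketched as a permutation correlation between two distance matrices, as below; building the genetic, geographic and linguistic distance matrices from the microsatellite and classification data is assumed to have been done separately.

```python
# Hedged sketch of a Mantel test between two symmetric distance matrices.
import numpy as np

def mantel_test(D1, D2, n_permutations=9999, seed=0):
    """Pearson correlation between the upper triangles of D1 and D2, with a
    permutation p-value obtained by jointly shuffling rows and columns of D2."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    n = D1.shape[0]
    for _ in range(n_permutations):
        perm = rng.permutation(n)
        r_perm = np.corrcoef(D1[iu], D2[perm][:, perm][iu])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_permutations + 1)
```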

  19. On the interpretation of domain averaged Fermi hole analyses of correlated wavefunctions.

    PubMed

    Francisco, E; Martín Pendás, A; Costales, Aurora

    2014-03-14

    Few methods allow for a physically sound analysis of chemical bonds in cases where electron correlation may be a relevant factor. The domain averaged Fermi hole (DAFH) analysis, a tool first proposed by Robert Ponec in the 1990s to provide interpretations of the chemical bonding existing between two fragments Ω and Ω' that divide the real space exhaustively, is one of them. This method allows for a partition of the delocalization index or bond order between Ω and Ω' into one-electron contributions, but the chemical interpretation of its parameters has been firmly established only for single-determinant wavefunctions. In this paper we report a general interpretation based on the concept of excluded density that is also valid for correlated descriptions. Both analytical models and actual computations on a set of simple molecules (H2, N2, LiH, and CO) are discussed, and a classification of the possible DAFH situations is presented. Our results show that this kind of analysis may reveal several correlation-assisted bonding patterns that might be difficult to detect using other methods. In agreement with previous knowledge, we find that the effective bond order in covalent links decreases due to localization of electrons driven by Coulomb correlation.

  20. Metabolic flux ratio analysis and cell staining suggest the existence of C4 photosynthesis in Phaeodactylum tricornutum.

    PubMed

    Huang, A; Liu, L; Zhao, P; Yang, C; Wang, G C

    2016-03-01

    Mechanisms for carbon fixation via photosynthesis in the diatom Phaeodactylum tricornutum Bohlin were studied recently but there remains a long-standing debate concerning the occurrence of C4 photosynthesis in this species. A thorough investigation of carbon metabolism and the evidence for C4 photosynthesis based on organelle partitioning was needed. In this study, we identified the flux ratios between C3 and C4 compounds in P. tricornutum using (13)C-labelling metabolic flux ratio analysis, and stained cells with various cell-permeant fluorescent probes to investigate the likely organelle partitioning required for single-cell C4 photosynthesis. Metabolic flux ratio analysis indicated the C3/C4 exchange ratios were high. Cell staining indicated organelle partitioning required for single-cell C4 photosynthesis might exist in P. tricornutum. The results of (13)C-labelling metabolic flux ratio analysis and cell staining suggest single-cell C4 photosynthesis exists in P. tricornutum. This study provides insights into photosynthesis patterns of P. tricornutum and the evidence for C4 photosynthesis based on (13)C-labelling metabolic flux ratio analysis and organelle partitioning. © 2015 The Society for Applied Microbiology.

  1. GAS-PARTICLE PARTITIONING OF SEMI-VOLATILE ORGANICS ON ORGANIC AEROSOLS USING A PREDICTIVE ACTIVITY COEFFICIENT MODEL: ANALYSIS OF THE EFFECTS OF PARAMETER CHOICES ON MODEL PERFORMANCE. (R826771)

    EPA Science Inventory

    The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the ...

  Comparison of modeling approaches for carbon partitioning: Impact on estimates of global net primary production and equilibrium biomass of woody vegetation from MODIS GPP

    Treesearch

    Takeshi Ise; Creighton M. Litton; Christian P. Giardina; Akihiko Ito

    2010-01-01

    Partitioning of gross primary production (GPP) to aboveground versus belowground, to growth versus respiration, and to short versus long-lived tissues exerts a strong influence on ecosystem structure and function, with potentially large implications for the global carbon budget. A recent meta-analysis of forest ecosystems suggests that carbon partitioning...

  2. The importance of having an appropriate relational data segmentation in ATLAS

    NASA Astrophysics Data System (ADS)

    Dimitrov, G.

    2015-12-01

    In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at LHC where we make use of several partitioning techniques available in Oracle 11g. With the broadly used range partitioning and its option of automatic interval partitioning we add our own logic in PLSQL procedures and scheduler jobs to sustain data sliding windows in order to enforce various data retention policies. We also make use of the new Oracle 11g reference partitioning in the Nightly Build System to achieve uniform data segmentation. However the most challenging issue was to segment the data of the new ATLAS Distributed Data Management system (Rucio), which resulted in tens of thousands of list-type partitions and sub-partitions. Partition and sub-partition management, index strategy, statistics gathering and queries execution plan stability are important factors when choosing an appropriate physical model for the application data management. The so-far accumulated knowledge and analysis on the new Oracle 12c version features that could be beneficial will be shared with the audience.

  3. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip; Garg, Sanjay; Holowecky, Brian

    1992-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.

  4. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip H.; Garg, Sanjay; Holowecky, Brian R.

    1993-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.

  5. Considering the Spatial Layout Information of Bag of Features (BoF) Framework for Image Classification.

    PubMed

    Mu, Guangyu; Liu, Ying; Wang, Limin

    2015-01-01

    Spatial pooling methods such as spatial pyramid matching (SPM) are crucial in the bag of features model used in image classification. SPM partitions the image into a set of regular grids and assumes that the spatial layout of all visual words obeys a uniform distribution over these regular grids. However, in practice, we consider that different visual words should obey different spatial layout distributions. To improve SPM, we develop a novel spatial pooling method, namely spatial distribution pooling (SDP). The proposed SDP method uses an extension of the Gaussian mixture model to estimate the spatial layout distributions of the visual vocabulary. For each visual word type, SDP can generate a set of flexible grids rather than the regular grids from the traditional SPM. Furthermore, we can compute the grid weights for visual word tokens according to their spatial coordinates. The experimental results demonstrate that SDP outperforms the traditional spatial pooling methods, and is competitive with the state-of-the-art classification accuracy on several challenging image datasets.

  6. Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution

    PubMed Central

    Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen

    2014-01-01

    Polydimethylsiloxane (PDMS) is commonly used as the coated polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and the aqueous solution were compiled from literature sources. The correlation analysis for partition coefficients was conducted to interpret the effect of their physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated to the polarizability of organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to appropriately predict the partition coefficients of 61 organic compounds for the training set. The predictive ability of the empirical model was demonstrated by using it on a test set of 26 chemicals not included in the training set. The empirical model, which applies straightforwardly calculated molecular descriptors to estimate the PDMS-water partition coefficient, will contribute to the practical applications of the SPME technique. PMID:24534804
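    As an illustration of how such an empirical model can be fitted, the sketch below regresses a log partition coefficient on a polarizability value, a molecular connectivity index, and an indicator variable; all descriptor values and coefficients are made up, not those reported in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# hypothetical descriptors for a set of compounds: polarizability,
# a molecular connectivity index, and a 0/1 indicator variable
rng = np.random.default_rng(42)
n = 87  # e.g., 61 training + 26 test compounds, as in the abstract
polarizability = rng.uniform(5, 25, n)
chi = rng.uniform(1, 8, n)
indicator = rng.integers(0, 2, n)
# synthetic log K values loosely tied to the descriptors (placeholder only)
log_k = 0.12 * polarizability + 0.3 * chi - 0.5 * indicator + rng.normal(0, 0.2, n)

X = np.column_stack([polarizability, chi, indicator])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=26, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2 (training set):", model.score(X_tr, y_tr))
print("R^2 (test set):    ", model.score(X_te, y_te))
```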

  7. Partition of some key regulating services in terrestrial ecosystems: Meta-analysis and review.

    PubMed

    Viglizzo, E F; Jobbágy, E G; Ricard, M F; Paruelo, J M

    2016-08-15

    Our knowledge about the functional foundations of ecosystem service (ES) provision is still limited and more research is needed to elucidate key functional mechanisms. Using a simplified eco-hydrological scheme, in this work we analyzed how land-use decisions modify the partition of some essential regulatory ES by altering basic relationships between biomass stocks and water flows. A comprehensive meta-analysis and review was conducted based on global, regional and local data from peer-reviewed publications. We analyzed five datasets comprising 1348 studies and 3948 records on precipitation (PPT), aboveground biomass (AGB), AGB change, evapotranspiration (ET), water yield (WY), WY change, runoff (R) and infiltration (I). The conceptual framework was focused on ES that are associated with the ecological functions (e.g., intermediate ES) of ET, WY, R and I. ES included soil protection, carbon sequestration, local climate regulation, water-flow regulation and water recharge. To address the problem of data normality, the analysis included both parametric and non-parametric regression analysis. Results demonstrate that PPT is a first-order biophysical factor that controls ES release at the broader scales. At decreasing scales, ES are partitioned as a result of PPT interactions with other biophysical and anthropogenic factors. At intermediate scales, land-use change interacts with PPT, modifying ES partition, as in the case of afforestation in dry regions, where ET and climate regulation may be enhanced at the expense of R and water-flow regulation. At smaller scales, site-specific conditions such as topography interact with PPT and AGB displaying different ES partition formats. The probable implications of future land-use and climate change on some key ES production and partition are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. On the Partitioning of Squared Euclidean Distance and Its Applications in Cluster Analysis.

    ERIC Educational Resources Information Center

    Carter, Randy L.; And Others

    1989-01-01

    The partitioning of squared Euclidean--E(sup 2)--distance between two vectors in M-dimensional space into the sum of squared lengths of vectors in mutually orthogonal subspaces is discussed. Applications to specific cluster analysis problems are provided (i.e., to design Monte Carlo studies for performance comparisons of several clustering methods…
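    The identity behind the partitioning is easy to verify numerically: projecting the difference vector onto mutually orthogonal subspaces and summing the squared lengths of the projections recovers the total squared Euclidean distance. A minimal sketch with a random orthogonal split:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 6
x, y = rng.normal(size=M), rng.normal(size=M)
d = x - y

# an orthonormal basis of R^M, split into two mutually orthogonal subspaces
Q, _ = np.linalg.qr(rng.normal(size=(M, M)))
S1, S2 = Q[:, :2], Q[:, 2:]          # a 2-dim and a 4-dim orthogonal subspace

d1 = S1 @ (S1.T @ d)                 # projection of d onto subspace 1
d2 = S2 @ (S2.T @ d)                 # projection of d onto subspace 2

total = np.sum(d ** 2)
parts = np.sum(d1 ** 2) + np.sum(d2 ** 2)
print(total, parts)                  # equal up to rounding error
```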

  9. Study of oil-water partitioning of a chemical dispersant using an acute bioassay with marine crustaceans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, P.G.; Abernethy, S.; Mackay, D.

    1982-01-01

    The toxicity of seawater dispersions of a chemical dispersant to two marine crustaceans was investigated in the presence and absence of various quantities of a non-toxic mineral oil. From the results and a physical-chemical partitioning analysis, a limiting value of the oil-water partition coefficient of the toxic compounds is deduced suggesting that essentially all of the toxic compounds in the dispersant will partition into solution in water following dispersant application to an oil spill. This conclusion simplifies interpretation and prediction of the toxic effects of a dispersed oil spill. The combined bioassay-partitioning procedure may have applications to the study of the toxicity of other complex mixtures such as industrial effluents.

  10. Retinal ganglion cells in the eastern newt Notophthalmus viridescens: topography, morphology, and diversity.

    PubMed

    Pushchin, Igor I; Karetin, Yuriy A

    2009-10-20

    The topography and morphology of retinal ganglion cells (RGCs) in the eastern newt were studied. Cells were retrogradely labeled with tetramethylrhodamine-conjugated dextran amines or horseradish peroxidase and examined in retinal wholemounts. Their total number was 18,025 +/- 3,602 (mean +/- SEM). The spatial density of RGCs varied from 2,100 cells/mm(2) in the retinal periphery to 4,500 cells/mm(2) in the dorsotemporal retina. No prominent retinal specializations were found. The spatial resolution estimated from the spatial density of RGCs varied from 1.4 cycles per degree in the periphery to 1.95 cycles per degree in the region of the peak RGC density. A sample of 68 cells was camera lucida drawn and subjected to quantitative analysis. A total of 21 parameters related to RGC morphology and stratification in the retina were estimated. Partitionings obtained by using different clustering algorithms combined with automatic variable weighting and dimensionality reduction techniques were compared, and an effective solution was found by using silhouette analysis. A total of seven clusters were identified and associated with potential cell types. Kruskal-Wallis ANOVA-on-Ranks with post hoc Mann-Whitney U tests showed significant pairwise between-cluster differences in one or more of the clustering variables. The average silhouette values of the clusters were reasonably high, ranging from 0.52 to 0.79. Cells assigned to the same cluster displayed similar morphology and stratification in the retina. The advantages and limitations of the methodology adopted are discussed. The present classification is compared with known morphological and physiological RGC classifications in other salamanders.
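    A minimal sketch of silhouette-guided selection among candidate partitionings, in the spirit of the approach described above; the data are random stand-ins for the 68 cells x 21 parameters, and the two clustering algorithms are illustrative choices, not necessarily those used by the authors.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score, silhouette_samples
from sklearn.preprocessing import StandardScaler

# placeholder: 68 cells x 21 morphological / stratification parameters
rng = np.random.default_rng(3)
X = StandardScaler().fit_transform(rng.normal(size=(68, 21)))

best = None
for k in range(2, 10):
    for name, algo in [("kmeans", KMeans(n_clusters=k, n_init=10, random_state=0)),
                       ("ward", AgglomerativeClustering(n_clusters=k))]:
        labels = algo.fit_predict(X)
        score = silhouette_score(X, labels)
        if best is None or score > best[0]:
            best = (score, name, k, labels)

score, name, k, labels = best
print(f"best partition: {name}, k={k}, mean silhouette={score:.2f}")
print("per-cluster mean silhouette:",
      [round(silhouette_samples(X, labels)[labels == c].mean(), 2) for c in range(k)])
```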

  11. Differential Impact of Whole-Brain Radiotherapy Added to Radiosurgery for Brain Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kong, Doo-Sik; Lee, Jung-Il, E-mail: jilee@skku.ed; Im, Yong-Seok

    2010-10-01

    Purpose: The authors investigated whether the addition of whole-brain radiotherapy (WBRT) to stereotactic radiosurgery (SRS) provided any therapeutic benefit according to recursive partitioning analysis (RPA) class. Methods and Materials: Two hundred forty-five patients with 1 to 10 metastases who underwent SRS between January 2002 and December 2007 were included in the study. Of those, 168 patients were treated with SRS alone and 77 patients received SRS followed by WBRT. Actuarial curves were estimated using the Kaplan-Meier method regarding overall survival (OS), distant brain control (DC), and local brain control (LC) stratified by RPA class. Analyses for known prognostic variables were performed using the Cox proportional hazards model. Results: Univariate and multivariate analysis revealed that control of the primary tumor, small number of brain metastases, Karnofsky performance scale (KPS) > 70, and initial treatment modalities were significant predictors for survival. For RPA class 1, SRS plus WBRT was associated with a longer survival time compared with SRS alone (854 days vs. 426 days, p = 0.042). The SRS plus WBRT group also showed better LC rate than did the SRS-alone group (p = 0.021), although they did not show a better DC rate (p = 0.079). By contrast, for RPA class 2 or 3, no significant difference in OS, LC, or DC was found between the two groups. Conclusions: These results suggest that RPA classification should determine whether or not WBRT is added to SRS. WBRT may be recommended to be added to SRS for patients in whom long-term survival is expected on the basis of RPA classification.

  12. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
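    A rough sketch of the entropy-guided chunk elimination idea (not the authors' exact E-RFE schedule or thresholds): while the distribution of linear-SVM weight magnitudes is close to flat (high normalized entropy), whole chunks of low-weight genes are dropped; once it becomes peaked, elimination proceeds one gene at a time.

```python
import numpy as np
from sklearn.svm import SVC

def entropy_rfe(X, y, chunk_frac=0.5, min_genes=32):
    """Rank features: drop chunks while the |w| distribution is flat,
    then one at a time. A sketch, not the published E-RFE schedule."""
    genes = np.arange(X.shape[1])
    eliminated = []
    while genes.size:
        w = np.abs(SVC(kernel="linear", C=1.0).fit(X[:, genes], y).coef_).ravel()
        n_drop = 1
        if genes.size > min_genes:
            p = w / w.sum()
            h = -(p * np.log(p + 1e-12)).sum() / np.log(p.size)  # normalized entropy
            if h > 0.9:                                          # near-uniform weights
                n_drop = max(1, int(chunk_frac * genes.size))
        order = np.argsort(w)                                    # least important first
        eliminated.extend(genes[order[:n_drop]].tolist())
        genes = np.delete(genes, order[:n_drop])
    return eliminated[::-1]                                      # most important first

# toy data: 40 samples x 200 "genes", only the first 5 informative
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
print("top 10 ranked genes:", entropy_rfe(X, y)[:10])
```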

  13. Fast Solution in Sparse LDA for Binary Classification

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback

    2010-01-01

    An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (of 2 classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of their combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add to / delete from its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g. 1 vs. 0, functioning vs. malfunctioning, or change versus no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g. selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes as compared to days or weeks as taken by the prior art. Sparse LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables and thus the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques. The present algorithm exploits this analytic form along with the inherent sequential nature of greedy search itself. Together this enables the use of highly-efficient partitioned-matrix-inverse techniques that result in large speedups of computation in both the forward-selection and backward-elimination stages of greedy algorithms in general.
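    The binary-class shortcut can be checked numerically: with two classes the between-class scatter matrix has rank one, so its single nonzero generalized eigenvalue has a closed form. The data below are synthetic, and the scatter-matrix conventions are one common choice rather than a reproduction of the reported implementation.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X1 = rng.normal(loc=0.0, size=(60, 8))   # class 1 samples
X2 = rng.normal(loc=1.0, size=(40, 8))   # class 2 samples

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
d = m1 - m2
Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) + np.cov(X2, rowvar=False) * (len(X2) - 1)
Sb = np.outer(d, d)                       # rank-one between-class scatter

# generic route: largest generalized eigenvalue of (Sb, Sw)
lam_eig = eigh(Sb, Sw, eigvals_only=True)[-1]

# binary-class shortcut: the only nonzero generalized eigenvalue in closed form
lam_analytic = d @ np.linalg.solve(Sw, d)

print(lam_eig, lam_analytic)              # agree up to numerical precision
```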

  14. Partitioning coefficients of polycyclic aromatic hydrocarbons in stack gas from a municipal incinerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, W.M.G.; Chen, J.C.

    1995-12-31

    In this study, solid-gas partitioning coefficients of PAHs on fly ash in stack gas from a municipal incinerator were determined according to elution analysis with gas-solid chromatography. The fly ash from the electrostatic precipitator was sieved and packed into a 1/4 inch (6.3 mm) pyrex column. Elution analysis with gas-solid chromatography was conducted for three PAHs: naphthalene, anthracene, and pyrene. The temperature for elution analysis was in the range of 100{degrees}C to 300{degrees}C. Vg, specific retention volume obtained from elution analysis, and S, specific surface area of fly ash measured by a surface area measurement instrument were used to estimate the solid-gas partitioning coefficient KR. In addition, the relationships between KR and temperature and KR and PAH concentrations were investigated.
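    A small sketch of the kind of reduction implied, assuming the partitioning coefficient is estimated as the ratio of specific retention volume to specific surface area and that ln KR varies roughly linearly with 1/T; all numbers are invented, not measurements from the study.

```python
import numpy as np

# hypothetical elution measurements for one PAH on sieved fly ash
T_celsius = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
Vg = np.array([850.0, 240.0, 95.0, 44.0, 23.0])   # specific retention volume, cm^3/g (made up)
S = 2.4                                           # specific surface area, m^2/g (made up)

KR = Vg / (S * 1e4)        # solid-gas partitioning coefficient, cm^3/cm^2 (1 m^2 = 1e4 cm^2)

# temperature dependence: ln KR is roughly linear in 1/T (van 't Hoff-type plot)
T_kelvin = T_celsius + 273.15
b, a = np.polyfit(1.0 / T_kelvin, np.log(KR), 1)
print("KR values:", KR)
print("ln KR = %.1f / T + %.2f" % (b, a))
```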

  15. Effect of partition board color on mood and autonomic nervous function.

    PubMed

    Sakuragi, Sokichi; Sugiyama, Yoshiki

    2011-12-01

    The purpose of this study was to evaluate the effects of the presence or absence (control) of a partition board and its color (red, yellow, blue) on subjective mood ratings and changes in autonomic nervous system indicators induced by a video game task. The increase in the mean Profile of Mood States (POMS) Fatigue score and mean Oppressive feeling rating after the task was lowest with the blue partition board. Multiple-regression analysis identified oppressive feeling and error scores on the second half of the task as statistically significant contributors to Fatigue. When the explanatory variables were limited to the physiological indices, multiple-regression analysis identified a significant contribution of autonomic reactivity (assessed by heart rate variability) to Fatigue. These results suggest that a blue partition board would reduce task-induced subjective fatigue, in part by lowering the oppressive feeling of being enclosed during the task, possibly by increasing autonomic reactivity.

  16. PAQ: Partition Analysis of Quasispecies.

    PubMed

    Baccam, P; Thompson, R J; Fedrigo, O; Carpenter, S; Cornette, J L

    2001-01-01

    The complexities of genetic data may not be accurately described by any single analytical tool. Phylogenetic analysis is often used to study the genetic relationship among different sequences. Evolutionary models and assumptions are invoked to reconstruct trees that describe the phylogenetic relationship among sequences. Genetic databases are rapidly accumulating large amounts of sequences. Newly acquired sequences, which have not yet been characterized, may require preliminary genetic exploration in order to build models describing the evolutionary relationship among sequences. There are clustering techniques that rely less on models of evolution, and thus may provide nice exploratory tools for identifying genetic similarities. Some of the more commonly used clustering methods perform better when data can be grouped into mutually exclusive groups. Genetic data from viral quasispecies, which consist of closely related variants that differ by small changes, however, may best be partitioned by overlapping groups. We have developed an intuitive exploratory program, Partition Analysis of Quasispecies (PAQ), which utilizes a non-hierarchical technique to partition sequences that are genetically similar. PAQ was used to analyze a data set of human immunodeficiency virus type 1 (HIV-1) envelope sequences isolated from different regions of the brain and another data set consisting of the equine infectious anemia virus (EIAV) regulatory gene rev. Analysis of the HIV-1 data set by PAQ was consistent with phylogenetic analysis of the same data, and the EIAV rev variants were partitioned into two overlapping groups. PAQ provides an additional tool which can be used to glean information from genetic data and can be used in conjunction with other tools to study genetic similarities and genetic evolution of viral quasispecies.
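    The distinguishing feature of PAQ, groups that may overlap, can be illustrated with a toy routine (a simplified sketch, not the published algorithm): sequences are grouped around pivot variants, and a sequence within the chosen radius of more than one pivot ends up in more than one group.

```python
import numpy as np

def overlapping_partition(seqs, radius):
    """Group sequences around pivot variants; groups may overlap."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    dist = np.array([[hamming(a, b) for b in seqs] for a in seqs])
    unassigned, groups = set(range(len(seqs))), []
    while unassigned:
        # pick the pivot that covers the most sequences within the radius
        pivot = max(unassigned, key=lambda i: int(np.sum(dist[i] <= radius)))
        members = set(np.flatnonzero(dist[pivot] <= radius))   # may include already-grouped variants
        groups.append((pivot, sorted(members)))
        unassigned -= members
    return groups

variants = ["ACGTACGT", "ACGTACGA", "ACGAACGA", "TCGTACGT", "TCGTTCGT", "ACGAACGT"]
for pivot, members in overlapping_partition(variants, radius=1):
    print("group around", variants[pivot], "->", [variants[m] for m in members])
```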

  17. A genetic algorithm-based framework for wavelength selection on sample categorization.

    PubMed

    Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F

    2017-08-01

    In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to insert samples into proper classes. In the next step, the selected intervals were refined through a genetic algorithm (GA), which identifies a limited number of wavelengths from the previously selected intervals with the aim of maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information towards monitoring illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checking events, avoiding the need for later laboratorial analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than the current methods relying on interval approaches, which tend to insert irrelevant wavelengths in the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Understanding patient outcomes after acute respiratory distress syndrome: identifying subtypes of physical, cognitive and mental health outcomes.

    PubMed

    Brown, Samuel M; Wilson, Emily L; Presson, Angela P; Dinglas, Victor D; Greene, Tom; Hopkins, Ramona O; Needham, Dale M

    2017-12-01

    With improving short-term mortality in acute respiratory distress syndrome (ARDS), understanding survivors' posthospitalisation outcomes is increasingly important. However, little is known regarding associations among physical, cognitive and mental health outcomes. Identification of outcome subtypes may advance understanding of post-ARDS morbidities. We analysed baseline variables and 6-month health status for participants in the ARDS Network Long-Term Outcomes Study. After division into derivation and validation datasets, we used weighted network analysis to identify subtypes from predictors and outcomes in the derivation dataset. We then used recursive partitioning to develop a subtype classification rule and assessed adequacy of the classification rule using a kappa statistic with the validation dataset. Among 645 ARDS survivors, 430 were in the derivation and 215 in the validation datasets. Physical and mental health status, but not cognitive status, were closely associated. Four distinct subtypes were apparent (percentages in the derivation cohort): (1) mildly impaired physical and mental health (22% of patients), (2) moderately impaired physical and mental health (39%), (3) severely impaired physical health with moderately impaired mental health (15%) and (4) severely impaired physical and mental health (24%). The classification rule had high agreement (kappa=0.89 in validation dataset). Female Latino smokers had the poorest status, while male, non-Latino non-smokers had the best status. We identified four post-ARDS outcome subtypes that were predicted by sex, ethnicity, pre-ARDS smoking status and other baseline factors. These subtypes may help develop tailored rehabilitation strategies, including investigation of combined physical and mental health interventions, and distinct interventions to improve cognitive outcomes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
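    A minimal sketch of the validation logic described above: fit a shallow classification tree (recursive partitioning) on a derivation set and score its agreement with the reference subtypes on a validation set using Cohen's kappa. Predictors and subtype labels are random placeholders, so the resulting kappa is near zero rather than the 0.89 reported.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import cohen_kappa_score

# placeholder baseline predictors and subtype labels (1-4) for 645 "survivors"
rng = np.random.default_rng(0)
n = 645
X = np.column_stack([
    rng.integers(0, 2, n),        # sex
    rng.integers(0, 2, n),        # ethnicity
    rng.integers(0, 2, n),        # pre-ARDS smoking status
    rng.normal(50, 15, n),        # an illustrative extra baseline factor (age)
])
subtype = rng.integers(1, 5, n)   # stand-in for subtypes found by network analysis

# derivation / validation split as in the abstract (430 / 215)
derive, validate = np.arange(430), np.arange(430, n)

# recursive partitioning: a shallow tree serves as the classification rule
rule = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[derive], subtype[derive])
print(export_text(rule, feature_names=["sex", "ethnicity", "smoking", "age"]))

kappa = cohen_kappa_score(subtype[validate], rule.predict(X[validate]))
print("kappa on the validation set:", round(kappa, 2))
```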

  19. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the probabilities. The mixture model makes it possible to handle heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) of the single-polarization intensity data to make the initial partition. Then we use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.

  1. Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment

    PubMed Central

    Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2016-01-01

    We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier’s confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback. PMID:25561457

  2. Prediction of bioavailability of selected bisphosphonates using in silico methods towards categorization into a biopharmaceutical classification system.

    PubMed

    Biernacka, Joanna; Betlejewska-Kielak, Katarzyna; Kłosińska-Szmurło, Ewa; Pluciński, Franciszek A; Mazurek, Aleksander P

    2013-01-01

    The physicochemical properties relevant to biological activity of selected bisphosphonates such as clodronate disodium salt, etidronate disodium salt, pamidronate disodium salt, alendronate sodium salt, ibandronate sodium salt, risedronate sodium salt and zoledronate disodium salt were determined using in silico methods. The main aim of our research was to investigate and propose molecular determinants that affect the bioavailability of the above-mentioned compounds. These determinants are: stabilization energy (deltaE), free energy of solvation (deltaG(solv)), electrostatic potential, dipole moment, as well as partition and distribution coefficients estimated by the log P and log D values. The presented values indicate that the selected bisphosphonates are characterized by high solubility and low permeability. The calculated parameters describing both solubility and permeability through biological membranes seem to be good bioavailability indicators of the bisphosphonates examined and can be a useful tool to include in Biopharmaceutical Classification System (BCS) development.

  3. Multiple hypotheses image segmentation and classification with application to dietary assessment.

    PubMed

    Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J; Delp, Edward J

    2015-01-01

    We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier's confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback.

  4. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.

  5. SAMPLING AND ANALYSIS OF SEMIVOLATILE AEROSOLS

    EPA Science Inventory

    Denuder based samplers can effectively separate semivolatile gases from particles and 'freeze' the partitioning in time. Conversely, samples collected on filters partition mass according to the conditions of the influent airstream, which may change over time. As a result thes...

  6. The role of leaf height in plant competition for sunlight: analysis of a canopy partitioning model.

    PubMed

    Nevai, Andrew L; Vance, Richard R

    2008-01-01

    A global method of nullcline endpoint analysis is employed to determine the outcome of competition for sunlight between two hypothetical plant species with clonal growth form that differ solely in the height at which they place their leaves above the ground. This difference in vertical leaf placement, or canopy partitioning, produces species differences in sunlight energy capture and stem metabolic maintenance costs. The competitive interaction between these two species is analyzed by considering a special case of a canopy partitioning model (RR Vance and AL Nevai, J. Theor. Biol. 2007, 245:210-219; AL Nevai and RR Vance, J. Math. Biol. 2007, 55:105-145). Nullcline endpoint analysis is used to partition parameter space into regions within which either competitive exclusion or competitive coexistence occurs. The principal conclusion is that two clonal plant species which compete for sunlight and place their leaves at different heights above the ground but differ in no other way can, under suitable parameter values, experience stable coexistence even though they occupy an environment which varies neither over horizontal space nor through time.

  7. Effect of Clustering Algorithm on Establishing Markov State Model for Molecular Dynamics Simulations.

    PubMed

    Li, Yan; Dong, Zigang

    2016-06-27

    Recently, the Markov state model has been applied for kinetic analysis of molecular dynamics simulations. However, discretization of the conformational space remains a primary challenge in model building, and it is not clear how the space decomposition by distinct clustering strategies exerts influence on the model output. In this work, different clustering algorithms are employed to partition the conformational space sampled in opening and closing of fatty acid binding protein 4 as well as inactivation and activation of the epidermal growth factor receptor. Various classifications are achieved, and Markov models are set up accordingly. On the basis of the models, the total net flux and transition rate are calculated between two distinct states. Our results indicate that geometric and kinetic clustering perform equally well. The construction and outcome of Markov models are heavily dependent on the data traits. Compared to other methods, a combination of Bayesian and hierarchical clustering is feasible in identification of metastable states.
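    A compact sketch of the workflow the abstract examines, on a synthetic two-dimensional trajectory: discretize the sampled space with a clustering algorithm, count transitions at a lag time, and build the Markov state model's transition matrix. The clustering choice is exactly the step whose influence the paper investigates.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# synthetic metastable dynamics: a 3-state jump process observed in 2-D with noise
P_true = np.array([[0.98, 0.01, 0.01],
                   [0.01, 0.98, 0.01],
                   [0.01, 0.01, 0.98]])
centers = np.array([[0.0, 0.0], [3.0, 0.0], [1.5, 2.5]])
path = np.zeros(5000, dtype=int)
for t in range(1, len(path)):
    path[t] = rng.choice(3, p=P_true[path[t - 1]])
traj = centers[path] + rng.normal(scale=0.3, size=(len(path), 2))

# 1) discretize the sampled space (the clustering step under study)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(traj)

# 2) transition counts at a lag time, row-normalized into a transition matrix
lag = 5
counts = np.zeros((3, 3))
for a, b in zip(labels[:-lag], labels[lag:]):
    counts[a, b] += 1
T = counts / counts.sum(axis=1, keepdims=True)

# stationary distribution from the leading eigenvector of the transposed matrix
w, v = np.linalg.eig(T.T)
pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
pi /= pi.sum()
print("estimated transition matrix:\n", np.round(T, 3))
print("stationary distribution:", np.round(pi, 3))
```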

  8. Breast Cancer Detection with Reduced Feature Set.

    PubMed

    Mert, Ahmet; Kılıç, Niyazi; Bilgili, Erdem; Akan, Aydin

    2015-01-01

    This paper explores feature reduction properties of independent component analysis (ICA) on a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing an independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The proposed classification using the IC is also compared with the original feature set under different validation (5/10-fold cross-validation) and partitioning (20%-40%) methods. These classifiers are evaluated on how effectively they categorize tumors as benign or malignant in terms of specificity, sensitivity, accuracy, F-score, Youden's index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values including area under curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system, while reducing computational complexity.
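    A minimal sketch of the reduced-feature comparison, using the WDBC data shipped with scikit-learn; the pipeline and validation details are illustrative rather than a reproduction of the paper's setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import FastICA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# WDBC data: 569 samples x 30 features, benign vs malignant
X, y = load_breast_cancer(return_X_y=True)

for name, clf in [("k-NN", KNeighborsClassifier(5)), ("SVM", SVC())]:
    full = make_pipeline(StandardScaler(), clf)
    reduced = make_pipeline(StandardScaler(),
                            FastICA(n_components=1, random_state=0, max_iter=1000),
                            clf)
    acc_full = cross_val_score(full, X, y, cv=10).mean()
    acc_ic = cross_val_score(reduced, X, y, cv=10).mean()
    print(f"{name}: 30 features {acc_full:.3f}  vs  1 IC {acc_ic:.3f}")
```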

  9. Partition functions of thermally dissociating diatomic molecules and related momentum problem

    NASA Astrophysics Data System (ADS)

    Buchowiecki, Marcin

    2017-11-01

    The anharmonicity and ro-vibrational coupling in ro-vibrational partition functions of diatomic molecules are analyzed for the high temperatures of the thermal dissociation regime. The numerically exact partition functions and thermal energies are calculated. At high temperatures, proper integration of the momenta is important if the partition function of the molecule, understood as a bounded system, is to be obtained. Proper treatment of the momenta is crucial for the correctness of high-temperature molecular simulations, as decomposition of the simulated molecule has to be avoided; the analysis of the O2, H2+, and NH3 molecules shows the importance of the βDe value.

  10. ANALYTICAL METHOD DEVELOPMENTS TO SUPPORT PARTITIONING INTERWELL TRACER TESTING

    EPA Science Inventory

    Partitioning Interwell Tracer Testing (PITT) uses alcohol tracer compounds in estimating subsurface contamination from non-polar pollutants. PITT uses the analysis of water samples for various alcohols as part of the overall measurement process. The water samples may contain many...

  11. Lake Michigan Diversion Accounting land cover change estimation by use of the National Land Cover Dataset and raingage network partitioning analysis

    USGS Publications Warehouse

    Sharpe, Jennifer B.; Soong, David T.

    2015-01-01

    This study used the National Land Cover Dataset (NLCD) and developed an automated process for determining the area of the three land cover types, thereby allowing faster updating of future models, and for evaluating land cover changes by use of historical NLCD datasets. The study also carried out a raingage partitioning analysis so that the segmentation of land cover and rainfall in each modeled unit is directly applicable to the HSPF modeling. Historical and existing impervious, grass, and forest land acreages partitioned by percentages covered by two sets of raingages for the Lake Michigan diversion SCAs, gaged basins, and ungaged basins are presented.

  12. The influence of hydrogen bonding on partition coefficients

    NASA Astrophysics Data System (ADS)

    Borges, Nádia Melo; Kenny, Peter W.; Montanari, Carlos A.; Prokopczyk, Igor M.; Ribeiro, Jean F. R.; Rocha, Josmar R.; Sartori, Geraldo Rodrigues

    2017-02-01

    This Perspective explores how consideration of hydrogen bonding can be used to both predict and better understand partition coefficients. It is shown how polarity of both compounds and substructures can be estimated from measured alkane/water partition coefficients. When polarity is defined in this manner, hydrogen bond donors are typically less polar than hydrogen bond acceptors. Analysis of alkane/water partition coefficients in conjunction with molecular electrostatic potential calculations suggests that aromatic chloro substituents may be less lipophilic than is generally believed and that some of the effect of chloro-substitution stems from making the aromatic π-cloud less available to hydrogen bond donors. Relationships between polarity and calculated hydrogen bond basicity are derived for aromatic nitrogen and carbonyl oxygen. Aligned hydrogen bond acceptors appear to present special challenges for prediction of alkane/water partition coefficients and this may reflect `frustration' of solvation resulting from overlapping hydration spheres. It is also shown how calculated hydrogen bond basicity can be used to model the effect of aromatic aza-substitution on octanol/water partition coefficients.

  13. Improved parallel data partitioning by nested dissection with applications to information retrieval.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar

    The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.

  14. Partitioning Carbon Dioxide and Water Vapor Fluxes Using Correlation Analysis

    USDA-ARS?s Scientific Manuscript database

    Partitioning of eddy covariance flux measurements is routinely done to quantify the contributions of separate processes to the overall fluxes. Measurements of carbon dioxide fluxes represent the difference between gross ecosystem photosynthesis and total respiration, while measurements of water vapo...

  15. Source counting in MEG neuroimaging

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.

    2009-02-01

    Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques which include inverse-problem, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and independent component analysis (ICA) methods. A key problem with the inverse-problem, MUSIC and ICA methods is that the number of sources must be specified a priori. Although the BF method scans the source space on a point-to-point basis, the selection of peaks as sources, however, is finally made by subjective thresholding. In practice expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach for source number detection in MEG neuroimaging. By sorting eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into the signal and noise subspaces. The partition is implemented by utilizing information theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not refer to any model or hypothesis, and hence is an entirely data-led operation. It possesses a clear physical interpretation and an efficient computation procedure. The theoretical derivation of this method and the results obtained by using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
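    A minimal sketch of eigenvalue-based source counting with an information-theoretic criterion (here the Wax-Kailath MDL rule, one common choice); the sensor data are synthetic, not MEG recordings.

```python
import numpy as np

def mdl_num_sources(X):
    """Wax-Kailath MDL estimate of the number of sources from the
    eigenvalues of the sample covariance (X is channels x samples)."""
    p, N = X.shape
    lam = np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]
    scores = []
    for k in range(p):
        tail = lam[k:]
        log_ratio = np.log(tail).mean() - np.log(tail.mean())  # log(geometric / arithmetic mean)
        scores.append(-N * tail.size * log_ratio + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(scores))

# toy data: 30 "sensors", 3 independent sources mixed linearly, plus noise
rng = np.random.default_rng(0)
p, N, q = 30, 2000, 3
A = rng.normal(size=(p, q))                      # lead-field-like mixing matrix
X = A @ rng.normal(size=(q, N)) + 0.5 * rng.normal(size=(p, N))
print("estimated number of sources:", mdl_num_sources(X))
```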

  16. Language in the brain at rest: new insights from resting state data and graph theoretical analysis

    PubMed Central

    Muller, Angela M.; Meyer, Martin

    2014-01-01

    In humans, the most obvious functional lateralization is the specialization of the left hemisphere for language. Therefore, the involvement of the right hemisphere in language is one of the most remarkable findings during the last two decades of fMRI research. However, the importance of this finding continues to be underestimated. We examined the interaction between the two hemispheres and also the role of the right hemisphere in language. From two seeds representing Broca's area, we conducted a seed correlation analysis (SCA) of resting state fMRI data and could identify a resting state network (RSN) overlapping to significant extent with a language network that was generated by an automated meta-analysis tool. To elucidate the relationship between the clusters of this RSN, we then performed graph theoretical analyses (GTA) using the same resting state dataset. We show that the right hemisphere is clearly involved in language. A modularity analysis revealed that the interaction between the two hemispheres is mediated by three partitions: A bilateral frontal partition consists of nodes representing the classical left sided language regions as well as two right-sided homologs. The second bilateral partition consists of nodes from the right frontal, the left inferior parietal cortex as well as of two nodes within the posterior cerebellum. The third partition is also bilateral and comprises five regions from the posterior midline parts of the brain to the temporal and frontal cortex, two of the nodes are prominent default mode nodes. The involvement of this last partition in a language relevant function is a novel finding. PMID:24808843

  17. Models for liquid-liquid partition in the system dimethyl sulfoxide-organic solvent and their use for estimating descriptors for organic compounds.

    PubMed

    Karunasekara, Thushara; Poole, Colin F

    2011-07-15

    Partition coefficients for varied compounds were determined for the organic solvent-dimethyl sulfoxide biphasic partition system where the organic solvent is n-heptane or isopentyl ether. These partition coefficient databases are analyzed using the solvation parameter model facilitating a quantitative comparison of the dimethyl sulfoxide-based partition systems with other totally organic partition systems. Dimethyl sulfoxide is a moderately cohesive solvent, reasonably dipolar/polarizable and strongly hydrogen-bond basic. Although generally considered to be non-hydrogen-bond acidic, analysis of the partition coefficient database strongly supports reclassification as a weak hydrogen-bond acid in agreement with recent literature. The system constants for the n-heptane-dimethyl sulfoxide biphasic system provide an explanation of the mechanism for the selective isolation of polycyclic aromatic compounds from mixtures containing low-polarity hydrocarbons based on the capability of the polar interactions (dipolarity/polarizability and hydrogen-bonding) to overcome the opposing cohesive forces in dimethyl sulfoxide that are absent for the interactions with hydrocarbons of low polarity. In addition, dimethyl sulfoxide-organic solvent systems afford a complementary approach to other totally organic biphasic partition systems for descriptor measurements of compounds virtually insoluble in water. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Analysis of Partitioned Methods for the Biot System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bukac, Martina; Layton, William; Moraiti, Marina

    2015-02-18

    In this work, we present a comprehensive study of several partitioned methods for the coupling of flow and mechanics. We derive energy estimates for each method for the fully-discrete problem. We write the obtained stability conditions in terms of a key control parameter defined as a ratio of the coupling strength and the speed of propagation. Depending on the parameters in the problem, we indicate the choice of partitioned method that allows the largest time step. (C) 2015 Wiley Periodicals, Inc.

  19. A fully automatic evolutionary classification of protein folds: Dali Domain Dictionary version 3

    PubMed Central

    Dietmann, Sabine; Park, Jong; Notredame, Cedric; Heger, Andreas; Lappe, Michael; Holm, Liisa

    2001-01-01

    The Dali Domain Dictionary (http://www.ebi.ac.uk/dali/domain) is a numerical taxonomy of all known structures in the Protein Data Bank (PDB). The taxonomy is derived fully automatically from measurements of structural, functional and sequence similarities. Here, we report the extension of the classification to match the traditional four hierarchical levels corresponding to: (i) supersecondary structural motifs (attractors in fold space), (ii) the topology of globular domains (fold types), (iii) remote homologues (functional families) and (iv) homologues with sequence identity above 25% (sequence families). The computational definitions of attractors and functional families are new. In September 2000, the Dali classification contained 10 531 PDB entries comprising 17 101 chains, which were partitioned into five attractor regions, 1375 fold types, 2582 functional families and 3724 domain sequence families. Sequence families were further associated with 99 582 unique homologous sequences in the HSSP database, which increases the number of effectively known structures several-fold. The resulting database contains the description of protein domain architecture, the definition of structural neighbours around each known structure, the definition of structurally conserved cores and a comprehensive library of explicit multiple alignments of distantly related protein families. PMID:11125048

  20. Assessing and grouping chemicals applying partial ordering: Alkyl anilines as an illustrative example.

    PubMed

    Carlsen, Lars; Bruggemann, Rainer

    2018-06-03

    In chemistry there is a long tradition of classification. Usually methods are adopted from the wide field of cluster analysis. Here, based on the example of 21 alkyl anilines, we show that concepts from the mathematical discipline of partially ordered sets may also be applied. The chemical compounds are described by a multi-indicator system. For the present study, four indicators, mainly taken from the field of environmental chemistry, were applied and a Hasse diagram was constructed. A Hasse diagram is an acyclic, transitively reduced, triangle-free graph that may have several components. The crucial question is whether or not the Hasse diagram can be interpreted from a structural chemical point of view. This is indeed the case, but it must be clearly stated that a guarantee for meaningful results in general cannot be given. For that, further theoretical work is needed. Two cluster analysis methods are applied (K-means and a hierarchical cluster method). In both cases the partitioning of the set of 21 compounds by the component structure of the Hasse diagram appears to be better interpretable. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
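    A small sketch of the partial-order machinery: compounds are compared component-wise on their indicators, and the cover relations (the edges of the Hasse diagram) are obtained by transitive reduction. The indicator matrix below is invented for illustration, not the 21 alkyl anilines of the study.

```python
import itertools
import numpy as np

# hypothetical indicator matrix: 6 compounds x 4 indicators
# (higher value = more problematic); names and values are made up
names = ["A1", "A2", "A3", "A4", "A5", "A6"]
Q = np.array([[1, 2, 1, 3],
              [2, 3, 2, 3],
              [1, 1, 2, 2],
              [3, 3, 3, 4],
              [2, 1, 1, 1],
              [1, 3, 3, 2]])

def leq(i, j):
    """Compound i <= compound j in the component-wise (product) order."""
    return bool(np.all(Q[i] <= Q[j]))

# full order relation on distinct pairs
below = {(i, j) for i, j in itertools.permutations(range(len(names)), 2) if leq(i, j)}

# transitive reduction: keep (i, j) only if no k lies strictly between them
covers = {(i, j) for (i, j) in below
          if not any((i, k) in below and (k, j) in below for k in range(len(names)))}

print("cover relations (edges of the Hasse diagram):")
for i, j in sorted(covers):
    print(f"  {names[i]} < {names[j]}")
incomparable = [(names[i], names[j]) for i, j in itertools.combinations(range(len(names)), 2)
                if (i, j) not in below and (j, i) not in below]
print("incomparable pairs:", incomparable)
```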

  1. Building supertrees: an empirical assessment using the grass family (Poaceae).

    PubMed

    Salamin, Nicolas; Hodkinson, Trevor R; Savolainen, Vincent

    2002-02-01

    Large and comprehensive phylogenetic trees are desirable for studying macroevolutionary processes and for classification purposes. Such trees can be obtained in two different ways. Either the widest possible range of taxa can be sampled and used in a phylogenetic analysis to produce a "big tree," or preexisting topologies can be used to create a supertree. Although large multigene analyses are often favored, combinable data are not always available, and supertrees offer a suitable solution. The most commonly used method of supertree reconstruction, matrix representation with parsimony (MRP), is presented here. We used a combined data set for the Poaceae to (1) assess the differences between an approach that uses combined data and one that uses different MRP modifications based on the character partitions and (2) investigate the advantages and disadvantages of these modifications. The Baum and Ragan and Purvis modifications gave similar results. Incorporating bootstrap support associated with pre-existing topologies improved the Baum and Ragan modification and its similarity with a combined analysis. Finally, we used the supertree reconstruction approach on 55 published phylogenies to build one of the most comprehensive phylogenetic trees published for the grass family, including 403 taxa, and discuss its strengths and weaknesses in relation to other published hypotheses.

  2. Multilevel Green's function interpolation method for scattering from composite metallic and dielectric objects.

    PubMed

    Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou

    2008-10-01

    A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).

  3. A Short Review of the Generation of Molecular Descriptors and Their Applications in Quantitative Structure Property/Activity Relationships.

    PubMed

    Sahoo, Sagarika; Adhikari, Chandana; Kuanar, Minati; Mishra, Bijay K

    2016-01-01

    Synthesis of organic compounds with a specific biological activity or physicochemical profile requires a thorough analysis of the extensive data sets available in the literature. Quantitative structure property/activity relationships (QSP/AR) simplify this task by predicting the structure of a compound with an optimized activity, and they rest on a vast body of molecular descriptors (MDs). This review surveys the generation of molecular descriptors and their probable applications in QSP/AR. Literature has been collected from a wide range of research journals, citable web reports, seminar proceedings and books. The MDs are classified according to how they are generated, and their applications in QSP/AR are also reported. MDs can be divided into experimental and theoretical types, with the latter subdivided into structural and quantum chemical descriptors. The structural parameters are derived from molecular graphs or the topology of the molecules; even the pixels of a molecular image can be used as descriptors. In QSPR studies the physicochemical properties include boiling point, heat capacity, density, refractive index, molar volume, surface tension, heat of formation, octanol-water partition coefficient, solubility, chromatographic retention indices, etc. Among biological activities, toxicity, antimalarial activity, sensory irritancy, local anesthetic potency, tadpole narcosis, antifungal activity, and enzyme-inhibiting activity are important endpoints in QSAR studies. The classification of the MDs is mostly generic in nature, and their application in QSP/AR follows the same generic link. Experimental MDs are more suitable for correlation analysis than theoretical ones but are more expensive to generate. With the advent of sophisticated computational tools and experimental design, the proliferation of MDs is inevitable, but the search for highly optimized MDs remains an open-ended process.
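
    As a minimal, hypothetical illustration of theoretical descriptor generation (not part of the review itself), the snippet below uses the open-source RDKit toolkit to compute a few common constitutional and physicochemical descriptors from a SMILES string; the compound and the descriptor selection are arbitrary examples.

      # Sketch: theoretical molecular descriptor generation with RDKit
      # (illustrative only; the compound and descriptors are arbitrary examples).
      from rdkit import Chem
      from rdkit.Chem import Descriptors

      smiles = "Cc1ccc(N)cc1"                      # 4-methylaniline, an example compound
      mol = Chem.MolFromSmiles(smiles)

      descriptors = {
          "MolWt": Descriptors.MolWt(mol),         # molecular weight
          "MolLogP": Descriptors.MolLogP(mol),     # Crippen estimate of the octanol-water log P
          "TPSA": Descriptors.TPSA(mol),           # topological polar surface area
          "NumHDonors": Descriptors.NumHDonors(mol),
          "NumHAcceptors": Descriptors.NumHAcceptors(mol),
      }
      for name, value in descriptors.items():
          print(f"{name}: {value:.2f}")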

  4. An Objective Classification of Saturn Cloud Features from Cassini ISS Images

    NASA Technical Reports Server (NTRS)

    Del Genio, Anthony D.; Barbara, John M.

    2016-01-01

    A k-means clustering algorithm is applied to Cassini Imaging Science Subsystem continuum and methane band images of Saturn's northern hemisphere to objectively classify regional albedo features and aid in their dynamical interpretation. The procedure is based on a technique applied previously to visible-infrared images of Earth. It provides a new perspective on giant planet cloud morphology and its relationship to the dynamics and a meteorological context for the analysis of other types of simultaneous Saturn observations. The method identifies 6 clusters that exhibit distinct morphology, vertical structure, and preferred latitudes of occurrence. These correspond to areas dominated by deep convective cells; low contrast areas, some including thinner and thicker clouds possibly associated with baroclinic instability; regions with possible isolated thin cirrus clouds; darker areas due to thinner low level clouds or clearer skies due to downwelling, or due to absorbing particles; and fields of relatively shallow cumulus clouds. The spatial associations among these cloud types suggest that dynamically, there are three distinct types of latitude bands on Saturn: deep convectively disturbed latitudes in cyclonic shear regions poleward of the eastward jets; convectively suppressed regions near and surrounding the westward jets; and baroclinically unstable latitudes near eastward jet cores and in the anticyclonic regions equatorward of them. These are roughly analogous to some of the features of Earth's tropics, subtropics, and midlatitudes, respectively. This classification may be more useful for dynamics purposes than the traditional belt-zone partitioning. Temporal variations of feature contrast and cluster occurrence suggest that the upper tropospheric haze in the northern hemisphere may have thickened by 2014. The results suggest that routine use of clustering may be a worthwhile complement to many different types of planetary atmospheric data analysis.
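
    A minimal sketch of this kind of pixel-level k-means classification, assuming each pixel carries one continuum-band and one methane-band brightness value; the data, feature choice, and cluster count below are placeholders rather than the authors' actual pipeline.

      # Sketch: k-means clustering of two-band image pixels into albedo/cloud classes.
      # Random stand-in data; the study used Cassini ISS continuum and methane-band images.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(42)
      continuum = rng.random((256, 256))           # stand-in continuum-band image
      methane = rng.random((256, 256))             # stand-in methane-band image

      features = np.column_stack([continuum.ravel(), methane.ravel()])
      labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
      class_map = labels.reshape(continuum.shape)  # per-pixel cluster (cloud-type) map
      print(np.bincount(labels))                   # pixel count per cluster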

  5. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Gumaste, U.; Ronaghi, M.

    1994-01-01

    Applications of high-performance parallel computation to the analysis of complete jet engines, treated as a multidiscipline coupled problem, are described. The coupled problem involves interaction of structures with gas dynamics, heat conduction and heat transfer in aircraft engines. The methodology issues addressed include: consistent discrete formulation of coupled problems with emphasis on coupling phenomena; effect of partitioning strategies, augmentation and temporal solution procedures; sensitivity of response to problem parameters; and methods for interfacing multiscale discretizations in different single fields. The computer implementation issues addressed include: parallel treatment of coupled systems; domain decomposition and mesh partitioning strategies; data representation in object-oriented form and mapping to hardware-driven representation; and tradeoff studies between partitioning schemes and fully coupled treatment.

  6. Temperature and composition dependencies of trace element partitioning - Olivine/melt and low-Ca pyroxene/melt

    NASA Technical Reports Server (NTRS)

    Colson, R. O.; Mckay, G. A.; Taylor, L. A.

    1988-01-01

    This paper presents a systematic thermodynamic analysis of the effects of temperature and composition on olivine/melt and low-Ca pyroxene/melt partitioning. Experiments were conducted in several synthetic basalts with a wide range of Fe/Mg to determine partition coefficients for Eu, Ca, Mn, Fe, Ni, Sm, Cd, Y, Yb, Sc, Al, Zr, and Ti and to model accurately the changes in free energy for trace element exchange between crystal and melt as functions of trace element size and charge. On the basis of this model, partition coefficients for olivine/melt and low-Ca pyroxene/melt can be predicted for a wide range of elements over a variety of basaltic bulk compositions and temperatures. Moreover, variations in partition coefficients during crystallization or melting can be modeled on the basis of changes in temperature and major element chemistry.

  7. Impact of water use efficiency on eddy covariance flux partitioning using correlation structure analysis

    USDA-ARS?s Scientific Manuscript database

    Partitioned land surface fluxes (e.g. evaporation, transpiration, photosynthesis, and ecosystem respiration) are needed as input, calibration, and validation data for numerous hydrological and land surface models. However, one of the most commonly used techniques for measuring land surface fluxes,...

  8. Application of partition technology to particle electrophoresis

    NASA Technical Reports Server (NTRS)

    Van Alstine, James M.; Harris, J. Milton; Karr, Laurel J.; Bamberger, Stephan; Matsos, Helen C.; Snyder, Robert S.

    1989-01-01

    The effects of polymer-ligand concentration on particle electrophoretic mobility and partition in aqueous polymer two-phase systems are investigated. Polymer coating chemistry and affinity ligand synthesis, purification, and analysis are conducted. It is observed that poly(ethylene glycol) ligands are effective for controlling particle electrophoretic mobility.

  9. Implementation of hybrid clustering based on partitioning around medoids algorithm and divisive analysis on human Papillomavirus DNA

    NASA Astrophysics Data System (ADS)

    Arimbi, Mentari Dian; Bustamam, Alhadi; Lestari, Dian

    2017-03-01

    Data clustering can be executed through partition or hierarchical methods for many types of data, including DNA sequences. Both clustering methods can be combined by processing a partition algorithm in the first level and a hierarchical one in the second level, called hybrid clustering. In the partition phase, popular methods such as PAM, K-means, or Fuzzy c-means could be applied; in this study we selected partitioning around medoids (PAM) for our partition stage. Furthermore, following the partition algorithm, in the hierarchical stage we applied the divisive analysis algorithm (DIANA) in order to obtain more specific cluster and sub-cluster structures. The number of main clusters is determined using the Davies-Bouldin Index (DBI); we choose the optimal number of clusters as the one that minimizes the DBI value. In this work, we conduct the clustering on 1252 HPV DNA sequences from GenBank. Feature extraction is performed first, followed by normalization and genetic distance calculation using the Euclidean distance. In our implementation, we used the hybrid PAM and DIANA approach in the R open source programming tool. We obtained 3 main clusters with an average DBI value of 0.979 using PAM in the first stage. After executing DIANA in the second stage, we obtained 4 sub-clusters for Cluster-1, 9 sub-clusters for Cluster-2 and 2 sub-clusters for Cluster-3, with DBI values of 0.972, 0.771, and 0.768 for each main cluster, respectively. Since the second stage produces lower DBI values compared to the first stage, we conclude that this hybrid approach can improve the accuracy of our clustering results.
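
    The cluster-number selection in the first stage can be sketched as follows; k-means is used here only as a stand-in for PAM, the Davies-Bouldin Index comes from scikit-learn, and the feature matrix is a placeholder for the normalized sequence features.

      # Sketch: choose the number of main clusters by minimizing the Davies-Bouldin Index.
      # KMeans stands in for PAM; the data are random placeholders.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import davies_bouldin_score

      rng = np.random.default_rng(1)
      X = rng.standard_normal((300, 10))           # placeholder for extracted sequence features

      best_k, best_dbi = None, np.inf
      for k in range(2, 8):
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          dbi = davies_bouldin_score(X, labels)
          if dbi < best_dbi:
              best_k, best_dbi = k, dbi

      print(f"selected k = {best_k}, DBI = {best_dbi:.3f}")
      # In the hybrid scheme, each main cluster would then be split further with a
      # divisive hierarchical step (DIANA) and re-scored with the DBI.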

  10. LPS-induced NO inhibition and antioxidant activities of ethanol extracts and their solvent partitioned fractions from four brown seaweeds

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Lae; Lee, Dong-Jin; Lee, Hyi-Seung; Lee, Yeon-Ju; You, Sang Guan

    2013-12-01

    The nitric oxide inhibitory (NOI) and antioxidant (ABTS and DPPH radical scavenging effects with reducing power) activities of the ethanol (EtOH) extracts and solvent partitioned fractions from Scytosiphon lomentaria, Chorda filum, Agarum cribrosum, and Desmarestia viridis were investigated, and the correlation between biological activity and total phenolic (TP) and phlorotannin (TPT) content was determined by PCA analysis. The yield of EtOH extracts from the four brown seaweeds ranged from 2.6 to 6.6% with the highest yield from D. viridis, and the predominant compounds in their solvent partitioned fractions had medium and/or lower polarity. The TP and TPT content of the EtOH extracts were in the ranges of 25.0-44.1 mg GAE/g sample and 0.2-4.6 mg PG/g sample, respectively, which were mostly included in the organic solvent partitioned fractions. Strong NOI activity was observed in the EtOH extracts and their solvent partitioned fractions from D. viridis and C. filum. In addition, the EtOH extract and its solvent partitioned fractions of D. viridis exhibited little cytotoxicity to Raw 264.7 cells. The most potent ABTS and DPPH radical scavenging capacity was shown in the EtOH extracts and their solvent partitioned fractions from S. lomentaria and C. filum, and both also exhibited strong reducing ability. In the PCA analysis the content of TPT had a good correlation with DPPH (r = 0.62), ABTS (r = 0.69) and reducing power (r = 0.65); however, a poor correlation was observed between the contents of TP and TPT and NOI, suggesting that the phlorotannins might be responsible for the DPPH and ABTS radical scavenging activities.

  11. Comparison of Modeling Approaches for Carbon Partitioning: Impact on Estimates of Global Net Primary Production and Equilibrium Biomass of Woody Vegetation from MODIS GPP

    NASA Astrophysics Data System (ADS)

    Ise, T.; Litton, C. M.; Giardina, C. P.; Ito, A.

    2009-12-01

    Plant partitioning of carbon (C) to above- vs. belowground, to growth vs. respiration, and to short vs. long lived tissues exerts a large influence on ecosystem structure and function with implications for the global C budget. Importantly, outcomes of process-based terrestrial vegetation models are likely to vary substantially with different C partitioning algorithms. However, controls on C partitioning patterns remain poorly quantified, and studies have yielded variable, and at times contradictory, results. A recent meta-analysis of forest studies suggests that the ratio of net primary production (NPP) and gross primary production (GPP) is fairly conservative across large scales. To illustrate the effect of this unique meta-analysis-based partitioning scheme (MPS), we applied MPS to satellite-based (MODIS) GPP to estimate NPP and compared the result against two global process-based vegetation models (Biome-BGC and VISIT), examining the influence of C partitioning on C budgets of woody plants. Due to the temperature dependence of maintenance respiration, NPP/GPP predicted by the process-based models increased with latitude while the ratio remained constant with MPS. Overall, global NPP estimated with MPS was 17 and 27% lower than the process-based models for temperate and boreal biomes, respectively, with smaller differences in the tropics. Global equilibrium biomass of woody plants was then calculated from the NPP estimates and tissue turnover rates from VISIT. Since turnover rates differed greatly across tissue types (i.e., metabolically active vs. structural), global equilibrium biomass estimates were sensitive to the partitioning scheme employed. The MPS estimate of global woody biomass was 7-21% lower than that of the process-based models. In summary, we found that model output for NPP and equilibrium biomass was quite sensitive to the choice of C partitioning schemes. [Figure: Carbon use efficiency (CUE; NPP/GPP) by forest biome and the globe; values are means for 2001-2006.]

  12. Chaotic Dynamics of Linguistic-Like Processes at the Syntactical and Semantic Levels: in the Pursuit of a Multifractal Attractor

    NASA Astrophysics Data System (ADS)

    Nicolis, John S.; Katsikas, Anastassis A.

    Collective parameters such as the Zipf's law-like statistics, the Transinformation, the Block Entropy and the Markovian character are compared for natural, genetic, musical and artificially generated long texts from generating partitions (alphabets) on homogeneous as well as on multifractal chaotic maps. It appears that minimal requirements for a language at the syntactical level such as memory, selectivity of few keywords and broken symmetry in one dimension (polarity) are more or less met by dynamically iterating simple maps or flows, e.g. very simple chaotic hardware. The same selectivity is observed at the semantic level where the aim refers to partitioning a set of impinging environmental stimuli onto coexisting attractors-categories. Under the regime of pattern recognition and classification, few key features of a pattern or few categories claim the lion's share of the information stored in this pattern and practically, only these key features are persistently scanned by the cognitive processor. A multifractal attractor model can in principle explain this high selectivity, both at the syntactical and the semantic levels.

  13. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.

    1975-01-01

    An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.

  14. Strainrange partitioning behavior of an automotive turbine alloy

    NASA Technical Reports Server (NTRS)

    Annis, C. G.; Vanwanderham, M. C.; Wallace, R. M.

    1976-01-01

    This report addresses Strainrange Partitioning, an advanced life prediction analysis procedure, as applied to CA-101 (cast IN 792 + Hf), an alloy proposed for turbine disks in automotive gas turbine engines. The methodology was successful in predicting specimen life under thermal-mechanical cycling to within a factor of ±2.

  15. Measurement and analysis of the mannitol partition coefficient in sucrose crystallization under simulated industrial conditions

    USDA-ARS?s Scientific Manuscript database

    Mannitol is a major product of Leuconostoc mesenteroides bacterial deterioration of both sugarcane and sugar beet. The effect of crystallization conditions on the mannitol partition coefficient (Keff) between impure sucrose syrup and crystal has been investigated in a batch laboratory c...

  16. Reoperation and readmission after clipping of an unruptured intracranial aneurysm: a National Surgical Quality Improvement Program analysis.

    PubMed

    Dasenbrock, Hormuzdiyar H; Smith, Timothy R; Rudy, Robert F; Gormley, William B; Aziz-Sultan, M Ali; Du, Rose

    2018-03-01

    OBJECTIVE Although reoperation and readmission have been used as quality metrics, there are limited data evaluating the rate of, reasons for, and predictors of reoperation and readmission after microsurgical clipping of unruptured aneurysms. METHODS Adult patients who underwent craniotomy for clipping of an unruptured aneurysm electively were extracted from the prospective National Surgical Quality Improvement Program registry (2011-2014). Multivariable logistic regression and recursive partitioning analysis evaluated the independent predictors of nonroutine hospital discharge, unplanned 30-day reoperation, and readmission. Predictors screened included patient age, sex, comorbidities, American Society of Anesthesiologists (ASA) classification, functional status, aneurysm location, preoperative laboratory values, operative time, and postoperative complications. RESULTS Among the 460 patients evaluated, 4.2% underwent any reoperation at a median of 7 days (interquartile range [IQR] 2-17 days) postoperatively, and 1.1% required a cranial reoperation. The most common reoperation was ventricular shunt placement (23.5%); other reoperations were tracheostomy, craniotomy for hematoma evacuation, and decompressive hemicraniectomy. Independent predictors of any unplanned reoperation were age greater than 51 years and longer operative time (p ≤ 0.04). Readmission occurred in 6.3% of patients at a median of 6 days (IQR 5-13 days) after discharge from the surgical hospitalization; 59.1% of patients were readmitted within 1 week and 86.4% within 2 weeks of discharge. The most common reason for readmission was seizure (26.7%); other causes of readmission included hydrocephalus, cerebrovascular accidents, and headache. Unplanned readmission was independently associated with age greater than 65 years, Class II or III obesity (body mass index > 35 kg/m2), preoperative hyponatremia, and preoperative anemia (p ≤ 0.04). Readmission was not associated with operative time, complications during the surgical hospitalization, length of stay, or discharge disposition. Recursive partitioning analysis identified the same 4 variables, as well as ASA classification, as associated with unplanned readmission. The most potent predictors of nonroutine hospital discharge (16.7%) were postoperative neurological and cardiopulmonary complications; other predictors were age greater than 51 years, preoperative hyponatremia, African American and Asian race, and a complex vertebrobasilar circulation aneurysm. CONCLUSIONS In this national analysis, patient age greater than 65 years, Class II or III obesity, preoperative hyponatremia, and anemia were associated with adverse events, highlighting patients who may be at risk for complications after clipping of unruptured cerebral aneurysms. The preponderance of early readmissions highlights the importance of early surveillance and follow-up after discharge; the frequency of readmission for seizure emphasizes the need for additional data evaluating the utility and duration of postcraniotomy seizure prophylaxis. Moreover, readmission was primarily associated with preoperative characteristics rather than metrics of perioperative care, suggesting that readmission may be a suboptimal indicator of the quality of care received during the surgical hospitalization in this patient population.
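
    Recursive partitioning of the kind used above can be sketched with a shallow decision tree; the feature names below only echo the predictors screened in the abstract, and the data and outcome rule are synthetic.

      # Sketch: recursive partitioning (decision tree) to screen predictors of readmission.
      # Synthetic data; column names merely echo the kinds of predictors described above.
      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      n = 460
      df = pd.DataFrame({
          "age": rng.integers(30, 85, n),
          "bmi": rng.normal(28, 6, n),
          "preop_sodium": rng.normal(139, 3, n),
          "preop_hematocrit": rng.normal(41, 4, n),
      })
      # Synthetic outcome loosely tied to age and sodium, just to give the tree structure.
      readmitted = ((df["age"] > 65) & (df["preop_sodium"] < 136)).astype(int)

      tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
      tree.fit(df, readmitted)
      print(export_text(tree, feature_names=list(df.columns)))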

  17. A Distributed Fuzzy Associative Classifier for Big Data.

    PubMed

    Segatori, Armando; Bechini, Alessio; Ducange, Pietro; Marcelloni, Francesco

    2017-09-19

    Fuzzy associative classification has not been widely analyzed in the literature, although associative classifiers (ACs) have proved to be very effective in different real domain applications. The main reason is that learning fuzzy ACs is a very heavy task, especially when dealing with large datasets. To overcome this drawback, in this paper, we propose an efficient distributed fuzzy associative classification approach based on the MapReduce paradigm. The approach exploits a novel distributed discretizer based on fuzzy entropy for efficiently generating fuzzy partitions of the attributes. Then, a set of candidate fuzzy association rules is generated by employing a distributed fuzzy extension of the well-known FP-Growth algorithm. Finally, this set is pruned by using three purposely adapted types of pruning. We implemented our approach on the popular Hadoop framework. Hadoop allows distributing storage and processing of very large data sets on computer clusters built from commodity hardware. We have performed an extensive experimentation and a detailed analysis of the results using six very large datasets with up to 11,000,000 instances. We have also experimented with different types of reasoning methods. Focusing on accuracy, model complexity, computation time, and scalability, we compare the results achieved by our approach with those obtained by two distributed nonfuzzy ACs recently proposed in the literature. We highlight that, although the accuracies are comparable, the complexity, evaluated in terms of the number of rules, of the classifiers generated by the fuzzy distributed approach is lower than that of the nonfuzzy classifiers.
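
    As a toy illustration of fuzzy-partitioning a numeric attribute: the approach above derives the partition from a distributed fuzzy-entropy discretizer, whereas the sketch below simply builds uniformly spaced triangular fuzzy sets to show how membership degrees are computed.

      # Sketch: uniform triangular fuzzy partition of a numeric attribute (toy example;
      # the paper derives the partition points with a distributed fuzzy-entropy discretizer).
      import numpy as np

      def triangular_partition(x, lo, hi, n_fuzzy_sets=3):
          """Membership degrees of x in n_fuzzy_sets uniformly spaced triangular sets."""
          centers = np.linspace(lo, hi, n_fuzzy_sets)
          half_width = (hi - lo) / (n_fuzzy_sets - 1)
          return np.clip(1.0 - np.abs(x - centers) / half_width, 0.0, 1.0)

      print(triangular_partition(2.5, lo=0.0, hi=10.0))
      # -> memberships in the sets centered at 0, 5, and 10: 0.5, 0.5, 0.0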

  18. Matrix partitioning and EOF/principal component analysis of Antarctic Sea ice brightness temperatures

    NASA Technical Reports Server (NTRS)

    Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.

    1984-01-01

    A field of measured anomalies of some physical variable relative to their time averages is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller-dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The method is applied to the space-time distribution of Nimbus-5 ESMR sea ice measurements.
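
    A small numerical sketch of the joining idea, assuming a time-by-space anomaly field partitioned in the space domain: local EOFs of each block are embedded in the full space, stacked into a reduced basis, and the full covariance is re-diagonalized in that basis. This is illustrative only; the study's joining procedure may differ in detail.

      # Sketch: approximate leading eigenvalues/EOFs of a large covariance matrix by
      # joining EOFs computed on spatial partitions of the anomaly field.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 60))           # anomalies: 200 time steps x 60 grid points
      X -= X.mean(axis=0)

      blocks = np.array_split(np.arange(X.shape[1]), 2)   # partition the space domain in two
      local_bases = []
      for idx in blocks:
          C_local = np.cov(X[:, idx], rowvar=False)
          eigvals, eigvecs = np.linalg.eigh(C_local)
          leading = eigvecs[:, np.argsort(eigvals)[::-1][:5]]   # 5 leading local EOFs
          embedded = np.zeros((X.shape[1], leading.shape[1]))
          embedded[idx, :] = leading                            # embed into the full space
          local_bases.append(embedded)
      B = np.hstack(local_bases)                   # joined basis (orthonormal, block structure)

      C_full = np.cov(X, rowvar=False)
      approx = np.sort(np.linalg.eigvalsh(B.T @ C_full @ B))[::-1]   # Rayleigh-Ritz estimates
      exact = np.sort(np.linalg.eigvalsh(C_full))[::-1]
      print("approximate leading eigenvalues:", np.round(approx[:5], 3))
      print("exact leading eigenvalues:      ", np.round(exact[:5], 3))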

  19. MODFLOW-CDSS, a version of MODFLOW-2005 with modifications for Colorado Decision Support Systems

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    MODFLOW-CDSS is a three-dimensional, finite-difference groundwater-flow model based on MODFLOW-2005, with two modifications. The first modification is the introduction of a Partition Stress Boundaries capability, which enables the user to partition a selected subset of MODFLOW's stress-boundary packages, with each partition defined by a separate input file. Volumetric water-budget components of each partition are tracked and listed separately in the volumetric water-budget tables. The second modification enables the user to specify that execution of a simulation should continue despite failure of the solver to satisfy convergence criteria. This modification is particularly intended to be used in conjunction with automated model-analysis software; its use is not recommended for other purposes.

  20. Visual saliency detection based on in-depth analysis of sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Shen, Siqiu; Ning, Chen

    2018-03-01

    Visual saliency detection has been receiving great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions, within which saliency detection via sparse representation is one of the newly arisen approaches. However, most existing sparse representation-based saliency detection methods utilize partial characteristics of sparse representation, lacking in-depth analysis. Thus, they may have limited detection performance. Motivated by this, the present paper proposes an algorithm for detecting visual saliency based on in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned with randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on the in-depth analysis of sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to be from the most salient region, are screened out and taken away from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with some state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.

  1. Some remarks on using circulation classifications to evaluate circulation model and atmospheric reanalysis data

    NASA Astrophysics Data System (ADS)

    Stryhal, Jan; Huth, Radan

    2017-04-01

    Automated classifications of atmospheric circulation patterns represent a tool widely used for studying the circulation both in the real atmosphere, represented by atmospheric reanalyses, and in circulation model outputs. It is well known that the results of studies utilizing one of these methods are influenced by several subjective choices, of which one of the most crucial is the selection of the method itself. The authors of the present study used eight methods from the COST733 classification software (Grosswettertypes, two variants of Jenkinson-Collison, Lund, T-mode PCA with oblique rotation of principal components, k-medoids, k-means with differing starting partitions, and SANDRA) to assess the winter 1961-2000 daily sea level pressure patterns in five reanalysis datasets (ERA-40, NCEP-1, JRA-55, 20CRv2, and ERA-20C), as well as in the historical runs and 21st century projections of an ensemble of CMIP5 GCMs. The classification methods were quite consistent in displaying the strongest biases in GCM simulations. However, the results also showed that multiple classifications are required to quantify the biases in certain types of circulation (e.g., zonal circulation or blocking-like patterns). There was no sign that any method had a tendency to over- or underestimate the biases in circulation type frequency. The bias found by a particular method for a particular domain clearly reflects the ability of the algorithm to detect groups of similar patterns within the data space, and whether these groups do or do not differ from one dataset to another is to a large extent coincidental. There were, nevertheless, systematic differences between groups of methods that use some form of correlation to classify the patterns into circulation types (CTs) and those which use the Euclidean distance. The comparison of reanalyses, which was conducted over eight European domains, showed that there is even a weak negative correlation between the average differences of CT frequency found by cluster analysis methods on one hand, and the remaining methods on the other. This suggests that groups of different methods capture different kinds of errors and that averaging the results obtained by an ensemble of methods very likely leads to an underestimation of the errors actually present in the data.

  2. Improved adaptive splitting and selection: the hybrid training method of a classifier based on a feature space partitioning.

    PubMed

    Jackowski, Konrad; Krawczyk, Bartosz; Woźniak, Michał

    2014-05-01

    Currently, methods of combined classification are the focus of intense research. A properly designed group of combined classifiers exploiting knowledge gathered in a pool of elementary classifiers can successfully outperform a single classifier. There are two essential issues to consider when creating combined classifiers: how to establish the most comprehensive pool and how to design a fusion model that allows for taking full advantage of the collected knowledge. In this work, we address these issues and propose AdaSS+, a training algorithm dedicated to compound classifier systems that effectively exploits local specialization of the elementary classifiers. The training procedure consists of two phases. The first phase detects the classifier competencies and adjusts the respective fusion parameters. The second phase boosts classification accuracy by elevating the degree of local specialization. The quality of the proposed algorithm is evaluated on the basis of a wide range of computer experiments, which show that AdaSS+ can outperform the original method and several reference classifiers.

  3. A new clustering algorithm applicable to multispectral and polarimetric SAR images

    NASA Technical Reports Server (NTRS)

    Wong, Yiu-Fai; Posner, Edward C.

    1993-01-01

    We describe an application of a scale-space clustering algorithm to the classification of a multispectral and polarimetric SAR image of an agricultural site. After the initial polarimetric and radiometric calibration and noise cancellation, we extracted a 12-dimensional feature vector for each pixel from the scattering matrix. The clustering algorithm was able to partition a set of unlabeled feature vectors from 13 selected sites, each site corresponding to a distinct crop, into 13 clusters without any supervision. The cluster parameters were then used to classify the whole image. The classification map is much less noisy and more accurate than those obtained by hierarchical rules. Starting with every point as a cluster, the algorithm works by melting the system to produce a tree of clusters in the scale space. It can cluster data in any multidimensional space and is insensitive to variability in cluster densities, sizes and ellipsoidal shapes. This algorithm, more powerful than existing ones, may be useful for remote sensing for land use.

  4. Quantitative analysis of molecular partition towards lipid membranes using surface plasmon resonance

    NASA Astrophysics Data System (ADS)

    Figueira, Tiago N.; Freire, João M.; Cunha-Santos, Catarina; Heras, Montserrat; Gonçalves, João; Moscona, Anne; Porotto, Matteo; Salomé Veiga, Ana; Castanho, Miguel A. R. B.

    2017-03-01

    Understanding the interplay between molecules and lipid membranes is fundamental when studying cellular and biotechnological phenomena. Partition between aqueous media and lipid membranes is key to the mechanism of action of many biomolecules and drugs. Quantifying membrane partition, through adequate and robust parameters, is thus essential. Surface Plasmon Resonance (SPR) is a powerful technique for studying 1:1 stoichiometric interactions but has limited application to lipid membrane partition data. We have developed and applied a novel mathematical model for SPR data treatment that enables determination of kinetic and equilibrium partition constants. The method uses two complementary fitting models for association and dissociation sensorgram data. The SPR partition data obtained for the antibody fragment F63, the HIV fusion inhibitor enfuvirtide, and the endogenous drug kyotorphin towards POPC membranes were compared against data from independent techniques. The comprehensive kinetic and partition models were applied to the membrane interaction data of HRC4, a measles virus entry inhibitor peptide, revealing its increased affinity for, and retention in, cholesterol-rich membranes. Overall, our work extends the application of SPR beyond the realm of 1:1 stoichiometric ligand-receptor binding into a new and immense field of applications: the interaction of solutes such as biomolecules and drugs with lipids.

  5. Physicochemical properties/descriptors governing the solubility and partitioning of chemicals in water-solvent-gas systems. Part 1. Partitioning between octanol and air.

    PubMed

    Raevsky, O A; Grigor'ev, V J; Raevskaja, O E; Schaper, K-J

    2006-06-01

    QSPR analyses of a data set containing experimental partition coefficients in the three systems octanol-water, water-gas, and octanol-gas for 98 chemicals have shown that it is possible to calculate any partition coefficient in the system 'gas phase/octanol/water' by three different approaches: (1) from the experimental partition coefficients obtained in the two other subsystems; however, in many cases these data may not be available. (2) A traditional QSPR analysis based on, e.g., HYBOT descriptors (hydrogen bond acceptor and donor factors, SigmaCa and SigmaCd, together with polarisability alpha, a steric bulk effect descriptor) supplemented with substructural indicator variables. (3) A very promising approach that combines the similarity concept with QSPR based on HYBOT descriptors. In this approach, the observed partition coefficients of the structurally nearest neighbours of a compound-of-interest are used, together with contributions arising from differences in alpha, SigmaCa, and SigmaCd values between the compound-of-interest and its nearest neighbour(s). In this investigation, highly significant relationships were obtained by approaches (1) and (3) for the octanol/gas phase partition coefficient (log Log).
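
    Approach (1) is simply the thermodynamic cycle linking the three subsystems: the octanol/gas coefficient is the product of the octanol/water and water/gas coefficients, so the logarithms add. A minimal sketch with hypothetical values:

      # Approach (1): K(octanol/gas) = K(octanol/water) * K(water/gas), so the logs add.
      # The numbers below are hypothetical, for illustration only.
      log_k_octanol_water = 2.10    # log K(octanol/water) of some compound
      log_k_water_gas = 1.35        # log K(water/gas)
      log_k_octanol_gas = log_k_octanol_water + log_k_water_gas
      print(f"log K(octanol/gas) = {log_k_octanol_gas:.2f}")   # -> 3.45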

  6. ANALYSIS OF A GAS-PHASE PARTITIONING TRACER TEST CONDUCTED IN AN UNSATURATED FRACTURED-CLAY FORMATION

    EPA Science Inventory

    The gas-phase partitioning tracer method was used to estimate non-aqueous phase liquid (NAPL), water, and air saturations in the vadose zone at a chlorinated-solvent contaminated field site in Tucson, AZ. The tracer test was conducted in a fractured-clay system that is the confin...

  7. Rate-distortion analysis of directional wavelets.

    PubMed

    Maleki, Arian; Rajaei, Boshra; Pourreza, Hamid Reza

    2012-02-01

    The inefficiency of separable wavelets in representing smooth edges has led to a great interest in the study of new 2-D transformations. The most popular criterion for analyzing these transformations is the approximation power. Transformations with near-optimal approximation power are useful in many applications such as denoising and enhancement. However, they are not necessarily good for compression. Therefore, most of the nearly optimal transformations such as curvelets and contourlets have not found any application in image compression yet. One of the most promising schemes for image compression is the elegant idea of directional wavelets (DIWs). While these algorithms outperform the state-of-the-art image coders in practice, our theoretical understanding of them is very limited. In this paper, we adopt the notion of rate-distortion and calculate the performance of the DIW on a class of edge-like images. Our theoretical analysis shows that if the edges are not "sharp," the DIW will compress them more efficiently than the separable wavelets. It also demonstrates the inefficiency of the quadtree partitioning that is often used with the DIW. To solve this issue, we propose a new partitioning scheme called megaquad partitioning. Our simulation results on real-world images confirm the benefits of the proposed partitioning algorithm, promised by our theoretical analysis.

  8. ESTimating plant phylogeny: lessons from partitioning

    PubMed Central

    de la Torre, Jose EB; Egan, Mary G; Katari, Manpreet S; Brenner, Eric D; Stevenson, Dennis W; Coruzzi, Gloria M; DeSalle, Rob

    2006-01-01

    Background While Expressed Sequence Tags (ESTs) have proven a viable and efficient way to sample genomes, particularly those for which whole-genome sequencing is impractical, phylogenetic analysis using ESTs remains difficult. Sequencing errors and orthology determination are the major problems when using ESTs as a source of characters for systematics. Here we develop methods to incorporate EST sequence information in a simultaneous analysis framework to address controversial phylogenetic questions regarding the relationships among the major groups of seed plants. We use an automated, phylogenetically derived approach to orthology determination called OrthologID to generate a phylogeny based on 43 process partitions, many of which are derived from ESTs, and examine several measures of support to assess the utility of EST data for phylogenies. Results A maximum parsimony (MP) analysis resulted in a single tree with relatively high support at all nodes in the tree despite rampant conflict among trees generated from the separate analysis of individual partitions. In a comparison of broader-scale groupings based on cellular compartment (i.e., chloroplast, mitochondrial or nuclear) or function, only the nuclear partition tree (based largely on EST data) was found to be topologically identical to the tree based on the simultaneous analysis of all data. Despite topological conflict among the broader-scale groupings examined, only the tree based on morphological data showed statistically significant differences. Conclusion Based on the amount of character support contributed by EST data, which make up a majority of the nuclear data set, and the lack of conflict of the nuclear data set with the simultaneous analysis tree, we conclude that the inclusion of EST data does provide a viable and efficient approach to address phylogenetic questions within a parsimony framework on a genomic scale, if problems of orthology determination and potential sequencing errors can be overcome. In addition, approaches that examine conflict and support in a simultaneous analysis framework allow for a more precise understanding of the evolutionary history of individual process partitions and may be a novel way to understand functional aspects of different kinds of cellular classes of gene products. PMID:16776834

  9. Check-Standard Testing Across Multiple Transonic Wind Tunnels with the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard

    2012-01-01

    This paper reports the results of an analysis of wind tunnel data acquired in support of the Facility Analysis Verification & Operational Reliability (FAVOR) project. The analysis uses methods referred to collectively at Langley Research Center as the Modern Design of Experiments (MDOE). These methods quantify the total variance in a sample of wind tunnel data and partition it into explained and unexplained components. The unexplained component is further partitioned into random and systematic components. This analysis was performed on data acquired in similar wind tunnel tests executed in four different U.S. transonic facilities. The measurement environment of each facility was quantified and compared.
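
    The bookkeeping behind this kind of analysis is a partition of the total variance into a part explained by the fitted response model and an unexplained residual; below is a minimal sketch with a synthetic one-factor example (the variable names are placeholders, not the FAVOR test matrix).

      # Sketch: partition total variance of a response into explained and unexplained
      # components using a simple fitted model (synthetic data).
      import numpy as np

      rng = np.random.default_rng(0)
      alpha = rng.uniform(0, 10, 120)                     # e.g. angle-of-attack settings
      cl = 0.1 * alpha + 0.05 + rng.normal(0, 0.02, 120)  # noisy lift-coefficient response

      coeffs = np.polyfit(alpha, cl, deg=1)               # fitted response model
      residuals = cl - np.polyval(coeffs, alpha)

      ss_total = np.sum((cl - cl.mean()) ** 2)
      ss_unexplained = np.sum(residuals ** 2)
      ss_explained = ss_total - ss_unexplained
      print(f"explained fraction of total variance: {ss_explained / ss_total:.3f}")
      # The unexplained part could be split further into random and systematic components,
      # e.g. by comparing replicate scatter with slowly varying trends in the residuals.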

  10. DAPNe with micro-capillary separatory chemistry-coupled to MALDI-MS for the analysis of polar and non-polar lipid metabolism in one cell

    NASA Astrophysics Data System (ADS)

    Hamilton, Jason S.; Aguilar, Roberto; Petros, Robby A.; Verbeck, Guido F.

    2017-05-01

    The cellular metabolome is considered to be a representation of cellular phenotype and cellular response to changes to internal or external events. Methods to expand the coverage of the expansive physicochemical properties that make up the metabolome currently utilize multi-step extractions and chromatographic separations prior to chemical detection, leading to lengthy analysis times. In this study, a single-step procedure for the extraction and separation of a sample using a micro-capillary as a separatory funnel to achieve analyte partitioning within an organic/aqueous immiscible solvent system is described. The separated analytes are then spotted for MALDI-MS imaging and distribution ratios are calculated. Initially, the method is applied to standard mixtures for proof of partitioning. The extraction of an individual cell is non-reproducible; therefore, a broad chemical analysis of metabolites is necessary and will be illustrated with the one-cell analysis of a single Snu-5 gastric cancer cell taken from a cellular suspension. The method presented here shows a broad partitioning dynamic range as a single-step method for lipid analysis, demonstrating a decrease in the ion suppression often present in MALDI analysis of lipids.

  11. The potential of cloud point system as a novel two-phase partitioning system for biotransformation.

    PubMed

    Wang, Zhilong

    2007-05-01

    Although extractive biotransformation in two-phase partitioning systems has been studied extensively, in systems such as the water-organic solvent two-phase system, the aqueous two-phase system, the reverse micelle system, and the room temperature ionic liquid system, this has not yet resulted in widespread industrial application. Based on a discussion of the main obstacles, the exploitation of the cloud point system, which has already been applied in the separation field as cloud point extraction, as a novel two-phase partitioning system for biotransformation is reviewed through the analysis of some topical examples. At the end of the review, the process control and downstream processing involved in applying this novel two-phase partitioning system to biotransformation are also briefly discussed.

  12. Orientifolding of the ABJ Fermi gas

    NASA Astrophysics Data System (ADS)

    Okuyama, Kazumi

    2016-03-01

    The grand partition functions of ABJ theory can be factorized into even and odd parts under the reflection of the fermion coordinate in the Fermi gas approach. In some cases, the even/odd part of the ABJ grand partition function is equal to that of the N = 5 O(n) × USp(n') theory, hence it is natural to think of the even/odd projection of the grand partition function as an orientifolding of the ABJ Fermi gas system. By a systematic WKB analysis, we determine the coefficients in the perturbative part of the grand potential of such orientifold ABJ theory. We also find the exact form of the first few "half-instanton" corrections coming from the twisted sector of the reflection of the fermion coordinate. For Chern-Simons levels k = 2, 4, 8 we find closed-form expressions for the grand partition functions of the orientifold ABJ theory, and for k = 2, 4 we prove the functional relations among the grand partition functions conjectured in arXiv:1410.7658.

  13. The Deformation Behavior Analysis and Mechanical Modeling of Step/Intercritical Quenching and Partitioning-Treated Multiphase Steels

    NASA Astrophysics Data System (ADS)

    Zhao, Hongshan; Li, Wei; Wang, Li; Zhou, Shu; Jin, Xuejun

    2016-08-01

    Two types of multiphase steels containing blocky or fine martensite have been used to study the phase interaction and the TRIP effect. These steels were obtained by step-quenching and partitioning (S-QP820) or intercritical-quenching and partitioning (I-QP800 & I-QP820). The retained austenite (RA) in the S-QP820 specimen containing blocky martensite transformed too early to prevent local failure at high strain due to local strain concentration. In contrast, plentiful RA in the I-QP800 specimen containing finely dispersed martensite transformed uniformly at high strain, which led to optimized strength and elongation. By applying a coordinate conversion method to the microhardness test, the load partitioning between ferrite and partitioned martensite was shown to follow the linear mixture law. The mechanical behavior of the multiphase S-QP820 steel can be modeled based on the Mecking-Kocks theory, Bouquerel's spherical assumption, and a Gladman-type mixture law. Finally, the transformation-induced martensite hardening effect has been studied on a bake-hardened specimen.

  14. DeepPap: Deep Convolutional Networks for Cervical Cell Classification.

    PubMed

    Zhang, Ling; Le Lu; Nogues, Isabella; Summers, Ronald M; Liu, Shaoxiong; Yao, Jianhua

    2017-11-01

    Automation-assisted cervical screening via Pap smear or liquid-based cytology (LBC) is a highly effective cell imaging based cancer detection tool, where cells are partitioned into "abnormal" and "normal" categories. However, the success of most traditional classification methods relies on the presence of accurate cell segmentations. Despite sixty years of research in this field, accurate segmentation remains a challenge in the presence of cell clusters and pathologies. Moreover, previous classification methods are only built upon the extraction of hand-crafted features, such as morphology and texture. This paper addresses these limitations by proposing a method to directly classify cervical cells, without prior segmentation, based on deep features, using convolutional neural networks (ConvNets). First, the ConvNet is pretrained on a natural image dataset. It is subsequently fine-tuned on a cervical cell dataset consisting of adaptively resampled image patches coarsely centered on the nuclei. In the testing phase, aggregation is used to average the prediction scores of a similar set of image patches. The proposed method is evaluated on both Pap smear and LBC datasets. Results show that our method outperforms previous algorithms in classification accuracy (98.3%), area under the curve (0.99) values, and especially specificity (98.3%), when applied to the Herlev benchmark Pap smear dataset and evaluated using five-fold cross validation. Similar superior performances are also achieved on the HEMLBC (H&E stained manual LBC) dataset. Our method is promising for the development of automation-assisted reading systems in primary cervical screening.
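
    The two key steps, fine-tuning a pretrained ConvNet on nucleus-centered patches and averaging prediction scores over patches of the same cell at test time, can be sketched as below. This is a hypothetical PyTorch stand-in (ResNet-18 backbone, random tensors in place of real patches), not the authors' architecture or training protocol.

      # Sketch: fine-tune an ImageNet-pretrained ConvNet for 2-class (normal/abnormal)
      # cell patches, then average softmax scores over patches of one cell at test time.
      import torch
      import torch.nn as nn
      from torchvision import models

      # Pretrained backbone (torchvision >= 0.13 weights API); new 2-class head.
      model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
      model.fc = nn.Linear(model.fc.in_features, 2)

      optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
      criterion = nn.CrossEntropyLoss()

      # One fine-tuning step on a dummy batch of nucleus-centered patches (3x224x224).
      patches = torch.randn(8, 3, 224, 224)
      labels = torch.randint(0, 2, (8,))
      optimizer.zero_grad()
      loss = criterion(model(patches), labels)
      loss.backward()
      optimizer.step()

      # Test time: aggregate by averaging softmax scores over resampled patches of one cell.
      model.eval()
      with torch.no_grad():
          cell_patches = torch.randn(5, 3, 224, 224)
          scores = torch.softmax(model(cell_patches), dim=1).mean(dim=0)
      print("aggregated P(abnormal) =", float(scores[1]))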

  15. Correlation of soil and sediment organic matter polarity to aqueous sorption of nonionic compounds

    USGS Publications Warehouse

    Kile, D.E.; Wershaw, R. L.; Chiou, C.T.

    1999-01-01

    Polarities of the soil/sediment organic matter (SOM) in 19 soil and 9 freshwater sediment samples were determined from solid-state 13C-CP/MAS NMR spectra and compared with published partition coefficients (Koc) of carbon tetrachloride (CT) from aqueous solution. Nondestructive analysis of whole samples by solid-state NMR permits a direct assessment of the polarity of SOM that is not possible by elemental analysis. The percent of organic carbon associated with polar functional groups was estimated from the combined fraction of carbohydrate and carboxyl-amide-ester carbons. A plot of the measured partition coefficients (Koc) of carbon tetrachloride (CT) vs. percent polar organic carbon (POC) shows distinctly different populations of soils and sediments as well as a roughly inverse trend among the soil/sediment populations. Plots of Koc values for CT against other structural group carbon fractions did not yield distinct populations. The results indicate that the polarity of SOM is a significant factor in accounting for differences in Koc between the organic matter in soils and sediments. The alternate direct correlation of the sum of aliphatic and aromatic structural carbons with Koc illustrates the influence of nonpolar hydrocarbon on solute partition interaction. Additional elemental analysis data of selected samples further substantiate the effect of the organic matter polarity on the partition efficiency of nonpolar solutes. The separation between soil and sediment samples based on percent POC reflects definite differences of the properties of soil and sediment organic matters that are attributable to diagenesis.

  16. pH recycling aqueous two-phase systems applied in extraction of Maitake β-Glucan and mechanism analysis using low-field nuclear magnetic resonance.

    PubMed

    Hou, Huiyun; Cao, Xuejun

    2015-07-31

    In this paper, a recycling aqueous two-phase system (ATPS) based on two pH-responsive copolymers, PADB and PMDM, was used in the purification of β-Glucan from Grifola frondosa. The main parameters, such as polymer concentration, type and concentration of salt, extraction temperature and pH, were investigated to optimize partition conditions. The results demonstrated that β-Glucan was extracted into the PADB-rich phase, while impurities were extracted into the PMDM-rich phase. In this 2.5% PADB/2.5% PMDM ATPS, a partition coefficient of 7.489 and an extraction recovery of 96.92% for β-Glucan were obtained in the presence of 30 mmol/L KBr, at pH 8.20 and 30°C. The phase-forming copolymers could be recycled by adjusting pH, with recoveries of over 96.0%. Furthermore, the partition mechanism of Maitake β-Glucan in the PADB/PMDM aqueous two-phase system was studied. Fourier transform infrared spectra, the ForteBio Octet system and low-field nuclear magnetic resonance (LF-NMR) were used to elucidate the partition mechanism of β-Glucan. In particular, LF-NMR was used for the first time in the analysis of partition mechanisms in aqueous two-phase systems. The change of transverse relaxation time (T2) in the ATPS could reflect the interaction between the polymers and β-Glucan.

  17. Elliptic supersymmetric integrable model and multivariable elliptic functions

    NASA Astrophysics Data System (ADS)

    Motegi, Kohei

    2017-12-01

    We investigate the elliptic integrable model introduced by Deguchi and Martin [Int. J. Mod. Phys. A 7, Suppl. 1A, 165 (1992)], which is an elliptic extension of the Perk-Schultz model. We introduce and study a class of partition functions of the elliptic model by using the Izergin-Korepin analysis. We show that the partition functions are expressed as a product of elliptic factors and elliptic Schur-type symmetric functions. This result resembles recent work by number theorists in which the correspondence between the partition functions of trigonometric models and the product of the deformed Vandermonde determinant and Schur functions was established.

  18. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as online download. The results are embedded in a web application with functionalities of visualization and download. PMID:22485060
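
    The object-partitioning rule described above (thresholding each segmented object on the layer means of elevation and of the standard deviation of elevation) can be sketched as follows; the object statistics and sub-domain labels are synthetic stand-ins for the actual segmentation output.

      # Sketch: partition segmented objects into sub-domains by thresholding on mean
      # elevation and standard deviation of elevation (synthetic object statistics).
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      objects = pd.DataFrame({
          "mean_elev": rng.uniform(0, 3000, 50),   # per-object mean elevation (m)
          "std_elev": rng.uniform(0, 300, 50),     # per-object std of elevation (m)
      })

      elev_threshold = objects["mean_elev"].mean()     # layer mean of elevation
      relief_threshold = objects["std_elev"].mean()    # layer mean of elevation std

      objects["elev_class"] = np.where(objects["mean_elev"] >= elev_threshold, "high", "low")
      objects["relief_class"] = np.where(objects["std_elev"] >= relief_threshold, "rough", "smooth")
      print(objects.groupby(["elev_class", "relief_class"]).size())   # objects per sub-domain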

  19. Predicting human liver microsomal stability with machine learning techniques.

    PubMed

    Sakiyama, Yojiro; Yuki, Hitomi; Moriya, Takashi; Hattori, Kazunari; Suzuki, Misaki; Shimada, Kaoru; Honma, Teruki

    2008-02-01

    To ensure a continuing pipeline in pharmaceutical research, lead candidates must possess appropriate metabolic stability in the drug discovery process. In vitro ADMET (absorption, distribution, metabolism, elimination, and toxicity) screening provides us with useful information regarding the metabolic stability of compounds. However, before the synthesis stage, an efficient process is required in order to deal with the vast quantity of data from large compound libraries and high-throughput screening. Here we have derived a relationship between chemical structure and metabolic stability for a data set of in-house compounds by means of various in silico machine learning techniques such as random forest, support vector machine (SVM), logistic regression, and recursive partitioning. For model building, 1952 proprietary compounds comprising two classes (stable/unstable) were used with 193 descriptors calculated by Molecular Operating Environment. The results using test compounds have demonstrated that all classifiers yielded satisfactory results (accuracy > 0.8, sensitivity > 0.9, specificity > 0.6, and precision > 0.8). Above all, classification by random forest as well as SVM yielded kappa values of approximately 0.7 in an independent validation set, slightly higher than the other classification tools. These results suggest that nonlinear/ensemble-based classification methods might prove useful in the area of in silico ADME modeling.
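
    A minimal version of the model-building step, assuming a precomputed descriptor matrix and a binary stable/unstable label, using a random forest and scoring with accuracy and Cohen's kappa as in the abstract; the data below are synthetic placeholders.

      # Sketch: stable/unstable classification from molecular descriptors with a random
      # forest, scored by accuracy and Cohen's kappa (synthetic placeholder data).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score, cohen_kappa_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.standard_normal((1952, 193))        # placeholder descriptor matrix
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1952) > 0).astype(int)  # 1 = stable

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

      pred = clf.predict(X_test)
      print(f"accuracy = {accuracy_score(y_test, pred):.3f}")
      print(f"kappa    = {cohen_kappa_score(y_test, pred):.3f}")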

  20. On the use of harmony search algorithm in the training of wavelet neural networks

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2015-10-01

    Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between the given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this training purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.
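
    The partitioned-initialization idea mentioned above can be sketched with a plain harmony search on a toy objective: the harmony memory is seeded from disjoint slices of the search range instead of uniformly over the whole space. The objective function, bounds, and parameter values below are illustrative assumptions and merely stand in for the WNN weight-training problem.

        import numpy as np

        rng = np.random.default_rng(1)

        def objective(x):                 # toy stand-in for (1 - classification accuracy)
            return np.sum((x - 0.3) ** 2)

        dim, hms, hmcr, par, bw, iters = 5, 10, 0.9, 0.3, 0.05, 2000
        low, high = -1.0, 1.0

        # Partitioned initialization: each harmony is drawn from its own disjoint slice
        edges = np.linspace(low, high, hms + 1)
        memory = np.array([rng.uniform(edges[i], edges[i + 1], dim) for i in range(hms)])
        scores = np.array([objective(h) for h in memory])

        for _ in range(iters):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:               # memory consideration
                    new[d] = memory[rng.integers(hms), d]
                    if rng.random() < par:            # pitch adjustment
                        new[d] += rng.uniform(-bw, bw)
                else:                                 # random selection
                    new[d] = rng.uniform(low, high)
            new = np.clip(new, low, high)
            f = objective(new)
            worst = scores.argmax()
            if f < scores[worst]:                     # replace the worst harmony
                memory[worst], scores[worst] = new, f

        print("best score:", scores.min())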

  1. Automated object-based classification of topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Drăguţ, Lucian; Eisank, Clemens

    2012-03-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and of the standard deviation of elevation, respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionalities.

  2. Microstructural evolution during quenching and partitioning of 0.2C-1.5Mn-1.3Si steels with Cr or Ni additions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, Dean T.; Coughlin, D. R.; Clarke, Kester D.

    Here, the influence of Cr and Ni additions and quench and partition (Q&P) processing parameters on the microstructural development, including carbide formation and austenite retention during Q&P, was studied in two steels with a base composition of 0.2C-1.5Mn-1.3Si wt.% and additions of 1.5 wt.% Cr (1.5Cr) or Ni (1.5Ni). Additions of 1.5 wt.% Cr significantly slowed the kinetics of austenite decomposition relative to the 1.5Ni alloy at all partitioning temperatures, promoting greater austenite retention, lower retained austenite carbon (C) contents, and reduced sensitivity of the retained austenite amounts to processing variables. In the 1.5Cr alloy after partitioning at 400 °C for 300 s, η-carbides were identified by transmission electron microscopy (TEM) and atom probe tomography (APT) revealed no significant enrichment of substitutional elements in the carbides. In the 1.5Ni alloy after partitioning at 450 °C for 300 s, both plate-like and globular carbides were observed by TEM. APT analysis of the globular carbides clearly revealed significant Si rejection and Mn enrichment. Mössbauer effect spectroscopy was used to quantify the amount of carbides after Q&P. In general, carbide amounts below ~0.3% of Fe were measured in both alloys after partitioning for short times (10 s), irrespective of quench or partitioning temperature, which corresponds to a relatively small portion of the bulk C. With increasing partitioning time, carbide amounts remained approximately constant or increased, depending on the alloy, quench temperature, and/or partitioning temperature.

  3. Microstructural evolution during quenching and partitioning of 0.2C-1.5Mn-1.3Si steels with Cr or Ni additions

    DOE PAGES

    Pierce, Dean T.; Coughlin, D. R.; Clarke, Kester D.; ...

    2018-03-08

    Here, the influence of Cr and Ni additions and quench and partition (Q&P) processing parameters on the microstructural development, including carbide formation and austenite retention during Q&P, was studied in two steels with a base composition of 0.2C-1.5Mn-1.3Si wt.% and additions of 1.5 wt.% Cr (1.5Cr) or Ni (1.5Ni). Additions of 1.5 wt.% Cr significantly slowed the kinetics of austenite decomposition relative to the 1.5Ni alloy at all partitioning temperatures, promoting greater austenite retention, lower retained austenite carbon (C) contents, and reduced sensitivity of the retained austenite amounts to processing variables. In the 1.5Cr alloy after partitioning at 400 °C for 300 s, η-carbides were identified by transmission electron microscopy (TEM) and atom probe tomography (APT) revealed no significant enrichment of substitutional elements in the carbides. In the 1.5Ni alloy after partitioning at 450 °C for 300 s, both plate-like and globular carbides were observed by TEM. APT analysis of the globular carbides clearly revealed significant Si rejection and Mn enrichment. Mössbauer effect spectroscopy was used to quantify the amount of carbides after Q&P. In general, carbide amounts below ~0.3% of Fe were measured in both alloys after partitioning for short times (10 s), irrespective of quench or partitioning temperature, which corresponds to a relatively small portion of the bulk C. With increasing partitioning time, carbide amounts remained approximately constant or increased, depending on the alloy, quench temperature, and/or partitioning temperature.

  4. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part I: Model problem analysis

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi

    2017-08-01

    A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added-mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this first part of a two-part series, the properties of the AMP scheme are motivated and evaluated through the development and analysis of some model problems. The analysis shows when and why the traditional partitioned scheme becomes unstable due to either added-mass or added-damping effects. The analysis also identifies the proper form of the added-damping which depends on the discrete time-step and the grid-spacing normal to the rigid body. The results of the analysis are confirmed with numerical simulations that also demonstrate a second-order accurate implementation of the AMP scheme.

  5. Provisional in-silico biopharmaceutics classification (BCS) to guide oral drug product development

    PubMed Central

    Wolk, Omri; Agbaria, Riad; Dahan, Arik

    2014-01-01

    The main objective of this work was to investigate in-silico predictions of physicochemical properties, in order to guide oral drug development by a provisional biopharmaceutics classification system (BCS). Four in-silico methods were used to estimate LogP: group contribution (CLogP) using two different software programs, atom contribution (ALogP), and element contribution (KLogP). The correlations (r²) of CLogP, ALogP and KLogP versus measured LogP data were 0.97, 0.82, and 0.71, respectively. The classification of drugs with reported intestinal permeability in humans was correct for 64.3%–72.4% of the 29 drugs in the dataset, and for 81.82%–90.91% of the 22 passively absorbed drugs, across the different in-silico algorithms. Similar permeability classification was obtained with the various in-silico methods. The in-silico calculations, along with experimental melting points, were then incorporated into a thermodynamic equation for solubility estimations that largely matched the reference solubility values. It was revealed that the effect of melting point on the solubility is minor compared to the partition coefficient, and an average melting point (162.7°C) could replace the experimental values, with similar results. The in-silico methods classified 20.76% (±3.07%) as Class 1, 41.51% (±3.32%) as Class 2, 30.49% (±4.47%) as Class 3, and 6.27% (±4.39%) as Class 4. In conclusion, in-silico methods can be used for BCS classification of drugs in early development, from merely their molecular formula and without foreknowledge of their chemical structure, which will allow for the improved selection, engineering, and developability of candidates. These in-silico methods could enhance success rates, reduce costs, and accelerate oral drug product development. PMID:25284986
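
    A minimal sketch of a provisional BCS assignment from a computed LogP and a melting point is given below; it uses the general solubility equation as the thermodynamic solubility estimate, and the solubility and permeability cutoffs (and the use of LogP as a permeability proxy) are illustrative assumptions rather than the criteria used in the study.

        def estimated_log_s(logp, mp_celsius):
            """Yalkowsky-style general solubility equation, log S in mol/L."""
            return 0.5 - 0.01 * (mp_celsius - 25.0) - logp

        def provisional_bcs(logp, mp_celsius, soluble_cutoff=-4.0, permeable_cutoff=1.35):
            soluble = estimated_log_s(logp, mp_celsius) >= soluble_cutoff
            permeable = logp >= permeable_cutoff      # crude permeability proxy
            if soluble and permeable:
                return 1
            if permeable:
                return 2
            if soluble:
                return 3
            return 4

        # Uses the average melting point quoted in the abstract (162.7 °C)
        print(provisional_bcs(logp=2.5, mp_celsius=162.7))   # -> 1 with these cutoffs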

  6. Local performance optimization for a class of redundant eight-degree-of-freedom manipulators

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1994-01-01

    Local performance optimization for joint limit avoidance and manipulability maximization (singularity avoidance) is obtained by using the Jacobian matrix pseudoinverse and by projecting the gradient of an objective function into the Jacobian null space. Real-time redundancy optimization control is achieved for an eight-joint redundant manipulator having a three-axis spherical shoulder, a single elbow joint, and a four-axis spherical wrist. Symbolic solutions are used for both full-Jacobian and wrist-partitioned pseudoinverses, partitioned null-space projection matrices, and all objective function gradients. A kinematic limitation of this class of manipulators and the limitation's effect on redundancy resolution are discussed. Results obtained with graphical simulation are presented to demonstrate the effectiveness of local redundant manipulator performance optimization. Actual hardware experiments performed to verify the simulated results are also discussed. A major result is that the partitioned solution is desirable because of low computation requirements. The partitioned solution is suboptimal compared with the full solution because translational and rotational terms are optimized separately; however, the results show that the difference is not significant. Singularity analysis reveals that no algorithmic singularities exist for the partitioned solution. The partitioned and full solutions share the same physical manipulator singular conditions. When compared with the full solution, the partitioned solution is shown to be ill-conditioned in smaller neighborhoods of the shared singularities.

  7. Prognostic value of the Glasgow Prognostic Score for glioblastoma multiforme patients treated with radiotherapy and temozolomide.

    PubMed

    Topkan, Erkan; Selek, Ugur; Ozdemir, Yurday; Yildirim, Berna A; Guler, Ozan C; Ciner, Fuat; Mertsoylu, Huseyin; Tufan, Kadir

    2018-04-25

    To evaluate the prognostic value of the Glasgow Prognostic Score (GPS), the combination of C-reactive protein (CRP) and albumin, in glioblastoma multiforme (GBM) patients treated with radiotherapy (RT) and concurrent plus adjuvant temozolomide (TMZ). Data of newly diagnosed GBM patients treated with partial brain RT and concurrent and adjuvant TMZ were retrospectively analyzed. The patients were grouped into three groups according to the GPS criteria: GPS-0: CRP < 10 mg/L and albumin > 35 g/L; GPS-1: CRP < 10 mg/L and albumin < 35 g/L or CRP > 10 mg/L and albumin > 35 g/L; and GPS-2: CRP > 10 mg/L and albumin < 35 g/L. The primary end-point was the association between the GPS groups and the overall survival (OS) outcomes. A total of 142 patients were analyzed (median age: 58 years, 66.2% male). There were 64 (45.1%), 40 (28.2%), and 38 (26.7%) patients in the GPS-0, GPS-1, and GPS-2 groups, respectively. At a median follow-up of 15.7 months, the respective median and 5-year OS rates for the whole cohort were 16.2 months (95% CI 12.7-19.7) and 9.5%. In multivariate analyses, the GPS grouping emerged as independently associated with the median OS (P < 0.001) in addition to the extent of surgery (P = 0.032), Karnofsky performance status (P = 0.009), and the Radiation Therapy Oncology Group recursive partitioning analysis (RTOG RPA) classification (P < 0.001). The GPS grouping and the RTOG RPA classification were found to be strongly correlated in the prognostic stratification of GBM patients (correlation coefficient: 0.42; P < 0.001). The GPS appeared to be useful in the prognostic stratification of GBM patients into three groups with significantly different survival durations, resembling the RTOG RPA classification.
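
    The GPS grouping rule quoted above translates directly into a small classification function; handling of values exactly at 10 mg/L or 35 g/L is an assumption here, since the abstract only states strict inequalities.

        def glasgow_prognostic_score(crp_mg_per_l, albumin_g_per_l):
            if crp_mg_per_l < 10 and albumin_g_per_l > 35:
                return 0      # GPS-0: both markers normal
            if crp_mg_per_l > 10 and albumin_g_per_l < 35:
                return 2      # GPS-2: both markers abnormal
            return 1          # GPS-1: exactly one marker abnormal

        print(glasgow_prognostic_score(crp_mg_per_l=8.0, albumin_g_per_l=33.0))   # -> 1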

  8. Classification of Animal Movement Behavior through Residence in Space and Time.

    PubMed

    Torres, Leigh G; Orben, Rachael A; Tolkova, Irina; Thompson, David R

    2017-01-01

    Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time & distance-intensive (e.g., area restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST's ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST's ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST's response to less-resolved data. Finally, we evaluate RST's performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. Overall, the application of RST to objectively explore and compare behavior patterns in movement data can enhance our fine- and broad-scale understanding of animal movement ecology.
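
    A much-simplified sketch of the RST idea follows: residence time and residence distance are accumulated over the contiguous run of fixes lying within a constant search radius of each point, normalized, and thresholded to separate time-intensive, time & distance-intensive, and transit behavior. The contiguous-run rule, the z-score normalization, and the zero thresholds are illustrative assumptions, not the authors' published algorithm.

        import numpy as np

        def rst_classify(xy, t, radius):
            """xy: (n, 2) positions; t: (n,) times; radius: constant search radius."""
            n = len(t)
            res_time, res_dist = np.zeros(n), np.zeros(n)
            for i in range(n):
                inside = np.linalg.norm(xy - xy[i], axis=1) <= radius
                j0 = i
                while j0 > 0 and inside[j0 - 1]:
                    j0 -= 1
                j1 = i
                while j1 < n - 1 and inside[j1 + 1]:
                    j1 += 1
                res_time[i] = t[j1] - t[j0]
                res_dist[i] = np.linalg.norm(np.diff(xy[j0:j1 + 1], axis=0), axis=1).sum()
            nt = (res_time - res_time.mean()) / (res_time.std() + 1e-12)
            nd = (res_dist - res_dist.mean()) / (res_dist.std() + 1e-12)
            return np.where(nt > 0,
                            np.where(nd > 0, "time & distance-intensive", "time-intensive"),
                            "transit")

        track = np.cumsum(np.random.default_rng(1).normal(size=(200, 2)), axis=0)
        print(rst_classify(track, np.arange(200.0), radius=3.0)[:10])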

  9. Compilation and analysis of global surface water concentrations for individual insecticide compounds.

    PubMed

    Stehle, Sebastian; Bub, Sascha; Schulz, Ralf

    2018-10-15

    The decades-long agricultural use of insecticides resulted in frequent contamination of surface waters globally, regularly posing high risks to aquatic biodiversity. However, the concentration levels of individual insecticide compounds have not yet been compiled and reported using global-scale data, hampering our knowledge of the insecticide exposure of aquatic ecosystems. Here, we specify measured insecticide concentrations (MICs, comprising in total 11,300 water and sediment concentrations taken from a previous publication) for 28 important insecticide compounds covering four major insecticide classes. Results show that organochlorine and organophosphate insecticides, which dominated the global insecticide market for decades, have been detected most often and at the highest concentration levels in surface waters globally. In comparison, MICs of the more recent pyrethroids and neonicotinoids were less often reported and generally at lower concentrations as a result of their later market introduction and lower application rates. An online insecticide classification calculator (ICC; available at: https://static.magic.eco/icc/v1) is provided in order to enable the comparison and classification of prospective MICs with available global insecticide concentrations. Spatial analyses of existing data show that most MICs were reported for surface waters in North America, Asia and Europe, whereas the highest concentration levels were detected in Africa, Asia and South America. An evaluation of water and sediment MICs showed that theoretical organic carbon-water partition coefficients (KOC) determined in the laboratory overestimated KOC values based on actual field concentrations by up to a factor of more than 20, with the highest deviations found for highly sorptive pyrethroids. Overall, the comprehensive compilation of insecticide field concentrations presented here is a valuable tool for the classification of future surface water monitoring results and serves as important input data for more field-relevant toxicity testing approaches and pesticide exposure and risk assessment schemes. Copyright © 2018 Elsevier B.V. All rights reserved.
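
    The laboratory-versus-field comparison mentioned above rests on the organic carbon-water partition coefficient, which can be computed from paired sediment and water concentrations; the function and example numbers below are illustrative, not values from the compiled dataset.

        def field_koc(c_sediment_ug_per_kg, c_water_ug_per_l, f_oc):
            """KOC (L/kg organic carbon) = (C_sed / f_OC) / C_water."""
            return (c_sediment_ug_per_kg / f_oc) / c_water_ug_per_l

        # e.g. a pyrethroid-like case: 50 ug/kg dry sediment with 2% organic carbon
        # against 0.05 ug/L in the water column
        print(field_koc(50.0, 0.05, 0.02))   # -> 50000.0 L/kg OC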

  10. Differential diagnosis of jaw pain using informatics technology.

    PubMed

    Nam, Y; Kim, H-G; Kho, H-S

    2018-05-21

    This study aimed to deduce evidence-based clinical clues that differentiate temporomandibular disorders (TMD)-mimicking conditions from genuine TMD by text mining using natural language processing (NLP) and recursive partitioning. We compared the medical records of 29 patients diagnosed with TMD-mimicking conditions and 290 patients diagnosed with genuine TMD. Chief complaints and medical histories were preprocessed via NLP to compare the frequency of word usage. In addition, recursive partitioning was used to deduce the optimal size of mouth opening, which could differentiate the TMD-mimicking from the genuine TMD groups. The prevalence of TMD-mimicking conditions was more evenly distributed across all age groups and showed a nearly equal gender ratio, which was significantly different from genuine TMD. TMD-mimicking conditions were caused by inflammation, infection, hereditary disease and neoplasm. Patients with TMD-mimicking conditions frequently used "mouth opening limitation" (P < .001), but less commonly used words such as "noise" (P < .001) and "temporomandibular joint" (P < .001) than patients with genuine TMD. A diagnostic classification tree based on recursive partitioning suggested that 12.0 mm of comfortable mouth opening and 26.5 mm of maximum mouth opening were the optimal mouth-opening cutoff sizes. When the combined analyses were performed based on both the text mining and clinical examination data, the predictive performance of the model was 96.6% with 69.0% sensitivity and 99.3% specificity in predicting TMD-mimicking conditions. In conclusion, this study showed that AI technology-based methods could be applied in the field of differential diagnosis of orofacial pain disorders. © 2018 John Wiley & Sons Ltd.
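
    As a toy illustration only, the two reported mouth-opening cutoffs can be applied as a simple screening rule; how the published classification tree combines them with the text-mining features is not specified in the abstract, so treating either opening below its cutoff as a TMD-mimicking flag is an assumption made here for illustration.

        def flag_tmd_mimicking(comfortable_opening_mm, maximum_opening_mm):
            return comfortable_opening_mm < 12.0 or maximum_opening_mm < 26.5

        print(flag_tmd_mimicking(10.5, 30.0))   # -> True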

  11. Pattern formation, logistics, and maximum path probability

    NASA Astrophysics Data System (ADS)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are sufficiently strong interpretations of the second law of thermodynamics to define the approach to and the nature of patterned stable steady states. For many pattern-forming systems these principles define quantifiable stable states as maxima or minima (or both) in the dissipation. An elementary statistical-mechanical proof is offered. To turn the argument full circle, the transformations of the partitions and classes which are predicated upon such minimax entropic paths can through digital modeling be directly identified with the syntactic and inferential elements of deductive logic. It follows therefore that all self-organizing or pattern-forming systems which possess stable steady states approach these states according to the imperatives of formal logic, the optimum pattern with its rich endowment of equivalence relations representing the central theorem of the associated calculus. Logic is thus ``the stuff of the universe,'' and biological evolution with its culmination in the human brain is the most significant example of all the irreversible pattern-forming processes. We thus conclude with a few remarks on the relevance of the contribution to the theory of evolution and to research on artificial intelligence.

  12. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
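
    The MST-based agglomeration step can be sketched with a Kruskal-style pass over dissimilarity-weighted edges between adjacent polygons, stopping once the dissimilarity criterion is exceeded; the edge list, weights, and threshold below are illustrative assumptions, not the framework's actual similarity measures.

        def mst_agglomerate(n_nodes, edges, max_dissimilarity):
            """edges: iterable of (dissimilarity, i, j) tuples over adjacent polygons."""
            parent = list(range(n_nodes))

            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]   # path compression
                    a = parent[a]
                return a

            merged = []
            for w, i, j in sorted(edges):
                if w > max_dissimilarity:           # grouping stops once criterion exceeded
                    break
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
                    merged.append((i, j, w))
            return merged

        edges = [(0.2, 0, 1), (0.9, 1, 2), (0.1, 2, 3), (0.5, 0, 3)]
        print(mst_agglomerate(4, edges, max_dissimilarity=0.6))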

  13. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit volumetric capabilities of CT that provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition that remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds / slice) that makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.

  14. SEU System Analysis: Not Just the Sum of All Parts

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth

    2014-01-01

    Single event upset (SEU) analysis of complex systems is challenging. Currently, system SEU analysis is performed by component level partitioning and then either: the most dominant SEU cross-sections (SEUs) are used in system error rate calculations; or the partition SEUs are summed to eventually obtain a system error rate. In many cases, system error rates are overestimated because these methods generally overlook system level derating factors. The problem with overestimating is that it can cause overdesign and consequently negatively affect the following: cost, schedule, functionality, and validation/verification. The scope of this presentation is to discuss the risks involved with our current scheme of SEU analysis for complex systems; and to provide alternative methods for improvement.

  15. A methodology for commonality analysis, with applications to selected space station systems

    NASA Technical Reports Server (NTRS)

    Thomas, Lawrence Dale

    1989-01-01

    The application of commonality in a system represents an attempt to reduce costs by reducing the number of unique components. A formal method for conducting commonality analysis has not been established. In this dissertation, commonality analysis is characterized as a partitioning problem. The cost impacts of commonality are quantified in an objective function, and the solution is that partition which minimizes this objective function. Clustering techniques are used to approximate a solution, and sufficient conditions are developed which can be used to verify the optimality of the solution. This method for commonality analysis is general in scope. It may be applied to the various types of commonality analysis required in the conceptual, preliminary, and detail design phases of the system development cycle.

  16. Geochemical fractionation and pollution assessment of Zn, Cu, and Fe in surface sediments from Shadegan Wildlife Refuge, southwest of Iran.

    PubMed

    Chaharlang, Behnam Heidari; Bakhtiari, Alireza Riyahi; Mohammadi, Jahangard; Farshchi, Parvin

    2017-09-01

    This research focuses on the fractionation and distribution patterns of heavy metals (Zn, Cu, and Fe) in surficial sediments collected from Shadegan Wildlife Refuge, the biggest wetland in the southern part of Iran, to provide an overall classification for the sources of metals in the study area using a sequential extraction method. For this purpose, a four-step sequential extraction technique was applied to define the partitioning of the metals into different geochemical phases of the sediment. The results illustrated that the average total levels of Zn, Cu, and Fe in surface sediments were 55.20 ± 16.04, 22.86 ± 5.68, and 25,979.01 ± 6917.91 μg/g dw, respectively. On average, the chemical partitioning of all metals in most stations was in the order of residual > oxidizable-organic > acid-reducible > exchangeable. In the same way, the results of calculated geochemical indices revealed that Cu, Zn, and Fe concentrations are mainly influenced by lithogenic origins. Compared with consensus-based SQGs, Cu was likely to result in occasionally harmful biological effects on the biota.

  17. BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.

    PubMed

    White, B J; Amrine, D E; Larson, R L

    2018-04-14

    Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.

  18. Advanced flight hardware for organic separations using aqueous two-phase partitioning

    NASA Astrophysics Data System (ADS)

    Deuser, Mark S.; Vellinger, John C.; Weber, John T.

    1996-03-01

    Separation of cells and cell components is the limiting factor in many biomedical research and pharmaceutical development processes. Aqueous Two-Phase Partitioning (ATPP) is a unique separation technique which allows purification and classification of biological materials. SHOT has employed the ATPP process in separation equipment developed for both space and ground applications. Initial equipment development and research focused on the ORganic SEParation (ORSEP) space flight experiments that were performed on suborbital rockets and the shuttle. ADvanced SEParations (ADSEP) technology was developed as the next generation of ORSEP equipment through a NASA Small Business Innovation Research (SBIR) contract. Under the SBIR contract, a marketing study was conducted, indicating a growing commercial market exists among biotechnology firms for ADSEP equipment and associated flight research and development services. SHOT is preparing to begin manufacturing and marketing laboratory versions of the ADSEP hardware for the ground-based market. In addition, through a self-financed SBIR Phase III effort, SHOT is fabricating and integrating the ADSEP flight hardware for a commercially-driven SPACEHAB 04 experiment that will be the initial step in marketing space separations services. The ADSEP ground-based and microgravity research is expected to play a vital role in developing important new biomedical and pharmaceutical products.

  19. A Human Rights and History Education Model for Teaching about Historical Events of Mass Violence: The 1947 British India Partition

    ERIC Educational Resources Information Center

    Chhabra, Meenakshi

    2017-01-01

    This article examines singular historical narratives of the 1947 British India Partition in four history textbooks from India, Pakistan, Bangladesh, and Britain, respectively. Drawing on analysis and work in the field, this study proposes a seven-module "integrated snail model" with a human rights orientation that can be applied to…

  20. Partitioning in aqueous two-phase systems: Analysis of strengths, weaknesses, opportunities and threats.

    PubMed

    Soares, Ruben R G; Azevedo, Ana M; Van Alstine, James M; Aires-Barros, M Raquel

    2015-08-01

    For half a century aqueous two-phase systems (ATPSs) have been applied for the extraction and purification of biomolecules. In spite of their simplicity, selectivity, and relatively low cost they have not been significantly employed for industrial scale bioprocessing. Recently their ability to be readily scaled and interface easily in single-use, flexible biomanufacturing has led to industrial re-evaluation of ATPSs. The purpose of this review is to perform a SWOT analysis that includes a discussion of: (i) strengths of ATPS partitioning as an effective and simple platform for biomolecule purification; (ii) weaknesses of ATPS partitioning in regard to intrinsic problems and possible solutions; (iii) opportunities related to biotechnological challenges that ATPS partitioning may solve; and (iv) threats related to alternative techniques that may compete with ATPS in performance, economic benefits, scale up and reliability. This approach provides insight into the current status of ATPS as a bioprocessing technique and it can be concluded that most of the perceived weakness towards industrial implementation have now been largely overcome, thus paving the way for opportunities in fermentation feed clarification, integration in multi-stage operations and in single-step purification processes. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Finding reproducible cluster partitions for the k-means algorithm

    PubMed Central

    2013-01-01

    K-means clustering is widely used for exploratory data analysis. While its dependence on initialisation is well-known, it is common practice to assume that the partition with the lowest sum-of-squares (SSQ) total, i.e., within-cluster variance, is both reproducible under repeated initialisations and also the closest that k-means can provide to true structure, when applied to synthetic data. We show that this is generally the case for small numbers of clusters, but for values of k that are still of theoretical and practical interest, similar values of SSQ can correspond to markedly different cluster partitions. This paper extends stability measures previously presented in the context of finding optimal values of cluster number, into a component of a 2-d map of the local minima found by the k-means algorithm, from which not only can values of k be identified for further analysis but, more importantly, it is made clear whether the best SSQ is a suitable solution or whether obtaining a consistently good partition requires further application of the stability index. The proposed method is illustrated by application to five synthetic datasets replicating a real world breast cancer dataset with varying data density, and a large bioinformatics dataset. PMID:23369085
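
    The reproducibility issue described above is easy to reproduce in miniature: repeated k-means initialisations can reach similar SSQ (inertia) values while producing noticeably different partitions, so SSQ is compared here against a pairwise agreement score. The synthetic data and the choice of k are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 10))        # weak structure makes the instability visible

        runs = [KMeans(n_clusters=8, n_init=1, random_state=s).fit(X) for s in range(10)]
        best = min(runs, key=lambda m: m.inertia_)
        for m in runs:
            print(f"SSQ={m.inertia_:.1f}  agreement with best-SSQ run="
                  f"{adjusted_rand_score(best.labels_, m.labels_):.2f}")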

  2. Finding reproducible cluster partitions for the k-means algorithm.

    PubMed

    Lisboa, Paulo J G; Etchells, Terence A; Jarman, Ian H; Chambers, Simon J

    2013-01-01

    K-means clustering is widely used for exploratory data analysis. While its dependence on initialisation is well-known, it is common practice to assume that the partition with the lowest sum-of-squares (SSQ) total, i.e., within-cluster variance, is both reproducible under repeated initialisations and also the closest that k-means can provide to true structure, when applied to synthetic data. We show that this is generally the case for small numbers of clusters, but for values of k that are still of theoretical and practical interest, similar values of SSQ can correspond to markedly different cluster partitions. This paper extends stability measures previously presented in the context of finding optimal values of cluster number, into a component of a 2-d map of the local minima found by the k-means algorithm, from which not only can values of k be identified for further analysis but, more importantly, it is made clear whether the best SSQ is a suitable solution or whether obtaining a consistently good partition requires further application of the stability index. The proposed method is illustrated by application to five synthetic datasets replicating a real world breast cancer dataset with varying data density, and a large bioinformatics dataset.

  3. Stockholder projector analysis: A Hilbert-space partitioning of the molecular one-electron density matrix with orthogonal projectors

    NASA Astrophysics Data System (ADS)

    Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel

    2012-01-01

    A previously introduced partitioning of the molecular one-electron density matrix over atoms and bonds [D. Vanfleteren et al., J. Chem. Phys. 133, 231103 (2010)] is investigated in detail. Orthogonal projection operators are used to define atomic subspaces, as in Natural Population Analysis. The orthogonal projection operators are constructed with a recursive scheme. These operators are chemically relevant and obey a stockholder principle, familiar from the Hirshfeld-I partitioning of the electron density. The stockholder principle is extended to density matrices, where the orthogonal projectors are considered to be atomic fractions of the summed contributions. All calculations are performed as matrix manipulations in one-electron Hilbert space. Mathematical proofs and numerical evidence concerning this recursive scheme are provided in the present paper. The advantages associated with the use of these stockholder projection operators are examined with respect to covalent bond orders, bond polarization, and transferability.

  4. Proposed Molecular Beam Determination of Energy Partition in the Photodissociation of Polyatomic Molecules

    DOE R&D Accomplishments Database

    Zare, P. N.; Herschbach, D. R.

    1964-01-29

    Conventional photochemical experiments give no information about the partitioning of energy between translational recoil and internal excitation of the fragment molecules formed in photodissociation of a polyatomic molecule. In a molecular beam experiment, it becomes possible to determine the energy partition from the form of the laboratory angular distribution of one of the photodissociation products. A general kinematic analysis is worked out in detail, and the uncertainty introduced by the finite angular resolution of the apparatus and the velocity spread in the parent beam is examined. The experimental requirements are evaluated for the photolysis of methyl iodide by the 2537 angstrom Hg line.

  5. Comparison of the applicability domain of a quantitative structure-activity relationship for estrogenicity with a large chemical inventory.

    PubMed

    Netzeva, Tatiana I; Gallegos Saliner, Ana; Worth, Andrew P

    2006-05-01

    The aim of the present study was to illustrate that it is possible and relatively straightforward to compare the domain of applicability of a quantitative structure-activity relationship (QSAR) model in terms of its physicochemical descriptors with a large inventory of chemicals. A training set of 105 chemicals with data for relative estrogenic gene activation, obtained in a recombinant yeast assay, was used to develop the QSAR. A binary classification model for predicting active versus inactive chemicals was developed using classification tree analysis and two descriptors with a clear physicochemical meaning (octanol-water partition coefficient, or log Kow, and the number of hydrogen bond donors, or n(Hdon)). The model demonstrated a high overall accuracy (90.5%), with a sensitivity of 95.9% and a specificity of 78.1%. The robustness of the model was evaluated using the leave-many-out cross-validation technique, whereas the predictivity was assessed using an artificial external test set composed of 12 compounds. The domain of the QSAR training set was compared with the chemical space covered by the European Inventory of Existing Commercial Chemical Substances (EINECS), as incorporated in the CDB-EC software, in the log Kow / n(Hdon) plane. The results showed that the training set and, therefore, the applicability domain of the QSAR model covers a small part of the physicochemical domain of the inventory, even though a simple method for defining the applicability domain (ranges in the descriptor space) was used. However, a large number of compounds are located within the narrow descriptor window.
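
    A two-descriptor classification tree of the kind described above can be sketched as follows; the synthetic log Kow and n(Hdon) values and the toy activity rule are illustrative assumptions, not the study's yeast-assay training set.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(3)
        logkow = rng.uniform(-2, 8, 105)          # stand-in for the 105-compound training set
        n_hdon = rng.integers(0, 6, 105)
        X = np.column_stack([logkow, n_hdon])
        y = ((logkow > 3) & (n_hdon <= 2)).astype(int)   # toy "active" rule

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print("training accuracy:", tree.score(X, y))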

  6. Intelligence system based classification approach for medical disease diagnosis

    NASA Astrophysics Data System (ADS)

    Sagir, Abdu Masanawa; Sathasivam, Saratha

    2017-08-01

    Predicting breast cancer in women who have no signs or symptoms of the disease, as well as survivability after certain surgeries, has been a challenging problem for medical researchers. The decision about the presence or absence of disease depends more on the physician's intuition, experience, and skill in comparing current indicators with previous ones than on the knowledge-rich data hidden in a database. This is a crucial and challenging task. The goal is to predict the patient's condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework is described for designing and evaluating the classification performance of two discrete ANFIS systems with hybrid learning algorithms, combining least-squares estimation with either the modified Levenberg-Marquardt or the gradient descent algorithm, which can be used by physicians to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test sets drawn from the mammographic mass and Haberman's survival datasets in the University of California at Irvine (UCI) machine learning repository. The robustness of the performance, measured in terms of total accuracy, sensitivity, and specificity, is examined. The proposed method achieves superior performance when compared with a conventional ANFIS trained by gradient descent and some related existing methods. The implementation used MATLAB R2014a (version 8.3) and was executed on a PC with an Intel Pentium IV E7400 processor running at 2.80 GHz with 2.0 GB of RAM.

  7. On N = 1 partition functions without R-symmetry

    DOE PAGES

    Knodel, Gino; Liu, James T.; Zayas, Leopoldo A. Pando

    2015-03-25

    Here, we examine the dependence of four-dimensional Euclidean N = 1 partition functions on coupling constants. In particular, we focus on backgrounds without R-symmetry, which arise in the rigid limit of old minimal supergravity. Backgrounds preserving a single supercharge may be classified as having either trivial or SU(2) structure, with the former including S^4. We show that, in the absence of additional symmetries, the partition function depends non-trivially on all couplings in the trivial structure case, and (anti)-holomorphically on couplings in the SU(2) structure case. In both cases, this allows for ambiguities in the form of finite counterterms, which in principle render the partition function unphysical. However, we argue that on dimensional grounds, ambiguities are restricted to finite powers in relevant couplings, and can therefore be kept under control. On the other hand, for backgrounds preserving supercharges of opposite chiralities, the partition function is completely independent of all couplings. In this case, the background admits an R-symmetry, and the partition function is physical, in agreement with the results obtained in the rigid limit of new minimal supergravity. Based on a systematic analysis of supersymmetric invariants, we also demonstrate that N = 1 localization is not possible for backgrounds without R-symmetry.

  8. Partition of volatile organic compounds from air and from water into plant cuticular matrix: An LFER analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platts, J.A.; Abraham, M.H.

    The partitioning of organic compounds between air and foliage and between water and foliage is of considerable environmental interest. The purpose of this work is to show that partitioning into the cuticular matrix of one particular species can be satisfactorily modeled by general equations the authors have previously developed and, hence, that the same general equations could be used to model partitioning into other plant materials of the same or different species. The general equations are linear free energy relationships that employ descriptors for polarity/polarizability, hydrogen bond acidity and basicity, dispersive effects, and volume. They have been applied to the partition of 62 very varied organic compounds between cuticular matrix of the tomato fruit, Lycopersicon esculentum, and either air (MXa) or water (MXw). Values of log MXa covering a range of 12.4 log units are correlated with a standard deviation of 0.232 log unit, and values of log MXw covering a range of 7.6 log units are correlated with an SD of 0.236 log unit. Possibilities are discussed for the prediction of new air-plant cuticular matrix and water-plant cuticular matrix partition values on the basis of the equations developed.

  9. Equilibrium partitioning of organic compounds to OASIS HLB® as a function of compound concentration, pH, temperature and salinity.

    PubMed

    Jeong, Yoonah; Schäffer, Andreas; Smith, Kilian

    2017-05-01

    Oasis hydrophilic-lipophilic balance® (Oasis HLB) is commonly employed in solid phase extraction (SPE) of environmental contaminants and within polar organic chemical integrative passive samplers (POCIS). In this study batch experiments were carried out to evaluate the relative affinity of a range of relevant organic pollutants to Oasis HLB in aqueous systems. The influence of sorbate concentration, temperature, pH, and salinity on the equilibrium sorption was investigated. Equilibrium partition ratios (KD) of 28 compounds were determined, ranging over three orders of magnitude from 1.16 × 10³ L/kg (atenolol) to 1.07 × 10⁶ L/kg (isoproturon). The Freundlich model was able to describe the equilibrium partitioning to Oasis HLB, and an analysis of the thermodynamic parameters revealed the spontaneous and exothermic nature of the partitioning process. Ionic strength had only a minor effect on the partitioning, whereas pH had a considerable effect but only for ionizable compounds. The results show that apolar interactions between the Oasis HLB and analyte mainly determine the equilibrium partitioning. These research findings can be used to optimize the application of SPE and POCIS for analyses of environmental contaminants even in complex mixtures. Copyright © 2017 Elsevier Ltd. All rights reserved.
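
    A minimal sketch of fitting the Freundlich isotherm q = K_F * C**(1/n) to equilibrium sorption data, and of reading off a partition ratio K_D = q / C at one concentration, is shown below; the data points are invented for illustration and are not measurements from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def freundlich(c, k_f, inv_n):
            return k_f * c ** inv_n

        c_water = np.array([0.5, 1.0, 2.0, 5.0, 10.0])             # ug/L at equilibrium
        q_sorbed = np.array([2.6e3, 4.9e3, 9.1e3, 2.0e4, 3.6e4])   # ug/kg sorbent

        (k_f, inv_n), _ = curve_fit(freundlich, c_water, q_sorbed, p0=(1e3, 1.0))
        k_d_at_1 = freundlich(1.0, k_f, inv_n) / 1.0               # L/kg at C = 1 ug/L
        print(f"K_F={k_f:.0f}, 1/n={inv_n:.2f}, K_D at 1 ug/L ~ {k_d_at_1:.0f} L/kg")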

  10. Functional analysis and classification of phytoplankton based on data from an automated flow cytometer.

    PubMed

    Malkassian, Anthony; Nerini, David; van Dijk, Mark A; Thyssen, Melilotus; Mante, Claude; Gregori, Gerald

    2011-04-01

    Analytical flow cytometry (FCM) is well suited for the analysis of phytoplankton communities in fresh and sea waters. The measurement of light scatter and autofluorescence properties of particles by FCM provides optical fingerprints, which enables different phytoplankton groups to be separated. A submersible version of the CytoSense flow cytometer (the CytoSub) has been designed for in situ autonomous sampling and analysis, making it possible to monitor phytoplankton at a short temporal scale and obtain accurate information about its dynamics. For data analysis, a manual clustering is usually performed a posteriori: data are displayed on histograms and scatterplots, and group discrimination is made by drawing and combining regions (gating). The purpose of this study is to provide greater objectivity in the data analysis by applying a nonmanual and consistent method to automatically discriminate clusters of particles. In other words, we seek partitioning methods based on the optical fingerprints of each particle. As the CytoSense is able to record the full pulse shape for each variable, it quickly generates a large and complex dataset to analyze. The shape, length, and area of each curve were chosen as descriptors for the analysis. To test the developed method, numerical experiments were performed on simulated curves. Then, the method was applied and validated on phytoplankton culture data. Promising results have been obtained with a mixture of various species whose optical fingerprints overlapped considerably and could not be accurately separated using manual gating. Copyright © 2011 International Society for Advancement of Cytometry.

  11. Experimental partitioning of rare earth elements and scandium among armalcolite, ilmenite, olivine and mare basalt liquid

    NASA Technical Reports Server (NTRS)

    Irving, A. J.; Merrill, R. B.; Singleton, D. E.

    1978-01-01

    An experimental study was carried out to measure partition coefficients for two rare-earth elements (Sm and Tm) and Sc among armalcolite, ilmenite, olivine and liquid coexisting in a system modeled on high-Ti mare basalt 74275. This 'primitive' sample was chosen for study because its major and trace element chemistry as well as its equilibrium phase relations at atmospheric pressure are known from previous studies. Beta-track analytical techniques were used so that partition coefficients could be measured in an environment whose bulk trace element composition is similar to that of the natural basalt. Partition coefficients for Cr and Mn were determined in the same experiments by microprobe analysis. The only equilibrium partial melting model appears to be one in which ilmenite is initially present in the source region but is consumed by melting before segregation of the high-Ti mare basalt liquid from the residue.

  12. Analytical prediction of the interior noise for cylindrical models of aircraft fuselages for prescribed exterior noise fields. Phase 2: Models for sidewall trim, stiffened structures and cabin acoustics with floor partition

    NASA Technical Reports Server (NTRS)

    Pope, L. D.; Wilby, E. G.

    1982-01-01

    An airplane interior noise prediction model is developed to determine the important parameters associated with sound transmission into the interiors of airplanes, and to identify appropriate noise control methods. Models for stiffened structures and for cabin acoustics with a floor partition are developed. Validation studies are undertaken using three test articles: a ring-stringer-stiffened cylinder, an unstiffened cylinder with a floor partition, and a ring-stringer-stiffened cylinder with a floor partition and sidewall trim. The noise reductions of the three test articles are computed using the theoretical models and compared to measured values. A statistical analysis of the comparison data indicates that there is no bias in the predictions although a substantial random error exists, so that a discrepancy of more than five or six dB can be expected for about one out of three predictions.

  13. Copula-based analysis of rhythm

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. Lanfredi

    2016-06-01

    In this paper we establish stochastic profiles of the rhythm for three languages: English, Japanese and Spanish. We model the increase or decrease of the acoustic energy collected in three bands of the acoustic signal. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the three marginal processes, one for each energy band, and the partition coming from the multivariate Markov chain. Then, all the partitions are linked using a copula in order to estimate the transition probabilities.

  14. Significant Scales in Community Structure

    NASA Astrophysics Data System (ADS)

    Traag, V. A.; Krings, G.; van Dooren, P.

    2013-10-01

    Many complex networks show signs of modular structure, uncovered by community detection. Although many methods succeed in revealing various partitions, it remains difficult to detect at what scale some partition is significant. This problem shows foremost in multi-resolution methods. We here introduce an efficient method for scanning for resolutions in one such method. Additionally, we introduce the notion of ``significance'' of a partition, based on subgraph probabilities. Significance is independent of the exact method used, so could also be applied in other methods, and can be interpreted as the gain in encoding a graph by making use of a partition. Using significance, we can determine ``good'' resolution parameters, which we demonstrate on benchmark networks. Moreover, optimizing significance itself also shows excellent performance. We demonstrate our method on voting data from the European Parliament. Our analysis suggests the European Parliament has become increasingly ideologically divided and that nationality plays no role.

  15. Definition of an Acceptable Glass composition Region (AGCR) via an Index System and a Partitioning Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D. K.; Taylor, A. S.; Edwards, T.B.

    2005-06-26

    The objective of this investigation was to appeal to the available ComPro{trademark} database of glass compositions and measured PCTs that have been generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS)--a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system to differentiate between acceptable and unacceptable glasses, two types of errors (Type I and Type II errors) were monitored. A Type I error reflects that a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the ''Index System'') was based on the use of an evolving system of compositional constraints which were used to explore the possibility of defining an AGCR. This approach was primarily based on ''glass science'' insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest to improve melt rate or increase waste loadings for DWPF as compared to the current durability model, Type II errors were also committed. In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors. A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error results in a more serious misclassification that could result in allowing the transfer of a Slurry Mix Evaporator (SME) batch to the melter, which is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined ''acceptability'' criteria. More specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data driven and empirically derived--glass science was not a factor. In this approach, the collection of composition--durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al{sub 2}O{sub 3} for this dataset) that most effectively partitions the PCT responses (NL [B]'s)--perhaps not 100% effective based on a single oxide. Based on this initial split, a second request was made to split a particular set of the ''Y'' values (good or bad PCTs based on the 10 g/L limit) based on the next most critical ''X'' variable. This ''splitting'' or ''partitioning'' process was repeated until an AGCR was defined based on the use of only 3 oxides (Al{sub 2}O{sub 3}, CaO, and MgO) and critical values of > 3.75 wt% Al{sub 2}O{sub 3}, {ge} 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned such that no Type II errors were committed. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which did cause some initial concerns regarding excessive conservatism regardless of its ability to identify an AGCR. However, a preliminary review of glasses within the 1428 ''acceptable'' glasses defining the AGCR includes glass systems of interest to support the accelerated mission.
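
    A minimal sketch of how the reported critical values could be applied as a composition-based screen, assuming a hypothetical table of glass compositions with measured NL [B] values (the column names and example records below are invented, not ComPro data):

```python
# Hypothetical example: apply the reported splits and tally Type I / Type II errors.
import pandas as pd

glasses = pd.DataFrame({
    "Al2O3": [4.1, 3.2, 5.0, 3.9],   # wt%
    "CaO":   [0.8, 1.1, 0.4, 0.7],   # wt%
    "MgO":   [1.2, 2.0, 4.0, 3.0],   # wt%
    "NL_B":  [6.0, 12.5, 8.0, 9.5],  # measured PCT response, g/L (invented)
})

# Composition-based acceptability from the reported critical values
predicted_ok = (
    (glasses["Al2O3"] > 3.75)
    & (glasses["CaO"] >= 0.616)
    & (glasses["MgO"] < 3.521)
)
measured_ok = glasses["NL_B"] < 10.0            # durability acceptance limit

type_I = (measured_ok & ~predicted_ok).sum()    # acceptable glass rejected by the screen
type_II = (~measured_ok & predicted_ok).sum()   # unacceptable glass passed by the screen
print(f"Type I errors: {type_I}, Type II errors: {type_II}")
```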

  16. Equilibrium water and solute uptake in silicone hydrogels.

    PubMed

    Liu, D E; Dursch, T J; Oh, Y; Bregante, D T; Chan, S Y; Radke, C J

    2015-05-01

    Equilibrium water content of and solute partitioning in silicone hydrogels (SiHys) are investigated using gravimetric analysis, fluorescence confocal laser-scanning microscopy (FCLSM), and back extraction with UV/Vis-absorption spectrophotometry. Synthesized silicone hydrogels consist of silicone monomer, hydrophilic monomer, cross-linking agent, and triblock-copolymer macromer used as an amphiphilic compatibilizer to prevent macrophase separation. In all cases, immiscibility of the silicone and hydrophilic polymers results in microphase-separated morphologies. To investigate solute uptake in each of the SiHy microphases, equilibrium partition coefficients are obtained for two hydrophilic solutes (i.e., theophylline and caffeine dissolved in aqueous phosphate-buffered saline) and two oleophilic solutes (i.e., Nile Red and Bodipy Green dissolved in silicone oil), respectively. Measured water contents and aqueous-solute partition coefficients increase linearly with increasing solvent-free hydrophilic-polymer volume fraction. Conversely, oleophilic-solute partition coefficients decrease linearly with rising solvent-free hydrophilic-polymer volume fraction (i.e., decreasing hydrophobic silicone-polymer fraction). We quantitatively predict equilibrium SiHy water and solute uptake assuming that water and aqueous solutes reside only in hydrophilic microdomains, whereas oleophilic solutes partition predominately into silicone microdomains. Predicted water contents and solute partition coefficients are in excellent agreement with experiment. Our new procedure permits a priori estimation of SiHy water contents and solute partition coefficients based solely on properties of silicone and hydrophilic homopolymer hydrogels, eliminating the need for further mixed-polymer-hydrogel experiments. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
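
    A minimal sketch of the mixing rule implied by the abstract, assuming aqueous solutes reside only in the hydrophilic microdomains and oleophilic solutes only in the silicone microdomains; the homopolymer partition coefficients used below are placeholders, not the measured values:

```python
# Sketch of the assumed mixing rule; the homopolymer coefficients are placeholders.
def hydrogel_partition(phi_hydrophilic, k_aqueous_homopolymer, k_silicone_homopolymer):
    """Return (aqueous-solute K, oleophilic-solute K) for a microphase-separated SiHy."""
    k_aqueous = phi_hydrophilic * k_aqueous_homopolymer                  # solute confined to hydrophilic phase
    k_oleophilic = (1.0 - phi_hydrophilic) * k_silicone_homopolymer      # solute confined to silicone phase
    return k_aqueous, k_oleophilic

print(hydrogel_partition(phi_hydrophilic=0.4,
                         k_aqueous_homopolymer=0.9,     # e.g. a theophylline-like solute
                         k_silicone_homopolymer=50.0))  # e.g. a Nile-Red-like solute
```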

  17. Protein-coding genes combined with long noncoding RNA as a novel transcriptome molecular staging model to predict the survival of patients with esophageal squamous cell carcinoma.

    PubMed

    Guo, Jin-Cheng; Wu, Yang; Chen, Yang; Pan, Feng; Wu, Zhi-Yong; Zhang, Jia-Sheng; Wu, Jian-Yi; Xu, Xiu-E; Zhao, Jian-Mei; Li, En-Min; Zhao, Yi; Xu, Li-Yan

    2018-04-09

    Esophageal squamous cell carcinoma (ESCC) is the predominant subtype of esophageal carcinoma in China. The aim of this study was to develop a staging model to predict outcomes of patients with ESCC. Using Cox regression analysis, principal component analysis (PCA), partitioning clustering, Kaplan-Meier analysis, receiver operating characteristic (ROC) curve analysis, and classification and regression tree (CART) analysis, we mined the Gene Expression Omnibus database to determine the expression profiles of genes in 179 patients with ESCC from the GSE63624 and GSE63622 datasets. Univariate Cox regression analysis of the GSE63624 dataset revealed that 2404 protein-coding genes (PCGs) and 635 long non-coding RNAs (lncRNAs) were associated with the survival of patients with ESCC. PCA categorized these PCGs and lncRNAs into three principal components (PCs), which were used to cluster the patients into three groups. ROC analysis demonstrated that the predictive ability of the PCG-lncRNA PCs when applied to new patients was better than that of tumor-node-metastasis staging (area under ROC curve [AUC]: 0.69 vs. 0.65, P < 0.05). Accordingly, using CART analysis in the GSE63624 dataset, we constructed a molecular disaggregated model comprising one lncRNA and two PCGs, which we designated the LSB staging model. This LSB staging model classified the GSE63622 dataset of patients into three different groups, and its effectiveness was validated by analysis of another cohort of 105 patients. The LSB staging model has clinical significance for the prognosis prediction of patients with ESCC and may serve as a three-gene staging microarray.
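
    The sketch below illustrates the shape of such an analysis chain (PCA, partitioning clustering, then CART) on synthetic expression data with scikit-learn; it is not the authors' pipeline and uses none of their GEO data:

```python
# Synthetic illustration of the PCA -> partitioning clustering -> CART chain.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
expression = rng.normal(size=(179, 300))     # 179 patients x 300 survival-associated features (synthetic)

pcs = PCA(n_components=3).fit_transform(expression)                        # three principal components
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)  # partition patients into 3 groups

# CART then approximates the grouping with a small tree, mimicking the
# reduction to a compact staging model built from a handful of genes.
cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(expression, groups)
print("training accuracy of the compact tree:", cart.score(expression, groups))
```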

  18. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    NASA Astrophysics Data System (ADS)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.

  19. A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.

    PubMed

    Brusco, Michael J; Shireman, Emilie; Steinley, Douglas

    2017-09-01

    The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
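
    For illustration, the sketch below contrasts scikit-learn's K-means with a bare-bones K-median update (medians as centers, city-block distance) on synthetic dichotomous data; both the data and the simplistic K-median loop are assumptions for demonstration only:

```python
# Toy comparison on synthetic binary data; the K-median loop is deliberately minimal.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = (rng.random((200, 10)) < np.repeat([[0.2], [0.8]], 100, axis=0)).astype(float)

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

def k_median(X, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each object to the nearest center under the L1 (city-block) distance
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([np.median(X[labels == j], axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

kmed_labels = k_median(X)
agreement = max((km_labels == kmed_labels).mean(), (km_labels != kmed_labels).mean())
print("agreement between the two partitions:", agreement)
```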

  20. Clustering Financial Time Series by Network Community Analysis

    NASA Astrophysics Data System (ADS)

    Piccardi, Carlo; Calatroni, Lisa; Bertoni, Fabio

    In this paper, we describe a method for clustering financial time series which is based on community analysis, a recently developed approach for partitioning the nodes of a network (graph). A network with N nodes is associated to the set of N time series. The weight of the link (i, j), which quantifies the similarity between the two corresponding time series, is defined according to a metric based on symbolic time series analysis, which has recently proved effective in the context of financial time series. Then, searching for network communities allows one to identify groups of nodes (and then time series) with strong similarity. A quantitative assessment of the significance of the obtained partition is also provided. The method is applied to two distinct case-studies concerning the US and Italy Stock Exchange, respectively. In the US case, the stability of the partitions over time is also thoroughly investigated. The results favorably compare with those obtained with the standard tools typically used for clustering financial time series, such as the minimal spanning tree and the hierarchical tree.
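
    A hedged sketch of the overall idea: pairwise similarities between (synthetic) return series are turned into a weighted graph and communities are read off it with networkx. Plain correlation stands in for the paper's symbolic-analysis metric:

```python
# Synthetic sketch: correlation-based similarity graph plus modularity communities.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
factors = rng.normal(size=(2, 250))                      # two hidden "sector" factors
returns = np.vstack([factors[i % 2] + 0.5 * rng.normal(size=250) for i in range(20)])

corr = np.corrcoef(returns)
G = nx.Graph()
for i in range(20):
    for j in range(i + 1, 20):
        if corr[i, j] > 0:                               # keep positive similarities as edge weights
            G.add_edge(i, j, weight=corr[i, j])

communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```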

  1. Statistical Inference on Optimal Points to Evaluate Multi-State Classification Systems

    DTIC Science & Technology

    2014-09-18

    A multi-state classification system maps an event set, E = (ε1, ε2, ..., εk), to k distinct elements of a label set, L = (l1, l2, ..., lk); these partitions may be referred to as classes. Each element is characterized by a set of features, F = (f1, f2, ..., fm), which are then used to assign the different elements from E to the respective labels in L.

  2. Satellite classification and segmentation using non-additive entropy

    NASA Astrophysics Data System (ADS)

    Assirati, Lucas; Souto Martinez, Alexandre; Martinez Bruno, Odemir

    2014-03-01

    Here we compare the Boltzmann-Gibbs-Shannon (standard) entropy with the Tsallis entropy for the pattern recognition and segmentation of colored images obtained by satellites via "Google Earth". By segmentation we mean partitioning an image to locate regions of interest. Here, we discriminate and define image partition classes according to a training basis. This training basis consists of three pattern classes: aquatic, urban, and vegetation regions. Our numerical experiments demonstrate that the Tsallis entropy, used as a feature vector composed of distinct entropic indexes q, outperforms the standard entropy. There are several applications of our proposed methodology, since satellite images can be used to monitor migration from rural to urban regions, agricultural activities, oil spreading on the ocean, etc.
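
    The Tsallis entropy used here as a feature is S_q = (1 - Σ_i p_i^q)/(q - 1), which recovers the standard entropy as q → 1. Below is a minimal sketch of such a feature vector computed from the grey-level histogram of an image patch (the patch is random noise, not satellite data):

```python
# Minimal Tsallis-entropy feature vector from a grey-level histogram.
import numpy as np

def tsallis_entropy(patch, q, bins=256):
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    if np.isclose(q, 1.0):                    # q -> 1 recovers Boltzmann-Gibbs-Shannon entropy
        return float(-(p * np.log(p)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32))   # stand-in for a satellite image region
feature_vector = [tsallis_entropy(patch, q) for q in (0.5, 1.0, 1.5, 2.0)]
print(feature_vector)
```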

  3. Classification of multipartite entanglement via negativity fonts

    NASA Astrophysics Data System (ADS)

    Sharma, S. Shelly; Sharma, N. K.

    2012-04-01

    Partial transposition of state operator is a well-known tool to detect quantum correlations between two parts of a composite system. In this paper, the global partial transpose (GPT) is linked to conceptually multipartite underlying structures in a state—the negativity fonts. If K-way negativity fonts with nonzero determinants exist, then selective partial transposition of a pure state, involving K of the N qubits (K⩽N), yields an operator with negative eigenvalues, identifying K-body correlations in the state. Expansion of GPT in terms of K-way partially transposed (KPT) operators reveals the nature of intricate intrinsic correlations in the state. Classification criteria for multipartite entangled states based on the underlying structure of global partial transpose of canonical state are proposed. The number of N-partite entanglement types for an N-qubit system is found to be 2^(N-1) - N + 2, while the number of major entanglement classes is 2^(N-1) - 1. Major classes for three- and four-qubit states are listed. Subclasses are determined by the number and type of negativity fonts in canonical states.

  4. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.

  5. Structural partitioning of complex structures in the medium-frequency range. An application to an automotive vehicle

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2011-02-01

    In a recent work [Journal of Sound and Vibration 323 (2009) 849-863], the authors presented an energy-density field approach for the vibroacoustic analysis of complex structures in the low and medium frequency ranges. In this approach, a local vibroacoustic energy model as well as a simplification of this model were constructed. In this paper, firstly, an extension of the previous theory is performed in order to include the case of general input forces; secondly, a structural partitioning methodology is presented along with a set of tools used for the construction of a partitioning. Finally, an application is presented for an automotive vehicle.

  6. Multi-viewpoint clustering analysis

    NASA Technical Reports Server (NTRS)

    Mehrotra, Mala; Wild, Chris

    1993-01-01

    In this paper, we address the feasibility of partitioning rule-based systems into a number of meaningful units to enhance the comprehensibility, maintainability and reliability of expert systems software. Preliminary results have shown that no single structuring principle or abstraction hierarchy is sufficient to understand complex knowledge bases. We therefore propose the Multi View Point - Clustering Analysis (MVP-CA) methodology to provide multiple views of the same expert system. We present the results of using this approach to partition a deployed knowledge-based system that navigates the Space Shuttle's entry. We also discuss the impact of this approach on verification and validation of knowledge-based systems.

  7. On the difficulty to delimit disease risk hot spots

    NASA Astrophysics Data System (ADS)

    Charras-Garrido, M.; Azizi, L.; Forbes, F.; Doyle, S.; Peyrard, N.; Abrial, D.

    2013-06-01

    Representing the health state of a region is a helpful tool to highlight spatial heterogeneity and localize high risk areas. For ease of interpretation and to determine where to apply control procedures, we need to clearly identify and delineate homogeneous regions in terms of disease risk, and in particular disease risk hot spots. However, even if practical purposes require the delineation of different risk classes, such a classification does not correspond to a reality and is thus difficult to estimate. Working with grouped data, a first natural choice is to apply disease mapping models. We apply a standard disease mapping model, which produces continuous risk estimates and requires a post-processing classification step to obtain clearly delimited risk zones. We also apply a risk partition model that builds a classification of the risk levels in a one-step procedure. Working with point data, we focus on the scan statistic clustering method. We illustrate our article with a real example concerning bovine spongiform encephalopathy (BSE), an animal disease whose zones at risk are well known to epidemiologists. We show that in this difficult case of a rare disease and a very heterogeneous population, the different methods provide risk zones that are globally coherent. But, related to the dichotomy between the need and the reality, the exact delimitation of the risk zones, as well as the corresponding estimated risks, are quite different.

  8. STACCATO: a novel solution to supernova photometric classification with biased training sets

    NASA Astrophysics Data System (ADS)

    Revsbech, E. A.; Trotta, R.; van Dyk, D. A.

    2018-01-01

    We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernovae's (SN's) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al. - a diffusion map combined with a random forest classifier - to deal specifically with the case of biased training sets. We propose a novel method called Synthetically Augmented Light Curve Classification (STACCATO) that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the 'gold standard' of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.
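
    A sketch of the propensity-score partitioning step only, using synthetic covariates and scikit-learn; the covariates, sample sizes, and quartile binning are assumptions, not the STACCATO implementation:

```python
# Sketch of propensity-score partitioning with invented covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
covariates = rng.normal(size=(1000, 3))       # e.g. redshift, peak brightness, ... (synthetic)
membership_prob = 1.0 / (1.0 + np.exp(-covariates[:, 1]))
in_training = (rng.random(1000) < membership_prob).astype(int)   # biased training-set membership

propensity = LogisticRegression().fit(covariates, in_training).predict_proba(covariates)[:, 1]
subgroup = np.digitize(propensity, np.quantile(propensity, [0.25, 0.5, 0.75]))
print(np.bincount(subgroup))                  # number of objects in each propensity subgroup
```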

  9. A fuzzy neural network for intelligent data processing

    NASA Astrophysics Data System (ADS)

    Xie, Wei; Chu, Feng; Wang, Lipo; Lim, Eng Thiam

    2005-03-01

    In this paper, we describe an incrementally generated fuzzy neural network (FNN) for intelligent data processing. This FNN combines the features of initial fuzzy model self-generation, fast input selection, partition validation, parameter optimization and rule-base simplification. A small FNN is created from scratch -- there is no need to specify the initial network architecture, initial membership functions, or initial weights. Fuzzy IF-THEN rules are constantly combined and pruned to minimize the size of the network while maintaining accuracy; irrelevant inputs are detected and deleted, and membership functions and network weights are trained with a gradient descent algorithm, i.e., error backpropagation. Experimental studies on synthesized data sets demonstrate that the proposed Fuzzy Neural Network is able to achieve accuracy comparable to or higher than both a feedforward crisp neural network, i.e., NeuroRule, and a decision tree, i.e., C4.5, with more compact rule bases for most of the data sets used in our experiments. The FNN has achieved outstanding results for cancer classification based on microarray data. The excellent classification result for the Small Round Blue Cell Tumors (SRBCTs) data set is shown. Compared with other published methods, we used far fewer genes to achieve perfect classification, which helps researchers focus their attention on specific genes and may lead to deeper insight into cancer development and to drug discovery.

  10. Sarment: Python modules for HMM analysis and partitioning of sequences.

    PubMed

    Guéguen, Laurent

    2005-08-15

    Sarment is a package of Python modules for easy building and manipulation of sequence segmentations. It provides efficient implementations of the usual algorithms for hidden Markov Model computation, as well as for maximal predictive partitioning. Owing to its very large variety of criteria for computing segmentations, Sarment can handle many kinds of models. Because of object-oriented programming, the results of the segmentation are very easy to manipulate.

  11. Atmospheric emissions from the Deepwater Horizon spill constrain air-water partitioning, hydrocarbon fate, and leak rate

    NASA Astrophysics Data System (ADS)

    Ryerson, T. B.; Aikin, K. C.; Angevine, W. M.; Atlas, E. L.; Blake, D. R.; Brock, C. A.; Fehsenfeld, F. C.; Gao, R.-S.; de Gouw, J. A.; Fahey, D. W.; Holloway, J. S.; Lack, D. A.; Lueb, R. A.; Meinardi, S.; Middlebrook, A. M.; Murphy, D. M.; Neuman, J. A.; Nowak, J. B.; Parrish, D. D.; Peischl, J.; Perring, A. E.; Pollack, I. B.; Ravishankara, A. R.; Roberts, J. M.; Schwarz, J. P.; Spackman, J. R.; Stark, H.; Warneke, C.; Watts, L. A.

    2011-04-01

    The fate of deepwater releases of gas and oil mixtures is initially determined by solubility and volatility of individual hydrocarbon species; these attributes determine partitioning between air and water. Quantifying this partitioning is necessary to constrain simulations of gas and oil transport, to predict marine bioavailability of different fractions of the gas-oil mixture, and to develop a comprehensive picture of the fate of leaked hydrocarbons in the marine environment. Analysis of airborne atmospheric data shows massive amounts (˜258,000 kg/day) of hydrocarbons evaporating promptly from the Deepwater Horizon spill; these data collected during two research flights constrain air-water partitioning, thus bioavailability and fate, of the leaked fluid. This analysis quantifies the fraction of surfacing hydrocarbons that dissolves in the water column (˜33% by mass), the fraction that does not dissolve, and the fraction that evaporates promptly after surfacing (˜14% by mass). We do not quantify the leaked fraction lacking a surface expression; therefore, calculation of atmospheric mass fluxes provides a lower limit to the total hydrocarbon leak rate of 32,600 to 47,700 barrels of fluid per day, depending on reservoir fluid composition information. This study demonstrates a new approach for rapid-response airborne assessment of future oil spills.

  12. Effects of low urea concentrations on protein-water interactions.

    PubMed

    Ferreira, Luisa A; Povarova, Olga I; Stepanenko, Olga V; Sulatskaya, Anna I; Madeira, Pedro P; Kuznetsova, Irina M; Turoverov, Konstantin K; Uversky, Vladimir N; Zaslavsky, Boris Y

    2017-01-01

    Solvent properties of aqueous media (dipolarity/polarizability, hydrogen bond donor acidity, and hydrogen bond acceptor basicity) were measured in the coexisting phases of Dextran-PEG aqueous two-phase systems (ATPSs) containing 0.5 and 2.0 M urea. The differences between the electrostatic and hydrophobic properties of the phases in the ATPSs were quantified by analysis of partitioning of the homologous series of sodium salts of dinitrophenylated amino acids with aliphatic alkyl side chains. Furthermore, partitioning of eleven different proteins in the ATPSs was studied. The analysis of protein partition behavior in a set of ATPSs with protective osmolytes (sorbitol, sucrose, trehalose, and TMAO) at the concentration of 0.5 M, in osmolyte-free ATPS, and in ATPSs with 0.5 or 2.0 M urea in terms of the solvent properties of the phases was performed. The results show unambiguously that even at the urea concentration of 0.5 M, this denaturant affects partitioning of all proteins (except concanavalin A) through direct urea-protein interactions and via its effect on the solvent properties of the media. The direct urea-protein interactions seem to prevail over the urea effects on the solvent properties of water at the concentration of 0.5 M urea and appear to be completely dominant at 2.0 M urea concentration.

  13. Direct optimization, affine gap costs, and node stability.

    PubMed

    Aagesen, Lone

    2005-09-01

    The outcome of a phylogenetic analysis based on DNA sequence data is highly dependent on the homology-assignment step and may vary with alignment parameter costs. Robustness to changes in parameter costs is therefore a desired quality of a data set because the final conclusions will be less dependent on selecting a precise optimal cost set. Here, node stability is explored in relationship to separate versus combined analysis in three different data sets, all including several data partitions. Robustness to changes in cost sets is measured as the number of successive changes that can be made in a given cost set before a specific clade is lost. The changes are in all cases base change cost, gap penalties, and adding/removing/changing affine gap costs. When combining data partitions, the number of clades that appear in the entire parameter space is not remarkably increased; in some cases this number even decreased. However, when combining data partitions the trees from cost sets including affine gap costs were always more similar than the trees were from cost sets without affine gap costs. This was not the case when the data partitions were analyzed independently. When data sets were combined, approximately 80% of the clades found under cost sets including affine gap costs resisted at least one change to the cost set.

  14. C-Depth Method to Determine Diffusion Coefficient and Partition Coefficient of PCB in Building Materials.

    PubMed

    Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping

    2015-10-20

    Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, a primary material studied here, relative standard deviations of results among five data sets are 5%-22% for K and 42%-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to yield a unique estimate and does not require assumed correlations for D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty for D, and smaller uncertainty for K. However, considering uncertainties associated with sampling and chemical analysis, and the impact of environmental factors, the results are acceptable for engineering applications. This was supported by good agreement between model prediction and measurement. Sensitivity analysis indicated that effective diffusion distance, contact time of materials with primary sources, and depth of measured concentrations are critical for determining D, and PCB concentration in primary sources is critical for K.
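
    As a hedged illustration of the kind of concentration-depth profile such methods fit (not necessarily the authors' formulation), one common assumption is a semi-infinite slab in contact with a constant primary source, giving C(x, t) = K · C0 · erfc(x / (2√(D·t))):

```python
# Assumed profile shape, not necessarily the authors' model:
# C(x, t) = K * C0 * erfc(x / (2 * sqrt(D * t))).
import numpy as np
from scipy.special import erfc

def depth_profile(x_m, t_s, D, K, c0):
    """Concentration at depth x after contact time t with a constant source c0."""
    return K * c0 * erfc(x_m / (2.0 * np.sqrt(D * t_s)))

depths = np.array([0.001, 0.005, 0.01, 0.02])                 # m
print(depth_profile(depths, t_s=30 * 365 * 86400,             # ~30 years of contact (assumed)
                    D=1e-13, K=5e4, c0=1e-6))                  # illustrative D, K, source level
```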

  15. Dynamic Partitioning of a GPI-Anchored Protein in Glycosphingolipid-Rich Microdomains Imaged by Single-Quantum Dot Tracking

    PubMed Central

    Pinaud, Fabien; Michalet, Xavier; Iyer, Gopal; Margeat, Emmanuel; Moore, Hsiao-Ping; Weiss, Shimon

    2009-01-01

    Recent experimental developments have led to a revision of the classical fluid mosaic model proposed by Singer and Nicholson 35 years ago. In particular, it is now well established that lipids and proteins diffuse heterogeneously in cell plasma membranes. Their complex motion patterns reflect the dynamic structure and composition of the membrane itself, as well as the presence of the underlying cytoskeleton scaffold and that of the extracellular matrix. How the structural organization of plasma membranes influences the diffusion of individual proteins remains a challenging, yet central question for cell signaling and its regulation. Here we have developed a raft-associated glycosylphosphatidylinositol-anchored avidin test probe (Av-GPI), whose diffusion patterns indirectly report on the structure and dynamics of putative raft microdomains in the membrane of HeLa cells. Labeling with quantum dots (qdots) allowed high-resolution and long-term tracking of individual Av-GPI and the classification of their various diffusive behaviors. Using dual-color total internal reflection fluorescence (TIRF) microscopy, we studied the correlation between the diffusion of individual Av-GPI and the location of glycosphingolipid GM1-rich microdomains and caveolae. We show that Av-GPI exhibit a fast and a slow diffusion regime in different membrane regions, and that slowing down of their diffusion is correlated with entry in GM1-rich microdomains located in close proximity to, but distinct from, caveolae. We further show that Av-GPI dynamically partition in and out of these microdomains in a cholesterol-dependent manner. Our results provide direct evidence that cholesterol/sphingolipid-rich microdomains can compartmentalize the diffusion of GPI-anchored proteins in living cells and that the dynamic partitioning raft model appropriately describes the diffusive behavior of some raft-associated proteins across the plasma membrane. PMID:19416475

  16. Dynamic partitioning of a glycosyl-phosphatidylinositol-anchored protein in glycosphingolipid-rich microdomains imaged by single-quantum dot tracking.

    PubMed

    Pinaud, Fabien; Michalet, Xavier; Iyer, Gopal; Margeat, Emmanuel; Moore, Hsiao-Ping; Weiss, Shimon

    2009-06-01

    Recent experimental developments have led to a revision of the classical fluid mosaic model proposed by Singer and Nicholson more than 35 years ago. In particular, it is now well established that lipids and proteins diffuse heterogeneously in cell plasma membranes. Their complex motion patterns reflect the dynamic structure and composition of the membrane itself, as well as the presence of the underlying cytoskeleton scaffold and that of the extracellular matrix. How the structural organization of plasma membranes influences the diffusion of individual proteins remains a challenging, yet central, question for cell signaling and its regulation. Here we have developed a raft-associated glycosyl-phosphatidyl-inositol-anchored avidin test probe (Av-GPI), whose diffusion patterns indirectly report on the structure and dynamics of putative raft microdomains in the membrane of HeLa cells. Labeling with quantum dots (qdots) allowed high-resolution and long-term tracking of individual Av-GPI and the classification of their various diffusive behaviors. Using dual-color total internal reflection fluorescence (TIRF) microscopy, we studied the correlation between the diffusion of individual Av-GPI and the location of glycosphingolipid GM1-rich microdomains and caveolae. We show that Av-GPI exhibit a fast and a slow diffusion regime in different membrane regions, and that slowing down of their diffusion is correlated with entry in GM1-rich microdomains located in close proximity to, but distinct from, caveolae. We further show that Av-GPI dynamically partition in and out of these microdomains in a cholesterol-dependent manner. Our results provide direct evidence that cholesterol-/sphingolipid-rich microdomains can compartmentalize the diffusion of GPI-anchored proteins in living cells and that the dynamic partitioning raft model appropriately describes the diffusive behavior of some raft-associated proteins across the plasma membrane.

  17. The relationship between leaf area growth and biomass accumulation in Arabidopsis thaliana

    PubMed Central

    Weraduwage, Sarathi M.; Chen, Jin; Anozie, Fransisca C.; Morales, Alejandro; Weise, Sean E.; Sharkey, Thomas D.

    2015-01-01

    Leaf area growth determines the light interception capacity of a crop and is often used as a surrogate for plant growth in high-throughput phenotyping systems. The relationship between leaf area growth and growth in terms of mass will depend on how carbon is partitioned among new leaf area, leaf mass, root mass, reproduction, and respiration. A model of leaf area growth in terms of photosynthetic rate and carbon partitioning to different plant organs was developed and tested with Arabidopsis thaliana L. Heynh. ecotype Columbia (Col-0) and a mutant line, gigantea-2 (gi-2), which develops very large rosettes. Data obtained from growth analysis and gas exchange measurements were used to train a genetic programming algorithm to parameterize and test the above model. The relationship between leaf area and plant biomass was found to be non-linear and variable depending on carbon partitioning. The model output was sensitive to the rate of photosynthesis but more sensitive to the amount of carbon partitioned to growing thicker leaves. The large rosette size of gi-2 relative to that of Col-0 resulted from relatively small differences in partitioning to new leaf area vs. leaf thickness. PMID:25914696
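
    A toy sketch of the carbon-bookkeeping idea, in which only the share of daily photosynthate allocated to new leaf area (divided by the carbon cost per unit area) adds area; all fractions and rates below are invented for illustration:

```python
# Toy carbon-bookkeeping loop; all rates and fractions are invented.
def grow_leaf_area(area_m2, days, assimilation=5.0, f_area=0.3, lma=30.0):
    """assimilation: g C per m2 leaf per day; lma: g C needed per m2 of new leaf."""
    for _ in range(days):
        carbon = assimilation * area_m2
        # only the f_area share builds new area; the rest goes to leaf thickness,
        # roots, reproduction, and respiration
        area_m2 += carbon * f_area / lma
    return area_m2

print(grow_leaf_area(0.001, days=30))
```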

  18. The relationship between leaf area growth and biomass accumulation in Arabidopsis thaliana

    DOE PAGES

    Weraduwage, Sarathi M.; Chen, Jin; Anozie, Fransisca C.; ...

    2015-04-09

    Leaf area growth determines the light interception capacity of a crop and is often used as a surrogate for plant growth in high-throughput phenotyping systems. The relationship between leaf area growth and growth in terms of mass will depend on how carbon is partitioned among new leaf area, leaf mass, root mass, reproduction, and respiration. A model of leaf area growth in terms of photosynthetic rate and carbon partitioning to different plant organs was developed and tested with Arabidopsis thaliana L. Heynh. ecotype Columbia (Col-0) and a mutant line, gigantea-2 (gi-2), which develops very large rosettes. Data obtained from growth analysis and gas exchange measurements were used to train a genetic programming algorithm to parameterize and test the above model. The relationship between leaf area and plant biomass was found to be non-linear and variable depending on carbon partitioning. The model output was sensitive to the rate of photosynthesis but more sensitive to the amount of carbon partitioned to growing thicker leaves. The large rosette size of gi-2 relative to that of Col-0 resulted from relatively small differences in partitioning to new leaf area vs. leaf thickness.

  19. Fast Centrifugal Partition Chromatography Fractionation of Concentrated Agave (Agave salmiana) Sap to Obtain Saponins with Apoptotic Effect on Colon Cancer Cells.

    PubMed

    Santos-Zea, Liliana; Fajardo-Ramírez, Oscar R; Romo-López, Irasema; Gutiérrez-Uribe, Janet A

    2016-03-01

    Separation of potentially bioactive components from foods and plant extracts is one of the main challenges for their study. Centrifugal partition chromatography has been a successful technique for the screening and identification of molecules with bioactive potential, such as steroidal saponins. Agave is a source of steroidal saponins with anticancer potential, though the activity of these compounds in concentrated agave sap has not yet been explored. In this study, fast centrifugal partition chromatography (FCPC) was used, coupled with in vitro tests on HT-29 cells, as a screening procedure to identify apoptotic saponins from an acetonic extract of concentrated agave sap. The three most bioactive fractions obtained by FCPC at partition coefficients between 0.23 and 0.4 contained steroidal saponins, predominantly magueyoside b. Flow cytometry analysis determined that the fraction rich in kammogenin and manogenin glycosides induced apoptosis, but when gentrogenin and hecogenin glycosides were also found in the fraction, a necrotic effect was observed. In conclusion, this study provides evidence that steroidal saponins in concentrated agave sap were potential inducers of apoptosis and that it was possible to separate them using fast centrifugal partition chromatography.

  20. Coexistence of three sympatric cormorants (Phalacrocorax spp.); partitioning of time as an ecological resource

    PubMed Central

    Mahendiran, Mylswamy

    2016-01-01

    Resource partitioning along food and habitat axes is well known to reduce competition among sympatric species, yet temporal partitioning as a viable basis for reducing resource competition has not been empirically investigated. Here, I attempt to identify the mechanism of temporal partitioning through intra- and interspecific diving analyses of three sympatric cormorant species at different freshwater wetlands around the Delhi region. Diving results indicated that the cormorants opted for shallow diving; consequently, they did not face any physiological stress. Moreover, diving durations were linked with seasons, foraging time, and foraging habitats. Intraspecific comparison suggested that cormorants spent a longer time underwater in the early hours of the day; hence, time spent per dive was greater in the forenoon than in the late afternoon, and the interspecific analysis yielded a similar result. When Phalacrocorax niger and Phalacrocorax fuscicollis shared the same foraging habitat, they tended to differ in their foraging time (forenoon/afternoon). However, when P. niger and Phalacrocorax carbo shared the same foraging time, they tended to use different foraging habitats (lentic/lotic), leading to a mechanism of resource partitioning. Thus, sympatric cormorants effectively use time as a resource to exploit food resources and achieve successful coexistence. PMID:27293799

  1. Laboratory actinide partitioning - Whitlockite/liquid and influence of actinide concentration levels

    NASA Technical Reports Server (NTRS)

    Benjamin, T. M.; Jones, J. H.; Heuser, W. R.; Burnett, D. S.

    1983-01-01

    The partition coefficients between synthetic whitlockite (beta Ca-phosphate) and coexisting silicate melts are determined for the actinide elements Th, U and Pu. Experiments were performed at 1 bar pressure and 1250 C at oxygen fugacities from 10^(-8.5) to 10^(-0.7) bar, and partitioning was determined from trace element radiography combined with conventional electron microprobe analysis. Results show Pu to be more readily incorporated into crystalline phases than U or Th under reducing conditions, which is attributed to the observation that Pu exists primarily in the trivalent state, while U and Th are tetravalent. Corrected partition coefficients for whitlockite of 3.6, less than or equal to 0.6, 1.2, 0.5 and less than or equal to 0.002 are estimated for Pu(+3), Pu(+4), Th(+4), U(+4) and U(+6), respectively. Experiments performed at trace levels and percent levels of UO2 indicate that Si is involved in U substitution in whitlockite, and show a reduced partition coefficient at higher concentrations of U that can be explained by effects on melt structure or the fraction of tetravalent U.

  2. Estimating average annual per cent change in trend analysis

    PubMed Central

    Clegg, Limin X; Hankey, Benjamin F; Tiwari, Ram; Feuer, Eric J; Edwards, Brenda K

    2009-01-01

    Trends in incidence or mortality rates over a specified time interval are usually described by the conventional annual per cent change (cAPC), under the assumption of a constant rate of change. When this assumption does not hold over the entire time interval, the trend may be characterized using the annual per cent changes from segmented analysis (sAPCs). This approach assumes that the change in rates is constant over each time partition defined by the transition points, but varies among different time partitions. Different groups (e.g. racial subgroups), however, may have different transition points and thus different time partitions over which they have constant rates of change, making comparison of sAPCs problematic across groups over a common time interval of interest (e.g. the past 10 years). We propose a new measure, the average annual per cent change (AAPC), which uses sAPCs to summarize and compare trends for a specific time period. The advantage of the proposed AAPC is that it takes into account the trend transitions, whereas cAPC does not and can lead to erroneous conclusions. In addition, when the trend is constant over the entire time interval of interest, the AAPC has the advantage of reducing to both cAPC and sAPC. Moreover, because the estimated AAPC is based on the segmented analysis over the entire data series, any selected subinterval within a single time partition will yield the same AAPC estimate—that is, it will be equal to the estimated sAPC for that time partition. The cAPC, however, is re-estimated using data only from that selected subinterval; thus, its estimate may be sensitive to the subinterval selected. The AAPC estimation has been incorporated into the segmented regression (free) software Joinpoint, which is used by many registries throughout the world for characterizing trends in cancer rates. Copyright © 2009 John Wiley & Sons, Ltd. PMID:19856324
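
    The AAPC is typically assembled by weighting each segment's log-scale rate of change by the fraction of the interval that the segment covers, AAPC = 100·(exp(Σ_i w_i·ln(1 + sAPC_i/100)) − 1). A small sketch with made-up segment values:

```python
# Weighted combination of segment-specific APCs; the segment values are made up.
import math

def aapc(segments):
    """segments: list of (sAPC in percent, number of years in that segment)."""
    total_years = sum(years for _, years in segments)
    weighted_log = sum(years / total_years * math.log(1.0 + sapc / 100.0)
                       for sapc, years in segments)
    return 100.0 * (math.exp(weighted_log) - 1.0)

print(aapc([(2.0, 4), (-1.5, 6)]))   # 4 years rising at 2%/yr, then 6 years falling at 1.5%/yr
```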

  3. Total Storage and Landscape Partitioning of Soil Organic Carbon and Phytomass Carbon in Siberia

    NASA Astrophysics Data System (ADS)

    Siewert, M. B.; Hanisch, J.; Weiss, N.; Kuhry, P.; Hugelius, G.

    2014-12-01

    We present results of detailed partitioning of soil organic carbon (SOC) and phytomass carbon (PC) from two study sites in Siberia. The study sites in the Tundra (Kytalyk) and the Taiga (Spasskaya Pad) reflect two contrasting environments in the continuous permafrost zone. In total 57 individual field sites (24 and 33 per study site, respectively) have been sampled for SOC and PC along transects cutting across different land covers. In Kytalyk the sampling depth for the soil pedons was 1 m. In Spasskaya Pad, where the active layer was significantly deeper, we aimed for 2 m depth or tried to include at least the top of the permafrost. Here the average depth of soil profiles was 152 cm. PC was sampled from 1x1 m ground coverage plots. In Spasskaya Pad tree phytomass was also estimated on a 5x5 m plot. The SOC storage was calculated separately for the intervals 0-30 cm, 30-100 cm and 100-200 cm (the latter only for Spasskaya Pad), as well as for organic layer vs. mineral soil, active layer vs. permafrost and for cryoturbated soil horizons. Landscape partitioning was performed by thematic up-scaling using a vegetation based land cover classification of very high resolution (2x2 m) satellite imagery. Non-Metric Multidimensional Scaling (NMDS) was used to explore the relationship of SOC with PC and different soil and permafrost related variables. The results show that the different land cover classes can be considered distinct storages of SOC, but that PC is not significantly related to total SOC storage. At both study sites the 30-100 cm SOC storage is more important for the total SOC storage than the 0-30 cm interval, and large portions of the total SOC are stored in the permafrost. The largest contribution comes from wetland pedons, but highly cryoturbated individual non-wetland pedons can match these. In Kytalyk the landscape partitioning of SOC mostly follows large scale geomorphological features, while in Spasskaya Pad forest type also has a large influence.

  4. Crisis in Cataloging Revisited: The Year's Work in Subject Analysis, 1990.

    ERIC Educational Resources Information Center

    Young, James Bradford

    1991-01-01

    Reviews the 1990 literature that concerns subject analysis. Issues addressed include subject cataloging, including Library of Congress Subject Headings (LCSH); classification, including Dewey Decimal Classification (DDC), Library of Congress Classification, and classification in online systems; subject access, including the online use of…

  5. [Determination of equilibrium solubility and n-octanol/water partition coefficient of pulchinenosiden D by HPLC].

    PubMed

    Rao, Xiao-Yong; Yin, Shan; Zhang, Guo-Song; Luo, Xiao-Jian; Jian, Hui; Feng, Yu-Lin; Yang, Shi-Lin

    2014-05-01

    The aim was to determine the equilibrium solubility of pulchinenosiden D in different solvents and its n-octanol/water partition coefficients. The shake-flask method combined with high performance liquid chromatography (HPLC) was used to determine the n-octanol/water partition coefficients of pulchinenosiden D, and its equilibrium solubility in six organic solvents and in buffer solutions of different pH was determined by HPLC analysis. The n-octanol/water partition coefficients of pulchinenosiden D at different pH were greater than zero, and its equilibrium solubility increased with increasing pH of the buffer solution. The maximum equilibrium solubility of pulchinenosiden D was 255.89 g x L(-1) in methanol, and the minimum was 0.20 g x L(-1) in acetonitrile. Under gastrointestinal physiological conditions, pulchinenosiden D exists in the molecular state and has good absorption but poor water solubility, so increasing its dissolution rate may enhance its bioavailability.
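
    For reference, the shake-flask partition coefficient is simply the base-10 logarithm of the ratio of equilibrium concentrations in the two phases; the concentrations below are placeholders, not the reported measurements:

```python
# Placeholder concentrations; log P is the log10 ratio of the two equilibrium concentrations.
import math

c_octanol, c_water = 0.84, 0.42        # mg/mL, hypothetical HPLC results
log_p = math.log10(c_octanol / c_water)
print(round(log_p, 2))
```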

  6. Quantum chemical and statistical study of megazol-derived compounds with trypanocidal activity

    NASA Astrophysics Data System (ADS)

    Rosselli, F. P.; Albuquerque, C. N.; da Silva, A. B. F.

    In this work we performed a structure-activity relationship (SAR) study with the aim of correlating molecular properties of the megazol compound and 10 of its analogs with the biological activity against Trypanosoma cruzi (trypanocidal or antichagasic activity) presented by these molecules. The biological activity indication was obtained from in vitro tests and the molecular properties (variables or descriptors) were obtained from the optimized chemical structures by using the PM3 semiempirical method. Approximately 80 molecular properties were calculated, selected among steric, constitutional, electronic, and lipophilicity properties. In order to reduce dimensionality and investigate which subset of variables (descriptors) would be more effective in classifying the compounds studied, according to their degree of trypanocidal activity, we employed statistical methodologies (pattern recognition and classification techniques) such as principal component analysis (PCA), hierarchical cluster analysis (HCA), K-nearest neighbor (KNN), and discriminant function analysis (DFA). These methods showed that the descriptors molecular mass (MM), energy of the second lowest unoccupied molecular orbital (LUMO+1), charge on the first nitrogen at substituent 2 (qN'), dihedral angles (D1 and D2), bond length between atom C4 and its substituent (L4), Moriguchi octanol-partition coefficient (MLogP), and length-to-breadth ratio (L/Bw) were the variables responsible for the separation between active and inactive compounds against T. cruzi. Afterwards, the PCA, KNN, and DFA models built in this work were used to perform trypanocidal activity predictions for eight new megazol analog compounds.
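
    A sketch of the pattern-recognition step (PCA for dimensionality reduction followed by KNN classification) on synthetic descriptor data standing in for the ~80 calculated properties; the labels and values are invented:

```python
# Synthetic stand-in for the descriptor matrix and activity labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(11, 80))                   # 11 compounds x ~80 properties (synthetic)
active = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])      # hypothetical activity labels

X = StandardScaler().fit_transform(descriptors)
scores = PCA(n_components=3).fit_transform(X)             # dimensionality reduction
knn = KNeighborsClassifier(n_neighbors=3).fit(scores, active)
print("re-substitution accuracy:", knn.score(scores, active))
```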

  7. Conditional adaptive Bayesian spectral analysis of nonstationary biomedical time series.

    PubMed

    Bruce, Scott A; Hall, Martica H; Buysse, Daniel J; Krafty, Robert T

    2018-03-01

    Many studies of biomedical time series signals aim to measure the association between frequency-domain properties of time series and clinical and behavioral covariates. However, the time-varying dynamics of these associations are largely ignored due to a lack of methods that can assess the changing nature of the relationship through time. This article introduces a method for the simultaneous and automatic analysis of the association between the time-varying power spectrum and covariates, which we refer to as conditional adaptive Bayesian spectrum analysis (CABS). The procedure adaptively partitions the grid of time and covariate values into an unknown number of approximately stationary blocks and nonparametrically estimates local spectra within blocks through penalized splines. CABS is formulated in a fully Bayesian framework, in which the number and locations of partition points are random, and fit using reversible jump Markov chain Monte Carlo techniques. Estimation and inference averaged over the distribution of partitions allows for the accurate analysis of spectra with both smooth and abrupt changes. The proposed methodology is used to analyze the association between the time-varying spectrum of heart rate variability and self-reported sleep quality in a study of older adults serving as the primary caregiver for their ill spouse. © 2017, The International Biometric Society.

  8. An empirical study of statistical properties of variance partition coefficients for multi-level logistic regression models

    USGS Publications Warehouse

    Li, Ji; Gray, B.R.; Bates, D.M.

    2008-01-01

    Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions for variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for a multi-level logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of methods for estimating VPCs (by comparing VPC obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPC in scientific data analysis.
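
    For context, the latent-variable definition, one of the four discussed by Goldstein, treats the level-1 residual as having the standard logistic variance π²/3, so VPC = σ²_u/(σ²_u + π²/3). A one-function sketch of that single definition:

```python
# Latent-variable VPC for a two-level logistic model (one of several definitions).
import math

def latent_variable_vpc(sigma2_u):
    """Share of latent-response variance attributable to the cluster level."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3.0)

print(latent_variable_vpc(0.5))   # e.g. a cluster-level variance of 0.5
```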

  9. Rare Earth Element Partition Coefficients from Enstatite/Melt Synthesis Experiments

    NASA Technical Reports Server (NTRS)

    Schwandt, Craig S.; McKay, Gordon A.

    1997-01-01

    Enstatite (En(80)Fs(19)Wo(01)) was synthesized from a hypersthene normative basaltic melt doped at the same time with La, Ce, Nd, Sm, Eu, Dy, Er, Yb and Lu. The rare earth element concentrations were measured in both the basaltic glass and the enstatite. Rare earth element concentrations in the glass were determined by electron microprobe analysis with uncertainties less than two percent relative. Rare earth element concentrations in enstatite were determined by secondary ion mass spectrometry with uncertainties less than five percent relative. The resulting rare earth element partition signature for enstatite is similar to previous calculated and composite low-Ca pigeonite signatures, but is better defined and differs in several details. The partition coefficients are consistent with crystal structural constraints.

  10. Temperature effects on the strainrange partitioning approach for creep-fatigue analysis

    NASA Technical Reports Server (NTRS)

    Halford, G. R.; Hirschberg, M. H.; Manson, S. S.

    1972-01-01

    Examination is made of the influence of temperature on the strainrange partitioning approach to creep-fatigue. Results for Cr-Mo steel and Type 316 stainless steel show the four partitioned strainrange-life relationships to be temperature insensitive to within a factor of two on cyclic life. Monotonic creep and tensile ductilities were also found to be temperature insensitive to within a factor of two. The approach provides bounds on cyclic life that can be readily established for any type of inelastic strain cycle. Continuous strain cycling results obtained over a broad range of high temperatures and frequencies are in excellent agreement with bounds provided by the approach. The observed transition from one bound to the other is also in good agreement with the approach.

  11. A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina

    2015-03-01

    Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.

  12. Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.

    PubMed

    Garnavi, Rahil; Aldeen, Mohammad; Bailey, James

    2012-11-01

    This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based, and geometrical properties of the melanoma lesion. The texture features are derived from wavelet decomposition, the border features from a boundary-series model of the lesion border analysed in the spatial and frequency domains, and the geometry features from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is performed with four classifiers: Support Vector Machine, Random Forest, Logistic Model Tree, and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into training, validation, and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained by complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
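
    The Gain-Ratio criterion mentioned above scores a feature by its information gain normalised by the split information. The following is a small, self-contained illustration of the criterion on a discretized feature with hypothetical labels; it is not the paper's feature-selection pipeline.

    ```python
    import numpy as np

    def entropy(labels):
        """Shannon entropy (bits) of a label vector."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def gain_ratio(feature, labels):
        """Gain ratio of a discrete feature: information gain / split information."""
        h_class = entropy(labels)
        values, counts = np.unique(feature, return_counts=True)
        weights = counts / counts.sum()
        h_cond = sum(w * entropy(labels[feature == v]) for v, w in zip(values, weights))
        info_gain = h_class - h_cond
        split_info = entropy(feature)  # intrinsic information of the split itself
        return info_gain / split_info if split_info > 0 else 0.0

    # Toy example: a binarised texture feature vs. benign/malignant labels
    feature = np.array([0, 0, 1, 1, 1, 0, 1, 0])
    labels = np.array(["benign", "benign", "malignant", "malignant",
                       "malignant", "benign", "malignant", "benign"])
    print(round(gain_ratio(feature, labels), 3))
    ```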

  13. Binary partition tree analysis based on region evolution and its application to tree simplification.

    PubMed

    Lu, Huihai; Woods, John C; Ghanbari, Mohammed

    2007-04-01

    Pyramid image representations via tree structures are recognized methods for region-based image analysis. Binary partition trees can be applied, documenting the merging process with small details found at the bottom levels and larger ones close to the root. Hindsight of the merging process is stored within the tree structure and provides the change history of an image property from each leaf to the root node. In this work, the change histories are modelled by evolvement functions and their second-order statistics are analyzed using a knee function. Knee values indicate the reluctance of each merge. We have systematically formulated these findings to provide a novel framework for binary partition tree analysis, in which tree simplification is demonstrated. Based on an evolvement function, for each upward path in the tree, the node associated with the first reluctant merge is considered a pruning candidate. The result is a simplified version that provides a reduced solution space and still complies with the definition of a binary tree. The experiments show that image details are preserved whilst the number of nodes is dramatically reduced. The method also yields an image filtering tool that preserves object boundaries and has applications in segmentation.

  14. Spatial partitioning of environmental correlates of avian biodiversity in the conterminous United States

    USGS Publications Warehouse

    O'Connor, R.J.; Jones, M.T.; White, D.; Hunsaker, C.; Loveland, Tom; Jones, Bruce; Preston, E.

    1996-01-01

    Classification and regression tree (CART) analysis was used to create hierarchically organized models of the distribution of bird species richness across the conterminous United States. Species richness data were taken from the Breeding Bird Survey and were related to climatic and land use data. We used a systematic spatial grid of approximately 12,500 hexagons, each approximately 640 square kilometres in area. Within each hexagon, land use was characterized by the Loveland et al. land cover classification based on Advanced Very High Resolution Radiometer (AVHRR) data from NOAA polar orbiting meteorological satellites. These data were aggregated to yield fourteen land classes equivalent to an Anderson level II coverage; urban areas were added from the Digital Chart of the World. Each hexagon was characterized by climate data and landscape pattern metrics calculated from the land cover. A CART model then related the variation in species richness across the 1162 hexagons for which bird species richness data were available to the independent variables, yielding an R2-type goodness of fit metric of 47.5% deviance explained. The resulting model recognized eleven groups of hexagons, with species richness within each group determined by unique sequences of hierarchically constrained independent variables. Within the hierarchy, climate data accounted for the most variability in the bird data, followed by land cover proportion, and then pattern metrics. The model was then used to predict species richness in all 12,500 hexagons of the conterminous United States, yielding a map of the distribution of these eleven classes of bird species richness as determined by the environmental correlates. The potential for using this technique to interface biogeographic theory with the hierarchy theory of ecology is discussed. © 1996 Blackwell Science Ltd.
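
    A CART-style regression of species richness on environmental predictors can be sketched with scikit-learn as below. The predictor names, the synthetic data, and the choice of eleven terminal groups are illustrative only and are not taken from the study.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Hypothetical per-hexagon predictors: mean temperature, precipitation,
    # proportion of forest cover, and one landscape pattern metric.
    rng = np.random.default_rng(0)
    X = rng.random((200, 4))
    richness = (60 + 30 * X[:, 0] - 20 * X[:, 1] + 10 * X[:, 2]
                + rng.normal(0, 3, 200)).round()

    tree = DecisionTreeRegressor(max_leaf_nodes=11, random_state=0)  # ~11 terminal groups
    tree.fit(X, richness)

    # Deviance explained, analogous to an R2-type goodness-of-fit metric
    pred = tree.predict(X)
    r2 = 1 - np.sum((richness - pred) ** 2) / np.sum((richness - richness.mean()) ** 2)
    print(f"deviance explained: {r2:.2f}")
    ```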

  15. DAMQT: A package for the analysis of electron density in molecules

    NASA Astrophysics Data System (ADS)

    López, Rafael; Rico, Jaime Fernández; Ramírez, Guillermo; Ema, Ignacio; Zorrilla, David

    2009-09-01

    DAMQT is a package for the analysis of the electron density in molecules and the fast computation of the density, density deformations, electrostatic potential and field, and Hellmann-Feynman forces. The method is based on the partition of the electron density into atomic fragments by means of a least-deformation criterion. Each atomic fragment of the density is expanded in regular spherical harmonics times radial factors, which are piecewise represented in terms of analytical functions. This representation is used for the fast evaluation of the electrostatic potential and field generated by the electron density and nuclei, as well as for the computation of the Hellmann-Feynman forces on the nuclei. An analysis of the atomic and molecular deformations of the density can also be carried out, yielding a picture that connects with several concepts of empirical structural chemistry.

    Program summary. Program title: DAMQT 1.0. Catalogue identifier: AEDL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPLv3. No. of lines in distributed program, including test data, etc.: 278,356. No. of bytes in distributed program, including test data, etc.: 31,065,317. Distribution format: tar.gz. Programming language: Fortran 90 and C++. Computer: any. Operating system: Linux, Windows (XP, Vista). RAM: 190 Mbytes. Classification: 16.1. External routines: Trolltech's Qt (4.3 or higher) (http://www.qtsoftware.com/products), OpenGL (1.1 or higher) (http://www.opengl.org/), GLUT 3.7 (http://www.opengl.org/resources/libraries/glut/). Nature of problem: analysis of the molecular electron density and density deformations, including fast evaluation of the electrostatic potential, electric field, and Hellmann-Feynman forces on nuclei. Solution method: the method of Deformed Atoms in Molecules, reported elsewhere [1], is used for partitioning the molecular electron density into atomic fragments, which are further expanded in spherical harmonics times radial factors. The partition is used for defining molecular density deformations and for the fast calculation of several properties associated with the density. Restrictions: the current version is limited to 120 atoms, 2000 contracted functions, and l = 5 in basis functions. The density must come from an LCAO calculation (any level) with spherical (not Cartesian) Gaussian functions. Unusual features: the program contains an OPEN statement to binary (stream) files in GOPENMOL.F90; this statement does not have a standard syntax in Fortran 90, so two possibilities are handled by conditional compilation, Intel's ifort and the Fortran 2003 standard (the latter is used for compilers other than ifort, for instance gfortran). Additional comments: the distribution file for this program is over 30 Mbytes and is therefore not delivered directly when a download or e-mail is requested; instead, an HTML file giving details of how the program can be obtained is sent. Running time: largely dependent on the system size and the module run (from fractions of a second to hours). References: [1] J. Fernández Rico, R. López, I. Ema, G. Ramírez, J. Mol. Struct. (Theochem) 727 (2005) 115.

  16. Stratification of the severity of critically ill patients with classification trees

    PubMed Central

    2009-01-01

    Background: Development of three classification trees (CT) based on the CART (Classification and Regression Trees), CHAID (Chi-Square Automatic Interaction Detection) and C4.5 methodologies for the calculation of the probability of hospital mortality; comparison of the results with the APACHE II, SAPS II and MPM II-24 scores, and with a model based on multiple logistic regression (LR). Methods: Retrospective study of 2864 patients. Random partition (70:30) into a Development Set (DS), n = 1808, and a Validation Set (VS), n = 808. Discrimination was compared with the ROC curve (AUC, 95% CI) and the percentage of correct classification (PCC, 95% CI); calibration with the calibration curve and the Standardized Mortality Ratio (SMR, 95% CI). Results: The CTs are produced with different selections of variables and decision rules: CART (5 variables and 8 decision rules), CHAID (7 variables and 15 rules) and C4.5 (6 variables and 10 rules). The common variables were: inotropic therapy, Glasgow, age, (A-a)O2 gradient and antecedent of chronic illness. In the VS, all the models achieved acceptable discrimination with AUC above 0.7. CT: CART (0.75 (0.71-0.81)), CHAID (0.76 (0.72-0.79)) and C4.5 (0.76 (0.73-0.80)). PCC: CART (72 (69-75)), CHAID (72 (69-75)) and C4.5 (76 (73-79)). Calibration (SMR) was better in the CTs: CART (1.04 (0.95-1.31)), CHAID (1.06 (0.97-1.15)) and C4.5 (1.08 (0.98-1.16)). Conclusion: With different CT methodologies, trees are generated with different selections of variables and decision rules. The CTs are easy to interpret and stratify the risk of hospital mortality. The CTs should be taken into account for classifying the prognosis of critically ill patients. PMID:20003229
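
    The 70:30 development/validation workflow with a single classification tree, an AUC, and a standardized mortality ratio can be sketched as follows. The data are synthetic and the tree settings are arbitrary; this does not reproduce the CART/CHAID/C4.5 comparison or the study data.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for severity data: 5 predictors, binary hospital mortality
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2864, 5))
    p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.6 * X[:, 1] - 1.5)))
    y = rng.binomial(1, p)

    # 70:30 random partition into development and validation sets
    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

    ct = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50, random_state=1)
    ct.fit(X_dev, y_dev)

    prob = ct.predict_proba(X_val)[:, 1]
    auc = roc_auc_score(y_val, prob)
    smr = y_val.sum() / prob.sum()  # standardized mortality ratio: observed / expected deaths
    print(f"AUC={auc:.2f}  SMR={smr:.2f}")
    ```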

  17. Two Influential Primate Classifications Logically Aligned

    PubMed Central

    Franz, Nico M.; Pier, Naomi M.; Reeder, Deeann M.; Chen, Mingmin; Yu, Shizhuo; Kianmajd, Parisa; Bowers, Shawn; Ludäscher, Bertram

    2016-01-01

    Classifications and phylogenies of perceived natural entities change in the light of new evidence. Taxonomic changes, translated into Code-compliant names, frequently lead to name:meaning dissociations across succeeding treatments. Classification standards such as the Mammal Species of the World (MSW) may experience significant levels of taxonomic change from one edition to the next, with potential costs to long-term, large-scale information integration. This circumstance challenges the biodiversity and phylogenetic data communities to express taxonomic congruence and incongruence in ways that both humans and machines can process, that is, to logically represent taxonomic alignments across multiple classifications. We demonstrate that such alignments are feasible for two classifications of primates corresponding to the second and third MSW editions. Our approach has three main components: (i) use of taxonomic concept labels, that is name sec. author (where sec. means according to), to assemble each concept hierarchy separately via parent/child relationships; (ii) articulation of select concepts across the two hierarchies with user-provided Region Connection Calculus (RCC-5) relationships; and (iii) the use of an Answer Set Programming toolkit to infer and visualize logically consistent alignments of these input constraints. Our use case entails the Primates sec. Groves (1993; MSW2–317 taxonomic concepts; 233 at the species level) and Primates sec. Groves (2005; MSW3–483 taxonomic concepts; 376 at the species level). Using 402 RCC-5 input articulations, the reasoning process yields a single, consistent alignment and 153,111 Maximally Informative Relations that constitute a comprehensive meaning resolution map for every concept pair in the Primates sec. MSW2/MSW3. The complete alignment, and various partitions thereof, facilitate quantitative analyses of name:meaning dissociation, revealing that nearly one in three taxonomic names are not reliable across treatments—in the sense of the same name identifying congruent taxonomic meanings. The RCC-5 alignment approach is potentially widely applicable in systematics and can achieve scalable, precise resolution of semantically evolving name usages in synthetic, next-generation biodiversity, and phylogeny data platforms. PMID:27009895

  18. Satellite altimetry in sea ice regions - detecting open water for estimating sea surface heights

    NASA Astrophysics Data System (ADS)

    Müller, Felix L.; Dettmering, Denise; Bosch, Wolfgang

    2017-04-01

    The Greenland Sea and the Fram Strait transport sea ice from the central Arctic Ocean southwards. They are covered by a dynamically changing sea ice layer with significant influence on the Earth's climate system. Between the sea ice there exist open water areas of various sizes: leads, which are elongated, straight-lined open water areas, and polynyas, which exhibit a roughly circular shape. Identifying these leads by satellite altimetry enables the extraction of sea surface height information. Analyzing the radar echoes, also called waveforms, provides information on the surface backscatter characteristics. For example, waveforms reflected by calm water have a very narrow, single-peaked shape, whereas waveforms reflected by sea ice show more variability due to diffuse scattering. Here we analyze altimeter waveforms from different conventional pulse-limited satellite altimeters to separate open water and sea ice waveforms. An unsupervised classification approach is used, employing partitional clustering algorithms such as K-medoids and memory-based classification methods such as K-nearest neighbor. The classification is based on six parameters derived from the waveform's shape, for example the maximum power or the peak's width. The open-water detection is quantitatively compared with SAR images processed while accounting for sea ice motion. The classification results are used to derive information about the temporal evolution of sea ice extent and sea surface heights. They provide evidence on climate-relevant changes, for example Arctic sea level rise due to enhanced melting rates of Greenland's glaciers and an increasing freshwater influx into the Arctic Ocean. Additionally, the sea ice cover extent analyzed over a long time period provides an important indicator for a globally changing climate system.
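
    A partitional clustering of waveform-shape parameters, as described above, can be illustrated with a toy K-medoids implementation on synthetic six-parameter feature vectors. The feature values are invented and this is not the authors' processing chain.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def k_medoids(X, k, n_iter=50, seed=0):
        """Very small k-medoids: alternate point-to-medoid assignment and medoid update."""
        rng = np.random.default_rng(seed)
        medoids = rng.choice(len(X), size=k, replace=False)
        for _ in range(n_iter):
            d = cdist(X, X[medoids])          # distances to current medoids
            labels = d.argmin(axis=1)
            new_medoids = medoids.copy()
            for j in range(k):
                members = np.where(labels == j)[0]
                if len(members) == 0:
                    continue
                intra = cdist(X[members], X[members]).sum(axis=1)
                new_medoids[j] = members[intra.argmin()]  # member minimising intra-cluster distance
            if np.array_equal(new_medoids, medoids):
                break
            medoids = new_medoids
        return labels, medoids

    # Six hypothetical waveform-shape parameters (e.g. maximum power, peak width, ...)
    rng = np.random.default_rng(0)
    water = rng.normal([5, 0.5, 0, 0, 0, 0], 0.3, size=(100, 6))  # narrow, single-peaked echoes
    ice = rng.normal([2, 2.0, 1, 1, 1, 1], 0.6, size=(100, 6))    # diffuse, variable echoes
    X = np.vstack([water, ice])
    labels, _ = k_medoids(X, k=2)
    print(np.bincount(labels))
    ```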

  19. Reducing Spatial Data Complexity for Classification Models

    NASA Astrophysics Data System (ADS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, an adaptive, real-time operational environment requires multiple models to be retrained frequently, which further hinders their use. Various data reduction techniques, ranging from data sampling to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. In response, we propose a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. Using the Parzen Labelled Data Compressor (PLDC) as an example, we demonstrate a simulated data condensation process directly inspired by electrostatic field interaction, in which the data are moved and merged following attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. The resulting model reduces labelled datasets much further than competitive approaches while maximally retaining the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, when coupled with a Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at comparable compression levels.
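
    A minimal Parzen (kernel) density classifier in the spirit of the PDC mentioned above can be built from scipy's gaussian_kde, one density per class weighted by the class prior. This sketch works on synthetic two-class data and does not implement the PLDC condensation itself.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    class ParzenDensityClassifier:
        """Classify by the largest class-conditional Parzen (kernel) density times the class prior."""
        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.kdes_ = {c: gaussian_kde(X[y == c].T) for c in self.classes_}
            self.priors_ = {c: np.mean(y == c) for c in self.classes_}
            return self

        def predict(self, X):
            scores = np.column_stack(
                [self.priors_[c] * self.kdes_[c](X.T) for c in self.classes_])
            return self.classes_[scores.argmax(axis=1)]

    # Two Gaussian blobs standing in for two labelled classes
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(3, 1, (150, 2))])
    y = np.array([0] * 150 + [1] * 150)
    clf = ParzenDensityClassifier().fit(X, y)
    print((clf.predict(X) == y).mean())  # training accuracy of the toy classifier
    ```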

  20. Use of HCA in Subproteome-immunization and Screening of Hybridoma Supernatants to Define Distinct Antibody Binding Patterns

    PubMed Central

    Szafran, Adam T.; Mancini, Maureen G.; Nickerson, Jeffrey A.; Edwards, Dean P.; Mancini, Michael A.

    2016-01-01

    Understanding the properties and functions of complex biological systems depends upon knowing the proteins present and the interactions between them. Recent advances in mass spectrometry have given us greater insights into the participating proteomes; however, monoclonal antibodies remain key to understanding the structures, functions, locations and macromolecular interactions of the involved proteins. The traditional single-immunogen method of producing monoclonal antibodies using hybridoma technology is time-, resource- and cost-intensive, limiting the number of reagents that are available. Using a high content analysis screening approach, we have developed a method in which a complex mixture of proteins (e.g., a subproteome) is used to generate a panel of monoclonal antibodies specific to a subproteome located in a defined subcellular compartment such as the nucleus. The immunofluorescent images in the primary hybridoma screen are analyzed using an automated processing approach and classified using a recursive partitioning forest classification model derived from images obtained from the Human Protein Atlas. Using an ammonium sulfate purified nuclear matrix fraction as an example of reverse proteomics, we identified 866 hybridoma supernatants with a positive immunofluorescent signal. Of those, 402 produced a nuclear signal from which patterns similar to known nuclear matrix associated proteins were identified. Detailed here are our method, the analysis techniques, and a discussion of the application to further in vivo antibody production. PMID:26521976

  1. Use of HCA in subproteome-immunization and screening of hybridoma supernatants to define distinct antibody binding patterns.

    PubMed

    Szafran, Adam T; Mancini, Maureen G; Nickerson, Jeffrey A; Edwards, Dean P; Mancini, Michael A

    2016-03-01

    Understanding the properties and functions of complex biological systems depends upon knowing the proteins present and the interactions between them. Recent advances in mass spectrometry have given us greater insights into the participating proteomes; however, monoclonal antibodies remain key to understanding the structures, functions, locations and macromolecular interactions of the involved proteins. The traditional single-immunogen method of producing monoclonal antibodies using hybridoma technology is time-, resource- and cost-intensive, limiting the number of reagents that are available. Using a high content analysis screening approach, we have developed a method in which a complex mixture of proteins (e.g., a subproteome) is used to generate a panel of monoclonal antibodies specific to a subproteome located in a defined subcellular compartment such as the nucleus. The immunofluorescent images in the primary hybridoma screen are analyzed using an automated processing approach and classified using a recursive partitioning forest classification model derived from images obtained from the Human Protein Atlas. Using an ammonium sulfate purified nuclear matrix fraction as an example of reverse proteomics, we identified 866 hybridoma supernatants with a positive immunofluorescent signal. Of those, 402 produced a nuclear signal from which patterns similar to known nuclear matrix associated proteins were identified. Detailed here are our method, the analysis techniques, and a discussion of the application to further in vivo antibody production. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. The Usefulness of Zone Division Using Belt Partition at the Entry Zone of MRI Machine Room: An Analysis of the Restrictive Effect of Dangerous Action Using a Questionnaire.

    PubMed

    Funada, Tatsuro; Shibuya, Tsubasa

    2016-08-01

    The American College of Radiology recommends dividing magnetic resonance imaging (MRI) machine rooms into four zones depending on the education level. However, structural limitations prevent this recommendation from being applied in most Japanese facilities. This study examines the effectiveness of using a belt partition to create the zonal division through a questionnaire survey covering three critical parameters: the influence of individuals' backgrounds (relevance to MRI, years of experience, post, occupation [i.e., nurse or nursing assistant], outpatient section or ward), the presence or absence of a door or belt partition (open or closed), and four personnel scenarios that may be encountered during a visit to an MRI site (e.g., visiting the MRI site to receive a patient). In this survey, the influence of individuals' backgrounds (maximum odds ratio: 6.3, 95% CI: 1.47-27.31) and of the personnel scenarios (maximum risk ratio: 2.4, 95% CI: 1.16-4.85) on dangerous actions was uncertain. Conversely, the presence of the door and belt partition had a significant influence (maximum risk ratio: 17.4, 95% CI: 7.94-17.38). For that reason, we suggest that visual impression has a strong influence on an individual's actions. Even if structural limitations are present, zonal division by belt partition provides a visual deterrent, and the partitioned zone then serves as a buffer zone. We conclude that if the belt partition is used properly, it is an inexpensive and effective safety management device for MRI rooms.

  3. Regression modeling of gas-particle partitioning of atmospheric oxidized mercury from temperature data

    NASA Astrophysics Data System (ADS)

    Cheng, Irene; Zhang, Leiming; Blanchard, Pierrette

    2014-10-01

    Models describing the partitioning of atmospheric oxidized mercury (Hg(II)) between the gas and fine particulate phases were developed as a function of temperature. The models were derived from regression analysis of the gas-particle partitioning parameters, defined by a partition coefficient (Kp) and the Hg(II) fraction in fine particles (fPBM), against temperature data from 10 North American sites. The generalized model, log(1/Kp) = 12.69 - 3485.30(1/T) (R2 = 0.55; root-mean-square error (RMSE) of 1.06 m3/µg for Kp), predicted the observed average Kp at 7 of the 10 sites. Discrepancies between the predicted and observed average Kp were found at the sites impacted by large Hg sources, because the model had not accounted for the different mercury speciation profiles and aerosol compositions of different sources. Site-specific equations were also generated from average Kp and fPBM corresponding to temperature interval data. The site-specific models were more accurate than the generalized Kp model at predicting the observations at 9 of the 10 sites, as indicated by RMSE of 0.22-0.5 m3/µg for Kp and 0.03-0.08 for fPBM. Both models reproduced the observed monthly average values, except for a peak in Hg(II) partitioning observed during summer at two locations. Weak correlations between the site-specific model Kp or fPBM and observations suggest that aerosol composition, aerosol water content, and relative humidity also play a role in Hg(II) partitioning. The use of local temperature data to parameterize Hg(II) partitioning in the proposed models potentially improves the estimation of mercury cycling in chemical transport models and elsewhere.
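
    The generalized regression quoted above can be turned into a small helper that returns Kp for a given temperature. The sketch assumes the logarithm is base 10 and closes the calculation with the usual gas-particle partitioning relation fPBM = Kp*PM/(1 + Kp*PM); that closure and the example PM loading are assumptions, not values from the paper.

    ```python
    import numpy as np

    def kp_from_temperature(T_kelvin):
        """Generalized model from the abstract: log10(1/Kp) = 12.69 - 3485.30 * (1/T)."""
        return 10 ** (3485.30 / np.asarray(T_kelvin, dtype=float) - 12.69)

    def f_pbm(Kp, pm_ug_m3):
        """Fraction of Hg(II) in fine particles for a given PM loading,
        assuming the standard relation fPBM = Kp*PM / (1 + Kp*PM)."""
        return Kp * pm_ug_m3 / (1 + Kp * pm_ug_m3)

    for T in (263.15, 283.15, 303.15):  # -10, 10, 30 degrees Celsius
        Kp = kp_from_temperature(T)
        print(f"T={T:.2f} K  Kp={Kp:.2f} m3/ug  fPBM(PM=10 ug/m3)={f_pbm(Kp, 10):.2f}")
    ```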

  4. Development of TLSER model and QSAR model for predicting partition coefficients of hydrophobic organic chemicals between low density polyethylene film and water.

    PubMed

    Liu, Huihui; Wei, Mengbi; Yang, Xianhai; Yin, Cen; He, Xiao

    2017-01-01

    Partition coefficients are vital parameters for accurately measuring chemical concentrations with passive sampling devices. Given the wide use of low density polyethylene (LDPE) film in passive sampling, we developed a theoretical linear solvation energy relationship (TLSER) model and a quantitative structure-activity relationship (QSAR) model for the prediction of the partition coefficient of chemicals between LDPE and water (Kpew). For chemicals with an octanol-water partition coefficient (log Kow) <8, a TLSER model with Vx (McGowan volume) and qA- (the most negative charge on O, N, S, X atoms) as descriptors was developed, but the model had a relatively low determination coefficient (R2) and cross-validated coefficient (Q2). In order to further explore the theoretical mechanisms involved in the partition process, a QSAR model with four descriptors (MLOGP (Moriguchi octanol-water partition coefficient), P_VSA_s_3 (P_VSA-like on I-state, bin 3), Hy (hydrophilic factor) and NssO (number of atoms of type ssO)) was established, and statistical analysis indicated that the model had satisfactory goodness-of-fit, robustness and predictive ability. For chemicals with log Kow >8, a TLSER model with Vx and a QSAR model with MLOGP as descriptor were developed. This is the first paper to explore such models for highly hydrophobic chemicals. The applicability domain of the models, characterized by the Euclidean distance-based method and the Williams plot, covered a large number of structurally diverse chemicals, including nearly all the common hydrophobic organic compounds. Additionally, through mechanistic interpretation, we explored the structural features governing the partition behavior of chemicals between LDPE and water. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Uncertain Henry's law constants compromise equilibrium partitioning calculations of atmospheric oxidation products

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Yuan, Tiange; Wood, Stephen A.; Goss, Kai-Uwe; Li, Jingyi; Ying, Qi; Wania, Frank

    2017-06-01

    Gas-particle partitioning governs the distribution, removal, and transport of organic compounds in the atmosphere and the formation of secondary organic aerosol (SOA). The large variety of atmospheric species and their wide range of properties make predicting this partitioning equilibrium challenging. Here we expand on earlier work and predict gas-organic and gas-aqueous phase partitioning coefficients for 3414 atmospherically relevant molecules using COSMOtherm, SPARC Performs Automated Reasoning in Chemistry (SPARC), and poly-parameter linear free-energy relationships. The Master Chemical Mechanism generated the structures by oxidizing primary emitted volatile organic compounds. Predictions for gas-organic phase partitioning coefficients (KWIOM/G) by different methods are on average within 1 order of magnitude of each other, irrespective of the numbers of functional groups, except for predictions by COSMOtherm and SPARC for compounds with more than three functional groups, which have a slightly higher discrepancy. Discrepancies between predictions of gas-aqueous partitioning (KW/G) are much larger and increase with the number of functional groups in the molecule. In particular, COSMOtherm often predicts much lower KW/G for highly functionalized compounds than the other methods. While the quantum-chemistry-based COSMOtherm accounts for the influence of intra-molecular interactions on conformation, highly functionalized molecules likely fall outside of the applicability domain of the other techniques, which at least in part rely on empirical data for calibration. Further analysis suggests that atmospheric phase distribution calculations are sensitive to the partitioning coefficient estimation method, in particular to the estimated value of KW/G. The large uncertainty in KW/G predictions for highly functionalized organic compounds needs to be resolved to improve the quantitative treatment of SOA formation.
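
    The sensitivity of phase distribution to the gas-aqueous coefficient mentioned above can be illustrated with a simple equilibrium calculation. The sketch below assumes volume-based, dimensionless partitioning coefficients relative to the gas phase and uses arbitrary illustrative phase-volume ratios; none of these numbers come from the study.

    ```python
    import numpy as np

    def phase_fractions(log_K_wiom_g, log_K_w_g, v_wiom_per_vg=1e-11, v_w_per_vg=1e-10):
        """Equilibrium fractions of a compound in the gas, water-insoluble organic matter
        (WIOM), and aqueous phases, given log10 partitioning coefficients relative to gas
        and illustrative phase-volume ratios."""
        k_wiom = 10.0 ** log_K_wiom_g
        k_w = 10.0 ** log_K_w_g
        denom = 1.0 + k_wiom * v_wiom_per_vg + k_w * v_w_per_vg
        return np.array([1.0, k_wiom * v_wiom_per_vg, k_w * v_w_per_vg]) / denom

    # A one-order-of-magnitude disagreement in log KW/G shifts the predicted aqueous fraction
    for log_kwg in (10.0, 11.0):
        gas, org, aq = phase_fractions(log_K_wiom_g=9.0, log_K_w_g=log_kwg)
        print(f"log KW/G={log_kwg}: gas={gas:.2f} organic={org:.2f} aqueous={aq:.2f}")
    ```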

  6. [Partial stomach partitioning gastrojejunostomy in the treatment of the malignant gastric outlet obstruction].

    PubMed

    Abdel-lah-Fernández, Omar; Parreño-Manchado, Felipe Carlos; García-Plaza, Asunción; Álvarez-Delgado, Alberto

    2015-01-01

    In patients with unresectable gastric cancer and gastric outlet obstruction syndrome, gastric partitioning gastrojejunostomy is an alternative that could avoid the drawbacks of the standard techniques. The aim was to compare the antroduodenal stent, conventional gastrojejunostomy, and gastric partitioning gastrojejunostomy. A retrospective, cross-sectional study was conducted on patients with unresectable distal gastric cancer and gastric outlet obstruction treated with the three different techniques over the last 12 years, comparing results based on oral tolerance and complications. The results were analysed using the Student t-test for independent variables. The 22 patients were divided into 3 groups: group I (6 cases) stent, group II (9 cases) conventional gastrojejunostomy, and group III (7 cases) gastric partitioning gastrojejunostomy. The stent allowed a shorter postoperative stay and earlier onset of oral tolerance (P<0.05); however, the gastric partitioning gastrojejunostomy achieved a normal diet by the 15th day (P<0.05). The mortality rate was higher in the stent group (33%) than with the surgical techniques, with morbidity of 4/6 (66.7%) in group I, 6/9 (66.7%) in group II, and 3/7 (42%) in group III. Re-interventions: 2/6 in group I, 3/9 in group II, and 0/7 in group III. Median survival was longest with the gastric partitioning gastrojejunostomy, achieving an overall survival of 6.5 months. The gastric partitioning gastrojejunostomy for treatment of gastric outlet obstruction in unresectable advanced gastric cancer is a safe technique, allowing a more complete diet with lower morbidity and improved survival. Copyright © 2015 Academia Mexicana de Cirugía A.C. Published by Masson Doyma México S.A. All rights reserved.

  7. Hyperspectral imaging with wavelet transform for classification of colon tissue biopsy samples

    NASA Astrophysics Data System (ADS)

    Masood, Khalid

    2008-08-01

    Automatic classification of medical images is part of our computerised medical imaging programme to support pathologists in their diagnosis. Hyperspectral data have found applications in medical imagery, and their use in biopsy analysis of medical images is increasing significantly. In this paper, we present a histopathological analysis for the classification of colon biopsy samples into benign and malignant classes. The proposed study is based on a comparison between 3D spectral/spatial analysis and 2D spatial analysis. Wavelet textural features are used in both approaches for the classification of colon biopsy samples. Experimental results indicate that incorporating wavelet textural features with a support vector machine, in the 2D spatial analysis, achieves the best classification accuracy.
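
    Wavelet textural features feeding a support vector machine can be sketched as below, assuming PyWavelets (pywt) and scikit-learn are available. The sub-band energy descriptor, the patch size, and the synthetic "smooth vs. textured" data are illustrative choices, not the paper's feature set.

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wavelet_texture_features(patch, wavelet="db1", level=2):
        """Energy of each 2D wavelet sub-band of an image patch, used as a texture descriptor."""
        coeffs = pywt.wavedec2(patch, wavelet, level=level)
        feats = [np.mean(coeffs[0] ** 2)]                # approximation energy
        for cH, cV, cD in coeffs[1:]:                    # detail sub-bands per level
            feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
        return np.array(feats)

    # Synthetic "benign" (smooth) and "malignant" (textured) patches
    rng = np.random.default_rng(0)
    smooth = [rng.normal(0, 0.1, (32, 32)) for _ in range(40)]
    rough = [rng.normal(0, 1.0, (32, 32)) for _ in range(40)]
    X = np.array([wavelet_texture_features(p) for p in smooth + rough])
    y = np.array([0] * 40 + [1] * 40)

    clf = SVC(kernel="rbf").fit(X, y)
    print((clf.predict(X) == y).mean())
    ```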

  8. Assessing the effects of architectural variations on light partitioning within virtual wheat–pea mixtures

    PubMed Central

    Barillot, Romain; Escobar-Gutiérrez, Abraham J.; Fournier, Christian; Huynh, Pierre; Combes, Didier

    2014-01-01

    Background and Aims Predicting light partitioning in crop mixtures is a critical step in improving the productivity of such complex systems, and light interception has been shown to be closely linked to plant architecture. The aim of the present work was to analyse the relationships between plant architecture and light partitioning within wheat–pea (Triticum aestivum–Pisum sativum) mixtures. An existing model for wheat was utilized and a new model for pea morphogenesis was developed. Both models were then used to assess the effects of architectural variations in light partitioning. Methods First, a deterministic model (L-Pea) was developed in order to obtain dynamic reconstructions of pea architecture. The L-Pea model is based on L-systems formalism and consists of modules for ‘vegetative development’ and ‘organ extension’. A tripartite simulator was then built up from pea and wheat models interfaced with a radiative transfer model. Architectural parameters from both plant models, selected on the basis of their contribution to leaf area index (LAI), height and leaf geometry, were then modified in order to generate contrasting architectures of wheat and pea. Key results By scaling down the analysis to the organ level, it could be shown that the number of branches/tillers and length of internodes significantly determined the partitioning of light within mixtures. Temporal relationships between light partitioning and the LAI and height of the different species showed that light capture was mainly related to the architectural traits involved in plant LAI during the early stages of development, and in plant height during the onset of interspecific competition. Conclusions In silico experiments enabled the study of the intrinsic effects of architectural parameters on the partitioning of light in crop mixtures of wheat and pea. The findings show that plant architecture is an important criterion for the identification/breeding of plant ideotypes, particularly with respect to light partitioning. PMID:24907314

  9. Analysis of ParB-centromere interactions by multiplex SPR imaging reveals specific patterns for binding ParB in six centromeres of Burkholderiales chromosomes and plasmids

    PubMed Central

    Pillet, Flavien; Passot, Fanny Marie

    2017-01-01

    Bacterial centromeres, also called parS, are cis-acting DNA sequences which, together with the proteins ParA and ParB, are involved in the segregation of chromosomes and plasmids. The specific binding of ParB to parS nucleates the assembly of a large ParB/DNA complex from which ParA, the motor protein, segregates the sister replicons. Closely related families of partition systems, called Bsr, were identified on the chromosomes and large plasmids of the multi-chromosomal bacterium Burkholderia cenocepacia and other species from the order Burkholderiales. The centromeres of the Bsr partition families are 16 bp palindromes displaying similar base compositions, notably a central CG dinucleotide. Although centromeres bind the cognate ParB with narrow specificity, weak non-cognate ParB-parS interactions were nevertheless detected between a few Bsr partition systems of replicons not belonging to the same genome. These observations suggested that Bsr partition systems could have a common ancestry but that evolution mostly erased the possibility of cross-reactions between them, in particular to prevent replicon incompatibility. To detect novel similarities between Bsr partition systems, we analyzed the binding of six Bsr parS sequences, and a wide collection of modified derivatives, to their cognate ParB. The study was carried out by Surface Plasmon Resonance imaging (SPRi) multiplex analysis, enabling a systematic survey of each nucleotide position within the centromere. We found that in each parS some positions could be changed while maintaining binding to ParB. Each centromere displays its own pattern of changes, but some positions are shared more or less widely. In addition, these changes allow us to speculate about evolutionary links between these centromeres. PMID:28562673

  10. Analysis of ParB-centromere interactions by multiplex SPR imaging reveals specific patterns for binding ParB in six centromeres of Burkholderiales chromosomes and plasmids.

    PubMed

    Pillet, Flavien; Passot, Fanny Marie; Pasta, Franck; Anton Leberre, Véronique; Bouet, Jean-Yves

    2017-01-01

    Bacterial centromeres, also called parS, are cis-acting DNA sequences which, together with the proteins ParA and ParB, are involved in the segregation of chromosomes and plasmids. The specific binding of ParB to parS nucleates the assembly of a large ParB/DNA complex from which ParA, the motor protein, segregates the sister replicons. Closely related families of partition systems, called Bsr, were identified on the chromosomes and large plasmids of the multi-chromosomal bacterium Burkholderia cenocepacia and other species from the order Burkholderiales. The centromeres of the Bsr partition families are 16 bp palindromes displaying similar base compositions, notably a central CG dinucleotide. Although centromeres bind the cognate ParB with narrow specificity, weak non-cognate ParB-parS interactions were nevertheless detected between a few Bsr partition systems of replicons not belonging to the same genome. These observations suggested that Bsr partition systems could have a common ancestry but that evolution mostly erased the possibility of cross-reactions between them, in particular to prevent replicon incompatibility. To detect novel similarities between Bsr partition systems, we analyzed the binding of six Bsr parS sequences, and a wide collection of modified derivatives, to their cognate ParB. The study was carried out by Surface Plasmon Resonance imaging (SPRi) multiplex analysis, enabling a systematic survey of each nucleotide position within the centromere. We found that in each parS some positions could be changed while maintaining binding to ParB. Each centromere displays its own pattern of changes, but some positions are shared more or less widely. In addition, these changes allow us to speculate about evolutionary links between these centromeres.

  11. Optical and Gravimetric Partitioning of Coastal Ocean Suspended Particulate Inorganic Matter (PIM)

    NASA Astrophysics Data System (ADS)

    Stavn, R. H.; Zhang, X.; Falster, A. U.; Gray, D. J.; Rick, J. J.; Gould, R. W., Jr.

    2016-02-01

    Recent work on the composition of suspended particulates in estuarine and coastal waters increases our capability to investigate the biogeochemical processes occurring in these waters. The biogeochemical properties associated with the particulates involve primarily sorption/desorption of dissolved matter onto the particle surfaces, which vary with the types of particulates. Therefore, breaking suspended matter down into its chemical components will greatly expand the biogeochemistry of the coastal ocean region. The gravimetric techniques for these studies are here expanded and refined. In addition, new optical inversions greatly expand our capabilities to study the spatial extent of the components of suspended particulate matter. The partitioning of a gravimetric PIM determination into clay minerals and amorphous silica is aided by electron microprobe analysis. The amorphous silica is further partitioned into contributions by detrital material and by the tests of living diatoms, based on an empirical formula relating the chlorophyll content of cultured living diatoms in log-phase growth to their frustules determined after gravimetric analysis of the ashed diatom residue. The optical inversion of the composition of suspended particulates is based on the entire volume scattering function (VSF) measured in the field with a Multispectral Volume Scattering Meter and a LISST 100 meter. The VSF is partitioned into an optimal combination of contributions by particle subpopulations, each of which is uniquely represented by a refractive index and a log-normal size distribution. These subpopulations are aggregated to represent the two components of PIM using the corresponding refractive indices and sizes, which also yield a particle size distribution for the two components. The gravimetric results of partitioning PIM into clay minerals and amorphous silica confirm the optical inversions from the VSF.

  12. Determination of partition coefficients using 1H NMR spectroscopy and time domain complete reduction to amplitude-frequency table (CRAFT) analysis.

    PubMed

    Soulsby, David; Chica, Jeryl A M

    2017-08-01

    We have developed a simple, direct and novel method for the determination of partition coefficients and partitioning behavior using 1H NMR spectroscopy combined with time domain complete reduction to amplitude-frequency tables (CRAFT). After partitioning into water and 1-octanol using standard methods, aliquots from each layer are directly analyzed using either proton or selective excitation NMR experiments. Signal amplitudes for each compound in each layer are then extracted directly from the time domain data in an automated fashion and analyzed using the CRAFT software. From these amplitudes, log P and log D7.4 values can be calculated directly. Phase, baseline and internal standard issues, which can be problematic when Fourier-transformed data are used, are unimportant when using time domain data. Furthermore, analytes can contain impurities, because only a single resonance is examined, and they need not be UV active. Using this approach, we examined a variety of pharmaceutically relevant compounds and determined partition coefficients that are in excellent agreement with literature values. To demonstrate the utility of this approach, we also examined salicylic acid in more detail, demonstrating an aggregation effect as a function of sample loading and partition coefficient behavior as a function of pH value. This method provides a valuable addition to the medicinal chemist's toolbox for determining these important constants. Copyright © 2017 John Wiley & Sons, Ltd.
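
    The final calculation step can be sketched as below, assuming identical acquisition and processing for both aliquots so that the amplitude ratio of the same resonance equals the concentration ratio; that assumption, the volume correction, and the example amplitudes are simplifications, not the CRAFT workflow itself.

    ```python
    import numpy as np

    def log_p_from_amplitudes(amp_octanol, amp_water,
                              vol_octanol_aliquot=1.0, vol_water_aliquot=1.0):
        """log P from NMR signal amplitudes of the same resonance measured in aliquots of
        each layer, assuming identical acquisition/processing for both aliquots."""
        conc_ratio = (amp_octanol / vol_octanol_aliquot) / (amp_water / vol_water_aliquot)
        return np.log10(conc_ratio)

    # Hypothetical amplitudes for one analyte resonance
    print(round(log_p_from_amplitudes(amp_octanol=8.3e6, amp_water=4.1e5), 2))  # ~1.31
    ```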

  13. Dynamically heterogenous partitions and phylogenetic inference: an evaluation of analytical strategies with cytochrome b and ND6 gene sequences in cranes.

    PubMed

    Krajewski, C; Fain, M G; Buckley, L; King, D G

    1999-11-01

    Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogenous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that heterogeneity includes variation in base composition and transition bias as well as substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogenous data sets. Copyright 1999 Academic Press.

  14. Dynamic Computation Offloading for Low-Power Wearable Health Monitoring Systems.

    PubMed

    Kalantarian, Haik; Sideris, Costas; Mortazavi, Bobak; Alshurafa, Nabil; Sarrafzadeh, Majid

    2017-03-01

    The objective of this paper is to describe and evaluate an algorithm to reduce power usage and increase battery lifetime for wearable health-monitoring devices. We describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data processing between the wearable device and mobile application as a function of desired classification accuracy. By making the correct offloading decision based on current system parameters, we show that we are able to reduce system power by as much as 20%. We demonstrate that computation offloading can be applied to real-time monitoring systems, and yields significant power savings. Making correct offloading decisions for health monitoring devices can extend battery life and improve adherence.
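
    The offloading decision described above can be caricatured as a rule that picks the cheaper processing location subject to an accuracy requirement. All parameter names, numbers, and the rule itself in the sketch below are illustrative assumptions, not the paper's algorithm.

    ```python
    def should_offload(on_device_power_mw, radio_tx_power_mw,
                       local_accuracy, remote_accuracy, min_accuracy):
        """Decide whether to process a sensor window locally or offload raw data to the phone.
        Illustrative rule: among the options meeting the required accuracy, pick the one
        with the lower estimated power cost; fall back to local processing otherwise."""
        candidates = []
        if local_accuracy >= min_accuracy:
            candidates.append(("local", on_device_power_mw))
        if remote_accuracy >= min_accuracy:
            candidates.append(("offload", radio_tx_power_mw))
        if not candidates:
            return "local"  # no option meets the target; keep processing on the device
        return min(candidates, key=lambda c: c[1])[0]

    print(should_offload(on_device_power_mw=45, radio_tx_power_mw=36,
                         local_accuracy=0.92, remote_accuracy=0.95, min_accuracy=0.90))
    ```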

  15. An Evidence-based Forensic Taxonomy of Windows Phone Dating Apps.

    PubMed

    Cahyani, Niken Dwi Wahyu; Choo, Kim-Kwang Raymond; Ab Rahman, Nurul Hidayah; Ashman, Helen

    2018-05-21

    Advances in technologies including development of smartphone features have contributed to the growth of mobile applications, including dating apps. However, online dating services can be misused. To support law enforcement investigations, a forensic taxonomy that provides a systematic classification of forensic artifacts from Windows Phone 8 (WP8) dating apps is presented in this study. The taxonomy has three categories, namely: Apps Categories, Artifacts Categories, and Data Partition Categories. This taxonomy is built based on the findings from a case study of 28 mobile dating apps, using mobile forensic tools. The dating app taxonomy can be used to inform future studies of dating and related apps, such as those from Android and iOS platforms. © 2018 American Academy of Forensic Sciences.

  16. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-07-17

    The explosion of biological data has largely influenced the focus of today's biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. Such data are modelled as bi-partite graphs in which nodes correspond to the different data points, for instance mutations and diseases, and weighted edges correspond to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm based on fixed-parameter tractability. We first evaluated its performance on artificial graphs. We then applied our Java implementation, as an example, to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved geno-to-pheno associations. We believe that our results will serve as guidelines for further wet-lab investigations. Generally, our software can be applied to any kind of data that can be modelled as bi-partite graphs. To our knowledge it is the fastest exact method for the weighted bi-cluster editing problem.
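
    The objective being minimised in weighted bi-cluster editing can be made concrete with a small cost evaluator. This sketch only scores a candidate solution under the usual convention that positive edge weights favour keeping an edge and negative weights favour its absence; it is not the authors' fixed-parameter algorithm, and the toy graph is invented.

    ```python
    def editing_cost(weights, biclusters):
        """Cost of turning a weighted bipartite graph into the given disjoint biclusters.
        weights[(u, v)] > 0 means the edge u-v is present (deleting it costs the weight);
        a negative weight means the edge is absent (inserting it costs the absolute value).
        biclusters is a list of (set_of_left_nodes, set_of_right_nodes)."""
        cluster_of = {}
        for idx, (left, right) in enumerate(biclusters):
            for node in left | right:
                cluster_of[node] = idx
        cost = 0.0
        for (u, v), w in weights.items():
            same = cluster_of.get(u) == cluster_of.get(v)
            if same and w < 0:
                cost += -w   # must insert a missing edge inside a bicluster
            elif not same and w > 0:
                cost += w    # must delete an edge running between biclusters
        return cost

    # Toy genotype/phenotype graph: g1, g2 vs p1, p2
    weights = {("g1", "p1"): 2.0, ("g1", "p2"): -1.0, ("g2", "p1"): 1.5, ("g2", "p2"): 0.5}
    print(editing_cost(weights, [({"g1", "g2"}, {"p1", "p2"})]))          # insert g1-p2: cost 1.0
    print(editing_cost(weights, [({"g1"}, {"p1"}), ({"g2"}, {"p2"})]))    # delete g2-p1: cost 1.5
    ```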

  17. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-06-01

    The explosion of biological data has largely influenced the focus of today's biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. Such data are modelled as bi-partite graphs in which nodes correspond to the different data points, for instance mutations and diseases, and weighted edges correspond to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm based on fixed-parameter tractability. We first evaluated its performance on artificial graphs. We then applied our Java implementation, as an example, to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved geno-to-pheno associations. We believe that our results will serve as guidelines for further wet-lab investigations. Generally, our software can be applied to any kind of data that can be modelled as bi-partite graphs. To our knowledge it is the fastest exact method for the weighted bi-cluster editing problem.

  18. Topological structures in the equities market network

    PubMed Central

    Leibon, Gregory; Pauls, Scott; Rockmore, Daniel; Savell, Robert

    2008-01-01

    We present a new method for articulating scale-dependent topological descriptions of the network structure inherent in many complex systems. The technique is based on “partition decoupled null models,” a new class of null models that incorporate the interaction of clustered partitions into a random model and generalize the Gaussian ensemble. As an application, we analyze a correlation matrix derived from 4 years of close prices of equities in the New York Stock Exchange (NYSE) and National Association of Securities Dealers Automated Quotation (NASDAQ). In this example, we expose (i) a natural structure composed of 2 interacting partitions of the market that both agrees with and generalizes standard notions of scale (e.g., sector and industry) and (ii) structure in the first partition that is a topological manifestation of a well-known pattern of capital flow called “sector rotation.” Our approach gives rise to a natural form of multiresolution analysis of the underlying time series that naturally decomposes the basic data in terms of the effects of the different scales at which it clusters. We support our conclusions and show the robustness of the technique with a successful analysis on a simulated network with an embedded topological structure. The equities market is a prototypical complex system, and we expect that our approach will be of use in understanding a broad class of complex systems in which correlation structures are resident.

  19. Estimates of Octanol-Water Partitioning for Thousands of Dissolved Organic Species in Oil Sands Process-Affected Water.

    PubMed

    Zhang, Kun; Pereira, Alberto S; Martin, Jonathan W

    2015-07-21

    In this study, the octanol-water distribution ratios (DOW, that is, apparent KOW at pH 8.4) of 2114 organic species in oil sands process-affected water were estimated by partitioning to polydimethylsiloxane (PDMS) coated stir bars and analysis by ultrahigh resolution orbitrap mass spectrometry in electrospray positive (ESI+) and negative (ESI-) ionization modes. At equilibrium, the majority of species in OSPW showed negligible partitioning to PDMS (i.e., DOW <1); however, estimated DOW values for some species ranged up to 100,000. Most organic acids detected in ESI- had negligible partitioning, although some naphthenic acids (O2(-) species) had estimated DOW ranging up to 100. Polar neutral and basic compounds detected in ESI+ generally partitioned to PDMS to a greater extent than organic acids. Among these species, DOW was greatest for three groups: up to 1000 for mono-oxygenated species (O(+) species), up to 127,000 for NO(+) species, and up to 203,000 for SO(+) species. A positive relationship was observed between DOW and carbon number, and a negative relationship was observed with the number of double bonds (or rings). The results highlight that nonacidic compounds in OSPW are generally more hydrophobic than naphthenic acids and that some may be highly bioaccumulative and contribute to toxicity.

  20. An Initial Analysis of LANDSAT-4 Thematic Mapper Data for the Discrimination of Agricultural, Forested Wetland, and Urban Land Covers

    NASA Technical Reports Server (NTRS)

    Quattrochi, D. A.

    1984-01-01

    An initial analysis of LANDSAT 4 Thematic Mapper (TM) data for the discrimination of agricultural, forested wetland, and urban land covers is conducted using a scene of data collected over Arkansas and Tennessee. A classification of agricultural lands derived from multitemporal LANDSAT Multispectral Scanner (MSS) data is compared with a classification of TM data for the same area. Results from this comparative analysis show that the multitemporal MSS classification produced an overall accuracy of 80.91%, while the TM classification yields an overall classification accuracy of 97.06%.

  1. Compositional data analysis as a robust tool to delineate hydrochemical facies within and between gas-bearing aquifers

    NASA Astrophysics Data System (ADS)

    Owen, D. Des. R.; Pawlowsky-Glahn, V.; Egozcue, J. J.; Buccianti, A.; Bradd, J. M.

    2016-08-01

    Isometric log ratios of proportions of major ions, derived from intuitive sequential binary partitions, are used to characterize hydrochemical variability within and between coal seam gas (CSG) and surrounding aquifers in a number of sedimentary basins in the USA and Australia. These isometric log ratios are the coordinates corresponding to an orthonormal basis in the sample space (the simplex). The characteristic proportions of ions, as described by linear models of isometric log ratios, can be used for a mathematical-descriptive classification of water types. This is a more informative and robust method of describing water types than simply classifying a water type based on the dominant ions. The approach allows (a) compositional distinctions between very similar water types to be made and (b) large data sets with a high degree of variability to be rapidly assessed with respect to particular relationships/compositions that are of interest. A major advantage of these techniques is that major and minor ion components can be comprehensively assessed and subtle processes—which may be masked by conventional techniques such as Stiff diagrams, Piper plots, and classic ion ratios—can be highlighted. Results show that while all CSG groundwaters are dominated by Na, HCO3, and Cl ions, the proportions of other ions indicate they can evolve via different means and the particular proportions of ions within total or subcompositions can be unique to particular basins. Using isometric log ratios, subtle differences in the behavior of Na, K, and Cl between CSG water types and very similar Na-HCO3 water types in adjacent aquifers are also described. A complementary pair of isometric log ratios, derived from a geochemically-intuitive sequential binary partition that is designed to reflect compositional variability within and between CSG groundwater, is proposed. These isometric log ratios can be used to model a hydrochemical pathway associated with methanogenesis and/or to delineate groundwater associated with high gas concentrations.
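
    One isometric log-ratio (balance) coordinate for a step of a sequential binary partition can be computed from the geometric means of the two groups of parts, as sketched below. The ion composition and the chosen partition are hypothetical illustrations, not the authors' partition design.

    ```python
    import numpy as np

    def balance(x, num_idx, den_idx):
        """Isometric log-ratio (balance) coordinate for one step of a sequential binary
        partition: sqrt(r*s/(r+s)) * ln(gmean(numerator parts) / gmean(denominator parts))."""
        x = np.asarray(x, dtype=float)
        r, s = len(num_idx), len(den_idx)
        g_num = np.exp(np.mean(np.log(x[num_idx])))
        g_den = np.exp(np.mean(np.log(x[den_idx])))
        return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

    # Hypothetical major-ion composition (same units, e.g. meq/L): Na, K, Ca, Mg, Cl, HCO3
    ions = dict(zip(["Na", "K", "Ca", "Mg", "Cl", "HCO3"],
                    [28.0, 0.3, 0.5, 0.2, 8.0, 21.0]))
    x = np.array(list(ions.values()))

    # One intuitive balance: the alkali cations (Na, K) against Cl
    print(round(balance(x, num_idx=[0, 1], den_idx=[4]), 3))
    ```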

  2. Implementation of a partitioned algorithm for simulation of large CSI problems

    NASA Technical Reports Server (NTRS)

    Alvin, Kenneth F.; Park, K. C.

    1991-01-01

    The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
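
    The staggered character of such a partitioned integration, advancing the structural equations with the control force frozen over the step and then advancing the controller with the freshly updated structural state, can be illustrated with a one-degree-of-freedom toy model. The model, gains, and step size below are invented for illustration and have nothing to do with the ACSIS software.

    ```python
    import numpy as np

    # Partitioned (staggered) integration sketch for a 1-DOF structure coupled to a
    # first-order controller:  m*xdd + k*x = b*u   and   du/dt = -a*u - g*xd.
    m, k, b = 1.0, 4.0, 1.0
    a, g = 2.0, 1.5
    dt, n = 0.01, 2000

    x, xd, u = 1.0, 0.0, 0.0
    history = []
    for _ in range(n):
        # structural partition: advance with the control force held fixed over the step
        xdd = (b * u - k * x) / m
        xd += dt * xdd
        x += dt * xd
        # controller partition: advance using the freshly updated structural velocity
        u += dt * (-a * u - g * xd)
        history.append(x)

    print(f"final displacement: {history[-1]:.4f}")  # decays as the controller damps the motion
    ```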

  3. Cost component analysis.

    PubMed

    Lörincz, András; Póczos, Barnabás

    2003-06-01

    In optimization, the dimensionality of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem to density estimation. The structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems and (ii) separating (partitioning) the original optimization problem into subproblems may serve interpretation. Most importantly, (iii) CCA may give rise to high gains in optimization time. Numerical simulations illustrate the working of the algorithm.

  4. Classification Based on Hierarchical Linear Models: The Need for Incorporation of Social Contexts in Classification Analysis

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Qui

    2009-01-01

    Many areas in educational and psychological research involve the use of classification statistical analysis. For example, school districts might be interested in attaining variables that provide optimal prediction of school dropouts. In psychology, a researcher might be interested in the classification of a subject into a particular psychological…

  5. A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image

    NASA Astrophysics Data System (ADS)

    Su, Junying

    2011-11-01

    A fractal dimension feature analysis method in the spectrum domain is proposed for agricultural crop classification with hyperspectral images. Firstly, a fractal dimension calculation algorithm in the spectrum domain is presented, together with a fast fractal dimension calculation algorithm based on the step measurement method. Secondly, the hyperspectral image classification algorithm and flowchart based on fractal dimension feature analysis in the spectrum domain are presented. Finally, experimental results for agricultural crop classification on the FCL1 hyperspectral image set are given for the proposed method and for SAM (spectral angle mapper). The experiments show that the proposed method obtains better classification results than traditional SAM feature analysis and makes fuller use of the spectral information in hyperspectral images to realize precision agricultural crop classification.
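
    The step-measurement idea can be illustrated with a divider-style estimator: measure the length of the spectral curve at several step sizes and read a dimension off the log-log slope. This is a minimal sketch of that general approach, not the paper's algorithm, and the test spectrum is synthetic.

    ```python
    import numpy as np

    def curve_length(y, step):
        """Approximate length of the spectral curve sampled every `step` bands."""
        pts = y[::step]
        return np.sum(np.sqrt(step**2 + np.diff(pts)**2))

    def spectral_fractal_dimension(y, steps=(1, 2, 4, 8, 16)):
        """Divider (step-measurement) estimate: fit log(length) vs log(step);
        the estimated dimension is 1 minus the slope."""
        lengths = [curve_length(y, s) for s in steps]
        slope, _ = np.polyfit(np.log(steps), np.log(lengths), 1)
        return 1.0 - slope

    # Hypothetical per-pixel spectrum (e.g., 200 bands)
    rng = np.random.default_rng(0)
    spectrum = np.cumsum(rng.normal(size=200))
    print(spectral_fractal_dimension(spectrum))
    ```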

  6. Self-assembled monolayers improve protein distribution on holey carbon cryo-EM supports

    PubMed Central

    Meyerson, Joel R.; Rao, Prashant; Kumar, Janesh; Chittori, Sagar; Banerjee, Soojay; Pierson, Jason; Mayer, Mark L.; Subramaniam, Sriram

    2014-01-01

    Poor partitioning of macromolecules into the holes of holey carbon support grids frequently limits structural determination by single particle cryo-electron microscopy (cryo-EM). Here, we present a method to deposit, on gold-coated carbon grids, a self-assembled monolayer whose surface properties can be controlled by chemical modification. We demonstrate the utility of this approach to drive partitioning of ionotropic glutamate receptors into the holes, thereby enabling 3D structural analysis using cryo-EM methods. PMID:25403871

  7. Effects of two classification strategies on a Benthic Community Index for streams in the Northern Lakes and Forests Ecoregion

    USGS Publications Warehouse

    Butcher, Jason T.; Stewart, Paul M.; Simon, Thomas P.

    2003-01-01

    Ninety-four sites were used to analyze the effects of two different classification strategies on the Benthic Community Index (BCI). The first, a priori classification, reflected the wetland status of the streams; the second, a posteriori classification, used a bio-environmental analysis to select classification variables. Both classifications were examined by measuring classification strength and testing differences in metric values with respect to group membership. The a priori (wetland) classification strength (83.3%) was greater than the a posteriori (bio-environmental) classification strength (76.8%). Both classifications found one metric that had significant differences between groups. The original index was modified to reflect the wetland classification by re-calibrating the scoring criteria for percent Crustacea and Mollusca. A proposed refinement to the original Benthic Community Index is suggested. This study shows the importance of using hypothesis-driven classifications, as well as exploratory statistical analysis, to evaluate alternative ways to reveal environmental variability in biological assessment tools.

  8. Comparison of Source Partitioning Methods for CO2 and H2O Fluxes Based on High Frequency Eddy Covariance Data

    NASA Astrophysics Data System (ADS)

    Klosterhalfen, Anne; Moene, Arnold; Schmidt, Marius; Ney, Patrizia; Graf, Alexander

    2017-04-01

    Source partitioning of eddy covariance (EC) measurements of CO2 into respiration and photosynthesis is routinely used for a better understanding of the exchange of greenhouse gases, especially between terrestrial ecosystems and the atmosphere. The most frequently used methods are usually based either on relations of fluxes to environmental drivers or on chamber measurements. However, they often depend strongly on assumptions or invasive measurements and usually do not offer partitioning estimates for latent heat fluxes into evaporation and transpiration. SCANLON and SAHU (2008) and SCANLON and KUSTAS (2010) proposed a promising method to estimate the contributions of transpiration and evaporation using measured high frequency time series of CO2 and H2O fluxes - no extra instrumentation necessary. This method (SK10 in the following) is based on the spatial separation and relative strength of sources and sinks of CO2 and water vapor between the sub-canopy and the canopy. Assuming that air from those sources and sinks is not yet perfectly mixed before reaching the EC sensors, partitioning is estimated based on the separate application of flux-variance similarity theory to the stomatal and non-stomatal components of the fluxes, as well as on additional assumptions on stomatal water use efficiency (WUE). The CO2 partitioning method after THOMAS et al. (2008) (TH08 in the following) also follows the argument that the dissimilarities of sources and sinks in and below a canopy affect the relation between H2O and CO2 fluctuations. Instead of involving assumptions on WUE, TH08 directly screens the scattergram of H2O and CO2 fluctuations for signals of joint respiration and evaporation events and applies a conditional sampling methodology. In spite of their different main targets (H2O vs. CO2), both methods can yield partitioning estimates for both fluxes. We therefore compare various sub-methods of SK10 and TH08, including our own modifications (e.g., cluster analysis), to each other, to established source partitioning methods, and to chamber measurements at various agroecosystems. Further, profile measurements and a canopy-resolving Large Eddy Simulation model are used to test the assumptions involved in SK10. Scanlon, T.M., Kustas, W.P., 2010. Partitioning carbon dioxide and water vapor fluxes using correlation analysis. Agricultural and Forest Meteorology 150 (1), 89-99. Scanlon, T.M., Sahu, P., 2008. On the correlation structure of water vapor and carbon dioxide in the atmospheric surface layer: A basis for flux partitioning. Water Resources Research 44 (10), W10418, 15 pp. Thomas, C., Martin, J.G., Goeckede, M., Siqueira, M.B., Foken, T., Law, B.E., Loescher H.W., Katul, G., 2008. Estimating daytime subcanopy respiration from conditional sampling methods applied to multi-scalar high frequency turbulence time series. Agricultural and Forest Meteorology 148 (8-9), 1210-1229.

  9. Chemical amplification based on fluid partitioning

    DOEpatents

    Anderson, Brian L [Lodi, CA; Colston, Jr., Billy W.; Elkin, Chris [San Ramon, CA

    2006-05-09

    A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.

  10. Active control of sound transmission through a double panel partition

    NASA Astrophysics Data System (ADS)

    Sas, P.; Bao, C.; Augusztinovicz, F.; Desmet, W.

    1995-03-01

    The feasibility of improving the insertion loss of lightweight double panel partitions by using small loudspeakers as active noise control sources inside the air gap between both panels of the partition is investigated analytically, numerically and experimentally in this paper. A theoretical analysis of the mechanisms of the fluid-structure interaction of double panel structures is presented in order to gain insight into the physical phenomena underlying the behaviour of a coupled vibro-acoustic system controlled by active methods. The analysis, based on modal coupling theory, enables one to derive some qualitative predictions concerning the potentials and limitations of the proposed approach. The theoretical analysis is valid only for geometrically simple structures. For more complex geometries, numerical simulations are required. Therefore the potential use of active noise control inside double panel structures has been analyzed by using coupled finite element and boundary element methods. To verify the conclusions drawn from the theoretical analysis and the numerical calculation and, above all, to demonstrate the potential of the proposed approach, experiments have been conducted with a laboratory set-up. The performance of the proposed approach was evaluated in terms of relative insertion loss measurements. It is shown that a considerable improvement of the insertion loss has been achieved around the lightly damped resonances of the system for the frequency range investigated (60-220 Hz).

  11. An Application of the Patient Rule-Induction Method for Evaluating the Contribution of the Apolipoprotein E and Lipoprotein Lipase Genes to Predicting Ischemic Heart Disease

    PubMed Central

    Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G.; Tybjærg-Hansen, Anne; Sing, Charles F.

    2007-01-01

    Different combinations of genetic and environmental risk factors are known to contribute to the complex etiology of ischemic heart disease (IHD) in different subsets of individuals. We employed the Patient Rule-Induction Method (PRIM) to select the combination of risk factors and risk factor values that identified each of 16 mutually exclusive partitions of individuals having significantly different levels of risk of IHD. PRIM balances two competing objectives: (1) finding partitions where the risk of IHD is high and (2) maximizing the number of IHD cases explained by the partitions. A sequential PRIM analysis was applied to data on the incidence of IHD collected over 8 years for a sample of 5,455 unrelated individuals from the Copenhagen City Heart Study (CCHS) to assess the added value of variation in two candidate susceptibility genes beyond the traditional, lipid and body mass index risk factors for IHD. An independent sample of 362 unrelated individuals also from the city of Copenhagen was used to test the model obtained for each of the hypothesized partitions. PMID:17436307
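
    A schematic of the PRIM peeling step may help: starting from the full sample, repeatedly trim a small fraction of observations at one edge of one variable so that the mean outcome inside the remaining box rises, stopping at a minimum support. The code below is a simplified illustration on synthetic data, not the sequential PRIM analysis applied to the CCHS cohort.

    ```python
    import numpy as np

    def prim_peel(X, y, alpha=0.05, min_support=0.1):
        """Greedy PRIM-style peeling: repeatedly remove the alpha-fraction edge
        (lowest or highest values of one variable) whose removal most increases
        the mean outcome inside the box, until support falls below min_support."""
        inside = np.ones(len(y), dtype=bool)
        box = {}
        while inside.mean() > min_support:
            best = None
            for j in range(X.shape[1]):
                lo = np.quantile(X[inside, j], alpha)
                hi = np.quantile(X[inside, j], 1 - alpha)
                for bound, keep in (("lo", X[:, j] >= lo), ("hi", X[:, j] <= hi)):
                    cand = inside & keep
                    if cand.sum() and y[cand].mean() > (best[0] if best else -np.inf):
                        best = (y[cand].mean(), j, bound, lo if bound == "lo" else hi, cand)
            if best is None or best[0] <= y[inside].mean():
                break  # no peel improves the box mean
            _, j, bound, cut, inside = best
            box[(j, bound)] = cut
        return box, inside

    # Hypothetical data: 500 subjects, 4 risk factors, binary outcome
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500) > 1).astype(float)
    box, members = prim_peel(X, y)
    print(box, members.mean(), y[members].mean())
    ```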

  12. Discrete wavelet approach to multifractality

    NASA Astrophysics Data System (ADS)

    Isaacson, Susana I.; Gabbanelli, Susana C.; Busch, Jorge R.

    2000-12-01

    The use of wavelet techniques for multifractal analysis generalizes the box counting approach, and in addition provides information on eventual deviations of multifractal behavior. By the introduction of a wavelet partition function Wq and its corresponding free energy β(q), the discrepancies between β(q) and the multifractal free energy r(q) are shown to be indicative of these deviations. We study with Daubechies wavelets (D4) some 1D examples previously treated with Haar wavelets, and we apply the same ideas to some 2D Monte Carlo configurations that simulate a solution under the action of an attractive potential. In this last case, we study the influence on the multifractal spectra and partition functions of four physical parameters: the intensity of the pairwise potential, the temperature, the range of the model potential, and the concentration of the solution. The wavelet partition function Wq carries more information about the cluster statistics than the multifractal partition function Zq, and the location of its peaks contributes to the determination of characteristic scales of the measure. In our experiments, the information provided by Daubechies wavelets is slightly more accurate than the one obtained by Haar wavelets.
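
    A wavelet partition function of the kind described can be sketched by summing |detail coefficient|^q over positions at each decomposition level. The snippet assumes PyWavelets and uses 'db2' (PyWavelets' name for the four-tap Daubechies filter often written D4); the test signal is synthetic.

    ```python
    import numpy as np
    import pywt

    def wavelet_partition_function(signal, q_values, wavelet="db2"):
        """Wq(q, level) = sum over positions of |detail coefficient|^q,
        from a discrete wavelet decomposition (level 1 = coarsest detail band)."""
        coeffs = pywt.wavedec(signal, wavelet)      # [approx, detail_coarsest, ..., detail_finest]
        W = {}
        for level, d in enumerate(coeffs[1:], start=1):
            d = np.abs(d)
            d = d[d > 0]                            # drop exact zeros before |.|^q with q < 0
            for q in q_values:
                W[(q, level)] = np.sum(d ** q)
        return W

    # Hypothetical heavy-tailed test signal
    rng = np.random.default_rng(0)
    x = np.cumsum(rng.standard_cauchy(2048))
    W = wavelet_partition_function(x, q_values=[-2, 0, 2, 4])
    print(W[(2, 1)])
    ```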

  13. Involvement of abscisic acid in correlative control of flower abscission in soybean

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yarrow, G.L.

    1985-01-01

    Studies were carried out in three parts: (1) analysis of endogenous abscisic acid (ABA) in abscising and non-abscising flowers, (2) partitioning of radio-labelled ABA and photoassimilates within the soybean raceme, and (3) shading experiments, wherein endogenous levels, metabolism and partitioning of ABA were determined. Endogenous concentrations of ABA failed to show any consistent relationship to abscission of soybean flowers. Partitioning of radiolabelled ABA and photoassimilates displayed consistently higher sink strengths (% DPM) for both ³H-ABA and ¹⁴C-photoassimilates for non-abscising flowers than for abscising flowers within control racemes. Shading flowers with aluminum foil, 48 hrs prior to sampling, resulted in lowered endogenous ABA concentrations at 12, 17 and 22 days after anthesis (DAA), but not at 0 or 4 DAA. No differences were found in the catabolism of ³H-ABA between shaded (abscising) and non-shaded (non-abscising) flowers. Reduced partitioning of ABA and photoassimilates to shaded flowers resulted when shades were applied at 0, 4, 12, and 17 DAA, but not at 22 DAA.

  14. Bayesian estimation of post-Messinian divergence times in Balearic Island lizards.

    PubMed

    Brown, R P; Terrasa, B; Pérez-Mellado, V; Castro, J A; Hoskisson, P A; Picornell, A; Ramon, M M

    2008-07-01

    Phylogenetic relationships and timings of major cladogenesis events are investigated in the Balearic Island lizards Podarcis lilfordi and P. pityusensis using 2675 bp of mitochondrial and nuclear DNA sequences. Partitioned Bayesian and Maximum Parsimony analyses provided a well-resolved phylogeny with high node-support values. Bayesian MCMC estimation of node dates was investigated by comparing means of posterior distributions from different subsets of the sequence against the most robust analysis, which used multiple partitions and allowed for rate heterogeneity among branches under a rate-drift model. Evolutionary rates were systematically underestimated and thus divergence times overestimated when sequences containing lower numbers of variable sites were used (based on ingroup node constraints). The following analyses allowed the best recovery of node times under the constant-rate (i.e., perfect clock) model: (i) all cytochrome b sequence (partitioned by codon position), (ii) cytochrome b (codon position 3 alone), (iii) NADH dehydrogenase (subunits 1 and 2; partitioned by codon position), (iv) cytochrome b and NADH dehydrogenase sequence together (six gene-codon partitions), (v) all unpartitioned sequence, (vi) a full multipartition analysis (nine partitions). Of these, only (iv) and (vi) performed well under the rate-drift model. These findings have significant implications for dating of recent divergence times in other taxa. The earliest P. lilfordi cladogenesis event (divergence of Menorcan populations) occurred before the end of the Pliocene, some 2.6 Ma. Subsequent events led to a West Mallorcan lineage (2.0 Ma ago), followed 1.2 Ma ago by divergence of populations from the southern part of the Cabrera archipelago from a widely-distributed group from north Cabrera, northern and southern Mallorcan islets. Divergence within P. pityusensis is more recent, with the main Ibiza and Formentera clades sharing a common ancestor about 1.0 Ma ago. Climatic and sea level changes are likely to have initiated cladogenesis, with lineages making secondary contact during periodic landbridge formation. This oscillating cross-archipelago pattern in which ancient divergence is followed by repeated contact resembles that seen between East-West refugia populations from mainland Europe.

  15. Determination of partition coefficient and analysis of nitrophenols by three-phase liquid-phase microextraction coupled with capillary electrophoresis.

    PubMed

    Sanagi, Mohd Marsin; Miskam, Mazidatulakmam; Wan Ibrahim, Wan Aini; Hermawan, Dadan; Aboul-Enein, Hassan Y

    2010-07-01

    A three-phase hollow fiber liquid-phase microextraction method coupled with CE was developed and used for the determination of partition coefficients and analysis of selected nitrophenols in water samples. The selected nitrophenols were extracted from 14 mL of aqueous solution (donor solution) with the pH adjusted to pH 3 into an organic phase (1-octanol) immobilized in the pores of the hollow fiber, and finally back-extracted into 40.0 microL of the acceptor phase (NaOH) at pH 12.0 located inside the lumen of the hollow fiber. The extractions were carried out under the following optimum conditions: donor solution, 0.05 M H3PO4, pH 3.0; organic solvent, 1-octanol; acceptor solution, 40 microL of 0.1 M NaOH, pH 12.0; agitation rate, 1050 rpm; extraction time, 15 min. Under optimized conditions, the calibration curves for the analytes were linear in the range of 0.05-0.30 mg/L with r(2) > 0.9900 and LODs were in the range of 0.01-0.04 mg/L with RSDs of 1.25-2.32%. Excellent enrichment factors of up to 398-fold were obtained. It was found that the partition coefficient (K(a/d)) values were high for 2-nitrophenol, 3-nitrophenol, 4-nitrophenol, 2,4-dinitrophenol and 2,6-dinitrophenol and that the individual partition coefficients (K(org/d) and K(a/org)) promoted efficient simultaneous extraction from the donor through the organic phase and further into the acceptor phase. The developed method was successfully applied for the analysis of water samples.
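
    For orientation, the enrichment factor and the overall acceptor/donor partition coefficient are related through a simple mass balance over the two aqueous phases. The numbers below are hypothetical and the hold-up in the organic phase is ignored, so this is only a back-of-the-envelope sketch, not the paper's formulas.

    ```python
    # Minimal illustration of the quantities estimated in three-phase LPME
    # (all values hypothetical; volumes in microlitres, concentrations in arbitrary units).
    V_donor, V_acceptor = 14_000.0, 40.0      # 14 mL donor, 40 uL acceptor
    C_donor_initial = 1.0
    C_acceptor_final = 300.0                  # measured by CE after extraction

    # Enrichment factor: final acceptor concentration over initial donor concentration
    EF = C_acceptor_final / C_donor_initial

    # Overall acceptor/donor partition coefficient from mass balance
    # (analyte held up in the 1-octanol phase is neglected here)
    C_donor_final = C_donor_initial - C_acceptor_final * V_acceptor / V_donor
    K_ad = C_acceptor_final / C_donor_final

    print(f"EF = {EF:.0f}, K(a/d) = {K_ad:.1f}")
    ```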

  16. A Mapping from the Human Factors Analysis and Classification System (DOD-HFACS) to the Domains of Human Systems Integration (HSI)

    DTIC Science & Technology

    2009-11-01


  17. A partition-limited model for the plant uptake of organic contaminants from soil and water

    USGS Publications Warehouse

    Chiou, C.T.; Sheng, G.; Manes, M.

    2001-01-01

    In dealing with the passive transport of organic contaminants from soils to plants (including crops), a partition-limited model is proposed in which (i) the maximum (equilibrium) concentration of a contaminant in any location in the plant is determined by partition equilibrium with its concentration in the soil interstitial water, which in turn is determined essentially by the concentration in the soil organic matter (SOM) and (ii) the extent of approach to partition equilibrium, as measured by the ratio of the contaminant concentrations in plant water and soil interstitial water, αpt (≤ 1), depends on the transport rate of the contaminant in soil water into the plant and the volume of soil water solution that is required for the plant contaminant level to reach equilibrium with the external soil-water phase. Through reasonable estimates of plant organic-water compositions and of contaminant partition coefficients with various plant components, the model accounts for calculated values of αpt in several published crop-contamination studies, including near-equilibrium values (i.e., αpt ≈ 1) for relatively water-soluble contaminants and lower values for much less soluble contaminants; the differences are attributed to the much higher partition coefficients of the less soluble compounds between plant lipids and plant water, which necessitates much larger volumes of the plant water transport for achieving the equilibrium capacities. The model analysis indicates that for plants with high water contents the plant-water phase acts as the major reservoir for highly water-soluble contaminants. By contrast, the lipid in a plant, even at small amounts, is usually the major reservoir for highly water-insoluble contaminants.

  18. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    This research program deals with the application of high-performance computing methods to the analysis of complete jet engines. We initiated this program by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition, and solution capabilities were successfully tested. We then focused attention on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion that results from these structural displacements. This is treated by a new arbitrary Lagrangian-Eulerian (ALE) technique that models the fluid mesh motion as that of a fictitious mass-spring network. New partitioned analysis procedures to treat this coupled three-component problem are developed. These procedures involve delayed corrections and subcycling. Preliminary results on the stability, accuracy, and MPP computational efficiency are reported.

  19. Canonical Commonality Analysis.

    ERIC Educational Resources Information Center

    Leister, K. Dawn

    Commonality analysis is a method of partitioning variance that has advantages over more traditional "OVA" methods. Commonality analysis indicates the amount of explanatory power that is "unique" to a given predictor variable and the amount of explanatory power that is "common" to or shared with at least one predictor…

  20. Chemical amplification based on fluid partitioning in an immiscible liquid

    DOEpatents

    Anderson, Brian L.; Colston, Bill W.; Elkin, Christopher J.

    2010-09-28

    A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.

  1. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.

    2015-05-15

    Peak-to-average power ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into nonlinear operation, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Out of the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we performed an investigation into the performance of variable length adjacent partitioning (VL-AP) and fixed length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed length adjacent partitioning had better performance than variable length adjacent partitioning. As expected, simulation results showed a slightly better performance of the pseudorandom partitioning technique compared to fixed and variable length adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes were still seen as favorable candidates for PAPR reduction.
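
    The core PTS idea with fixed-length adjacent partitioning can be sketched in a few lines: split the subcarriers into contiguous blocks, IFFT each block, and exhaustively search a small phase alphabet for the combination with the lowest PAPR. The block count, phase set and QPSK symbol below are illustrative choices, not those of the study.

    ```python
    import numpy as np
    from itertools import product

    def papr_db(x):
        """Peak-to-average power ratio of a complex baseband block, in dB."""
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    def pts_adjacent(X, n_parts=4, phases=(1, -1, 1j, -1j)):
        """PTS with fixed-length adjacent partitioning: contiguous subcarrier blocks,
        one IFFT per block, exhaustive search over the phase factors."""
        N = len(X)
        parts = []
        for k in range(n_parts):
            Xk = np.zeros(N, dtype=complex)
            Xk[k * N // n_parts:(k + 1) * N // n_parts] = X[k * N // n_parts:(k + 1) * N // n_parts]
            parts.append(np.fft.ifft(Xk))
        best = None
        for b in product(phases, repeat=n_parts):
            x = sum(bk * pk for bk, pk in zip(b, parts))
            papr = papr_db(x)
            if best is None or papr < best[0]:
                best = (papr, b)
        return best

    # Hypothetical OFDM symbol: 64 QPSK subcarriers
    rng = np.random.default_rng(0)
    X = (rng.choice([1, -1], 64) + 1j * rng.choice([1, -1], 64)) / np.sqrt(2)
    print("original PAPR:", papr_db(np.fft.ifft(X)))
    print("PTS PAPR, phases:", pts_adjacent(X))
    ```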

  2. Instantons on ALE spaces and orbifold partitions

    NASA Astrophysics Data System (ADS)

    Dijkgraaf, Robbert; Sułkowski, Piotr

    2008-03-01

    We consider N = 4 theories on ALE spaces of Ak-1 type. As is well known, their partition functions coincide with Ak-1 affine characters. We show that these partition functions are equal to the generating functions of some peculiar classes of partitions which we introduce under the name 'orbifold partitions'. These orbifold partitions turn out to be related to the generalized Frobenius partitions introduced by G. E. Andrews some years ago. We relate the orbifold partitions to the blended partitions and interpret them explicitly in terms of a free fermion system.

  3. Integrated transcriptome sequencing and dynamic analysis reveal carbon source partitioning between terpenoid and oil accumulation in developing Lindera glauca fruits.

    PubMed

    Niu, Jun; Chen, Yinlei; An, Jiyong; Hou, Xinyu; Cai, Jian; Wang, Jia; Zhang, Zhixiang; Lin, Shanzhi

    2015-10-08

    Lindera glauca fruits (LGF), with their abundance of terpenoids and oil, have emerged as a novel material for industrial and medicinal applications in China, but the complex regulatory mechanisms of carbon source partitioning into the terpenoid biosynthetic pathway (TBP) and the oil biosynthetic pathway (OBP) in developing LGF are still unknown. Here we analyze the contents and compositions of terpenoids and oil across 7 stages of developing LGF, characterizing a dramatic difference in their temporal accumulation patterns. The resulting 3 crucial samples at 50, 125 and 150 days after flowering (DAF) were selected for comparative deep transcriptome analysis. By Illumina sequencing, the approximately 81 million reads obtained were assembled into 69,160 unigenes, among which 174, 71, 81 and 155 unigenes are implicated in glycolysis, the pentose phosphate pathway (PPP), TBP and OBP, respectively. Integrating differential expression profiling and qRT-PCR, we specifically characterize the key enzymes and transcription factors (TFs) involved in regulating carbon allocation ratios for terpenoid or oil accumulation in developing LGF. These results contribute to our understanding of the regulatory mechanisms of carbon source partitioning between terpenoid and oil in developing LGF, and to the improvement of resource utilization and molecular breeding for L. glauca.

  4. Apparatus for chemical amplification based on fluid partitioning in an immiscible liquid

    DOEpatents

    Anderson, Brian L [Lodi, CA; Colston, Bill W [San Ramon, CA; Elkin, Christopher J [San Ramon, CA

    2012-05-08

    A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.

  5. Method for chemical amplification based on fluid partitioning in an immiscible liquid

    DOEpatents

    Anderson, Brian L.; Colston, Bill W.; Elkin, Christopher J.

    2015-06-02

    A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.

  6. Method for chemical amplification based on fluid partitioning in an immiscible liquid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Brian L.; Colston, Bill W.; Elkin, Christopher J.

    A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.

  7. Malay sentiment analysis based on combined classification approaches and Senti-lexicon algorithm.

    PubMed

    Al-Saffar, Ahmed; Awang, Suryanti; Tao, Hai; Omar, Nazlia; Al-Saiagh, Wafaa; Al-Bared, Mohammed

    2018-01-01

    Sentiment analysis techniques are increasingly exploited to categorize opinion text into one or more predefined sentiment classes for the creation and automated maintenance of review-aggregation websites. In this paper, a Malay sentiment analysis classification model is proposed to improve classification performance based on semantic orientation and machine learning approaches. First, a total of 2,478 Malay sentiment-lexicon phrases and words are assigned a synonym and stored with the help of more than one Malay native speaker, and the polarity is manually allotted with a score. In addition, the supervised machine learning approaches and the lexicon knowledge method are combined for Malay sentiment classification, evaluating thirteen features. Finally, three individual classifiers and a combined classifier are used to evaluate the classification accuracy. In the experimental results, a wide range of comparative experiments is conducted on a Malay Reviews Corpus (MRC), and it is demonstrated that the feature extraction improves the performance of Malay sentiment analysis based on the combined classification. However, the results depend on three factors: the features, the number of features and the classification approach.

  8. Malay sentiment analysis based on combined classification approaches and Senti-lexicon algorithm

    PubMed Central

    Awang, Suryanti; Tao, Hai; Omar, Nazlia; Al-Saiagh, Wafaa; Al-bared, Mohammed

    2018-01-01

    Sentiment analysis techniques are increasingly exploited to categorize opinion text into one or more predefined sentiment classes for the creation and automated maintenance of review-aggregation websites. In this paper, a Malay sentiment analysis classification model is proposed to improve classification performance based on semantic orientation and machine learning approaches. First, a total of 2,478 Malay sentiment-lexicon phrases and words are assigned a synonym and stored with the help of more than one Malay native speaker, and the polarity is manually allotted with a score. In addition, the supervised machine learning approaches and the lexicon knowledge method are combined for Malay sentiment classification, evaluating thirteen features. Finally, three individual classifiers and a combined classifier are used to evaluate the classification accuracy. In the experimental results, a wide range of comparative experiments is conducted on a Malay Reviews Corpus (MRC), and it is demonstrated that the feature extraction improves the performance of Malay sentiment analysis based on the combined classification. However, the results depend on three factors: the features, the number of features and the classification approach. PMID:29684036

  9. Determination of zircon/melt trace element partition coefficients from SIMS analysis of melt inclusions in zircon

    NASA Astrophysics Data System (ADS)

    Thomas, J. B.; Bodnar, R. J.; Shimizu, N.; Sinha, A. K.

    2002-09-01

    Partition coefficients (zircon/melt D(M)) for rare earth elements (REE) (La, Ce, Nd, Sm, Dy, Er and Yb) and other trace elements (Ba, Rb, B, Sr, Ti, Y and Nb) between zircon and melt have been calculated from secondary ion mass spectrometric (SIMS) analyses of zircon/melt inclusion pairs. The melt inclusion-mineral (MIM) technique shows that D(REE) increase in compatibility with increasing atomic number, similar to results of previous studies. However, D(REE) determined using the MIM technique are, in general, lower than previously reported values. Calculated D(REE) indicate that light REE with atomic numbers less than Sm are incompatible in zircon and become more incompatible with decreasing atomic number. This behavior is in contrast to most previously published results, which indicate D > 1 and define a flat partitioning pattern for elements from La through Sm. The partition coefficients for the heavy REE determined using the MIM technique are lower than previously published results by factors of ≈15 to 20 but follow a similar trend. These differences are thought to reflect the effects of mineral and/or glass contaminants in samples from earlier studies which employed bulk analysis techniques. D(REE) determined using the MIM technique agree well with values predicted using the equations of Brice (1975), which are based on the size and elasticity of crystallographic sites. The presence of Ce4+ in the melt results in elevated D(Ce) compared to neighboring REE due to the similar valence and size of Ce4+ and Zr4+. Predicted zircon/melt D values for Ce4+ and Ce3+ indicate that the Ce4+/Ce3+ ratios of the melt ranged from about 10^-3 to 10^-2. Partition coefficients for other trace elements determined in this study increase in compatibility in the order Ba < Rb < B < Sr < Ti < Y < Nb, with Ba, Rb, B and Sr showing incompatible behavior (D(M) < 1.0), and Ti, Y and Nb showing compatible behavior (D(M) > 1.0). The effect of partition coefficients on melt evolution during petrogenetic modeling was examined using partition coefficients determined in this study and compared to trends obtained using published partition coefficients. The lower D(REE) determined in this study result in smaller REE bulk distribution coefficients, for a given mineral assemblage, compared to those calculated using previously reported values. As an example, fractional crystallization of an assemblage composed of 35% hornblende, 64.5% plagioclase and 0.5% zircon produces a melt that becomes increasingly more enriched in Yb using the D(Yb) from this study. Using D(Yb) from Fujimaki (1986) results in a melt that becomes progressively depleted in Yb during crystallization.
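
    The melt-evolution argument can be reproduced with the Rayleigh fractional crystallization relation C_melt/C_0 = F^(D_bulk - 1) and the modal assemblage quoted above; the mineral/melt D(Yb) values in the sketch are illustrative placeholders, not the coefficients reported by the study.

    ```python
    import numpy as np

    # Modal proportions are those quoted in the abstract; the partition
    # coefficients below are illustrative placeholders only.
    mode = {"hornblende": 0.35, "plagioclase": 0.645, "zircon": 0.005}
    D_Yb = {"hornblende": 1.0, "plagioclase": 0.07}

    for zircon_D, label in [(25.0, "lower zircon D(Yb)"),
                            (400.0, "higher zircon D(Yb)")]:
        D_Yb["zircon"] = zircon_D
        D_bulk = sum(D_Yb[m] * mode[m] for m in mode)   # bulk distribution coefficient
        F = np.array([1.0, 0.8, 0.6, 0.4])              # melt fraction remaining
        C_rel = F ** (D_bulk - 1)                       # Rayleigh: C_melt / C_0
        print(f"{label}: D_bulk = {D_bulk:.2f}, C_melt/C0 = {C_rel.round(2)}")
    ```

    With a bulk D below 1 the residual melt grows richer in Yb as crystallization proceeds; with a bulk D above 1 it is progressively depleted, which is the qualitative contrast the abstract describes.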

  10. EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.

    1994-01-01

    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.

  11. AUTOCLASS III - AUTOMATIC CLASS DISCOVERY FROM DATA

    NASA Technical Reports Server (NTRS)

    Cheeseman, P. C.

    1994-01-01

    The program AUTOCLASS III, Automatic Class Discovery from Data, uses Bayesian probability theory to provide a simple and extensible approach to problems such as classification and general mixture separation. Its theoretical basis is free from ad hoc quantities, and in particular free of any measures which alter the data to suit the needs of the program. As a result, the elementary classification model used lends itself easily to extensions. The standard approach to classification in much of artificial intelligence and statistical pattern recognition research involves partitioning of the data into separate subsets, known as classes. AUTOCLASS III uses the Bayesian approach in which classes are described by probability distributions over the attributes of the objects, specified by a model function and its parameters. The calculation of the probability of each object's membership in each class provides a more intuitive classification than absolute partitioning techniques. AUTOCLASS III is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or omitted. The user specifies a class probability distribution function by associating attribute sets with supplied likelihood function terms. AUTOCLASS then searches in the space of class numbers and parameters for the maximally probable combination. It returns the set of class probability function parameters, and the class membership probabilities for each data instance. AUTOCLASS III is written in Common Lisp, and is designed to be platform independent. This program has been successfully run on Symbolics and Explorer Lisp machines. It has been successfully used with the following implementations of Common LISP on the Sun: Franz Allegro CL, Lucid Common Lisp, and Austin Kyoto Common Lisp and similar UNIX platforms; under the Lucid Common Lisp implementations on VAX/VMS v5.4, VAX/Ultrix v4.1, and MIPS/Ultrix v4, rev. 179; and on the Macintosh personal computer. The minimum Macintosh required is the IIci. This program will not run under CMU Common Lisp or VAX/VMS DEC Common Lisp. A minimum of 8Mb of RAM is required for Macintosh platforms and 16Mb for workstations. The standard distribution medium for this program is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 3.5 inch diskette in Macintosh format. An electronic copy of the documentation is included on the distribution medium. AUTOCLASS was developed between March 1988 and March 1992. It was initially released in May 1991. Sun is a trademark of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation. Macintosh is a trademark of Apple Computer, Inc. Allegro CL is a registered trademark of Franz, Inc.
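
    AUTOCLASS III itself is a Common Lisp program; as a loose modern analogue of its soft, probabilistic class membership (not the AUTOCLASS model itself), a variational Bayesian Gaussian mixture from scikit-learn can be fitted with more components than expected and left to prune the surplus. The data and settings below are arbitrary.

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Hypothetical data: 300 instances, 5 numeric attributes, two latent classes
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (150, 5)), rng.normal(3, 1, (150, 5))])

    # Fit a mixture with more components than expected; the variational Bayesian
    # treatment downweights the unneeded ones, loosely echoing AUTOCLASS's search
    # over the number of classes.
    model = BayesianGaussianMixture(n_components=8, max_iter=500, random_state=0).fit(X)

    # Soft class-membership probabilities rather than a hard partition
    probs = model.predict_proba(X)
    print(probs[0].round(3), model.weights_.round(3))
    ```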

  12. Model‐based analysis of the influence of catchment properties on hydrologic partitioning across five mountain headwater subcatchments

    PubMed Central

    Wagener, Thorsten; McGlynn, Brian

    2015-01-01

    Abstract Ungauged headwater basins are an abundant part of the river network, but dominant influences on headwater hydrologic response remain difficult to predict. To address this gap, we investigated the ability of a physically based watershed model (the Distributed Hydrology‐Soil‐Vegetation Model) to represent controls on metrics of hydrologic partitioning across five adjacent headwater subcatchments. The five study subcatchments, located in Tenderfoot Creek Experimental Forest in central Montana, have similar climate but variable topography and vegetation distribution. This facilitated a comparative hydrology approach to interpret how parameters that influence partitioning, detected via global sensitivity analysis, differ across catchments. Model parameters were constrained a priori using existing regional information and expert knowledge. Influential parameters were compared to perceptions of catchment functioning and its variability across subcatchments. Despite between‐catchment differences in topography and vegetation, hydrologic partitioning across all metrics and all subcatchments was sensitive to a similar subset of snow, vegetation, and soil parameters. Results also highlighted one subcatchment with low certainty in parameter sensitivity, indicating that the model poorly represented some complexities in this subcatchment likely because an important process is missing or poorly characterized in the mechanistic model. For use in other basins, this method can assess parameter sensitivities as a function of the specific ungauged system to which it is applied. Overall, this approach can be employed to identify dominant modeled controls on catchment response and their agreement with system understanding. PMID:27642197

  13. Differential partition of virulent Aeromonas salmonicida and attenuated derivatives possessing specific cell surface alterations in polymer aqueous-phase systems

    NASA Technical Reports Server (NTRS)

    Van Alstine, J. M.; Trust, T. J.; Brooks, D. E.

    1986-01-01

    Two-polymer aqueous-phase systems in which partitioning of biological matter between the phases occurs according to surface properties such as hydrophobicity, charge, and lipid composition are used to compare the surface properties of strains of the fish pathogen Aeromonas salmonicida. The differential ability of strains to produce a surface protein array crucial to their virulence, the A layer, and to produce smooth lipopolysaccharide is found to be important in the partitioning behavior of Aeromonas salmonicida. The presence of the A layer is shown to decrease the surface hydrophilicity of the pathogen, and to increase specifically its surface affinity for fatty acid esters of polyethylene glycol. The method has application to the analysis of surface properties crucial to bacterial virulence, and to the selection of strains and mutants with specific surface characteristics.

  14. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms are associated with some disadvantages such as large block delay, quantization error due to computation of large size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed where the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both of the proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
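
    The partitioned-block idea that FPBFXLMS builds on is uniformly partitioned frequency-domain filtering: the long impulse response is split into equal blocks, each applied with a small FFT and accumulated through a frequency-domain delay line. The sketch below shows only that filtering core (no adaptive FXLMS loop) and checks it against direct convolution; block and filter lengths are arbitrary.

    ```python
    import numpy as np

    def partitioned_fir(x, h, part_len=64):
        """Uniformly partitioned overlap-save FIR filtering: the long impulse
        response h is split into equal partitions, each applied in the frequency
        domain and accumulated, so only small FFTs are ever taken."""
        P = int(np.ceil(len(h) / part_len))
        h_pad = np.concatenate([h, np.zeros(P * part_len - len(h))])
        # One FFT (size 2*part_len) per partition of the impulse response
        H = [np.fft.rfft(h_pad[p*part_len:(p+1)*part_len], 2*part_len) for p in range(P)]
        fdl = [np.zeros(part_len + 1, dtype=complex) for _ in range(P)]  # frequency-domain delay line
        y = np.zeros(len(x))
        buf = np.zeros(2 * part_len)
        for n in range(0, len(x) - part_len + 1, part_len):
            buf = np.concatenate([buf[part_len:], x[n:n+part_len]])      # overlap-save input buffer
            fdl = [np.fft.rfft(buf)] + fdl[:-1]                          # newest block spectrum first
            Y = sum(Hp * Xp for Hp, Xp in zip(H, fdl))
            y[n:n+part_len] = np.fft.irfft(Y)[part_len:]                 # keep the alias-free half
        return y

    # Hypothetical check against direct convolution
    rng = np.random.default_rng(0)
    x, h = rng.normal(size=1024), rng.normal(size=200)
    y_ref = np.convolve(x, h)[:1024]
    print(np.allclose(partitioned_fir(x, h), y_ref, atol=1e-8))
    ```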

  15. An analysis of metropolitan land-use by machine processing of earth resources technology satellite data

    NASA Technical Reports Server (NTRS)

    Mausel, P. W.; Todd, W. J.; Baumgardner, M. F.

    1976-01-01

    A successful application of state-of-the-art remote sensing technology in classifying an urban area into its broad land use classes is reported. This research proves that numerous urban features are amenable to classification using ERTS multispectral data automatically processed by computer. Furthermore, such automatic data processing (ADP) techniques permit areal analysis on an unprecedented scale with a minimum expenditure of time. Also, classification results obtained using ADP procedures are consistent, comparable, and replicable. The results of classification are compared with the proposed U. S. G. S. land use classification system in order to determine the level of classification that is feasible to obtain through ERTS analysis of metropolitan areas.

  16. A new algorithm for agile satellite-based acquisition operations

    NASA Astrophysics Data System (ADS)

    Bunkheila, Federico; Ortore, Emiliano; Circi, Christian

    2016-06-01

    Taking advantage of the high manoeuvrability and the accurate pointing of the so-called agile satellites, an algorithm which allows efficient management of the operations concerning optical acquisitions is described. Fundamentally, this algorithm can be subdivided into two parts: in the first one the algorithm operates a geometric classification of the areas of interest and a partitioning of these areas into stripes which develop along the optimal scan directions; in the second one it computes the succession of the time windows in which the acquisition operations of the areas of interest are feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and the performances of the proposed algorithm have been determined and discussed considering the case of the Periodic Sun-Synchronous Orbits.

  17. Stencils and problem partitionings: Their influence on the performance of multiple processor systems

    NASA Technical Reports Server (NTRS)

    Reed, D. A.; Adams, L. M.; Patrick, M. L.

    1986-01-01

    Given a discretization stencil, partitioning the problem domain is an important first step for the efficient solution of partial differential equations on multiple processor systems. Partitions are derived that minimize interprocessor communication when the number of processors is known a priori and each domain partition is assigned to a different processor. This partitioning technique uses the stencil structure to select appropriate partition shapes. For square problem domains, it is shown that non-standard partitions (e.g., hexagons) are frequently preferable to the standard square partitions for a variety of commonly used stencils. This investigation is concluded with a formalization of the relationship between partition shape, stencil structure, and architecture, allowing selection of optimal partitions for a variety of parallel systems.
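
    A toy calculation shows why partition shape matters for a 5-point stencil: the halo exchanged per processor is proportional to the partition's boundary length, so square blocks beat strips once enough processors are used. Interior processors are assumed, and the grid and processor counts below are arbitrary.

    ```python
    import numpy as np

    def comm_per_processor(n, p, shape="square"):
        """Words communicated per interior processor per iteration for a 5-point
        stencil on an n x n grid: the partition boundary (halo) drives the cost."""
        if shape == "strip":                 # p horizontal strips of n x (n/p) cells
            return 2 * n                     # two n-long edges shared with neighbours
        side = n / np.sqrt(p)                # p square blocks of side n/sqrt(p)
        return 4 * side                      # four edges of length n/sqrt(p)

    n, p = 1024, 64
    print("strip :", comm_per_processor(n, p, "strip"))
    print("square:", comm_per_processor(n, p, "square"))
    ```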

  18. Model-based recursive partitioning to identify risk clusters for metabolic syndrome and its components: findings from the International Mobility in Aging Study

    PubMed Central

    Pirkle, Catherine M; Wu, Yan Yan; Zunzunegui, Maria-Victoria; Gómez, José Fernando

    2018-01-01

    Objective Conceptual models underpinning much epidemiological research on ageing acknowledge that environmental, social and biological systems interact to influence health outcomes. Recursive partitioning is a data-driven approach that allows for concurrent exploration of distinct mixtures, or clusters, of individuals that have a particular outcome. Our aim is to use recursive partitioning to examine risk clusters for metabolic syndrome (MetS) and its components, in order to identify vulnerable populations. Study design Cross-sectional analysis of baseline data from a prospective longitudinal cohort called the International Mobility in Aging Study (IMIAS). Setting IMIAS includes sites from three middle-income countries—Tirana (Albania), Natal (Brazil) and Manizales (Colombia)—and two from Canada—Kingston (Ontario) and Saint-Hyacinthe (Quebec). Participants Community-dwelling male and female adults, aged 64–75 years (n=2002). Primary and secondary outcome measures We apply recursive partitioning to investigate social and behavioural risk factors for MetS and its components. Model-based recursive partitioning (MOB) was used to cluster participants into age-adjusted risk groups based on variability in: study site, sex, education, living arrangements, childhood adversities, adult occupation, current employment status, income, perceived income sufficiency, smoking status and weekly minutes of physical activity. Results 43% of participants had MetS. Using MOB, the primary partitioning variable was participant sex. Among women from middle-income sites, the predicted proportion with MetS ranged from 58% to 68%. Canadian women with limited physical activity had elevated predicted proportions of MetS (49%, 95% CI 39% to 58%). Among men, MetS ranged from 26% to 41% depending on childhood social adversity and education. Clustering for MetS components differed from the syndrome and across components. Study site was a primary partitioning variable for all components except HDL cholesterol. Sex was important for most components. Conclusion MOB is a promising technique for identifying disease risk clusters (eg, vulnerable populations) in modestly sized samples. PMID:29500203
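
    MOB as used in the study fits a model in each node and splits on parameter instability; the sketch below uses a plain classification tree on synthetic data only to illustrate the general idea of recursively partitioning participants into risk clusters. None of the variable names, effect sizes or thresholds come from IMIAS.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical cohort: sex (0/1), physical activity (min/week), education (years),
    # childhood adversity score; binary MetS-like outcome.
    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 2, n),            # sex
        rng.uniform(0, 600, n),           # weekly physical activity
        rng.integers(0, 20, n),           # education
        rng.integers(0, 5, n),            # childhood adversities
    ])
    risk = 0.3 + 0.2 * X[:, 0] - 0.0003 * X[:, 1] + 0.03 * X[:, 3]
    y = rng.uniform(size=n) < np.clip(risk, 0, 1)

    # Shallow tree as a stand-in for recursive partitioning into risk clusters
    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
    print(export_text(tree, feature_names=["sex", "activity", "education", "adversity"]))
    ```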

  19. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Sherrill, C. David

    2014-07-01

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.

  20. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parrish, Robert M.; Sherrill, C. David, E-mail: sherrill@gatech.edu

    2014-07-28

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.

  1. Inference and Analysis of Population Structure Using Genetic Data and Network Theory.

    PubMed

    Greenbaum, Gili; Templeton, Alan R; Bar-David, Shirli

    2016-04-01

    Clustering individuals to subpopulations based on genetic data has become commonplace in many genetic studies. Inference about population structure is most often done by applying model-based approaches, aided by visualization using distance-based approaches such as multidimensional scaling. While existing distance-based approaches suffer from a lack of statistical rigor, model-based approaches entail assumptions of prior conditions such as that the subpopulations are at Hardy-Weinberg equilibria. Here we present a distance-based approach for inference about population structure using genetic data by defining population structure using network theory terminology and methods. A network is constructed from a pairwise genetic-similarity matrix of all sampled individuals. The community partition, a partition of a network to dense subgraphs, is equated with population structure, a partition of the population to genetically related groups. Community-detection algorithms are used to partition the network into communities, interpreted as a partition of the population to subpopulations. The statistical significance of the structure can be estimated by using permutation tests to evaluate the significance of the partition's modularity, a network theory measure indicating the quality of community partitions. To further characterize population structure, a new measure of the strength of association (SA) for an individual to its assigned community is presented. The strength of association distribution (SAD) of the communities is analyzed to provide additional population structure characteristics, such as the relative amount of gene flow experienced by the different subpopulations and identification of hybrid individuals. Human genetic data and simulations are used to demonstrate the applicability of the analyses. The approach presented here provides a novel, computationally efficient model-free method for inference about population structure that does not entail assumption of prior conditions. The method is implemented in the software NetStruct (available at https://giligreenbaum.wordpress.com/software/). Copyright © 2016 by the Genetics Society of America.
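
    A minimal sketch of the general workflow described here, using a toy similarity matrix, NetworkX's greedy modularity communities as a stand-in for the paper's community-detection step, and a simple permutation test on modularity; it is not the NetStruct implementation.

    ```python
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities, modularity

    rng = np.random.default_rng(0)

    # Toy pairwise genetic-similarity matrix for 30 individuals (symmetric, zero diagonal).
    S = rng.random((30, 30))
    S = (S + S.T) / 2
    np.fill_diagonal(S, 0.0)

    G = nx.from_numpy_array(S)                      # weighted network from the similarity matrix
    communities = greedy_modularity_communities(G, weight="weight")
    q_obs = modularity(G, communities, weight="weight")

    # Permutation test: shuffle the pairwise similarities and ask how often random
    # data produces a partition at least as modular as the observed one.
    iu = np.triu_indices_from(S, k=1)
    vals = S[iu].copy()
    q_null = []
    for _ in range(200):
        rng.shuffle(vals)
        Sp = np.zeros_like(S)
        Sp[iu] = vals
        Sp = Sp + Sp.T
        Gp = nx.from_numpy_array(Sp)
        q_null.append(modularity(Gp, greedy_modularity_communities(Gp, weight="weight"), weight="weight"))

    p_value = (np.sum(np.array(q_null) >= q_obs) + 1) / (len(q_null) + 1)
    print("observed modularity:", round(q_obs, 3), "permutation p-value:", round(p_value, 3))
    ```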

  2. [Analytic methods for seed models with genotype x environment interactions].

    PubMed

    Zhu, J

    1996-01-01

    Genetic models with genotype effect (G) and genotype x environment interaction effect (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). Seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. Maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can also be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2, and backcrosses. A set of parents, their reciprocal F1, and F2 seeds is applicable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components. Unbiased estimation for covariance components between two traits can also be obtained by the MINQUE(0/1) method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects, which can be further used in t-tests for the parameters. Unbiasedness and efficiency for estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.
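
    The jackknife step named in the abstract can be illustrated with a small sketch: a delete-one jackknife standard error for a variance estimate, where plain sample variance on toy data stands in for a MINQUE(0/1) component estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    trait = rng.normal(10.0, 2.0, size=50)         # toy phenotypic values

    def variance_component(x):
        return np.var(x, ddof=1)                    # stand-in for a MINQUE(0/1) estimate

    # Delete-one jackknife for the sampling variance of the estimate,
    # which can then feed a t-test on the parameter.
    n = len(trait)
    theta_hat = variance_component(trait)
    leave_one_out = np.array([variance_component(np.delete(trait, i)) for i in range(n)])
    pseudo = n * theta_hat - (n - 1) * leave_one_out
    jack_mean = pseudo.mean()
    jack_se = np.sqrt(pseudo.var(ddof=1) / n)
    print("estimate:", theta_hat, "jackknife mean:", jack_mean, "jackknife SE:", jack_se)
    ```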

  3. Dynamic Seascapes Predict the Marine Occurrence of an Endangered Species

    NASA Astrophysics Data System (ADS)

    Breece, M.; Fox, D. A.; Dunton, K. J.; Frisk, M. G.; Jordaan, A.; Oliver, M. J.

    2016-02-01

    Landscapes are powerful environmental partitions that index complex biogeochemical processes that drive terrestrial species distributions. However, translating landscapes into seascapes requires that the dynamic nature of the fluid environment be reflected in spatial and temporal boundaries such that seascapes can be used in marine species distribution models and conservation decisions. A seascape product derived from satellite ocean color and sea surface temperature partitioned mid-Atlantic coastal waters on scales commensurate with the Atlantic Sturgeon Acipenser oxyrinchus oxyrinchus coastal migration. The seascapes were then matched with acoustic telemetry records of Atlantic Sturgeon to determine seascape selectivity. To test our model, we used real-time satellite seascape maps to normalize the sampling of an autonomous underwater vehicle that resampled similar geographic regions with time varying seascape classifications. We found that Atlantic Sturgeon exhibited preference for one seascape class over those available in the coastal ocean, indicating selection for environmental properties that co-varied with the dynamic seascape class rather than geographical location. The recent listing of Atlantic Sturgeon as Endangered throughout much of their United States range has highlighted the need for improved understanding of their occurrence in marine waters to reduce interactions with various anthropogenic stressors. Narrow dynamic migration corridors may enable seascapes to be used as a daily decision tool by industry and managers to reduce interactions with this Endangered Species during coastal migrations.

  4. Dynamic Seascapes Predict the Marine Occurrence of an Endangered Species

    NASA Astrophysics Data System (ADS)

    Breece, M.; Fox, D. A.; Dunton, K. J.; Frisk, M. G.; Jordaan, A.; Oliver, M. J.

    2016-12-01

    Landscapes are powerful environmental partitions that index complex biogeochemical processes that drive terrestrial species distributions. However, translating landscapes into seascapes requires that the dynamic nature of the fluid environment be reflected in spatial and temporal boundaries such that seascapes can be used in marine species distribution models and conservation decisions. A seascape product derived from satellite ocean color and sea surface temperature partitioned mid-Atlantic coastal waters on scales commensurate with the Atlantic Sturgeon Acipenser oxyrinchus oxyrinchus coastal migration. The seascapes were then matched with acoustic telemetry records of Atlantic Sturgeon to determine seascape selectivity. To test our model, we used real-time satellite seascape maps to normalize the sampling of an autonomous underwater vehicle that resampled similar geographic regions with time varying seascape classifications. We found that Atlantic Sturgeon exhibited preference for one seascape class over those available in the coastal ocean, indicating selection for environmental properties that co-varied with the dynamic seascape class rather than geographical location. The recent listing of Atlantic Sturgeon as Endangered throughout much of their United States range has highlighted the need for improved understanding of their occurrence in marine waters to reduce interactions with various anthropogenic stressors. Narrow dynamic migration corridors may enable seascapes to be used as a daily decision tool by industry and managers to reduce interactions with this Endangered Species during coastal migrations.

  5. The Japanese Histologic Classification and T-score in the Oxford Classification system could predict renal outcome in Japanese IgA nephropathy patients.

    PubMed

    Kaihan, Ahmad Baseer; Yasuda, Yoshinari; Katsuno, Takayuki; Kato, Sawako; Imaizumi, Takahiro; Ozeki, Takaya; Hishida, Manabu; Nagata, Takanobu; Ando, Masahiko; Tsuboi, Naotake; Maruyama, Shoichi

    2017-12-01

    The Oxford Classification is utilized globally, but has not been fully validated. In this study, we conducted a comparative analysis between the Oxford Classification and the Japanese Histologic Classification (JHC) to predict renal outcome in Japanese patients with IgA nephropathy (IgAN). A retrospective cohort study including 86 adult IgAN patients was conducted. The Oxford Classification and the JHC were evaluated by 7 independent specialists. The JHC, the MEST score in the Oxford Classification, and crescents were analyzed in association with renal outcome, defined as a 50% increase in serum creatinine. In the multivariate analysis without the JHC, only the T score was significantly associated with renal outcome. In contrast, in the multivariate analysis that included the JHC, a significant association was found only for the JHC. The JHC and the T score in the Oxford Classification were associated with renal outcome among Japanese patients with IgAN. The superiority of the JHC as a predictive index should be validated with a larger study population and with cohort studies in different ethnicities.

  6. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machine classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. An important contribution of this work is the estimation of the DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.
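
    A toy sketch of the two ingredients named here, probabilistic SVM classification and merging the region pair with the smallest dissimilarity criterion, using made-up region features; the real DC also folds in classification and rectangularity terms and operates on actual neighbouring image regions.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    # Toy stand-ins: 6 image regions, each with a mean spectrum (10 bands) and an area.
    means = rng.random((6, 10))
    areas = rng.integers(20, 200, size=6).astype(float)

    # Probabilistic SVM trained on labelled pixels (random toy data, 3 classes).
    X_train, y_train = rng.random((100, 10)), rng.integers(0, 3, size=100)
    svm = SVC(probability=True).fit(X_train, y_train)
    probs = svm.predict_proba(means)                 # class probabilities per region
    print("initial region classes:", probs.argmax(axis=1))

    def dissimilarity(i, j):
        # Toy DC: spectral distance scaled by the smaller region's area; the real
        # criterion also folds in classification and rectangularity features.
        return np.linalg.norm(means[i] - means[j]) * min(areas[i], areas[j])

    # One merging iteration: fuse the pair of regions with the smallest DC and
    # recompute the class probabilities of the merged region.
    pairs = [(i, j) for i in range(len(means)) for j in range(i + 1, len(means))]
    i, j = min(pairs, key=lambda p: dissimilarity(*p))
    merged_mean = (areas[i] * means[i] + areas[j] * means[j]) / (areas[i] + areas[j])
    print("merged regions", i, j, "->", svm.predict_proba(merged_mean[None, :]))
    ```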

  7. Waveform classification and statistical analysis of seismic precursors to the July 2008 Vulcanian Eruption of Soufrière Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Rodgers, Mel; Smith, Patrick; Pyle, David; Mather, Tamsin

    2016-04-01

    Understanding the transition between quiescence and eruption at dome-forming volcanoes, such as Soufrière Hills Volcano (SHV), Montserrat, is important for monitoring volcanic activity during long-lived eruptions. Statistical analysis of seismic events (e.g. spectral analysis and identification of multiplets via cross-correlation) can be useful for characterising seismicity patterns and can be a powerful tool for analysing temporal changes in behaviour. Waveform classification is crucial for volcano monitoring, but consistent classification, both during real-time analysis and for retrospective analysis of previous volcanic activity, remains a challenge. Automated classification allows consistent re-classification of events. We present a machine learning (random forest) approach to rapidly classify waveforms that requires minimal training data. We analyse the seismic precursors to the July 2008 Vulcanian explosion at SHV and show systematic changes in frequency content and multiplet behaviour that had not previously been recognised. These precursory patterns of seismicity may be interpreted as changes in pressure conditions within the conduit during magma ascent and could be linked to magma flow rates. Frequency analysis of the different waveform classes supports the growing consensus that LP and Hybrid events should be considered end members of a continuum of low-frequency source processes. By using both supervised and unsupervised machine-learning methods we investigate the nature of waveform classification and assess current classification schemes.
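
    As a rough illustration of the classification step, the sketch below extracts simple spectral band-power features from toy waveforms and trains a random forest; the actual study uses richer features and real event catalogues.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)

    def band_powers(trace, fs=100.0, edges=(0.5, 2, 5, 10, 20)):
        """Simple spectral features: power in a few frequency bands."""
        spec = np.abs(np.fft.rfft(trace)) ** 2
        freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
        return [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in zip(edges[:-1], edges[1:])]

    # Toy labelled waveforms standing in for different low-frequency event classes.
    X = np.array([band_powers(rng.standard_normal(1024)) for _ in range(60)])
    y = rng.integers(0, 3, size=60)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))
    ```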

  8. Characterization of Escherichia coli isolates from different fecal sources by means of classification tree analysis of fatty acid methyl ester (FAME) profiles.

    PubMed

    Seurinck, Sylvie; Deschepper, Ellen; Deboch, Bishaw; Verstraete, Willy; Siciliano, Steven

    2006-03-01

    Microbial source tracking (MST) methods need to be rapid, inexpensive and accurate. Unfortunately, many MST methods provide a wealth of information that is difficult to interpret by the regulators who use this information to make decisions. This paper describes the use of classification tree analysis to interpret the results of an MST method based on fatty acid methyl ester (FAME) profiles of Escherichia coli isolates, and to present results in a format readily interpretable by water quality managers. Raw sewage E. coli isolates and animal E. coli isolates from cow, dog, gull, and horse were isolated and their FAME profiles collected. Correct classification rates determined with leave-one-out cross-validation yielded a low overall rate of 61%. A higher overall correct classification rate of 85% was obtained when the animal isolates were pooled together and compared to the raw sewage isolates. Bootstrap aggregation, or adaptive resampling and combining, of the FAME profile data increased correct classification rates substantially. Other MST methods may be better suited to differentiate between different fecal sources, but classification tree analysis has enabled us to distinguish raw sewage from animal E. coli isolates, which previously had not been possible with other multivariate methods such as principal component analysis and cluster analysis.
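
    The tree-plus-bootstrap-aggregation idea can be sketched with scikit-learn on toy FAME-like feature vectors, comparing a single decision tree with bagged trees under leave-one-out cross-validation.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(3)
    X = rng.random((60, 15))                 # toy FAME profiles (15 fatty-acid features)
    y = rng.integers(0, 2, size=60)          # 0 = animal source, 1 = raw sewage

    tree = DecisionTreeClassifier(random_state=0)
    bagged = BaggingClassifier(tree, n_estimators=50, random_state=0)   # bootstrap aggregation

    for name, model in [("single tree", tree), ("bagged trees", bagged)]:
        acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
        print(name, "leave-one-out accuracy:", round(float(acc), 2))
    ```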

  9. A multilayered approach for the analysis of perinatal mortality using different classification systems.

    PubMed

    Gordijn, Sanne J; Korteweg, Fleurisca J; Erwich, Jan Jaap H M; Holm, Jozien P; van Diem, Mariet Th; Bergman, Klasien A; Timmer, Albertus

    2009-06-01

    Many classification systems for perinatal mortality are available, all with their own strengths and weaknesses: none of them has been universally accepted. We present a systematic multilayered approach for the analysis of perinatal mortality based on information related to the moment of death, the conditions associated with death and the underlying cause of death, using a combination of representatives of existing classification systems. We compared the existing classification systems regarding their definition of the perinatal period, level of complexity, inclusion of maternal, foetal and/or placental factors, and whether they focus on a clinical or pathological viewpoint. Furthermore, we allocated the classification systems to one of three categories: 'when', 'what' or 'why', depending on whether the allocation of the individual cases of perinatal mortality is based on the moment of death ('when'), the clinical conditions associated with death ('what'), or the underlying cause of death ('why'). A multilayered approach for the analysis and classification of perinatal mortality is possible by using combinations of existing systems; for example, the Wigglesworth or Nordic-Baltic ('when'), ReCoDe ('what'), and Tulip ('why') classification systems. This approach is useful not only for in-depth analysis of perinatal mortality in the developed world but also for analysis of perinatal mortality in developing countries, where resources to investigate death are often limited.

  10. Comparing ecoregional classifications for natural areas management in the Klamath Region, USA

    USGS Publications Warehouse

    Sarr, Daniel A.; Duff, Andrew; Dinger, Eric C.; Shafer, Sarah L.; Wing, Michael; Seavy, Nathaniel E.; Alexander, John D.

    2015-01-01

    We compared three existing ecoregional classification schemes (Bailey, Omernik, and World Wildlife Fund) with two derived schemes (Omernik Revised and Climate Zones) to explore their effectiveness in explaining species distributions and to better understand natural resource geography in the Klamath Region, USA. We analyzed presence/absence data derived from digital distribution maps for trees, amphibians, large mammals, small mammals, migrant birds, and resident birds using three statistical analyses of classification accuracy (Analysis of Similarity, Canonical Analysis of Principal Coordinates, and Classification Strength). The classifications were roughly comparable in classification accuracy, with Omernik Revised showing the best overall performance. Trees showed the strongest fidelity to the classifications, and large mammals showed the weakest fidelity. We discuss the implications for regional biogeography and describe how intermediate resolution ecoregional classifications may be appropriate for use as natural areas management domains.

  11. Regional Climate Modeling over the Marmara Region, Turkey, with Improved Land Cover Data

    NASA Astrophysics Data System (ADS)

    Sertel, E.; Robock, A.

    2007-12-01

    The land surface controls the partitioning of available energy at the surface between sensible and latent heat, and controls the partitioning of available water between evaporation and runoff. The land cover data currently available within regional climate models such as the Regional Atmospheric Modeling System (RAMS), the Fifth-Generation NCAR/Penn State Mesoscale Model (MM5), and the Weather Research and Forecasting (WRF) model were obtained from 1-km Advanced Very High Resolution Radiometer satellite images spanning April 1992 through March 1993 with an unsupervised classification technique. These data are not up-to-date and are not accurate for all regions and some land cover types such as urban areas. Here we introduce new, up-to-date and accurate land cover data for the Marmara Region, Turkey, derived from Landsat Enhanced Thematic Mapper images, into the WRF regional climate model. We used several image processing techniques to create accurate land cover data from Landsat images obtained between 2001 and 2005. First, all images were atmospherically and radiometrically corrected to minimize contamination effects of atmospheric particles and systematic errors. Then, geometric correction was performed for each image to eliminate geometric distortions and define images in a common coordinate system. Finally, unsupervised and supervised classification techniques were utilized to form the most accurate land cover data yet for the study area. Accuracy assessments of the classifications were performed using error matrix and kappa statistics to find the best classification results. The maximum likelihood classification method gave the most accurate results over the study area. We compared the new land cover data with the default WRF land cover data. The WRF land cover data cannot represent urban areas in the cities of Istanbul, Izmit, and Bursa. As an example, both original satellite images and the new land cover data showed the expansion of urban areas into the Istanbul metropolitan area, but in the WRF land cover data only a limited area along the Bosporus is shown as urban. In addition, the new land cover data indicate that the northern part of Istanbul is covered by evergreen and deciduous forest (verified by ground truth data), but the WRF data indicate that most of this region is croplands. In the northern part of the Marmara Region, there is bare ground as a result of open mining activities and this class can be identified in our land cover data, whereas the WRF data indicated this region as woodland. We then used this new data set to conduct WRF simulations for one main and two nested domains, where the innermost domain represents the Marmara Region with 3 km horizontal resolution. The vertical domain of both main and nested domains extends over 28 vertical levels. Initial and boundary conditions were obtained from the National Centers for Environmental Prediction-Department of Energy Reanalysis II, and the Noah model was selected as the land surface model. Two model simulations were conducted: one with the available land cover data and one with the newly created land cover data. Using detailed meteorological station data within the study area, we find that the simulation with the new land cover data set produces better temperature and precipitation simulations for the region, showing the value of accurate land cover data and that changing land cover data can be an important influence on local climate change.
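
    The accuracy-assessment step mentioned here (error matrix and kappa statistics) reduces to a short calculation; the sketch below uses toy reference and mapped labels rather than the study's validation pixels.

    ```python
    import numpy as np
    from sklearn.metrics import confusion_matrix, cohen_kappa_score

    # Toy stand-ins: reference (ground-truth) and mapped land-cover labels
    # for a set of validation pixels (0=urban, 1=forest, 2=cropland, 3=bare).
    reference = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 0, 1, 2])
    mapped    = np.array([0, 1, 1, 1, 1, 2, 0, 3, 3, 0, 1, 2])

    err = confusion_matrix(reference, mapped)      # error matrix
    kappa = cohen_kappa_score(reference, mapped)   # chance-corrected agreement
    overall = np.trace(err) / err.sum()
    print(err)
    print("overall accuracy:", round(overall, 2), "kappa:", round(kappa, 2))
    ```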

  12. Aircraft interior noise models - Sidewall trim, stiffened structures, and cabin acoustics with floor partition

    NASA Technical Reports Server (NTRS)

    Pope, L. D.; Wilby, E. G.; Willis, C. M.; Mayes, W. H.

    1983-01-01

    As part of the continuing development of an aircraft interior noise prediction model, in which a discrete modal representation and power flow analysis are used, theoretical results are considered for inclusion of sidewall trim, stiffened structures, and cabin acoustics with floor partition. For validation purposes, predictions of the noise reductions for three test articles (a bare ring-stringer stiffened cylinder, an unstiffened cylinder with floor and insulation, and a ring-stringer stiffened cylinder with floor and sidewall trim) are compared with measurements.

  13. Polymers as Reference Partitioning Phase: Polymer Calibration for an Analytically Operational Approach To Quantify Multimedia Phase Partitioning.

    PubMed

    Gilbert, Dorothea; Witt, Gesine; Smedes, Foppe; Mayer, Philipp

    2016-06-07

    Polymers are increasingly applied for the enrichment of hydrophobic organic chemicals (HOCs) from various types of samples and media in many analytical partitioning-based measuring techniques. We propose using polymers as a reference partitioning phase and introduce polymer-polymer partitioning as the basis for gaining deeper insight into differences in HOC partitioning between polymers, calibrating analytical methods, checking the consistency of existing partition coefficients, and calculating new ones. Polymer-polymer partition coefficients were determined for polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and organochlorine pesticides (OCPs) by equilibrating 13 silicones, including polydimethylsiloxane (PDMS), and low-density polyethylene (LDPE) in methanol-water solutions. Methanol as a cosolvent ensured that all polymers reached equilibrium, while its effect on the polymers' properties did not significantly affect silicone-silicone partition coefficients. However, we noticed minor cosolvent effects on the determined polymer-polymer partition coefficients. Polymer-polymer partition coefficients near unity confirmed identical absorption capacities of several PDMS materials, whereas larger deviations from unity were indicated within the group of silicones and between silicones and LDPE. Uncertainty in polymer volume due to imprecise coating thickness or the presence of fillers was identified as the source of error for partition coefficients. New polymer-based (LDPE-lipid, PDMS-air) and multimedia partition coefficients (lipid-water, air-water) were calculated by applying the new concept of a polymer as reference partitioning phase and by using polymer-polymer partition coefficients as conversion factors. The present study encourages the use of polymer-polymer partition coefficients, recognizing that polymers can serve as a linking third phase for a quantitative understanding of equilibrium partitioning of HOCs between any two phases.
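
    A small illustration of the "polymer as reference phase" bookkeeping: partition coefficients chain multiplicatively (additively in log units) through shared polymer phases, so polymer-polymer coefficients act as conversion factors. The numbers below are made up for illustration only and are not measured values.

    ```python
    # Illustrative log10 partition coefficients (invented numbers, not data from the study).
    logK_lipid_LDPE = 0.4    # lipid <- LDPE
    logK_LDPE_PDMS  = 0.3    # LDPE <- PDMS  (a polymer-polymer conversion factor)
    logK_PDMS_water = 4.1    # PDMS <- water

    # Thermodynamic consistency lets coefficients chain through the reference polymers:
    # K(lipid-water) = K(lipid-LDPE) * K(LDPE-PDMS) * K(PDMS-water)
    logK_lipid_water = logK_lipid_LDPE + logK_LDPE_PDMS + logK_PDMS_water
    print("log K(lipid-water) ≈", logK_lipid_water, "-> K ≈", 10 ** logK_lipid_water)
    ```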

  14. Quantified degree of eccentricity of aortic valve calcification predicts risk of paravalvular regurgitation and response to balloon post-dilation after self-expandable transcatheter aortic valve replacement.

    PubMed

    Park, Jun-Bean; Hwang, In-Chang; Lee, Whal; Han, Jung-Kyu; Kim, Chi-Hoon; Lee, Seung-Pyo; Yang, Han-Mo; Park, Eun-Ah; Kim, Hyung-Kwan; Chiam, Paul T L; Kim, Yong-Jin; Koo, Bon-Kwon; Sohn, Dae-Won; Ahn, Hyuk; Kang, Joon-Won; Park, Seung-Jung; Kim, Hyo-Soo

    2018-05-15

    Limited data exist regarding the impact of aortic valve calcification (AVC) eccentricity on the risk of paravalvular regurgitation (PVR) and response to balloon post-dilation (BPD) after transcatheter aortic valve replacement (TAVR). We investigated the prognostic value of AVC eccentricity in predicting the risk of PVR and response to BPD in patients undergoing TAVR. We analyzed 85 patients with severe aortic stenosis who underwent self-expandable TAVR (43 women; 77.2±7.1 years). AVC was quantified as the total amount of calcification (total AVC load) and as the eccentricity of calcium (EoC) using calcium volume scoring with contrast computed tomography angiography (CTA). The EoC was defined as the maximum absolute difference in calcium volume scores between 2 adjacent sectors (bi-partition method) or between sectors based on leaflets (leaflet-based method). Total AVC load and bi-partition EoC, but not leaflet-based EoC, were significant predictors for the occurrence of ≥moderate PVR, and bi-partition EoC had a better predictive value than total AVC load (area under the curve [AUC]=0.863 versus 0.760, p for difference=0.006). In multivariate analysis, bi-partition EoC was an independent predictor for the risk of ≥moderate PVR regardless of perimeter oversizing index. A greater bi-partition EoC was the only significant predictor of poor response to BPD (AUC=0.775, p=0.004). Pre-procedural assessment of AVC eccentricity using CTA as "bi-partition EoC" provides useful predictive information on the risk of significant PVR and response to BPD in patients undergoing TAVR with self-expandable valves. Copyright © 2017 Elsevier B.V. All rights reserved.
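
    One plausible reading of the bi-partition EoC is sketched below: given per-sector calcium volume scores around the annulus, take the maximum absolute difference between the two contiguous halves over all rotations. This is an assumption-laden toy, not the authors' exact definition or data.

    ```python
    import numpy as np

    def bipartition_eoc(sector_scores):
        """Hedged reading of the bi-partition EoC: split the annulus into two
        contiguous halves in every possible rotation and take the maximum
        absolute difference in summed calcium volume between the halves."""
        s = np.asarray(sector_scores, dtype=float)
        n = len(s)
        half = n // 2
        doubled = np.concatenate([s, s])        # makes circular windows easy
        diffs = [abs(doubled[i:i + half].sum() - (s.sum() - doubled[i:i + half].sum()))
                 for i in range(n)]
        return max(diffs)

    # Toy per-sector calcium volume scores (mm^3) around the aortic annulus.
    print(bipartition_eoc([120, 80, 40, 30, 35, 95]))
    ```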

  15. A Lego Mindstorms NXT based test bench for multiagent exploratory systems and distributed network partitioning

    NASA Astrophysics Data System (ADS)

    Patil, Riya Raghuvir

    Networks of communicating agents require distributed algorithms for a variety of tasks in the field of network analysis and control. For applications such as swarms of autonomous vehicles, ad hoc and wireless sensor networks, and military and civilian applications such as exploring and patrolling, a robust autonomous system that uses a distributed algorithm for self-partitioning can be significantly helpful. A single team of autonomous vehicles in a field may need to self-dissemble into multiple teams in order to complete multiple control tasks. Moreover, because communicating agents are subject to changes, namely the addition or failure of an agent or link, a distributed or decentralized algorithm is preferable to relying on a central agent. A framework for studying self-partitioning in such multi-agent systems with a basic mobility model not only saves time at the conception stage but also provides a cost-effective prototype without compromising the physical realization of the proposed idea. In this thesis I present my work on the implementation of a flexible, distributed stochastic partitioning algorithm on the Lego Mindstorms NXT, using the graphical programming platform National Instruments LabVIEW and forming a team of communicating agents via the NXT-Bee radio module. We single out mobility, communication, and self-partitioning as the core elements of the work. The goal is to randomly explore a precinct for reference sites. Agents that have discovered reference sites announce their target acquisition; a network is then formed based on the distances between agents, within which the self-partitioning proceeds to find an optimal partition. Further, to illustrate the work, an experimental test bench of five Lego NXT robots is presented.

  16. Multi-locus phylogeny of dolphins in the subfamily Lissodelphininae: character synergy improves phylogenetic resolution

    PubMed Central

    Harlin-Cognato, April D; Honeycutt, Rodney L

    2006-01-01

    Background Dolphins of the genus Lagenorhynchus are anti-tropically distributed in temperate to cool waters. Phylogenetic analyses of cytochrome b sequences have suggested that the genus is polyphyletic; however, many relationships were poorly resolved. In this study, we present a combined-analysis phylogenetic hypothesis for Lagenorhynchus and members of the subfamily Lissodelphininae, which is derived from two nuclear and two mitochondrial data sets and the addition of 34 individuals representing 9 species. In addition, we characterize with parsimony and Bayesian analyses the phylogenetic utility and interaction of characters with statistical measures, including the utility of highly consistent (non-homoplasious) characters as a conservative measure of phylogenetic robustness. We also explore the effects of removing sources of character conflict on phylogenetic resolution. Results Overall, our study provides strong support for the monophyly of the subfamily Lissodelphininae and the polyphyly of the genus Lagenorhynchus. In addition, the simultaneous parsimony analysis resolved and/or improved resolution for 12 nodes including: (1) L. albirostris, L. acutus; (2) L. obscurus and L. obliquidens; and (3) L. cruciger and L. australis. In addition, the Bayesian analysis supported the monophyly of the Cephalorhynchus, and resolved ambiguities regarding the relationship of L. australis/L. cruciger to other members of the genus Lagenorhynchus. The frequency of highly consistent characters varied among data partitions, but the rate of evolution was consistent within data partitions. Although the control region was the greatest source of character conflict, removal of this data partition impeded phylogenetic resolution. Conclusion The simultaneous analysis approach produced a more robust phylogenetic hypothesis for Lagenorhynchus than previous studies, thus supporting a phylogenetic approach employing multiple data partitions that vary in overall rate of evolution. Even in cases where there was apparent conflict among characters, our data suggest a synergistic interaction in the simultaneous analysis, and speak against a priori exclusion of data because of potential conflicts, primarily because phylogenetic results can be less robust. For example, the removal of the control region, the putative source of character conflict, produced spurious results with inconsistencies among and within topologies from parsimony and Bayesian analyses. PMID:17078887

  17. Bi-Partition of Shared Binary Decision Diagrams

    DTIC Science & Technology

    2002-12-01

    independently. Such BDDs are considered a special case of partitioned BDDs [6], [12], [13] and free BDDs (FBDDs) [7], [8]. Note that BDD nomenclature... Applications of partitioned SBDDs are similar to those of partitioned BDDs and FBDDs. When... partitioned SBDD is more canonical than partitioned BDDs and free BDDs (FBDDs). We developed a heuristic bi-partition algorithm for SBDDs, and showed cases

  18. Choosing the best partition of the output from a large-scale simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Challacombe, Chelsea Jordan; Casleton, Emily Michele

    Data partitioning becomes necessary when a large-scale simulation produces more data than can be feasibly stored. The goal is to partition the data, typically so that every element belongs to one and only one partition, and store summary information about each partition, either a representative value plus an estimate of the error or a distribution. Once the partitions are determined and the summary information stored, the raw data is discarded. This process can be performed in situ, meaning while the simulation is running. When creating the partitions there are many decisions that researchers must make. For instance, how to determine when an adequate number of partitions has been created, how the partitions should be created with respect to dividing the data, or how many variables should be considered simultaneously. In addition, decisions must be made for how to summarize the information within each partition. Because of the combinatorial number of possible ways to partition and summarize the data, a method of comparing the different possibilities will help guide researchers into choosing a good partitioning and summarization scheme for their application.
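
    A minimal in-situ-style sketch of the idea, binning one variable by value and keeping only a representative value and an error estimate per partition before discarding the raw data; the paper's point is precisely that many other partitioning and summarization choices are possible.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    data = rng.standard_normal(1_000_000)          # stand-in for one simulation variable

    # Partition the data into equal-width bins and keep only a representative
    # value plus an error estimate per partition; the raw data is then discarded.
    n_parts = 64
    edges = np.linspace(data.min(), data.max(), n_parts + 1)
    which = np.clip(np.digitize(data, edges) - 1, 0, n_parts - 1)

    summary = [(edges[k], edges[k + 1],
                data[which == k].mean() if np.any(which == k) else np.nan,
                data[which == k].std() if np.any(which == k) else np.nan)
               for k in range(n_parts)]
    del data                                        # raw data dropped after summarization
    print(summary[:3])
    ```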

  19. Information analysis of a spatial database for ecological land classification

    NASA Technical Reports Server (NTRS)

    Davis, Frank W.; Dozier, Jeff

    1990-01-01

    An ecological land classification was developed for a complex region in southern California using geographic information system techniques of map overlay and contingency table analysis. Land classes were identified by mutual information analysis of vegetation pattern in relation to other mapped environmental variables. The analysis was weakened by map errors, especially errors in the digital elevation data. Nevertheless, the resulting land classification was ecologically reasonable and performed well when tested with higher quality data from the region.
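
    The mutual information computation at the heart of this analysis can be sketched with two toy co-registered categorical rasters (flattened to 1-D); the study of course uses real vegetation and environmental maps.

    ```python
    import numpy as np
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(5)

    # Toy co-registered categorical rasters flattened to 1-D:
    vegetation = rng.integers(0, 5, size=10_000)       # vegetation classes
    elevation_class = rng.integers(0, 4, size=10_000)  # binned elevation

    # Mutual information measures how much knowing the environmental class
    # reduces uncertainty about the vegetation class (higher = better predictor).
    print(mutual_info_score(vegetation, elevation_class))
    ```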

  20. A hybrid segmentation method for partitioning the liver based on 4D DCE-MR images

    NASA Astrophysics Data System (ADS)

    Zhang, Tian; Wu, Zhiyi; Runge, Jurgen H.; Lavini, Cristina; Stoker, Jaap; van Gulik, Thomas; Cieslak, Kasia P.; van Vliet, Lucas J.; Vos, Frans M.

    2018-03-01

    The Couinaud classification of hepatic anatomy partitions the liver into eight functionally independent segments. Detection and segmentation of the hepatic vein (HV), portal vein (PV) and inferior vena cava (IVC) plays an important role in the subsequent delineation of the liver segments. To facilitate pharmacokinetic modeling of the liver based on the same data, a 4D DCE-MR scan protocol was selected. This yields images with high temporal resolution but low spatial resolution. Since the liver's vasculature consists of many tiny branches, segmentation of these images is challenging. The proposed framework starts with registration of the 4D DCE-MRI series followed by region growing from manually annotated seeds in the main branches of key blood vessels in the liver. It calculates the Pearson correlation between the time intensity curves (TICs) of a seed and all voxels. A maximum correlation map for each vessel is obtained by combining the correlation maps for all branches of the same vessel through a maximum selection per voxel. The maximum correlation map is incorporated in a level set scheme to individually delineate the main vessels. Subsequently, the eight liver segments are segmented based on three vertical intersecting planes fit through the three skeleton branches of HV and IVC's center of mass as well as a horizontal plane fit through the skeleton of PV. Our segmentation regarding delineation of the vessels is more accurate than the results of two state-of-the-art techniques on five subjects in terms of the average symmetric surface distance (ASSD) and modified Hausdorff distance (MHD). Furthermore, the proposed liver partitioning achieves large overlap with manual reference segmentations (expressed in Dice Coefficient) in all but a small minority of segments (mean values between 87% and 94% for segments 2-8). The lower mean overlap for segment 1 (72%) is due to the limited spatial resolution of our DCE-MR scan protocol.
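
    The seed-correlation step can be sketched as below: Pearson correlation of each voxel's time-intensity curve with a seed TIC, followed by a per-voxel maximum over several seeds of the same vessel. The arrays are toy stand-ins, not the authors' registered 4D DCE-MRI data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    T, N = 60, 5000                          # time points x voxels
    tics = rng.standard_normal((T, N))       # toy time-intensity curves per voxel

    def correlation_map(seed, curves):
        """Pearson correlation of one seed TIC with every voxel's TIC."""
        seed = (seed - seed.mean()) / seed.std()
        curves = (curves - curves.mean(axis=0)) / curves.std(axis=0)
        return (seed[:, None] * curves).mean(axis=0)

    # Maximum correlation map over several seeds annotated in the same vessel:
    seeds = [tics[:, 0], tics[:, 1], tics[:, 2]]
    max_corr = np.max([correlation_map(s, tics) for s in seeds], axis=0)
    print(max_corr.shape, max_corr[:5])
    ```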

  1. The software application and classification algorithms for welds radiograms analysis

    NASA Astrophysics Data System (ADS)

    Sikora, R.; Chady, T.; Baniukiewicz, P.; Grzywacz, B.; Lopato, P.; Misztal, L.; Napierała, L.; Piekarczyk, B.; Pietrusewicz, T.; Psuj, G.

    2013-01-01

    The paper presents a software implementation of an Intelligent System for Radiogram Analysis (ISAR). The system is intended to support radiologists in weld quality inspection. The image-processing part of the software, with a graphical user interface, and the weld classification part are described, together with selected classification results. Classification was based on several algorithms: an artificial neural network, k-means clustering, a simplified k-means, and rough set theory.

  2. An initial analysis of LANDSAT 4 Thematic Mapper data for the classification of agricultural, forested wetland, and urban land covers

    NASA Technical Reports Server (NTRS)

    Quattrochi, D. A.; Anderson, J. E.; Brannon, D. P.; Hill, C. L.

    1982-01-01

    An initial analysis of LANDSAT 4 thematic mapper (TM) data for the delineation and classification of agricultural, forested wetland, and urban land covers was conducted. A study area in Poinsett County, Arkansas was used to evaluate a classification of agricultural lands derived from multitemporal LANDSAT multispectral scanner (MSS) data in comparison with a classification of TM data for the same area. Data over Reelfoot Lake in northwestern Tennessee were utilized to evaluate the TM for delineating forested wetland species. A classification of the study area was assessed for accuracy in discriminating five forested wetland categories. Finally, the TM data were used to identify urban features within a small city. A computer generated classification of Union City, Tennessee was analyzed for accuracy in delineating urban land covers. An evaluation of digitally enhanced TM data using principal components analysis to facilitate photointerpretation of urban features was also performed.

  3. Characterizing Heterogeneity within Head and Neck Lesions Using Cluster Analysis of Multi-Parametric MRI Data.

    PubMed

    Borri, Marco; Schmidt, Maria A; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M; Partridge, Mike; Bhide, Shreerang A; Nutting, Christopher M; Harrington, Kevin J; Newbold, Katie L; Leach, Martin O

    2015-01-01

    To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted imaging MRI data from a cohort of squamous cell carcinoma of the head and neck patients. Cumulative distributions of voxels, containing pre and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes.
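
    A compact stand-in for the workflow: PCA to inspect data composition, k-means for k = 2-4, and an internal validation index to choose k (silhouette here, whereas the paper uses dedicated cluster-validation measures), applied to toy voxel features.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(7)
    voxels = rng.random((2000, 6))           # toy multi-parametric features per voxel (DCE + DWI)

    voxels_pc = PCA(n_components=3).fit_transform(voxels)   # inspect data composition

    # Pick the number of clusters with an internal validation index.
    scores = {}
    for k in (2, 3, 4):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels_pc)
        scores[k] = silhouette_score(voxels_pc, labels)
    best_k = max(scores, key=scores.get)
    print(scores, "chosen k:", best_k)
    ```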

  4. Association of West Nile virus illness and urban landscapes in Chicago and Detroit.

    PubMed

    Ruiz, Marilyn O; Walker, Edward D; Foster, Erik S; Haramis, Linn D; Kitron, Uriel D

    2007-03-12

    West Nile virus infection in humans in urban areas of the Midwestern United States has exhibited strong spatial clustering during epidemic years. We derived urban landscape classes from the physical and socio-economic factors hypothesized to be associated with West Nile Virus (WNV) transmission and compared those to human cases of illness in 2002 in Chicago and Detroit. The objectives were to improve understanding of human exposure to virus-infected mosquitoes in the urban context, and to assess the degree to which environmental factors found to be important in Chicago were also found in Detroit. Five urban classes that partitioned the urban space were developed for each city region. The classes had many similarities in the two settings. In both regions, the WNV case rate was considerably higher in the urban class associated with the Inner Suburbs, where 1940-1960 era housing dominates, vegetation cover is moderate, and population density is moderate. The land cover mapping approach played an important role in the successful and consistent classification of the urban areas. The analysis demonstrates how urban form and past land use decisions can influence transmission of a vector-borne virus. In addition, the results are helpful to develop hypotheses regarding urban landscape features and WNV transmission, they provide a structured method to stratify the urban areas to locate representative field study sites specifically for WNV, and this analysis contributes to the question of how the urban environment affects human health.

  5. Multiparametric approach to unravel the mechanism of Strombolian activity at a multivent system: Mt. Etna case study

    NASA Astrophysics Data System (ADS)

    Cannata, Andrea; Del Bello, Elisabetta; Kueppers, Ulrich; Privitera, Eugenio; Ricci, Tullio; Scarlato, Piergiorgio; Sciotto, Mariangela; Spina, Laura; Taddeucci, Jacopo; Pena Fernandez, Juan Jose; Sesterhenn, Joern

    2016-04-01

    On 5th July 2014 an eruptive fissure (hereafter referred to as EF) opened at the base of North-East Crater (NEC) of Mt. Etna. EF produced both Strombolian explosions and lava effusion. Thanks to the multiparametric experiment planned in the framework of MEDSUV project, we had the chance to acquire geophysical and volcanological data, in order to investigate the ongoing volcanic activity at EF. Temporary instruments (2 broadband seismometers, 2 microphones, 3-microphone arrays, a high-speed video camera and a thermal-camera) were deployed near the active vents during 15-16 July 2014 and were integrated with the data recorded by the permanent networks. Several kinds of studies are currently in progress, such as: frequency analysis by Fourier Transform and Short Time Fourier Transform to evaluate the spectral content of both seismic and acoustic signals; partitioning of seismic and acoustic energies, whose time variations could reflect changes in the volcanic dynamics; investigation on the intertimes between explosions to investigate their recurrence behaviour; classification of the waveforms of infrasound events. Furthermore, joint analysis of video signals and seismic-acoustic wavefields outlined relationships between pyroclasts ejection velocity, total erupted mass, peak explosion pressure, and air-ground motion coupling. This multiparametric approach allowed distinguishing and characterizing individually the behavior of the two vents active along the eruptive fissure via their thermal, visible and infrasonic signatures and shed light in the eruptive dynamics.

  6. Gold-standard for computer-assisted morphological sperm analysis.

    PubMed

    Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen

    2017-04-01

    Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, and thus no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small, or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments comparing common techniques for sperm head description and classification. This classification baseline is intended as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, which is achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees, and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments, and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate the inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is well suited to tackling the problem of sperm head classification. We discovered that the correct classification rate was highly variable when trying to discriminate among non-normal sperm heads. By using the Fourier descriptor and SVM, we achieved the best mean correct classification: only 49%. We conclude that the SCIAN-MorphoSpermGS will provide a standard tool for evaluation of characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a specific shape-based descriptor for human sperm heads and a specific classification approach to tackle the problem of high variability within subcategories of abnormal sperm cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
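
    A toy sketch of the best-performing combination reported here, Fourier descriptors plus an SVM, using random stand-in contours; real use would start from segmented sperm-head outlines and the gold-standard labels.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def fourier_descriptors(contour_xy, n_coeffs=10):
        """Translation- and scale-normalised Fourier descriptors of a closed contour."""
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
        coeffs = np.fft.fft(z - z.mean())
        mags = np.abs(coeffs[1:n_coeffs + 1])
        return mags / (mags[0] + 1e-12)      # scale normalisation by the first harmonic

    rng = np.random.default_rng(8)
    # Toy contours standing in for segmented sperm-head outlines.
    contours = [rng.random((64, 2)) for _ in range(40)]
    X = np.array([fourier_descriptors(c) for c in contours])
    y = rng.integers(0, 5, size=40)          # normal / tapered / pyriform / small / amorphous

    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(X[:3]))
    ```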

  7. Methane eddy covariance flux measurements from a low flying aircraft: Bridging the scale gap between local and regional emissions estimates

    NASA Astrophysics Data System (ADS)

    Sayres, D. S.; Dobosy, R.; Dumas, E. J.; Kochendorfer, J.; Wilkerson, J.; Anderson, J. G.

    2017-12-01

    The Arctic contains a large reservoir of organic matter stored in permafrost and clathrates. Varying geology and hydrology across the Arctic, even on small scales, can cause large variability in surface carbon fluxes and in the partitioning between methane and carbon dioxide. This makes upscaling from point-source measurements such as small flux towers or chambers difficult. Ground-based measurements can yield high temporal resolution and detailed information about a specific location, but owing to the inaccessibility of most of the Arctic, measurements have to date been made at very few sites. In August 2013, a small aircraft flying low over the surface (5-30 m) and carrying an air turbulence probe and spectroscopic instruments to measure methane, carbon dioxide, nitrous oxide, water vapor, and their isotopologues flew over the North Slope of Alaska. During the six flights, multiple comparisons were made with a ground-based eddy covariance tower, and three regional survey flights mapped fluxes over three areas of approximately 2500 km2 each. We present an analysis using the Flux Fragment Method and surface landscape classification maps to relate the fluxes to different surface land types. We show examples of how we use the aircraft data to upscale from an eddy covariance tower and map spatial variability across different ecotopes.
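
    The eddy covariance principle underlying both the tower and the aircraft measurements is compact enough to sketch: the flux is the covariance of the fluctuating vertical wind and the fluctuating concentration. The numbers below are toy values, not flight data.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    w = rng.standard_normal(36_000)                # toy vertical wind (m/s), 1 h at 10 Hz
    c = 1.9 + 0.001 * rng.standard_normal(36_000)  # toy CH4 concentration (ppm)

    # Eddy covariance flux is the covariance of the fluctuating parts,
    # F = mean(w' c'), after removing the block means.
    w_prime = w - w.mean()
    c_prime = c - c.mean()
    flux = np.mean(w_prime * c_prime)
    print("CH4 flux (ppm m s^-1):", flux)
    ```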

  8. EEG amplitude modulation analysis for semi-automated diagnosis of Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Falk, Tiago H.; Fraga, Francisco J.; Trambaiolli, Lucas; Anghinah, Renato

    2012-12-01

    Recent experimental evidence has suggested a neuromodulatory deficit in Alzheimer's disease (AD). In this paper, we present a new electroencephalogram (EEG) based metric to quantitatively characterize neuromodulatory activity. More specifically, the short-term EEG amplitude modulation rate-of-change (i.e., modulation frequency) is computed for five EEG subband signals. To test the performance of the proposed metric, a classification task was performed on a database of 32 participants partitioned into three groups of approximately equal size: healthy controls, patients diagnosed with mild AD, and those with moderate-to-severe AD. To gauge the benefits of the proposed metric, performance results were compared with those obtained using EEG spectral peak parameters which were recently shown to outperform other conventional EEG measures. Using a simple feature selection algorithm based on area-under-the-curve maximization and a support vector machine classifier, the proposed parameters resulted in accuracy gains, relative to spectral peak parameters, of 21.3% when discriminating between the three groups and by 50% when mild and moderate-to-severe groups were merged into one. The preliminary findings reported herein provide promising insights that automated tools may be developed to assist physicians in very early diagnosis of AD as well as provide researchers with a tool to automatically characterize cross-frequency interactions and their changes with disease.
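
    The proposed metric can be roughly sketched as: band-pass an EEG subband, take its Hilbert envelope, and examine how fast the envelope itself fluctuates (its modulation spectrum). The toy below is only a schematic of that chain, not the authors' exact parameterization or feature set.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 256.0
    rng = np.random.default_rng(9)
    eeg = rng.standard_normal(int(30 * fs))            # toy 30 s EEG trace

    def modulation_spectrum(signal, band, fs):
        """Band-pass to an EEG subband, take the Hilbert envelope, and look at
        how fast that envelope itself fluctuates (the modulation frequency)."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        subband = filtfilt(b, a, signal)
        envelope = np.abs(hilbert(subband))
        env_spec = np.abs(np.fft.rfft(envelope - envelope.mean())) ** 2
        mod_freqs = np.fft.rfftfreq(envelope.size, d=1.0 / fs)
        return mod_freqs, env_spec

    mod_freqs, env_spec = modulation_spectrum(eeg, band=(8.0, 12.0), fs=fs)   # alpha subband
    print("dominant modulation frequency (Hz):", mod_freqs[np.argmax(env_spec[1:]) + 1])
    ```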

  9. Cryptic diversity in European bats.

    PubMed Central

    Mayer, F.; von Helversen, O.

    2001-01-01

    Different species of bat can be morphologically very similar. In order to estimate the amount of cryptic diversity among European bats we screened the intra- and interspecific genetic variation in 26 European vespertilionid bat species. We sequenced the DNA of subunit 1 of the mitochondrial protein NADH dehydrogenase (ND1) from several individuals of a species, which were sampled in a variety of geographical regions. A phylogeny based on the mitochondrial (mt) DNA data is in good agreement with the current classification in the family. Highly divergent mitochondrial lineages were found in two taxa, which differed in at least 11% of their ND1 sequence. The two mtDNA lineages in Plecotus austriacus correlated with the two subspecies Plecotus austriacus austriacus and Plecotus austriacus kolombatovici. The two mtDNA lineages in Myotis mystacinus were partitioned among two morphotypes. The evidence for two new bat species within Europe is discussed. Convergent adaptive evolution might have contributed to the morphological similarity among distantly related species if they occupy similar ecological niches. Closely related species may differ in their ecology but not necessarily in their morphology. On the other hand, two morphologically clearly different species (Eptesicus serotinus and Eptesicus nilssonii) were found to be genetically very similar. Neither morphological nor mitochondrial DNA sequence analysis alone can be guaranteed to identify species. PMID:11522202

  10. A Novel Two-Step Hierarchical Quantitative Structure-Activity ...

    EPA Pesticide Factsheets

    Background: Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by means of high-throughput screening. Methods and results: A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure–activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: one group comprises compounds with linear IC50 versus LD50 relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models t

  11. A Partitioning Algorithm for Block-Diagonal Matrices With Overlap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy Antoine Atenekeng Kahou; Laura Grigori; Masha Sosonkina

    2008-02-02

    We present a graph partitioning algorithm that aims at partitioning a sparse matrix into a block-diagonal form, such that any two consecutive blocks overlap. We denote this form of the matrix as the overlapped block-diagonal matrix. The partitioned matrix is suitable for applying the explicit formulation of Multiplicative Schwarz preconditioner (EFMS) described in [3]. The graph partitioning algorithm partitions the graph of the input matrix into K partitions, such that every partition Ω_i has at most two neighbors Ω_{i-1} and Ω_{i+1}. First, an ordering algorithm, such as the reverse Cuthill-McKee algorithm, that reduces the matrix profile is performed. An initial overlapped block-diagonal partition is obtained from the profile of the matrix. An iterative strategy is then used to further refine the partitioning by allowing nodes to be transferred between neighboring partitions. Experiments are performed on matrices arising from real-world applications to show the feasibility and usefulness of this approach.
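
    A rough sketch of the first two steps, reverse Cuthill-McKee reordering and cutting the reordered matrix into overlapping diagonal blocks, using SciPy on a random sparse matrix; the refinement stage that transfers nodes between neighboring partitions is omitted.

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    A = sprandom(200, 200, density=0.02, format="csr", random_state=0)
    A = (A + A.T).tocsr()                                  # symmetric sparsity pattern

    perm = reverse_cuthill_mckee(A, symmetric_mode=True)   # profile-reducing ordering
    A_rcm = A[perm][:, perm]

    # Cut the reordered matrix into K blocks along the diagonal, letting each
    # block share an overlap of `ov` rows/columns with its neighbors.
    K, n, ov = 4, A_rcm.shape[0], 10
    step = n // K
    blocks = [(max(0, k * step - ov), min(n, (k + 1) * step + ov)) for k in range(K)]
    print(blocks)                                          # (start, end) of each overlapped block
    ```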

  12. The effect of oxygen fugacity on the partitioning of nickel and cobalt between olivine, silicate melt, and metal

    NASA Technical Reports Server (NTRS)

    Ehlers, Karin; Grove, Timothy L.; Sisson, Thomas W.; Recca, Steven I.; Zervas, Deborah A.

    1992-01-01

    The effect of oxygen fugacity, f(O2), on the partitioning behavior of Ni and Co between olivine, silicate melt, and metal was investigated in the CaO-MgO-Al2O3-SiO2-FeO-Na2O system, an analogue of a chondrule composition from an ordinary chondrite. The conditions were 1350 °C and 1 atm, with values of f(O2) varying between 10^-5.5 and 10^-12.6 atm (i.e., the f(O2) range relevant for crystal/liquid processes in terrestrial planets and meteorite parent bodies). Results of chemical analysis showed that the values of the Ni and Co partitioning coefficients begin to decrease at values of f(O2) that are about 3.9 log units below the nickel-nickel oxide and cobalt-cobalt oxide buffers, respectively, near the metal saturation for the chondrule analogue composition.

  13. Tunable evolutions of shock absorption and energy partitioning in magnetic granular chains

    NASA Astrophysics Data System (ADS)

    Leng, Dingxin; Liu, Guijie; Sun, Lingyu

    2018-01-01

    In this paper, we investigate the tunable characteristics of shock waves propagating in one-dimensional magnetic granular chains at various chain lengths and magnetic flux densities. Based on Hertz contact theory and the Maxwell principle, a discrete element model coupling the elastic and field-induced interaction potentials of adjacent magnetic grains is proposed. We also present a hard-sphere approximation analysis to describe the energy partitioning features of magnetic granular chains. The results demonstrate that, for a fixed magnetic field strength, when the chain length is greater than twice the width of the solitary wave, the chain length has little effect on the output energy of the system; for a fixed chain length, the shock absorption and energy partitioning features of magnetic granular chains are markedly influenced by varying magnetic flux densities. This study implies that magnetic granular chains have the potential to serve as adaptive shock-absorption components for impulse mitigation.

  14. A Novel Space Partitioning Algorithm to Improve Current Practices in Facility Placement

    PubMed Central

    Jimenez, Tamara; Mikler, Armin R; Tiwari, Chetan

    2012-01-01

    In the presence of naturally occurring and man-made public health threats, the feasibility of regional bio-emergency contingency plans plays a crucial role in the mitigation of such emergencies. While the analysis of in-place response scenarios provides a measure of quality for a given plan, it involves human judgment to identify improvements in plans that are otherwise likely to fail. Since resource constraints and government mandates limit the availability of service provided in case of an emergency, computational techniques can determine optimal locations for providing emergency response, assuming that the uniform distribution of demand across homogeneous resources will yield an optimal service outcome. This paper presents an algorithm that recursively partitions the geographic space into sub-regions while equally distributing the population across the partitions. For this method, we have proven the existence of an upper bound on the deviation from the optimal population size for sub-regions. PMID:23853502
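
    A minimal sketch of a recursive equal-population split of this kind (a median-cut scheme of my own for illustration; the paper's actual algorithm and its deviation bound are not reproduced):

    # Minimal sketch: recursively split a set of population points at the median
    # of the wider axis, so that every leaf region ends up with a near-equal
    # share of the population.
    import numpy as np

    def equal_population_partition(points, depth):
        """Return a list of point arrays, one per leaf region (2**depth leaves)."""
        if depth == 0 or len(points) <= 1:
            return [points]
        spans = points.max(axis=0) - points.min(axis=0)
        axis = int(np.argmax(spans))               # split across the wider extent
        order = np.argsort(points[:, axis])
        half = len(points) // 2                    # median split => equal counts
        left, right = points[order[:half]], points[order[half:]]
        return (equal_population_partition(left, depth - 1) +
                equal_population_partition(right, depth - 1))

    rng = np.random.default_rng(0)
    pop = rng.uniform(size=(10_000, 2))            # synthetic population locations
    regions = equal_population_partition(pop, depth=3)
    print([len(r) for r in regions])               # 8 regions, ~1250 points each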

  15. TriageTools: tools for partitioning and prioritizing analysis of high-throughput sequencing data.

    PubMed

    Fimereli, Danai; Detours, Vincent; Konopka, Tomasz

    2013-04-01

    High-throughput sequencing is becoming a popular research tool but carries with it considerable costs in terms of computation time, data storage and bandwidth. Meanwhile, some research applications focusing on individual genes or pathways do not necessitate processing of a full sequencing dataset. Thus, it is desirable to partition a large dataset into smaller, manageable, but relevant pieces. We present a toolkit for partitioning raw sequencing data that includes a method for extracting reads that are likely to map onto pre-defined regions of interest. We show the method can be used to extract information about genes of interest from DNA or RNA sequencing samples in a fraction of the time and disk space required to process and store a full dataset. We report speedup factors between 2.6 and 96, depending on settings and samples used. The software is available at http://www.sourceforge.net/projects/triagetools/.
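
    The extraction step can be illustrated with a toy k-mer filter; this is a hypothetical sketch of the general idea, not TriageTools itself, and the function names are invented:

    # Minimal sketch: keep only reads sharing at least one k-mer with a
    # pre-defined region of interest, so downstream tools see a smaller dataset.
    def kmers(seq, k=21):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def triage_reads(reads, region_seq, k=21, min_hits=1):
        """Yield reads that are likely to map onto the region of interest."""
        index = kmers(region_seq, k)
        for read in reads:
            hits = sum(1 for km in kmers(read, k) if km in index)
            if hits >= min_hits:
                yield read

    region = "ACGT" * 200                          # toy region of interest
    reads = ["ACGT" * 30, "TTTT" * 30]             # one matching, one unrelated read
    print(list(triage_reads(reads, region)))       # only the first read survives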

  16. Community detection in complex networks by using membrane algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren

    Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve the community detection problem in complex networks. The proposed algorithm is based on membrane systems, which consist of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of objects is evaluated according to network modularity. The reaction rules include evolutionary rules and communication rules. Evolutionary rules are responsible for improving the quality of objects and employ the differential evolution algorithm to evolve objects. Communication rules implement the exchange of information among membranes. Finally, the proposed algorithm is evaluated on synthetic networks, real-world networks with known ground-truth partitions, and large-scale networks whose true partitions are unknown. The experimental results indicate the superior performance of the proposed algorithm in comparison with other experimental algorithms.
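
    Since every object is scored by modularity, the evaluation step reduces to the standard Newman-Girvan formula; a minimal sketch (my own helper, using networkx's built-in karate-club example as the network):

    # Minimal sketch (assumption: standard Newman-Girvan modularity is the
    # quality measure) showing how a candidate partition ("object") is scored.
    import networkx as nx

    def modularity(G, partition):
        """Q = sum_c [ L_c/m - (d_c / 2m)^2 ] over communities c."""
        m = G.number_of_edges()
        q = 0.0
        for community in partition:
            sub = G.subgraph(community)
            l_c = sub.number_of_edges()                    # intra-community edges
            d_c = sum(dict(G.degree(community)).values())  # total degree of members
            q += l_c / m - (d_c / (2 * m)) ** 2
        return q

    G = nx.karate_club_graph()
    candidate = [set(range(0, 17)), set(range(17, 34))]    # one candidate object
    print(round(modularity(G, candidate), 3))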

  17. Improving Cluster Analysis with Automatic Variable Selection Based on Trees

    DTIC Science & Technology

    2014-12-01

    regression trees; Daisy, DISsimilAritY; PAM, partitioning around medoids; PMA, penalized multivariate analysis; SPC, sparse principal components; UPGMA, unweighted pair-group average method ... unweighted pair-group average method (UPGMA). This method measures dissimilarities between all objects in two clusters and takes the average value

  18. Simple Spreadsheet Models For Interpretation Of Fractured Media Tracer Tests

    EPA Science Inventory

    An analysis of a gas-phase partitioning tracer test conducted through fractured media is discussed within this paper. The analysis employed matching eight simple mathematical models to the experimental data to determine transport parameters. All of the models tested: two porous...

  19. Correlations between chromatographic parameters and bioactivity predictors of potential herbicides.

    PubMed

    Janicka, Małgorzata

    2014-08-01

    Different liquid chromatography techniques, including reversed-phase liquid chromatography on Purosphere RP-18e, IAM.PC.DD2 and Cosmosil Cholester columns and micellar liquid chromatography with a Purosphere RP-8e column and using buffered sodium dodecyl sulfate-acetonitrile as the mobile phase, were applied to study the lipophilic properties of 15 newly synthesized phenoxyacetic and carbamic acid derivatives, which are potential herbicides. Chromatographic lipophilicity descriptors were used to extrapolate log k parameters (log kw and log km) and log k values. Partitioning lipophilicity descriptors, i.e., log P coefficients in an n-octanol-water system, were computed from the molecular structures of the tested compounds. Bioactivity descriptors, including partition coefficients in a water-plant cuticle system and water-human serum albumin and coefficients for human skin partition and permeation, were calculated in silico by ACD/ADME software using the linear solvation energy relationship of Abraham. Principal component analysis was applied to describe similarities between various chromatographic and partitioning lipophilicities. Highly significant, predictive linear relationships were found between chromatographic parameters and bioactivity descriptors. © The Author [2013]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. An application of the patient rule-induction method for evaluating the contribution of the Apolipoprotein E and Lipoprotein Lipase genes to predicting ischemic heart disease.

    PubMed

    Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G; Tybjaerg-Hansen, Anne; Sing, Charles F

    2007-09-01

    Different combinations of genetic and environmental risk factors are known to contribute to the complex etiology of ischemic heart disease (IHD) in different subsets of individuals. We employed the Patient Rule-Induction Method (PRIM) to select the combination of risk factors and risk factor values that identified each of 16 mutually exclusive partitions of individuals having significantly different levels of risk of IHD. PRIM balances two competing objectives: (1) finding partitions where the risk of IHD is high and (2) maximizing the number of IHD cases explained by the partitions. A sequential PRIM analysis was applied to data on the incidence of IHD collected over 8 years for a sample of 5,455 unrelated individuals from the Copenhagen City Heart Study (CCHS) to assess the added value of variation in two candidate susceptibility genes beyond the traditional, lipid and body mass index risk factors for IHD. An independent sample of 362 unrelated individuals also from the city of Copenhagen was used to test the model obtained for each of the hypothesized partitions. Copyright (c) 2007 Wiley-Liss, Inc.

  1. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
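
    The abstract does not name the blocking model, but the classical Erlang-B formula is the standard telephone-system model for this kind of analysis; a minimal sketch under that assumption, comparing one large server with the same capacity split into isolated partitions:

    # Minimal sketch, assuming the classical Erlang-B formula as the
    # telephone-style blocking model (the specific model used in the paper is
    # not spelled out in the abstract).
    def erlang_b(servers, offered_load):
        """Blocking probability for `servers` channels and Erlang load `offered_load`."""
        b = 1.0
        for n in range(1, servers + 1):                # stable recursive form
            b = offered_load * b / (n + offered_load * b)
        return b

    streams, load = 1000, 900.0                        # total capacity and demand
    monolithic = erlang_b(streams, load)
    partitioned = erlang_b(streams // 4, load / 4)     # 4 isolated partitions
    print(f"monolithic blocking:  {monolithic:.4f}")
    print(f"partitioned blocking: {partitioned:.4f}")  # higher: the cost of partitioning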

  2. Developing QSPR model of gas/particle partition coefficients of neutral poly-/perfluoroalkyl substances

    NASA Astrophysics Data System (ADS)

    Yuan, Quan; Ma, Guangcai; Xu, Ting; Serge, Bakire; Yu, Haiying; Chen, Jianrong; Lin, Hongjun

    2016-10-01

    Poly-/perfluoroalkyl substances (PFASs) are a class of synthetic fluorinated organic substances that raise increasing concern because of their environmental persistence, bioaccumulation and widespread presence in various environmental media and organisms. PFASs can be released into the atmosphere through both direct and indirect sources, and the gas/particle partition coefficient (KP) is an important parameter that helps us to understand their atmospheric behavior. In this study, we developed a temperature-dependent predictive model for log KP of PFASs and analyzed the molecular mechanism that governs their partitioning equilibrium between the gas phase and the particle phase. All theoretical computation was carried out at the B3LYP/6-31G (d, p) level based on neutral molecular structures with the Gaussian 09 program package. The regression model has good statistical performance and robustness. The application domain has also been defined according to OECD guidance. The mechanism analysis shows that electrostatic interaction and dispersion interaction play the most important roles in the partitioning equilibrium. The developed model can be used to predict log KP values of neutral fluorotelomer alcohols and perfluoroalkane sulfonamides/sulfonamidoethanols with different substitutions at nitrogen atoms, providing basic data for their ecological risk assessment.
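
    A minimal sketch of fitting such a temperature-dependent log KP model; the descriptor names and the generated values below are placeholders standing in for the quantum-chemical descriptors actually used in the study:

    # Minimal sketch of a temperature-dependent QSPR fit: log KP is regressed
    # on 1/T and two placeholder molecular descriptors.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 60
    inv_T = 1.0 / rng.uniform(263, 303, n)             # 1/T in 1/K
    polarizability = rng.uniform(10, 40, n)            # placeholder descriptor
    dipole = rng.uniform(0, 5, n)                      # placeholder descriptor
    X = np.column_stack([inv_T, polarizability, dipole])
    log_kp = 3000 * inv_T + 0.05 * polarizability + 0.2 * dipole - 14 \
             + rng.normal(0, 0.2, n)                   # synthetic "observed" log KP

    model = LinearRegression().fit(X, log_kp)
    print("R^2 =", round(model.score(X, log_kp), 3))
    print("coefficients:", model.coef_.round(3))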

  3. Assessing the combined influence of TOC and black carbon in soil-air partitioning of PBDEs and DPs from the Indus River Basin, Pakistan.

    PubMed

    Ali, Usman; Mahmood, Adeel; Syed, Jabir Hussain; Li, Jun; Zhang, Gan; Katsoyiannis, Athanasios; Jones, Kevin C; Malik, Riffat Naseem

    2015-06-01

    Levels of polybrominated diphenyl ethers (PBDEs) and dechlorane plus (DPs) were investigated in the Indus River Basin from Pakistan. Concentrations of ∑PBDEs and ∑DPs ranged between 0.05-2.38 and 0.002-0.53 ng g(-1) in the surface soils, and between 1.43-22.1 and 0.19-7.59 pg m(-3) in the passive air samples, respectively. Black carbon (fBC) and total organic carbon (fTOC) fractions were also measured and ranged between 0.73-1.75 and 0.04-0.2%, respectively. The statistical analysis revealed a stronger influence of fBC than fTOC on the distribution of PBDEs and DPs in the Indus River Basin soils. The BDE congener profile suggested the input of the penta-bromodiphenylether (DE-71) commercial formulation in the study area. Soil-air partitioning of PBDEs was investigated by employing octanol-air partition coefficients (KOA) and black carbon-air partition coefficients (KBC-A). The results of both models suggested the combined influence of total organic carbon (absorption) and black carbon (adsorption) in the studied area. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. On factors structuring the flatfish assemblage in the southern North Sea

    NASA Astrophysics Data System (ADS)

    Piet, G. J.; Pfisterer, A. B.; Rijnsdorp, A. D.

    1998-09-01

    Ten species of flatfish were studied to see to what extent interspecific competition influences their diet or spatial distribution and whether the potential of these flatfish species to avoid interspecific competition through resource partitioning is constrained by specific morphological characteristics. For this, seven morphological characteristics were measured, diet composition was determined from gut content analyses and overlap in distribution was determined from the co-occurrence in trawl hauls. Canonical correspondence analysis revealed the morphological characteristics that were most strongly correlated with the diet composition. Based on these findings the mouth gape was considered to be the most important morphological constraint affecting the choice of food. Two resource dimensions were distinguished along which interspecific competition can act on the flatfish assemblage: the trophic dimension (diet composition) and the spatial dimension (distribution). Resource partitioning was observed along both dimensions separately and, more importantly, the degree of resource partitioning along the two dimensions was negatively correlated. Especially the latter was considered strong circumstantial evidence that interspecific competition is a major factor structuring the flatfish assemblage. Resource partitioning along the two resource dimensions increased with decreasing mouth gape, suggesting that interspecific competition mainly acts on the small-mouthed fish, i.e. juveniles.

  5. Trophic niche partitioning of littoral fish species from the rocky intertidal of Helgoland, Germany

    NASA Astrophysics Data System (ADS)

    Hielscher, N. N.; Malzahn, A. M.; Diekmann, R.; Aberle, N.

    2015-12-01

    During a 3-year field study, interspecific and interannual differences in the trophic ecology of littoral fish species were investigated in the rocky intertidal of Helgoland island (North Sea). We investigated trophic niche partitioning of common coexisting littoral fish species based on a multi-tracer approach using stable isotopes and fatty acids in order to show differences and similarities in resource use and feeding modes. The results of the dual-tracer approach showed clear trophic niche partitioning of the five target fish species, the goldsinny wrasse Ctenolabrus rupestris, the sand goby Pomatoschistus minutus, the painted goby Pomatoschistus pictus, the short-spined sea scorpion Myoxocephalus scorpius and the long-spined sea scorpion Taurulus bubalis. Both stable isotopes and fatty acids showed distinct differences in the trophic ecology of the studied fish species. However, the combined use of the two techniques added resolution at the interannual scale. The sand goby P. minutus showed the largest trophic plasticity, with pronounced variability between years. The present data analysis provides valuable information on trophic niche partitioning of fish species in the littoral zones of Helgoland and on complex benthic food webs in general.

  6. Quantitative investigation into the influence of temperature on carbide and austenite evolution during partitioning of a quenched and partitioned steel

    DOE PAGES

    Pierce, Dean T.; Coughlin, D. R.; Williamson, Don L.; ...

    2016-05-03

    Here, the influence of partitioning temperature on microstructural evolution during quenching and partitioning was investigated in a 0.38C-1.54Mn-1.48Si wt.% steel using Mössbauer spectroscopy and transmission electron microscopy. η-carbide formation occurs in the martensite during the quenching, holding, and partitioning steps. More effective carbon partitioning from martensite to austenite was observed at 450°C than at 400°C, resulting in lower martensite carbon contents, less carbide formation, and greater retained austenite amounts for short partitioning times. Conversely, greater austenite decomposition occurs at 450°C for longer partitioning times. Lastly, cementite forms during austenite decomposition and in the martensite for longer partitioning times at 450°C.

  7. Accuracy of the All Patient Refined Diagnosis Related Groups Classification System in Congenital Heart Surgery

    PubMed Central

    Parnell, Aimee S.; Shults, Justine; Gaynor, J. William; Leonard, Mary B.; Dai, Dingwei; Feudtner, Chris

    2015-01-01

    Background Administrative data is increasingly used to evaluate clinical outcomes and quality of care in pediatric congenital heart surgery (CHS) programs. Several published analyses of large pediatric administrative datasets have relied on the All Patient Refined Diagnosis Related Groups (APR-DRG, version 24) diagnostic classification system. The accuracy of this classification system for patients undergoing CHS is unclear. Methods We performed a retrospective cohort study of all 14,098 patients 0-5 years of age undergoing any of six selected congenital heart operations, ranging in complexity from isolated closure of a ventricular septal defect to single ventricle palliation, at 40 tertiary care pediatric centers in the Pediatric Health Information Systems database between 2007 and 2010. Assigned APR-DRGs (cardiac versus non-cardiac) were compared using chi-squared or Fisher's exact tests between those patients admitted during the first day of life versus later and between those receiving extracorporeal membrane oxygenation support versus not. Recursive partitioning was used to assess the greatest determinants of APR-DRG type in the model. Results Every patient admitted on day of life 1 was assigned to a non-cardiac APR-DRG (p < 0.001 for each procedure). Similarly, use of extracorporeal membrane oxygenation was highly associated with misclassification of congenital heart surgery patients into a non-cardiac APR-DRG (p < 0.001 for each procedure). Cases misclassified into a non-cardiac APR-DRG experienced a significantly increased mortality (p < 0.001). Conclusions In classifying patients undergoing congenital heart surgery, APR-DRG coding has systematic misclassifications, which may result in inaccurate reporting of CHS case volumes and mortality. PMID:24200398

  8. A Hybrid Supervised/Unsupervised Machine Learning Approach to Solar Flare Prediction

    NASA Astrophysics Data System (ADS)

    Benvenuto, Federico; Piana, Michele; Campi, Cristina; Massone, Anna Maria

    2018-01-01

    This paper introduces a novel method for flare forecasting, combining prediction accuracy with the ability to identify the most relevant predictive variables. This result is obtained by means of a two-step approach: first, a supervised regularization method for regression, namely, LASSO is applied, where a sparsity-enhancing penalty term allows the identification of the significance with which each data feature contributes to the prediction; then, an unsupervised fuzzy clustering technique for classification, namely, Fuzzy C-Means, is applied, where the regression outcome is partitioned through the minimization of a cost function and without focusing on the optimization of a specific skill score. This approach is therefore hybrid, since it combines supervised and unsupervised learning; realizes classification in an automatic, skill-score-independent way; and provides effective prediction performances even in the case of imbalanced data sets. Its prediction power is verified against NOAA Space Weather Prediction Center data, using as a test set, data in the range between 1996 August and 2010 December and as training set, data in the range between 1988 December and 1996 June. To validate the method, we computed several skill scores typically utilized in flare prediction and compared the values provided by the hybrid approach with the ones provided by several standard (non-hybrid) machine learning methods. The results showed that the hybrid approach performs classification better than all other supervised methods and with an effectiveness comparable to the one of clustering methods; but, in addition, it provides a reliable ranking of the weights with which the data properties contribute to the forecast.
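
    A minimal sketch of the two-step hybrid, with synthetic stand-ins for the active-region features and a tiny hand-rolled fuzzy c-means (the study's actual feature set, skill scores, and implementation are not reproduced):

    # Minimal sketch: LASSO scores the events and identifies the relevant
    # features, then fuzzy c-means partitions the regression output into two
    # clusters without a fixed threshold.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))                     # 20 candidate features
    w = np.zeros(20); w[:3] = [1.5, -1.0, 0.8]         # only 3 features matter
    y = X @ w + rng.normal(0, 0.3, 500)

    lasso = Lasso(alpha=0.05).fit(X, y)
    print("selected features:", np.flatnonzero(lasso.coef_))
    scores = lasso.predict(X).reshape(-1, 1)

    def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100):
        """Tiny fuzzy c-means for 1-D data; returns cluster centres and memberships."""
        centres = np.quantile(x, np.linspace(0.2, 0.8, c))
        for _ in range(iters):
            d = np.abs(x - centres) + 1e-12                              # (n, c)
            u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
            centres = (u ** m * x).sum(0) / (u ** m).sum(0)
        return centres, u

    centres, u = fuzzy_cmeans_1d(scores)
    labels = u.argmax(axis=1)                          # crisp class per event
    print("cluster centres:", centres.round(2))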

  9. Extending a field-based Sonoran desert vegetation classification to a regional scale using optical and microwave satellite imagery

    NASA Astrophysics Data System (ADS)

    Shupe, Scott Marshall

    2000-10-01

    Vegetation mapping in arid regions facilitates ecological studies, land management, and provides a record to which future land changes can be compared. Accurate and representative mapping of desert vegetation requires a sound field sampling program and a methodology to transform the data collected into a representative classification system. Time and cost constraints require that a remote sensing approach be used if such a classification system is to be applied on a regional scale. However, desert vegetation may be sparse and thus difficult to sense at typical satellite resolutions, especially given the problem of soil reflectance. This study was designed to address these concerns by conducting vegetation mapping research using field and satellite data from the US Army Yuma Proving Ground (USYPG) in Southwest Arizona. Line and belt transect data from the Army's Land Condition Trend Analysis (LCTA) Program were transformed into relative cover and relative density classification schemes using cluster analysis. Ordination analysis of the same data produced two and three-dimensional graphs on which the homogeneity of each vegetation class could be examined. It was found that the use of correspondence analysis (CA), detrended correspondence analysis (DCA), and non-metric multidimensional scaling (NMS) ordination methods was superior to the use of any single ordination method for helping to clarify between-class and within-class relationships in vegetation composition. Analysis of these between-class and within-class relationships was of key importance in examining how well relative cover and relative density schemes characterize the USYPG vegetation. Using these two classification schemes as reference data, maximum likelihood and artificial neural net classifications were then performed on a coregistered dataset consisting of a summer Landsat Thematic Mapper (TM) image, one spring and one summer ERS-1 microwave image, and elevation, slope, and aspect layers. Classifications using a combination of ERS-1 imagery and elevation, slope, and aspect data were superior to classifications carried out using Landsat TM data alone. In all classification iterations it was consistently found that the highest classification accuracy was obtained by using a combination of Landsat TM, ERS-1, and elevation, slope, and aspect data. Maximum likelihood classification accuracy was found to be higher than artificial neural net classification in all cases.

  10. Classification accuracy on the family planning participation status using kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Kurniawan, Dian; Suparti; Sugito

    2018-05-01

    Population growth in Indonesia has increased every year. According to the population census conducted by the Central Bureau of Statistics (BPS) in 2010, the population of Indonesia has reached 237.6 million people. Therefore, to control the population growth rate, the government runs the Family Planning or Keluarga Berencana (KB) program for couples of childbearing age. The purpose of this program is to improve the health of mothers and children and to build a prosperous society by controlling births while keeping population growth in check. The data used in this study are the 2016 updated family data for Semarang city collected by the National Family Planning Coordinating Board (BKKBN). From these data, classifiers based on kernel discriminant analysis were obtained, along with the classification accuracy of that method. The results of the analysis show that normal kernel discriminant analysis gives 71.05 % classification accuracy with 28.95 % classification error, whereas triweight kernel discriminant analysis gives 73.68 % classification accuracy with 26.32 % classification error. For classifying the family planning participation of childbearing-age couples in Semarang City in 2016, the triweight kernel discriminant can therefore be considered better than the normal kernel discriminant.
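
    A minimal sketch of kernel discriminant classification with the two kernels compared above; the single feature, bandwidths, and class distributions are invented stand-ins for the BKKBN survey variables:

    # Minimal sketch: each class gets a kernel density estimate, and a case is
    # assigned to the class with the larger prior-weighted density; both the
    # normal and the triweight kernel are shown.
    import numpy as np

    def normal_kernel(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

    def triweight_kernel(u):
        return np.where(np.abs(u) <= 1, 35 / 32 * (1 - u ** 2) ** 3, 0.0)

    def kde(x, sample, h, kernel):
        return kernel((x[:, None] - sample[None, :]) / h).mean(axis=1) / h

    def classify(x, class0, class1, h, kernel):
        p0 = kde(x, class0, h, kernel) * len(class0)       # density * prior count
        p1 = kde(x, class1, h, kernel) * len(class1)
        return (p1 > p0).astype(int)

    rng = np.random.default_rng(2)
    kb_no, kb_yes = rng.normal(25, 5, 300), rng.normal(35, 6, 400)   # e.g. mother's age
    test_x = np.array([22.0, 30.0, 40.0])
    print("normal kernel   :", classify(test_x, kb_no, kb_yes, h=2.0, kernel=normal_kernel))
    print("triweight kernel:", classify(test_x, kb_no, kb_yes, h=4.0, kernel=triweight_kernel))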

  11. Joint multifractal analysis based on the partition function approach: analytical analysis, numerical simulation and empirical application

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Jie; Jiang, Zhi-Qiang; Gu, Gao-Feng; Xiong, Xiong; Zhou, Wei-Xing

    2015-10-01

    Many complex systems generate multifractal time series which are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross correlations. However, several important issues about these methods are not well understood and most methods consider only one moment order. We study the joint multifractal analysis based on partition function with two moment orders, which was initially invented to investigate fluid fields, and derive analytically several important properties. We apply the method numerically to binomial measures with multifractal cross correlations and bivariate fractional Brownian motions without multifractal cross correlations. For binomial multifractal measures, the explicit expressions of mass function, singularity strength and multifractal spectrum of the cross correlations are derived, which agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross correlations of index volatilities.
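
    A minimal sketch of the joint partition-function calculation under the standard definitions (box measures, two moment orders, and a log-log fit for the joint mass exponent); the test series below are synthetic, not the index-volatility data of the study:

    # Minimal sketch: the joint partition function chi(q1, q2, s) is summed over
    # boxes of size s, and its log-log slope gives the joint mass exponent tau(q1, q2).
    import numpy as np

    def box_measures(x, s):
        """Normalised measure of each non-overlapping box of length s."""
        n = len(x) // s * s
        boxes = np.abs(x[:n]).reshape(-1, s).sum(axis=1)
        return boxes / boxes.sum()

    def joint_tau(x1, x2, q1, q2, sizes):
        log_chi, log_s = [], []
        for s in sizes:
            mu1, mu2 = box_measures(x1, s), box_measures(x2, s)
            log_chi.append(np.log(np.sum(mu1 ** q1 * mu2 ** q2)))
            log_s.append(np.log(s))
        slope, _ = np.polyfit(log_s, log_chi, 1)       # tau(q1, q2) is the slope
        return slope

    rng = np.random.default_rng(3)
    x1 = rng.lognormal(size=2 ** 14)                   # two toy "volatility" series
    x2 = x1 * rng.lognormal(sigma=0.3, size=2 ** 14)   # cross-correlated partner
    sizes = [2 ** k for k in range(4, 11)]
    print("tau(2, 2) =", round(joint_tau(x1, x2, 2, 2, sizes), 3))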

  12. A detailed comparison of analysis processes for MCC-IMS data in disease classification—Automated methods can replace manual peak annotations

    PubMed Central

    Horsch, Salome; Kopczynski, Dominik; Kuthe, Elias; Baumbach, Jörg Ingo; Rahmann, Sven

    2017-01-01

    Motivation Disease classification from molecular measurements typically requires an analysis pipeline from raw noisy measurements to final classification results. Multi capillary column—ion mobility spectrometry (MCC-IMS) is a promising technology for the detection of volatile organic compounds in the air of exhaled breath. From raw measurements, the peak regions representing the compounds have to be identified, quantified, and clustered across different experiments. Currently, several steps of this analysis process require manual intervention of human experts. Our goal is to identify a fully automatic pipeline that yields competitive disease classification results compared to an established but subjective and tedious semi-manual process. Method We combine a large number of modern methods for peak detection, peak clustering, and multivariate classification into analysis pipelines for raw MCC-IMS data. We evaluate all combinations on three different real datasets in an unbiased cross-validation setting. We determine which specific algorithmic combinations lead to high AUC values in disease classifications across the different medical application scenarios. Results The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace-operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable an unbiased high throughput use of the technology. PMID:28910313
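
    A minimal sketch of the last two stages of such an automated pipeline, with synthetic peak lists standing in for real MCC-IMS measurements (DBSCAN for the clustering step and a random forest for classification, as in the best-performing combination):

    # Minimal sketch: DBSCAN clusters detected peaks across measurements into
    # common features, and a random forest classifies samples on the resulting
    # presence/absence feature matrix. Labels here are random placeholders.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_samples, n_peaks = 60, 15
    base = rng.uniform(size=(n_peaks, 2))              # toy (retention time, mobility) peaks
    peaks = np.vstack([base + rng.normal(0, 0.01, base.shape) for _ in range(n_samples)])
    sample_id = np.repeat(np.arange(n_samples), n_peaks)

    clusters = DBSCAN(eps=0.05, min_samples=5).fit_predict(peaks)
    X = np.zeros((n_samples, clusters.max() + 1))
    for sid, c in zip(sample_id, clusters):
        if c >= 0:                                     # ignore DBSCAN noise points
            X[sid, c] = 1.0
    y = rng.integers(0, 2, n_samples)                  # placeholder disease labels

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(2))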

  13. Fate of heavy metals during municipal solid waste incineration.

    PubMed

    Abanades, S; Flamant, G; Gagnepain, B; Gauthier, D

    2002-02-01

    A thermodynamic analysis was performed to determine whether it is suitable to predict the heavy metal (HM) speciation during the Municipal Solid Waste Incineration process. The fate of several selected metals (Cd, Pb, Zn, Cr, Hg, As, Cu, Co, Ni) during incineration was theoretically investigated. The equilibrium analysis predicted the metal partitioning during incineration and determined the impact of operating conditions (temperature and gas composition) on their speciation. The study of the gas composition influence was based on the effects of the contents of oxygen (reducing or oxidising conditions) and chlorine on the HM partitioning. The theoretical HM speciation, which was calculated in a complex system representing a burning sample of Municipal Solid Waste, can explain the real partitioning (obtained from literature results) of all metals among the various ashes except for Pb. Then, the results of the thermodynamic study were compared with those of the characterisation of real incinerator residues, using complementary techniques (chemical extraction series and X-ray micro-analyses). These analyses were performed to determine experimentally the speciation of the three representative metals Cr, Pb, and Zn. The agreement is good for Cr and Zn but again not for Pb, which mainly shows unleachable chemical speciation in the residues. Pb tends to remain in the bottom ash, whereas thermodynamics often predicts its complete volatilisation as chlorides, and thus its presence exclusively in the fly ash.

  14. Supervised DNA Barcodes species classification: analysis, comparisons and results

    PubMed Central

    2014-01-01

    Background Specific fragments, coming from short portions of DNA (e.g., mitochondrial, nuclear, and plastid sequences), have been defined as DNA Barcode and can be used as markers for organisms of the main life kingdoms. Species classification with DNA Barcode sequences has been proven effective on different organisms. Indeed, specific gene regions have been identified as Barcode: COI in animals, rbcL and matK in plants, and ITS in fungi. The classification problem assigns an unknown specimen to a known species by analyzing its Barcode. This task has to be supported with reliable methods and algorithms. Methods In this work the efficacy of supervised machine learning methods to classify species with DNA Barcode sequences is shown. The Weka software suite, which includes a collection of supervised classification methods, is adopted to address the task of DNA Barcode analysis. Classifier families are tested on synthetic and empirical datasets belonging to the animal, fungus, and plant kingdoms. In particular, the function-based method Support Vector Machines (SVM), the rule-based RIPPER, the decision tree C4.5, and the Naïve Bayes method are considered. Additionally, the classification results are compared with respect to ad-hoc and well-established DNA Barcode classification methods. Results A software that converts the DNA Barcode FASTA sequences to the Weka format is released, to adapt different input formats and to allow the execution of the classification procedure. The analysis of results on synthetic and real datasets shows that SVM and Naïve Bayes outperform on average the other considered classifiers, although they do not provide a human interpretable classification model. Rule-based methods have slightly inferior classification performances, but deliver the species specific positions and nucleotide assignments. On synthetic data the supervised machine learning methods obtain superior classification performances with respect to the traditional DNA Barcode classification methods. On empirical data their classification performances are at a comparable level to the other methods. Conclusions The classification analysis shows that supervised machine learning methods are promising candidates for handling with success the DNA Barcoding species classification problem, obtaining excellent performances. To conclude, a powerful tool to perform species identification is now available to the DNA Barcoding community. PMID:24721333
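
    The study itself ran the classifiers in Weka; a minimal Python analogue (toy sequences, assumed 4-mer features) shows the same shape of pipeline, with barcode fragments turned into k-mer counts and fed to a linear SVM and a Naive Bayes model:

    # Minimal sketch, not the Weka setup used in the paper: barcode sequences
    # become k-mer count vectors and are classified by species.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # toy COI-like fragments; in practice these come from FASTA files
    train_seqs = ["ACGTACGGTACG", "ACGTACGATACG", "TTGCAATTGGCC", "TTGCAGTTGGCC"]
    train_species = ["sp_A", "sp_A", "sp_B", "sp_B"]
    test_seqs = ["ACGTACGGTACA", "TTGCAATTGGCA"]

    def kmer_tokens(seq, k=4):
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    for clf in (LinearSVC(), MultinomialNB()):
        model = make_pipeline(CountVectorizer(analyzer=kmer_tokens), clf)
        model.fit(train_seqs, train_species)
        print(type(clf).__name__, "->", list(model.predict(test_seqs)))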

  15. Two Influential Primate Classifications Logically Aligned.

    PubMed

    Franz, Nico M; Pier, Naomi M; Reeder, Deeann M; Chen, Mingmin; Yu, Shizhuo; Kianmajd, Parisa; Bowers, Shawn; Ludäscher, Bertram

    2016-07-01

    Classifications and phylogenies of perceived natural entities change in the light of new evidence. Taxonomic changes, translated into Code-compliant names, frequently lead to name:meaning dissociations across succeeding treatments. Classification standards such as the Mammal Species of the World (MSW) may experience significant levels of taxonomic change from one edition to the next, with potential costs to long-term, large-scale information integration. This circumstance challenges the biodiversity and phylogenetic data communities to express taxonomic congruence and incongruence in ways that both humans and machines can process, that is, to logically represent taxonomic alignments across multiple classifications. We demonstrate that such alignments are feasible for two classifications of primates corresponding to the second and third MSW editions. Our approach has three main components: (i) use of taxonomic concept labels, that is name sec. author (where sec. means according to), to assemble each concept hierarchy separately via parent/child relationships; (ii) articulation of select concepts across the two hierarchies with user-provided Region Connection Calculus (RCC-5) relationships; and (iii) the use of an Answer Set Programming toolkit to infer and visualize logically consistent alignments of these input constraints. Our use case entails the Primates sec. Groves (1993; MSW2-317 taxonomic concepts; 233 at the species level) and Primates sec. Groves (2005; MSW3-483 taxonomic concepts; 376 at the species level). Using 402 RCC-5 input articulations, the reasoning process yields a single, consistent alignment and 153,111 Maximally Informative Relations that constitute a comprehensive meaning resolution map for every concept pair in the Primates sec. MSW2/MSW3. The complete alignment, and various partitions thereof, facilitate quantitative analyses of name:meaning dissociation, revealing that nearly one in three taxonomic names are not reliable across treatments-in the sense of the same name identifying congruent taxonomic meanings. The RCC-5 alignment approach is potentially widely applicable in systematics and can achieve scalable, precise resolution of semantically evolving name usages in synthetic, next-generation biodiversity, and phylogeny data platforms. © The Author(s) 2016. Published by Oxford University Press on behalf of the Society of Systematic Biologists.

  16. Comparative Analysis of RF Emission Based Fingerprinting Techniques for ZigBee Device Classification

    DTIC Science & Technology

    quantify the differences in various RF fingerprinting techniques via comparative analysis of MDA/ML classification results. The findings herein demonstrate...correct classification rates followed by COR-DNA and then RF-DNA in most test cases and especially in low Eb/N0 ranges, where ZigBee is designed to operate.

  17. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…

  18. Strength Analysis on Ship Ladder Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Budianto; Wahyudi, M. T.; Dinata, U.; Ruddianto; Eko P., M. M.

    2018-01-01

    In designing a ship's structure, the designer must follow the rules of the applicable classification standards. In this case, the design of a ladder (staircase) on a ferry ship must be reviewed against the loads encountered during ship operations, both while sailing and during port operations. The classification rules in ship design specify how the structural components are to be calculated, and these components can be analysed using the Finite Element Method. The classification rules used in the design of the ferry ship are those of BKI (Bureau of Classification Indonesia), so the specification of the material composition and the mechanical properties of the material must likewise refer to the classification of the vessel. The structural analysis was performed with a finite element software package. The analysis of the ladder shows that the structure can withstand a load of 140 kg under static, dynamic, and impact conditions. The resulting safety factors indicate that the structure is safe without being excessively strong.

  19. Determining the Metal/Silicate Partition Coefficient of Germanium: Implications for Core and Mantle Differentiation.

    NASA Technical Reports Server (NTRS)

    King, C.; Righter, K.; Danielson, L.; Pando, K.; Lee, C.

    2010-01-01

    Currently there are several hypotheses for the thermal state of the early Earth. Some hypothesize a shallow magma ocean, or a deep magma ocean, or heterogeneous accretion, which requires no magma ocean at all. Previous models are unable to account for the Ge depletion in Earth's mantle relative to CI chondrites. In this study, the element Ge is used to observe the way siderophile elements partition into the metallic core. The purpose of this research is to provide new data for Ge and to further test these models for Earth's early stages. The partition coefficients (D_Ge = c_metal/c_silicate, where D is the partition coefficient of Ge and c is the concentration of Ge in the metal and silicate, respectively) of siderophile elements were studied by performing a series of high pressure, high temperature experiments. They are also dependent on oxygen fugacity, and on metal and silicate composition. Ge is a moderately siderophile element found in both the mantle and core, and has yet to be studied systematically at high temperatures. Moreover, previous work has been limited by the low solubility of Ge in silicate melts (less than 100 ppm and close to detection limits for electron microprobe analysis). Reported here are results from 14 experiments studying the partitioning of Ge between silicate and metallic liquids. The Ge concentrations were then analyzed using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS), which is sensitive enough to detect ppm levels of Ge in the silicate melt.
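
    A minimal sketch of the bookkeeping around such experiments: compute D(Ge) per run and regress log D against log fO2, whose slope reflects the apparent valence of Ge dissolved in the melt. The concentrations below are invented so that the fit returns a slope of about -0.5 (a 2+ cation); they are not the study's data:

    # Minimal sketch (illustrative numbers, not experimental results):
    # D(Ge) = c_metal / c_silicate, and log D vs log fO2 has slope ~ -n/4
    # for an n+ cation in the silicate melt.
    import numpy as np

    log_fO2 = np.array([-8.0, -9.0, -10.0, -11.0, -12.0])     # log atm
    c_metal = np.array([440.0, 440.0, 440.0, 440.0, 440.0])    # ppm Ge in metal
    c_silicate = np.array([44.0, 13.9, 4.4, 1.4, 0.44])        # ppm Ge in silicate

    D = c_metal / c_silicate
    slope, intercept = np.polyfit(log_fO2, np.log10(D), 1)
    print("D(Ge):", D.round(1))
    print(f"log D = {slope:.2f} * log fO2 + {intercept:.2f}  (apparent valence ~ {-4 * slope:.1f}+)")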

  20. Partition behavior of virgin olive oil phenolic compounds in oil-brine mixtures during thermal processing for fish canning.

    PubMed

    Sacchi, Raffaele; Paduano, Antonello; Fiore, Francesca; Della Medaglia, Dorotea; Ambrosino, Maria Luisa; Medina, Isabel

    2002-05-08

    The chemical modifications and partitioning toward the brine phase (5% salt) of major phenol compounds of extra virgin olive oil (EVOO) were studied in a model system formed by sealed cans filled with oil-brine mixtures (5:1, v/v) simulating canned-in-oil food systems. Filled cans were processed in an industrial plant using two sterilization conditions commonly used during fish canning. The partitioning of phenolic compounds toward brine induced by thermal processing was studied by reversed-phase high-performance liquid chromatographic analysis of the phenol fraction extracted from oils and brine. Hydroxytyrosol (1), tyrosol (2), and the complex phenolic compounds containing 1 and 2 (i.e., the dialdehydic form of decarboxymethyl oleuropein aglycon 3, the dialdehydic form of decarboxymethyl ligstroside aglycon 4, and the oleuropein aglycon 6) decreased in the oily phase after sterilization with a marked partitioning toward the brine phase. The increase of the total amount of 1 and 2 after processing, as well as the presence of elenolic acid 7 released in brine, revealed the hydrolysis of the ester bond of hydrolyzable phenolic compounds 3, 4, and 6 during thermal processing. Both phenomena (partitioning toward the water phase and hydrolysis) contribute to explain the loss of phenolic compounds exhibited by EVOO used as filling medium in canned foods, as well as the protection of n-3 polyunsaturated fatty acids in canned-in-EVOO fish products.

  1. Determining metal origins and availability in fluvial deposits by analysis of geochemical baselines and solid-solution partitioning measurements and modelling.

    PubMed

    Vijver, Martina G; Spijker, Job; Vink, Jos P M; Posthuma, Leo

    2008-12-01

    Metals in floodplain soils and sediments (deposits) can originate from lithogenic and anthropogenic sources, and their availability for uptake in biota is hypothesized to depend on both origin and local sediment conditions. In criteria-based environmental risk assessments, these issues are often neglected, implying that local risks are often over-estimated. Current problem definitions in river basin management tend to require a refined, site-specific focus, resulting in a need to address both aspects. This paper focuses on the determination of local environmental availabilities of metals in fluvial deposits by addressing both the origins of the metals and their partitioning over the solid and solution phases. The environmental availability of metals is assumed to be a key force influencing exposure levels in field soils and sediments. Anthropogenic enrichments of Cu, Zn and Pb in top layers could be distinguished from lithogenic background concentrations and described using an aluminium proxy. Cd in top layers was attributed almost fully to anthropogenic enrichment. Anthropogenic enrichments of Cu and Zn were further well represented by cold 2M HNO3 extraction of site samples. For Pb the extractions over-estimated the enrichments. Metal partitioning was measured, and measurements were compared to predictions generated by an empirical regression model and by a mechanistic-kinetic model. The partitioning models predicted metal partitioning in floodplain deposits within about one order of magnitude, though a large inter-sample variability was found for Pb.
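
    A minimal sketch of the solid-solution partitioning bookkeeping behind such comparisons; the Kd definition is standard, but the regression coefficients and sample values below are placeholders, not the study's model:

    # Minimal sketch: Kd is the ratio of solid-phase to pore-water concentration,
    # and a placeholder empirical regression predicts log Kd from pH and
    # organic-carbon content.
    import numpy as np

    c_solid = np.array([120.0, 85.0, 240.0])         # mg/kg Zn in deposit samples
    c_solution = np.array([0.40, 0.22, 0.95])        # mg/L Zn in pore water
    kd_measured = c_solid / c_solution               # L/kg

    def log_kd_model(pH, f_oc, a=0.5, b=0.8, c=0.0):
        """Placeholder empirical regression: log Kd = a*pH + b*log10(f_oc) + c."""
        return a * pH + b * np.log10(f_oc) + c

    pH = np.array([7.2, 6.8, 7.5])
    f_oc = np.array([0.03, 0.02, 0.05])              # organic-carbon fraction
    kd_predicted = 10 ** log_kd_model(pH, f_oc)
    print("measured  Kd:", kd_measured.round(0))
    print("predicted Kd:", kd_predicted.round(0))    # agreement within roughly one order of magnitude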

  2. Radio-metabolite analysis of carbon-11 biochemical partitioning to non-structural carbohydrates for integrated metabolism and transport studies.

    PubMed

    Babst, Benjamin A; Karve, Abhijit A; Judt, Tatjana

    2013-06-01

    Metabolism and phloem transport of carbohydrates are interactive processes, yet each is often studied in isolation from the other. Carbon-11 ((11)C) has been successfully used to study transport and allocation processes dynamically over time. There is a need for techniques to determine metabolic partitioning of newly fixed carbon that are compatible with existing non-invasive (11)C-based methodologies for the study of phloem transport. In this report, we present methods using (11)C-labeled CO2 to trace carbon partitioning to the major non-structural carbohydrates in leaves: sucrose, glucose, fructose and starch. High-performance thin-layer chromatography (HPTLC) was adapted to provide multisample throughput, raising the possibility of measuring different tissues of the same individual plant, or for screening multiple plants. An additional advantage of HPTLC was that phosphor plate imaging of radioactivity had a much higher sensitivity and broader range of sensitivity than radio-HPLC detection, allowing measurement of (11)C partitioning to starch, which was previously not possible. Because of the high specific activity of (11)C and high sensitivity of detection, our method may have additional applications in the study of rapid metabolic responses to environmental changes that occur on a time scale of minutes. The use of this method in tandem with other (11)C assays for transport dynamics and whole-plant partitioning makes a powerful combination of tools to study carbohydrate metabolism and whole-plant transport as integrated processes.

  3. Consequences of Common Topological Rearrangements for Partition Trees in Phylogenomic Inference.

    PubMed

    Chernomor, Olga; Minh, Bui Quang; von Haeseler, Arndt

    2015-12-01

    In phylogenomic analysis the collection of trees with identical score (maximum likelihood or parsimony score) may hamper tree search algorithms. Such collections are coined phylogenetic terraces. For sparse supermatrices with a lot of missing data, the number of terraces and the number of trees on the terraces can be very large. If terraces are not taken into account, a lot of computation time might be unnecessarily spent to evaluate many trees that in fact have identical score. To save computation time during the tree search, it is worthwhile to quickly identify such cases. The score of a species tree is the sum of scores for all the so-called induced partition trees. Therefore, if the topological rearrangement applied to a species tree does not change the induced partition trees, the score of these partition trees is unchanged. Here, we provide the conditions under which the three most widely used topological rearrangements (nearest neighbor interchange, subtree pruning and regrafting, and tree bisection and reconnection) change the topologies of induced partition trees. During the tree search, these conditions allow us to quickly identify whether we can save computation time on the evaluation of newly encountered trees. We also introduce the concept of partial terraces and demonstrate that they occur more frequently than the original "full" terraces. Hence, partial terraces are a more important source of time savings than full terraces. Therefore, taking into account the above conditions and the partial terrace concept will help to speed up the tree search in phylogenomic inference.

  4. Unsupervised segmentation of MRI knees using image partition forests

    NASA Astrophysics Data System (ADS)

    Marčan, Marija; Voiculescu, Irina

    2016-03-01

    Nowadays many people are affected by arthritis, a condition of the joints with limited prevention measures but with various treatment options, the most radical of which is surgical. In order for surgery to be successful, it can make use of careful analysis of patient-based models generated from medical images, usually by manual segmentation. In this work we show how to automate the segmentation of a crucial and complex joint -- the knee. To achieve this goal we rely on our novel way of representing a 3D voxel volume as a hierarchical structure of partitions, which we have named the Image Partition Forest (IPF). The IPF contains several partition layers of increasing coarseness, with partitions nested across layers in the form of adjacency graphs. On the basis of a set of properties (size, mean intensity, coordinates) of each node in the IPF, we classify nodes into different features. Values indicating whether or not any particular node belongs to the femur or tibia are assigned through node filtering and node-based region growing. So far we have evaluated our method on 15 MRI knee images. Our unsupervised segmentation, compared against a hand-segmented gold standard, has achieved an average Dice similarity coefficient of 0.95 for femur and 0.93 for tibia, and an average symmetric surface distance of 0.98 mm for femur and 0.73 mm for tibia. The paper also discusses ways to introduce stricter morphological and spatial conditioning in the bone labelling process.

  5. Partitioning and lipophilicity in quantitative structure-activity relationships.

    PubMed Central

    Dearden, J C

    1985-01-01

    The history of the relationship of biological activity to partition coefficient and related properties is briefly reviewed. The dominance of partition coefficient in quantitation of structure-activity relationships is emphasized, although the importance of other factors is also demonstrated. Various mathematical models of in vivo transport and binding are discussed; most of these involve partitioning as the primary mechanism of transport. The models describe observed quantitative structure-activity relationships (QSARs) well on the whole, confirming that partitioning is of key importance in in vivo behavior of a xenobiotic. The partition coefficient is shown to correlate with numerous other parameters representing bulk, such as molecular weight, volume and surface area, parachor and calculated indices such as molecular connectivity; this is especially so for apolar molecules, because for polar molecules lipophilicity factors into both bulk and polar or hydrogen bonding components. The relationship of partition coefficient to chromatographic parameters is discussed, and it is shown that such parameters, which are often readily obtainable experimentally, can successfully supplant partition coefficient in QSARs. The relationship of aqueous solubility with partition coefficient is examined in detail. Correlations are observed, even with solid compounds, and these can be used to predict solubility. The additive/constitutive nature of partition coefficient is discussed extensively, as are the available schemes for the calculation of partition coefficient. Finally the use of partition coefficient to provide structural information is considered. It is shown that partition coefficient can be a valuable structural tool, especially if the enthalpy and entropy of partitioning are available. PMID:3905374

  6. Object-Based Random Forest Classification of Land Cover from Remotely Sensed Imagery for Industrial and Mining Reclamation

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.

    2018-04-01

    The RF method based on grid-search parameter optimization could achieve a classification accuracy of 88.16 % in the classification of images with multiple feature variables. This classification accuracy was higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method performs better than SVM and ANN and is more capable of handling multidimensional feature variables. The RF method combined with an object-based analysis approach could improve the classification accuracy further. The multiresolution segmentation approach, based on ESP scale parameter optimization, was used to obtain six scales for image segmentation; when the segmentation scale was 49, the classification accuracy reached its highest value of 89.58 %. The classification accuracy of object-based RF classification was 1.42 % higher than that of pixel-based classification (88.16 %), so the classification accuracy was further improved. Therefore, the RF classification method combined with an object-based analysis approach could achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, the interpretation of remotely sensed imagery using the proposed method could provide technical support and a theoretical reference for remote-sensing-based monitoring of land reclamation.
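
    A minimal sketch of the grid-search tuning step for the RF classifier; the feature matrix below is a synthetic stand-in for the object-level spectral and textural variables, and the parameter grid is illustrative:

    # Minimal sketch: tune a random forest by cross-validated grid search over
    # a small parameter grid, then report the best setting and its accuracy.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(5)
    X = rng.normal(size=(500, 12))                     # 12 object-level feature variables
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 500) > 0).astype(int)

    param_grid = {"n_estimators": [100, 300],
                  "max_depth": [None, 10, 20],
                  "max_features": ["sqrt", 0.5]}
    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
    search.fit(X, y)
    print("best parameters:", search.best_params_)
    print("cross-validated accuracy:", round(search.best_score_, 3))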

  7. ASIST SIG/CR Classification Workshop 2000: Classification for User Support and Learning.

    ERIC Educational Resources Information Center

    Soergel, Dagobert

    2001-01-01

    Reports on papers presented at the 62nd Annual Meeting of ASIST (American Society for Information Science and Technology) for the Special Interest Group in Classification Research (SIG/CR). Topics include types of knowledge; developing user-oriented classifications, including domain analysis; classification in the user interface; and automatic…

  8. Proposition of a Classification of Adult Patients with Hemiparesis in Chronic Phase.

    PubMed

    Chantraine, Frédéric; Filipetti, Paul; Schreiber, Céline; Remacle, Angélique; Kolanowski, Elisabeth; Moissenet, Florent

    2016-01-01

    Patients who have developed hemiparesis as a result of a central nervous system lesion, often experience reduced walking capacity and worse gait quality. Although clinically, similar gait patterns have been observed, presently, no clinically driven classification has been validated to group these patients' gait abnormalities at the level of the hip, knee and ankle joints. This study has thus intended to put forward a new gait classification for adult patients with hemiparesis in chronic phase, and to validate its discriminatory capacity. Twenty-six patients with hemiparesis were included in this observational study. Following a clinical examination, a clinical gait analysis, complemented by a video analysis, was performed whereby participants were requested to walk spontaneously on a 10m walkway. A patient's classification was established from clinical examination data and video analysis. This classification was made up of three groups, including two sub-groups, defined with key abnormalities observed whilst walking. Statistical analysis was achieved on the basis of 25 parameters resulting from the clinical gait analysis in order to assess the discriminatory characteristic of the classification as displayed by the walking speed and kinematic parameters. Results revealed that the parameters related to the discriminant criteria of the proposed classification were all significantly different between groups and subgroups. More generally, nearly two thirds of the 25 parameters showed significant differences (p<0.05) between the groups and sub-groups. However, prior to being fully validated, this classification must still be tested on a larger number of patients, and the repeatability of inter-operator measures must be assessed. This classification enables patients to be grouped on the basis of key abnormalities observed whilst walking and has the advantage of being able to be used in clinical routines without necessitating complex apparatus. In the midterm, this classification may allow a decision-tree of therapies to be developed on the basis of the group in which the patient has been categorised.

  9. Proposition of a Classification of Adult Patients with Hemiparesis in Chronic Phase

    PubMed Central

    Filipetti, Paul; Remacle, Angélique; Kolanowski, Elisabeth

    2016-01-01

    Background Patients who have developed hemiparesis as a result of a central nervous system lesion, often experience reduced walking capacity and worse gait quality. Although clinically, similar gait patterns have been observed, presently, no clinically driven classification has been validated to group these patients’ gait abnormalities at the level of the hip, knee and ankle joints. This study has thus intended to put forward a new gait classification for adult patients with hemiparesis in chronic phase, and to validate its discriminatory capacity. Methods and Findings Twenty-six patients with hemiparesis were included in this observational study. Following a clinical examination, a clinical gait analysis, complemented by a video analysis, was performed whereby participants were requested to walk spontaneously on a 10m walkway. A patient’s classification was established from clinical examination data and video analysis. This classification was made up of three groups, including two sub-groups, defined with key abnormalities observed whilst walking. Statistical analysis was achieved on the basis of 25 parameters resulting from the clinical gait analysis in order to assess the discriminatory characteristic of the classification as displayed by the walking speed and kinematic parameters. Results revealed that the parameters related to the discriminant criteria of the proposed classification were all significantly different between groups and subgroups. More generally, nearly two thirds of the 25 parameters showed significant differences (p<0.05) between the groups and sub-groups. However, prior to being fully validated, this classification must still be tested on a larger number of patients, and the repeatability of inter-operator measures must be assessed. Conclusions This classification enables patients to be grouped on the basis of key abnormalities observed whilst walking and has the advantage of being able to be used in clinical routines without necessitating complex apparatus. In the midterm, this classification may allow a decision-tree of therapies to be developed on the basis of the group in which the patient has been categorised. PMID:27271533

  10. Experimental Determination of Spinel/Melt, Olivine/Melt, and Pyroxene/Melt Partition Coefficients for Re, Ru, Pd, Au, and Pt

    NASA Technical Reports Server (NTRS)

    Righter, K.; Campbell, A. J.; Humayun, M.

    2003-01-01

    Experimental studies have identified spinels as important host phases for many of the highly siderophile elements (HSE). Yet experimental studies involving chromite or Cr-rich spinel have been lacking. Experimental studies of the partitioning of HSEs between silicates, oxides and silicate melt are plagued by low solubilities and by the presence of small metallic nuggets at oxygen fugacities relevant to magmas, which interfere with analysis of the phases of interest. We have circumvented these problems in two ways: 1) performing experiments at oxidized conditions, which are still relevant to natural systems but in which nuggets are not observed, and 2) analysis of run products with laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), allowing a combination of high sensitivity and good spatial resolution.

  11. MO-DE-207A-02: A Feature-Preserving Image Reconstruction Method for Improved Pancreatic Lesion Classification in Diagnostic CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J; Tsui, B; Noo, F

    Purpose: To develop a feature-preserving model-based image reconstruction (MBIR) method that improves performance in pancreatic lesion classification at equal or reduced radiation dose. Methods: A set of pancreatic lesion models was created with both benign and premalignant lesion types. These two classes of lesions are distinguished by their fine internal structures; their delineation is therefore crucial to the task of pancreatic lesion classification. To reduce image noise while preserving the features of the lesions, we developed a MBIR method with curvature-based regularization. The novel regularization encourages formation of smooth surfaces that model both the exterior shape and the internal features of pancreatic lesions. Given that the curvature depends on the unknown image, image reconstruction or denoising becomes a non-convex optimization problem; to address this issue an iterative-reweighting scheme was used to calculate and update the curvature using the image from the previous iteration. Evaluation was carried out with insertion of the lesion models into the pancreas of a patient CT image. Results: Visual inspection was used to compare conventional TV regularization with our curvature-based regularization. Several penalty-strengths were considered for TV regularization, all of which resulted in erasing portions of the septation (thin partition) in a premalignant lesion. At matched noise variance (50% noise reduction in the patient stomach region), the connectivity of the septation was well preserved using the proposed curvature-based method. Conclusion: The curvature-based regularization is able to reduce image noise while simultaneously preserving the lesion features. This method could potentially improve task performance for pancreatic lesion classification at equal or reduced radiation dose. The result is of high significance for longitudinal surveillance studies of patients with pancreatic cysts, which may develop into pancreatic cancer. The Senior Author receives financial support from Siemens GmbH Healthcare.

  12. Overview of data and conceptual approaches for derivation of quantitative structure-activity relationships for ecotoxicological effects of organic chemicals.

    PubMed

    Bradbury, Steven P; Russom, Christine L; Ankley, Gerald T; Schultz, T Wayne; Walker, John D

    2003-08-01

    The use of quantitative structure-activity relationships (QSARs) in assessing potential toxic effects of organic chemicals on aquatic organisms continues to evolve as computational efficiency and toxicological understanding advance. With the ever-increasing production of new chemicals, and the need to optimize resources to assess thousands of existing chemicals in commerce, regulatory agencies have turned to QSARs as essential tools to help prioritize tiered risk assessments when empirical data are not available to evaluate toxicological effects. Progress in designing scientifically credible QSARs is intimately associated with the development of empirically derived databases of well-defined and quantified toxicity endpoints, which are based on a strategic evaluation of diverse sets of chemical structures, modes of toxic action, and species. This review provides a brief overview of four databases created for the purpose of developing QSARs for estimating toxicity of chemicals to aquatic organisms. The evolution of QSARs based initially on general chemical classification schemes, to models founded on modes of toxic action that range from nonspecific partitioning into hydrophobic cellular membranes to receptor-mediated mechanisms is summarized. Finally, an overview of expert systems that integrate chemical-specific mode of action classification and associated QSAR selection for estimating potential toxicological effects of organic chemicals is presented.

  13. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
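
    The local-matching workflow described above (partition the face image, classify each sub-image, aggregate the per-block decisions) can be illustrated with a toy sketch. The block partitioning and majority-vote aggregation below are generic illustrations, and the nearest-class-mean scorer is only a stand-in for the joint dynamic sparse representation step; none of it reproduces the authors' LCJDSRC algorithm.

      # Toy sketch: local-matching face recognition by sub-image partitioning and
      # vote aggregation. The per-block scorer (nearest class mean) is a stand-in
      # for the sparse-representation step described in the abstract.
      import numpy as np

      def split_blocks(img, rows=4, cols=4):
          """Partition a 2-D face image into rows*cols equally sized sub-images."""
          h, w = img.shape
          return [img[i*h//rows:(i+1)*h//rows, j*w//cols:(j+1)*w//cols].ravel()
                  for i in range(rows) for j in range(cols)]

      def classify(img, train_imgs, train_labels):
          """Classify by majority vote over per-block nearest-class-mean decisions."""
          classes = sorted(set(train_labels))
          test_blocks = split_blocks(img)
          # Pre-compute the per-class mean for every block position.
          means = {c: [np.mean([split_blocks(t)[b] for t, y in zip(train_imgs, train_labels)
                                if y == c], axis=0)
                       for b in range(len(test_blocks))]
                   for c in classes}
          votes = [min(classes, key=lambda c: np.linalg.norm(blk - means[c][b]))
                   for b, blk in enumerate(test_blocks)]
          return max(classes, key=votes.count)   # aggregate the sub-image decisions

      # Example with random data: two "subjects", 32x32 images.
      rng = np.random.default_rng(0)
      train = [rng.normal(m, 1, (32, 32)) for m in (0, 3) for _ in range(5)]
      labels = [0] * 5 + [1] * 5
      print(classify(rng.normal(3, 1, (32, 32)), train, labels))   # expect class 1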

  14. Motor Oil Classification using Color Histograms and Pattern Recognition Techniques.

    PubMed

    Ahmadi, Shiva; Mani-Varnosfaderani, Ahmad; Habibi, Biuck

    2018-04-20

    Motor oil classification is important for quality control and the identification of oil adulteration. In this work, we propose a simple, rapid, inexpensive and nondestructive approach based on image analysis and pattern recognition techniques for the classification of nine different types of motor oils according to their corresponding color histograms. For this, we applied color histograms in different color spaces such as red green blue (RGB), grayscale, and hue saturation intensity (HSI) in order to extract features that can help with the classification procedure. These color histograms and their combinations were used as input for model development and then were statistically evaluated by using linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM) techniques. Here, two common solutions for solving a multiclass classification problem were applied: (1) transformation to a binary classification problem using a one-against-all (OAA) approach and (2) extension from binary classifiers to a single globally optimized multilabel classification model. In the OAA strategy, LDA, QDA, and SVM reached up to 97% in terms of accuracy, sensitivity, and specificity for both the training and test sets. In the extension from the binary case, despite good performances by the SVM classification model, QDA and LDA provided better results, up to 92% for RGB-grayscale-HSI color histograms and up to 93% for the HSI color map, respectively. In order to reduce the number of independent variables for modeling, a principal component analysis algorithm was used. Our results suggest that the proposed method is promising for the identification and classification of different types of motor oils.
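
    As a rough illustration of the histogram-plus-classifier pipeline described above, the sketch below builds per-channel RGB histograms and cross-validates an LDA model and a one-vs-rest SVM with scikit-learn. The synthetic images, the bin count and the classifier settings are placeholders, not the values used in the study.

      # Sketch: RGB colour-histogram features fed to LDA / one-vs-rest SVM classifiers,
      # assuming images are given as uint8 H x W x 3 arrays.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def rgb_histogram(img, bins=8):
          """Concatenate per-channel histograms, normalised to unit sum."""
          feats = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
          h = np.concatenate(feats).astype(float)
          return h / h.sum()

      # Placeholder data: 90 synthetic "oil images", 9 classes of 10 images each.
      rng = np.random.default_rng(1)
      X = np.array([rgb_histogram(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
                    for _ in range(90)])
      y = np.repeat(np.arange(9), 10)

      for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                        ("OAA-SVM", OneVsRestClassifier(SVC(kernel="rbf", C=10)))]:
          # Accuracy is near chance on random placeholder data; real histograms carry the class signal.
          print(name, cross_val_score(clf, X, y, cv=3).mean())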

  15. Task-specific image partitioning.

    PubMed

    Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D

    2013-02-01

    Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used for performing high-level computer vision tasks. Typically, partitioning is conducted without regard to the task in hand. We propose a task-specific image partitioning framework to produce a region-based image representation that will lead to a higher task performance than that reached using any task-oblivious partitioning framework and existing supervised partitioning framework, albeit few in number. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function that define task-specific similarity/dissimilarity among superpixels are estimated based on structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to a better generalization ability while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by the state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.

  16. Geochemistry of the Spor Mountain rhyolite, western Utah, as revealed by laser ablation ICP-MS, cathodoluminescence, and electron microprobe analysis

    NASA Astrophysics Data System (ADS)

    Dailey, S. R.; Christiansen, E. H.; Dorais, M.; Fernandez, D. P.

    2015-12-01

    The Miocene topaz rhyolite at Spor Mountain in western Utah hosts one of the largest beryllium deposits in the world and was responsible for producing 85% of the beryllium mined worldwide in 2010 (Boland, 2012). The Spor Mountain rhyolite is composed primarily of Ca-poor plagioclase (An8), sodic sanidine (Or40), Fe-rich biotite (Fe/(Fe+Mg)>0.95; Al 1.2-1.4 apfu), and Ti-poor quartz, along with several trace-element-rich accessory phases including zircon, monazite, thorite, columbite, and allanite. Cathodoluminescence (CL) studies of quartz show oscillatory zoning, with 80% of the examined crystals displaying euhedral edges and slightly darker rims. CL images were used to guide laser ablation (LA) ICP-MS analysis of quartz, along with analyses of plagioclase, sanidine, biotite, and glass. Ti concentrations in quartz are 20±6 ppm; there is no quantifiable variation of Ti from core to rim within the diameter of the laser spot (53 microns). Temperatures, calculated using Ti in quartz (at 2 kb, aTiO2=0.34), vary between 529±10 C (Thomas et al., 2011), 669±13 C (Huang and Audetat, 2012), and 691±13 C (Wark and Watson, 2006). Two-feldspar thermometry yields temperatures of 686±33 C (Elkins and Grove, 1990) and 670±41 C (Benisek et al., 2010). Zr saturation temperatures (Watson and Harrison, 1983) average 711±28 C. Analyses of the glass reveal that the Spor Mountain rhyolite is greatly enriched in rare elements (i.e. Li, Be, F, Ga, Rb, Nb, Mo, Sn, and Ta) compared to average continental crust (Rudnick and Gao, 2003). Be in the glass reaches as much as 100 ppm, nearly 50 times the concentration in continental crust. REE partition coefficients for sanidine are 2 to 3 times higher in the Spor Mountain rhyolite than in other silicic magmas (Nash and Crecraft, 1985; Mahood and Hildreth, 1983), although plagioclase tends to have lower partition coefficients; biotite has lower partition coefficients for LREE and higher partition coefficients for HREE. The patterns of trace element enrichment and depletion are similar to those of the measured partition coefficients, consistent with a major role for extensive fractional crystallization in the origin of the Be-enriched magma.
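
    As a worked example of how such temperatures are obtained, the sketch below applies what is assumed here to be the commonly cited Wark and Watson (2006) Ti-in-quartz calibration, log10(Ti ppm) = 5.69 - 3765/T(K), together with the stated activity correction aTiO2 = 0.34. Under that assumption, the quoted Ti content of 20 ppm reproduces, to within a few degrees, the ~691 C figure above.

      # Worked example: Ti-in-quartz thermometry, assuming the Wark & Watson (2006)
      # calibration log10(Ti_ppm) = 5.69 - 3765 / T(K), corrected for TiO2 activity.
      import math

      def ti_in_quartz_temp_c(ti_ppm, a_tio2=1.0):
          """Return temperature in deg C; activity-corrected Ti = measured Ti / a_TiO2."""
          t_kelvin = 3765.0 / (5.69 - math.log10(ti_ppm / a_tio2))
          return t_kelvin - 273.15

      # ~687 deg C for 20 ppm Ti and aTiO2 = 0.34, close to the 691 +/- 13 C quoted above.
      print(round(ti_in_quartz_temp_c(20, a_tio2=0.34)))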

  17. Turgor-responsive starch phosphorylation in Oryza sativa stems: A primary event of starch degradation associated with grain-filling ability.

    PubMed

    Wada, Hiroshi; Masumoto-Kubo, Chisato; Tsutsumi, Koichi; Nonami, Hiroshi; Tanaka, Fukuyo; Okada, Haruka; Erra-Balsells, Rosa; Hiraoka, Kenzo; Nakashima, Taiken; Hakata, Makoto; Morita, Satoshi

    2017-01-01

    Grain filling ability is mainly affected by the translocation of carbohydrates generated from temporarily stored stem starch in most field crops, including rice (Oryza sativa L.). The partitioning of non-structural stem carbohydrates has been recognized as an important trait for raising the yield ceiling, yet we still do not fully understand how carbohydrate partitioning occurs in the stems. In this study, two rice subspecies that exhibit different patterns of non-structural stem carbohydrate partitioning, a japonica-dominant cultivar, Momiroman, and an indica-dominant cultivar, Hokuriku 193, were used as the model system to study the relationship between turgor pressure and the metabolic regulation of non-structural stem carbohydrates, by combining water status measurements with gene expression analysis and a dynamic prefixed 13C tracer analysis using a mass spectrometer. Here, we report that a clear varietal difference in turgor-associated starch phosphorylation occurred at the initiation of non-structural carbohydrate partitioning. The data indicated that starch degradation in Hokuriku 193 stems occurred at full heading, 5 days earlier than in Momiroman, contributing to greater sink filling. Gene expression analysis revealed that the expression pattern of the gene encoding α-glucan, water dikinase (GWD1) was similar between the two varieties, and that the maximum expression level in Hokuriku 193, reached at full heading (4 DAH), was greater than in Momiroman, leading to an earlier increase in the expression of a series of amylase-related genes in Hokuriku 193. In both varieties, peaks in turgor pressure preceded the increases in GWD1 expression, and changes in GWD1 expression were correlated with turgor pressure. Additionally, a threshold is likely to exist for GWD1 expression to facilitate starch degradation. Taken together, these results raise the possibility that turgor-associated starch phosphorylation in cells is responsible for the metabolism that leads to starch degradation. Because the two cultivars exhibited remarkable varietal differences in the pattern of non-structural carbohydrate partitioning, our findings suggest that the observed difference in grain-filling ability originated from turgor-associated regulation of starch phosphorylation in stem parenchyma cells. Further understanding of the molecular mechanisms of turgor regulation may provide a new selection criterion for breaking yield barriers in crop production.

  18. Turgor-responsive starch phosphorylation in Oryza sativa stems: A primary event of starch degradation associated with grain-filling ability

    PubMed Central

    Tsutsumi, Koichi; Nonami, Hiroshi; Tanaka, Fukuyo; Okada, Haruka; Erra-Balsells, Rosa; Hiraoka, Kenzo; Nakashima, Taiken; Hakata, Makoto; Morita, Satoshi

    2017-01-01

    Grain filling ability is mainly affected by the translocation of carbohydrates generated from temporarily stored stem starch in most field crops, including rice (Oryza sativa L.). The partitioning of non-structural stem carbohydrates has been recognized as an important trait for raising the yield ceiling, yet we still do not fully understand how carbohydrate partitioning occurs in the stems. In this study, two rice subspecies that exhibit different patterns of non-structural stem carbohydrate partitioning, a japonica-dominant cultivar, Momiroman, and an indica-dominant cultivar, Hokuriku 193, were used as the model system to study the relationship between turgor pressure and the metabolic regulation of non-structural stem carbohydrates, by combining water status measurements with gene expression analysis and a dynamic prefixed 13C tracer analysis using a mass spectrometer. Here, we report that a clear varietal difference in turgor-associated starch phosphorylation occurred at the initiation of non-structural carbohydrate partitioning. The data indicated that starch degradation in Hokuriku 193 stems occurred at full heading, 5 days earlier than in Momiroman, contributing to greater sink filling. Gene expression analysis revealed that the expression pattern of the gene encoding α-glucan, water dikinase (GWD1) was similar between the two varieties, and that the maximum expression level in Hokuriku 193, reached at full heading (4 DAH), was greater than in Momiroman, leading to an earlier increase in the expression of a series of amylase-related genes in Hokuriku 193. In both varieties, peaks in turgor pressure preceded the increases in GWD1 expression, and changes in GWD1 expression were correlated with turgor pressure. Additionally, a threshold is likely to exist for GWD1 expression to facilitate starch degradation. Taken together, these results raise the possibility that turgor-associated starch phosphorylation in cells is responsible for the metabolism that leads to starch degradation. Because the two cultivars exhibited remarkable varietal differences in the pattern of non-structural carbohydrate partitioning, our findings suggest that the observed difference in grain-filling ability originated from turgor-associated regulation of starch phosphorylation in stem parenchyma cells. Further understanding of the molecular mechanisms of turgor regulation may provide a new selection criterion for breaking yield barriers in crop production. PMID:28727805

  19. Discriminant forest classification method and system

    DOEpatents

    Chen, Barry Y.; Hanley, William G.; Lemmond, Tracy D.; Hiller, Lawrence J.; Knapp, David A.; Mugge, Marshall J.

    2012-11-06

    A hybrid machine learning methodology and system for classification that combines classical random forest (RF) methodology with discriminant analysis (DA) techniques to provide enhanced classification capability. A DA technique which uses feature measurements of an object to predict its class membership, such as linear discriminant analysis (LDA) or the Anderson-Bahadur linear discriminant technique (AB), is used to split the data at each node in each of its classification trees to train and grow the trees and the forest. When training is finished, a set of n DA-based decision trees of a discriminant forest is produced for use in predicting the classification of new samples of unknown class.
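
    The central idea, replacing the usual single-feature threshold at each tree node with a discriminant-analysis split, can be sketched as follows. This is an illustrative stand-in under simple assumptions (a two-class node, a midpoint threshold on the LDA projection), not the patented training procedure, and it omits tree growing and forest aggregation.

      # Sketch: an LDA-based node split, the building block of a discriminant forest.
      # The node projects its samples onto the fitted LDA axis and splits at the
      # midpoint of the projected class means (an illustrative rule, not the patent's).
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def lda_split(X, y):
          """Fit LDA on the node's data; return a boolean mask for the left child."""
          lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
          z = lda.transform(X).ravel()
          c0, c1 = np.unique(y)[:2]
          threshold = 0.5 * (z[y == c0].mean() + z[y == c1].mean())
          return z < threshold

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(1.5, 1, (50, 5))])
      y = np.array([0] * 50 + [1] * 50)
      left = lda_split(X, y)
      # The two classes end up mostly on opposite sides of the split.
      print(left[:50].mean(), left[50:].mean())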

  20. Tissue classification for laparoscopic image understanding based on multispectral texture analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena

    2016-03-01

    Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.

  1. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, with the transformed data containing the wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
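
    A minimal sketch of the partition-and-denoise idea is given below using PyWavelets and Python's multiprocessing: the signal is split into regions, each region is soft-thresholded in the wavelet domain on its own worker, and the results are reassembled. The thresholding rule and the omission of any inter-region communication are simplifying assumptions, not features of the patented system.

      # Sketch: partition a 1-D signal into regions, denoise each region in parallel
      # with a wavelet soft-threshold, then reassemble. Boundary/communication handling
      # between regions is omitted for brevity.
      import numpy as np
      import pywt
      from multiprocessing import Pool

      def denoise_region(region, wavelet="db4", level=3):
          coeffs = pywt.wavedec(region, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
          thr = sigma * np.sqrt(2 * np.log(region.size))          # universal threshold
          coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: region.size]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          t = np.linspace(0, 1, 4096)
          noisy = np.sin(8 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
          regions = np.array_split(noisy, 4)                      # partition onto 4 workers
          with Pool(4) as pool:
              denoised = np.concatenate(pool.map(denoise_region, regions))
          print(noisy.std(), (denoised - np.sin(8 * np.pi * t)).std())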

  2. The relationship between an advanced avionic system architecture and the elimination of the need for an Avionics Intermediate Shop (AIS)

    NASA Astrophysics Data System (ADS)

    Abraham, S. J.

    While Avionics Intermediate Shops (AISs) have in the past been required for military aircraft, the emerging VLSI/VHSIC technology has given rise to the possibility of novel, well partitioned avionics system architectures that obviate the high spare parts costs that formerly prompted and justified the existence of an AIS. Future avionics may therefore be adequately and economically supported by a two-level maintenance system. Algebraic generalizations are presented for the analysis of the spares costs implications of alternative design partitioning schemes for future avionics.

  3. Covariate-free and Covariate-dependent Reliability.

    PubMed

    Bentler, Peter M

    2016-12-01

    Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.
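
    A toy illustration of the underlying variance-partitioning idea is sketched below: the common (factor) part of a score is regressed on a covariate, and its variance is split into a covariate-explained and a covariate-free portion. This is a deliberately simplified stand-in for intuition only, not the estimator developed in the paper.

      # Toy illustration: partition a score's common variance into a covariate-dependent
      # and a covariate-free part via linear regression (a simplified stand-in, not the
      # coefficient decomposition developed in the paper).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 2000
      covariate = rng.normal(size=n)                    # e.g. age or group membership
      factor = 0.6 * covariate + rng.normal(size=n)     # common factor partly driven by covariate
      error = rng.normal(size=n)
      score = factor + error                            # observed score = common part + error

      beta = np.polyfit(covariate, factor, 1)[0]        # slope of factor on covariate
      dep_var = (beta ** 2) * covariate.var()           # covariate-dependent common variance
      free_var = factor.var() - dep_var                 # covariate-free common variance
      total = score.var()
      print(f"covariate-dependent part ~ {dep_var/total:.2f}, "
            f"covariate-free part ~ {free_var/total:.2f}")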

  4. 7 CFR 160.61 - Kinds of certificates issued.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... The kind of certificates issued are as follows: (a) Turpentine analysis and classification certificate. (b) Turpentine field classification certificate. (c) Rosin classification and grade certificate. (d...

  5. 7 CFR 160.61 - Kinds of certificates issued.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... The kind of certificates issued are as follows: (a) Turpentine analysis and classification certificate. (b) Turpentine field classification certificate. (c) Rosin classification and grade certificate. (d...

  6. 7 CFR 160.61 - Kinds of certificates issued.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... The kind of certificates issued are as follows: (a) Turpentine analysis and classification certificate. (b) Turpentine field classification certificate. (c) Rosin classification and grade certificate. (d...

  7. 7 CFR 160.61 - Kinds of certificates issued.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... The kind of certificates issued are as follows: (a) Turpentine analysis and classification certificate. (b) Turpentine field classification certificate. (c) Rosin classification and grade certificate. (d...

  8. Analysis of direct to diffuse partitioning of global solar irradiance at the radiometric station in Badajoz (Spain)

    NASA Astrophysics Data System (ADS)

    Sanchez, G.; Cancillo, M. L.; Serrano, A.

    2010-09-01

    This study is aimed at the analysis of the partitioning of global solar irradiance into its direct and diffuse components at the radiometric station in Badajoz (Spain). Detailed knowledge of the solar radiation field is of increasing interest in Southern Europe due to its use as a renewable energy source. In particular, knowledge of the partitioning of solar radiation into direct and diffuse components has become a major requirement for the design and suitable orientation of solar panels in solar power plants. In this study the first measurements of solar diffuse irradiance performed at the radiometric station in Badajoz (Spain) are presented and analyzed in the framework of the partitioning of global solar radiation. Solar global and diffuse irradiance were measured on a one-minute basis from 23 November 2009 to 31 March 2010. Solar irradiances were measured by two Kipp&Zonen CMP11 pyranometers, using a Kipp&Zonen CM121 shadow ring for the measurements of solar diffuse irradiance. Diffuse measurements were corrected for the solid angle hidden by the ring, and direct irradiance was calculated as the difference between global and diffuse measurements. Irradiance was obtained from the pyranometers by applying calibration coefficients obtained in an inter-comparison campaign performed at INTA/El Arenosillo, in Huelva (Spain), in September 2009. There, calibration coefficients were calculated using as a reference a CMP11 pyranometer which had previously been calibrated by the Physikalisch-Meteorologisches Observatorium Davos/World Radiation Centre in Switzerland. In order to study the partitioning of the solar radiation, the global and diffuse irradiances have been analyzed for three typical sky conditions: cloud-free, broken clouds and overcast. Particular days within the period of study were selected by visual inspection. Along with the analysis of the global and diffuse irradiances themselves, ratios of these irradiances to the downward irradiance at the top of the atmosphere have also been analyzed. Several interesting features have been found. It is particularly worth noting the decreasing relative contribution of the direct component to the global irradiance as the solar zenith angle increases, due to the longer path traversed within the atmosphere. In broken-cloud and overcast conditions, the diffuse component becomes the major contribution to the irradiance, with high-frequency variability being the main difference between the two types of cases. While in overcast conditions the global irradiance remains remarkably low, under broken clouds the global irradiance shows very high variability, frequently reaching values higher than the irradiance at the top of the atmosphere due to multiple-reflection effects. The present study contributes to a better knowledge of the radiation field and its partitioning, involving original high-frequency measurements.
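
    The basic arithmetic of that partitioning can be summarised in a short sketch: direct irradiance as the residual of global minus ring-corrected diffuse, and a clearness index as the ratio of global irradiance to the extraterrestrial (top-of-atmosphere) irradiance on a horizontal plane. The shadow-ring correction factor and the sample values below are placeholders, not numbers from the study.

      # Sketch: direct/diffuse partitioning of global horizontal irradiance (W/m^2)
      # and a clearness index relative to top-of-atmosphere irradiance.
      import math

      SOLAR_CONSTANT = 1361.0   # W/m^2 (nominal value; an assumption, not from the study)

      def toa_horizontal(day_of_year, sza_deg):
          """Extraterrestrial irradiance on a horizontal plane."""
          ecc = 1 + 0.033 * math.cos(2 * math.pi * day_of_year / 365.0)   # eccentricity correction
          return SOLAR_CONSTANT * ecc * max(math.cos(math.radians(sza_deg)), 0.0)

      def partition(global_irr, diffuse_raw, ring_correction=1.10):
          """Correct shadow-ring diffuse and return (diffuse, direct) on the horizontal."""
          diffuse = diffuse_raw * ring_correction       # placeholder correction factor
          return diffuse, max(global_irr - diffuse, 0.0)

      # Placeholder one-minute sample: winter day, moderate solar elevation.
      g, d_raw = 450.0, 180.0
      diffuse, direct = partition(g, d_raw)
      toa = toa_horizontal(day_of_year=30, sza_deg=60.0)
      print(f"direct={direct:.0f} W/m2, diffuse fraction={diffuse/g:.2f}, clearness index={g/toa:.2f}")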

  9. Quantitative evaluation of variations in rule-based classifications of land cover in urban neighbourhoods using WorldView-2 imagery.

    PubMed

    Belgiu, Mariana; Drǎguţ, Lucian; Strobl, Josef

    2014-01-01

    The increasing availability of high resolution imagery has triggered the need for automated image analysis techniques, with reduced human intervention and reproducible analysis procedures. The knowledge gained in the past might be of use to achieving this goal, if systematically organized into libraries which would guide the image analysis procedure. In this study we aimed at evaluating the variability of digital classifications carried out by three experts who were all assigned the same interpretation task. Besides the three classifications performed by independent operators, we developed an additional rule-based classification that relied on the image classifications best practices found in the literature, and used it as a surrogate for libraries of object characteristics. The results showed statistically significant differences among all operators who classified the same reference imagery. The classifications carried out by the experts achieved satisfactory results when transferred to another area for extracting the same classes of interest, without modification of the developed rules.

  10. Quantitative evaluation of variations in rule-based classifications of land cover in urban neighbourhoods using WorldView-2 imagery

    PubMed Central

    Belgiu, Mariana; Drǎguţ, Lucian; Strobl, Josef

    2014-01-01

    The increasing availability of high resolution imagery has triggered the need for automated image analysis techniques, with reduced human intervention and reproducible analysis procedures. The knowledge gained in the past might be of use to achieving this goal, if systematically organized into libraries which would guide the image analysis procedure. In this study we aimed at evaluating the variability of digital classifications carried out by three experts who were all assigned the same interpretation task. Besides the three classifications performed by independent operators, we developed an additional rule-based classification that relied on the image classifications best practices found in the literature, and used it as a surrogate for libraries of object characteristics. The results showed statistically significant differences among all operators who classified the same reference imagery. The classifications carried out by the experts achieved satisfactory results when transferred to another area for extracting the same classes of interest, without modification of the developed rules. PMID:24623959

  11. Analysis of Landsat-4 Thematic Mapper data for classification of forest stands in Baldwin County, Alabama

    NASA Technical Reports Server (NTRS)

    Hill, C. L.

    1984-01-01

    A computer-implemented classification has been derived from Landsat-4 Thematic Mapper data acquired over Baldwin County, Alabama on January 15, 1983. One set of spectral signatures was developed from the data by utilizing a 3x3 pixel sliding window approach. An analysis of the classification produced from this technique identified forested areas. Additional information regarding only the forested areas was extracted by employing a pixel-by-pixel signature development program which derived spectral statistics only for pixels within the forested land covers. The spectral statistics from both approaches were integrated and the data classified. This classification was evaluated by comparing the spectral classes produced from the data against corresponding ground verification polygons. This iterative data analysis technique resulted in an overall classification accuracy of 88.4 percent correct for slash pine, young pine, loblolly pine, natural pine, and mixed hardwood-pine. An accuracy assessment matrix has been produced for the classification.
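
    The 3x3 sliding-window signature development can be illustrated with a short numpy sketch that derives a per-window spectral statistic (here a mean vector per band) as a candidate training signature; this is a generic illustration of window-based signature extraction, not the study's exact procedure.

      # Sketch: derive spectral statistics from a 3x3 sliding window over a multiband
      # image, as a generic illustration of window-based signature development.
      import numpy as np

      def window_means(image):
          """image: (rows, cols, bands); returns the per-pixel 3x3 window mean per band."""
          rows, cols, bands = image.shape
          out = np.full_like(image, np.nan, dtype=float)
          for i in range(1, rows - 1):
              for j in range(1, cols - 1):
                  out[i, j] = image[i-1:i+2, j-1:j+2].reshape(-1, bands).mean(axis=0)
          return out

      rng = np.random.default_rng(0)
      tm = rng.integers(0, 255, (64, 64, 6)).astype(float)   # placeholder 6-band scene
      sig = window_means(tm)
      print(sig[10, 10])                                      # candidate signature at one pixel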

  12. Quantitative evaluation of variations in rule-based classifications of land cover in urban neighbourhoods using WorldView-2 imagery

    NASA Astrophysics Data System (ADS)

    Belgiu, Mariana; Drǎguţ, Lucian; Strobl, Josef

    2014-01-01

    The increasing availability of high resolution imagery has triggered the need for automated image analysis techniques, with reduced human intervention and reproducible analysis procedures. The knowledge gained in the past might be of use to achieving this goal, if systematically organized into libraries which would guide the image analysis procedure. In this study we aimed at evaluating the variability of digital classifications carried out by three experts who were all assigned the same interpretation task. Besides the three classifications performed by independent operators, we developed an additional rule-based classification that relied on the image classifications best practices found in the literature, and used it as a surrogate for libraries of object characteristics. The results showed statistically significant differences among all operators who classified the same reference imagery. The classifications carried out by the experts achieved satisfactory results when transferred to another area for extracting the same classes of interest, without modification of the developed rules.

  13. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  14. Hierarchically partitioned nonlinear equation solvers

    NASA Technical Reports Server (NTRS)

    Padovan, Joseph

    1987-01-01

    By partitioning the solution space into a number of subspaces, a new multiply constrained partitioned Newton-Raphson nonlinear equation solver is developed. Specifically, for a given iteration, each of the various separate partitions is individually and simultaneously controlled. Due to the generality of the scheme, a hierarchy of partition levels can be employed. For finite-element-type applications, this includes the possibility of degree-of-freedom, nodal, elemental, geometric substructural, material and kinematically nonlinear group controls. It is noted that such partitioning can be continuously updated, depending on solution conditioning. In this context, convergence is ascertained at the individual partition level.
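
    A toy sketch of the partitioned idea follows: the unknowns are split into degree-of-freedom groups and each partition takes its own Newton update per outer iteration, in a block Gauss-Seidel arrangement. This illustrates the general scheme under those simplifying assumptions and is not the paper's multiply constrained algorithm.

      # Toy sketch: a partitioned Newton-Raphson iteration. The unknown vector is split
      # into two partitions; each partition gets its own Newton step while the other is
      # held fixed (a block Gauss-Seidel arrangement).
      import numpy as np

      def residual(x):
          """A small, weakly coupled nonlinear system F(x) = 0."""
          x1, x2, x3, x4 = x
          return np.array([x1**2 + x2 - 2.0,
                           x1 + x2**2 - 2.0,
                           x3**2 + x4 - 5.0 + 0.1 * x1,
                           x3 + x4**2 - 5.0 + 0.1 * x2])

      def jacobian(x):
          x1, x2, x3, x4 = x
          return np.array([[2*x1, 1, 0, 0],
                           [1, 2*x2, 0, 0],
                           [0.1, 0, 2*x3, 1],
                           [0, 0.1, 1, 2*x4]])

      x = np.ones(4)
      partitions = [np.array([0, 1]), np.array([2, 3])]     # degree-of-freedom groups
      for it in range(20):
          for p in partitions:                              # each partition controlled separately
              F, J = residual(x), jacobian(x)
              x[p] -= np.linalg.solve(J[np.ix_(p, p)], F[p])
          if np.linalg.norm(residual(x)) < 1e-10:
              break
      print(it, x, np.linalg.norm(residual(x)))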

  15. Drug Distribution. Part 1. Models to Predict Membrane Partitioning.

    PubMed

    Nagar, Swati; Korzekwa, Ken

    2017-03-01

    Tissue partitioning is an important component of drug distribution and half-life. Protein binding and lipid partitioning together determine drug distribution. Two structure-based models to predict partitioning into microsomal membranes are presented. An orientation-based model was developed using a membrane template and atom-based relative free energy functions to select drug conformations and orientations for neutral and basic drugs. The resulting model predicts the correct membrane positions for nine compounds tested, and predicts the membrane partitioning for n = 67 drugs with an average fold-error of 2.4. Next, a more facile descriptor-based model was developed for acids, neutrals and bases. This model considers the partitioning of neutral and ionized species at equilibrium, and can predict membrane partitioning with an average fold-error of 2.0 (n = 92 drugs). Together these models suggest that drug orientation is important for membrane partitioning and that membrane partitioning can be well predicted from physicochemical properties.
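
    The descriptor-based side of such models rests on the fact that neutral and ionized species partition differently. A hedged illustration is sketched below: a Henderson-Hasselbalch ionization fraction combined with species-specific affinities gives an apparent membrane partition coefficient. The parameter values and the two-log-unit penalty for the ionized species are illustrative assumptions, not the fitted descriptors of the cited work.

      # Illustrative sketch: equilibrium partitioning of a monoprotic base between water
      # and membrane, treating neutral and ionized species separately. Parameter values
      # are placeholders, not those of the published descriptor-based model.
      import math

      def fraction_neutral_base(pka, ph=7.4):
          """Henderson-Hasselbalch: fraction of a basic drug that is un-ionized at pH."""
          return 1.0 / (1.0 + 10 ** (pka - ph))

      def membrane_partition(logp_neutral, logp_ion, pka, ph=7.4):
          """Apparent membrane/water partition coefficient as a species-weighted sum."""
          f_n = fraction_neutral_base(pka, ph)
          return f_n * 10 ** logp_neutral + (1 - f_n) * 10 ** logp_ion

      # Example: a lipophilic base, pKa 9.0, neutral logP 3.0, ionized species assumed
      # ~2 log units weaker (a common rule-of-thumb assumption, not a fitted value).
      print(membrane_partition(logp_neutral=3.0, logp_ion=1.0, pka=9.0))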

  16. Some trees with partition dimension three

    NASA Astrophysics Data System (ADS)

    Fredlina, Ketut Queena; Baskoro, Edy Tri

    2016-02-01

    The concept of the partition dimension of a graph was introduced by Chartrand, E. Salehi and P. Zhang (1998) [2]. Let G(V, E) be a connected graph. For S ⊆ V(G) and v ∈ V(G), define the distance d(v, S) from v to S as min{d(v, x) | x ∈ S}. Let Π = {S1, S2, ..., Sk} be an ordered partition of V(G). The representation r(v|Π) of a vertex v with respect to Π is (d(v, S1), d(v, S2), ..., d(v, Sk)). If the representations of all vertices are distinct, then the partition Π is called a resolving partition of G. The partition dimension of G is the minimum k such that G has a resolving partition with k partition classes. In this paper, we characterize some classes of trees with partition dimension three, namely olive trees, weeds, and centipedes.
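
    For small graphs, the definition above translates directly into a brute-force check: enumerate partitions of V(G) into k classes, compute each vertex's vector of distances to the classes, and accept the smallest k for which all vectors are distinct. The sketch below assumes the networkx library and is practical only for small graphs; the star K1,3 is used as a convenient example whose partition dimension is 3.

      # Brute-force partition dimension for small graphs, following the definition above:
      # find the smallest k such that some partition of V(G) into k non-empty classes
      # gives every vertex a distinct distance vector. Exponential; small graphs only.
      import itertools
      import networkx as nx

      def partition_dimension(G):
          dist = dict(nx.all_pairs_shortest_path_length(G))
          nodes = list(G.nodes)
          for k in range(1, len(nodes) + 1):
              for labels in itertools.product(range(k), repeat=len(nodes)):
                  if len(set(labels)) < k:          # require all k classes to be non-empty
                      continue
                  classes = [[v for v, l in zip(nodes, labels) if l == c] for c in range(k)]
                  reps = {v: tuple(min(dist[v][u] for u in cls) for cls in classes)
                          for v in nodes}
                  if len(set(reps.values())) == len(nodes):
                      return k
          return len(nodes)

      # The star K_{1,3}: no 2-class partition resolves its three leaves, so the answer is 3.
      print(partition_dimension(nx.star_graph(3)))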

  17. Multiple-rule bias in the comparison of classification rules

    PubMed Central

    Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.

    2011-01-01

    Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
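
    The bias discussed above can be reproduced with a few lines of simulation: train several classification rules on the same small dataset, report the rule with the smallest cross-validated error estimate, and compare that reported error with the chosen rule's error on a large independent test set. The data model and the particular rules below are arbitrary illustrations, not those used in the paper.

      # Monte Carlo sketch of multiple-rule bias: the minimum estimated error among
      # several rules on one dataset systematically underestimates the chosen rule's
      # true error. Data model and rules are illustrative, not the paper's.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def sample(n, rng):
          y = rng.integers(0, 2, n)
          X = rng.normal(0, 1, (n, 10)) + y[:, None] * 0.4
          return X, y

      rules = [LinearDiscriminantAnalysis(), KNeighborsClassifier(3), SVC(C=1.0)]
      rng = np.random.default_rng(0)
      reported, true = [], []
      for _ in range(50):                              # 50 simulated small studies
          Xtr, ytr = sample(60, rng)
          Xte, yte = sample(5000, rng)                 # large test set approximates true error
          errs = [1 - cross_val_score(r, Xtr, ytr, cv=5).mean() for r in rules]
          best = int(np.argmin(errs))
          reported.append(errs[best])
          true.append(1 - rules[best].fit(Xtr, ytr).score(Xte, yte))
      print(f"mean reported error {np.mean(reported):.3f} vs mean true error {np.mean(true):.3f}")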

  18. Automated classification of mouse pup isolation syllables: from cluster analysis to an Excel-based "mouse pup syllable classification calculator".

    PubMed

    Grimsley, Jasmine M S; Gadziola, Marie A; Wenstrup, Jeffrey J

    2012-01-01

    Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classifications of these vocalizations identified 10 syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types, based on their frequency bands, that have continuous frequency-time structure, and two syllable types featuring abrupt frequency transitions. Although cluster analysis computed fewer syllable types than manual classification, the clusters represented well the probability distributions of the acoustic features within syllables. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four classified clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables, with over a 90% match, into the syllable types determined by cluster analysis.

  19. Online Learning for Classification of Alzheimer Disease based on Cortical Thickness and Hippocampal Shape Analysis.

    PubMed

    Lee, Ga-Young; Kim, Jeonghun; Kim, Ju Han; Kim, Kiwoong; Seong, Joon-Kyung

    2014-01-01

    Mobile healthcare applications are becoming a growing trend. The prevalence of dementia in modern society is also showing a steady growing trend. Among the degenerative brain diseases that cause dementia, Alzheimer disease (AD) is the most common. The purpose of this study was to identify AD patients using magnetic resonance imaging in the mobile environment. We propose an incremental classification method for mobile healthcare systems. Our classification method is based on incremental learning for AD diagnosis and AD prediction using cortical thickness data and hippocampal shape. We constructed a classifier based on principal component analysis and linear discriminant analysis. We performed initial learning and mobile subject classification. Initial learning is the group-learning part performed on our server. Our smartphone agent implements the mobile classification and shows various results. With cortical thickness data analysis alone, the discrimination accuracy was 87.33% (sensitivity 96.49% and specificity 64.33%). When cortical thickness data and hippocampal shape were analyzed together, the achieved accuracy was 87.52% (sensitivity 96.79% and specificity 63.24%). In this paper, we presented a classification method based on online learning for AD diagnosis employing both cortical thickness data and hippocampal shape analysis data. Our method was implemented on smartphone devices and discriminated AD patients from the normal group.
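
    The incremental workflow described (group learning on a server followed by lightweight classification on the device) can be sketched with scikit-learn's partial_fit interfaces. IncrementalPCA plus an SGD-trained linear classifier stand in here for the paper's PCA + LDA pipeline, since scikit-learn's LDA has no incremental fit; the feature dimensions and data are synthetic placeholders.

      # Sketch of incremental learning for a thickness/shape feature classifier:
      # IncrementalPCA + SGDClassifier stand in for the paper's PCA + LDA pipeline,
      # because scikit-learn's LDA does not expose partial_fit. Data are synthetic.
      import numpy as np
      from sklearn.decomposition import IncrementalPCA
      from sklearn.linear_model import SGDClassifier

      rng = np.random.default_rng(0)
      n_features = 200                                  # placeholder for thickness + shape features
      X = rng.normal(0, 1, (300, n_features))
      y = rng.integers(0, 2, 300)
      X[y == 1, :20] += 0.8                             # class signal in a few features

      ipca = IncrementalPCA(n_components=20)
      clf = SGDClassifier(random_state=0)
      for batch in np.array_split(np.arange(300), 6):   # "initial learning" arrives in batches
          ipca.partial_fit(X[batch])
      for batch in np.array_split(np.arange(300), 6):
          clf.partial_fit(ipca.transform(X[batch]), y[batch], classes=[0, 1])

      x_new = rng.normal(0, 1, (1, n_features))         # a new subject classified on-device
      print(clf.predict(ipca.transform(x_new)))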

  20. Perinatal mortality classification: an analysis of 112 cases of stillbirth.

    PubMed

    Reis, Ana Paula; Rocha, Ana; Lebre, Andrea; Ramos, Umbelina; Cunha, Ana

    2017-10-01

    This was a retrospective cohort analysis of stillbirths that occurred from January 2004 to December 2013 in our institution. We compared the Tulip and Wigglesworth classification systems on a cohort of stillbirths and analysed the main differences between these two classifications. In this period, there were 112 stillbirths out of a total of 31,758 births (stillbirth rate of 3.5 per 1000 births). There were 99 antepartum deaths and 13 intrapartum deaths. Foetal autopsy was performed in 99 cases and placental histopathological examination in all cases. The Wigglesworth classification found 'unknown' causes in 47 cases, and the Tulip classification allocated 33 of these. Fourteen cases remained in the group of 'unknown' causes. Therefore, the Wigglesworth classification of stillbirths results in a higher proportion of unexplained stillbirths. We suggest that the traditional Wigglesworth classification should be substituted by a classification that manages the available information.
